paper_id | paper_title | paper_abstract | paper_acceptance | meta_review | label | review_ids | review_writers | review_contents | review_ratings | review_confidences | review_reply_tos |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2019_SyxXhsAcFQ | Cohen Welling bases & SO(2)-Equivariant classifiers using Tensor nonlinearity. | In this paper we propose autoencoder architectures for learning a Cohen-Welling (CW)-basis for images and their rotations. We use the learned CW-basis to build a rotation equivariant classifier to classify images. The autoencoder and classifier architectures use only tensor product nonlinearity. The model proposed by Cohen & Welling (2014) uses ideas from group representation theory, and extracts a basis exposing irreducible representations for images and their rotations. We give several architectures to learn CW-bases, including a novel coupling AE architecture to learn coupled CW-bases for images at different scales simultaneously. Our use of tensor product nonlinearity is inspired by recent work of Kondor (2018a). Our classifier achieves very good accuracy while using fewer parameters. Even when the sample complexity to learn a good CW-basis is low, we learn classifiers that perform impressively. We show that a coupled CW-basis learned at one scale can be deployed to classify images in a classifier trained and tested on images at a different scale, with only a marginal dip in performance. | rejected-papers | This paper studies group equivariant neural network representations by building on the work by [Cohen and Welling, '14], which introduced learning of group irreducible representations, and [Kondor'18], who introduced tensor product non-linearities operating directly in the group Fourier domain.
Reviewers highlighted the significance of the approach, but were unanimously concerned by the lack of clarity of the current manuscript, which would limit its impact within the ICLR community, and by the lack of a large-scale experiment corroborating the usefulness of the approach. They were also very positive about the improvements to the paper during the author response phase. The AC completely agrees with this assessment. Therefore, the paper cannot be accepted at this time, but the AC strongly encourages the authors to resubmit their work in the next conference cycle after addressing the above remarks (improving the clarity of the presentation and including a large-scale experiment). | train | [
"rke0Td-K0m",
"Bye1UtGK2m",
"ryxwx6cGAQ",
"rJe_WwNyR7",
"rklZjm3rTm",
"r1gwWQ3Bpm",
"HJgtLS2STQ",
"ryxWwVhrpQ",
"rkgZoE2rTX",
"HJl2WNnS6Q",
"H1gjQCG63m",
"SygqtUCqnm"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We have experimented our algorithms on Fashion-MNIST dataset and reported the results in the current revision of the paper.",
"Review: This paper deals with the issue of learning rotation invariant autoencoders and classifiers. While this problem is well motivated, I found that this paper was fairly weak experi... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
7
] | [
-1,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
2,
4
] | [
"iclr_2019_SyxXhsAcFQ",
"iclr_2019_SyxXhsAcFQ",
"rJe_WwNyR7",
"HJgtLS2STQ",
"H1gjQCG63m",
"SygqtUCqnm",
"Bye1UtGK2m",
"H1gjQCG63m",
"H1gjQCG63m",
"H1gjQCG63m",
"iclr_2019_SyxXhsAcFQ",
"iclr_2019_SyxXhsAcFQ"
] |
iclr_2019_SyxYEoA5FX | Invariance and Inverse Stability under ReLU | We flip the usual approach to studying invariance and robustness of neural networks by considering the non-uniqueness and instability of the inverse mapping. We provide theoretical and numerical results on the inverse of ReLU layers. First, we derive a necessary and sufficient condition for the existence of invariance that provides a geometric interpretation. Next, we move to robustness by analyzing local effects on the inverse. To conclude, we show how this reverse point of view not only provides insights into key effects, but also makes it possible to view adversarial examples from different perspectives. | rejected-papers | The main strength of the paper is to provide a clear mathematical characterization of invertible neural networks. The reviewers and the AC also note potential weaknesses, including 1) the exposition of the paper can be much improved; 2) it is unclear how these analyses can help improve the training algorithm or architecture design, since these characterizations are likely not computable; 3) the novelty compared to previous work (Carlsson et al., 2017) may not be enough for ICLR acceptance. These weaknesses were considered critical issues by the AC in the decision. | train | [
"Hkl2yixyCX",
"H1e-ZkaByV",
"Hyejnbko07",
"B1eIec-YRQ",
"BJx3grAW0m",
"B1lua7RbAm",
"B1lldQ0bRX",
"rkxr33DOTm",
"H1xwrnD_pQ",
"rkg7S_D93X",
"Skx34ZHc37"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n\n\n\nReview\n\nThis paper discusses invariances in ReLU networks. The discussion is anchored around the observation that while the spectral norm of neural networks layers (their product bounds the Lipschitz constant) has been investigated as a measure of robustness of nets, the smallest singular values are also... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
7
] | [
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"iclr_2019_SyxYEoA5FX",
"B1lldQ0bRX",
"H1xwrnD_pQ",
"BJx3grAW0m",
"rkxr33DOTm",
"iclr_2019_SyxYEoA5FX",
"Hkl2yixyCX",
"Skx34ZHc37",
"rkg7S_D93X",
"iclr_2019_SyxYEoA5FX",
"iclr_2019_SyxYEoA5FX"
] |
iclr_2019_SyxZOsA9tX | Accelerated Value Iteration via Anderson Mixing | Acceleration for reinforcement learning methods is an important and challenging theme. We introduce the Anderson acceleration technique into the value iteration, developing an accelerated value iteration algorithm that we call Anderson Accelerated Value Iteration (A2VI). We further apply our method to the Deep Q-learning algorithm, resulting in the Deep Anderson Accelerated Q-learning (DA2Q) algorithm. Our approach can be viewed as an approximation of the policy evaluation by interpolating on historical data. A2VI is more efficient than the modified policy iteration, which is a classical approximate method for policy evaluation. We give a theoretical analysis of our algorithm and conduct experiments on both toy problems and Atari games. Both the theoretical and empirical results show the effectiveness of our algorithm. | rejected-papers | The paper proposes to use Anderson Mixing to accelerate value iteration and DQN. The idea is interesting, with some theoretical and empirical support. However, reviewers feel that the contribution is somewhat limited, and certain parts (e.g., the DP view) can be further developed to strengthen the technical contribution. Furthermore, one reviewer points out that the empirical results are not very strong, where the improvements on 3 Atari games are not very substantial. Overall, while the paper is interesting and does have the potential, it seems too preliminary to be published in its current form.
Minor comments:
1. The paper is partially motivated by the claim given at the beginning of section 3: "Based on the observation that full policy evaluation accelerates convergence, ..." Can a reference be given?
2. Another way to look at Anderson Mixing is through the standard linear value function approximation framework, where the previous K value functions serve as basis functions. See Mahadevan & Maggioni (JMLR'07), Parr et al. (ICML'08) and Konidaris et al. (AAAI'11) for a few examples of constructing basis functions; the approach here seems to provide another way to automatically construct basis functions. A discussion would be helpful. (A minimal sketch of Anderson-accelerated value iteration is given after this record.) | train | [
"Bkxegtk-xV",
"BygKaKgc14",
"r1xWetP8JV",
"SJl2dAODC7",
"rJxYbh_vRQ",
"SygtNjuvCm",
"rkeg59OvAm",
"BJgqyzu-67",
"SyxldX402Q",
"Syg7ehHc3X",
"BklSoqjdhQ"
] | [
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks a lot for the response and further explanations. Yes I completely agree with your comments on the difference, and in particular, the difference between the safeguards in that paper and the one in your paper. \n\nBtw, in the paper of Zhang et al., the safe-guards are enforced on the residuals for the general... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"BygKaKgc14",
"r1xWetP8JV",
"SJl2dAODC7",
"BklSoqjdhQ",
"Syg7ehHc3X",
"SyxldX402Q",
"BJgqyzu-67",
"iclr_2019_SyxZOsA9tX",
"iclr_2019_SyxZOsA9tX",
"iclr_2019_SyxZOsA9tX",
"iclr_2019_SyxZOsA9tX"
] |
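To make the Anderson-mixing idea in the record above concrete, here is a minimal sketch of Anderson-accelerated value iteration on a tabular MDP. All names (`bellman_backup`, `anderson_vi`), the regularization constant, and the least-squares solve are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def bellman_backup(V, P, R, gamma):
    # P: (A, S, S) transition tensor, R: (A, S) rewards; greedy Bellman operator
    return np.max(R + gamma * (P @ V), axis=0)

def anderson_vi(P, R, gamma=0.9, m=5, iters=200, reg=1e-8):
    S = R.shape[1]
    V = np.zeros(S)
    Vs, Ts = [], []  # histories of iterates and their Bellman backups
    for _ in range(iters):
        TV = bellman_backup(V, P, R, gamma)
        Vs.append(V); Ts.append(TV)
        Vs, Ts = Vs[-m:], Ts[-m:]
        res = np.stack([t - v for t, v in zip(Ts, Vs)], axis=1)  # (S, k) residuals
        G = res.T @ res + reg * np.eye(res.shape[1])
        w = np.linalg.solve(G, np.ones(res.shape[1]))
        w /= w.sum()  # mixing coefficients constrained to sum to one
        V = np.stack(Ts, axis=1) @ w  # interpolate over historical backups
    return V
```

With m=1 this reduces to standard value iteration; larger m interpolates over the historical value functions, which is the acceleration mechanism the abstract describes.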
iclr_2019_SyxaYsAqY7 | Second-Order Adversarial Attack and Certifiable Robustness | Adversarial training has been recognized as a strong defense against adversarial attacks. In this paper, we propose a powerful second-order attack method that reduces the accuracy of the defense model by Madry et al. (2017). We demonstrate that adversarial training overfits to the choice of the norm in the sense that it is only robust to the attack used for adversarial training, thus suggesting it has not achieved universal robustness. The effectiveness of our attack method motivates an investigation of provable robustness of a defense model. To this end, we introduce a framework that allows one to obtain a certifiable lower bound on the prediction accuracy against adversarial examples. We conduct experiments to show the effectiveness of our attack method. At the same time, our defense model achieves significant improvements compared to previous works under our proposed attack. | rejected-papers | The reviewers have agreed this work is not ready for publication at ICLR. | train | [
"HyxnNpt86Q",
"S1lzXzII67",
"BJeuzz5TnX",
"Skx7vgPc27",
"r1xf0z-XnQ",
"SyeJa4nC2Q",
"Syxt0EhChQ",
"H1xCLf502X",
"H1xyYFYAnm"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"We thank the reviewer for the thoughtful responses.\n\n1) We will emphasize the point that our method broke adversarial learning only when different norms are used in the training and testing in the main sections.\n\nWe agree that gradient obfuscation might exist from this perspective. This suggests that adversari... | [
-1,
-1,
4,
5,
3,
-1,
-1,
-1,
-1
] | [
-1,
-1,
5,
3,
5,
-1,
-1,
-1,
-1
] | [
"S1lzXzII67",
"H1xyYFYAnm",
"iclr_2019_SyxaYsAqY7",
"iclr_2019_SyxaYsAqY7",
"iclr_2019_SyxaYsAqY7",
"r1xf0z-XnQ",
"r1xf0z-XnQ",
"Skx7vgPc27",
"BJeuzz5TnX"
] |
iclr_2019_Syxgbh05tQ | Lyapunov-based Safe Policy Optimization | In many reinforcement learning applications, it is crucial that the agent interacts with the environment only through safe policies, i.e., policies that do not take the agent to certain undesirable situations. These problems are often formulated as a constrained Markov decision process (CMDP) in which the agent's goal is to optimize its main objective while not violating a number of safety constraints. In this paper, we propose safe policy optimization algorithms that are based on the Lyapunov approach to CMDPs, an approach that has well-established theoretical guarantees in control engineering. We first show how to generate a set of state-dependent Lyapunov constraints from the original CMDP safety constraints. We then propose safe policy gradient algorithms that train a neural network policy using DDPG or PPO, while guaranteeing near-constraint satisfaction at every policy update by projecting either the policy parameter or the action onto the set of feasible solutions induced by the linearized Lyapunov constraints. Unlike the existing (safe) constrained PG algorithms, ours are more data-efficient as they are able to utilize both on-policy and off-policy data. Furthermore, the action-projection version of our algorithms often leads to less conservative policy updates and allows for natural integration into an end-to-end PG training pipeline. We evaluate our algorithms and compare them with CPO and the Lagrangian method on several high-dimensional continuous state and action simulated robot locomotion tasks, in which the agent must satisfy certain safety constraints while minimizing its expected cumulative cost. | rejected-papers | This is an interesting direction, but multiple reviewers had concerns about the amount of novelty in the current work, and given the strong pool of other papers, it didn't quite reach the threshold. (A closed-form halfspace projection illustrating the action-projection step is sketched after this record.)
| train | [
"rJxbF0F207",
"Skel86Fh0m",
"SJg1Qcri07",
"HkxnHeScnX",
"S1lgvRSc0m",
"B1xu5AMKC7",
"HJxb9lXfC7",
"HylotAzfRX",
"HygKIyvyam",
"Syghhbsp37"
] | [
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for going over our response and adjusting her/his score accordingly. We are happy that we managed to address some of her/his concerns. \n\nRegarding comparison with CPO: Unfortunately, the current version of CPO on github is built in rllab, and no implementation of this algorithm is available... | [
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
6,
8
] | [
-1,
-1,
2,
3,
-1,
-1,
-1,
-1,
2,
3
] | [
"S1lgvRSc0m",
"SJg1Qcri07",
"iclr_2019_Syxgbh05tQ",
"iclr_2019_Syxgbh05tQ",
"B1xu5AMKC7",
"HkxnHeScnX",
"HygKIyvyam",
"Syghhbsp37",
"iclr_2019_Syxgbh05tQ",
"iclr_2019_Syxgbh05tQ"
] |
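The action-projection step mentioned in the abstract amounts, for a single linearized constraint, to a Euclidean projection onto a halfspace, which has a closed form. The sketch below is illustrative under that single-constraint assumption and is not the paper's full safety layer; all names are ours.

```python
import numpy as np

def project_onto_halfspace(a, g, b):
    """Project action a onto {a : g @ a <= b}, the feasible set of one
    linearized Lyapunov constraint (closed-form Euclidean projection)."""
    violation = g @ a - b
    if violation <= 0:
        return a          # already feasible, no change needed
    return a - (violation / (g @ g)) * g
```

Because the projection is differentiable almost everywhere, it can sit as a layer at the end of the policy network, which is what enables the end-to-end integration the abstract refers to.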
iclr_2019_SyxknjC9KQ | Dense Morphological Network: An Universal Function Approximator | Artificial neural networks are built on the basic operations of linear combination and non-linear activation functions. Theoretically, this structure can approximate any continuous function with a three-layer architecture. But in practice, learning the parameters of such a network can be hard. The choice of activation function can also greatly impact the performance of the network. In this paper we propose to replace the basic linear combination operation with non-linear operations that do away with the need for an additional non-linear activation function. To this end we propose the use of elementary morphological operations (dilation and erosion) as the basic operations in neurons. We show that these networks (denoted Morph-Net) with morphological operations can approximate any smooth function requiring fewer parameters than what is necessary for normal neural networks. The results show that our networks perform favorably when compared with similarly structured networks. We have carried out our experiments on MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100. | rejected-papers | This work presents an interesting take on how to combine basic functions to lead to better activation functions. While the experiments in the paper show that the approach works well compared to the baselines that are used as reference, reviewers note that a more adequate assessment of the contribution would require comparing to stronger baselines or switching to tasks where the chosen baselines are indeed performing well. Authors are encouraged to follow the many suggestions of reviewers to strengthen their work. (An illustrative sketch of dilation and erosion neurons is given after this record.) | val | [
"ryl9LrpkJV",
"rkeG3Oj1y4",
"B1gHuIGwpm",
"Byl-KT0pRQ",
"H1eSNCd6p7",
"Syxd-gzDTQ",
"BkeaejOa6Q",
"H1gh48_TTX",
"r1gIG8_6pQ",
"S1l6Toan3Q",
"HJeCtix9hQ",
"r1lgUd5v9X",
"BygipUQr5Q"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"Thank you for the update. \n\nModifying the network to accept 2D input is straightforward. But the theorem we have proved for the dense single layer case will not hold there. On the other hand, if we use 2D morphological operations in the network a single hidden layer will not be sufficient. So, we have to extend ... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
-1,
-1
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
-1,
-1
] | [
"BkeaejOa6Q",
"Byl-KT0pRQ",
"iclr_2019_SyxknjC9KQ",
"Syxd-gzDTQ",
"iclr_2019_SyxknjC9KQ",
"S1l6Toan3Q",
"B1gHuIGwpm",
"r1gIG8_6pQ",
"HJeCtix9hQ",
"iclr_2019_SyxknjC9KQ",
"iclr_2019_SyxknjC9KQ",
"BygipUQr5Q",
"iclr_2019_SyxknjC9KQ"
] |
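For readers unfamiliar with gray-scale morphology, the dilation and erosion "neurons" referenced above can be sketched as follows. This is a minimal illustration of the textbook operations; the layer shapes and names are our own.

```python
import numpy as np

def dilation_layer(x, W):
    # x: (d,) input, W: (d, n) structuring weights
    # output j = max_i (x_i + W[i, j]); the max itself acts as the nonlinearity
    return np.max(x[:, None] + W, axis=0)

def erosion_layer(x, W):
    # output j = min_i (x_i - W[i, j])
    return np.min(x[:, None] - W, axis=0)
```

Because max/min are built into the operation, no separate activation function (e.g., ReLU) is needed, which is the paper's central point.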
iclr_2019_SyxnvsAqFm | Computation-Efficient Quantization Method for Deep Neural Networks | Deep Neural Networks, being memory- and computation-intensive, are a challenge to deploy on smaller devices. Numerous quantization techniques have been proposed to reduce inference latency/memory consumption. However, these techniques impose a large overhead on the training procedure or need to change the training process. We present a non-intrusive quantization technique based on re-training the full-precision model, followed by directly optimizing the corresponding binary model. The quantization training process takes no longer than the original training process. We also propose a new loss function to regularize the weights, resulting in reduced quantization error. Combining both helps us achieve full-precision accuracy on the CIFAR datasets using binary quantization. We also achieve full-precision accuracy on WikiText-2 using 2-bit quantization. Comparable results are also shown for ImageNet. We also present a 1.5-bit hybrid model exceeding the performance of the TWN LSTM model on WikiText-2. | rejected-papers | The authors propose a technique for quantizing neural networks, which consists of repeated quantize/de-quantize operations during training, with a second step that learns scale factors. The method is simple, clearly presented, and requires no change in the training procedure.
However, the authors noted that the work is somewhat incremental and similar to previously proposed approaches. As noted by the reviewers, the AC agrees that the work would be significantly strengthened by additional analysis of complexity in terms of computational time and memory relative to the other techniques. (A generic quantize/de-quantize sketch is given after this record.)
| val | [
"r1l2A103JV",
"SJgHXAwh14",
"rkgHiQOsyN",
"Ske7pUNoy4",
"SJxhAc3dJ4",
"rylSuuHqpm",
"BklH98B9TX",
"SJxdMvH9pX",
"SJl-NOn7TQ",
"SkgUjCCFhX",
"BkxDCAl43Q"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"After reading the responses and the revised manuscript, the reviewer still did not find complexity analysis in Sec 5.5. The complexity analysis is important since the claimed contribution is to improve training quantitative models. There are still no comparison with other quantization methods in terms of computati... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rylSuuHqpm",
"SJxdMvH9pX",
"SJxhAc3dJ4",
"SJxhAc3dJ4",
"BklH98B9TX",
"BkxDCAl43Q",
"SkgUjCCFhX",
"SJl-NOn7TQ",
"iclr_2019_SyxnvsAqFm",
"iclr_2019_SyxnvsAqFm",
"iclr_2019_SyxnvsAqFm"
] |
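To ground the quantize/de-quantize terminology used above, here is a generic, XNOR-Net-style weight quantizer with an analytic per-tensor scale. This is a common baseline formulation and an assumption on our part, not necessarily the authors' exact procedure.

```python
import numpy as np

def quantize_dequantize(w, bits=1):
    """Map full-precision weights to a low-bit grid and back (per-tensor scale)."""
    if bits == 1:
        alpha = np.mean(np.abs(w))   # scale minimizing the L2 binarization error
        return alpha * np.sign(w)
    qmax = 2 ** (bits - 1) - 1
    alpha = np.max(np.abs(w)) / qmax
    return alpha * np.clip(np.round(w / alpha), -qmax, qmax)
```

During training, such an operation is typically applied in the forward pass while gradients flow to the retained full-precision copy of the weights (the straight-through estimator).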
iclr_2019_SyxvSiCcFQ | Neural Network Cost Landscapes as Quantum States | Quantum computers promise significant advantages over classical computers for a number of different applications. We show that the complete loss function landscape of a neural network can be represented as the quantum state output by a quantum computer. We demonstrate this explicitly for a binary neural network and, further, show how a quantum computer can train the network by manipulating this state using a well-known algorithm known as quantum amplitude amplification. We further show that with minor adaptation, this method can also represent the meta-loss landscape of a number of neural network architectures simultaneously. We search this meta-loss landscape with the same method to simultaneously train and design a binary neural network. | rejected-papers | This paper studies the problem of training binary neural networks using the quantum amplitude amplification method. Reviewers agree that the problem considered is novel and interesting. However, the consensus is that there are only a few experiments in the current paper, and the paper needs more experiments on different datasets with comparisons to proper baselines. Reviewers opined that the paper was not so easy to follow initially, though later revisions may have somewhat alleviated this problem. (A classical simulation of one amplitude-amplification step is sketched after this record.) | train | [
"BJgyIo5dR7",
"S1gkZic_RQ",
"Skgo3q5uCQ",
"BJeeedcdRm",
"S1g9YoQonm",
"ByxRMTMc3m",
"HygvmHzF3m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for recognising the work as novel and interesting and for their constructive comments. We have incorporated these into the new version of the paper. A detailed response to the reviews questions is included below. \nWe acknowledge that verifying the soundness and correctness of a paper is chal... | [
-1,
-1,
-1,
-1,
5,
3,
4
] | [
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"HygvmHzF3m",
"Skgo3q5uCQ",
"ByxRMTMc3m",
"S1g9YoQonm",
"iclr_2019_SyxvSiCcFQ",
"iclr_2019_SyxvSiCcFQ",
"iclr_2019_SyxvSiCcFQ"
] |
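Amplitude amplification can be simulated classically on small state vectors, which may help build intuition for how the "good" (low-loss) weight configurations get boosted. A minimal sketch; the uniform starting state and the dense simulation are our simplifying assumptions, not the paper's circuit.

```python
import numpy as np

def amplitude_amplification_step(psi, marked):
    """One Grover-style iteration: oracle sign-flip on marked indices,
    then reflection about the uniform superposition."""
    psi = psi.copy()
    psi[marked] *= -1                       # oracle marks low-loss configurations
    s = np.full_like(psi, 1 / np.sqrt(len(psi)))
    return 2 * (s @ psi) * s - psi          # diffusion operator 2|s><s| - I
```

Starting from psi = s and iterating roughly (pi/4) * sqrt(N/M) times concentrates probability mass on the M marked configurations out of N.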
iclr_2019_SyxwW2A5Km | Learning Representations of Categorical Feature Combinations via Self-Attention | Self-attention has been widely used to model sequential data and has achieved remarkable results in many applications. Although it can be used to model dependencies without regard to positions in sequences, self-attention is seldom applied to non-sequential data. In this work, we propose to learn representations of multi-field categorical data in prediction tasks via a self-attention mechanism, where features are orderless but have intrinsic relations over different fields. In most current DNN-based models, feature embeddings are simply concatenated for further processing by networks. Instead, by applying self-attention to transform the embeddings, we are able to relate features in different fields and automatically learn representations of their combinations, which are known as the factors of many prevailing linear models. To further improve the effect of feature combination mining, we modify the original self-attention structure by restricting the similarity weight to have at most k non-zero values, which additionally regularizes the model. We experimentally evaluate the effectiveness of our self-attention model on non-sequential data. Across two click-through rate prediction benchmark datasets, i.e., Criteo and Avazu, our model with top-k restricted self-attention achieves state-of-the-art performance. Compared with the vanilla MLP, the gain from adding self-attention is significantly larger than that from modifying the network structure, which most current works focus on. | rejected-papers | All reviewers agree in their assessment that this paper is not ready for acceptance at ICLR, and the authors did not respond during the rebuttal phase. (A simplified top-k self-attention sketch is given after this record.) | train | [
"r1x4CIwo27",
"HklbEe9Fhm",
"Bye_X4Dt2m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Summary\nThe paper proposes to apply self-attention mechanism from (Vaswani et.al.) to the task of click-through rate prediction, which is a task where one has input features which are a concatenation of multiple one-hot vectors (referred to as fields). The paper finds that applying the self-attention mechanism ou... | [
5,
5,
5
] | [
4,
3,
4
] | [
"iclr_2019_SyxwW2A5Km",
"iclr_2019_SyxwW2A5Km",
"iclr_2019_SyxwW2A5Km"
] |
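A simplified, single-head version of the top-k restricted self-attention described above might look as follows; the learned query/key/value projections of the full model are omitted, and `E` is assumed to be the matrix of field embeddings.

```python
import numpy as np

def topk_self_attention(E, k):
    """E: (n_fields, d) embeddings; keep only the k largest similarity
    weights per field, then renormalize with a softmax (assumes k <= n_fields)."""
    scores = E @ E.T / np.sqrt(E.shape[1])
    kth = np.sort(scores, axis=1)[:, -k][:, None]      # per-row k-th largest score
    scores = np.where(scores >= kth, scores, -np.inf)  # mask everything else
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ E                                       # attended field representations
```

Masking with -inf before the softmax zeroes out all but the k strongest field-field interactions, which is the regularization effect the abstract claims.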
iclr_2019_SyzjBiR9t7 | MANIFOLDNET: A DEEP NEURAL NETWORK FOR MANIFOLD-VALUED DATA | Developing deep neural networks (DNNs) for manifold-valued data sets has gained much interest of late in the deep learning research community. Examples of manifold-valued data include data from omnidirectional cameras on automobiles, drones etc., diffusion magnetic resonance imaging, elastography and others. In this paper, we present a novel theoretical framework for DNNs to cope with manifold-valued data inputs. In doing this generalization, we draw parallels to the widely popular convolutional neural networks (CNNs). We call our network the ManifoldNet. As in vector spaces, where convolutions are equivalent to computing the weighted mean of functions, an analogous definition for manifold-valued data can be constructed involving the computation of the weighted Fréchet mean (wFM). To this end, we present a provably convergent recursive computation of the wFM of the given data, where the weights, which make up the convolution mask, are to be learned. Further, we prove that the proposed wFM layer achieves a contraction mapping and hence the ManifoldNet does not need the additional non-linear ReLU unit used in standard CNNs. Operations such as pooling in traditional CNNs are no longer necessary in this setting since the wFM is already a pooling-type operation. Analogous to the equivariance of convolution in Euclidean space to translations, we prove that the wFM is equivariant to the action of the group of isometries admitted by the Riemannian manifold on which the data reside. This equivariance property facilitates weight sharing within the network. We present experiments, using the ManifoldNet framework, to achieve video classification and image reconstruction in an autoencoder+decoder setting. Experimental results demonstrate the efficacy of ManifoldNet in terms of classification and reconstruction accuracy. | rejected-papers | This manuscript proposes an extension of convolution operations for manifold-valued data. The primary contributions include the development and description of the approach, and implementation and evaluation on real data.
The reviewers and AC expressed concern about the clarity of the presentation, particularly for a general ICLR audience. Though the contributions are primarily conceptual/theoretical, reviewers also expressed concern about the breadth and quality of the presented experimental results. Some additional concerns related to missing proofs and details were addressed in the rebuttal. (An illustrative recursive wFM computation on the sphere is sketched after this record.) | train | [
"SJlgBKulk4",
"SylACz_g14",
"HJg9flnACX",
"rkxvysGcCX",
"B1lwsi8KAQ",
"HkgL4XTO0Q",
"BkxpMQpdAm",
"Byx2rYEHRm",
"Hkxunv4HRQ",
"BJgWKjEt67",
"BJxlJnVY6m",
"HJxVi6NK6X",
"rkxR6zqqhQ",
"B1lTKrfq2m",
"Byg4qQrP2Q"
] | [
"author",
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"(1) \"However, my overall view of this paper has not been changed as the writing of this paper was not clear enough for me. \"\n\nAns: We are sorry that you feel the writing was unclear. Can you please point out any further clarifications that you need which we can provide in order to change this opinion of yours?... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"HJg9flnACX",
"rkxvysGcCX",
"BJgWKjEt67",
"B1lwsi8KAQ",
"BkxpMQpdAm",
"Hkxunv4HRQ",
"Byx2rYEHRm",
"HJxVi6NK6X",
"HJxVi6NK6X",
"rkxR6zqqhQ",
"B1lTKrfq2m",
"Byg4qQrP2Q",
"iclr_2019_SyzjBiR9t7",
"iclr_2019_SyzjBiR9t7",
"iclr_2019_SyzjBiR9t7"
] |
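As a concrete instance of the recursive weighted Fréchet mean, here is an illustrative computation for unit-sphere-valued data. The paper develops this for general Riemannian manifolds with convergence guarantees, so the sphere, the slerp-based geodesic, and all names here are our simplifying assumptions.

```python
import numpy as np

def slerp(p, q, t):
    """Point at fraction t along the geodesic from unit vector p to q."""
    omega = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if omega < 1e-8:
        return p
    return (np.sin((1 - t) * omega) * p + np.sin(t * omega) * q) / np.sin(omega)

def recursive_wfm(points, w):
    """Recursive weighted Frechet mean estimate on the unit sphere.
    points: (N, d) array of unit vectors; w: (N,) positive convolution weights."""
    m = points[0]
    for k in range(1, len(points)):
        # pull the running mean toward the new point by its relative weight
        m = slerp(m, points[k], w[k] / w[: k + 1].sum())
    return m
```

With all weights equal this reduces to the classical inductive Fréchet mean estimator; the learnable weights play the role of the convolution mask.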
iclr_2019_Syzn9i05Ym | Learning Neural Random Fields with Inclusive Auxiliary Generators | Neural random fields (NRFs), which are defined by using neural networks to implement potential functions in undirected models, provide an interesting family of model spaces for machine learning. In this paper we develop a new approach to learning NRFs with an inclusive-divergence-minimized auxiliary generator - the inclusive-NRF approach - for continuous data (e.g., images), with a solid theoretical examination of exploiting gradient information in model sampling. We show that inclusive-NRFs can be flexibly used in unsupervised/supervised image generation and semi-supervised classification, and, to the best of our knowledge, empirically represent the best-performing random fields in these tasks. In particular, inclusive-NRFs achieve state-of-the-art sample generation quality on CIFAR-10 in both unsupervised and supervised settings. Semi-supervised inclusive-NRFs show strong classification results on par with state-of-the-art generative-model-based semi-supervised learning methods, and simultaneously achieve superior generation, on the widely benchmarked datasets MNIST, SVHN and CIFAR-10. | rejected-papers | This paper proposes a method for learning neural random fields via inclusive-divergence minimization.
Reviewers generally agree that the experiments are sufficient and convincing, and that the method is evaluated well. Results are comparable with SOTA methods for image generation. The paper is reasonably well-written.
The paper is also somewhat lacking in background; most people at ICLR will not be very familiar with this learning problem, so more information on the inclusive-divergence minimization problem would be helpful. A major concern of reviewers is whether the novelty of the method is sufficient for publication.
| train | [
"HJl2-vta0Q",
"BygoQnV9CQ",
"r1lMZ--J0m",
"r1g2osJOnX",
"S1l33IW10Q",
"SJgY-YW1R7",
"Bkxlxu-10X",
"Hkxe-51c3Q",
"S1gFepIt27"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers,\n\nThank you again for your valuable comments and for considering our responses and revisions. The main revisions of the manuscript are: revising the Abstract, expanding the Related Work section to more clearly reveal the differences between this paper and previous studies (to respond to reviewer ... | [
-1,
-1,
-1,
5,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
-1,
3,
-1,
-1,
-1,
3,
2
] | [
"iclr_2019_Syzn9i05Ym",
"r1g2osJOnX",
"iclr_2019_Syzn9i05Ym",
"iclr_2019_Syzn9i05Ym",
"Hkxe-51c3Q",
"r1g2osJOnX",
"S1gFepIt27",
"iclr_2019_Syzn9i05Ym",
"iclr_2019_Syzn9i05Ym"
] |
iclr_2019_SyzrLjA5FQ | Selective Self-Training for semi-supervised Learning | Semi-supervised learning (SSL) is an approach that efficiently exploits a large amount of unlabeled data to improve performance in conditions of limited labeled data. Most conventional SSL methods assume that the classes of unlabeled data are included in the set of classes of labeled data. In addition, these methods do not sort out useless unlabeled samples and use all the unlabeled data for learning, which is not suitable for realistic situations. In this paper, we propose an SSL method called selective self-training (SST), which selectively decides whether to include each unlabeled sample in the training process. It is also designed to apply to a more realistic situation where the classes of unlabeled data differ from those of the labeled data. For the conventional SSL problems, which deal with data where both the labeled and unlabeled samples share the same class categories, the proposed method not only performs comparably to other conventional SSL algorithms but can also be combined with them. While the conventional methods cannot be applied to the new SSL problems where the separated data do not share the classes, our method does not show any performance degradation even if the classes of unlabeled data are different from those of the labeled data. | rejected-papers | Reviewers have concerns about the poor writing of the paper, the lack of technical novelty, and the methodology not being very principled. | train | [
"SkeObu-t07",
"SJg9GkF9nQ",
"S1lmJUZFA7",
"rkgrW2bKCQ",
"HygpTo-FR7",
"rJxj7dZYCQ",
"Skg9OOEf6X",
"HyeHzlJ537"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"First of all, thank you for taking your time to review our paper and providing feedback. We have judiciously taken the comments of the reviewers, and apologize for the late response due to additional experiments and modifications of the paper.\n\n\nRemark 0. It needs other theoretical explanation (ex. co-training)... | [
-1,
5,
-1,
-1,
-1,
-1,
4,
4
] | [
-1,
4,
-1,
-1,
-1,
-1,
5,
4
] | [
"Skg9OOEf6X",
"iclr_2019_SyzrLjA5FQ",
"SJg9GkF9nQ",
"HygpTo-FR7",
"HyeHzlJ537",
"SkeObu-t07",
"iclr_2019_SyzrLjA5FQ",
"iclr_2019_SyzrLjA5FQ"
] |
iclr_2019_r14Aas09Y7 | COCO-GAN: Conditional Coordinate Generative Adversarial Network | Recent advances in Generative Adversarial Networks (GANs) have inspired a wide range of works that generate synthetic images. However, current processes have to generate an entire image at once, and resolutions are therefore limited by memory or computational constraints. In this work, we propose COnditional COordinate GAN (COCO-GAN), which generates a specific patch of an image conditioned on a spatial position rather than the entire image at a time. The generated patches are later combined to form a globally coherent full image. With this process, we show that the generated image can achieve quality competitive with the state of the art, and that the generated patches are locally smooth between consecutive neighbors. One direct implication of COCO-GAN is that it can be applied to any coordinate system, including cylindrical systems, which makes it feasible to generate panorama images. The fact that the patch generation processes are independent of each other inspires a wide range of new applications: first, "Patch-Inspired Image Generation" enables us to generate an entire image based on a single patch. Second, "Partial-Scene Generation" allows us to generate images within a customized target region. Finally, COCO-GAN's patch generation and massive parallelism enable combining patches to generate a full image at higher resolution than the state of the art. | rejected-papers | The paper introduces a GAN architecture for generating small patches of an image and subsequently combining them. Following the rebuttal and discussion, reviewers still rate the paper as marginally above or below the acceptance threshold.
In response to updates, AnonReviewer3 comments that "ablation experiments do make the paper stronger" but it "still lacks convincing experiments for its main motivating use case: generating outputs at a resolution that won't fit in memory within a single forward pass".
AnonReviewer2 points to the major shortcoming that "throughout the exposition it is never really clear why COCO-GAN is a good idea beyond the fact that it somehow works. I was missing a concrete use case where COCO-GAN performs much better."
Though authors provide additional experiments and reference high-resolution output during the discussion phase, they caution that these results are preliminary and could likely benefit from more time/work devoted to training.
On balance, the AC agrees with the reviewers that the paper contains some interesting ideas, but also believes that the experimental validation simply needs more work, and as a result the paper does not meet the bar for acceptance. (An illustrative coordinate-conditioned patch-assembly loop is sketched after this record.)
| train | [
"ByxHxsaxTQ",
"Byx3uk45R7",
"r1gNMG5FCX",
"HJlYrI8O0X",
"BJgGV8UdAQ",
"ryea-UIO07",
"r1xdfBUdRQ",
"B1lJ-SIuCQ",
"rylP4BLOCX",
"HklWyH8OAm",
"ryxR2VUORX",
"B1lvh93lA7",
"r1eLtajJR7",
"rJgZFBEY6m",
"HylqvBEYpQ",
"HJx6qgiU6m",
"HJexTqc8p7",
"SJeJgs98pX",
"HyeA1BiNTm",
"r1xwR_U4TX"... | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official... | [
"The paper describes a GAN architecture and training methodology where a generator is trained to generate \"micro-\" patches, being passed as input a latent vector and patch co-ordinates. Micro-patches generated for different adjacent locations with the same latent vector are combined to generate a \"macro\" patch.... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_r14Aas09Y7",
"r1gNMG5FCX",
"BJgGV8UdAQ",
"r1gllwP937",
"Hkxi2VNbT7",
"r1eLtajJR7",
"ryxR2VUORX",
"ryxR2VUORX",
"ryxR2VUORX",
"ryxR2VUORX",
"iclr_2019_r14Aas09Y7",
"r1eLtajJR7",
"HylqvBEYpQ",
"HJx6qgiU6m",
"HJx6qgiU6m",
"HJexTqc8p7",
"HyeA1BiNTm",
"HyeA1BiNTm",
"ryxe9uU... |
iclr_2019_r1E0OsA9tX | Learning From the Experience of Others: Approximate Empirical Bayes in Neural Networks | Learning deep neural networks can be understood as a combination of representation learning and learning halfspaces. While most previous work aims to diversify representation learning by data augmentations and regularizations, we explore the opposite direction through the lens of the empirical Bayes method. Specifically, we propose a matrix-variate normal prior whose covariance matrix has a Kronecker product structure to capture the correlations in learning different neurons through backpropagation. The prior encourages neurons to learn from the experience of others, hence it provides effective regularization when training large networks on small datasets. To optimize the model, we design an efficient block coordinate descent algorithm with analytic solutions. Empirically, we show that the proposed method helps the network converge to better local optima that also generalize better, and we verify the effectiveness of the approach on both multiclass classification and multitask regression problems with various network structures. | rejected-papers | This paper proposes a method called approximate empirical Bayes (AEB) to learn both the weights and hyperparameters. Reviewers had mixed feelings about this paper. Reviewers agree that the novelty of this paper is limited, since AEB is already a well-known method (in fact, iterative conditional modes is a well-known algorithm). Unfortunately, the paper completely ignores the huge literature on this topic; the previous reference using AEB is McInerney (2017).
Another issue is that the paper seems to be unaware of any issues that this type of approach might have. Here is a reference that discusses some problems with this type of approach:
"Deterministic Latent Variable Models and their Pitfalls", Max Welling, Chaitanya Chemudugunta, and Nathan Sutter, 2008.
The experiments presented in the paper are interesting, but they do not do a good job of assessing why the method works well here even though, in theory, it should not be as good as the exact empirical Bayes method. (The matrix-normal log-prior at the heart of the approach is sketched after this record.)
This paper does not meet the bar for acceptance at ICLR, and therefore I recommend rejection.
| train | [
"Hyl0mgJ40Q",
"BkxSTCCmR7",
"HJxLqakd67",
"SyeonGMfpX",
"HkxLv4oEa7",
"HkednSGfam",
"HJxyoffMam",
"rkeXFzzGam",
"HJgR2SPWam",
"r1e-X_Vq2Q"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the feedback. We are glad the reviewer found the proposed empirical Bayes framework useful and the paper easy to follow. Responses to the reviewer’s concerns are addressed below. With these, we hope the reviewer will find the paper more appropriate for publication and, if so, will raise t... | [
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
3,
7
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
5,
4
] | [
"HJgR2SPWam",
"iclr_2019_r1E0OsA9tX",
"HkxLv4oEa7",
"HJgR2SPWam",
"iclr_2019_r1E0OsA9tX",
"r1e-X_Vq2Q",
"HJgR2SPWam",
"HJgR2SPWam",
"iclr_2019_r1E0OsA9tX",
"iclr_2019_r1E0OsA9tX"
] |
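For reference, the Kronecker-structured matrix-variate normal prior mentioned in the abstract has the log density below; the zero mean and the numpy formulation are our illustrative choices.

```python
import numpy as np

def matrix_normal_logpdf(W, U, V):
    """log p(W) for W ~ MN(0, U, V), i.e. vec(W) ~ N(0, V kron U).
    W: (n, p) weight matrix; U: (n, n) row covariance; V: (p, p) column covariance."""
    n, p = W.shape
    quad = np.trace(np.linalg.solve(V, W.T) @ np.linalg.solve(U, W))  # tr(V^-1 W^T U^-1 W)
    _, ldU = np.linalg.slogdet(U)
    _, ldV = np.linalg.slogdet(V)
    return -0.5 * (quad + n * ldV + p * ldU + n * p * np.log(2 * np.pi))
```

Maximizing this prior term jointly with the likelihood couples the rows and columns of each weight matrix, which is the "learning from the experience of others" effect the abstract describes.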
iclr_2019_r1GB5jA5tm | Adversarial Sampling for Active Learning | This paper proposes ASAL, a new pool-based active learning method that generates high-entropy samples. Instead of directly annotating the synthetic samples, ASAL searches for similar samples in the pool and includes them for training. Hence, the quality of new samples is high and annotations are reliable. ASAL is particularly suitable for large data sets because it achieves a better run-time complexity (sub-linear) for sample selection than traditional uncertainty sampling (linear). We present a comprehensive set of experiments on two data sets and show that ASAL outperforms similar methods and clearly exceeds the established baseline (random sampling). In the discussion section we analyze in which situations ASAL performs best and why it is sometimes hard to outperform random sample selection. To the best of our knowledge, this is the first adversarial active learning technique applied to multi-class problems using deep convolutional classifiers that demonstrates performance superior to random sample selection. | rejected-papers | The paper proposes adversarial sampling for pool-based active learning.
The reviewers and AC note critical potential weaknesses in the experimental results: it is far from surprising that the proposed method is better than random sampling. Ideally, one should reduce the complexity while keeping state-of-the-art performance; otherwise, it is hard to claim the proposed method is fundamentally better than prior ones, although their targets might be different.
The AC thinks the proposed method has potential and is interesting, but decided that the work needs further development before publication. | train | [
"Hkg43cdO0m",
"SJeH_qdd07",
"Skgxf5Od07",
"S1elfY__AQ",
"r1gbAqUSpm",
"rkedwDJo3Q",
"rJeOCbdwnX"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for the review. Although you are very confident about your evaluation we feel that we need to clarify the differences to previous methods such as GAAL, as sample matching is not the only but the most apparent difference among many other. Furthermore, we use sample generation in a very different context. Our... | [
-1,
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
-1,
2,
4,
5
] | [
"rJeOCbdwnX",
"Skgxf5Od07",
"rkedwDJo3Q",
"r1gbAqUSpm",
"iclr_2019_r1GB5jA5tm",
"iclr_2019_r1GB5jA5tm",
"iclr_2019_r1GB5jA5tm"
] |
iclr_2019_r1GgDj0cKX | PRUNING IN TRAINING: LEARNING AND RANKING SPARSE CONNECTIONS IN DEEP CONVOLUTIONAL NETWORKS | This paper proposes a Pruning in Training (PiT) framework for learning to reduce the parameter size of networks. Different from existing works, our PiT framework employs sparse penalties to train networks and thus helps rank the importance of weights and filters. Our PiT algorithms can directly prune the network without any fine-tuning. The pruned networks can still achieve performance comparable to the original networks. In particular, we introduce the (Group) Lasso-type Penalty (L-P/GL-P) and the (Group) Split LBI Penalty (S-P/GS-P) to regularize the networks, and a proposed pruning strategy is used to help prune the network. We conduct extensive experiments on MNIST, CIFAR-10, and miniImageNet. The results validate the efficacy of our proposed methods. Remarkably, on the MNIST dataset, our PiT framework can save 17.5% of the parameter size of LeNet-5 while achieving 98.47% recognition accuracy. | rejected-papers | This paper proposes to obtain a high pruning ratio by adding constraints that encourage small weights. Reviewers have a consensus on rejection due to unconvincing experiments and lack of novelty. | val | [
"Syx-wJFkpX",
"rkez5fyA3m",
"HkgQJOqsh7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This manuscript presents a method to prune deep neural networks while training. The main idea is to use some regularization to force some parameters to have small values, which will then be subject to pruning. \nOverall, the proposed method is not very interesting. More importantly, the manuscript only lists the p... | [
5,
5,
4
] | [
4,
4,
5
] | [
"iclr_2019_r1GgDj0cKX",
"iclr_2019_r1GgDj0cKX",
"iclr_2019_r1GgDj0cKX"
] |
iclr_2019_r1GkMhAqYm | CoDraw: Collaborative Drawing as a Testbed for Grounded Goal-driven Communication | In this work, we propose a goal-driven collaborative task that contains language, vision, and action in a virtual environment as its core components. Specifically, we develop a Collaborative image-Drawing game between two agents, called CoDraw. Our game is grounded in a virtual world that contains movable clip art objects. The game involves two players: a Teller and a Drawer. The Teller sees an abstract scene containing multiple clip art pieces in a semantically meaningful configuration, while the Drawer tries to reconstruct the scene on an empty canvas using available clip art pieces. The two players communicate bidirectionally using natural language. We collect the CoDraw dataset of ~10K dialogs consisting of ~138K messages exchanged between human agents. We define protocols and metrics to evaluate the effectiveness of learned agents on this testbed, highlighting the need for a novel "crosstalk" condition which pairs agents trained independently on disjoint subsets of the training data for evaluation. We present models for our task, including simple but effective baselines and neural network approaches trained using a combination of imitation learning and goal-driven training. All models are benchmarked using both fully automated evaluation and by playing the game with live human agents. | rejected-papers | The reviewers raise a number of concerns, including the lack of methodological novelty, the limited experimental evaluation, and a relatively uninteresting application with very limited real-world applicability. This set of facts has been assessed differently by the three reviewers, and the scores range from probable rejection to probable acceptance. I believe that the work as it stands would not generate wide interest among ICLR attendees, mainly because of the lack of methodological novelty and the relatively simplistic application. The authors' rebuttal failed to address these issues and I cannot recommend this work for presentation at ICLR. | val | [
"Ske8XQAwAX",
"rJe0qGAvAX",
"Hklr3W0D0m",
"H1lU2zbJ6Q",
"BJlFdvccnm",
"BJllz-x5hX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your feedback!\n\nWe've updated the related works section to include some of the references you provided and contrast the CoDraw task with these works.\n\nWe tried several drawer variations that we did not include in the submission due to space concerns. Replacing the LSTM in the drawer with a bag-of... | [
-1,
-1,
-1,
4,
6,
7
] | [
-1,
-1,
-1,
4,
4,
4
] | [
"BJllz-x5hX",
"BJlFdvccnm",
"H1lU2zbJ6Q",
"iclr_2019_r1GkMhAqYm",
"iclr_2019_r1GkMhAqYm",
"iclr_2019_r1GkMhAqYm"
] |
iclr_2019_r1Gsk3R9Fm | Shallow Learning For Deep Networks | Shallow supervised 1-hidden-layer neural networks have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but lack their representational power. Here we use 1-hidden-layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, we focus on problems where deep learning is reported as critical for success. We thus study CNNs on image recognition tasks using the large-scale ImageNet dataset and the CIFAR-10 dataset. Using a simple set of ideas for architecture and training, we find that solving sequential 1-hidden-layer auxiliary problems leads to a CNN that exceeds AlexNet performance on ImageNet. Extending our training methodology to construct individual layers by solving 2- and 3-hidden-layer auxiliary problems, we obtain an 11-layer network that exceeds VGG-11 on ImageNet, obtaining 89.8% top-5 single-crop accuracy. To our knowledge, this is the first competitive alternative to end-to-end training of CNNs that can scale to ImageNet. We conduct a wide range of experiments to study the properties this induces on the intermediate layers. | rejected-papers | The paper discusses layer-wise training of deep networks. The authors show that it is possible to achieve reasonable performance by training deep nets layer by layer, as opposed to the now widely adopted end-to-end training. While such a training procedure is not novel, the authors argue that this is an interesting result, considering that such a training procedure is often dismissed as sub-optimal and leading to inferior results. However, the results show exactly that, as the performance of the models is significantly worse than the state of the art, and it is unclear what other advantages such a training scheme can offer. The authors mention that layer-wise training could be useful for the theoretical understanding of deep nets, but they don't really perform such an analysis in this submission, and it's also unclear whether conclusions of such an analysis would extend to deep nets trained end-to-end.
In its current form, the paper is not ready for acceptance. I encourage the authors to make a clearer case for the method: either by improving results to match end-to-end training, or by actually demonstrating that layer-wise training has certain advantages over end-to-end learning. (A minimal greedy layer-wise training loop is sketched after this record.)
| train | [
"Bke8LzSGkN",
"BkldkpfoC7",
"H1xB7afoRm",
"B1llvpMsRX",
"SJxCDvweeV",
"rkeezrWoy4",
"Syly1TG51V",
"SylKUo3B14",
"ryeF8Nbf1E",
"H1ePa-u5nQ",
"BylEbOfbom",
"S1gZbZ5xAX",
"H1ltyOFeCm",
"r1e14TUeCQ",
"HygGhybIpm",
"rkgZx4WIT7",
"HJlT0m-LpQ",
"SkewQHb86m",
"SJxEC-WIa7",
"S1gExGb86Q"... | [
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"officia... | [
"Dear R1, We thank you for the response.\nWe would like to ask if you can specify the insufficient novelty in the results that puts this under the publication bar for you? We believe to have addressed and disconfirmed all the claims of overlap to prior work. The results we show are not present nor inferable from e... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1
] | [
"ryeF8Nbf1E",
"rkg7IXvPhQ",
"HJlYGFCO2Q",
"H1ePa-u5nQ",
"SylKUo3B14",
"Syly1TG51V",
"iclr_2019_r1Gsk3R9Fm",
"BkldkpfoC7",
"H1xB7afoRm",
"iclr_2019_r1Gsk3R9Fm",
"rye0Bjma5X",
"r1e14TUeCQ",
"iclr_2019_r1Gsk3R9Fm",
"iclr_2019_r1Gsk3R9Fm",
"HJlYGFCO2Q",
"rkg7IXvPhQ",
"rkg7IXvPhQ",
"icl... |
iclr_2019_r1MSBjA9Ym | Collapse of deep and narrow neural nets | Recent theoretical work has demonstrated that deep neural networks have superior performance over shallow networks, but their training is more difficult, e.g., they suffer from the vanishing gradient problem. This problem can typically be resolved by the rectified linear unit (ReLU) activation. However, here we show that even with such an activation, deep and narrow neural networks (NNs) will, with high probability, converge to erroneous mean or median states of the target function, depending on the loss. Deep and narrow NNs are encountered in solving partial differential equations with high-order derivatives. We demonstrate this collapse of such NNs both numerically and theoretically, and provide estimates of the probability of collapse. We also construct a diagram of a safe region for designing NNs that avoid the collapse to erroneous states. Finally, we examine different ways of initialization and normalization that may avoid the collapse problem. Asymmetric initializations may reduce the probability of collapse but do not totally eliminate it. | rejected-papers | The paper studies difficulties in training deep and narrow networks. It shows that there is high probability that deep and narrow ReLU networks will converge to an erroneous state, depending on the type of training that is employed. The results add to our current understanding of the limitations of these architectures.
The main criticism is that the analysis might be very limited, being restricted to very narrow networks (of width about 10 or less), which are not very common in practice, and that the observed collapse phenomenon can be easily addressed by asymmetric initialization.
There were some issues with the proofs, which were resolved in the discussion between authors and reviewers. The revision is relatively extensive.
This is a borderline case. The paper receives one good rating, one negative rating, and a borderline accept rating. Although the paper contributes interesting insights to a relevant problem that clearly needs contributions in this direction, the analysis presented in the paper and its applicability in practice seem to be very restrictive at this point. | val | [
"BJlxflLD3X",
"BklM04taAX",
"HylQmamnAQ",
"Bygp4t3K0Q",
"HkliJKnFRm",
"r1xq6OhFRQ",
"SkgQnP3K0Q",
"S1e7GunYR7",
"B1xzZcRK2m",
"H1ge8bKu2m"
] | [
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies failure modes of deep and narrow networks. I find this research extremely valuable and interesting. In addition to that, the paper focuses on as small as possible models, for which the undesired behavior occurs. That is another great positive, too much of a research in DL focuses on the most comp... | [
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
4
] | [
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_r1MSBjA9Ym",
"HylQmamnAQ",
"r1xq6OhFRQ",
"BJlxflLD3X",
"H1ge8bKu2m",
"H1ge8bKu2m",
"B1xzZcRK2m",
"B1xzZcRK2m",
"iclr_2019_r1MSBjA9Ym",
"iclr_2019_r1MSBjA9Ym"
] |
iclr_2019_r1MxciCcKm | Connecting the Dots Between MLE and RL for Sequence Generation | Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms. For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem. Reinforcement learning methods such as policy gradient address the problem but can have prohibitively poor exploration efficiency. A variety of other algorithms, such as RAML, SPG, and data noising, have also been developed from different perspectives. This paper establishes a formal connection between these algorithms. We present a generalized entropy-regularized policy optimization formulation, and show that the apparently divergent algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of the reward function and a couple of hyperparameters. The unified interpretation offers a systematic view of the varying properties of exploration and learning efficiency. In addition, based on the framework, we present a new algorithm that dynamically interpolates among the existing algorithms for improved learning. Experiments on machine translation and text summarization demonstrate the superiority of the proposed algorithm. | rejected-papers | I enjoyed reading the paper myself and I appreciate the unifying framework connecting RAML and SPG. While I do not put a lot of weight on the experiments, I agree with the reviewers that the experimental results are not very strong, and I am not convinced that the theoretical contribution meets the bar at ICLR.
In the interpolation algorithm, there seem to be an additional annealing parameter and two tuning parameters. It is important to describe how the parameters are tuned. Given the additional hyper-parameters, one may consider giving all of the algorithms the same budget of hyper-parameter tuning. I also agree with reviewers that the policy gradient baseline seems to underperform typical results. One possible way to strengthen the experiments is to try to replicate the results of SPG or RAML and discuss the behavior of each algorithm as a function of hyper-parameters. (The generalized entropy-regularized objective at the core of the framework is written out after this record.)
| val | [
"SJglOdb2CX",
"B1xuDmP93Q",
"BJx-HalfAX",
"r1eI-peMAX",
"HJxXs3xzRm",
"rye_whgMCX",
"HJeXL_49hQ",
"HJl84_xU3m"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"I appreciate the new experimental results.\n\nWhile I do understand that coming up with a readily comparable experimental setup is difficult, I still think more effort should be put into this. The evaluation protocol is not that variable from my own experience and the relevant choices either appear in the publishe... | [
-1,
5,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
3,
4
] | [
"BJx-HalfAX",
"iclr_2019_r1MxciCcKm",
"HJl84_xU3m",
"HJeXL_49hQ",
"rye_whgMCX",
"B1xuDmP93Q",
"iclr_2019_r1MxciCcKm",
"iclr_2019_r1MxciCcKm"
] |
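For reference, the generalized entropy-regularized policy optimization objective the abstract refers to can be written as follows; this is our reconstruction of the standard formulation, with $q$ a variational distribution over output sequences $\mathbf{y}$ and $R$ the reward:

```latex
\mathcal{L}(q, \theta) \;=\; \mathbb{E}_{q}\!\left[ R(\mathbf{y}) \right]
  \;-\; \alpha \, \mathrm{KL}\!\left( q(\mathbf{y}) \,\|\, p_\theta(\mathbf{y}) \right)
  \;+\; \beta \, \mathbb{H}(q),
\qquad
q^{*}(\mathbf{y}) \;\propto\; \exp\!\Big\{ \tfrac{\alpha \log p_\theta(\mathbf{y}) + R(\mathbf{y})}{\alpha + \beta} \Big\}.
```

The closed form for $q^{*}$ follows from setting the functional derivative in $q$ to zero under the normalization constraint; different (reward, $\alpha$, $\beta$) configurations then recover the individual algorithms as special cases, which is the unification the paper establishes.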
iclr_2019_r1NDBsAqY7 | Unsupervised Word Discovery with Segmental Neural Language Models | We propose a segmental neural language model that combines the representational power of neural networks and the structure learning mechanism of Bayesian nonparametrics, and show that it learns to discover semantically meaningful units (e.g., morphemes and words) from unsegmented character sequences. The model generates text as a sequence of segments, where each segment is generated either character-by-character from a sequence model or as a single draw from a lexical memory that stores multi-character units. Its parameters are fit to maximize the marginal likelihood of the training data, summing over all segmentations of the input, and its hyperparameters are likewise set to optimize held-out marginal likelihood.
To prevent the model from overusing the lexical memory, which leads to poor generalization and bad segmentation, we introduce a differentiable regularizer that penalizes based on the expected length of each segment. To our knowledge, this is the first demonstration of neural networks that have predictive distributions better than LSTM language models and also infer a segmentation into word-like units that are competitive with the best existing word discovery models. | rejected-papers | A major issue or complaint from the reviewers seems to come from a wrong framing of this submission. I believe the framing of this work should have been a better language model (or translation model) with word discovery as an awesome side effect, which I carefully guess would've been a perfectly good story, assuming that the perplexity result in Table 4 translates to text with blank spaces left in (it is not possible to tell whether this is the case from the text alone). Even discounting R1, whom I disagree with on quite a few points, the other reviewers also did not see much merit in this work, again probably due to the framing issue above.
I highly encourage the authors to change the framing, evaluate it as a usual sequence model on various benchmarks, and resubmit it to another venue. | train | [
"HkeptnOShQ",
"rJl5sY9ZAm",
"rkgXtKcZ0X",
"rJesgY9bAm",
"S1lJFgDWTX",
"SJgVbBDkpm",
"B1xAJ5Xo37",
"SyxkjOIJTm",
"HyeWqgLaom",
"HklWx4ABo7",
"HkgKMthXom",
"HylO-Yh7oX",
"HyehCdhmiQ",
"BkeaRjdc5Q",
"S1xTLWQccQ",
"SJxE5HCFcm"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"public",
"author",
"author",
"author",
"public",
"public",
"public"
] | [
"This paper presented a novel approach for modeling a sequence of characters as a sequence of latent segmentations. The challenge here was how to efficiently compute the marginal likelihood of a character sequence (exponential number different of segmentations). The author(s) overcame this by having a segment gener... | [
6,
-1,
-1,
-1,
4,
-1,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
-1,
3,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1NDBsAqY7",
"HkeptnOShQ",
"SJgVbBDkpm",
"S1lJFgDWTX",
"iclr_2019_r1NDBsAqY7",
"SyxkjOIJTm",
"iclr_2019_r1NDBsAqY7",
"B1xAJ5Xo37",
"HyehCdhmiQ",
"HylO-Yh7oX",
"SJxE5HCFcm",
"S1xTLWQccQ",
"BkeaRjdc5Q",
"iclr_2019_r1NDBsAqY7",
"iclr_2019_r1NDBsAqY7",
"iclr_2019_r1NDBsAqY7"
] |
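The phrase "summing over all segmentations of the input" in the abstract above corresponds to a standard semi-Markov forward recursion. A minimal sketch (plain Python; `log_segment_prob` is a hypothetical stand-in scorer, whereas the actual model mixes a character-level sequence model with a lexical memory):

```python
import math

def log_segment_prob(x, i, j):
    # Hypothetical stand-in: charge log(1/V) per character of the segment.
    V = 27
    return (j - i) * math.log(1.0 / V)

def logsumexp(vals):
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def marginal_loglik(x, max_len=5):
    """Semi-Markov forward pass: alpha[t] sums over every way of covering
    x[:t] with segments, so alpha[len(x)] is the marginal log-likelihood."""
    n = len(x)
    alpha = [float("-inf")] * (n + 1)
    alpha[0] = 0.0
    for t in range(1, n + 1):
        alpha[t] = logsumexp([alpha[j] + log_segment_prob(x, j, t)
                              for j in range(max(0, t - max_len), t)])
    return alpha[n]

print(marginal_loglik("doyouseethekitty"))
```

Capping the segment length (`max_len`) keeps the recursion linear-time per position, which is what makes marginalizing over the exponentially many segmentations tractable.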
iclr_2019_r1Nb5i05tX | The effectiveness of layer-by-layer training using the information bottleneck principle | The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization). To date, evidence of this phenomenon has been indirect and aroused controversy due to theoretical and practical complications. In particular, it has been pointed out that the MI with the input is theoretically infinite in many cases of interest, and that the MI with the target is fundamentally difficult to estimate in high dimensions. As a consequence, the validity of this theory has been questioned. In this paper, we overcome these obstacles by two means. First, as previously suggested, we replace the MI with the input by a noise-regularized version, which ensures it is finite. As we show, this modified penalty in fact acts as a form of weight decay regularization. Second, to obtain accurate (noise regularized) MI estimates between an intermediate representation and the input, we incorporate the strong prior-knowledge we have about their relation, into the recently proposed MI estimator of Belghazi et al. (2018). With this scheme, we are able to stably train each layer independently to explicitly optimize the IB functional. Surprisingly, this leads to enhanced prediction accuracy, thus directly validating the IB theory of deep nets for the first time. | rejected-papers | This paper does two things. First, it proposes an approach to estimating the mutual information between the input, X, or target label, Y, and an internal representation in a deep neural network, L, using MINE (for I(Y;L)) or a variation on MINE (for I(X;L)) and noise regularization (estimating I(X;L+ε), where ε is isotropic Gaussian white noise) to avoid the problem that I(X;L) is infinite for deterministic networks and continuous X. Second, it attempts to validate the information bottleneck theory of deep learning (Tishby and Zaslavsky, 2015) by exploring an approach to training DNNs that optimizes the information bottleneck Lagrangian, I(Y;L) − βI(X;L+ε), layerwise instead of using cross-entropy and backpropagation. Experiments on MNIST and CIFAR-10 show improvements for the layerwise training over cross-entropy training. The penalty on I(X;L+ε) is described as being analogous to weight decay. The reviewers raised a number of concerns about the paper, the most serious of which is that the claim that the layerwise training results validate the information bottleneck theory of deep learning is too strong. In the AC's opinion, R1's critique that "[i]f the true mutual information is infinite and the noise regularized estimator is only meant for comparative purposes, why then are the results of the training trajectories interpreted so literally as estimates of the true mutual information?" is critical, and the authors' reply that "this quantity is in fact a more appropriate measure for “compactness” or “complexity” than the mutual information itself" undermines their claim that they are validating the information bottleneck theory of deep nets because the information bottleneck theory claims to be using mutual information. 
The AC also suggests that if the authors wish to continue this work and submit it to another venue, they (1) discuss the fact that MINE estimates only a lower bound that may be quite loose in practice and (2) say in their experimental section whether or not the variance of the regularizing noise was tuned as a hyperparameter, and if so, how results varied with different amounts of noise. Finally, the AC regrets that only one reviewer participated in the discussion (in a very minimal way), despite the reviewers' receiving several reminders that the discussion is a defining feature of the ICLR review process. | train | [
"Bkgj99ssnX",
"rJe0qyKXpX",
"B1xgjhOmT7",
"rJgqHhOX6m",
"rygrGiumpX",
"rye9H9_Qa7",
"HJg9b0Xxpm",
"r1ehpIDThX"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This paper provides a method to do explicit IB functional estimation for deep neural networks inspired from the recent mutual information estimation method (MINE). By using the method, the authors 1) validate the IB theory of deep nets using weight decay, and 2) provides a layer-wise explicit IB functional traini... | [
5,
-1,
-1,
-1,
-1,
-1,
5,
2
] | [
5,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_r1Nb5i05tX",
"iclr_2019_r1Nb5i05tX",
"r1ehpIDThX",
"r1ehpIDThX",
"Bkgj99ssnX",
"HJg9b0Xxpm",
"iclr_2019_r1Nb5i05tX",
"iclr_2019_r1Nb5i05tX"
] |
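The estimator discussed in the record above combines two ingredients that are simple to write down: injected Gaussian noise (so I(X; L+eps) is finite) and a MINE-style Donsker-Varadhan lower bound. A minimal NumPy sketch, with a fixed toy critic in place of the trained statistics network (the data, noise level, and critic scale are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def dv_lower_bound(T, x, z):
    """Donsker-Varadhan bound: I(X;Z) >= E_joint[T] - log E_marginal[exp(T)].
    Shuffling z breaks the pairing, yielding samples from the product marginal."""
    joint = T(x, z).mean()
    z_shuf = z[rng.permutation(len(z))]
    return joint - np.log(np.exp(T(x, z_shuf)).mean())

# Noise-regularized representation: z = f(x) + eps keeps I(X;Z) finite.
x = rng.normal(size=(5000, 1))
z = 2.0 * x + 0.5 * rng.normal(size=(5000, 1))

T = lambda a, b: 0.2 * (a * b).sum(axis=1)  # fixed toy critic; MINE trains this
print(dv_lower_bound(T, x, z))              # a valid (here loose) lower bound
```

Any critic yields a valid lower bound; MINE's contribution is training T to tighten it, and the paper's twist, per the abstract, is restricting the estimator using the known relation between the input and the noisy representation.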
iclr_2019_r1V0m3C5YQ | Coupled Recurrent Models for Polyphonic Music Composition | This work describes a novel recurrent model for music composition, which accounts for the rich statistical structure of polyphonic music. There are many ways to factor the probability distribution over musical scores; we consider the merits of various approaches and propose a new factorization that decomposes a score into a collection of concurrent, coupled time series: "parts." The model we propose borrows ideas from both convolutional neural models and recurrent neural models; we argue that these ideas are natural for capturing music's pitch invariances, temporal structure, and polyphony.
We train generative models for homophonic and polyphonic composition on the KernScores dataset (Sapp, 2005), a collection of 2,300 musical scores comprising around 2.8 million notes and spanning the period from the Renaissance to the early 20th century. While evaluation of generative models is known to be hard (Theis et al., 2016), we present careful quantitative results using a unit-adjusted cross entropy metric that is independent of how we factor the distribution over scores. We also present qualitative results using a blind discrimination test.
| rejected-papers | This paper proposes novel recurrent models for polyphonic music composition and demonstrates the approach with qualitative and quantitative evaluations as well as samples. The technical parts in the original write-up were not very clear, as noted by multiple reviewers. During the review period, the presentation was improved. Unfortunately, the reviewer scores are mixed and on the lower side, mainly because of the lack of clarity and the quality of the results. | train | [
"H1lqh2AHAX",
"rkgk2iABA7",
"BkldQoArAm",
"Hyl5P90rCX",
"rkxg76-an7",
"BJghCWtqh7",
"rJexNSvj3X",
"BygOalnNnQ"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Thank you for your extensive comments, and in particular for drawing our attention to the Johnson paper. Our relative pitch weight-sharing is the same idea as Johnson’s tied parallel networks, and we have made sure to recognize this in the new revision of the paper.\n\nWe’ve made an effort to clean up many of your... | [
-1,
-1,
-1,
-1,
7,
4,
3,
-1
] | [
-1,
-1,
-1,
-1,
3,
4,
4,
-1
] | [
"BJghCWtqh7",
"rJexNSvj3X",
"rkxg76-an7",
"iclr_2019_r1V0m3C5YQ",
"iclr_2019_r1V0m3C5YQ",
"iclr_2019_r1V0m3C5YQ",
"iclr_2019_r1V0m3C5YQ",
"iclr_2019_r1V0m3C5YQ"
] |
iclr_2019_r1VPNiA5Fm | The Universal Approximation Power of Finite-Width Deep ReLU Networks | We show that finite-width deep ReLU neural networks yield rate-distortion optimal approximation (Bölcskei et al., 2018) of a wide class of functions, including polynomials, windowed sinusoidal functions, one-dimensional oscillatory textures, and the Weierstrass function, a fractal function which is continuous but nowhere differentiable. Together with the recently established universal approximation result for affine function systems (Bölcskei et al., 2018), this demonstrates that deep neural networks approximate vastly different signal structures generated by the affine group, the Weyl-Heisenberg group, or through warping, and even certain fractals, all with approximation error decaying exponentially in the number of neurons. We also prove that in the approximation of sufficiently smooth functions, finite-width deep networks require strictly fewer neurons than finite-depth wide networks. | rejected-papers | The paper contributes to the theoretical understanding of finite-width ReLU networks. It introduces new ideas and constructions to investigate the representational power of such networks. In particular, the analysis works without skip connections. Referees found the paper refreshingly well-written and pleasant to read.
There is a concern that the paper may be overstating the novelty and innovation of the results, as some of them are easy implications, and there are other previous works that have obtained results on finite-width networks (see AnonReviewer4's comments). On the other hand, the authors were careful to cite when they reuse proof techniques from these and other works (AnonReviewer2). Another concern is that the considered target function space might be too narrow (see AnonReviewer2's comments). The authors clarify that the choice was made because the considered classes are known to be hard to approximate and there are no known classical methods that would yield exponential approximation accuracy. Another concern is that the results might not be suitable for ICLR, having an emphasis on approximation theory and less on learning (see AnonReviewer3's comments).
The reviewers consistently rate the paper as not very strong, with one rating marginally above and two marginally below the acceptance threshold.
While this appears to be a well-written paper with valuable new ideas regarding the approximation properties of networks, the contributions were not convincing enough. I would suggest that developing a clearer connection to learning and to broader classes of target functions could increase the appeal of the paper. | train | [
"B1eFQNnYCQ",
"SkgepI3tA7",
"BJxTNE3tAQ",
"BylzqvcYRm",
"Bkg2DZ5eRQ",
"Byx03WpnnX",
"SJgOG6Gq27"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for pointing out references (Montanelli et al., 2017) and (Schmidt-Hieber, 2017). \n\nWe considered the Weierstrass function and oscillatory textures as these functions are known to be hard to approximate and there are no known classical methods that would yield exponential approximation accuracy. The ma... | [
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
3,
4,
3
] | [
"SJgOG6Gq27",
"Bkg2DZ5eRQ",
"SJgOG6Gq27",
"Byx03WpnnX",
"iclr_2019_r1VPNiA5Fm",
"iclr_2019_r1VPNiA5Fm",
"iclr_2019_r1VPNiA5Fm"
] |
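Among the target classes named in the abstract above, the Weierstrass function is the easiest to reproduce. A minimal sketch of the truncated series (NumPy; the parameter choices are illustrative):

```python
import numpy as np

def weierstrass(x, a=0.5, b=3.0, K=25):
    """Truncated W(x) = sum_{k=0..K} a^k cos(b^k * pi * x).
    For 0 < a < 1 and ab >= 1 the limit is continuous but nowhere
    differentiable, yet the paper's result says deep ReLU networks
    approximate it with error decaying exponentially in network size."""
    x = np.asarray(x, dtype=float)
    return sum(a**k * np.cos(b**k * np.pi * x) for k in range(K + 1))

print(weierstrass(np.linspace(0.0, 1.0, 5)))
```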
iclr_2019_r1Vx_oA5YQ | Integrated Steganography and Steganalysis with Generative Adversarial Networks | Recently, generative adversarial networks have become a hotspot in both research and industrial application areas, with data generation in computer vision being their most common use. This paper extends their application to the data hiding and security area. We propose a novel framework that integrates the steganography and steganalysis processes, applying generative adversarial networks as the core structure. The discriminative model simulates the steganalysis process, which helps us understand the sensitivity of cover images to semantic changes. The steganography generative model generates a stego image that is aligned with the original cover image and attempts to confuse the steganalysis discriminative model. The introduction of a cycle discriminative model and an inconsistent loss helps to enhance the quality and security of the generated stego image during the iterative training process. The training dataset mixes intact images with intentionally attacked images, and this mixed training process further improves the robustness and security of the new framework. Through qualitative and quantitative experiments and analysis, this novel framework shows compelling performance and advantages over current state-of-the-art methods on steganography and steganalysis benchmarks. | rejected-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
- The problem and approach, steganography via GANs, is interesting.
- The results seem promising.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
The original submission was imprecise and difficult to follow and, while the AC acknowledges that the authors made significant improvements, the current version still needs some work before it's clear enough to be acceptable for publication.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
Concerns varied by reviewer and there was no main point of contention.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers did not reach a consensus. The final decision is aligned with the less positive reviewers, one of whom was very confident in his/her review. The AC agrees that the paper should be made clearer and more precise.
| train | [
"SkxgwfnHkE",
"S1eDQ1-Th7",
"rJxzr10mJN",
"rylw63nj2m",
"BJg2jRSKhm"
] | [
"public",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Hi Authors,\n\nI'm Melisa from the OpenReview team. Our tool to compare pdfs is using a third party api that supports files up to 40mb of size. Please try to reduce the size of the file and we can help you to replace the current versions. \n\nBest,\n\nMelisa",
"Paper uses Generative Adversarial Networks' (GAN) p... | [
-1,
5,
-1,
6,
5
] | [
-1,
5,
-1,
4,
2
] | [
"rJxzr10mJN",
"iclr_2019_r1Vx_oA5YQ",
"iclr_2019_r1Vx_oA5YQ",
"iclr_2019_r1Vx_oA5YQ",
"iclr_2019_r1Vx_oA5YQ"
] |
iclr_2019_r1eJssCqY7 | TabNN: A Universal Neural Network Solution for Tabular Data | Neural networks (NNs) have achieved state-of-the-art performance in many tasks within the image, speech, and text domains. Such great success is mainly due to special structural designs that fit particular data patterns, such as CNNs capturing spatial locality and RNNs modeling sequential dependency. Essentially, these specific NNs achieve good performance by leveraging prior knowledge about the corresponding domain data. Nevertheless, there are many applications with all kinds of tabular data in other domains. Since there are no shared patterns among these diverse tabular datasets, it is hard to design specific structures to fit them all. Without careful architecture design based on domain knowledge, it is quite challenging for NNs to reach satisfactory performance in these tabular data domains. To fill the gap of NNs in tabular data learning, we propose a universal neural network solution, called TabNN, to automatically derive effective NN architectures for tabular data in all kinds of tasks. Specifically, the design of TabNN follows two principles: \emph{to explicitly leverage expressive feature combinations} and \emph{to reduce model complexity}. Since GBDT has empirically proven its strength in modeling tabular data, we use GBDT to power the implementation of TabNN. Comprehensive experimental analysis on a variety of tabular datasets demonstrates that TabNN can achieve much better performance than many baseline solutions. | rejected-papers | All reviewers agree in their assessment that this paper has merits but is not yet ready for acceptance into ICLR. The area chair commends the authors for their responses to the reviews. | train | [
"HJxBzN0-Tm",
"HklpZu9eTm",
"BkxpR3qbTX",
"rJgBQF9-aX",
"HkeGURdb6X",
"BJxckPclaQ",
"HygFx1CkpX",
"SJgssoHT27",
"BkxIg3zKhX"
] | [
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your efforts in reviewing our paper and the valuable comments. We attempt to address your concerns in the following.\n\n1. Response to the \"Weaknesses\" part and the comparison with GBDT\n\nAs stated in the response to review 1, our goal is not inventing a model to beat GBDT but developing a model to c... | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
4,
5
] | [
-1,
-1,
-1,
-1,
4,
-1,
-1,
5,
2
] | [
"HkeGURdb6X",
"BkxIg3zKhX",
"rJgBQF9-aX",
"HklpZu9eTm",
"iclr_2019_r1eJssCqY7",
"SJgssoHT27",
"SJgssoHT27",
"iclr_2019_r1eJssCqY7",
"iclr_2019_r1eJssCqY7"
] |
iclr_2019_r1eO_oCqtQ | Gaussian-gated LSTM: Improved convergence by reducing state updates | Recurrent neural networks can be difficult to train on long sequence data due to the well-known vanishing gradient problem. Some architectures incorporate methods to reduce RNN state updates, therefore allowing the network to preserve memory over long temporal intervals. To address these problems of convergence, this paper proposes a timing-gated LSTM RNN model, called the Gaussian-gated LSTM (g-LSTM). The time gate controls when a neuron can be updated during training, enabling longer memory persistence and better error-gradient flow. This model captures long-temporal dependencies better than an LSTM, and the time gate parameters can be learned even from non-optimal initialization values. Because the time gate limits the updates of the neuron state, the number of computes needed for the network update is also reduced. By adding a computational budget term to the training loss, we can obtain a network which further reduces the number of computes by at least 10x. Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences. | rejected-papers | Perhaps the biggest issue is that the proposed approach, which supposedly addresses the issue of capturing long-term dependencies with faster convergence, was only tested on problems of largely fixed length. With the proposed k_n gate being defined as a Gaussian with a single mean (per unit?) and variance, it is important and interesting to know how this network would cope with examples of vastly varying lengths. In addition, R3 made good points about the comparison against the conventional LSTM and how it should be done with careful hyperparameter tuning and based on conventional, known setups.
This submission would be greatly strengthened by more experiments using a better set of benchmarks and by more carefully placing its contribution w.r.t. other recent advances. | val | [
"HkgsK11j1E",
"BJx43BMo0Q",
"Hyg1f7TFCQ",
"BklV_d6d0Q",
"HJeW89ce07",
"HJec9dqeCm",
"r1enbDcgRm",
"SJeNiYcxR7",
"HkxYmJqo2m",
"BJxidDyinm",
"ryeUW3j937"
] | [
"author",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We appreciate the knowledgeable and insightful comments regarding convergence. We acknowledge that there is substantial relevant work that shows improvement of convergence properties with various methods (bias initialization, gate/kernel initialization, auxiliary losses, learning rate scheduling); however, with t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"iclr_2019_r1eO_oCqtQ",
"Hyg1f7TFCQ",
"BklV_d6d0Q",
"HJeW89ce07",
"HkxYmJqo2m",
"ryeUW3j937",
"iclr_2019_r1eO_oCqtQ",
"BJxidDyinm",
"iclr_2019_r1eO_oCqtQ",
"iclr_2019_r1eO_oCqtQ",
"iclr_2019_r1eO_oCqtQ"
] |
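One plausible reading of the time gate described above, as a small sketch (NumPy; the exact gate parameterization and update rule in the paper may differ, and `c_cand` stands in for the ordinary LSTM cell proposal):

```python
import numpy as np

def time_gate(t, mu, sigma):
    """Gaussian opening: near 1 around t = mu, near 0 elsewhere, so each
    unit is only updated inside its learned time window."""
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def gated_update(c_prev, c_cand, t, mu, sigma):
    # Outside the window the old state is carried over almost unchanged,
    # which preserves memory and saves compute on skipped updates.
    k = time_gate(t, mu, sigma)
    return k * c_cand + (1.0 - k) * c_prev

rng = np.random.default_rng(0)
mu, sigma = np.array([10.0, 50.0]), np.array([3.0, 3.0])  # per-unit parameters
c = np.zeros(2)
for t in range(100):
    c_cand = np.tanh(rng.normal(size=2))  # stand-in for the LSTM cell proposal
    c = gated_update(c, c_cand, t, mu, sigma)
print(c)
```

Because `mu` and `sigma` are ordinary differentiable parameters, the opening times can be learned by backpropagation even from poor initial values, which is the convergence claim the abstract makes.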
iclr_2019_r1elIi09K7 | Learning a Neural-network-based Representation for Open Set Recognition | In this paper, we present a neural-network-based representation for addressing the open set recognition problem. In this representation, instances from the same class are close to each other while instances from different classes are further apart, resulting in statistically significant improvements when compared to other approaches on three datasets from two different domains.
| rejected-papers | The paper presents an approach to address the open-set recognition task based on inter- and intra-class distances. All reviewers are concerned about novelty and ask for more experimental comparisons. The authors have added some results, but the reviewers did not think these were enough to make the paper sufficiently convincing. Overall I agree with the reviewers and recommend rejecting the paper. | train | [
"HkxnIemb0Q",
"Byxcf0zWRm",
"rkgcjZmWCX",
"Bylokk7bRQ",
"BJgJzVP9hQ",
"rJlQzEz9hX",
"SJlGGPyBj7",
"SylxCvAFom"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"We thank the reviewer for this comment.\n\nWe have conducted further experiments to quantify the advantage of ii-loss over central loss. Here are the results from experiments on using central loss:\nMNIST dataset: 30-run average AUC=0.9264 \nAndroid dataset: 30-run average AUC=0.7514\nMS dataset: 30-run average A... | [
-1,
-1,
-1,
-1,
4,
4,
5,
-1
] | [
-1,
-1,
-1,
-1,
4,
4,
4,
-1
] | [
"SJlGGPyBj7",
"BJgJzVP9hQ",
"SylxCvAFom",
"rJlQzEz9hX",
"iclr_2019_r1elIi09K7",
"iclr_2019_r1elIi09K7",
"iclr_2019_r1elIi09K7",
"iclr_2019_r1elIi09K7"
] |
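The representation objective described above ("same class close, different classes apart") can be sketched as an intra-spread minus inter-separation loss over an embedding batch. This follows a common reading of the ii-loss mentioned in the reviews of this record; details such as distance scaling are assumptions (NumPy):

```python
import numpy as np

def ii_loss(z, y, num_classes):
    """Intra-class spread (mean squared distance to own class mean) minus
    inter-class separation (smallest squared distance between class means).
    Minimizing it tightens each class and pushes class means apart."""
    means = np.stack([z[y == c].mean(axis=0) for c in range(num_classes)])
    intra = np.mean([np.sum((z[y == c] - means[c]) ** 2, axis=1).mean()
                     for c in range(num_classes)])
    inter = min(np.sum((means[i] - means[j]) ** 2)
                for i in range(num_classes) for j in range(i + 1, num_classes))
    return intra - inter

rng = np.random.default_rng(0)
z, y = rng.normal(size=(60, 8)), rng.integers(0, 3, size=60)
print(ii_loss(z, y, num_classes=3))
```

At test time, an instance far from every class mean can then be rejected as "unknown", which is what makes such a representation useful for the open-set setting.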
iclr_2019_r1erRoCqtX | LSH Microbatches for Stochastic Gradients: Value in Rearrangement | Metric embeddings are immensely useful representations of associations between entities (images, users, search queries, words, and more). Embeddings are learned by optimizing a loss objective of the general form of a sum over example associations. Typically, the optimization uses stochastic gradient updates over minibatches of examples that are arranged independently at random. In this work, we propose the use of {\em structured arrangements} through randomized {\em microbatches} of examples that are more likely to include similar ones. We make a principled argument for the properties of our arrangements that accelerate the training and present efficient algorithms to generate microbatches that respect the marginal distribution of training examples. Finally, we observe experimentally that our structured arrangements accelerate training by 3-20\%. Structured arrangements emerge as a powerful and novel performance knob for SGD that is independent of and complementary to other SGD hyperparameters, and thus is a candidate for wide deployment. | rejected-papers | Following the unanimous vote of the four submitted reviews, this paper is not ready for publication at ICLR. Among other concerns raised, the experiments need significant work. | train | [
"BylvZPuqn7",
"S1eXFlzUAX",
"rkxDxeQpTQ",
"BJgqBX_jaX",
"SkgO0n7TTX",
"rkeplrXopm",
"SJlFHGEiaQ",
"HygR2Mnca7",
"r1liSr-9Tm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"###### Post-Revision ########################\nThank you for revising the paper and addressing the reviewers' concerns. The updated version reads much better and I have updated my score. \n\nUnfortunately, I still think that the experimental analysis is not enough to warrant acceptance. I would encourage the autho... | [
4,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
4,
-1,
-1,
-1,
-1,
-1,
3,
4,
2
] | [
"iclr_2019_r1erRoCqtX",
"iclr_2019_r1erRoCqtX",
"r1liSr-9Tm",
"SJlFHGEiaQ",
"BylvZPuqn7",
"HygR2Mnca7",
"iclr_2019_r1erRoCqtX",
"iclr_2019_r1erRoCqtX",
"iclr_2019_r1erRoCqtX"
] |
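The bucketing step behind the "structured arrangements" above can be sketched with random-hyperplane (SimHash) LSH. Note that, per the abstract, the paper additionally makes the arrangement respect the marginal distribution of training examples, which this toy version omits (NumPy):

```python
import numpy as np
from collections import defaultdict

def lsh_microbatches(X, n_bits=4, seed=0):
    """Hash each example by the sign pattern of a few random projections;
    examples sharing a bucket (microbatch) tend to be similar."""
    rng = np.random.default_rng(seed)
    H = rng.normal(size=(X.shape[1], n_bits))
    codes = (X @ H > 0).astype(int)
    buckets = defaultdict(list)
    for idx, code in enumerate(codes):
        buckets[tuple(code)].append(idx)
    return list(buckets.values())

X = np.random.default_rng(1).normal(size=(256, 32))
micro = lsh_microbatches(X)
order = [i for mb in micro for i in mb]  # example arrangement fed to SGD
print(len(micro), "microbatches, largest sizes:", sorted(len(m) for m in micro)[-4:])
```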
iclr_2019_r1esnoAqt7 | Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning | Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue we introduce Morpho-MNIST, a framework that aims to answer: "to what extent has my model learned to represent specific factors of variation in the data?" We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. | rejected-papers | This paper presents a dataset for measuring disentanglement in learned representations. It consists of MNIST digits, sometimes transformed in various ways, and labeled with a variety of attributes. This dataset is used to measure statistics of various learned models.
Measuring disentanglement is certainly an important problem in our field. This dataset seems to be well designed, and I would recommend its use for papers studying disentanglement. The experiments are well-designed. While the reviewers seem bothered by the fact that it's limited to MNIST, this doesn't strike me as a problem. We continue to learn a lot from MNIST, even today.
But producing a useful dataset isn't by itself a significant enough research contribution for an ICLR paper. I'd recommend publication if (a) it were very different from currently existing datasets, (b) constructing it required overcoming significant technical obstacles, or (c) the dataset led to particularly interesting findings.
Regarding (a), there are already datasets of similar complexity which have ground-truth attributes useful for measuring disentanglement, such as dSprites and 3D Faces. Regarding (b), the construction seems technically straightforward. Regarding (c), the experimental findings are plausible and consistent with past findings (which is a good validation of the dataset) but not obviously interesting in their own right.
So overall, this seems like a useful dataset, but I cannot recommend publication at ICLR.
| test | [
"SygMazOVR7",
"S1eAb9jpTX",
"rylzwEUXa7",
"SJghAtFWaX",
"SkgGgAJZTm",
"HkehOTkW6X",
"Hygjmwnqhm",
"S1etazq92Q"
] | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Dear Reviewers,\n\nWe have uploaded a revision of our paper, taking into account your feedback and attempting to clarify any misunderstandings, as outlined in our responses below.\n\nPlease consider reevaluating the new version, and thank you once again.",
"Dear Reviewers,\n\nWe appreciate your time on assessing... | [
-1,
-1,
-1,
3,
-1,
-1,
5,
4
] | [
-1,
-1,
-1,
4,
-1,
-1,
3,
3
] | [
"S1eAb9jpTX",
"iclr_2019_r1esnoAqt7",
"SJghAtFWaX",
"iclr_2019_r1esnoAqt7",
"S1etazq92Q",
"Hygjmwnqhm",
"iclr_2019_r1esnoAqt7",
"iclr_2019_r1esnoAqt7"
] |
iclr_2019_r1espiA9YQ | Towards More Theoretically-Grounded Particle Optimization Sampling for Deep Learning | Many deep-learning-based methods, such as Bayesian deep learning (DL) and deep reinforcement learning (RL), rely heavily on the ability of a model to explore efficiently via Bayesian sampling. Particle-optimization sampling (POS) is a recently developed technique to generate high-quality samples from a target distribution by iteratively updating a set of interactive particles, with Stein variational gradient descent (SVGD) as a representative algorithm. Despite its significant empirical success, the {\em non-asymptotic} convergence behavior of SVGD remains unknown. In this paper, we generalize POS to a stochastic setting by injecting random noise into the particle updates, called stochastic particle-optimization sampling (SPOS). Notably, for the first time, we develop {\em non-asymptotic convergence theory} for the SPOS framework, characterizing convergence of a sample approximation w.r.t.\! the number of particles and iterations under both convex- and nonconvex-energy-function settings. Interestingly, we provide theoretical understanding of a pitfall of SVGD that can be avoided in the proposed SPOS framework, {\it i.e.}, particles tend to collapse to a local mode in SVGD under some particular conditions. Our theory is based on the analysis of nonlinear stochastic differential equations, which serves as an extension of and a complementary development to the asymptotic convergence theory for SVGD such as (Liu, 2017). With such theoretical guarantees, SPOS can be safely and effectively applied to both Bayesian DL and deep RL tasks. Extensive results demonstrate the effectiveness of our proposed framework. | rejected-papers | This paper proposes a combination of SVGD and SLGD and analyzes its non-asymptotic properties based on gradient flow. This is an interesting direction to explore. Unfortunately, two major concerns have been raised regarding this paper: 1) the reviewers identified multiple technical flaws; the authors provided a rebuttal and addressed some of the problems, but the reviewers think the paper requires significantly more improvement and clarification to fully address the issues; 2) the combination of SVGD and SLGD, despite being very interesting, is not clearly motivated; by combining SVGD and SLGD, one gets a convergence rate for free from the SLGD part, but not much insight is shed on the SVGD part (meaning that if the contribution of SLGD is zero, then the bound becomes vacuous). This could be misleading given that one of the claimed contributions is a non-asymptotic theory of "SVGD-style algorithms" (rather than SLGD-style ones). We encourage the authors to address the technical questions and clarify the contribution and motivation of the paper in a revision for future submissions.
| train | [
"BkxGR-FtpX",
"rygw9m34TX",
"HyxLEekQ6Q",
"SygunrymTQ",
"HkevVLJ7aX",
"S1ll281mTm",
"r1eg4Z1X6X",
"BygxVSkmTm",
"rJxZJF1m6X",
"B1gr88ge6Q",
"HyegQHwhn7",
"HyxzqCPFhQ",
"SkeZkIz537",
"ryeRvH30h7"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Thank you for your response. We apologize for the previous long rebuttal. Nonetheless, we didn’t mean to write an “unprofessional” rebuttal, but hope to provide all the details and to solve possible doubts one might encounter when reading the rebuttal. We respect your decision, but still want to make the following... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
3,
4,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3,
-1
] | [
"rygw9m34TX",
"rJxZJF1m6X",
"HyxzqCPFhQ",
"HyxzqCPFhQ",
"HyxzqCPFhQ",
"HyxzqCPFhQ",
"HyxzqCPFhQ",
"HyxzqCPFhQ",
"HyxzqCPFhQ",
"SkeZkIz537",
"iclr_2019_r1espiA9YQ",
"iclr_2019_r1espiA9YQ",
"iclr_2019_r1espiA9YQ",
"HyegQHwhn7"
] |
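The core update described above (SVGD transport plus injected random noise) fits in a few lines for a 1-D standard-normal target. In this sketch the kernel bandwidth, step size, and the noise scaling sqrt(2*eps/beta) are illustrative choices rather than the paper's exact schedule (NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

def spos_step(x, eps=0.05, h=0.5, beta=100.0):
    """SVGD term (kernelized gradient pull plus particle repulsion) with an
    added Gaussian noise term; the noise is what distinguishes this
    SPOS-style update from deterministic SVGD."""
    diff = x[:, None] - x[None, :]            # diff[j, i] = x_j - x_i
    K = np.exp(-diff**2 / h)                  # RBF kernel matrix
    grad_logp = -x                            # d/dx log N(x; 0, 1)
    phi = (K * grad_logp[:, None] - (2.0 / h) * diff * K).mean(axis=0)
    return x + eps * phi + np.sqrt(2.0 * eps / beta) * rng.normal(size=x.shape)

x = rng.normal(loc=5.0, scale=0.01, size=50)  # particles start nearly collapsed
for _ in range(3000):
    x = spos_step(x)
print(x.mean(), x.std())                      # drifts toward the N(0, 1) target
```

Starting the particles collapsed at one point illustrates the pitfall the abstract mentions: with the noise term removed, a degenerate configuration can stay stuck near a mode, while the injected noise breaks the symmetry.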
iclr_2019_r1exVhActQ | DEEP-TRIM: REVISITING L1 REGULARIZATION FOR CONNECTION PRUNING OF DEEP NETWORK | State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones. The compression of DNN models has therefore become an active area of research recently, with \emph{connection pruning} emerging as one of the most successful strategies. A very natural approach is to prune connections of DNNs via ℓ1 regularization, but recent empirical investigations have suggested that this does not work as well in the context of DNN compression. In this work, we revisit this simple strategy and analyze it rigorously, to show that: (a) any \emph{stationary point} of an ℓ1-regularized layerwise-pruning objective has its number of non-zero elements bounded by the number of penalized prediction logits, regardless of the strength of the regularization; (b) successful pruning highly relies on an accurate optimization solver, and there is a trade-off between compression speed and distortion of prediction accuracy, controlled by the strength of regularization. Our theoretical results thus suggest that ℓ1 pruning could be successful provided we use an accurate optimization solver. We corroborate this in our experiments, where we show that simple ℓ1 regularization with an Adamax-L1(cumulative) solver gives pruning ratios competitive with the state-of-the-art. | rejected-papers | This paper studies the properties of ℓ1 regularization for deep neural networks. It contains some interesting results, e.g., that any stationary point of an ℓ1-regularized layer has a bounded number of non-zero elements. On the other hand, the majority of reviewers have concerns that the experimental support is weak and suggest rejection. Therefore, a final rejection is proposed. | val | [
"BJlZt9ptRX",
"HkxH-cpFCQ",
"B1lBnKatR7",
"SkeePQCJTQ",
"Byevdiz3nm",
"BylbaEivn7"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the feedback and comments.\n\n(1) \"whether the theory for (5) is rigorously justified by the experiments\":\n\nWhile our theorem is designed for the layerwise objective (5), in practice for simplicity we find that directly optimize (8) yields promising results is more simple. We will sh... | [
-1,
-1,
-1,
4,
6,
4
] | [
-1,
-1,
-1,
4,
3,
3
] | [
"BylbaEivn7",
"Byevdiz3nm",
"SkeePQCJTQ",
"iclr_2019_r1exVhActQ",
"iclr_2019_r1exVhActQ",
"iclr_2019_r1exVhActQ"
] |
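The abstract's point that "successful pruning highly relies on an accurate optimization solver" is easiest to see on the proximal (soft-thresholding) step for the ℓ1 term. The ISTA step below is a simpler stand-in for the Adamax-L1(cumulative) solver the paper actually uses (NumPy):

```python
import numpy as np

def ista_step(w, grad, lr, lam):
    """Gradient step on the smooth loss followed by the exact proximal map
    of lam * ||w||_1: soft-thresholding drives weights to exact zeros,
    unlike naive subgradient descent, which leaves them hovering near zero."""
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

w = np.array([0.8, -0.05, 0.3, -0.9])
g = np.array([0.1, 0.02, -0.05, 0.0])
print(ista_step(w, g, lr=0.1, lam=1.0))  # the small weight is pruned to exactly 0
```

Exact zeros at a stationary point are what make the sparsity bound in result (a) of the abstract observable in practice.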
iclr_2019_r1ez_sRcFQ | Pixel Redrawn For A Robust Adversarial Defense | Recently, an adversarial example becomes a serious problem to be aware of because it can fool trained neural networks easily.
To prevent the issue, many researchers have proposed several defense techniques such as adversarial training, input transformation, stochastic activation pruning, etc.
In this paper, we propose a novel defense technique, Pixel Redrawn (PR) method, which redraws every pixel of training images to convert them into distorted images.
The motivation for our PR method is from the observation that the adversarial attacks have redrawn some pixels of the original image with the known parameters of the trained neural network.
Mimicking these attacks, our PR method redraws the image without any knowledge of the trained neural network.
This method can be similar to the adversarial training method but our PR method can be used to prevent future attacks.
Experimental results on several benchmark datasets indicate our PR method not only relieves the over-fitting issue when we train neural networks with a large number of epochs, but it also boosts the robustness of the neural network. | rejected-papers | Based on the majority of reviewers with reject (ratings: 4,6,3), the current version of paper is proposed as reject. | val | [
"SJx69yJYAX",
"BklM9xJtCX",
"rJgt7eytRm",
"SyxSjsbc37",
"SkeTIj1c27",
"H1xTSS2t2m",
"Hkgsy16OhX",
"H1gf0dh_nX",
"SygrWEvH3m",
"HJeK6XvBnQ",
"rkgGbR3Nnm",
"BkxLLnoE27",
"r1gV0TM4nX",
"BJg0P3Z4hm",
"BJlSCNx72X",
"rJxM4KJM2X",
"BklovTA-nQ",
"H1g5N3R-2m",
"SyedTiCWnm",
"BJxaSTclh7"... | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"public",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"public",
"public",
"public",... | [
"Thank you for your review and comments.\n\n1. We have proof-read and added preliminaries (Section 2.1) as well as the clear explanation of our method in the figure format (Figure 1) in this latest version. We believe that the readers could understand our method easily.\n\n2. a) Following your comment, we have adde... | [
-1,
-1,
-1,
4,
6,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1xTSS2t2m",
"SyxSjsbc37",
"SkeTIj1c27",
"iclr_2019_r1ez_sRcFQ",
"iclr_2019_r1ez_sRcFQ",
"iclr_2019_r1ez_sRcFQ",
"H1gf0dh_nX",
"iclr_2019_r1ez_sRcFQ",
"rkgGbR3Nnm",
"BkxLLnoE27",
"iclr_2019_r1ez_sRcFQ",
"r1gV0TM4nX",
"BJg0P3Z4hm",
"BJlSCNx72X",
"rJxM4KJM2X",
"SyedTiCWnm",
"S1lBAjqe2... |
iclr_2019_r1f78iAcFm | GRAPH TRANSFORMATION POLICY NETWORK FOR CHEMICAL REACTION PREDICTION | We address a fundamental problem in chemistry known as chemical reaction product prediction. Our main insight is that the input reactant and reagent molecules can be jointly represented as a graph, and the process of generating product molecules from reactant molecules can be formulated as a sequence of graph transformations. To this end, we propose the Graph Transformation Policy Network (GTPN) - a novel generic method that combines the strengths of graph neural networks and reinforcement learning to learn the reactions directly from data with minimal chemical knowledge. Compared to previous methods, GTPN has some appealing properties, such as end-to-end learning and making no assumptions about the length or the order of graph transformations. In order to guide model search through the complex discrete space of sets of bond changes effectively, we extend the standard policy gradient loss by adding useful constraints. Evaluation results show that GTPN improves the top-1 accuracy over the current state-of-the-art method by about 3% on the large USPTO dataset. Our model's performance and prediction errors are also analyzed carefully in the paper. | rejected-papers | The reviewers and authors participated in modest discussion, with the authors providing direct responses to reviewer comments. However, this did not appreciably change the overall ratings of the paper (one reviewer raised their rating, while another grew more concerned), and in aggregate the reviewers do not recommend that the paper meets the bar for acceptance. | train | [
"BJgujTAO14",
"BylmJnLcnX",
"Hkx3QH_uyE",
"SylgsNZO1E",
"Hke2ZC4B14",
"SylS8--767",
"S1gJlx-maQ",
"rklGIpZXTQ",
"B1gyVTZXaX",
"Hyg2KLZ7pQ",
"r1xjEd-Qpm",
"SygQGNWQTQ",
"SJegDj_53Q",
"rJldqyBDnX"
] | [
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank Reviewer 2 for your great consideration.",
"Update:\n\nScore increased.\n\n___________________________________\n\nOriginal review:\n\nThe paper presents an approach to predict the products of chemical reactions, given the reactants and reagents. It works by stepwise predicting the atom pairs that change... | [
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5
] | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Hkx3QH_uyE",
"iclr_2019_r1f78iAcFm",
"SygQGNWQTQ",
"Hke2ZC4B14",
"B1gyVTZXaX",
"SJegDj_53Q",
"SJegDj_53Q",
"rJldqyBDnX",
"rJldqyBDnX",
"BylmJnLcnX",
"BylmJnLcnX",
"BylmJnLcnX",
"iclr_2019_r1f78iAcFm",
"iclr_2019_r1f78iAcFm"
] |
iclr_2019_r1fE3sAcYQ | Overcoming Multi-model Forgetting | We identify a phenomenon, which we refer to as *multi-model forgetting*, that occurs when sequentially training multiple deep networks with partially-shared parameters; the performance of previously-trained models degrades as one optimizes a subsequent one, due to the overwriting of shared parameters. To overcome this, we introduce a statistically-justified weight plasticity loss that regularizes the learning of a model's shared parameters according to their importance for the previous models, and demonstrate its effectiveness when training two models sequentially and for neural architecture search. Adding weight plasticity in neural architecture search preserves the best models to the end of the search and yields improved results in both natural language processing and computer vision tasks. | rejected-papers |
pros:
- nicely written paper
- clear and precise with a derivation of the loss function
cons:
novelty/impact:
I think all the reviewers acknowledge that you are doing something different in the neural brainwashing (NB) problem than is done in the typical catastrophic forgetting (CF) setting. You have one dataset and a set of models with shared weights; the CF setting has one model and trains on different datasets/tasks. But whereas solving the CF problem would solve a major problem of continual machine learning, the value of solving the NB problem is harder to assess from this paper... The main application seems to be improving neural architecture search. At the meta-level, the techniques used to derive the main loss are already well known and the result is similar to EWC, so they don't add a lot from the analysis perspective. I think it would be very helpful to revise the paper to show a range of applications that could benefit from solving the NB problem and to show that the technique you propose applies more broadly. | test | [
"rJe_jDnHC7",
"HJe5Bv3BAm",
"rkeiZwhr0X",
"ByeOqI3r07",
"rkg-raP03Q",
"H1e4uaHj2X",
"rygpqz4uhQ",
"r1eaAIv6F7",
"H1xJ3lgTF7"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We thank the reviewers for their valuable comments and for taking the time to review our paper. We have uploaded a revised version that addresses the reviewers’ main concerns. In particular\n\n- We have renamed brainwashing “multi-model forgetting” to account for the fact that the literature has become more libera... | [
-1,
-1,
-1,
-1,
6,
5,
6,
-1,
-1
] | [
-1,
-1,
-1,
-1,
2,
5,
4,
-1,
-1
] | [
"iclr_2019_r1fE3sAcYQ",
"rygpqz4uhQ",
"H1e4uaHj2X",
"rkg-raP03Q",
"iclr_2019_r1fE3sAcYQ",
"iclr_2019_r1fE3sAcYQ",
"iclr_2019_r1fE3sAcYQ",
"H1xJ3lgTF7",
"iclr_2019_r1fE3sAcYQ"
] |
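The "weight plasticity loss" discussed above is, in shape, an importance-weighted quadratic penalty on shared parameters, much like EWC. A minimal sketch (NumPy; the paper derives its own statistical justification and importance weights, so the Fisher-diagonal choice here is an assumption):

```python
import numpy as np

def plasticity_penalty(w_shared, w_anchor, importance, lam=1.0):
    """Penalize moving shared weights away from the values a previously
    trained model relies on, in proportion to their importance to it."""
    return lam * np.sum(importance * (w_shared - w_anchor) ** 2)

w      = np.array([0.5, -1.2, 0.3])   # shared weights while training model B
w_star = np.array([0.4, -1.0, 0.0])   # shared weights after training model A
fisher = np.array([10.0, 0.1, 0.0])   # per-parameter importance for model A
print(plasticity_penalty(w, w_star, fisher))  # large only where model A cares
```

Adding this term to model B's loss lets its unimportant shared weights move freely while protecting the ones that carry model A's performance, which is the multi-model forgetting remedy the abstract describes.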
iclr_2019_r1fO8oC9Y7 | Multi-Task Learning for Semantic Parsing with Cross-Domain Sketch | Semantic parsing, which maps a natural language sentence into a formal machine-readable representation of its meaning, is highly constrained by limited annotated training data. Inspired by the coarse-to-fine idea, we propose a general-to-detailed neural network (GDNN) that incorporates a cross-domain sketch (CDS) among utterances and their logical forms. For utterances in different domains, the General Network extracts the CDS using an encoder-decoder model in a multi-task learning setup. Then, for utterances in a specific domain, the Detailed Network generates the detailed target parts using a sequence-to-sequence architecture with advanced attention to both the utterance and the generated CDS. Our experiments show that, compared to direct multi-task learning, CDS improves performance on the semantic parsing task, which converts users' requests into a meaning representation language (MRL). We also use experiments to illustrate that CDS works by adding constraints to the target decoding process, which further proves the effectiveness and rationality of CDS. | rejected-papers | Interesting approach aiming to leverage cross-domain schemas and generic semantic parsing (based on a meaning representation language, MRL) for language understanding. Experiments have been performed on the recently released SNIPS corpus and comparisons have been made with multiple recent multi-task learning approaches. Unfortunately, the proposed approach falls short in comparison to the slot-gated attention work by Goo et al.
The motivation and description of the cross-domain schemas can be improved in the paper, and for replication of the experiments it would be useful to include how the annotations are extended for this purpose.
Experimental results could be extended to the other available corpora mentioned in the paper (ATIS and GEO).
| train | [
"H1xfPpkdR7",
"H1x0ST1_AX",
"r1eYVpJ_C7",
"SkeFlTk_A7",
"HJeYN9r63Q",
"BklT1yJs27",
"rke8DKIKhm"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reviews!\n\n1, CDS definition and value\nIn the revisioned paper, we add a subsection 3.1 to describe CDS definition in detail. We also list the attributes that can form CDS since actions are very few due to the limitation of the dataset. Besides, we do some statistics on the dataset and reveal ... | [
-1,
-1,
-1,
-1,
3,
4,
5
] | [
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"rke8DKIKhm",
"BklT1yJs27",
"HJeYN9r63Q",
"iclr_2019_r1fO8oC9Y7",
"iclr_2019_r1fO8oC9Y7",
"iclr_2019_r1fO8oC9Y7",
"iclr_2019_r1fO8oC9Y7"
] |
iclr_2019_r1fWmnR5tm | Learning to Search Efficient DenseNet with Layer-wise Pruning | Deep neural networks have achieved outstanding performance in many real-world applications at the expense of huge computational resources. DenseNet, one of the recently proposed neural network architectures, has achieved state-of-the-art performance in many visual tasks. However, it has great redundancy due to the dense connections of its internal structure, which leads to high computational costs when training such dense networks. To address this issue, we design a reinforcement learning framework to search for efficient DenseNet architectures with layer-wise pruning (LWP) for different tasks, while retaining the original advantages of DenseNet, such as feature reuse, short paths, etc. In this framework, an agent evaluates the importance of each connection between any two block layers, and prunes the redundant connections. In addition, a novel reward-shaping trick is introduced to make DenseNet reach a better trade-off between accuracy and floating point operations (FLOPs). Our experiments show that DenseNet with LWP is more compact and efficient than existing alternatives. | rejected-papers | The paper proposes to apply Neural Architecture Search for pruning DenseNet.
The reviewers and AC note the potential weaknesses of the paper in various aspects, and decided that the work needs further development before publication. | train | [
"rJgkzj5Rn7",
"SJe-bk0on7",
"rkllOklK3Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes to apply Neural Architecture Search (NAS) for connectivity pruning to improve the parameter efficiency of DenseNet. The idea is straightforward and the paper is well organized and easy to follow.\n\nMy major concern is the limited contribution. Applying deep reinforcement learning (DRL) and fol... | [
4,
4,
4
] | [
5,
4,
4
] | [
"iclr_2019_r1fWmnR5tm",
"iclr_2019_r1fWmnR5tm",
"iclr_2019_r1fWmnR5tm"
] |
iclr_2019_r1fiFs09YX | Sample-efficient policy learning in multi-agent Reinforcement Learning via meta-learning | To gain high rewards in multi-agent scenes, it is sometimes necessary to understand other agents and make corresponding optimal decisions. We can solve these tasks by first building models for other agents and then finding the optimal policy with these models. To get an accurate model, many observations are needed, and this can be sample-inefficient. What's more, the learned model and policy can overfit to the current agents and cannot generalize if the other agents are replaced by new agents. In many practical situations, each agent we face can be considered as a sample from a population with a fixed but unknown distribution. Thus we can treat the task against some specific agents as a task sampled from a task distribution. We apply a meta-learning method to build models and learn policies, so that when new agents come, we can adapt to them efficiently. Experiments on grid games show that our method can quickly obtain high rewards. | rejected-papers | The paper extends MAML so that a learned behavior can be quickly (sample-efficiently) adapted to a new agent (ally or opponent). The approach is tested on two simple tasks in 2D gridworld environments: chasing and path blocking.
The experiments are very limited; they do not suffice to support the claims about the method. The authors did not submit a rebuttal, and all the reviewers agree that the paper is not good enough for ICLR. | test | [
"SJxGZz8vTX",
"SkeMpWLvpm",
"BygUC9EDnX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper focuses on fast adaptation to new behaviour of the other agents of the environment, be it opponents or allies. To achieve this, a method based on MAML is proposed, with two main components:\n1) Learn a model of some characteristics of the opponent, such as \"the final goal, next action, or any other cha... | [
4,
4,
4
] | [
3,
4,
4
] | [
"iclr_2019_r1fiFs09YX",
"iclr_2019_r1fiFs09YX",
"iclr_2019_r1fiFs09YX"
] |
iclr_2019_r1g1LoAcFm | Using Ontologies To Improve Performance In Massively Multi-label Prediction | Massively multi-label prediction/classification problems arise in environments like health-care or biology where it is useful to make very precise predictions. One challenge with massively multi-label problems is that there is often a long-tailed frequency distribution for the labels, resulting in few positive examples for the rare labels. We propose a solution to this problem by modifying the output layer of a neural network to create a Bayesian network of sigmoids which takes advantage of ontology relationships between the labels to help share information between the rare and the more common labels. We apply this method to the two massively multi-label tasks of disease prediction (ICD-9 codes) and protein function prediction (Gene Ontology terms) and obtain significant improvements in per-label AUROC and average precision. | rejected-papers | The paper proposes a nice approach to massively multi-label problems with rare labels, which may have only a limited number of positive examples; the approach uses Bayes nets to exploit the relationships among the labels in the output layer of a neural net. The paper is clearly written and the approach seems promising; however, the reviewers would like to see even more convincing empirical results.
| train | [
"B1eO6ROX0Q",
"r1l-j0OQ0X",
"HkxetAO7Cm",
"ByeXzRdmCX",
"rJg_TzB6h7",
"rkxtA_MnhQ",
"HJeCA06d3X"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for the feedback which we have incorporated into the revised version of the paper. The biggest changes are:\n\n 1. Added confidence intervals (derived from bootstrapping the test set) to result tables to better show the significance of differences.\n 2. Added a baseline that consisted of a... | [
-1,
-1,
-1,
-1,
6,
5,
4
] | [
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"iclr_2019_r1g1LoAcFm",
"HJeCA06d3X",
"rkxtA_MnhQ",
"rJg_TzB6h7",
"iclr_2019_r1g1LoAcFm",
"iclr_2019_r1g1LoAcFm",
"iclr_2019_r1g1LoAcFm"
] |
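A "Bayesian network of sigmoids" over an ontology, as described above, can be sketched by letting each node's sigmoid model the conditional probability given its parent, so a label's marginal is a product along its ancestry. The toy chain, label names, and logits below are hypothetical, and the real model handles DAG-shaped ontologies and trains the logits jointly (plain Python):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def marginal_prob(label, logits, parent):
    """p(label = 1) as a product of conditional sigmoids along the ontology
    path; rare leaves borrow statistical strength from frequent ancestors."""
    p, node = 1.0, label
    while node is not None:
        p *= sigmoid(logits[node])
        node = parent.get(node)
    return p

# Hypothetical three-level chain: disease -> infection -> sepsis.
parent = {"sepsis": "infection", "infection": "disease", "disease": None}
logits = {"disease": 2.0, "infection": 0.5, "sepsis": -1.0}
print(marginal_prob("sepsis", logits, parent))
```

Because the ancestor sigmoids are fit on the many examples of the common labels, a rare leaf only has to learn its conditional term, which is the information-sharing mechanism the abstract credits for the AUROC gains.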
iclr_2019_r1g5b2RcKm | MLPrune: Multi-Layer Pruning for Automated Neural Network Compression | Model compression can significantly reduce the computation and memory footprint of large neural networks. To achieve a good trade-off between model size and accuracy, popular compression techniques usually rely on hand-crafted heuristics and require manually setting the compression ratio of each layer. This process is typically costly and suboptimal. In this paper, we propose a Multi-Layer Pruning method (MLPrune), which is theoretically sound, and can automatically decide appropriate compression ratios for all layers. Towards this goal, we use an efficient approximation of the Hessian as our pruning criterion, based on a Kronecker-factored Approximate Curvature method. We demonstrate the effectiveness of our method on several datasets and architectures, outperforming the previous state-of-the-art by a large margin. Our experiments show that we can compress AlexNet and VGG16 by 25x without loss in accuracy on ImageNet. Furthermore, our method has far fewer hyper-parameters and requires no expert knowledge. | rejected-papers | The authors propose a technique for pruning networks by using second-order information through the Hessian. The Hessian is approximated using the Fisher Information Matrix, which is itself approximated using KFAC. The paper is clearly written and easy to follow, and is evaluated on a number of systems where the authors find that the proposed method achieves good compression ratios without requiring extensive hyperparameter tuning.
The reviewers raised concerns about 1) the novelty of the work (which builds on the KFAC work of Martens and Grosse), 2) whether zeroing out individual connections as opposed to neurons will have practical runtime benefits, 3) the lack of comparisons against baselines on overall training time/complexity, 4) comparisons to work which directly prune as part of training (instead of the train-prune-finetune scheme adopted by the authors).
In the view of the AC, 4) would be an interesting comparison but was not critical to the decision. Ultimately, the decision came down to the concern of lack of novelty and whether the proposed techniques would have an impact on runtime in practice.
| train | [
"H1xgw8mTRX",
"B1eFJqN9Am",
"SyehOO4qCX",
"BkgC6IEc0X",
"BygczIV50m",
"SJgffqb2h7",
"SJec_a682X",
"H1eb3snHn7"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for clarifying the complexity. Please also include it in the paper formally.",
"We would like to thank all reviewers for reviewing our paper and give insightful comments. We are open to further comments.\n\nPlease find more detailed responses as below.",
"Thanks for your comments and references. Hope... | [
-1,
-1,
-1,
-1,
-1,
5,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
"BkgC6IEc0X",
"iclr_2019_r1g5b2RcKm",
"H1eb3snHn7",
"SJec_a682X",
"SJgffqb2h7",
"iclr_2019_r1g5b2RcKm",
"iclr_2019_r1g5b2RcKm",
"iclr_2019_r1g5b2RcKm"
] |
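One OBD-style reading of the Kronecker-factored pruning criterion above, as a sketch (NumPy; the paper's exact saliency may differ in details, so treat this only as an illustration of how a K-FAC Hessian diagonal yields per-weight scores):

```python
import numpy as np

rng = np.random.default_rng(0)

def kfac_saliency(W, A, G):
    """OBD-style score 0.5 * w_ij^2 * H_(ij,ij), with the layer Hessian
    approximated as a Kronecker product of A and G, whose diagonal entry
    for weight (i, j) is simply A_ii * G_jj."""
    return 0.5 * W**2 * np.outer(np.diag(A), np.diag(G))

acts  = rng.normal(size=(1000, 4))      # layer inputs over a batch
grads = rng.normal(size=(1000, 3))      # back-propagated output gradients
A = acts.T @ acts / len(acts)           # input-side Kronecker factor
G = grads.T @ grads / len(grads)        # gradient-side Kronecker factor
W = rng.normal(size=(4, 3))

s = kfac_saliency(W, A, G)
mask = s > np.quantile(s, 0.5)          # keep the most salient half globally
print(int((~mask).sum()), "of", W.size, "weights pruned")
```

Because such scores from every layer live on one common loss scale, a single global threshold implicitly sets each layer's compression ratio, which is the "automatic" behavior the abstract claims.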
iclr_2019_r1g7y2RqYX | Label Propagation Networks | Graph networks have recently attracted considerable interest, and in particular in the context of semi-supervised learning. These methods typically work by generating node representations that are propagated throughout a given weighted graph.
Here we argue that for semi-supervised learning, it is more natural to consider propagating labels in the graph instead. Towards this end, we propose a differentiable neural version of the classic Label Propagation (LP) algorithm. This formulation can be used for learning edge weights, unlike other methods where weights are set heuristically. Starting from a layer implementing a single iteration of LP, we proceed by adding several important non-linear steps that significantly enhance the label-propagating mechanism.
Experiments in two distinct settings demonstrate the utility of our approach.
| rejected-papers | This paper is on graph-based semi-supervised learning, where the goal is to jointly learn the node labeling function together with the edge weights. A natural way is to formulate this as a bi-level optimization problem. However, the authors claim that this approach introduces two main difficulties: (a) the "upper" objective function depends on the solution to the "lower" optimization problem (Eq. (2)), and (b) optimization is challenging (Eq. (3)). The AC disagrees. Firstly, there is a close connection between the constrained version and the regression version of the problem (e.g., Belkin, Matveeva and Niyogi) -- the former is in fact a special case of the latter for a certain choice of regularization parameter. The latter reduces to a linear system, and the outer problem can be optimized by standard gradient descent using the implicit function theorem trick common in bilevel optimization (a sketch of this bilevel formulation follows this row). Reviewers have also raised concerns about clarity, experimental support, and comparisons with related work. | train | [
"BJeOQ0NqCm",
"S1l6GGSqR7",
"rkg5ufBcAQ",
"HJxISgAn2m",
"SkgLUiXWkV",
"rJxDdu4A37",
"SklckBGU3X",
"H1xSMZE_9Q",
"Byguq2Sw5Q",
"B1gZxggrcQ",
"HJgmkm6eqQ",
"SJgzy2ogcm"
] | [
"author",
"author",
"author",
"official_reviewer",
"public",
"official_reviewer",
"official_reviewer",
"public",
"public",
"author",
"public",
"public"
] | [
"Thank you for your comments, please see our responses below.\n\n“Convergence issues; the algorithm may go wrong; bifurcation rate can be too slow/fast:”\nIndeed, a limited number of layers might give an exact solution to the quadratic criterion in Eq. (2). However, our results imply that using few iterations with ... | [
-1,
-1,
-1,
5,
-1,
5,
6,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
-1,
2,
4,
-1,
-1,
-1,
-1,
-1
] | [
"SklckBGU3X",
"HJxISgAn2m",
"rJxDdu4A37",
"iclr_2019_r1g7y2RqYX",
"B1gZxggrcQ",
"iclr_2019_r1g7y2RqYX",
"iclr_2019_r1g7y2RqYX",
"B1gZxggrcQ",
"iclr_2019_r1g7y2RqYX",
"SJgzy2ogcm",
"SJgzy2ogcm",
"iclr_2019_r1g7y2RqYX"
] |
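For reference, the bilevel formulation alluded to above can be written out. The notation here ($f$ the node labeling, $W$ the learned edge weights, $L(W)$ the graph Laplacian, $S$ the diagonal selector of labeled nodes, $\lambda$ a regularization weight) is illustrative rather than taken from the paper:

$$ \min_{W} \; \ell_{\mathrm{val}}\big(f^{*}(W)\big) \quad \text{s.t.} \quad f^{*}(W) = \arg\min_{f} \; \|S(f - y)\|^{2} + \lambda \, f^{\top} L(W) \, f . $$

The inner problem is quadratic, so $f^{*}(W)$ solves the linear system $(S + \lambda L(W)) f = S y$, and the gradient of the outer objective with respect to $W$ follows by implicit differentiation of that system, which is why standard gradient descent applies.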
iclr_2019_r1gGpjActQ | Hint-based Training for Non-Autoregressive Translation | Machine translation is an important real-world application, and neural network-based AutoRegressive Translation (ART) models have achieved very promising accuracy. Due to the unparallelizable nature of the autoregressive factorization, ART models have to generate tokens one by one during decoding and thus suffer from high inference latency. Recently, Non-AutoRegressive Translation (NART) models were proposed to reduce the inference time. However, they could only achieve inferior accuracy compared with ART models. To improve the accuracy of NART models, in this paper, we propose to leverage the hints from a well-trained ART model to train the NART model. We define two hints for the machine translation task: hints from hidden states and hints from word alignments, and use such hints to regularize the optimization of NART models. Experimental results show that the NART model trained with hints could achieve significantly better translation performance than previous NART models on several tasks. In particular, for the WMT14 En-De and De-En task, we obtain BLEU scores of 25.20 and 29.52 respectively, which largely outperforms the previous non-autoregressive baselines. It is even comparable to a strong LSTM-based ART model (24.60 on WMT14 En-De), but one order of magnitude faster in inference. | rejected-papers |
+ sufficiently strong results
+ a fast / parallelizable model
- Novelty with respect to previous work is not as great as it may appear (see AnonReviewer1 and AnonReviewer2's comments)
- The same reviewers raised concerns about the discussion of related work (e.g., positioning with respect to work on knowledge distillation). I agree that the very related work of Roy et al. should be mentioned; even though it has not been published, it has been on arXiv since May.
- Ablation studies are only on smaller IWSLT datasets, confirming that the hints from an auto-regressive model are beneficial (whereas the main results are on WMT)
- I agree with R1 that important modeling details (e.g., describing how the latent structure is generated) should not be described only in the appendix, especially given the non-standard modeling choices. R1 is concerned that a model which does not have any autoregressive components (i.e., not even for the latent state) may have trouble representing multiple modes. I do find it surprising that the model with a non-autoregressive latent state works well; however, I do not find this sufficient ground for rejection on its own. Emphasizing this point and discussing its implications in the paper makes a lot of sense, and should have been done; as of now, it is downplayed. R1 is also concerned that such a model may be gaming BLEU: since BLEU is less sensitive to long-distance dependencies, those dependencies may suffer in a model without any autoregressive components. Again, given the standards in the field, I do not think it is fair to require human evaluation, but I agree that including it would strengthen the paper and the arguments.
Overall, I do believe that the paper is sufficiently interesting and should get published but I also believe that it needs further revisions / further experiments.
| train | [
"HJg04FBCAQ",
"S1lIxAuFyV",
"rke_YdcHkN",
"ByeImig4AX",
"rkgOEYeNCX",
"SJg6ougVAX",
"Hkx6vOg40m",
"SkxNVPkkTQ",
"rklofD7c3X",
"SyxK4plKnX"
] | [
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Dear reviewer:\n\nTo address the concern of mode breaking, we make some discussion here. \n\n1. To some extent, a machine translation system doesn’t require to model **multiple modes** and is not evaluated by whether the model can generate **multiple modes**. \n\nMachine translation is a real application which aim... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"SyxK4plKnX",
"rke_YdcHkN",
"HJg04FBCAQ",
"iclr_2019_r1gGpjActQ",
"SyxK4plKnX",
"rklofD7c3X",
"SkxNVPkkTQ",
"iclr_2019_r1gGpjActQ",
"iclr_2019_r1gGpjActQ",
"iclr_2019_r1gGpjActQ"
] |
iclr_2019_r1gKNs0qYX | Filter Training and Maximum Response: Classification via Discerning | This report introduces a training and recognition scheme in which classification is realized via class-wise discerning. Trained with datasets whose labels are randomly shuffled except for one class of interest, a neural network learns class-wise parameter values and remolds itself from a feature sorter into feature filters, each of which discerns objects belonging to one of the classes only. Classification of an input can be inferred from the maximum response of the filters. A multiple check with multiple versions of the filters can diminish fluctuation and yield better performance. This scheme of discerning, maximum response and multiple check is a generally viable method for improving the performance of feedforward networks, and the filter training itself is a promising feature abstraction procedure. In contrast to direct sorting, the scheme mimics a classification process mediated by a series of one-component picks. | rejected-papers | This work examines how to deal with multiple classes. Unfortunately, as reviewers note, it fails to adequately ground its approach in previous work and to show how the architecture relates to the considerable research that has examined the question beforehand. | train | [
"Skl-xJfY2Q",
"SJxaJUSBnX",
"HyxNv4HmiX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Unfortunately I don't understand what this paper is about. Please assign to another reviewer.",
"The paper discusses a method to increase accuracy of deep-nets on multi-class classification tasks by what seems to be a reduction of multi-class to binary classification following the classical one-vs-all mechanism.... | [
2,
3,
6
] | [
1,
4,
3
] | [
"iclr_2019_r1gKNs0qYX",
"iclr_2019_r1gKNs0qYX",
"iclr_2019_r1gKNs0qYX"
] |
iclr_2019_r1gOe209t7 | Reconciling Feature-Reuse and Overfitting in DenseNet with Specialized Dropout | Recently, convolutional neural networks (CNNs) have achieved great accuracy in visual recognition tasks. DenseNet has become one of the most popular CNN models due to its effectiveness in feature-reuse. However, like other CNN models, DenseNets also face the overfitting problem, if not a more severe one. Existing dropout methods can be applied but are not as effective, due to the introduced nonlinear connections. In particular, the property of feature-reuse in DenseNet will be impeded, and the dropout effect will be weakened by the spatial correlation inside feature maps. To address these problems, we craft the design of a specialized dropout method from three aspects: dropout location, dropout granularity, and dropout probability. The insights attained here could potentially be applied as a general approach for boosting the accuracy of other CNN models with similar nonlinear connections. Experimental results show that DenseNets with our specialized dropout method yield better accuracy compared to vanilla DenseNet and state-of-the-art CNN models, and the accuracy boost increases with the model depth. | rejected-papers | All reviewers recommend reject and there is no rebuttal. There is no basis on which to accept the paper. | test | [
"S1gRqRvC37",
"HJxe-1m22Q",
"HyemrA0d2X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Overall Thoughts:\n\nI think the use of regularisation to improve performance in DenseNet architectures is a topic of interest to the community. My concern with the paper in it’s current form is that the different dropout structures/schedules are priors and it is not clear from the current analysis exactly what pr... | [
5,
3,
4
] | [
4,
3,
4
] | [
"iclr_2019_r1gOe209t7",
"iclr_2019_r1gOe209t7",
"iclr_2019_r1gOe209t7"
] |
iclr_2019_r1gR2sC9FX | On the Spectral Bias of Neural Networks | Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we show that deep ReLU networks are biased towards low frequency functions, meaning that they cannot have local fluctuations without affecting their global behavior. Intuitively, this property is in line with the observation that over-parameterized networks find simple patterns that generalize across data samples. We also investigate how the shape of the data manifold affects expressivity by showing evidence that learning high frequencies gets easier with increasing manifold complexity, and present a theoretical understanding of this behavior. Finally, we study the robustness of the frequency components with respect to parameter perturbation, to develop the intuition that the parameters must be finely tuned to express high frequency functions. | rejected-papers | This paper considers an interesting hypothesis that ReLU networks are biased towards learning low frequency Fourier components, showing a spectral bias towards low frequency functions. The paper backs the hypothesis with theoretical results computing and bounding the Fourier coefficients of ReLU networks and experiments on synthetic datasets.
All reviewers find the topic to be interesting and important. However, they find the results in the paper to be preliminary and not yet ready for publication.
On the theoretical front, the paper characterizes the Fourier coefficients for a given piecewise linear region of a ReLU network. However, the bounds on the Fourier coefficients of the entire network in Theorem 1 seem weak, as they depend on the number of pieces (N_f) and the max Lipschitz constant over all pieces (L_f), quantities that can easily be exponentially big. The authors have said in their response that their bound on the Fourier coefficients is tight. If so, then the paper needs to discuss/prove why the quantities N_f and L_f are expected to be small. Such a discussion would help reviewers appreciate the theoretical contributions more.
On the experimental front, the paper does not show the spectral bias of networks trained on any real datasets. Reviewers are sympathetic to the challenge of evaluating the Fourier coefficients of a network trained on real data sets, but the paper does not outline any potential approach to attack this problem.
I strongly suggest the authors address these reviewer concerns before the next submission.
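The synthetic setting the reviews point to is easy to reproduce in miniature. The sketch below (architecture, optimizer, and target frequencies are arbitrary illustrative choices, not the paper's setup) fits a small ReLU MLP to a superposition of sinusoids and reads off, via an FFT of the network output, the amplitude learned at each target frequency; under the spectral-bias hypothesis the low-frequency amplitudes should approach 1 first.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
freqs = [2, 10, 30]  # target frequencies, in cycles over [0, 1)
x = (torch.arange(512, dtype=torch.float32) / 512).unsqueeze(1)
y = sum(torch.sin(2 * np.pi * k * x) for k in freqs)

net = nn.Sequential(nn.Linear(1, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1, 5001):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        # Amplitude the network has learned at each target frequency.
        spec = 2 * np.abs(np.fft.rfft(net(x).detach().numpy().ravel())) / 512
        print(step, [round(float(spec[k]), 3) for k in freqs])
```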
| val | [
"H1xjnEG4eE",
"SJeMY2tbkV",
"BkgJFfPt37",
"B1xDY8gQkV",
"Hklxk7ifJN",
"ByeFBpev6X",
"Hyea0eZW1V",
"ByeCM8C8a7",
"rylKTPtqCX",
"ByxMdFfmCQ",
"rJe2K_sW0Q",
"H1xuoxUj67",
"SJlUNA4i6m",
"Skx74sew6X",
"SyxdkRevp7",
"HygwdiS037",
"SylaVMTh27"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Although we can’t update our submission anymore: following the reviewers' suggestion, we have ran a few experiments on MNIST to demonstrate the effect of spectral bias on real data. It involves evaluating the robustness of neural network training dynamics to noise of various frequencies.\n\nWe train the same 6-lay... | [
-1,
-1,
6,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
3,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3
] | [
"iclr_2019_r1gR2sC9FX",
"Hyea0eZW1V",
"iclr_2019_r1gR2sC9FX",
"Skx74sew6X",
"iclr_2019_r1gR2sC9FX",
"SylaVMTh27",
"ByxMdFfmCQ",
"iclr_2019_r1gR2sC9FX",
"SyxdkRevp7",
"rJe2K_sW0Q",
"H1xuoxUj67",
"ByeCM8C8a7",
"ByeCM8C8a7",
"BkgJFfPt37",
"HygwdiS037",
"iclr_2019_r1gR2sC9FX",
"iclr_2019... |
iclr_2019_r1gRCiA5Ym | Jumpout: Improved Dropout for Deep Neural Networks with Rectified Linear Units | Dropout is a simple yet effective technique to improve generalization performance and prevent overfitting in deep neural networks (DNNs). In this paper, we discuss three novel observations about dropout to better understand the generalization of DNNs with rectified linear unit (ReLU) activations: 1) dropout is a smoothing technique that encourages each local linear model of a DNN to be trained on data points from nearby regions; 2) a constant dropout rate can result in effective neural-deactivation rates that are significantly different for layers with different fractions of activated neurons; and 3) the rescaling factor of dropout causes an inconsistency to occur between the normalization during training and testing conditions when batch normalization is also used. The above leads to three simple but nontrivial improvements to dropout resulting in our proposed method "Jumpout." Jumpout samples the dropout rate using a monotone decreasing distribution (such as the right part of a truncated Gaussian), so the local linear model at each data point is trained, with high probability, to work better for data points from nearby than from more distant regions. Instead of tuning a dropout rate for each layer and applying it to all samples, jumpout moreover adaptively normalizes the dropout rate at each layer and every training sample/batch, so the effective dropout rate applied to the activated neurons is kept the same. Moreover, we rescale the outputs of jumpout for a better trade-off that keeps both the variance and mean of neurons more consistent between training and test phases, which mitigates the incompatibility between dropout and batch normalization. Compared to the original dropout, jumpout shows significantly improved performance on CIFAR10, CIFAR100, Fashion-MNIST, STL10, SVHN, ImageNet-1k, etc., while introducing negligible additional memory and computation costs. | rejected-papers | The paper introduces a new variant of the Dropout method. The reviewers agree that the procedure is clear. However, the motivations behind the method are heuristic and lean heavily on empirical evidence; a principled justification for the procedure is lacking. Furthermore, the empirical evidence is lacking in detail and could use better comparisons with the existing literature. (A rough sketch of the three modifications, as read from the abstract, follows this row.) | train | [
"BkxeyQI50X",
"Hyx3Q4U5C7",
"H1xdhM8cAQ",
"rkgazXIcRX",
"S1l5nQLqRm",
"Skgo-Yg037",
"S1lc6jSjnm",
"B1eUKk9xi7"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for your comments! We add a thorough ablation study as you suggested, but do not agree with other points.\n\nQ1: However, I find the intuitive reasoning unclear and have to lean much more on empirical evidence.\n\nR1: Modification 1 and 2 are theoretically supported by the rigorous analysis of ReLU DNNs in ... | [
-1,
-1,
-1,
-1,
-1,
5,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
3,
5
] | [
"Skgo-Yg037",
"iclr_2019_r1gRCiA5Ym",
"Skgo-Yg037",
"S1lc6jSjnm",
"B1eUKk9xi7",
"iclr_2019_r1gRCiA5Ym",
"iclr_2019_r1gRCiA5Ym",
"iclr_2019_r1gRCiA5Ym"
] |
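The three modifications are concrete enough to sketch. The snippet below is one plausible reading of the abstract only: the exact rate distribution, per-layer normalization, and rescaling factor in the paper may differ, and the standard inverted-dropout rescaling used here is a placeholder.

```python
import torch

def jumpout(h, sigma=0.1, p_max=0.5, training=True):
    """One plausible reading of the three jumpout modifications, applied to
    a post-ReLU activation tensor h. Illustrative only; not the paper's
    exact recipe."""
    if not training:
        return h
    # (1) Sample the rate from a monotone decreasing distribution:
    #     the right half of a truncated Gaussian.
    p = (torch.randn(1, device=h.device).abs() * sigma).clamp(max=p_max)
    # (2) Normalize by the fraction of active (nonzero) units so the
    #     effective deactivation rate is comparable across layers/batches.
    frac_active = (h > 0).float().mean().clamp(min=1e-6)
    p_eff = (p / frac_active).clamp(max=p_max)
    # (3) Drop and rescale. Standard inverted-dropout rescaling is used as
    #     a placeholder; the paper proposes a modified factor to better
    #     match train/test statistics under batch normalization.
    mask = (torch.rand_like(h) > p_eff).float()
    return h * mask / (1.0 - p_eff)
```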
iclr_2019_r1gVqsA9tQ | ChainGAN: A sequential approach to GANs | We propose a new architecture and training methodology for generative adversarial networks. Current approaches attempt to learn the transformation from a noise sample to a generated data sample in one shot. Our proposed generator architecture, called ChainGAN, uses a two-step process. It first attempts to transform a noise vector into a crude sample, similar to a traditional generator. Next, a chain of networks, called editors, attempt to sequentially enhance this sample. We train each of these units independently, instead of with end-to-end backpropagation on the entire chain. Our model is robust, efficient, and flexible as we can apply it to various network architectures. We provide rationale for our choices and experimentally evaluate our model, achieving competitive results on several datasets. | rejected-papers | The paper presents a GAN-based generative model, where the generator consists of the base generator followed by several editors, each trained separately with its own discriminator. The reviewers found the idea interesting, but the evaluation insufficient. No rebuttal was provided. | train | [
"rklqd5V53m",
"BJxST0itnm",
"BJgSQnNP3m"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a GAN variant, called ChainGAN, which expresses the generator as a \"base generator\" -- which maps the noise vector to a rough model sample -- followed by a sequence of \"editors\" -- which progressively refine the sample. Each component of the generator is trained independently to fool its own... | [
4,
4,
4
] | [
4,
4,
4
] | [
"iclr_2019_r1gVqsA9tQ",
"iclr_2019_r1gVqsA9tQ",
"iclr_2019_r1gVqsA9tQ"
] |
iclr_2019_r1ge8sCqFX | An Exhaustive Analysis of Lazy vs. Eager Learning Methods for Real-Estate Property Investment | Accurate rent prediction in real estate investment can help in generating capital gains and guarantee financial success. In this paper, we carry out a comprehensive analysis and study of eleven machine learning algorithms for rent prediction, including Linear Regression, Multilayer Perceptron, Random Forest, KNN, ML-KNN, Locally Weighted Learning, SMO, SVM, J48, lazy Decision Tree (i.e., lazy DT), and KStar algorithms.
Our contribution in this paper is twofold: (1) We present a comprehensive analysis of internal and external attributes of a real-estate housing dataset and their correlation with rental prices. (2) We use rental prediction as a platform to study and compare the performance of eager vs. lazy machine learning methods using a myriad of ML algorithms.
We train our rent prediction models using a Zillow data set of 4K real estate properties in the US state of Virginia, including three house types: single-family, townhouse, and condo. Each data instance in the dataset has 21 internal attributes (e.g., area space, price, number of bed/bath, rent, school rating, and so forth). In addition to Zillow data, external attributes like walk/transit score and crime rate are collected from online data sources. A subset of the collected features (determined by the PCA technique) is selected to tune the parameters of the prediction models. We employ a hierarchical clustering approach to cluster the data based on two factors: house type and the average rent estimate of zip codes. We evaluate and compare the efficacy of the tuned prediction models based on two metrics, R-squared and Mean Absolute Error, applied to unseen data. Based on our study, lazy models like KStar lead to higher accuracy and lower prediction error compared to eager methods like J48 and LR. However, this is not an overarching conclusion across all the lazy and eager methods compared in this work. | rejected-papers | The paper evaluates several off-the-shelf algorithms for predicting the return on real-estate property investment. The problem is interesting, but there is a consensus that the paper contains little technical novelty, and the empirical study on a fairly small dataset is also not convincing. | train | [
"HJgwfuoe6X",
"SJeO7Ixqh7",
"HJx8DIxthX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors compare a collection of machine learning models to predict the expected rental income from an investment property. The dataset they use to train their model is fairly small (around 4K transactions). In addition to using house specific features the authors use other macro features, such as, walk score e... | [
3,
4,
2
] | [
5,
4,
4
] | [
"iclr_2019_r1ge8sCqFX",
"iclr_2019_r1ge8sCqFX",
"iclr_2019_r1ge8sCqFX"
] |
iclr_2019_r1gkAoA5FQ | A bird's eye view on coherence, and a worm's eye view on cohesion | Generating coherent and cohesive long-form texts is a challenging problem in natural language generation. Previous works relied on a large amount of human-generated texts to train neural language models; however, few attempted to explicitly model the desired linguistic properties of natural language text, such as coherence and cohesion, using neural networks. In this work, we train two expert discriminators for coherence and cohesion to provide hierarchical feedback for text generation. We also propose a simple variant of policy gradient, called 'negative-critical sequence training', in which the reward 'baseline' is constructed from randomly generated negative samples. We demonstrate the effectiveness of our approach through empirical studies, showing improvements over the strong baseline -- attention-based bidirectional MLE-trained neural language model -- in a number of automated metrics. The proposed model can serve as a baseline architecture to promote further research in modeling additional linguistic properties for downstream NLP tasks. | rejected-papers | This paper attempts to model the coherence of generated text, and proposes two kinds of discriminators that try to measure whether a piece of text is coherent or not.
However, the paper misses several critical related references, and also lacks extensive evaluation (especially manual evaluation).
There is consensus among the reviewers that this paper needs more work before it can be accepted at a conference such as ICLR.
| train | [
"Hygv83tF0m",
"BkxvpTtKRQ",
"Byl8EpYFAQ",
"HyxYypFKCX",
"HygrK3FFCX",
"Hkg4oaLsn7",
"Bkekcch5nX",
"r1e1frrd37"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your comments.\n\nWe would like to clarify our methodology. Apparently, our AnonReviewer2 also had the same misunderstanding so we append the same example to further help you understand our approach. We revised our writing to make this point clear. We would like to respond to your comments hereafter... | [
-1,
-1,
-1,
-1,
-1,
2,
2,
4
] | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
"Bkekcch5nX",
"iclr_2019_r1gkAoA5FQ",
"Hkg4oaLsn7",
"r1e1frrd37",
"Hygv83tF0m",
"iclr_2019_r1gkAoA5FQ",
"iclr_2019_r1gkAoA5FQ",
"iclr_2019_r1gkAoA5FQ"
] |
iclr_2019_r1gl7hC5Km | Adapting Auxiliary Losses Using Gradient Similarity | One approach to deal with the statistical inefficiency of neural networks is to rely on auxiliary losses that help to build useful representations. However, it is not always trivial to know if an auxiliary task will be helpful for the main task and when it could start hurting. We propose to use the cosine similarity between gradients of tasks as an adaptive weight to detect when an auxiliary loss is helpful to the main loss. We show that our approach is guaranteed to converge to critical points of the main task and demonstrate the practical usefulness of the proposed algorithm in a few domains: multi-task supervised learning on subsets of ImageNet, reinforcement learning on gridworld, and reinforcement learning on Atari games. | rejected-papers | This paper tackles the problem of using auxiliary losses to help regularize and aid the learning of a "goal" task. The proposed approach avoids learning irrelevant or contradictory details from the auxiliary task at the expense of the "goal" task by monitoring the cosine similarity between the gradients of the auxiliary and main tasks and ignoring those auxiliary gradients which are too dissimilar.
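In sketch form, this gating is a single extra step on top of ordinary backprop. The snippet below is a minimal PyTorch rendering; the unweighted-sum update and the shared-parameter setup are assumptions about the training loop, not verbatim from the paper.

```python
import torch

def gated_grads(main_loss, aux_loss, params):
    """Weight the auxiliary gradient by its cosine similarity with the main
    gradient; auxiliary directions that disagree (negative cosine) are
    dropped entirely."""
    g_main = torch.autograd.grad(main_loss, params, retain_graph=True)
    g_aux = torch.autograd.grad(aux_loss, params)
    flat = lambda gs: torch.cat([g.reshape(-1) for g in gs])
    cos = torch.nn.functional.cosine_similarity(flat(g_main), flat(g_aux), dim=0)
    w = cos.clamp(min=0.0)  # ignore the auxiliary task when cos < 0
    return [gm + w * ga for gm, ga in zip(g_main, g_aux)]
```

Assigning the returned list to each parameter's `.grad` before `optimizer.step()` completes the update.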
To justify such a setup one must first show that such negative interference occurs in practice, warranting explicit attention. Then one must show that their algorithm effectively mitigates this interference and at the same time provides some useful signal in combination with the main learning objective.
During the review process there was a significant discussion as to whether the proposed approach sufficiently justified its need and usefulness as defined above. One major point of contention is whether to compare against the multi-task literature. The authors claim that prior multi-task learning literature is out of scope of this work since their goal is not to measure performance on all tasks used during learning. However, this claim does not invalidate the reviewer's request for comparison against multi-task learning work. In fact, the authors *should* verify that their method outperforms state-of-the-art multi-task learning methods: not because they too are studying performance across all tasks, but because their method, which knows to prioritize one task during training, should certainly outperform learning paradigms that have no special preference for one of the tasks.
A main issue with the current draft centers on the usefulness of the proposed algorithm: 1) whether the gradient cosine similarity is a necessary condition to avoid negative interference, and 2) whether, at least empirically, auxiliary losses offer improved performance over optimizing the goal task alone. Based on the experiments now available, the answers to these questions remain unclear, and thus the paper is not yet recommended for publication.
"Bklzqu6sRX",
"HylMgwajC7",
"r1eyMy-92X",
"HylMDMj5Am",
"r1er2gjqAX",
"SyxdCRc5RQ",
"rkxamqUqAX",
"HkxmPvI9AX",
"Ske2aSLqCQ",
"BkxRqSI9C7",
"Hkx9DB85Cm",
"Syg2_4L9A7",
"BklWUQ1OhX",
"Bklbw0cw2X"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We suspect that this disagreement is primarily caused by what we mean by “multi-task learning” versus what the reviewer means by “multi-task learning”. \n\nTo be more specific, we further clarify the differences between [Chen et al,. ] & [Kendall et al.,] and our paper, assume we have two tasks, L_{main} and L_{au... | [
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
6,
6
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3
] | [
"r1er2gjqAX",
"SyxdCRc5RQ",
"iclr_2019_r1gl7hC5Km",
"Ske2aSLqCQ",
"BkxRqSI9C7",
"Hkx9DB85Cm",
"Bklbw0cw2X",
"BklWUQ1OhX",
"BkxRqSI9C7",
"Hkx9DB85Cm",
"r1eyMy-92X",
"iclr_2019_r1gl7hC5Km",
"iclr_2019_r1gl7hC5Km",
"iclr_2019_r1gl7hC5Km"
] |
iclr_2019_r1glehC5tQ | Distinguishability of Adversarial Examples | Machine learning models including traditional models and neural networks can be easily fooled by adversarial examples which are generated from the natural examples with small perturbations. This poses a critical challenge to machine learning security, and impedes the wide application of machine learning in many important domains such as computer vision and malware detection. Unfortunately, even state-of-the-art defense approaches such as adversarial training and defensive distillation still suffer from major limitations and can be circumvented. From a unique angle, we propose to investigate two important research questions in this paper: Are adversarial examples distinguishable from natural examples? Are adversarial examples generated by different methods distinguishable from each other? These two questions concern the distinguishability of adversarial examples. Answering them will potentially lead to a simple yet effective approach, termed as defensive distinction in this paper under the formulation of multi-label classification, for protecting against adversarial examples. We design and perform experiments using the MNIST dataset to investigate these two questions, and obtain highly positive results demonstrating the strong distinguishability of adversarial examples. We recommend that this unique defensive distinction approach should be seriously considered to complement other defense approaches. | rejected-papers | The paper investigates an interesting question and points at a promising research direction in relation to whether adversarial examples are distinguishable from natural examples.
One concern raised in the reviews is that the technical contribution of the paper is weak. Another main concern is that the experiments were conducted on only one simple data set. The authors proposed to add more experiments and improve other points, but no revision followed.
The reviewers consistently rate the paper as ok, but not good enough.
I would encourage the authors to conduct the improvements proposed by the reviewers and the authors themselves. | val | [
"ByxJd3gppm",
"Skeue2rUpm",
"ByxDwhP0n7",
"rJec3VDvhm",
"SylVkFjP37",
"Hyxnw3VI3X"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We would like to thank the reviewers for their valuable suggestions and comments. We really appreciate all the feedback from the reviewers as well as readers. We believe this has benefited our revision.\n\nBased on the reviews, one common concern is the dataset. Specifically, results predicated on the MNIST datase... | [
-1,
4,
4,
4,
-1,
-1
] | [
-1,
4,
5,
4,
-1,
-1
] | [
"iclr_2019_r1glehC5tQ",
"iclr_2019_r1glehC5tQ",
"iclr_2019_r1glehC5tQ",
"iclr_2019_r1glehC5tQ",
"Hyxnw3VI3X",
"iclr_2019_r1glehC5tQ"
] |
iclr_2019_r1gnQ20qYX | Pearl: Prototype lEArning via Rule Lists | Deep neural networks have demonstrated promising prediction and classification performance on many healthcare applications. However, the interpretability of those models is often lacking. On the other hand, classical interpretable models such as rule lists or decision trees do not lead to the same level of accuracy as deep neural networks and can often be too complex to interpret (due to the potentially large depth of rule lists). In this work, we present PEARL, Prototype lEArning via Rule Lists, which iteratively uses rule lists to guide a neural network to learn representative data prototypes. The resulting prototype neural network provides accurate prediction, and the prediction can be easily explained by a prototype and its guiding rule lists. Thanks to the predictive power of neural networks, the rule lists from prototypes are more concise and hence provide better interpretability. On two real-world electronic healthcare records (EHR) datasets, PEARL consistently outperforms all baselines across both datasets, especially achieving performance improvement over conventional rule learning by up to 28% and over prototype learning by up to 3%. Experimental results also show the resulting interpretation of PEARL is simpler than that of standard rule learning. | rejected-papers | This paper presents an approach that combines rule lists with prototype-based neural models to learn accurate models that are also interpretable (both due to the rules and the prototypes). This combination is quite novel: the reviewers and the AC are unaware of prior work that has combined them, and find it potentially impactful. The experiments on the healthcare application were appreciated, and it is clear that the proposed approach produces accurate models with far fewer rules than existing rule learning approaches.
The reviewers and AC note the following potential weaknesses: (1) there are substantial presentation issues, including the details of the approach, (2) unclear what the differences are from existing approaches, in particular, the benefits, and (3) The evaluation lacked in several important aspects, including user study on interpretability, and choice of benchmarks.
The authors provided a revision to their paper that addresses some of the presentation issues in notation, and incorporates some of the evaluation considerations as appendices into the paper. However, the reviewer scores are unchanged since most of the presentation and evaluation concerns remain, requiring significant modifications to be addressed. | train | [
"H1e5vPB9RQ",
"BJlcTrHc0m",
"r1elcZBcRX",
"S1lAzkr9Am",
"Syg5kyhS6X",
"SJx0XuLc3Q",
"HJgb6MBq2m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewers for the constructive comments and suggestions. We have substantially revised the paper, in particularly \n1) adding a precise definition of interpretability, \n2) clarifying system design, \n3) improving the presentation of the paper, \nto respond to all the question raised. Hopefully the r... | [
-1,
-1,
-1,
-1,
5,
3,
4
] | [
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"iclr_2019_r1gnQ20qYX",
"HJgb6MBq2m",
"SJx0XuLc3Q",
"Syg5kyhS6X",
"iclr_2019_r1gnQ20qYX",
"iclr_2019_r1gnQ20qYX",
"iclr_2019_r1gnQ20qYX"
] |
iclr_2019_r1l-e3Cqtm | Deep Probabilistic Video Compression | We propose a variational inference approach to deep probabilistic video compression. Our model uses advances in variational autoencoders (VAEs) for sequential data and combines them with recent work on neural image compression. The approach jointly learns to transform the original video into a lower-dimensional representation as well as to entropy code this representation according to a temporally-conditioned probabilistic model. We split the latent space into local (per frame) and global (per segment) variables, and show that training the VAE to utilize both representations leads to an improved rate-distortion performance. Evaluation on small videos from public data sets with varying complexity and diversity shows that our model yields competitive results when trained on generic video content. Extreme compression performance is achieved for videos with specialized content if the model is trained on similar videos. | rejected-papers | The proposed method compresses video sequences with an end-to-end approach, by extending a variational approach from images to videos. The problem setting is interesting and somewhat novel. The main limitation, as exposed by the reviewers, is that evaluation was done on very limited and small domains. It is not at all clear that this method scales well to non-toy domains or that it is in fact possible to get good results with an extension of this method beyond small-scale content. There were some concerns about unfair comparisons to classical codecs that were optimized for longer sequences (and I share those concerns, though they are somewhat alleviated in the rebuttal).
While the paper presents an interesting line of work, the reviewers did present a number of issues that make it hard to recommend it for acceptance. However, as R1 points out, most of the problems are fixable and I would advise the authors to take the suggested improvements (especially anything related to modeling longer sequences) and once they are incorporated this will be a much stronger submission. | train | [
"ryeVUXMi6X",
"BklJbXzsam",
"r1xDTlMiaQ",
"BkgUxbGipm",
"ByeXX3bo6m",
"S1lLcGX5hm",
"ByltbXgKnQ",
"BJxeCzadhX"
] | [
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
">4. It is not very clear how the global code is obtained. It is implied that all frames get processed in order to come up with f, but does this mean that they're processed via an LSTM model, or is there a single fully connected layer which takes as input all frames? In terms of modeling f, it sounds like the hyper... | [
-1,
-1,
-1,
-1,
-1,
6,
5,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
"BklJbXzsam",
"BJxeCzadhX",
"ByltbXgKnQ",
"r1xDTlMiaQ",
"S1lLcGX5hm",
"iclr_2019_r1l-e3Cqtm",
"iclr_2019_r1l-e3Cqtm",
"iclr_2019_r1l-e3Cqtm"
] |
iclr_2019_r1l3NiCqY7 | Lipschitz regularized Deep Neural Networks generalize | We show that if the usual training loss is augmented by a Lipschitz regularization term, then the networks generalize. We prove generalization by first establishing a stronger convergence result, along with a rate of convergence. A second result resolves a question posed in Zhang et al. (2016): how can a model distinguish between the case of clean labels, and randomized labels? Our answer is that Lipschitz regularization using the Lipschitz constant of the clean data makes this distinction. In this case, the model learns a different function which we hypothesize correctly fails to learn the dirty labels. | rejected-papers | This paper is entitled Lipschitz regularized deep networks generalize. In fact, the paper has nothing in particular to do with neural networks. It is really the study of minimizers of a Lipschitz-regularized risk functional over certain nonparametric classes. The connection with neural networks is simply that one can usually achieve zero empirical risk for (overparametrized) neural networks and so, in deep learning practice, neural networks behave like a nonparametric class. Given the lack of connection with neural networks, one cannot logically learn anything specific about neural networks from this paper. It should be renamed... perhaps "Lipschitz regularization with an application to deep learning". One could raise issues of technical novelty, as it seems many of the key results are known.
I also question the insight that the bounds provide: they end up depending exponentially on the dimension of the data manifold. In the noiseless case, this exponential dependence arises from a triangle inequality between an arbitrary data point x and the nearest training data point! In the noisy case, this exponential dependence appears in a nonasymptotic uniform law of large numbers over the class of L-Lipschitz functions. There's no insight into deep learning here. It's also hard to judge whether these rates are what is explaining deep learning practice: it's unclear what the manifold dimensionality is, but it seems unlikely that this bound explains empirical performance (even if it describes the asymptotic rate of convergence).
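Spelled out, the triangle-inequality step in question is the following (the notation is illustrative: $u$ the learned $L$-Lipschitz function, $u_0$ the $L_0$-Lipschitz target, $x_i$ the training point nearest to $x$):

$$ |u(x)-u_0(x)| \le |u(x)-u(x_i)| + |u(x_i)-u_0(x_i)| + |u_0(x_i)-u_0(x)| \le (L+L_0)\, d(x,x_i) + \text{training error}, $$

and since the nearest-neighbor distance on a $d$-dimensional data manifold typically scales like $n^{-1/d}$, the resulting rate inherits an exponential dependence on $d$.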
One of the main results shows that, in the face of corrupted labels (corrupted in a particular way), Lipschitz regularization can "undo" the corruption. However, convergence is not measured with respect to the true labeling function, but rather to the solution of the population regularized risk functional. How this solution relates to the true labeling function is unclear.
The paper also purports to resolve a mystery of generalization raised by Zhang et al (ICLR 2017). In that paper, the authors point to the diametrically opposed generalization performance on "true" and "random" labels. In fact, this paper does not resolve this problem because Zhang et al. were interested in how SGD solves this problem without explicit regularization. That Lipschitz regularization could solve this problem is borderline obvious.
I wanted to make a few comments.
In the rebuttal with reviewers, the question of parametric rates comes up. I think there's some confusion on the part of both the reviewer and the authors. The parametric rates are often apparent but not real. The complexity terms often have an uncharacterized dependence on the number of data points (through the learning algorithm) and on the size of the network (which is implicitly chosen based on the data complexity). In practice, these bounds are vacuous.
At some point, the authors argue that "In practice, u_n(x) is rounded to the nearest label, so once |u_n-u_0| < 1/2, all classification results will be correct after rounding." I'm not entirely sure I understand the logic here. First, convergence to u_0 is not controlled, but rather convergence to u*. u* may spend most of its time near the decision boundary, rendering uniform convergence almost useless. One would need noise conditions (Tsybakov) to make some claim.
Some other issues:
1. In (1), u ranges over X \to Y, but is then also applied to a weight vector.
2. Is "continuum variational problem" jargon? If so, cite. Otherwise, taking limits of rho_n and J makes sense only if J is suitably continuous, which depends on the loss function. You later address this convergence, and so you should foreshadow it.
3. Notation L[u;\rho] in (5) should be L[u,\rho], no?
4. (Goodfellow et al., 2016, Section 5.2) is an inappropriate citation for the term "Generalization".
5. In Thm 2.7, there is a reference to a sequence mu_n, and I assume the sequence elements are indexed by n; but then n appears in the probability with which the bound holds, and so this bound is not about the sequence but about a solution for \rho_n for fixed n.
6. Id should not be italicized in the statement of Lemma 2.10. Use mathrm not text/textrm. It should also be defined.
7. "convex convex" typo. | train | [
"r1g3yc1c2X",
"HkgkvMgOl4",
"B1lwy94kxE",
"H1lxtPfK2X",
"r1l4CH4h0Q",
"rJg5I0qjjm",
"B1lb-5yq0m",
"ByeJQCVLRm",
"S1yOZKrC7",
"BkerG5UNCQ",
"Hklx5D77CQ",
"SygmUEX7RX",
"rJgxjG7mRQ",
"BJg7fzQ70Q"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper studies generalization through a general empirical risk minimization procedure with Lipschitz regularization.\nGeneralization is measured through distance of the empirical minimizer function to a true labeling function u_0, or to the minimizer of the expected regularized loss.\n\nThe approach of studying... | [
4,
-1,
-1,
6,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
2,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1l3NiCqY7",
"B1lwy94kxE",
"B1lb-5yq0m",
"iclr_2019_r1l3NiCqY7",
"rJg5I0qjjm",
"iclr_2019_r1l3NiCqY7",
"iclr_2019_r1l3NiCqY7",
"BkerG5UNCQ",
"BkerG5UNCQ",
"rJgxjG7mRQ",
"rJg5I0qjjm",
"H1lxtPfK2X",
"r1g3yc1c2X",
"iclr_2019_r1l3NiCqY7"
] |
iclr_2019_r1l9Nj09YQ | Towards Language Agnostic Universal Representations | When a bilingual student learns to solve word problems in math, we expect the student to be able to solve these problems in both languages the student is fluent in, even if the math lessons were only taught in one language. However, current representations in machine learning are language dependent. In this work, we present a method to decouple the language from the problem by learning language agnostic representations, therefore allowing a model trained in one language to be applied to a different one in a zero-shot fashion. We learn these representations by taking inspiration from linguistics, specifically the Universal Grammar hypothesis, and learn universal latent representations that are language agnostic (Chomsky, 2014; Montague, 1970). We demonstrate the capabilities of these representations by showing that the models trained on a single language using language agnostic representations achieve very similar accuracies in other languages. | rejected-papers | This paper addresses a clear open problem in representation learning for language: the learning of language-agnostic representations for zero-shot cross-lingual transfer. All three reviewers agree that it makes some progress on that problem, and my understanding is that a straightforward presentation of these results would likely have been accepted to this conference. However, there were serious issues with the framing and presentation of the paper.
One reviewer expressed serious concerns about clarity and detail, and two others expressed serious concerns about the paper's framing. I'm more worried about the framing issue: The paper opens with a sweeping discussion about the nature of language and universal grammar and, in the original version, also claims (in vague terms) to have made substantial progress on understanding the nature of language. The most problematic claims have since been removed, but the sweeping introduction remains, and it serves as the only introduction to the paper, leaving little discussion of the substantive points that the paper is trying to make.
I reluctantly have to recommend rejection. These problems should be fixable with a substantial re-write of the paper, but the reviewers were not satisfied with the progress made in that direction so far. | train | [
"rylXNTtLAm",
"SklAWFYUAm",
"rygaeXg6aQ",
"Bkl4hCAhT7",
"Bkl9lYFypm",
"ByeaDdt1TX",
"rJgRGOtk6X",
"SkeVkPm5nm",
"ryetcXFYnX",
"Bkx6E1yHnm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reply. \n\nWe agree that seeing perplexity gaps across all combinations of two languages from our 7 would be a useful graphic. We ran the experiment you've mentioned prior to writing the paper to see if the various perplexities align with our intuition about languages. We've included a heat-map ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
3
] | [
"rygaeXg6aQ",
"Bkl4hCAhT7",
"Bkl9lYFypm",
"rJgRGOtk6X",
"Bkx6E1yHnm",
"ryetcXFYnX",
"SkeVkPm5nm",
"iclr_2019_r1l9Nj09YQ",
"iclr_2019_r1l9Nj09YQ",
"iclr_2019_r1l9Nj09YQ"
] |
iclr_2019_r1lFIiR9tQ | Training generative latent models by variational f-divergence minimization | Probabilistic models are often trained by maximum likelihood, which corresponds to minimizing a specific form of f-divergence between the model and data distribution. We derive an upper bound that holds for all f-divergences, showing the intuitive result that the divergence between two joint distributions is at least as great as the divergence between their corresponding marginals. Additionally, the f-divergence is not formally defined when two distributions have different supports. We thus propose a noisy version of f-divergence which is well defined in such situations. We demonstrate how the bound and the new version of f-divergence can be readily used to train complex probabilistic generative models of data and that the fitted model can depend significantly on the particular divergence used. | rejected-papers | The paper proposes a new method for training generative models by minimizing general f-divergences. The main technical ideas are to optimize the f-divergence between joint distributions, which is rightly observed to be an upper bound on the f-divergence between the marginal distributions, and to address the disjoint-support problem by convolving the data with a noise distribution. The basic ideas in this work are not completely novel but are put together in a new way.
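The joint-versus-marginal inequality referred to above is the standard monotonicity (data-processing) property of f-divergences: for joint distributions $p(x,z)$ and $q(x,z)$ with marginals $p(x)$ and $q(x)$,

$$ D_f\big(p(x)\,\|\,q(x)\big) \;\le\; D_f\big(p(x,z)\,\|\,q(x,z)\big), $$

so minimizing the right-hand side in the model parameters gives a tractable upper bound on the divergence between the data and model marginals.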
However, the key weakness of this work, as all the reviewers noticed, is that the empirical results are too weak to support the usefulness of the proposed approach. The only quantitative results are in Table 2, which covers only a simple Gaussian example. It is essential to have more substantial empirical results supporting the new algorithm.
| train | [
"BklKFxuFC7",
"SyeXbx_YC7",
"rygly1uFR7",
"rkls138q27",
"HJlLzHlOn7",
"H1lMADZ727"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the constructive feedback on our paper. We respond below to the points raised.\n\n> \"For me, this seems that the author just introduced the new fancy objective. I think the motivation to introduce the new objective function should be stated clearly.\"\n\nWe would argue that we haven't really introdu... | [
-1,
-1,
-1,
6,
5,
5
] | [
-1,
-1,
-1,
3,
4,
3
] | [
"H1lMADZ727",
"HJlLzHlOn7",
"rkls138q27",
"iclr_2019_r1lFIiR9tQ",
"iclr_2019_r1lFIiR9tQ",
"iclr_2019_r1lFIiR9tQ"
] |
iclr_2019_r1lM_sA5Fm | Assumption Questioning: Latent Copying and Reward Exploitation in Question Generation | Question generation is an important task for improving our ability to process natural language data, with additional challenges over other sequence transformation tasks. Recent approaches use modifications to a Seq2Seq architecture inspired by advances in machine translation, but unlike translation the input and output vocabularies overlap significantly, and there are many different valid questions for each input. Approaches using copy mechanisms and reinforcement learning have shown promising results, but there are ambiguities in the exact implementation that have not yet been investigated. We show that by removing inductive bias from the model and allowing the choice of generation path to become latent, we achieve substantial improvements over implementations biased with both naive and smart heuristics. We perform a human evaluation to confirm these findings. We show that although policy gradient methods may be used to decouple training from the ground truth and optimise directly for quality metrics that have previously been assumed to be good choices, these objectives are poorly aligned with human judgement and the model simply learns to exploit the weaknesses of the reward source. Finally, we show that an adversarial objective learned directly from the ground truth data is not able to generate a useful training signal. | rejected-papers | This paper investigates copying mechanisms and reward functions in sequence-to-sequence models for question generation. The key findings are threefold: (1) when the alignments between input and output are weak, it is better to use a latent copying mechanism to soften the model bias toward copying, (2) while policy gradient methods might be able to improve automatic scores, their results align poorly with human evaluation, and (3) the use of an adversarial objective also does not lead to useful training signals.
Pros:
The task is well motivated and the paper presents potentially useful negative results on policy gradient and adversarial training.
Cons:
All reviewers found that the clarity and organization of the paper require improvement. Also, the proposed methods are relatively incremental and the empirical results are not strong. While the rebuttal answered some of the clarification questions, it did not address major concerns about the novelty and contributions.
Verdict:
Reject due to relatively weak contributions and novelty. | train | [
"rklKvHPsh7",
"SJgoN2N367",
"BkeOz3V3a7",
"HJgEx242Tm",
"BJlN9hrJa7",
"HJeXI8r92Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"The paper studies question generation, which is an important problem in many real applications. The authors propose to use better caching model and more evalution methods to deal with the problem. However, the paper is poorly written and hard to follow, and the proposed model lacks of novelty. The main reasons are... | [
3,
-1,
-1,
-1,
4,
5
] | [
4,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_r1lM_sA5Fm",
"HJeXI8r92Q",
"rklKvHPsh7",
"BJlN9hrJa7",
"iclr_2019_r1lM_sA5Fm",
"iclr_2019_r1lM_sA5Fm"
] |
iclr_2019_r1ledo0ctX | Consistency-based anomaly detection with adaptive multiple-hypotheses predictions | In one-class-learning tasks, only the normal case can be modeled with data, whereas the variation of all possible anomalies is too large to be described sufficiently by samples. Thus, due to the lack of representative data, the wide-spread discriminative approaches cannot cover such learning tasks, and rather generative models, which attempt to learn the input density of the normal cases, are used. However, generative models suffer from a large input dimensionality (as in images) and are typically inefficient learners. We propose to learn the data distribution more efficiently with a multi-hypotheses autoencoder. Moreover, the model is criticized by a discriminator, which prevents artificial data modes not supported by data, and which enforces diversity across hypotheses. This consistency-based anomaly detection (ConAD) framework allows the reliable identification of out-of-distribution samples. For anomaly detection on CIFAR-10, it yields up to 3.9 percentage points improvement over previously reported results. On a real anomaly detection task, the approach reduces the error of the baseline models from 6.8% to 1.5%. | rejected-papers | This paper proposes an anomaly-detection approach by augmenting the VAE encoder with a multiple-hypotheses network and then using a discriminator in the decoder to select one of the hypotheses. The idea is interesting, although the reviewers found the paper to be poorly written and the approach to be a bit confusing and complicated.
Revisions and rebuttal have certainly helped to improve the quality of the work. However, the reviewers believe that the paper requires more work before it can be accepted at ICLR. For this reason, I recommend rejecting this paper in its current state.
| train | [
"BklbVp-blE",
"rkesDG3rCX",
"SJeEHmhH0m",
"r1gi9MhSAm",
"BJlsnMnS0Q",
"H1xmVZ2HCQ",
"SJxwF776hm",
"ryxgISBF37",
"HyeByKbBnm"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks to the authors for their response, and for updating the paper accordingly. The motivation is now somewhat clearer, but I would still recommend resubmitting after some re-work. For example, if the focus of the paper is on detecting anomalies by means of the predicted log-likelihood, why is there so much focu... | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
4
] | [
"r1gi9MhSAm",
"ryxgISBF37",
"ryxgISBF37",
"SJxwF776hm",
"SJxwF776hm",
"HyeByKbBnm",
"iclr_2019_r1ledo0ctX",
"iclr_2019_r1ledo0ctX",
"iclr_2019_r1ledo0ctX"
] |
iclr_2019_r1lgm3C5t7 | Universal discriminative quantum neural networks | Quantum mechanics fundamentally forbids deterministic discrimination of quantum states and processes. However, the ability to optimally distinguish various classes of quantum data is an important primitive in quantum information science. In this work, we trained near-term quantum circuits to classify data represented by quantum states using the Adam stochastic optimization algorithm. This is achieved by iterative interactions of a classical device with a quantum processor to discover the parameters of an unknown non-unitary quantum circuit. This circuit learns to simulate the unknown structure of a generalized quantum measurement, or positive-operator valued measure (POVM), that is required to optimally distinguish possible distributions of quantum inputs. Notably we used universal circuit topologies, with a theoretically motivated circuit design which guaranteed that our circuits can perform arbitrary input-output mappings. Our numerical simulations showed that quantum circuits could be trained to discriminate among various pure and mixed quantum states, exhibiting a trade-off between minimizing erroneous and inconclusive outcomes with comparable performance to theoretically optimal POVMs. We trained the circuit on different classes of quantum data and evaluated the generalization error on unseen quantum data. This generalization power hence distinguishes our work from standard circuit optimization and provides an example of quantum machine learning for a task that has inherently no classical analogue. | rejected-papers | The paper needs work to improve clarity and strengthen the technical message. Also, the authors broke the policy of anonymous submission which disqualifies the paper. | train | [
"Hyl6l37i3Q",
"BJl60PYchX",
"rJgSi_Eq2Q"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Authors give a method to perform a full quantum problem of classifying unknown mixed quantum states. This is an important topic but the paper is ok and I think the test case is a bit lacking.\n\nThe theory is sound and the math is good. The only question I have is how does this hold on a real quantum computer suc... | [
5,
5,
2
] | [
3,
2,
2
] | [
"iclr_2019_r1lgm3C5t7",
"iclr_2019_r1lgm3C5t7",
"iclr_2019_r1lgm3C5t7"
] |
iclr_2019_r1lpx3A9K7 | Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference | Deep neural networks have been demonstrated to be vulnerable to adversarial attacks, where small perturbations intentionally added to the original inputs can fool the classifier. In this paper, we propose a defense method, Featurized Bidirectional Generative Adversarial Networks (FBGAN), to extract the semantic features of the input and filter the non-semantic perturbation. FBGAN is pre-trained on the clean dataset in an unsupervised manner, adversarially learning a bidirectional mapping between a high-dimensional data space and a low-dimensional semantic space; also mutual information is applied to disentangle the semantically meaningful features. After the bidirectional mapping, the adversarial data can be reconstructed to denoised data, which could be fed into any pre-trained classifier. We empirically show the quality of reconstruction images and the effectiveness of defense. | rejected-papers | The reviewers agree the paper is not ready for publication. | train | [
"r1lOxLgv3X",
"BJxwxKK9R7",
"SkgRopH90X",
"rJgINoH9CQ",
"BJgNu_B5AX",
"S1lODbPZp7",
"HJgJ3w4LhQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"This work proposes to defend against adversarial examples by “denoising” the input image through an autoencoder (a BiGAN trained similar to InfoGAN) before classifying it with a standard CNN. The robustness of the model is evaluated on the L_infinity metric against FGSM and PGD.\n\nMy main criticism is as follows:... | [
4,
-1,
-1,
-1,
-1,
3,
3
] | [
5,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_r1lpx3A9K7",
"SkgRopH90X",
"r1lOxLgv3X",
"S1lODbPZp7",
"HJgJ3w4LhQ",
"iclr_2019_r1lpx3A9K7",
"iclr_2019_r1lpx3A9K7"
] |
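The defense described in the FBGAN record above reduces, at inference time, to a reconstruct-then-classify pipeline. A minimal sketch follows, assuming pretrained `encoder`, `generator`, and `classifier` callables; these names and the wiring are illustrative, not the authors' code.

```python
import torch

def fbgan_defend(x_adv, encoder, generator, classifier):
    """Reconstruct-then-classify defense: map the (possibly adversarial)
    input to the low-dimensional semantic space, regenerate a denoised
    image from it, and feed that to an ordinary pretrained classifier."""
    with torch.no_grad():
        z = encoder(x_adv)          # semantic features of the input
        x_denoised = generator(z)   # reconstruction without perturbation
        logits = classifier(x_denoised)
    return logits.argmax(dim=1)
```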
iclr_2019_r1luCsCqFm | Learn From Neighbour: A Curriculum That Train Low Weighted Samples By Imitating | Deep neural networks, which have achieved great success in a wide spectrum of applications, are often time-, compute- and storage-hungry. Curriculum learning was proposed to boost network training with a syllabus from easy to hard. However, the relationship between data complexity and network training is unclear: why do hard examples harm performance at the beginning of training but help at the end? In this paper, we investigate this problem. Similar to internal covariate shift in the network's forward pass, distribution changes in the weights of the top layers also affect the training of preceding layers during the backward pass. We call this phenomenon inverse "internal covariate shift". Training on hard examples aggravates this distribution shift and damages training. To address this problem, we introduce a curriculum loss that consists of two parts: a) an adaptive weight that mitigates large early punishment; b) an additional representation loss for low-weighted samples. The intuition behind the loss is simple. We train the top layers on "good" samples to reduce large shifts, and encourage "bad" samples to learn from "good" samples. In detail, the adaptive weight assigns small values to hard examples, reducing the influence of noisy gradients. The low-weighted hard samples, in turn, receive the proposed representation loss. Low-weighted data gets nearly no training signal and can get stuck in the embedding space for a long time. The proposed representation loss aims to encourage their training. This is done by letting them learn a better representation from their superior neighbours without participating in the learning of the top layers. In this way, the fluctuation of the top layers is reduced and hard samples still receive training signals. We also find that curriculum learning needs random sampling between tasks for better training. Our curriculum loss is easy to combine with existing stochastic algorithms such as SGD. Experimental results show a consistent improvement over several benchmark datasets. | rejected-papers | This paper attempts to address a problem they dub "inverse" covariate shift, where an improperly trained output layer can hamper learning. The idea is to use a form of curriculum learning. The reviewers found that the notion of inverse covariate shift was not formally or empirically well defined. Furthermore, the baselines used were too weak: the authors should consider comparing against state-of-the-art curriculum learning methods.
"B1eXpM0L6m",
"SyxpzqEPhm",
"ryle7pFQ3X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper suggests a source of slowness when training a two-layer neural networks: improperly trained output layer (classifier) may hamper learning of the hidden layer (feature). The authors call this “inverse” internal covariate shift (as opposed to the usual one where the feature distribution shifts and trips t... | [
2,
3,
4
] | [
5,
3,
4
] | [
"iclr_2019_r1luCsCqFm",
"iclr_2019_r1luCsCqFm",
"iclr_2019_r1luCsCqFm"
] |
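The "adaptive weight that mitigates large early punishment" from the record above can be sketched by down-weighting high-loss samples before averaging. The exponential weighting and the temperature below are illustrative assumptions; the paper's exact weighting function (and its representation loss for low-weighted samples) is not reproduced here.

```python
import torch
import torch.nn.functional as F

def adaptive_curriculum_loss(logits, targets, temperature=1.0):
    """Down-weight hard (high-loss) examples so their noisy gradients do
    not dominate the top layers early in training. The exponential
    weighting is an illustrative choice."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    # Hard examples (large loss) receive small weights; detach so the
    # weights themselves are not optimized away.
    weights = torch.exp(-per_sample.detach() / temperature)
    return (weights * per_sample).mean()
```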
iclr_2019_r1xFE3Rqt7 | Adaptive Mixture of Low-Rank Factorizations for Compact Neural Modeling | Modern deep neural networks have a large number of weights, which makes them difficult to deploy on computation-constrained devices such as mobile phones. One common approach to reducing the model size and computational cost is to use low-rank factorization to approximate a weight matrix. However, performing standard low-rank factorization with a small rank can hurt the model expressiveness and significantly decrease the performance. In this work, we propose to use a mixture of multiple low-rank factorizations to model a large weight matrix, and the mixture coefficients are computed dynamically depending on the input. We demonstrate the effectiveness of the proposed approach on both language modeling and image classification tasks. Experiments show that our method not only improves the computation efficiency but also maintains (sometimes outperforms) its accuracy compared with the full-rank counterparts. | rejected-papers | The paper is clearly written and well motivated, but there are remaining concerns on contributions and comparisons.
The paper received mixed initial reviews. After extensive discussions, while the authors successfully clarified several important issues (such as computation efficiency w.r.t. splitting) pointed out by Reviewer 4 (an expert in the field), they were not able to convince him/her about the significance of the proposed network compression method.
Reviewer 4 has the following remaining concerns:
1) This is a typical paper showing only FLOPs reduction but with the intent of real-time acceleration. However, wall-clock speedup is different from FLOPs reduction. It may not be beneficial to change the current computing flow, which is optimized in modern software/hardware; this is one of the major reasons why the reported wall-clock time even slows down. The problem may be alleviated with optimization efforts on software or hardware, but then it is unclear how well it would fare compared with fine-grain pruning solutions (Han et al. 2015b, Han et al. 2016 & Han et al. 2017), which achieved a higher FLOP reduction and a great wall-clock speedup with optimized hardware (ASICs and FPGAs);
2) If it is OK to target FLOPs reduction (without comparison with fine-grain pruning solutions),
2.1) In the LSTM experiments, the major producer of FLOPs -- the output layer -- is excluded, and this exclusion was hidden in the first version. Although the author(s) claimed that an output layer could be compressed, this is not shown in the paper. Compressing the output layer will reduce model capacity, making other layers more difficult to compress.
2.2) In the CNN experiments, the improvements on CIFAR-10 are within random variation and not statistically significant. In Table 2, "Regular low-rank MobileNet" improves on the original MobileNet, showing that the original MobileNet (an arXiv paper) is not well designed. "Adaptive Low-rank MobileNet" improves accuracy upon "Regular low-rank MobileNet", but uses 0.3M more parameters. The trade-off is unclear.
In addition to these remaining concerns of Reviewer 4, the AC feels that the paper essentially modifies the original network structure in a very specific way: adding a particular nonlinear layer between two adjacent layers. Thus it seems a little unfair to mainly use low-rank factorization (which can be considered a compression technique that barely changes the network architecture) for comparison. Adding comparisons with fine-grain pruning solutions (Han et al. 2015b, Han et al. 2016 & Han et al. 2017) and with the large number of more recent related references inspired by the low-rank baseline (M. Jaderberg et al. 2014), as listed by Reviewer 4, would make the proposed method much more convincing.
"rkxICFPfkN",
"SyldotDfkV",
"Syg3sevJJ4",
"SygF_SIkkV",
"SJxCZq2i0Q",
"B1g0zh_7hQ",
"ryxVHego0Q",
"Byxuj9_cAQ",
"SJxCVNotR7",
"S1e1aJRdTX",
"HyxyQ-C_6X",
"SJeU9eAupQ",
"Bkg8CkAupQ",
"rylcGdES6m",
"r1lHGXH0hX"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We will review these papers and add them into revision accordingly.",
"Dear AC,\n\nThanks for your time and detailed comments. Please find our responses below.\n\n- Is the proposed adaptive mixture of low-rank factorization network trained end-to-end, or trained by approximating Wh, where W is pre-trained?\n\nIn... | [
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6
] | [
-1,
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
4
] | [
"Syg3sevJJ4",
"SygF_SIkkV",
"iclr_2019_r1xFE3Rqt7",
"iclr_2019_r1xFE3Rqt7",
"ryxVHego0Q",
"iclr_2019_r1xFE3Rqt7",
"Byxuj9_cAQ",
"SJxCVNotR7",
"Bkg8CkAupQ",
"rylcGdES6m",
"B1g0zh_7hQ",
"r1lHGXH0hX",
"S1e1aJRdTX",
"iclr_2019_r1xFE3Rqt7",
"iclr_2019_r1xFE3Rqt7"
] |
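The core idea of the record above — approximating a large weight matrix by a mixture of low-rank factor pairs whose mixing coefficients depend on the input — can be written down compactly. A PyTorch sketch, with the softmax gating network and initialization chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn as nn

class AdaptiveLowRankLinear(nn.Module):
    """Approximates a (d_out x d_in) weight matrix with K rank-r factor
    pairs, mixed with input-dependent coefficients."""
    def __init__(self, d_in, d_out, rank=8, num_mixtures=4):
        super().__init__()
        self.V = nn.Parameter(torch.randn(num_mixtures, rank, d_in) * 0.02)
        self.U = nn.Parameter(torch.randn(num_mixtures, d_out, rank) * 0.02)
        self.gate = nn.Linear(d_in, num_mixtures)  # illustrative gating net

    def forward(self, x):                             # x: (batch, d_in)
        a = torch.softmax(self.gate(x), dim=1)        # (batch, K)
        h = torch.einsum("kri,bi->bkr", self.V, x)    # project down
        y = torch.einsum("kor,bkr->bko", self.U, h)   # project up
        return (a.unsqueeze(2) * y).sum(dim=1)        # mix: (batch, d_out)

layer = AdaptiveLowRankLinear(d_in=256, d_out=256)
out = layer(torch.randn(16, 256))  # -> (16, 256)
```

Each forward pass costs K low-rank products plus a small gating layer, which is where a FLOPs reduction relative to a full d_out x d_in matrix would come from when K * rank is small.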
iclr_2019_r1xN5oA5tm | Phrase-Based Attentions | Most state-of-the-art neural machine translation systems, despite being different
in architectural skeletons (e.g., recurrence, convolutional), share an indispensable
feature: the Attention. However, most existing attention methods are token-based
and ignore the importance of phrasal alignments, the key ingredient for the success
of phrase-based statistical machine translation. In this paper, we propose
novel phrase-based attention methods to model n-grams of tokens as attention
entities. We incorporate our phrase-based attentions into the recently proposed
Transformer network, and demonstrate that our approach yields improvements of
1.3 BLEU for English-to-German and 0.5 BLEU for German-to-English translation
tasks, and 1.75 and 1.35 BLEU points for English-to-Russian and Russian-to-English translation tasks
on WMT newstest2014 using WMT’16 training data.
| rejected-papers | All reviewers agree in their assessment that this paper does not meet the bar for ICLR. The area chair commends the authors for their detailed responses. | train | [
"HJgip_gXAQ",
"HJxzKF1f0X",
"HJxkwY1zCm",
"BylNrF1z0Q",
"BJxjIGQ_aQ",
"SJlA4MX_T7",
"BkefzM7uTQ",
"ryxlgfm_p7",
"rkeJdZm_a7",
"r1gQl-md6m",
"HJeKLWXOT7",
"Byl8VZmd67",
"S1lPclX_p7",
"HJevZ28C27",
"r1eHjtZq27",
"rkgK5dQv3X"
] | [
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for the very detailed response!\n\nMy main concern is still that there are multiple moving parts whose contribution is not clearly disentangled. Most strikingly, in Table 1, three different configurations obtain the best results on four datasets. Of course, there is not one method that will work best on ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
5,
5,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
5
] | [
"BJxjIGQ_aQ",
"rkgK5dQv3X",
"r1eHjtZq27",
"HJevZ28C27",
"SJlA4MX_T7",
"BkefzM7uTQ",
"ryxlgfm_p7",
"rkgK5dQv3X",
"HJeKLWXOT7",
"S1lPclX_p7",
"Byl8VZmd67",
"r1eHjtZq27",
"HJevZ28C27",
"iclr_2019_r1xN5oA5tm",
"iclr_2019_r1xN5oA5tm",
"iclr_2019_r1xN5oA5tm"
] |
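One plausible reading of "model n-grams of tokens as attention entities" in the record above is to build phrase-level keys by convolving token keys with an n-wide filter before standard scaled dot-product attention. The sketch below encodes only that reading; the paper's actual phrase-attention variants and their integration into the Transformer are more involved.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def phrase_attention(q, k, v, n=2):
    """Scaled dot-product attention whose keys are n-gram summaries of
    the token keys, built with a width-n convolution. The convolution is
    created here with random weights purely to illustrate shapes; in a
    real model it would be a trained module.

    q: (batch, len_q, d); k, v: (batch, len_k, d)
    """
    d = k.size(-1)
    conv = nn.Conv1d(d, d, kernel_size=n, padding=n - 1)
    k_phrase = conv(k.transpose(1, 2)).transpose(1, 2)[:, : k.size(1)]
    scores = q @ k_phrase.transpose(1, 2) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

out = phrase_attention(torch.randn(2, 5, 16),
                       torch.randn(2, 7, 16),
                       torch.randn(2, 7, 16))  # -> (2, 5, 16)
```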
iclr_2019_r1xRW3A9YX | Riemannian TransE: Multi-relational Graph Embedding in Non-Euclidean Space | Multi-relational graph embedding, which aims at achieving effective representations with few low-dimensional parameters, has been widely used in knowledge base completion. Although knowledge base data usually contains tree-like or cyclic structure, none of the existing approaches can embed such data into a space compatible with that structure. To overcome this problem, a novel framework called Riemannian TransE is proposed in this paper to embed the entities in a Riemannian manifold. Riemannian TransE models each relation as a move towards a point and defines a specific novel dissimilarity for each relation, so that all relations are naturally embedded in correspondence with the structure of the data. Experiments on several knowledge base completion tasks show that, with an appropriate choice of manifold, Riemannian TransE achieves good performance even with significantly fewer parameters. | rejected-papers | This paper proposes a generalization of the translation-style embedding approaches for link prediction to Riemannian manifolds. The reviewers feel this is an important contribution to the recent work on embedding graphs into non-Euclidean spaces, especially since this work focuses on multi-relational links, thus supporting knowledge graph completion. The results on WN11 and FB13 are also promising.
The reviewers and AC note the following potential weaknesses: (1) the primary concern is the low performance on the benchmarks, especially WN18 and FB15k, and not using the appropriate versions (WN18-RR and FB15k-237), (2) use of hyperbolic embedding for an entity shared across all relations, and (3) lack of discussion/visualization of the learned geometry.
During the discussion phase, the authors clarified reviewer 1's concern regarding the difference in performance between HolE and ComplEx, along with providing a revision that addressed some of the clarity issues raised by reviewer 3. The authors also justified the lower performance by (1) their focus on the low-dimensionality setting, and (2) the fact that not all datasets fit the space of the proposed model (e.g., FB15k). However, reviewers 2 and 3 still maintain that the results provide insufficient evidence for the need for Riemannian spaces over Euclidean ones, especially for larger, and more realistic, knowledge graphs.
The reviewers and the AC agree that the paper should not be accepted in the current state.
| train | [
"rJgVurunjX",
"ByxG803IRm",
"S1g6X03LRm",
"Sklj263IRQ",
"HklXoxNThm",
"SJgeuTL5hX",
"BJgB_El19m",
"HJlnVGo6Fm",
"BylRsQETKm",
"HyxxjkO2KQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"public"
] | [
"This paper presents a generalization of TransE to Riemannian manifolds. While this work falls into the class of interesting recent approaches for using non-Euclidean spaces for knowledge graph embeddings, I found it very hard to digest (e.g. the first paragraph in Section 3.3). Figure 3 and 4 confused me more than... | [
5,
-1,
-1,
-1,
5,
5,
-1,
-1,
-1,
-1
] | [
2,
-1,
-1,
-1,
5,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1xRW3A9YX",
"rJgVurunjX",
"SJgeuTL5hX",
"HklXoxNThm",
"iclr_2019_r1xRW3A9YX",
"iclr_2019_r1xRW3A9YX",
"HJlnVGo6Fm",
"BylRsQETKm",
"HyxxjkO2KQ",
"iclr_2019_r1xRW3A9YX"
] |
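As a concrete instance of the record above's "embed the entities in a Riemannian manifold," one can score TransE-style triples on the Poincaré ball, translating the head by the relation with Möbius addition and measuring geodesic distance. Both formulas below are standard; using Möbius addition as the "move" is an illustrative simplification of the paper's relation-specific dissimilarities.

```python
import numpy as np

def mobius_add(u, v):
    """Mobius addition: the Poincare-ball analogue of vector translation."""
    uv = np.sum(u * v, axis=-1, keepdims=True)
    uu = np.sum(u * u, axis=-1, keepdims=True)
    vv = np.sum(v * v, axis=-1, keepdims=True)
    return ((1 + 2 * uv + vv) * u + (1 - uu) * v) / (1 + 2 * uv + uu * vv)

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance on the Poincare ball (standard formula)."""
    duv = np.sum((u - v) ** 2, axis=-1)
    uu = np.sum(u * u, axis=-1)
    vv = np.sum(v * v, axis=-1)
    return np.arccosh(1 + 2 * duv / ((1 - uu) * (1 - vv) + eps))

def hyperbolic_transe_score(h, r, t):
    """TransE-style plausibility: negative manifold distance between the
    relation-translated head and the tail."""
    return -poincare_distance(mobius_add(h, r), t)

h, r, t = 0.1 * np.random.randn(3, 8)  # small norms keep points in the ball
print(hyperbolic_transe_score(h, r, t))
```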
iclr_2019_r1xYr3C5t7 | Neural Message Passing for Multi-Label Classification | Multi-label classification (MLC) is the task of assigning a set of target labels for a given sample. Modeling the combinatorial label interactions in MLC has been a long-haul challenge. Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC. However, the sequential nature of modeling label dependencies through an RNN limits its ability in parallel computation, predicting dense labels, and providing interpretable results. In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC. MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (can be used on known or unknown structured data). Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time. | rejected-papers | The reviewers highlighted aspects of the work that were interesting, particularly on the chosen topic of multi-label output of graph neural networks. However, no reviewer was willing to champion the paper, and in aggregate all reviewers trend towards rejection. | train | [
"B1e7a3fmC7",
"HkgBCuGmRm",
"rylTyYG7AX",
"r1lSDhf707",
"BJeNI3GmAm",
"HJeVrnzmR7",
"Hylf42fm0X",
"rkghLczXCQ",
"H1gKXqMmRQ",
"S1gib5GQ0m",
"B1eexqMXCX",
"BkecRtfmCX",
"BygWhYGQC7",
"rkgQSKGQRQ",
"rJxlVtfmAQ",
"ryxPMFfmC7",
"rygMZtfmR7",
"B1lmRDA037",
"ByeckYZ937",
"Sylw5GMBnm"... | [
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank our reviewers for providing valuable comments and questions. Please see our revised manuscript, which we have updated to reflect the following changes. \n \n+ We have changed our title from “Graph2Graph Networks for Multi-Label Classification” to “Neural Message Passing for Multi-Label Class... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"iclr_2019_r1xYr3C5t7",
"ByeckYZ937",
"ByeckYZ937",
"B1lmRDA037",
"B1lmRDA037",
"B1lmRDA037",
"B1lmRDA037",
"Sylw5GMBnm",
"Sylw5GMBnm",
"Sylw5GMBnm",
"Sylw5GMBnm",
"Sylw5GMBnm",
"Sylw5GMBnm",
"ByeckYZ937",
"ByeckYZ937",
"ByeckYZ937",
"ByeckYZ937",
"iclr_2019_r1xYr3C5t7",
"iclr_20... |
iclr_2019_r1xkIjA9tX | q-Neurons: Neuron Activations based on Stochastic Jackson's Derivative Operators | We propose a new generic type of stochastic neurons, called q-neurons, that considers activation functions based on Jackson's q-derivatives, with stochastic parameters q. Our generalization of neural network architectures with q-neurons is shown to be both scalable and very easy to implement. We experimentally demonstrate consistently improved performance over state-of-the-art standard activation functions, on both training and test losses.
| rejected-papers | This paper proposes a new type of activation function based on q-calculus. The reviewers found that the paper is significantly lacking in its presentation, in clarity, and in its experimental evaluation. The motivation of the method raised several significant questions for the reviewers, and the proposed method is not sufficiently compared to existing approaches for (noisy) activation functions. After the reviews, the authors failed to present any updates to their paper. | val | [
"rklyCuc_37",
"HkeiFZZKC7",
"rklk9KW9hX",
"H1e4gE9Oh7"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer"
] | [
"\n############ Updated Review #################\n\nI have read the author(s)' rebuttal. My decision stays unchanged. In my opinion, this first step is not significant enough, and the presentation is clearly below the acceptance threshold for ICLR. Additionally, the author(s) did not update their submission to refl... | [
2,
-1,
6,
5
] | [
5,
-1,
3,
3
] | [
"iclr_2019_r1xkIjA9tX",
"iclr_2019_r1xkIjA9tX",
"iclr_2019_r1xkIjA9tX",
"iclr_2019_r1xkIjA9tX"
] |
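Jackson's q-derivative, the object named in the record above, is D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), which recovers f'(x) as q -> 1. A NumPy sketch of an activation built from it with a stochastic q follows; the Gaussian perturbation of q and the numerical guard near x = 0 are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np

def jackson_q_derivative(f, x, q):
    """Jackson's q-derivative: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x).
    As q -> 1 this recovers the ordinary derivative f'(x)."""
    denom = (q - 1.0) * x
    denom = np.where(np.abs(denom) < 1e-8, 1e-8, denom)  # guard x ~ 0
    return (f(q * x) - f(x)) / denom

def q_neuron(x, f=np.tanh, noise_scale=0.1, rng=np.random):
    """Stochastic q-activation: draw q slightly above 1 on each forward
    pass and activate with the q-derivative of a base function."""
    q = 1.0 + 1e-3 + noise_scale * np.abs(rng.randn(*np.shape(x)))
    return jackson_q_derivative(f, x, q)

print(q_neuron(np.linspace(-2.0, 2.0, 5)))
```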
iclr_2019_r1xrb3CqtQ | Latent Domain Transfer: Crossing modalities with Bridging Autoencoders | Domain transfer is an exciting and challenging branch of machine learning because models must learn to smoothly transfer between domains, preserving local variations and capturing many aspects of variation without labels.
However, most successful applications to date require the two domains to be closely related (ex. image-to-image, video-video),
utilizing similar or shared networks to transform domain specific properties like texture, coloring, and line shapes.
Here, we demonstrate that it is possible to transfer across modalities (ex. image-to-audio) by first abstracting the data with latent generative models and then learning transformations between latent spaces.
We find that a simple variational autoencoder is able to learn a shared latent space to bridge between two generative models in an unsupervised fashion, and even between different types of models (ex. variational autoencoder and a generative adversarial network).
We can further impose desired semantic alignment of attributes with a linear classifier in the shared latent space.
The proposed variational autoencoder preserves both locality and semantic alignment through the transfer process, as shown in qualitative and quantitative evaluations.
Finally, the hierarchical structure decouples the cost of training the base generative models and semantic alignments, enabling computationally efficient and data efficient retraining of personalized mapping functions. | rejected-papers | This paper studies the problem of heterogeneous domain transfer, for example across different data modalities.
The comments of the reviewers are overlapping to a great extent. On the one hand, the reviewers and AC agree that the problem considered is very interesting and deserves more attention.
On the other hand, the reviewers have raised concerns about the amount of novelty contained in this manuscript, as well as convincingness of results. The AC understands the authors’ argument that a simple method can be a feature and not a flaw, however this work still does not feel complete. Even within a relatively simple framework, it would be desirable to examine the problem from multiple angles and "disentangle" the effects of the different hypotheses – for example the reviewers have drawn attention to end-to-end training and comparison with other baselines. The points raised above, together with improving the manuscript (as commented by reviewers) would make this work more complete. | train | [
"SJgnipVK0Q",
"B1epHTEY0Q",
"S1x-p2EFA7",
"SJgdto_epX",
"H1xi2iv5hm",
"H1gpdRy5hm",
"r1eOUNxx5X"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author"
] | [
"Thank you for your time and insight in your review. We’ve done our best to address your key points below. \n\n> The technical parts are weak since the authors use the existing method with to some extent evolution. \n\nWe would like to highlight that the problem this paper addresses (cross-modal domain transfer) is... | [
-1,
-1,
-1,
4,
4,
4,
-1
] | [
-1,
-1,
-1,
4,
4,
4,
-1
] | [
"SJgdto_epX",
"H1xi2iv5hm",
"H1gpdRy5hm",
"iclr_2019_r1xrb3CqtQ",
"iclr_2019_r1xrb3CqtQ",
"iclr_2019_r1xrb3CqtQ",
"iclr_2019_r1xrb3CqtQ"
] |
iclr_2019_r1xurn0cKQ | Correction Networks: Meta-Learning for Zero-Shot Learning | We propose a model that learns to perform zero-shot classification using a meta-learner that is trained to produce a correction to the output of a previously trained learner. The model consists of two modules: a task module that supplies an initial prediction, and a correction module that updates the initial prediction. The task module is the learner and the correction module is the meta-learner. The correction module is trained in an episodic approach whereby many different task modules are trained on various subsets of the total training data, with the rest being used as unseen data for the correction module. The correction module takes as input a representation of the task module's training data so that the predicted correction is a function of the task module's training data. The correction module is trained to update the task module's prediction to be closer to the target value. This approach leads to state-of-the-art performance for zero-shot classification on natural language class descriptions on the CUB and NAB datasets. | rejected-papers | This is a difficult decision, as the reviewers are quite polarized on this paper, and did not come to a consensus through discussion. The positive elements of the paper are that the method itself is a novel and interesting approach, and that the performance is clearly state of the art. While impressive, the fact that a relatively simple task module trained on the features from Zhu et al. can match the performance of GAZSL suggests that it is difficult to compare these methods in an apples-to-apples way without using consistent features. There are two ways to deal with this: train the baseline methods using the features of Zhu, or train correction networks using less powerful features from other baselines.
Reviewer 3 pointed this out, and asked for such a comparison. The defense given by the authors is that they use the same features as the current SOTA baselines, and therefore their comparison is sound. I agree to an extent; however, it should be relatively simple to either elevate the other baselines or compare correction networks with different features. Otherwise, most of the rows in Table 1 should be ignored. Running correction networks on different features in an ablation study would also demonstrate that the gains are consistent.
I think the authors should run these experiments, and if the results hold then there will be no doubt in my mind that this will be a worthy contribution. However, in their absence, I can’t say with certainty how effective the proposed method really is.
| train | [
"HyxukKdLhm",
"rJguR6XG07",
"r1gvx57MAm",
"H1lmxW7fAX",
"HkltO1Rdn7",
"r1lrIhav2Q"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"=== Post-rebuttal update ===\n\nThe authors' rebuttal provided many of the details I was seeking. I asked a few additional questions which were also recently addressed, and I encourage the authors to include these clarifications into the final draft of the paper.\n\nHence, I've increased my score for this paper.\n... | [
7,
-1,
-1,
-1,
4,
4
] | [
4,
-1,
-1,
-1,
5,
4
] | [
"iclr_2019_r1xurn0cKQ",
"HyxukKdLhm",
"r1lrIhav2Q",
"HkltO1Rdn7",
"iclr_2019_r1xurn0cKQ",
"iclr_2019_r1xurn0cKQ"
] |
iclr_2019_r1xwS3RqKQ | Differential Equation Networks | Most deep neural networks use simple, fixed activation functions, such
as sigmoids or rectified linear units, regardless of domain or
network structure. We introduce differential equation networks, an
improvement to modern neural networks in which each neuron learns the
particular nonlinear activation function that it requires. We show
that enabling each neuron with the ability to learn its own activation
function results in a more compact network capable of achieving
comparable, if not superior, performance when compared to much larger
networks. We
also showcase the capability of a differential equation neuron to
learn behaviors, such as oscillation, currently only obtainable by a
large group of neurons. The ability of
differential equation networks to essentially compress a large neural network, without loss of overall performance
makes them suitable for on-device applications, where predictions must
be computed locally. Our experimental evaluation on real-world and toy
datasets shows that differential equation networks outperform fixed-activation networks in several areas.
"r1lL2yBcnQ",
"HyxikBgohX",
"BylD8IttnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes the use of structured activations based on ordinary differential equations, as an activation in neural network architectures.\nThere are validations that the approach discovers different activations, and comparisons to a variety of other architectures with fixed activations. In general, I would... | [
4,
5,
5
] | [
4,
3,
4
] | [
"iclr_2019_r1xwS3RqKQ",
"iclr_2019_r1xwS3RqKQ",
"iclr_2019_r1xwS3RqKQ"
] |
iclr_2019_r1xwqjRcY7 | Probabilistic Semantic Embedding | We present an extension of a variational auto-encoder that creates semantically rich coupled probabilistic latent representations that capture the semantics of multiple modalities of data. We demonstrate this model through experiments using images and textual descriptors as inputs and images as outputs. Our latent representations are not only capable of driving a decoder to generate novel data, but can also be used directly for annotation or classification. Using the MNIST and Fashion-MNIST datasets we show that the embedding not only provides better reconstruction and classification performance than the current state-of-the-art, but it also allows us to exploit the semantic content of the pretrained word embedding spaces to do tasks such as image generation from labels outside of those seen during training. | rejected-papers | MNIST and small-picture variants are not that impressive.
It is a minor extension of VAEs, which are also not common in state-of-the-art systems. | train | [
"rJljNyM3AX",
"SyeMPFmFCX",
"BJewxnTWR7",
"rJloSsTb07",
"B1gbWspZ0Q",
"rylNt56-R7",
"H1xIrc6ZRX",
"Bygd1UYahX",
"S1lmzZyq2Q",
"B1gBxOSYnX"
] | [
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We use a three MLP with (28 * 28, 512) (512, 64) (64, latent_dimension).",
"Is the image encoder in PSE a three layer MLP with (28 * 28, 512) (512, 64) (64, latent_dimension) or it's a two layer MLP with (28 * 28, 512) (512, latent_dimension)",
"\"The experiments are only conducted on MNIST and Fashion-MNIST,... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
4
] | [
"SyeMPFmFCX",
"iclr_2019_r1xwqjRcY7",
"B1gBxOSYnX",
"B1gBxOSYnX",
"S1lmzZyq2Q",
"Bygd1UYahX",
"Bygd1UYahX",
"iclr_2019_r1xwqjRcY7",
"iclr_2019_r1xwqjRcY7",
"iclr_2019_r1xwqjRcY7"
] |
iclr_2019_r1xywsC9tQ | Mapping the hyponymy relation of wordnet onto vector Spaces | In this paper, we investigate mapping the hyponymy relation of
wordnet to feature vectors.
We aim to model lexical knowledge in such a way that it can be used as
input in generic machine-learning models, such as phrase entailment
predictors.
We propose two models. The first one leverages an existing mapping of
words to feature vectors (fasttext), and attempts to classify
such vectors as within or outside of each class. The second model is fully supervised,
using solely wordnet as a ground truth. It maps each concept to an
interval or a disjunction thereof.
On the first model, we approach, but do not quite attain, state-of-the-art
performance. The second model can achieve near-perfect accuracy.
| rejected-papers | All three reviewers found this to be an interesting exploration of a reasonable topic—the use of ontologies in word representations—but all three also expressed serious concerns about clarity and none could identify a concrete, sound result that the paper contributes to the field. | train | [
"rJxOzhIxaQ",
"ryxzHUBahQ",
"rJxE1ltc37",
"SJefL5lK2m"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for your reviews. As we see it, the reviewers missed the main point of the paper, hoping that it makes our intent clear. \n\nWe consider the task of mapping wordnet to vector spaces, giving two baselines. \n\n1. The first baseline is based on dividing fasttext into subspaces corresponding to predicates i... | [
-1,
3,
3,
3
] | [
-1,
4,
3,
5
] | [
"iclr_2019_r1xywsC9tQ",
"iclr_2019_r1xywsC9tQ",
"iclr_2019_r1xywsC9tQ",
"iclr_2019_r1xywsC9tQ"
] |
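The second model in the record above "maps each concept to an interval or a disjunction thereof," under which hyponymy becomes an interval-containment test. A minimal sketch of that test (the learning of the intervals themselves is omitted):

```python
def is_hyponym(child, parent):
    """Hyponymy as interval containment: concept A is a hyponym of B when
    every interval of A lies inside some interval of B. Each concept is a
    list of (lo, hi) pairs, allowing the disjunctions the abstract
    mentions."""
    return all(
        any(p_lo <= c_lo and c_hi <= p_hi for (p_lo, p_hi) in parent)
        for (c_lo, c_hi) in child
    )

dog = [(2.0, 3.0)]
animal = [(0.0, 10.0)]
assert is_hyponym(dog, animal)
assert not is_hyponym(animal, dog)
```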
iclr_2019_r1xyx3R9tQ | Prototypical Examples in Deep Learning: Metrics, Characteristics, and Utility | Machine learning (ML) research has investigated prototypes: examples that are representative of the behavior to be learned. We systematically evaluate five methods for identifying prototypes, both ones previously introduced as well as new ones we propose, finding all of them to provide meaningful but different interpretations. Through a human study, we confirm that all five metrics are well matched to human intuition. Examining cases where the metrics disagree offers an informative perspective on the properties of data and algorithms used in learning, with implications for data-corpus construction, efficiency, adversarial robustness, interpretability, and other ML aspects. In particular, we confirm that the "train on hard" curriculum approach can improve accuracy on many datasets and tasks, but that it is strictly worse when there are many mislabeled or ambiguous examples. | rejected-papers | This paper considers "prototypes" in machine learning, in which a small subset of a dataset is selected as representative of the behavior of the models. The authors propose a number of desiderata, and outline the connections to existing approaches. Further, they carry out evaluation with user studies to compare them with human intuition, and empirical experiments to compare them to each other. The reviewers agreed that the search for more concrete definitions of prototypes is a worthy one, and they appreciated the user studies.
The reviewers and AC note the following potential weaknesses: (1) the specific description of prototypes that the authors are using is not provided precisely, (2) the desiderata was found to be informal, leading to considerable confusion regarding the choices that are made and their compatibility with each other, (3) concerns in the evaluation regarding the practicality and the appropriateness of the user study for the goals of the paper.
Although the authors provided detailed responses to these concerns, most of them still remained. Both reviewer 1 and reviewer 2 encourage the authors to define the prototypes defined more precisely, providing motivation for the various choices therein. Even though some of the concerns raised by reviewer 3 were addressed, it still remains to be seen how scalable the approach is for real-world applications.
For these reasons, the reviewers and the AC feel that the authors would need to make substantial improvements for the paper to be accepted. | train | [
"BJxUO3CeJE",
"SJx27H8K3X",
"S1eer30eyV",
"BJe4inNiTX",
"BJlqqnEi6X",
"S1g4tQHqTX",
"rJepYgS5Tm",
"Hyx9Ogrc6Q",
"rkgrweBq6m",
"H1gQUgH9aX",
"BJxRElrcp7",
"rkljAzy56Q",
"rJldTzkqTX",
"Syx0EvauTm",
"B1lVFaNv6X",
"rygaIWalaQ",
"SJgsjxalT7"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"Being not practical refers to Section 4, utilizing prototypes to improve aspects of machine learning. It is true that generating adversarial examples is relatively efficient when given access to a trained model, but, if I'm not wrong, this model has to be trained on all the examples. It seems that the author did n... | [
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
5,
-1,
-1
] | [
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
-1,
-1
] | [
"SJgsjxalT7",
"iclr_2019_r1xyx3R9tQ",
"rygaIWalaQ",
"BJlqqnEi6X",
"BJxRElrcp7",
"rygaIWalaQ",
"Syx0EvauTm",
"Syx0EvauTm",
"Syx0EvauTm",
"Syx0EvauTm",
"Syx0EvauTm",
"B1lVFaNv6X",
"B1lVFaNv6X",
"iclr_2019_r1xyx3R9tQ",
"iclr_2019_r1xyx3R9tQ",
"SJx27H8K3X",
"SJx27H8K3X"
] |
iclr_2019_r1zOg309tX | Understanding the Effectiveness of Lipschitz-Continuity in Generative Adversarial Nets | In this paper, we investigate the underlying factor that leads to failure and success in the training of GANs. Specifically, we study the property of the optimal discriminative function f∗(x) and show that f∗(x) in most GANs can only reflect the local densities at x, which means the value of f∗(x) for points in the fake distribution (Pg) does not contain any information useful about the location of other points in the real distribution (Pr). Given that the supports of the real and fake distributions are usually disjoint, we argue that such an f∗(x) and its gradient tell nothing about "how to pull Pg to Pr", which turns out to be the fundamental cause of failure in the training of GANs. We further demonstrate that a well-defined distance metric (including the dual form of the Wasserstein distance with a compacted constraint) does not necessarily ensure the convergence of GANs. Finally, we propose the Lipschitz-continuity condition as a general solution and show that in a large family of GAN objectives, the Lipschitz condition is capable of connecting Pg and Pr through f∗(x) such that the gradient ∇xf∗(x) at each sample x∼Pg points towards some real sample y∼Pr. | rejected-papers | The paper investigates problems that can arise for a certain version of the dual form of the Wasserstein distance, which is proved in Appendix I. While the theoretical analysis seems correct, the significance of the contribution is limited by the fact that the specific dual form analysed is not commonly used in other works. Furthermore, the assumption that the optimal function is differentiable is often not fulfilled either. The paper would therefore be significantly strengthened by making clearer which of the methods used in practice the insights carry over to.
| train | [
"HJlOsZE20X",
"B1eWMFbd1N",
"B1ljRdWd1N",
"ByldHOC33m",
"HJg_BrpGkE",
"SyxXvRyhRX",
"HJxtQXln0X",
"BkgKq6kn0Q",
"rJgau013C7",
"Bkes1R1nC7",
"SkxDk51iCX",
"S1g40CRcA7",
"HJll5C050Q",
"HJlzFY9q0m",
"BJxOhIWq3X",
"rkxfycBIR7",
"rJeLkh9s6m",
"BJggS2qspm",
"HJelpj5j6X",
"SygATc9s6X"... | [
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"official... | [
"We really appreciate your effort. Thank you very much and we really enjoy the discussion with you. \n\nWe are very rigorous in the rebuttal and we have very carefully checked related materials. In particular, we read the Wikipedia and the proof of theorem 3 line by line, as well as the paper you mentioned: ‘‘Sobol... | [
-1,
-1,
-1,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
5
] | [
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
4
] | [
"HJxtQXln0X",
"iclr_2019_r1zOg309tX",
"ByldHOC33m",
"iclr_2019_r1zOg309tX",
"HJlOsZE20X",
"iclr_2019_r1zOg309tX",
"Bkes1R1nC7",
"SkxDk51iCX",
"SyxXvRyhRX",
"SkxDk51iCX",
"HJll5C050Q",
"HJlzFY9q0m",
"HJlzFY9q0m",
"rJeLkh9s6m",
"iclr_2019_r1zOg309tX",
"BJxOhIWq3X",
"BJxOhIWq3X",
"BJx... |
iclr_2019_r1zmVhCqKm | Text Infilling | Recent years have seen remarkable progress in text generation in different contexts, including the most common setting of generating text from scratch, the increasingly popular paradigm of retrieval and editing, and others. Text infilling, which fills in missing text portions of a sentence or paragraph, also has numerous real-life uses. Previous work has focused on restricted settings, either assuming a single word per missing portion, or limiting a single missing portion to the end of the text. This paper studies the general task of text infilling, where the input text can have an arbitrary number of portions to be filled, each of which may require an arbitrary unknown number of tokens.
We develop a self-attention model with segment-aware position encoding for precise global context modeling.
We further create a variety of supervised data by masking out text in different domains with varying missing ratios and masking strategies. Extensive experiments show the proposed model performs significantly better than other methods and generates meaningful text patches. | rejected-papers | Although the problem of text infilling itself is interesting, the reviewers were not certain about the extent of the experiments and how they shed light on whether, how, and why the proposed approach is better than existing approaches.
"ryxzxgQc37",
"HygOvpY_2X",
"SklOXypTjm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper proposes a setting for evaluation of a text infilling task, where a system needs to fill in the blanks in a provided incomplete sentences. The authors select sentences from three different sources, Yahoo Reviews, fairy tales, and NBA scripts, and blank out words with varying strategies, ranging from taki... | [
3,
5,
6
] | [
4,
4,
4
] | [
"iclr_2019_r1zmVhCqKm",
"iclr_2019_r1zmVhCqKm",
"iclr_2019_r1zmVhCqKm"
] |
iclr_2019_r1znKiAcY7 | Few-shot Classification on Graphs with Structural Regularized GCNs | We consider the fundamental problem of semi-supervised node classification in attributed graphs with a focus on \emph{few-shot} learning. Here, we propose Structural Regularized Graph Convolutional Networks (SRGCN), novel neural network architectures extending the well-known GCN structures by stacking transposed convolutional layers for reconstruction of input features. We add a reconstruction error term in the loss function as a regularizer. Unlike standard regularization such as L1 or L2, which controls the model complexity by including a penalty term that depends solely on the parameters, our regularization function is parameterized by a trainable neural network whose structure depends on the topology of the underlying graph. The new approach effectively addresses the shortcomings of previous graph convolution-based techniques for learning classifiers in the few-shot regime and significantly improves generalization performance over original GCNs when the number of labeled samples is insufficient. Experimental studies on three challenging benchmarks demonstrate that the proposed approach matches state-of-the-art results and can improve classification accuracies by a notable margin when there are very few examples from each class. | rejected-papers | A new regularized graph CNN approach is proposed for semi-supervised learning on graphs. The conventional graph CNN is concatenated with a transposed network, which is used to supplement the supervised loss w.r.t. the labeled part of the graph with an unsupervised loss that serves as a regularizer measuring reconstruction errors of features. While this extension performs well and was found to be interesting in general by the reviewers, the novelty of the approach (adding a reconstruction loss), the completeness of the experimental evaluation, and the presentation quality have also been questioned consistently. The paper has improved during the course of the review, but overall the AC judges that the paper is not up to ICLR 2019 standards in its current form.
| train | [
"S1eCA4l927",
"HkxvGahOnm",
"rkli05svnm",
"rylS6OJiRm",
"SkghBg9YCX",
"S1l0cpFtCm",
"HJePTS9YRQ",
"HyGXiScY0X"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes to regularize the training of graph convolutional neural networks by adding a reconstruction loss to the supervised loss. Results are reported on citation benchmarks and compared for increasing number of labeled data.\n\nThe presentation of the paper could be significantly improved. Details of ... | [
4,
6,
5,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2019_r1znKiAcY7",
"iclr_2019_r1znKiAcY7",
"iclr_2019_r1znKiAcY7",
"SkghBg9YCX",
"rkli05svnm",
"HkxvGahOnm",
"HyGXiScY0X",
"S1eCA4l927"
] |
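The SRGCN record above stacks a GCN classifier with a mirror decoder and adds the feature-reconstruction error to the loss. Below is a dense-adjacency PyTorch sketch under that reading; the decoder design, the absence of weight sharing, and the weighting `lam` are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRGCNSketch(nn.Module):
    """GCN encoder plus a mirror decoder that reconstructs the input
    features; the reconstruction error acts as a graph-structured
    regularizer. A_hat is the dense normalized adjacency."""
    def __init__(self, d_in, d_hid, n_classes):
        super().__init__()
        self.enc1, self.enc2 = nn.Linear(d_in, d_hid), nn.Linear(d_hid, n_classes)
        self.dec1, self.dec2 = nn.Linear(n_classes, d_hid), nn.Linear(d_hid, d_in)

    def forward(self, A_hat, X):
        h = F.relu(A_hat @ self.enc1(X))
        logits = A_hat @ self.enc2(h)
        r = F.relu(A_hat @ self.dec1(logits))
        X_rec = A_hat @ self.dec2(r)
        return logits, X_rec

def srgcn_loss(logits, X_rec, X, y, labeled_mask, lam=0.5):
    sup = F.cross_entropy(logits[labeled_mask], y[labeled_mask])
    rec = F.mse_loss(X_rec, X)  # regularizer shaped by graph topology
    return sup + lam * rec

n, d_in, d_hid, n_cls = 50, 32, 16, 4
A_hat = torch.eye(n)  # stands in for the normalized adjacency
X, y = torch.randn(n, d_in), torch.randint(0, n_cls, (n,))
mask = torch.zeros(n, dtype=torch.bool)
mask[:5] = True  # few-shot: only five labeled nodes
model = SRGCNSketch(d_in, d_hid, n_cls)
loss = srgcn_loss(*model(A_hat, X), X, y, mask)
```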
iclr_2019_r1ztwiCcYQ | VARIATIONAL SGD: DROPOUT, GENERALIZATION AND CRITICAL POINT AT THE END OF CONVEXITY | The goal of the paper is to propose an algorithm for learning the most generalizable solution from given training data. It is shown that the Bayesian approach leads to a solution that depends on the statistics of the training data and not on particular
samples. The solution is stable under perturbations of the training data because it is defined by an integral contribution of multiple maxima of the likelihood and not by a single global maximum. Specifically, the Bayesian probability distribution
of parameters (weights) of a probabilistic model given by a neural network is estimated via recurrent variational approximations. Derived recurrent update rules correspond to SGD-type rules for finding a minimum of an effective loss that is an average of an original negative log-likelihood over the Gaussian distributions of weights, which makes it a function of means and variances. The effective loss is convex for large variances and non-convex in the limit of small variances. Among stationary solutions of the update rules there are trivial solutions with zero variances at local minima of the original loss and a single non-trivial solution with finite variances that is a critical point at the end of convexity of the effective loss
in the mean-variance space. At the critical point both first- and second-order gradients of the effective loss w.r.t. means are zero. The empirical study confirms that the critical point represents the most generalizable solution. While the location of
the critical point in the weight space depends on specifics of the used probabilistic model, some properties at the critical point are universal and model-independent. | rejected-papers | This paper studies a variational formulation of loss minimization in order to find the solution that generalizes best. An expectation of the loss w.r.t. a Gaussian distribution is minimized to find the mean and variance of the Gaussian distribution. As the variance goes to zero, we recover the original loss, but for a higher value of the variance, the loss may be convex. This is used to study the generalizability of the landscape.
Both the objective and the solutions of the paper are unclear and not communicated well. There are not enough citations to previous work (e.g., Gaussian homotopy considers exactly this problem, and there are papers that study the convexity of the expectation of the loss function). There are also no experimental results to confirm the theoretical findings.
All the reviewers struggle to understand both the problem and solutions discussed in this paper. I believe that the paper could become useful if reviewers' feedback is taken seriously to improve the paper. | train | [
"SJlehN5PA7",
"r1g4PM0uCX",
"ryxFKHR_0Q",
"HkgcijgT27",
"rJlqb64qn7",
"Hyl3G76N27"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This note is in support of the claim of the paper regarding generalization for critical point solution.\n\nWhile general proof is still in work below we consider toy one-dimensional examples that explains how generalization error may be lower at critical point: we show that absolute value of expectation of genera... | [
-1,
-1,
-1,
4,
2,
2
] | [
-1,
-1,
-1,
3,
4,
5
] | [
"iclr_2019_r1ztwiCcYQ",
"HkgcijgT27",
"rJlqb64qn7",
"iclr_2019_r1ztwiCcYQ",
"iclr_2019_r1ztwiCcYQ",
"iclr_2019_r1ztwiCcYQ"
] |
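The "effective loss" in the record above is the average of the original loss over Gaussian-distributed weights, making it a function of means and variances. A Monte-Carlo sketch with reparameterization follows; the sample count and the toy loss are illustrative.

```python
import torch

def effective_loss(loss_fn, mu, log_sigma, num_samples=8):
    """Monte-Carlo estimate of E_{w ~ N(mu, sigma^2)}[loss(w)].
    Reparameterization keeps it differentiable in both mu and sigma."""
    sigma = log_sigma.exp()
    total = 0.0
    for _ in range(num_samples):
        w = mu + sigma * torch.randn_like(mu)
        total = total + loss_fn(w)
    return total / num_samples

# Toy non-convex loss: smoothing with a large sigma flattens its wiggles.
mu = torch.tensor([2.0], requires_grad=True)
log_sigma = torch.tensor([0.0], requires_grad=True)
L = effective_loss(lambda w: torch.sin(3 * w).sum() + 0.1 * (w ** 2).sum(),
                   mu, log_sigma)
L.backward()
```

With a large sigma the smoothed landscape is the convex-for-large-variance regime the abstract describes; letting sigma shrink toward zero recovers the original non-convex loss.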
iclr_2019_rJ4qXnCqFX | Probabilistic Knowledge Graph Embeddings | We develop a probabilistic extension of state-of-the-art embedding models for link prediction in relational knowledge graphs. Knowledge graphs are collections of relational facts, where each fact states that a certain relation holds between two entities, such as people, places, or objects. We argue that knowledge graphs should be treated within a Bayesian framework because even large knowledge graphs typically contain only a few facts per entity, leading effectively to a small data problem where parameter uncertainty matters. We introduce a probabilistic reinterpretation of the DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) models and employ variational inference to estimate a lower bound on the marginal likelihood of the data. We find that the main benefit of the Bayesian approach is that it allows for efficient, gradient based optimization over hyperparameters, which would lead to divergences in a non-Bayesian treatment. Models with such learned hyperparameters improve over the state-of-the-art by a significant margin, as we demonstrate on several benchmarks. | rejected-papers | The paper proposes a Bayesian extension to existing knowledge base embedding methods (like DistMult and ComplEx), which is applied for hyperparameter learning. While using Bayesian inference for hyperparameter tuning of embedding methods is not generally novel, it has not been used in the context of knowledge graph modelling before. The paper could be strengthened by comparing the method to other strategies of hyperparameter selection to prove the significance of the advantage brought by the method.
"SkesolC-Am",
"SyxIO5TW0Q",
"r1l78tTb0m",
"rylFNAMQaQ",
"S1lKg0Iq3m",
"SyxbX4AFnQ"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We respond to each of your questions below:\n\n> Sec 3 can be regarded as a knowledge base extension of [Bayesian treatments to\n> embedding methods] with a different likelihood [...].\n\nWe agree with the reviewer’s summary of our theoretical contributions. In addition, we would like to stress our experimental co... | [
-1,
-1,
-1,
5,
6,
5
] | [
-1,
-1,
-1,
2,
3,
3
] | [
"SyxbX4AFnQ",
"S1lKg0Iq3m",
"rylFNAMQaQ",
"iclr_2019_rJ4qXnCqFX",
"iclr_2019_rJ4qXnCqFX",
"iclr_2019_rJ4qXnCqFX"
] |
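A minimal sketch of the "probabilistic reinterpretation of DistMult" from the record above: Gaussian posteriors over entity and relation embeddings, sampled by reparameterization before the usual trilinear score. The KL regularizer and the gradient-based hyperparameter learning that the paper emphasizes are omitted, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class VariationalDistMult(nn.Module):
    """DistMult with Gaussian posteriors over entity/relation embeddings.
    Scoring a triple samples embeddings via reparameterization."""
    def __init__(self, n_ent, n_rel, dim=64):
        super().__init__()
        self.ent_mu = nn.Embedding(n_ent, dim)
        self.ent_logvar = nn.Embedding(n_ent, dim)
        self.rel_mu = nn.Embedding(n_rel, dim)
        self.rel_logvar = nn.Embedding(n_rel, dim)

    def _sample(self, mu, logvar):
        return mu + (0.5 * logvar).exp() * torch.randn_like(mu)

    def forward(self, h, r, t):
        eh = self._sample(self.ent_mu(h), self.ent_logvar(h))
        wr = self._sample(self.rel_mu(r), self.rel_logvar(r))
        et = self._sample(self.ent_mu(t), self.ent_logvar(t))
        return (eh * wr * et).sum(dim=-1)  # DistMult trilinear score

model = VariationalDistMult(n_ent=100, n_rel=10)
score = model(torch.tensor([0]), torch.tensor([1]), torch.tensor([2]))
```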
iclr_2019_rJ4vlh0qtm | SSoC: Learning Spontaneous and Self-Organizing Communication for Multi-Agent Collaboration | Multi-agent collaboration is required by numerous real-world problems. Although a distributed setting is usually adopted by practical systems, local-range communication and information aggregation still matter in fulfilling complex tasks. For multi-agent reinforcement learning, many previous studies have been dedicated to designing an effective communication architecture. However, existing models usually suffer from an ossified communication structure, e.g., most of them predefine a particular communication mode by specifying a fixed time frequency and spatial scope for agents to communicate regardless of necessity. Such a design is incapable of dealing with multi-agent scenarios that are capricious and complicated, especially when only partial information is available. Motivated by this, we argue that the solution is to build a spontaneous and self-organizing communication (SSoC) learning scheme. By treating the communication behaviour as an explicit action, SSoC learns to organize communication in an effective and efficient way. Particularly, it enables each agent to spontaneously decide when and to whom to send messages based on its observed states. In this way, a dynamic inter-agent communication channel is established in an online and self-organizing manner. The agents also learn how to adaptively aggregate the received messages and their own hidden states to execute actions. Various experiments have been conducted to demonstrate that SSoC really learns intelligent message passing among agents located far apart. With such agile communication, we observe that effective collaboration tactics emerge which have not been mastered by the compared baselines. | rejected-papers | The reviewers raised a number of major concerns, including the incremental novelty of the proposed method and the poor readability of the presented material (lack of sufficient explanation and discussion). The authors decided to withdraw the paper.
"Skl9IBL9A7",
"SJgGnWw92m",
"BJgUO5sOn7",
"SkxIWUbt2Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In the revised paper, we have updated a complete appendix which is missing in the original submission. Thanks.",
"This paper proposes a spontaneous and self-organizing communication learning scheme in multi-agent RL setup. The problem is interesting. I mainly have one concern regarding its originality.\n\nFrom a... | [
-1,
4,
5,
5
] | [
-1,
3,
4,
3
] | [
"iclr_2019_rJ4vlh0qtm",
"iclr_2019_rJ4vlh0qtm",
"iclr_2019_rJ4vlh0qtm",
"iclr_2019_rJ4vlh0qtm"
] |
iclr_2019_rJEyrjRqYX | Reduced-Gate Convolutional LSTM Design Using Predictive Coding for Next-Frame Video Prediction | Spatiotemporal sequence prediction is an important problem in deep learning. We
study next-frame video prediction using a deep-learning-based predictive coding
framework that uses convolutional, long short-term memory (convLSTM) modules.
We introduce a novel reduced-gate convolutional LSTM architecture. Our
reduced-gate model achieves better next-frame prediction accuracy than the original
convolutional LSTM while using a smaller parameter budget, thereby reducing
training time. We tested our reduced gate modules within a predictive coding architecture
on the moving MNIST and KITTI datasets. We found that our reduced-gate
model has a significant reduction of approximately 40 percent of the total
number of training parameters and training time in comparison with the standard
LSTM model, which makes it attractive for hardware implementation, especially
on small devices. | rejected-papers | The submission suggests reducing the parameters in a convLSTM by replacing the 3 gates in the standard LSTM with one gate. The idea is to get a more efficient convolutional LSTM and use it for video prediction. Two of the reviewers found the manuscript and description of the work difficult to follow and the justification for the proposed method lacking. Additionally, the contribution of this submission feels rather thin, and the experimental results are not very convincing: the absolute training time is too coarse a measurement (and convergence may depend on many factors), and the improvements over PredNet seem somewhat marginal.
Finally, I agree with the reviewer who mentioned that a proper comparison with baselines should be done in such a way that the number of parameters is comparable (if #params is a main claim of the paper!). It is entirely plausible that if you reduce the number of parameters in PredNet by 40% (in some other way), its performance would also benefit.
With all this in mind, I do not recommend this paper be accepted at this time. | val | [
"SkxhhqFDJ4",
"rygGifMvJ4",
"rygDNTOHkN",
"rJeVYa8NCX",
"S1gT4QZc3m",
"B1eLNxt0Tm",
"r1eJYyFC6Q",
"BJlt-pOAp7",
"ryx7mFSq3Q",
"B1gC5rSPnX"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for pointing out the link to the revision. \n\nAfter rereading the paper, I'm still not entirely convinced by the proposed model, neither by the intuition or by the experiments. Since this paper conducts experimental research, it may make sense to compare the standard LSTM and the proposed rgcLSTM using a s... | [
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
5,
7
] | [
-1,
-1,
-1,
-1,
5,
-1,
-1,
-1,
4,
4
] | [
"rygGifMvJ4",
"rygDNTOHkN",
"BJlt-pOAp7",
"r1eJYyFC6Q",
"iclr_2019_rJEyrjRqYX",
"B1gC5rSPnX",
"S1gT4QZc3m",
"ryx7mFSq3Q",
"iclr_2019_rJEyrjRqYX",
"iclr_2019_rJEyrjRqYX"
] |
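The record above does not spell out which gates are merged, so the following PyTorch cell is only one way to realize a "reduced-gate" convLSTM: a single learned gate plays the forget, input, and output roles. The wiring is an assumption for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ReducedGateConvLSTMCell(nn.Module):
    """ConvLSTM cell with one learned gate shared by the forget, input,
    and output roles, cutting the per-cell gate parameters roughly in
    line with the ~40% reduction the abstract reports."""
    def __init__(self, ch_in, ch_hid, k=3):
        super().__init__()
        pad = k // 2
        self.gate_conv = nn.Conv2d(ch_in + ch_hid, ch_hid, k, padding=pad)
        self.cand_conv = nn.Conv2d(ch_in + ch_hid, ch_hid, k, padding=pad)

    def forward(self, x, state):
        h, c = state
        z = torch.cat([x, h], dim=1)
        g = torch.sigmoid(self.gate_conv(z))      # one gate, three roles
        c = g * c + (1 - g) * torch.tanh(self.cand_conv(z))
        h = g * torch.tanh(c)
        return h, (h, c)

cell = ReducedGateConvLSTMCell(3, 16)
h = torch.zeros(1, 16, 32, 32)
c = torch.zeros(1, 16, 32, 32)
out, (h, c) = cell(torch.randn(1, 3, 32, 32), (h, c))
```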
iclr_2019_rJG8asRqKX | A Deep Learning Approach for Dynamic Survival Analysis with Competing Risks | Currently available survival analysis methods are limited in their ability to deal with complex, heterogeneous, and longitudinal data such as that available in primary care records, or in their ability to deal with multiple competing risks. This paper develops a novel deep learning architecture that flexibly incorporates the available longitudinal data comprising various repeated measurements (rather than only the last available measurements) in order to issue dynamically updated survival predictions for one or multiple competing risk(s). Unlike existing work on survival analysis based on longitudinal data, the proposed method learns the time-to-event distributions without specifying underlying stochastic assumptions of the longitudinal or the time-to-event processes. Thus, our method is able to learn associations between the longitudinal data and the various associated risks in a fully data-driven fashion. We demonstrate the power of our method by applying it to real-world longitudinal datasets and show a drastic improvement over state-of-the-art methods in discriminative performance. Furthermore, our analysis of the variable importance and dynamic survival predictions will yield a better understanding of the predicted risks, which will result in more effective health care. | rejected-papers | While there was disagreement on this paper, reviewers remained unconvinced about the scalability and novelty of the presented work. While it was universally agreed that many positive points exist in this paper, it is not yet ready for publication.
"BklnXbdiAQ",
"S1xPXXp_hQ",
"H1xTERssTm",
"HkeZh6jj6Q",
"Byl2p6ispm",
"rylMoI9i67",
"SkgmC8sqhQ",
"HylrU8I52m"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"We thank the reviewer for the feedback. \n\n2 . We do acknowledge that the point process-based approaches can utilize the covariate information (i.e., history of measurements) for prediction. \nWe also agree that the point process-based approaches can be applied to the \"first-hitting time\" analysis by limiting t... | [
-1,
4,
-1,
-1,
-1,
-1,
4,
8
] | [
-1,
3,
-1,
-1,
-1,
-1,
4,
4
] | [
"S1xPXXp_hQ",
"iclr_2019_rJG8asRqKX",
"SkgmC8sqhQ",
"S1xPXXp_hQ",
"S1xPXXp_hQ",
"HylrU8I52m",
"iclr_2019_rJG8asRqKX",
"iclr_2019_rJG8asRqKX"
] |
iclr_2019_rJMcdsA5FX | On Accurate Evaluation of GANs for Language Generation | Generative Adversarial Networks (GANs) are a promising approach to language generation. The latest works introducing novel GAN models for language generation use n-gram based metrics for evaluation and only report single scores of the best run. In this paper, we argue that this often misrepresents the true picture and does not tell the full story, as GAN models can be extremely sensitive to the random initialization and small deviations from the best hyperparameter choice. In particular, we demonstrate that the previously used BLEU score is not sensitive to semantic deterioration of generated texts and propose alternative metrics that better capture the quality and diversity of the generated samples. We also conduct a set of experiments comparing a number of GAN models for text with a conventional Language Model (LM) and find that none of the considered models performs convincingly better than the LM. | rejected-papers | This paper conducts experiments evaluating several different metrics for evaluating GAN-based language generation models. This is a worthy pursuit, and some of the evaluation is interesting.
However, as noted by Reviewer 2, there are a number of concerns with the execution of the paper: the evaluation of metrics with respect to human judgement is insufficient, the diversity of the text samples is not evaluated, and there are clarity issues.
I feel that with a major re-write and tighter experiments this paper could potentially become something nice, but in its current form it seems below the ICLR quality threshold. | train | [
"rkg6QeB5hX",
"S1eoHt5T1N",
"BkxM_MoYTm",
"B1x_zGoKa7",
"SJlB6-it67",
"Hyl9e1IihX",
"rkxfeqp5hX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer"
] | [
"===========================\nSince the authors did not provide a proper response to my questions, I have lowered my score from 7 to 6. I think this paper will have a good chance to be a good paper if evaluated more comprehensively, as suggested by reviewers. \n===========================\n\nContributions:\n\nThe m... | [
6,
-1,
-1,
-1,
-1,
5,
3
] | [
4,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_rJMcdsA5FX",
"SJlB6-it67",
"Hyl9e1IihX",
"iclr_2019_rJMcdsA5FX",
"rkxfeqp5hX",
"iclr_2019_rJMcdsA5FX",
"iclr_2019_rJMcdsA5FX"
] |
iclr_2019_rJVoEiCqKQ | Deep Perm-Set Net: Learn to predict sets with unknown permutation and cardinality using deep neural networks | Many real-world problems, e.g. object detection, have outputs that are naturally expressed as sets of entities. This creates a challenge for traditional deep neural networks which naturally deal with structured outputs such as vectors, matrices or tensors. We present a novel approach for learning to predict sets with unknown permutation and cardinality using deep neural networks. Specifically, in our formulation we incorporate the permutation as unobservable variable and estimate its distribution during the learning process using alternating optimization. We demonstrate the validity of this new formulation on two relevant vision problems: object detection, for which our formulation outperforms state-of-the-art detectors such as Faster R-CNN and YOLO, and a complex CAPTCHA test, where we observe that, surprisingly, our set based network acquired the ability of mimicking arithmetics without any rules being coded. | rejected-papers | Strengths:
The method extends [21], which proposes an unordered set prediction model for multi-class classification.
The submission proposes a formulation to learn the distribution over unobservable permutation variables based on deep networks and uses a MAP estimator for inference.
While the failure of NMS to detect overlapping objects is expected, the experiments showing that perm-set prediction handles them well are interesting and promising.
Weaknesses:
Reviewer 1: "I find the paper still too scattered, trying to solve diverse problems with a hammer without properly motivating / analyzing key details of this hammer. So I keep my rating."
Reviewer 2: "I'm glad that the authors are seeing good performance and seem to have an effective method for matching outputs to fixed predictions, however the quality of the paper is too poor for publication."
Points of contention:
Although there was one reviewer who gave a high rating, they were not responsive in the rebuttal phase. The other two reviewers took into account the author responses, and a contributed comment by an unaffiliated reviewer, and both concluded that the paper still had serious issues. The main issues were: lack of clear methodology and poor clarity (AnonReviewer2), and poor organization and lack of motivation for modeling choices (AnonReviewer1). | train | [
"rkg0ogbkkE",
"BklHs9ycRX",
"ryl0dgqwam",
"rkxaglcwTm",
"SkxrcuFw6m",
"ryx3iYPDam",
"H1eoH7T167",
"rJegAZu6nQ",
"S1l5N7S53X",
"r1x0cUg5nm"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Misunderstandings are due to poor clarity in the text as written.\n\n\"... (we) assume p_m(\\pi | x_i, w) is uniform,\" this is very unclear in the text.\n\nI also appreciate that one may want to make use of a permutation to match outputs to fixed predictions. However, I again invite the authors to stare at what t... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
7,
3,
3
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
3,
4
] | [
"rkxaglcwTm",
"SkxrcuFw6m",
"H1eoH7T167",
"r1x0cUg5nm",
"S1l5N7S53X",
"rJegAZu6nQ",
"r1x0cUg5nm",
"iclr_2019_rJVoEiCqKQ",
"iclr_2019_rJVoEiCqKQ",
"iclr_2019_rJVoEiCqKQ"
] |
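The set-prediction formulation above treats the permutation aligning network outputs with ground-truth entities as an unobserved variable. As a simpler, hedged stand-in for that machinery, the sketch below computes a hard alignment with the Hungarian algorithm, a common choice for this matching step but not necessarily the paper's exact estimator:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy example: 3 predicted 2-D "entities" vs 3 ground-truth entities.
preds = np.array([[0.9, 1.1], [4.8, 5.2], [2.1, 1.9]])
targets = np.array([[2.0, 2.0], [1.0, 1.0], [5.0, 5.0]])

# Pairwise L2 cost between every prediction and every target.
cost = np.linalg.norm(preds[:, None, :] - targets[None, :, :], axis=-1)

# Hungarian algorithm: the permutation minimizing total matching cost.
row, col = linear_sum_assignment(cost)
print("prediction -> target:", list(zip(row.tolist(), col.tolist())))
print("set loss:", cost[row, col].sum())
```

The matched cost can then serve as a permutation-invariant training loss, which is the role the unobserved permutation plays in the paper's formulation.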
iclr_2019_rJe1y3CqtX | Deep Reinforcement Learning of Universal Policies with Diverse Environment Summaries | Deep reinforcement learning has enabled robots to complete complex tasks in simulation. However, the resulting policies do not transfer to real robots due to model errors in the simulator. One solution is to randomize the simulation environment, so that the resulting, trained policy achieves high performance in expectation over a variety of configurations that could represent the real world. However, the distribution over simulator configurations must be carefully selected to represent the relevant dynamic modes of the system, as otherwise it can be unlikely to sample challenging configurations frequently enough. Moreover, the ideal distribution to improve the policy changes as the policy (un)learns to solve tasks in certain configurations. In this paper, we propose to use an inexpensive, kernel-based summarization method that identifies configurations that lead to diverse behaviors. Since failure modes for the given task are naturally diverse, the policy trains on a mixture of representative and challenging configurations, which leads to more robust policies. In experiments, we show that the proposed method achieves the same performance as domain randomization in simple cases, but performs better when domain randomization does not lead to diverse dynamic modes. | rejected-papers | The paper proposes an approach to learn policies that can effectively transfer to new environments. Viewing this problem through the lens of streaming submodular optimization is nice; the paper introduces new ideas that are likely of interest to the ICLR community. Unfortunately, there are significant concerns about how convincing the results are. Multiple reviewers were concerned about there only being two experiments, and the lack of comparison to ep-opt on the half-cheetah experiment. Without a more solid empirical validation of the ideas, the paper does not meet the bar for publication at ICLR. | train | [
"Sye-d_D33X",
"r1xR9Pucn7",
"HygHJ0cXnX"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper presents a new method for learning diverse policies that can potentially transfer better to new environments. The proposed method aims to find simulation configurations that lead to diverse behaviors using “submodular optimizations” technique; an idea stemmed from past data summarization methods. \n\nP... | [
4,
6,
5
] | [
5,
4,
4
] | [
"iclr_2019_rJe1y3CqtX",
"iclr_2019_rJe1y3CqtX",
"iclr_2019_rJe1y3CqtX"
] |
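The abstract above proposes selecting simulator configurations whose induced behaviors are diverse, via kernel-based summarization and submodular optimization. One standard instantiation of that idea, assumed here purely for illustration and not taken from the paper, is greedy maximization of a facility-location objective under an RBF kernel, which picks configurations far from those already chosen:

```python
import numpy as np

rng = np.random.default_rng(0)
configs = rng.uniform(size=(200, 4))  # candidate simulator configurations
# RBF similarity kernel between all pairs of configurations.
K = np.exp(-np.square(configs[:, None] - configs[None, :]).sum(-1))

def greedy_facility_location(K, k):
    """Greedily pick k items maximizing sum_i max_{j in S} K[i, j]."""
    n = K.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(k):
        # Marginal gain of adding each candidate j to the summary.
        gains = np.maximum(K, best_cover[:, None]).sum(0) - best_cover.sum()
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, K[:, j])
    return selected

print("diverse summary:", greedy_facility_location(K, k=5))
```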
iclr_2019_rJeEqiC5KQ | ON THE USE OF CONVOLUTIONAL AUTO-ENCODER FOR INCREMENTAL CLASSIFIER LEARNING IN CONTEXT AWARE ADVERTISEMENT | Context Aware Advertisement (CAA) is a type of advertisement
appearing on websites or mobile apps. The advertisement is targeted
on specific group of users and/or the content displayed on the
websites or apps. This paper focuses on classifying images displayed
on the websites by incremental learning classifier with Deep
Convolutional Neural Network (DCNN) especially for Context Aware
Advertisement (CAA) framework. Incrementally learning new knowledge
with DCNN leads to catastrophic forgetting as previously stored
information is replaced with new information. To prevent
catastrophic forgetting, part of previously learned knowledge should
be stored for the life time of incremental classifier. Storing
information for life time involves privacy and legal concerns
especially in context aware advertising framework. Here, we propose
an incremental classifier learning method which addresses privacy
and legal concerns while taking care of catastrophic forgetting
problem. We conduct experiments on different datasets including
CIFAR-100. Experimental results show that proposed system achieves
relatively high performance compared to the state-of-the-art
incremental learning methods. | rejected-papers | 1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.
The paper tackles an interesting and relevant problem for ICLR: incremental classifier learning applied to image data streams.
2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.
- The proposed method is not clearly explained and not reproducible. In particular the contribution on top of the baseline iCaRL method is unclear. It seems to be mainly the use of CAE which is a minor change.
- The experimental comparisons are incomplete. For example, in Table 4 the authors don't discuss the storage requirements of GAN and FearNet baselines.
- The authors state that one of their main contributions is fulfilling privacy and legal requirements. They claim this is done by using CAEs which generate image embeddings that they store rather than the original images. However, it is quite well known that a lot of data about the original images can be recovered from such embeddings (e.g. Dosovitskiy & Brox. "Inverting visual representations with convolutional networks." CVPR 2016.).
These concerns all impacted the final decision.
3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it’s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.
There were no major points of contention and no author feedback.
4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.
The reviewers reached a consensus that the paper should be rejected.
| train | [
"SkeZ6rW62Q",
"r1g92X3qnX",
"r1xDx2y537"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The paper extends an existing incremental learning method, mainly introducing the latent representations of an autoencoder instead of the original images. It includes a lot of hype in that it simulates the human brain - because it is based on the iCaRL & Fear Net formulation - and that it fulfils the privacy and ... | [
5,
4,
3
] | [
5,
4,
4
] | [
"iclr_2019_rJeEqiC5KQ",
"iclr_2019_rJeEqiC5KQ",
"iclr_2019_rJeEqiC5KQ"
] |
iclr_2019_rJeQYjRqYX | Effective Path: Know the Unknowns of Neural Network | Despite their enormous success, there is still no solid understanding of deep neural network’s working mechanism. As such, researchers have demonstrated DNNs are vulnerable to small input perturbation, i.e., adversarial attacks. This work proposes the effective path as a new approach to exploring DNNs' internal organization. The effective path is an ensemble of synapses and neurons, which is reconstructed from a trained DNN using our activation-based backward algorithm. The per-image effective path can be aggregated to the class-level effective path, through which we observe that adversarial images activate effective path different from normal images. We propose an effective path similarity-based method to detect adversarial images and demonstrate its high accuracy and broad applicability.
| rejected-papers | The paper presents an approach to estimate the "effective path" of examples
in a network to reach a decision, and considers this to analyze whether examples
might be adversarial. Reviewers think the paper lacks some clarity and
experiments. They point to a confusion between interpretability and adversarial
attacks, they ask questions about computational complexity, and point to some
unsubstantiated claims. The authors have not responded to the reviewers. Overall, I
concur with the reviewers to reject the paper. | train | [
"SyxXV02nhX",
"BJxWEDainm",
"HJlRYLTOhm"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes a method for the detection of adversarial examples based on identification of critical paths (called \"effective paths\") in DNN classifiers. Borrowing from the analysis of execution paths of control-flow programs, the authors use back-propagation from the neuron associated from the final class... | [
4,
4,
6
] | [
4,
3,
5
] | [
"iclr_2019_rJeQYjRqYX",
"iclr_2019_rJeQYjRqYX",
"iclr_2019_rJeQYjRqYX"
] |
iclr_2019_rJeZS3RcYm | Simple Black-box Adversarial Attacks | The construction of adversarial images is a search problem in high dimensions within a small region around a target image. The goal is to find an imperceptibly modified image that is misclassified by a target model. In the black-box setting, only sporadic feedback is provided through occasional model evaluations. In this paper we provide a new algorithm whose search strategy is based on an intriguingly simple iterative principle: We randomly pick a low frequency component of the discrete cosine transform (DCT) and either add or subtract it to the target image. Model evaluations are only required to identify whether an operation decreases the adversarial loss. Despite its simplicity, the proposed method can be used for targeted and untargeted attacks --- resulting in previously unprecedented query efficiency in both settings. We require a median of 600 black-box model queries (ResNet-50) to produce an adversarial ImageNet image, and we successfully attack Google Cloud Vision with 2500 median queries, averaging to a cost of only $3 per image. We argue that our proposed algorithm should serve as a strong baseline for future adversarial black-box attacks, in particular because it is extremely fast and can be implemented in less than 20 lines of PyTorch code. | rejected-papers | The paper considers a procedure for the generation of adversarial examples under a black box setting. The authors claim simplicity as one of the main selling points, with which reviewers agreed, while also noting that the results were impressive or "promising". There were concerns over novelty and some confusion over the contribution compared to Guo et al, which I believe has been clarified.
The highest confidence reviewer (AnonReviewer2), a researcher with significant expertise in adversarial examples, raised issues of inconsistent threat models (and therefore unfair comparisons regarding query efficiency) and missing baselines. A misunderstanding about comparison against a concurrent submission to ICLR 2019 was resolved on the basis that the relevant results are mentioned but not originally presented in the concurrent submission.
While I disagree with AnonReviewer2 that results on attacking a particular image from previous work (when run against the Google Cloud Vision API) would be informative, the reviewer has remaining unaddressed concerns about the fairness of comparison (comparing against results reported in previous work rather than re-run in the same setting), and rightly points out that as many variables should be controlled for as possible when making comparisons. Running all methods under the same experimental setting with the same *collection* of query images is therefore appropriate.
The authors have not responded to AnonReviewer2's updated post-rebuttal review, and with the remaining sticking point of fairness of comparison with respect to query efficiency I must recommend rejection at this point in time, while noting that all reviewers considered the method promising; I thus would expect to see the method successfully published in the near future once issues of the experimental protocol have been solidified. | train | [
"SkesQhRO2Q",
"SJeaN4LihX",
"rJeUWhL3TX",
"ryxurJRsTX",
"HygHF_6spX",
"Bklu_v6spQ",
"SkxPHvTiaQ",
"BklIEDTs6X",
"rkxkTpcJTQ"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer"
] | [
"This paper proposed a simple query-efficient \"score-based\" black-box attack based on iteratively perturbing an input image with a direction randomly sampled (w/o replacement) from a set of orthonormal bases. In particular, the authors proposed the use of low-frequency parts of DCT (discrete cosine transformation... | [
4,
6,
-1,
-1,
-1,
-1,
-1,
-1,
6
] | [
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
3
] | [
"iclr_2019_rJeZS3RcYm",
"iclr_2019_rJeZS3RcYm",
"ryxurJRsTX",
"Bklu_v6spQ",
"iclr_2019_rJeZS3RcYm",
"SkesQhRO2Q",
"SJeaN4LihX",
"rkxkTpcJTQ",
"iclr_2019_rJeZS3RcYm"
] |
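The abstract above emphasizes that the attack fits in a few lines: repeatedly sample an orthonormal direction, try adding and subtracting it, and keep any step that lowers the model's confidence in the true class. The sketch below implements this loop in the simplest (pixel) basis; the paper's stronger variant samples low-frequency DCT basis vectors instead, and `model_probs` here is a toy placeholder for real black-box queries:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32))  # weights of the stand-in classifier

def model_probs(x):
    """Stand-in for the black-box model's probability output."""
    logits = W @ x.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.uniform(size=(32, 32))
y = int(np.argmax(model_probs(x)))  # label to attack (untargeted)

eps, queries, p_best = 0.2, 0, model_probs(x)[y]
for idx in rng.permutation(x.size):  # random orthonormal (pixel) basis
    q = np.zeros(x.size)
    q[idx] = eps
    q = q.reshape(x.shape)
    for step in (q, -q):             # try adding, then subtracting
        cand = np.clip(x + step, 0, 1)
        p = model_probs(cand)[y]
        queries += 1
        if p < p_best:               # keep the step if true-class prob drops
            x, p_best = cand, p
            break
    if np.argmax(model_probs(x)) != y:  # stop once the label flips
        break

print(f"queries used: {queries}, final true-class prob: {p_best:.3f}")
```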
iclr_2019_rJedbn0ctQ | Zero-training Sentence Embedding via Orthogonal Basis | We propose a simple and robust training-free approach for building sentence representations. Inspired by the Gram-Schmidt Process in geometric theory, we build an orthogonal basis of the subspace spanned by a word and its surrounding context in a sentence. We model the semantic meaning of a word in a sentence based on two aspects. One is its relatedness to the word vector subspace already spanned by its contextual words. The other is its novel semantic meaning which shall be introduced as a new basis vector perpendicular to this existing subspace. Following this motivation, we develop an innovative method based on orthogonal basis to combine pre-trained word embeddings into sentence representation. This approach requires zero training and zero parameters, along with efficient inference performance. We evaluate our approach on 11 downstream NLP tasks. Experimental results show that our model outperforms all existing zero-training alternatives in all the tasks and it is competitive to other approaches relying on either large amounts of labelled data or prolonged training time. | rejected-papers | The paper proposes a simple approach for computing a sentence embedding as a weighted combination of pre-trained word embeddings, which obtains nice results on a number of tasks. The approach is described as training-free but does require computing principal components of word embedding subspaces on the test set (similarly to some earlier work). The reviewers are generally in agreement that the approach is interesting, and the results are encouraging. However, there is some concern about the clarity of the paper and in particular the placement of the work in relation to other methods. There is also a bit of concern about whether there is sufficient novelty compared to Arora et al. 2017, which also compose sentence embeddings as weighted combinations of word embeddings, and also use a principal subspace of embeddings in the test set. This AC feels that the method here is sufficiently different from Arora et al., but agrees with the reviewers that the paper clarity needs to be improved, so that the community can appreciate what is gained from the new aspects of the approach and what conclusions should be drawn from each experimental comparison. | train | [
"BklEbeM46Q",
"H1x3zHcX6Q",
"HJxL-Cv-pX",
"rkegb7v1pQ",
"H1gGaa7q3m",
"rylVjUk3sX",
"ryxisXv53X",
"ByeZqvI5n7",
"SJlO1wzcnQ",
"Hyge8iaiqX",
"SJlo7YliqQ",
"BkeeXPCc9m",
"S1xll3s597",
"HygqgNDKcX",
"SkljByVFcX"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Hello AnonReviewer3,\n\nWe appreciate your comprehensive review and questions. Please find our response below.\n\n(1) About re-word the categories. Thanks for your suggestion. In the revised version submitted, we categorize sentence embeddings methods into two types, one is non-parameterized methods, including GEM... | [
-1,
-1,
-1,
5,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rkegb7v1pQ",
"H1gGaa7q3m",
"rylVjUk3sX",
"iclr_2019_rJedbn0ctQ",
"iclr_2019_rJedbn0ctQ",
"iclr_2019_rJedbn0ctQ",
"ByeZqvI5n7",
"SJlO1wzcnQ",
"iclr_2019_rJedbn0ctQ",
"SJlo7YliqQ",
"HygqgNDKcX",
"S1xll3s597",
"iclr_2019_rJedbn0ctQ",
"SkljByVFcX",
"iclr_2019_rJedbn0ctQ"
] |
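The method above scores each word by how much of its embedding lies outside the subspace spanned by its context words, via a Gram-Schmidt-style orthogonal decomposition. A minimal illustration with made-up toy vectors follows; real pre-trained embeddings and the paper's weighting scheme are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
context = rng.normal(size=(4, d))  # toy embeddings of 4 context words
w = rng.normal(size=d)             # toy embedding of the current word

# Orthonormal basis of the context subspace (Gram-Schmidt via QR).
Q, _ = np.linalg.qr(context.T)     # d x 4, columns orthonormal

proj = Q @ (Q.T @ w)               # part of w explained by the context
novel = w - proj                   # new basis direction w introduces

print("relatedness |proj| :", np.linalg.norm(proj))
print("novelty     |novel|:", np.linalg.norm(novel))
# The paper weights words by quantities of this kind when pooling them
# into a sentence vector; the exact weighting is not reproduced here.
```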
iclr_2019_rJf0BjAqYX | Like What You Like: Knowledge Distill via Neuron Selectivity Transfer | Despite deep neural networks have demonstrated extraordinary power in various applications, their superior performances are at expense of high storage and computational costs. Consequently, the acceleration and compression of neural networks have attracted much attention recently. Knowledge Transfer (KT), which aims at training a smaller student network by transferring knowledge from a larger teacher model, is one of the popular solutions. In this paper, we propose a novel knowledge transfer method by treating it as a distribution matching problem. Particularly, we match the distributions of neuron selectivity patterns between teacher and student networks. To achieve this goal, we devise a new KT loss function by minimizing the Maximum Mean Discrepancy (MMD) metric between these distributions. Combined with the original loss function, our method can significantly improve the performance of student networks. We validate the effectiveness of our method across several datasets, and further combine it with other KT methods to explore the best possible results. Last but not least, we fine-tune the model to other tasks such as object detection. The results are also encouraging, which confirm the transferability of the learned features. | rejected-papers | The paper presents a sensible algorithm for knowledge distillation (KD) from a larger teacher network to a smaller student network by minimizing the Maximum Mean Discrepancy (MMD) between the distributions over students and teachers network activations. As rightly acknowledged by the R3, the benefits of the proposed approach are encouraging in the object detection task, and are less obvious in classification (R1 and R2).
The reviewers and AC note the following potential weaknesses:
(1) low technical novelty in light of prior works “Demystifying Neural Style Transfer” by Li et al 2017 and “Deep Transfer Learning with Joint Adaptation Networks” by Long et al 2017 -- See R2’s detailed explanations; (2) lack of empirical evidence that the proposed method is better than the seminal work on KD by Hinton et al, 2014; (3) important practical issues are not justified (e.g. kernel specifications as requested by R3 and R2; accuracy-efficiency trade-off as suggested by R1); (4) presentation clarity.
R3 has raised questions regarding deploying the proposed student models on mobile devices without a proper comparison with the MobileNet and ShuffleNet light architectures. This can be seen as a suggestion for future revisions.
There is reviewer disagreement on this paper and no author rebuttal. The reviewer with a positive view on the manuscript (R3) was reluctant to champion the paper as the authors did not respond to the concerns of the reviewers.
The AC suggests that, in its current state, the manuscript is not ready for publication. We hope the reviews are useful for improving and revising the paper.
"BygynUFT3X",
"rylOvP0FnX",
"BkgLnWiI2X",
"SJx8fwqrim",
"HJg7dddrom"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This submission proposes a novel loss function, based on Maximum Mean Discrepancy (MMD), for knowledge transfer (distillation) from a teacher network to a student network, which matches the spatial distribution of neuron activations between the two.\n\nThe proposed approach is interesting but there is significant ... | [
4,
4,
6,
-1,
-1
] | [
4,
4,
5,
-1,
-1
] | [
"iclr_2019_rJf0BjAqYX",
"iclr_2019_rJf0BjAqYX",
"iclr_2019_rJf0BjAqYX",
"HJg7dddrom",
"iclr_2019_rJf0BjAqYX"
] |
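The distillation loss above is an MMD between distributions of teacher and student activation patterns. A minimal sketch with a Gaussian kernel follows, treating each channel's L2-normalized, flattened activation map as one sample (one reading of the neuron-selectivity view); the kernel choice, which the reviewers flag as needing justification, is a free parameter here:

```python
import numpy as np

def mmd2(T, S, gamma=1.0):
    """Squared MMD between sample sets T (n x d) and S (m x d),
    with Gaussian kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = np.square(A[:, None] - B[None, :]).sum(-1)
        return np.exp(-gamma * d2)
    return k(T, T).mean() + k(S, S).mean() - 2 * k(T, S).mean()

rng = np.random.default_rng(0)
# Toy activations: (channels, H*W); each row is one neuron's spatial pattern.
teacher = rng.normal(size=(64, 49))
student = teacher[:32] + 0.1 * rng.normal(size=(32, 49))  # partial mimic

norm = lambda A: A / np.linalg.norm(A, axis=1, keepdims=True)
print("MMD^2(teacher, student):", mmd2(norm(teacher), norm(student)))
```

Minimizing this quantity, added to the student's task loss, pushes the student's activation distribution toward the teacher's.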
iclr_2019_rJgP7hR5YQ | COMPOSITION AND DECOMPOSITION OF GANS | In this work, we propose a composition/decomposition framework for adversarially training generative models on composed data - data where each sample can be thought of as being constructed from a fixed number of components. In our framework, samples are generated by sampling components from component generators and feeding these components to a composition function which combines them into a “composed sample”. This compositional training approach improves the modularity, extensibility and interpretability of Generative Adversarial Networks (GANs) - providing a principled way to incrementally construct complex models out of simpler component models, and allowing for explicit “division of responsibility” between these components. Using this framework, we define a family of learning tasks and evaluate their feasibility on two datasets in two different data modalities (image and text). Lastly, we derive sufficient conditions such that these compositional generative models are identifiable. Our work provides a principled approach to building on pretrained generative models or for exploiting the compositional nature of data distributions to train extensible and interpretable models.
| rejected-papers | This paper investigates composition and decomposition for adversarially training generative models that work on composed data. Components that are sampled from component generators are then fed into a composition function to generate composed samples, aiming to improve modularity, extensibility, and interpretability of GANs. The paper is written very clearly and is easy to follow.
Experiments considered application to both images (MNIST) and text (yelp reviews).
The original version of the paper lacks any qualitative analysis, even though experiments were described. The authors revised the paper to include some experimental results; however, these are still not sufficient. State-of-the-art baselines from previous work, as suggested by the reviewers, should be included for comparison. | train | [
"BkxNHkNq0m",
"HkgZIX79Am",
"rkxRCJXqRm",
"B1eoKJXq0m",
"Sklg9J-a37",
"SJlqBB3FnQ",
"SyxhevqD3Q",
"r1e7Mx3A2m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public"
] | [
"Dear sir,\n\nOur generators' architectures followed DCGAN. \nA thing to clarify in MNIST-BB experiment is our composition/decomposition network are Unet and they are not generators. For the details about the networks' architectures, please see apendix.\nIn terms of your question, Unet composition network shows it ... | [
-1,
-1,
-1,
-1,
4,
5,
4,
-1
] | [
-1,
-1,
-1,
-1,
5,
5,
4,
-1
] | [
"r1e7Mx3A2m",
"SyxhevqD3Q",
"SJlqBB3FnQ",
"Sklg9J-a37",
"iclr_2019_rJgP7hR5YQ",
"iclr_2019_rJgP7hR5YQ",
"iclr_2019_rJgP7hR5YQ",
"iclr_2019_rJgP7hR5YQ"
] |
iclr_2019_rJgSV3AqKQ | Combining adaptive algorithms and hypergradient method: a performance and robustness study | Wilson et al. (2017) showed that, when the stepsize schedule is properly designed, stochastic gradient generalizes better than ADAM (Kingma & Ba, 2014). In light of recent work on hypergradient methods (Baydin et al., 2018), we revisit these claims to see if such methods close the gap between the most popular optimizers. As a byproduct, we analyze the true benefit of these hypergradient methods compared to more classical schedules, such as the fixed decay of Wilson et al. (2017). In particular, we observe they are of marginal help since their performance varies significantly when tuning their hyperparameters. Finally, as robustness is a critical quality of an optimizer, we provide a sensitivity analysis of these gradient based optimizers to assess how challenging their tuning is. | rejected-papers | The paper is a premature submission that needs significant improvement in terms of conceptual, theoretical, and empirical aspects. | test | [
"Hkxi1iUDpX",
"rygZgpNBpm",
"B1O9N17MTQ",
"SkgupY1-p7",
"Bkgyx6Cg67",
"SyeuS3Ae6X",
"BkxfqdZq3Q"
] | [
"author",
"public",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"First, the authors would like to thank you for the time given to reviewing this paper and the constructive comments you are offering. We will take them into account for future submission.\n\n1. , 4. , 5. - These are relevant suggestions and will be followed for the future version.\n\n2. By \"true benefit\", we m... | [
-1,
-1,
-1,
3,
-1,
3,
4
] | [
-1,
-1,
-1,
4,
-1,
2,
4
] | [
"rygZgpNBpm",
"iclr_2019_rJgSV3AqKQ",
"iclr_2019_rJgSV3AqKQ",
"iclr_2019_rJgSV3AqKQ",
"SyeuS3Ae6X",
"iclr_2019_rJgSV3AqKQ",
"iclr_2019_rJgSV3AqKQ"
] |
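For readers unfamiliar with the hypergradient method of Baydin et al. (2018) that the paper revisits, the online learning-rate update uses the inner product of consecutive gradients. The sketch below applies that published rule to a toy quadratic; the objective and constants are illustrative only:

```python
import numpy as np

# Toy objective f(w) = 0.5 * w^T A w, with its gradient.
A = np.diag([1.0, 10.0])
grad = lambda w: A @ w

w = np.array([1.0, 1.0])
alpha, beta = 0.01, 1e-4  # initial lr and hypergradient step size
g_prev = grad(w)

for t in range(100):
    g = grad(w)
    # Hypergradient rule: alpha_t = alpha_{t-1} + beta * <g_t, g_{t-1}>.
    alpha += beta * float(g @ g_prev)
    w, g_prev = w - alpha * g, g

print(f"final alpha={alpha:.4f}, f(w)={0.5 * w @ A @ w:.2e}")
```

The rule raises the learning rate while successive gradients agree and lowers it when they start to oppose each other, which is the self-stabilizing behavior the paper's sensitivity analysis probes.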
iclr_2019_rJgTciR9tm | Learning Information Propagation in the Dynamical Systems via Information Bottleneck Hierarchy | Extracting relevant information, causally inferring and predicting the future states with high accuracy is a crucial task for modeling complex systems. The endeavor to address these tasks is made even more challenging when we have to deal with high-dimensional heterogeneous data streams. Such data streams often have higher-order inter-dependencies across spatial and temporal dimensions. We propose to perform a soft-clustering of the data and learn its dynamics to produce a compact dynamical model while still ensuring the original objectives of causal inference and accurate predictions. To efficiently and rigorously process the dynamics of soft-clustering, we advocate for an information theory inspired approach that incorporates stochastic calculus and seeks to determine a trade-off between the predictive accuracy and compactness of the mathematical representation. We cast the model construction as a maximization of the compression of the state variables such that the predictive ability and causal interdependence (relatedness) constraints between the original data streams and the compact model are closely bounded. We provide theoretical guarantees concerning the convergence of the proposed learning algorithm. To further test the proposed framework, we consider a high-dimensional Gaussian case study and describe an iterative scheme for updating the new model parameters. Using numerical experiments, we demonstrate the benefits on compression and prediction accuracy for a class of dynamical systems. Finally, we apply the proposed algorithm to the real-world dataset of multimodal sentiment intensity and show improvements in prediction with reduced dimensions. | rejected-papers | The reviewers reached a consensus that the paper is not ready for publication in ICLR. (see more details in the reviews below. ) | train | [
"rklKBAMq0m",
"SJesJZWtRQ",
"r1eaEaAeAX",
"S1gaSt9caQ",
"SkeHfY9qaQ",
"S1gJpd59pQ",
"B1lW4mEC2X",
"BkebP2PthQ",
"B1gi5FcQsX"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We would like to thank the reviewer for reading our revised version and for the feedback. We believe that the current version of the paper, in which we have incorporated a small change, clarifies the ambiguity regarding the convergence of the procedure.\n\n1- Due to space limitations, we have just restricted o... | [
-1,
-1,
-1,
-1,
-1,
-1,
5,
4,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
2,
4
] | [
"SJesJZWtRQ",
"SkeHfY9qaQ",
"iclr_2019_rJgTciR9tm",
"B1gi5FcQsX",
"BkebP2PthQ",
"B1lW4mEC2X",
"iclr_2019_rJgTciR9tm",
"iclr_2019_rJgTciR9tm",
"iclr_2019_rJgTciR9tm"
] |
iclr_2019_rJg_NjCqtX | CHEMICAL NAMES STANDARDIZATION USING NEURAL SEQUENCE TO SEQUENCE MODEL | Chemical information extraction is to convert chemical knowledge in text into true chemical database, which is a text processing task heavily relying on chemical compound name identification and standardization. Once a systematic name for a chemical compound is given, it will naturally and much simply convert the name into the eventually required molecular formula. However, for many chemical substances, they have been shown in many other names besides their systematic names which poses a great challenge for this task. In this paper, we propose a framework to do the auto standardization from the non-systematic names to the corresponding systematic names by using the spelling error correction, byte pair encoding tokenization and neural sequence to sequence model. Our framework is trained end to end and is fully data-driven. Our standardization accuracy on the test dataset achieves 54.04% which has a great improvement compared to previous state-of-the-art result. | rejected-papers | The area chair agrees with reviewers 1 and 2 that this paper does not have sufficient machine learning novelty for ICLR. This is competent work and the problem is interesting, but ICLR is not the right venue since the main contributions are on defining the task. All the models that are then applied are standard.
"H1lPW2xiA7",
"BkxNVq9aT7",
"BkebaEmBAQ",
"ryxKphzrC7",
"SJle7AdmAm",
"r1eip-5ZRm",
"r1gR-t5apQ",
"HyxWndcpp7",
"SJgIbOcp6X",
"Syg9-Hs53X",
"B1lFQRsvn7",
"SklvlzHyhQ"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thank you for highlighting the importance of your work. I do agree that this might be a paper worth publishing, but I am afraid that the paper may not be a good fit for ICLR.",
"First of all, many thanks to all the valuable comments from the reviewers.\n\nFor the updated version of paper, we have modified the f... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
3,
7
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
5,
3
] | [
"ryxKphzrC7",
"iclr_2019_rJg_NjCqtX",
"B1lFQRsvn7",
"Syg9-Hs53X",
"HyxWndcpp7",
"iclr_2019_rJg_NjCqtX",
"SklvlzHyhQ",
"B1lFQRsvn7",
"Syg9-Hs53X",
"iclr_2019_rJg_NjCqtX",
"iclr_2019_rJg_NjCqtX",
"iclr_2019_rJg_NjCqtX"
] |
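The pipeline above relies on byte pair encoding to tokenize chemical names into frequent substrings. The classic merge-learning loop (following Sennrich et al., 2016, and shown here on invented chemical-name fragments rather than the paper's data) is:

```python
import re
from collections import Counter

# Toy corpus of chemical-name fragments with end-of-word markers.
vocab = Counter({"m e t h y l </w>": 8, "e t h y l </w>": 6,
                 "m e t h a n e </w>": 5, "e t h a n o l </w>": 4})

def get_pair_counts(vocab):
    """Frequency of each adjacent symbol pair across the vocabulary."""
    pairs = Counter()
    for word, freq in vocab.items():
        syms = word.split()
        for a, b in zip(syms, syms[1:]):
            pairs[(a, b)] += freq
    return pairs

for step in range(6):  # learn 6 merge operations
    pairs = get_pair_counts(vocab)
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair
    # Merge the pair only at whole-symbol boundaries.
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(best)) + r"(?!\S)")
    merged = "".join(best)
    vocab = Counter({pattern.sub(merged, w): f for w, f in vocab.items()})
    print(f"merge {step}: {best} -> {merged}")

print("final segmentations:", list(vocab))
```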
iclr_2019_rJgfjjC9Ym | Backprop with Approximate Activations for Memory-efficient Network Training | With innovations in architecture design, deeper and wider neural network models deliver improved performance on a diverse variety of tasks. But the increased memory footprint of these models presents a challenge during training, when all intermediate layer activations need to be stored for back-propagation. Limited GPU memory forces practitioners to make sub-optimal choices: either train inefficiently with smaller batches of examples; or limit the architecture to have lower depth and width, and fewer layers at higher spatial resolutions. This work introduces an approximation strategy that significantly reduces a network's memory footprint during training, but has negligible effect on training performance and computational expense. During the forward pass, we replace activations with lower-precision approximations immediately after they have been used by subsequent layers, thus freeing up memory. The approximate activations are then used during the backward pass. This approach limits the accumulation of errors across the forward and backward pass---because the forward computation across the network still happens at full precision, and the approximation has a limited effect when computing gradients to a layer's input. Experiments, on CIFAR and ImageNet, show that using our approach with 8- and even 4-bit fixed-point approximations of 32-bit floating-point activations has only a minor effect on training and validation performance, while affording significant savings in memory usage. | rejected-papers | This work proposes to reduce memory use in network training by quantizing the activations during backprop. It shows that this leads to only small drops in accuracy for resnets on CIFAR-10 and Imagenet for factors up to 8. The reviewers raised concerns about comparison to other approaches such as checkpointing, and questioned the technical novelty of the approach. The authors were able to properly address the concerns around comparisons, but the issue around novelty remained. This could be compensated by strengthening the experimental results and leveraging the memory saving for instance to train larger networks. Resubmission is encouraged. | train | [
"SJlXOXV90X",
"ryebisuxRm",
"SJlbSuiI37",
"SkgPcXOxAm",
"Bkx3-_UxAX",
"ByeHHdok0m",
"HJg3vx5kAX",
"r1lNLuXI6X",
"SJg9-d7ITQ",
"SJxqBk7767",
"Bkl6xGbW67",
"SJgfA87927"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"official_reviewer"
] | [
"We have uploaded a revised version of the paper incorporating the comments received so far by the revision deadline. We are of course happy to continue to respond to any further comments and questions.\n\nWe have responded to individual reviewers below. Here is a brief summary:\n\n-Rev 1 has a positive view of our... | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
5,
-1,
7
] | [
-1,
-1,
5,
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
4
] | [
"iclr_2019_rJgfjjC9Ym",
"SkgPcXOxAm",
"iclr_2019_rJgfjjC9Ym",
"Bkx3-_UxAX",
"ByeHHdok0m",
"HJg3vx5kAX",
"Bkl6xGbW67",
"SJgfA87927",
"SJxqBk7767",
"iclr_2019_rJgfjjC9Ym",
"SJlbSuiI37",
"iclr_2019_rJgfjjC9Ym"
] |
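The memory saving above comes from storing activations at low precision for the backward pass while keeping the forward computation at full precision. A minimal, hypothetical fixed-point quantizer of the kind described (per-tensor scale, k-bit unsigned levels for non-negative ReLU activations) looks like:

```python
import numpy as np

def quantize(a, bits=8):
    """Map a non-negative activation tensor to k-bit integers plus a scale."""
    levels = 2 ** bits - 1
    scale = a.max() / levels if a.max() > 0 else 1.0
    q = np.clip(np.round(a / scale), 0, levels)
    return q.astype(np.uint8 if bits <= 8 else np.uint16), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
act = np.maximum(rng.normal(size=(64, 128)).astype(np.float32), 0)  # ReLU out

# The forward pass would use `act` at full precision; only the quantized
# copy is kept in memory and dequantized later for the backward pass.
q, s = quantize(act, bits=8)
approx = dequantize(q, s)

print("memory ratio  :", q.nbytes / act.nbytes)   # 0.25 for 8-bit vs float32
print("max abs error :", np.abs(act - approx).max())
```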
iclr_2019_rJgvf3RcFQ | On Inductive Biases in Deep Reinforcement Learning | Many deep reinforcement learning algorithms contain inductive biases that sculpt the agent's objective and its interface to the environment. These inductive biases can take many forms, including domain knowledge and pretuned hyper-parameters. In general, there is a trade-off between generality and performance when we use such biases. Stronger biases can lead to faster learning, but weaker biases can potentially lead to more general algorithms that work on a wider class of problems.
This trade-off is relevant because these inductive biases are not free; substantial effort may be required to obtain relevant domain knowledge or to tune hyper-parameters effectively. In this paper, we re-examine several domain-specific components that modify the agent's objective and environmental interface. We investigated whether the performance deteriorates when all these fixed components are replaced with adaptive solutions from the literature. In our experiments, performance sometimes decreased with the adaptive components, as one might expect when comparing to components crafted for the domain, but sometimes the adaptive components performed better. We then investigated the main benefit of having fewer domain-specific components, by comparing the learning performance of the two systems on a different set of continuous control problems, without additional tuning of either system. As hypothesized, the system with adaptive components performed better on many of the tasks. | rejected-papers | The paper studies inductive biases in DRL, by comparing with different reward shaping, and curriculums. The authors performed comparative experiments where they replace domain specific heuristics by such adaptive components.
The paper includes very few new scientific contributions and, as such, is not suitable for publication at ICLR.
"H1eGa3Z5R7",
"BJlyABDhaQ",
"HJg5KSw26Q",
"SJx1tmP3pQ",
"rJg_hF_TnQ",
"HJgVj1k327",
"SJlnow4qh7"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"Thanks for clarifying question: I did not misunderstand the empirical efforts. As you can see, my review mentions single \"environment\". I believe you conduct 57 games experiment in Arcade environment. Perhaps when I used Atari environment, it confused you.\n\nIn any case, the major issue of no substantial techni... | [
-1,
-1,
-1,
-1,
3,
3,
7
] | [
-1,
-1,
-1,
-1,
4,
4,
2
] | [
"SJx1tmP3pQ",
"SJlnow4qh7",
"HJgVj1k327",
"rJg_hF_TnQ",
"iclr_2019_rJgvf3RcFQ",
"iclr_2019_rJgvf3RcFQ",
"iclr_2019_rJgvf3RcFQ"
] |
iclr_2019_rJgz8sA5F7 | HC-Net: Memory-based Incremental Dual-Network System for Continual learning | Training a neural network for a classification task typically assumes that the data to train are given from the beginning.
However, in the real world, additional data accumulate gradually and the model requires additional training without accessing the old training data. This usually leads to the catastrophic forgetting problem which is inevitable for the traditional training methodology of neural networks.
In this paper, we propose a memory-based continual learning method that is able to learn additional tasks while retaining the performance of previously learned tasks.
Composed of two complementary networks, the Hippocampus-Net (H-Net) and the Cortex-Net (C-Net), our model estimates the index of the corresponding task for an input sample and utilizes a particular portion of itself with the estimated index.
The C-Net guarantees no degradation in the performance of the previously learned tasks and the H-Net shows high confidence in finding the origin of an input sample. | rejected-papers | This work is effectively an extension of progressive nets, where the task ID is not given at test time. There were several concerns about novelty of this work and the evaluation being insufficient. There was a reasonable back and forth between the reviewers and authors, and the reviewers are all aligned with the idea that this work would need a substantial rewrite in order to be accepted at ICLR. | val | [
"HJxpo2ksJE",
"S1lWA9EcAm",
"HkxBjyVFCm",
"HJx7_1EtA7",
"SyluHJ4KAX",
"rye0nwomaQ",
"ryg8dhcu2X",
"rJg8d7lpiX"
] | [
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I'm keeping the evaluation that the contribution is not sufficient for ICLR.",
"Thank you for the clarifications. I still think the paper will need a major revision to address all the issues we discussed.",
"Thank you for the comments and feedbacks.\n\n1. The major comparison in this paper was between the Pack... | [
-1,
-1,
-1,
-1,
-1,
4,
4,
4
] | [
-1,
-1,
-1,
-1,
-1,
3,
4,
5
] | [
"HkxBjyVFCm",
"HJx7_1EtA7",
"rJg8d7lpiX",
"ryg8dhcu2X",
"rye0nwomaQ",
"iclr_2019_rJgz8sA5F7",
"iclr_2019_rJgz8sA5F7",
"iclr_2019_rJgz8sA5F7"
] |
iclr_2019_rJl-HsR9KX | Discriminative Active Learning | We propose a new batch mode active learning algorithm designed for neural networks and large query batch sizes. The method, Discriminative Active Learning (DAL), poses active learning as a binary classification task, attempting to choose examples to label in such a way as to make the labeled set and the unlabeled pool indistinguishable. Experimenting on image classification tasks, we empirically show our method to be on par with state of the art methods in medium and large query batch sizes, while being simple to implement and also extend to other domains besides classification tasks. Our experiments also show that none of the state of the art methods of today are clearly better than uncertainty sampling, negating some of the reported results in the recent literature. | rejected-papers | This paper proposes a novel and interesting active learning approach, that trains a classifier to discriminate between the examples in the labeled and unlabeled data at each iteration. The top few samples that are most likely to be from the unlabeled set as per this classifier are selected to be labeled by an oracle, and are moved to the labeled training examples bin in the next iteration. The idea is simple and clear and is shown to have a principled basis and theoretical background, related to GANs and to previous results from the literature. Experiments performed on CIFAR-10 and MNIST benchmarks demonstrate good results in comparison to baselines.
During the review period, the authors considered most of the suggestions by the reviewers and updated the paper. Although the proposed method is similar to density-based active learning methods, as also noted by the reviewers, the comparison experiments do not include such approaches as baselines. | train | [
"SJgxZFHc2m",
"HJetKy5F0X",
"rJlwRbVw0m",
"BJeFpIAQ07",
"rklmt4Cm0Q",
"rkxq1FkmCm",
"rJeyyMi567",
"BkeIeafd6m",
"HJeEuH3b67",
"Sygh3VnZpm",
"Bke4oN2Z6Q",
"SJl0Bm2-6Q",
"HJgzFz2bpQ",
"Ske5lz2b6X",
"rJgQJxcgpQ",
"S1lASPI9nm",
"BJx-6PRtnQ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"The paper is proposing a distribution matching as a metric for active learning. Basic intuition is: if we can make the distribution of labelled and unlabelled examples similar to each other, training error in one will approximate the training error in the other. Hence, a model learned using labelled ones will do w... | [
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
8,
4
] | [
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"iclr_2019_rJl-HsR9KX",
"rklmt4Cm0Q",
"Sygh3VnZpm",
"HJeEuH3b67",
"rkxq1FkmCm",
"SJl0Bm2-6Q",
"BkeIeafd6m",
"iclr_2019_rJl-HsR9KX",
"iclr_2019_rJl-HsR9KX",
"Bke4oN2Z6Q",
"BJx-6PRtnQ",
"SJgxZFHc2m",
"S1lASPI9nm",
"rJgQJxcgpQ",
"iclr_2019_rJl-HsR9KX",
"iclr_2019_rJl-HsR9KX",
"iclr_2019... |
iclr_2019_rJl2E3AcF7 | Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference | Computations for the softmax function in neural network models are expensive when the number of output classes is large. This can become a significant issue in both training and inference for such models. In this paper, we present Doubly Sparse Softmax (DS-Softmax), a Sparse Mixture of Sparse Experts, to improve the efficiency of softmax inference. During training, our method learns a two-level class hierarchy by dividing the entire output class space into several partially overlapping experts. Each expert is responsible for a learned subset of the output class space and each output class only belongs to a small number of those experts. During inference, our method quickly locates the most probable expert to compute a small-scale softmax. Our method is learning-based and requires no knowledge of the output class partition space a priori. We empirically evaluate our method on several real-world tasks and demonstrate that we can achieve significant computation reductions without loss of performance. | rejected-papers | This work proposes a new approximation method for softmax layers with a large number of classes. The idea is to use a sparse two-level mixture of experts. This approach successfully reduces the computation required on the PTB and Wiki-2 datasets, which have up to 32k classes. However, the reviewers argue that the work lacks relevant baselines such as D-softmax and adaptive-softmax. The authors argue that these baselines focus on training rather than inference and should therefore do worse, but this should be substantiated in the paper by actual experimental results.
"HklSWHq3p7",
"B1e8djk9Tm",
"HJxF95J5aX",
"HJgvmMlBTQ",
"B1euHOqi37",
"SklHkeMohX",
"rJxPUFHc3m"
] | [
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We thank reviewers for their time and valuable comments. We have revised our article based on reviewers' suggestions. \nWe want to summarize the key points of this work as follows:\n\n* Our work focuses on speeding up softmax inference given large output dimension and achieved good empirical results on both synthe... | [
-1,
-1,
-1,
-1,
6,
4,
7
] | [
-1,
-1,
-1,
-1,
3,
3,
3
] | [
"iclr_2019_rJl2E3AcF7",
"B1euHOqi37",
"rJxPUFHc3m",
"SklHkeMohX",
"iclr_2019_rJl2E3AcF7",
"iclr_2019_rJl2E3AcF7",
"iclr_2019_rJl2E3AcF7"
] |
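At inference time the method above avoids a full softmax by first locating the most probable expert and then normalizing only over that expert's small class subset. A hedged sketch of that two-level lookup, with random toy parameters and randomly assigned class subsets standing in for the learned overlapping hierarchy, is:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, n_experts = 32, 10000, 16

W_gate = rng.normal(size=(n_experts, d))  # expert gating weights
W_out = rng.normal(size=(n_classes, d))   # full output embedding table
# Toy sparse class-to-expert assignment (learned jointly in the paper).
expert_classes = [rng.choice(n_classes, size=800, replace=False)
                  for _ in range(n_experts)]

def ds_softmax_predict(h):
    e = int(np.argmax(W_gate @ h))        # locate the most probable expert
    cls = expert_classes[e]               # its small class subset
    logits = W_out[cls] @ h
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over ~800, not 10000
    return cls[int(np.argmax(probs))], e

h = rng.normal(size=d)
pred, expert = ds_softmax_predict(h)
print(f"expert {expert} -> class {pred} "
      f"(scored {len(expert_classes[expert])} of {n_classes} classes)")
```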
iclr_2019_rJl3S2A9t7 | Policy Optimization via Stochastic Recursive Gradient Algorithm | In this paper, we propose the StochAstic Recursive grAdient Policy Optimization (SARAPO) algorithm which is a novel variance reduction method on Trust Region Policy Optimization (TRPO). The algorithm incorporates the StochAstic Recursive grAdient algoritHm(SARAH) into the TRPO framework. Compared with the existing Stochastic Variance Reduced Policy Optimization (SVRPO), our algorithm is more stable in the variance. Furthermore, by theoretical analysis the ordinary differential equation and the stochastic differential equation (ODE/SDE) of SARAH, we analyze its convergence property and stability. Our experiments demonstrate its performance on a variety of benchmark tasks. We show that our algorithm gets better improvement in each iteration and matches or even outperforms SVRPO and TRPO.
 | rejected-papers | The use of SARAH for policy optimization in RL is novel, with some theoretical analysis demonstrating convergence of the approach. However, concerns were raised about the clarity of the paper, the empirical results, and the placement of this theory relative to a previous variance-reduction algorithm called SVRPG. The author response likewise did not explain the novelty of the theory beyond the convergence results already given in the SVRPG paper. By incorporating some of the reviewer comments, this paper could become a meaningful and useful contribution. | train | [
"SkxZyFDtA7",
"ByeyPs1YCm",
"r1eYIN9phX",
"r1eJPAn52Q"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"We sincerely thank all reviewers for the valuable remarks!\n\nWe would like to emphasize that our paper is not an incremental one. We believe that variance reduced (VR) gradient methods (SARAH and SVRG) serve as potential alternatives to incorporate into the TRPO framework [Xu 2017], which might significantly outp... | [
-1,
5,
6,
5
] | [
-1,
3,
2,
3
] | [
"iclr_2019_rJl3S2A9t7",
"iclr_2019_rJl3S2A9t7",
"iclr_2019_rJl3S2A9t7",
"iclr_2019_rJl3S2A9t7"
] |
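For readers unfamiliar with SARAH, the recursive variance-reduced estimator at the heart of the method is v_t = grad f_i(w_t) - grad f_i(w_{t-1}) + v_{t-1}, restarted from a full gradient each outer epoch. The sketch below runs the published recursion on a toy least-squares problem; this is a supervised stand-in, not the paper's policy-gradient setting, where the same recursion is plugged into TRPO:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]  # per-sample gradient
full_grad = lambda w: A.T @ (A @ w - b) / n

w, eta = np.zeros(d), 0.01
for epoch in range(5):
    v = full_grad(w)                            # restart: exact gradient
    w_prev, w = w, w - eta * v
    for t in range(n):
        i = rng.integers(n)
        # SARAH recursion: biased, but recursively variance-reduced.
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev, w = w, w - eta * v
    print(f"epoch {epoch}: |full grad| = {np.linalg.norm(full_grad(w)):.4f}")
```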
iclr_2019_rJl4BsR5KX | k-Nearest Neighbors by Means of Sequence to Sequence Deep Neural Networks and Memory Networks | k-Nearest Neighbors is one of the most fundamental but effective classification models. In this paper, we propose two families of models built on a sequence to sequence model and a memory network model to mimic the k-Nearest Neighbors model, which generate a sequence of labels, a sequence of out-of-sample feature vectors and a final label for classification, and thus they could also function as oversamplers. We also propose `out-of-core' versions of our models which assume that only a small portion of data can be loaded into memory. Computational experiments show that our models outperform k-Nearest Neighbors, a feed-forward neural network and a memory network, due to the fact that our models must produce additional output and not just the label. As an oversampler on imbalanced datasets, the sequence to sequence kNN model often outperforms Synthetic Minority Over-sampling Technique and Adaptive Synthetic Sampling.
 | rejected-papers | The proposed approach of predicting the k nearest neighbouring examples as an auxiliary task is an interesting idea. However, the submission should have studied further how those examples are predicted (e.g., sequence prediction is one option, but one could also try set prediction, and so on) rather than how sequential prediction of nearest neighbours is combined with different types of classifiers (many of which are arguably not well suited to classification), a sentiment shared by all the reviewers.
More careful investigation of the different ways in which nearest-neighbour prediction could be incorporated, and more thorough analysis of how incorporating this auxiliary task changes the behaviour or properties of the representation, would make this a much better paper (along with clearer writing).
"Bke-YtWv1V",
"H1g1StWDJE",
"BJlWxJz4yE",
"H1gRKNG9nm",
"HyxLBU-9h7",
"rJgh5oCAA7",
"H1lEoAxtpm",
"BJglP0xKaQ",
"rkgbdjlFaX",
"SyL9ahOnm"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer"
] | [
"Thank you for the response. Our intuition is that in classification tasks, the distance between manifolds of different classes should be large and the distance between manifolds of the same class should be small. Therefore, letting neural networks mimic kNN would combine neural networks with the desired distance p... | [
-1,
-1,
-1,
6,
5,
-1,
-1,
-1,
-1,
4
] | [
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
4
] | [
"rJgh5oCAA7",
"BJlWxJz4yE",
"rkgbdjlFaX",
"iclr_2019_rJl4BsR5KX",
"iclr_2019_rJl4BsR5KX",
"BJglP0xKaQ",
"SyL9ahOnm",
"HyxLBU-9h7",
"H1gRKNG9nm",
"iclr_2019_rJl4BsR5KX"
] |
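The models above are trained to emit, for each query point, the sequence of its neighbours' labels (and feature vectors) rather than only a single class. Constructing those sequence targets from data is straightforward; a sketch using scikit-learn's neighbour search on toy data follows, after which the paper fits a seq2seq or memory network to reproduce such sequences:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = rng.integers(0, 3, size=500)

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)      # idx[:, 0] is each training point itself
neighbor_idx = idx[:, 1:]      # drop the self-match

# Training targets: for each x, the ordered sequence of its kNN labels
# (optionally also the neighbours' feature vectors, as in the paper).
label_seqs = y[neighbor_idx]
print("target label sequence for sample 0:", label_seqs[0])
print("majority-vote kNN prediction      :", np.bincount(label_seqs[0]).argmax())
```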
iclr_2019_rJl6M2C5Y7 | Online Hyperparameter Adaptation via Amortized Proximal Optimization | Effective performance of neural networks depends critically on effective tuning of optimization hyperparameters, especially learning rates (and schedules thereof). We present Amortized Proximal Optimization (APO), which takes the perspective that each optimization step should approximately minimize a proximal objective (similar to the ones used to motivate natural gradient and trust region policy optimization). Optimization hyperparameters are adapted to best minimize the proximal objective after one weight update. We show that an idealized version of APO (where an oracle minimizes the proximal objective exactly) achieves global convergence to stationary point and locally second-order convergence to global optimum for neural networks. APO incurs minimal computational overhead. We experiment with using APO to adapt a variety of optimization hyperparameters online during training, including (possibly layer-specific) learning rates, damping coefficients, and gradient variance exponents. For a variety of network architectures and optimization algorithms (including SGD, RMSprop, and K-FAC), we show that with minimal tuning, APO performs competitively with carefully tuned optimizers. | rejected-papers | This paper proposes an amortized proximal optimization method to adapt optimization hyperparameters. Empirical results on many problems are performed.
Overall, reviewers find the ideas interesting; however, there are still questions about whether strong baselines are used in the experimental comparisons. The reviewers also point out that the theoretical results are of limited use, since the assumptions are not satisfied in practice. One of the reviewers increased their score, but the other maintained that the paper requires more work.
The presentation of the results is also a bit problematic; the font sizes in the figures are too small to read.
The paper contains interesting ideas, but it does not meet the bar for acceptance at ICLR. Therefore I recommend rejection. I encourage the authors to resubmit this work after improving the presentation and experiments.
| train | [
"ryl2LHzGJE",
"Syx2qR4gyN",
"S1l-tamlkN",
"rkxxb72KCX",
"Bkld8r3FRQ",
"S1ei3QGoCQ",
"SklCKPP1h7",
"rye-WhkcRX",
"Bklg17AYCQ",
"rylbu-RtRm",
"Bkg51P6KCX",
"SylicetE67",
"r1labrqphm",
"SJl8R-eTnQ"
] | [
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"official_reviewer"
] | [
"Q: Weight decay\n\nWe used weight decay 1e-5 for all SGD/SGDm/RMSprop experiments on CIFAR. We apologize that this was not clearly described in the paper, and we will add this information to the final version.\n\nUsing a larger weight decay of 5e-4 improves the baselines for SGD and SGDm to 94.32% and 94.82%, resp... | [
-1,
-1,
-1,
-1,
-1,
-1,
7,
-1,
-1,
-1,
-1,
-1,
6,
5
] | [
-1,
-1,
-1,
-1,
-1,
-1,
3,
-1,
-1,
-1,
-1,
-1,
4,
4
] | [
"Syx2qR4gyN",
"S1l-tamlkN",
"SJl8R-eTnQ",
"r1labrqphm",
"SJl8R-eTnQ",
"Bklg17AYCQ",
"iclr_2019_rJl6M2C5Y7",
"SylicetE67",
"rylbu-RtRm",
"iclr_2019_rJl6M2C5Y7",
"SklCKPP1h7",
"iclr_2019_rJl6M2C5Y7",
"iclr_2019_rJl6M2C5Y7",
"iclr_2019_rJl6M2C5Y7"
] |
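The "idealized" APO mentioned in the abstract chooses, at each step, the optimization hyperparameters that minimize a proximal objective: the loss after the update plus a dissimilarity penalty to the current weights. The sketch below is a toy oracle version of that inner problem, using a grid search over the learning rate on a quadratic; APO itself adapts the hyperparameters by gradient steps on this objective, and the penalty form and constants here are assumptions:

```python
import numpy as np

A = np.diag([1.0, 25.0])              # toy ill-conditioned quadratic
f = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w

def apo_oracle_step(w, lam=1.0, lrs=np.logspace(-3, 0, 30)):
    """Pick the lr minimizing the proximal objective
    J(eta) = f(w - eta * g) + lam * ||eta * g||^2 after one step."""
    g = grad(w)
    J = [f(w - eta * g) + lam * np.sum((eta * g) ** 2) for eta in lrs]
    eta = lrs[int(np.argmin(J))]
    return w - eta * g, eta

w = np.array([1.0, 1.0])
for t in range(10):
    w, eta = apo_oracle_step(w)
    print(f"step {t}: eta={eta:.3f}, f(w)={f(w):.3e}")
```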