paper_id stringlengths 19 21 | paper_title stringlengths 8 170 | paper_abstract stringlengths 8 5.01k | paper_acceptance stringclasses 18 values | meta_review stringlengths 29 10k | label stringclasses 3 values | review_ids list | review_writers list | review_contents list | review_ratings list | review_confidences list | review_reply_tos list |
|---|---|---|---|---|---|---|---|---|---|---|---|
iclr_2018_rkr1UDeC- | Large scale distributed neural network training through online distillation | Techniques such as ensembling and distillation promise model quality improvements when paired with almost any base model. However, due to increased test-time cost (for ensembles) and increased complexity of the training pipeline (for distillation), these techniques are challenging to use in industrial settings. In this paper we explore a variant of distillation which is relatively straightforward to use as it does not require a complicated multi-stage setup or many new hyperparameters. Our first claim is that online distillation enables us to use extra parallelism to fit very large datasets about twice as fast. Crucially, we can still speed up training even after we have already reached the point at which additional parallelism provides no benefit for synchronous or asynchronous stochastic gradient descent. Two neural networks trained on disjoint subsets of the data can share knowledge by encouraging each model to agree with the predictions the other model would have made. These predictions can come from a stale version of the other model so they can be safely computed using weights that only rarely get transmitted. Our second claim is that online distillation is a cost-effective way to make the exact predictions of a model dramatically more reproducible. We support our claims using experiments on the Criteo Display Ad Challenge dataset, ImageNet, and the largest to-date dataset used for neural language modeling, containing 6×10^11 tokens and based on the Common Crawl repository of web data. | accepted-poster-papers | meta score: 7
The paper introduces an online distillation technique to parallelise large-scale training. Although the basic idea is not novel, the presented experimentation indicates that the authors have made the technique work; thus, this paper should be of interest to practitioners.
Pros:
- clearly written, the approach is well-explained
- good experimentation on large-scale common crawl data with 128-256 GPUs
- strong experimental results
Cons:
- the idea itself is not novel
- the range of experimentation could be wider (e.g. different numbers of GPUs) but this is expensive!
Overall the novelty is in making this approach work well in practice, and demonstrating it experimentally. | train | [
"SJ7PzWDeM",
"SyOiDTtef",
"Bk09mAnlG",
"rkmWh7jMG",
"SkVb6QsMM",
"B1Hj5YOMG",
"HyFBYTVZf",
"HyCju6EZz",
"H1e5ETNWM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper provides a very original & promising method to scale distributed training beyond the current limits of mini-batch stochastic gradient descent. As authors point out, scaling distributed stochastic gradient descent to more workers typically requires larger batch sizes in order to fully utilize computation... | [
8,
4,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkr1UDeC-",
"iclr_2018_rkr1UDeC-",
"iclr_2018_rkr1UDeC-",
"HyCju6EZz",
"SJ7PzWDeM",
"iclr_2018_rkr1UDeC-",
"SyOiDTtef",
"Bk09mAnlG",
"SJ7PzWDeM"
] |
iclr_2018_BJ0hF1Z0b | Learning Differentially Private Recurrent Language Models | We demonstrate that it is possible to train large recurrent language models with user-level differential privacy guarantees with only a negligible cost in predictive accuracy. Our work builds on recent advances in the training of deep networks on user-partitioned data and privacy accounting for stochastic gradient descent. In particular, we add user-level privacy protection to the federated averaging algorithm, which makes large step updates from user-level data. Our work demonstrates that given a dataset with a sufficiently large number of users (a requirement easily met by even small internet-scale datasets), achieving differential privacy comes at the cost of increased computation, rather than in decreased utility as in most prior work. We find that our private LSTM language models are quantitatively and qualitatively similar to un-noised models when trained on a large dataset. | accepted-poster-papers | This paper applies known methods for learning differentially private models to the task of learning a language model, and finds they are able to maintain accuracy on large datasets. Reviewers found the method convincing and original, saying it was "interesting and very important to the machine learning ... community", and that in terms of results it was a "very strong empirical paper, with experiments comparable to industrial scale". There were some complaints as to the clarity of the work, with requests for clearer explanations of the methods used.
| train | [
"rJG5vkH4z",
"BJ1XIR_ef",
"Bkg5_kcxG",
"ryImKM5lG",
"HJRVC6-Gz",
"BkMETpWMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author"
] | [
"\n1. Noise\n\nThanks for the reference. It might indeed be an LSTM issue!\n\n2. Clipping\n\nOh right, I didn't thought about the bias introduces, that is a good point!\n\n3. Optimizers\n\"Certainly an interesting direction, but beyond the scope of the current work.\"\n\nIndeed!\n",
"Summary: The paper provides ... | [
-1,
7,
7,
8,
-1,
-1
] | [
-1,
4,
2,
4,
-1,
-1
] | [
"HJRVC6-Gz",
"iclr_2018_BJ0hF1Z0b",
"iclr_2018_BJ0hF1Z0b",
"iclr_2018_BJ0hF1Z0b",
"Bkg5_kcxG",
"ryImKM5lG"
] |
iclr_2018_SJ-C6JbRW | Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent | Contrary to most natural language processing research, which makes use of static datasets, humans learn language interactively, grounded in an environment. In this work we propose an interactive learning procedure called Mechanical Turker Descent (MTD) that trains agents to execute natural language commands grounded in a fantasy text adventure game. In MTD, Turkers compete to train better agents in the short term, and collaborate by sharing their agents' skills in the long term. This results in a gamified, engaging experience for the Turkers and a better quality teaching signal for the agents compared to static datasets, as the Turkers naturally adapt the training data to the agent's abilities. | accepted-poster-papers | This paper provides a game-based interface in which Turkers compete to annotate data for a learning task over multiple rounds. Reviewers found the work interesting and clearly written, saying "the paper is easy to follow and the evaluation is meaningful." They also note a clear empirical benefit: "the results seem to suggest that MTD provides an improvement over non-HITL methods." They also like the task compared to synthetic grounding experiments. There was some concern about the methodology of the experiments, but the authors provide reasonable explanations and clarification.
One final concern that I hope readers take into account: while the reviewers were convinced by the work and did not require it, I feel the work does not engage enough with the crowd-sourcing literature in other disciplines. While there are likely some unique aspects to ML uses of crowdsourcing, there are many papers about encouraging crowd-workers to produce more useful data. | train | [
"r14hglcez",
"ByLXrM9eG",
"SyXWKhaxM",
"rJZaKFhXG",
"SkRXUrXMf",
"HkGz8Bmff",
"BJxl8BQfM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors propose a framework for interactive language learning, called Mechanical Turker Descent (MTD). Over multiple iterations, Turkers provide training examples for a language grounding task, and they are incentivized to provide new training examples that quickly improve generalization. The framework is stra... | [
7,
7,
8,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJ-C6JbRW",
"iclr_2018_SJ-C6JbRW",
"iclr_2018_SJ-C6JbRW",
"iclr_2018_SJ-C6JbRW",
"r14hglcez",
"ByLXrM9eG",
"SyXWKhaxM"
] |
iclr_2018_Hyg0vbWC- | Generating Wikipedia by Summarizing Long Sequences | We show that generating English Wikipedia articles can be approached as a multi-
document summarization of source documents. We use extractive summarization
to coarsely identify salient information and a neural abstractive model to generate
the article. For the abstractive model, we introduce a decoder-only architecture
that can scalably attend to very long sequences, much longer than typical encoder-
decoder architectures used in sequence transduction. We show that this model can
generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia
articles. When given reference documents, we show it can extract relevant factual
information as reflected in perplexity, ROUGE scores and human evaluations. | accepted-poster-papers | This paper presents a new multi-document summarization task: writing a Wikipedia article based on its source documents. Reviewers found the paper and the task clear and well-explained. The modeling aspects are clear as well, although lacking justification. Reviewers are split on the originality of the task, saying that it is certainly large-scale, but wondering whether that makes it difficult to compare against. The main split was between the feeling that "the paper presents strong quantitative results and qualitative examples" and a frustration that the experimental results did not take into account extractive baselines or analysis. However, the authors provide a significantly updated version of the work targeting many of these concerns, which does alleviate some of the main issues. For these reasons, despite one low review, my recommendation is that this work be accepted as a very interesting contribution.
| train | [
"r129mGrxf",
"BJGaExqgz",
"H1VuTvqgG",
"SyEJe4v7M",
"SkUg9xLfz",
"Hy663x8ff",
"SJ3RoxLzz",
"S1npqlIMf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper considers the task of generating Wikipedia articles as a combination of extractive and abstractive multi-document summarization task where input is the content of reference articles listed in a Wikipedia page along with the content collected from Web search and output is the generated content for a targ... | [
7,
8,
7,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hyg0vbWC-",
"iclr_2018_Hyg0vbWC-",
"iclr_2018_Hyg0vbWC-",
"SJ3RoxLzz",
"iclr_2018_Hyg0vbWC-",
"r129mGrxf",
"BJGaExqgz",
"H1VuTvqgG"
] |
iclr_2018_rkYTTf-AZ | Unsupervised Machine Translation Using Monolingual Corpora Only | Machine translation has recently achieved impressive performance thanks to recent advances in deep learning and the availability of large-scale parallel corpora. There have been numerous attempts to extend these successes to low-resource language pairs, yet requiring tens of thousands of parallel sentences. In this work, we take this research direction to the extreme and investigate whether it is possible to learn to translate even without any parallel data. We propose a model that takes sentences from monolingual corpora in two different languages and maps them into the same latent space. By learning to reconstruct in both languages from this shared feature space, the model effectively learns to translate without using any labeled data. We demonstrate our model on two widely used datasets and two language pairs, reporting BLEU scores of 32.8 and 15.1 on the Multi30k and WMT English-French datasets, without using even a single parallel sentence at training time. | accepted-poster-papers | This work presents some of the first results on unsupervised neural machine translation. The group of reviewers is highly knowledgeable in machine translation; they were generally very impressed by the results and think the work warrants a whole new area of research, noting that "the fact that this is possible at all is remarkable." There were some concerns with the clarity of the details presented and how the work might be reproduced, but it seems much of this was cleared up in the discussion. The reviewers generally praise the thoroughness of the method, the experimental clarity, and the use of ablations. One reviewer was less impressed and felt more comparisons should be done. | val | [
"B1POjpKef",
"HJlJ_aqgf",
"r1uaaZRxf",
"BkJl1m6mf",
"rkFvP_9mG",
"Sy8bGbVmf",
"BJd4kZNmz",
"r19B0xEmM",
"SJNmPo-Xf",
"Sk7cfh-zG",
"BJQe95agz",
"Hy59njgkG",
"rJPiWDkJf",
"SkrSo6EC-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"public",
"author",
"author",
"public",
"public"
] | [
"This paper describes an approach to train a neural machine translation system without parallel data. Starting from a word-to-word translation lexicon, which was also learned with unsupervised methods, this approach combines a denoising auto-encoder objective with a back-translation objective, both in two translati... | [
8,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"Sk7cfh-zG",
"HJlJ_aqgf",
"r1uaaZRxf",
"B1POjpKef",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ",
"rJPiWDkJf",
"SkrSo6EC-",
"iclr_2018_rkYTTf-AZ",
"iclr_2018_rkYTTf-AZ"
] |
iclr_2018_HkAClQgA- | A Deep Reinforced Model for Abstractive Summarization | Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that attends over the input and continuously generated output separately, and a new training method that combines standard supervised word prediction and reinforcement learning (RL).
Models trained only with supervised learning often exhibit "exposure bias" - they assume ground truth is provided at each step during training.
However, when standard word prediction is combined with the global sequence prediction training of RL the resulting summaries become more readable.
We evaluate this model on the CNN/Daily Mail and New York Times datasets. Our model obtains a 41.16 ROUGE-1 score on the CNN/Daily Mail dataset, an improvement over previous state-of-the-art models. Human evaluation also shows that our model produces higher quality summaries. | accepted-poster-papers | This work extends recent ideas to build a complete summarization system using clever attention, copying, and RL training. Reviewers like the work but have some criticisms, particularly in terms of its originality and potential significance, noting "It is a good incremental research, but the downside of this paper is lack of innovations since most of the methods proposed in this paper are not new to us." Still, reviewers note the experimental results are of high quality, performing excellently on several datasets and building "a strong summarization model." Furthermore, the model is extensively tested, including "human readability and relevance assessments". The work itself is well written and clear. | train | [
"ryxZURtlf",
"HyzQdZqez",
"BkQAkH5lM",
"S1nwDXnQM",
"B1c32sZmM",
"SyqgjnIWM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"public"
] | [
"The paper proposes a model for abstractive document summarization using a self-critical policy gradient training algorithm, which is mixed with maximum likelihood objective. The Seq2seq architecture incorporates both intra-temporal and intra-decoder attention, and a pointer copying mechanism. A hard constraint is ... | [
8,
7,
6,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1
] | [
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-",
"iclr_2018_HkAClQgA-"
] |
iclr_2018_BJRZzFlRb | Compressing Word Embeddings via Deep Compositional Code Learning | Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to construct the embeddings with few basis vectors. For each word, the composition of basis vectors is determined by a hash code. To maximize the compression rate, we adopt the multi-codebook quantization approach instead of binary coding scheme. Each code is composed of multiple discrete numbers, such as (3, 2, 1, 8), where the value of each component is limited to a fixed range. We propose to directly learn the discrete codes in an end-to-end neural network by applying the Gumbel-softmax trick. Experiments show the compression rate achieves 98% in a sentiment analysis task and 94% ~ 99% in machine translation tasks without performance loss. In both tasks, the proposed method can improve the model performance by slightly lowering the compression rate. Compared to other approaches such as character-level segmentation, the proposed method is language-independent and does not require modifications to the network architecture. | accepted-poster-papers | This paper proposes an offline neural method using the Gumbel-softmax (Concrete) trick to learn a sparse codebook for use in NLP tasks such as sentiment analysis and MT. The method outperforms pruning and other sparse coding methods, and also produces somewhat interpretable codes. Reviewers found the paper to be simple, clear, and effective. There was particular praise for the strength of the results and the practicality of application. There were some limitations, such as the method only being applicable to input layers and not being applicable end-to-end.
The authors also did a very admirable job of responding to questions about analysis with clear and comprehensive additional experiments. | train | [
"rk0hvx5xf",
"SyrG5UJ-G",
"ryIqDgXbz",
"Sk5cTlq7f",
"rJL04SDXf",
"SyW6J-AWM",
"By5oXODZG",
"Bk0C4WEZz",
"ry-Ird7Zf",
"HybsIIQbz",
"ryJCJQXZf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"official_reviewer",
"public",
"public",
"public",
"public"
] | [
"This paper proposed a new method to compress the space complexity of word embedding vectors by introducing summation composition over a limited number of basis vectors, and representing each embedding as a list of the basis indices. The proposed method can reduce more than 90% memory consumption while keeping orig... | [
8,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJRZzFlRb",
"iclr_2018_BJRZzFlRb",
"iclr_2018_BJRZzFlRb",
"rJL04SDXf",
"iclr_2018_BJRZzFlRb",
"By5oXODZG",
"ryJCJQXZf",
"ryIqDgXbz",
"ryJCJQXZf",
"rk0hvx5xf",
"iclr_2018_BJRZzFlRb"
] |
iclr_2018_SkhQHMW0W | Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training | Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile. | accepted-poster-papers | This work proposes a hybrid system for large-scale distributed and federated training of commonly used deep networks. This problem is of broad interest and these methods have the potential to be significantly impactful, as is attested by the active and interesting discussion on this work. At first there were questions about the originality of this study, but it seems that the authors have now added extra references and comparisons.
Reviewers were split about the clarity of the paper itself: one notes that it is "on the whole clearly presented", but another finds it too dense and disorganized, in need of clearer explanation. Reviewers were also concerned that the methods were a bit heuristic and could benefit from more details. There were also many questions about these details in the discussion forum; these should make it into the next version. The standout aspect of the work was the experimental results, which reviewers call "thorough" and note are convincing. | train | [
"SkwY9v4VG",
"rJ9crpElM",
"rJmrmQ5lG",
"B1lk3Ojxf",
"B1HmMiTXz",
"ByemeWoMz",
"S1te8dFmM",
"H1eVtcdff",
"H1o6dcuff",
"HkRogNzGf",
"Byn_brkfG",
"S1mTqdWzM",
"r1U-Go0WG",
"r13BWoAZM",
"rknhZsRbf",
"HJbNlsRbM",
"S1SGgoA-f",
"HySlhQ1GM",
"ryuVTiCWG",
"BkCsfi0bG",
"H18nes0Wz",
"... | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"official_reviewer",
"author",
"author",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"official_reviewer",
"author",
"author",
"author",... | [
"Do the four rows in Table 2 (#GPUs in total = 4, 8, 16, 32) correspond to 1, 2, 4 and 8 training nodes? Could you please also say what is the compression ratio for these four cases? Thank you.",
"I think this is a good work that I am sure will have some influence in the near future. I think it should be accepted... | [
-1,
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SkhQHMW0W",
"iclr_2018_SkhQHMW0W",
"iclr_2018_SkhQHMW0W",
"iclr_2018_SkhQHMW0W",
"ryuVTiCWG",
"Byn_brkfG",
"r1U-Go0WG",
"HkRogNzGf",
"S1mTqdWzM",
"iclr_2018_SkhQHMW0W",
"HySlhQ1GM",
"BkCsfi0bG",
"rJ9crpElM",
"Bk5Gd4sxf",
"B1lk3Ojxf",
"S1SGgoA-f",
"SkhtLE9Zz",
"iclr_2018_... |
iclr_2018_B14TlG-RW | QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension | Current end-to-end machine reading and question answering (Q\&A) models are primarily based on recurrent neural networks (RNNs) with attention. Despite their success, these models are often slow for both training and inference due to the sequential nature of RNNs. We propose a new Q\&A architecture called QANet, which does not require recurrent networks: Its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions. On the SQuAD dataset, our model is 3x to 13x faster in training and 4x to 9x faster in inference, while achieving equivalent accuracy to recurrent models. The speed-up gain allows us to train the model with much more data. We hence combine our model with data generated by backtranslation from a neural machine translation model.
On the SQuAD dataset, our single model, trained with augmented data, achieves 84.6 F1 score on the test set, which is significantly better than the best published F1 score of 81.8. | accepted-poster-papers | This work replaces the RNN layers of a SQuAD model with self-attention and convolution, achieving a large speed-up and performance gains, particularly with data augmentation. The work is mostly clearly presented; one reviewer found it "well-written", although there was a complaint that the work did not clearly separate out the novel aspects. In terms of results the work is clearly of high quality, producing top numbers on the shared task. There were some initial complaints about only using the SQuAD dataset, but the authors have now included additional results that diversify the experiments. Perhaps the largest concern is novelty: the idea of non-RNN self-attention is now widely known, and there are several systems applying it. Reviewers felt that while this system does it well, it is perhaps less novel or significant than other possible work. | train | [
"B1lKKF8Bz",
"H1hw3bgSz",
"S1yRD584M",
"Hkx2Bz9lM",
"rycJHDIgf",
"Hyqx3y5xz",
"r1xzqspQM",
"BJgkoz9Xz",
"ryYnnsdXG",
"ByOo9od7M",
"HkxgcsdQG",
"Hy4sFiuQM",
"H1d6uidmf",
"rkOlY3WXf",
"BkCXSXOyf",
"HJg2Fk_yf"
] | [
"public",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Thank you for your paper! We really liked your approach to accelerate inference and training times in QA. \nI have one question regarding the comparison with BiDAF. On the article, you mention that you batched the training examples by paragraph length in your model, but it is not clear whether you did the same for... | [
-1,
-1,
-1,
8,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
5,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"BJgkoz9Xz",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"Hy4sFiuQM",
"rkOlY3WXf",
"rycJHDIgf",
"Hyqx3y5xz",
"Hkx2Bz9lM",
"iclr_2018_B14TlG-RW",
"iclr_2018_B14TlG-RW",
"HJg2Fk_yf",
"iclr_2018_... |
iclr_2018_Sy2ogebAW | Unsupervised Neural Machine Translation | In spite of the recent success of neural machine translation (NMT) in standard benchmarks, the lack of large parallel corpora poses a major practical problem for many language pairs. There have been several proposals to alleviate this issue with, for instance, triangulation and semi-supervised learning techniques, but they still require a strong cross-lingual signal. In this work, we completely remove the need of parallel data and propose a novel method to train an NMT system in a completely unsupervised manner, relying on nothing but monolingual corpora. Our model builds upon the recent work on unsupervised embedding mappings, and consists of a slightly modified attentional encoder-decoder model that can be trained on monolingual corpora alone using a combination of denoising and backtranslation. Despite the simplicity of the approach, our system obtains 15.56 and 10.21 BLEU points in WMT 2014 French-to-English and German-to-English translation. The model can also profit from small parallel corpora, and attains 21.81 and 15.24 points when combined with 100,000 parallel sentences, respectively. Our implementation is released as an open source project. | accepted-poster-papers | This work presents new results on unsupervised machine translation using a clever combination of techniques. In terms of originality, the reviewers find that the paper over-claims, and promises a breakthrough, which they do not feel is justified.
However, there is "more than enough new content" and "preliminary" results on a new task. The experimental quality also has some issues: there is a lack of good qualitative analysis, and reviewers felt the claims about semi-supervised work had issues. Still, the main number is a good start, and the authors are correct to note that there is another work with similarly promising results. Of the two works, the reviewers found the other more clearly written and with better experimental analysis, noting that both over-claim in terms of novelty. The most promising aspect of the work will likely be the significance of this task going forward, as there is now more interest in the use of multilingual embeddings and NMT as a benchmark task. | train | [
"BkW3sl8Nz",
"B1pZoxIEf",
"rJdm42ENG",
"SkBWbhN4M",
"S1jAR0Klf",
"SyniKeceM",
"S1BhMb5lG",
"B1sSDPTXf",
"SJHB8vaXM",
"Sy-0ez6MG",
"HkiSezafG",
"ByJTJzaMM",
"r1NHyGpGG",
"BkH86b6fM",
"B1SawIffz"
] | [
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"While it is true that we do not analyze any specific linguistic phenomenon in depth, note that our experiments already show that the system is not working like a \"word-for-word gloss\" as speculated in the comment: the baseline system is precisely a word-for-word gloss, and the proposed method beats it with a con... | [
-1,
-1,
-1,
-1,
6,
5,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rJdm42ENG",
"SkBWbhN4M",
"r1NHyGpGG",
"r1NHyGpGG",
"iclr_2018_Sy2ogebAW",
"iclr_2018_Sy2ogebAW",
"iclr_2018_Sy2ogebAW",
"iclr_2018_Sy2ogebAW",
"B1SawIffz",
"iclr_2018_Sy2ogebAW",
"S1BhMb5lG",
"r1NHyGpGG",
"SyniKeceM",
"S1jAR0Klf",
"iclr_2018_Sy2ogebAW"
] |
iclr_2018_BkwHObbRZ | Learning One-hidden-layer Neural Networks with Landscape Design | We consider the problem of learning a one-hidden-layer neural network: we assume the input x is from Gaussian distribution and the label y=aσ(Bx)+ξ, where a is a nonnegative vector and B is a full-rank weight matrix, and ξ is a noise vector. We first give an analytic formula for the population risk of the standard squared loss and demonstrate that it implicitly attempts to decompose a sequence of low-rank tensors simultaneously.
Inspired by the formula, we design a non-convex objective function G whose landscape is guaranteed to have the following properties:
1. All local minima of G are also global minima.
2. All global minima of G correspond to the ground truth parameters.
3. The value and gradient of G can be estimated using samples.
With these properties, stochastic gradient descent on G provably converges to the global minimum and learns the ground-truth parameters. We also prove finite sample complexity results and validate the results by simulations. | accepted-poster-papers | I recommend acceptance based on the reviews. The paper makes novel contributions to learning one-hidden-layer neural networks and to designing a new objective function with no bad local optima.
There is one point that the paper is missing: it only mentions Janzamin et al. in passing. Janzamin et al. propose using the score function framework for designing an alternative objective function. For the case of Gaussian input that this paper considers, the score function reduces to Hermite polynomials. The lack of discussion of this connection is surprising; there should be proper acknowledgement of prior work. Also missing are some of the key papers on tensor decomposition and its analysis.
I think there are enough contributions in the paper for acceptance irrespective of the above aspect. | test | [
"ry5GSRHNf",
"rk0Ek5vgM",
"SyfsN8tef",
"SJjI7pKlz",
"HkLFssYfM",
"H1jVcsFMf",
"BkFFKsKfz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for the review again!\n\nWe apologize that we didn't know that the paper was expected to be updated. We just added the results for sigmoid that answers the question \"does sigmoid suffer from the same problem?\" as we claimed in the response before. Please see page 9, figure 2 in the current version. \n\nWe... | [
-1,
6,
9,
7,
-1,
-1,
-1
] | [
-1,
3,
3,
3,
-1,
-1,
-1
] | [
"rk0Ek5vgM",
"iclr_2018_BkwHObbRZ",
"iclr_2018_BkwHObbRZ",
"iclr_2018_BkwHObbRZ",
"rk0Ek5vgM",
"SyfsN8tef",
"SJjI7pKlz"
] |
iclr_2018_SysEexbRb | Critical Points of Linear Neural Networks: Analytical Forms and Landscape Properties | Due to the success of deep learning in solving a variety of challenging machine learning tasks, there is a rising interest in understanding loss functions for training neural networks from a theoretical aspect. Particularly, the properties of critical points and the landscape around them are of importance to determine the convergence performance of optimization algorithms. In this paper, we provide a necessary and sufficient characterization of the analytical forms for the critical points (as well as global minimizers) of the square loss functions for linear neural networks. We show that the analytical forms of the critical points characterize the values of the corresponding loss functions as well as the necessary and sufficient conditions to achieve global minimum. Furthermore, we exploit the analytical forms of the critical points to characterize the landscape properties for the loss functions of linear neural networks and shallow ReLU networks. One particular conclusion is that while the loss function of linear networks has no spurious local minimum, the loss function of one-hidden-layer nonlinear networks with ReLU activation function does have local minima that are not global minima. | accepted-poster-papers | I recommend acceptance based on the positive reviews. The paper analyzes critical points for linear neural networks and shallow ReLU networks. Getting a characterization of critical points for shallow ReLU networks is a great first step. | train | [
"S1BRtK8EM",
"Hyyw_tH4z",
"S1aEzCJxG",
"ryOWEcdlM",
"SJ6btV9gz",
"BydBmeLQf",
"H1oDUDefG",
"rJye8vezz",
"SkBt4Dgff"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I am satisfied with the authors response and maintain my rating and acceptance recommendation.",
"Thanks for the clarification. Most of my concerns are addressed. An anonymous reviewer raised a concern about the overlap with existing work, Li et al. 2016b. The authors' comments about this related work sound ok t... | [
-1,
-1,
7,
7,
6,
-1,
-1,
-1,
-1
] | [
-1,
-1,
3,
5,
4,
-1,
-1,
-1,
-1
] | [
"rJye8vezz",
"SkBt4Dgff",
"iclr_2018_SysEexbRb",
"iclr_2018_SysEexbRb",
"iclr_2018_SysEexbRb",
"iclr_2018_SysEexbRb",
"S1aEzCJxG",
"ryOWEcdlM",
"SJ6btV9gz"
] |
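The "no spurious local minima" conclusion for linear networks in the abstract above is easy to probe numerically. The sketch below (all sizes, seed, and learning rate are my own illustrative choices, not from the paper) runs plain gradient descent on the square loss of a two-layer linear network and checks that it reaches the global least-squares optimum:

```python
import numpy as np

# Toy check: GD on f(x) = W2 @ W1 @ x with squared loss. Since the landscape
# of linear networks has no spurious local minima, gradient descent from a
# generic small init should find the global optimum W2 @ W1 = A_true.
rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 200, 5, 5, 3

X = rng.normal(size=(n, d_in))
A_true = rng.normal(size=(d_out, d_in))
Y = X @ A_true.T                         # noiseless linear targets

W1 = 0.1 * rng.normal(size=(d_hid, d_in))
W2 = 0.1 * rng.normal(size=(d_out, d_hid))

lr = 1e-2
for _ in range(10000):
    E = X @ (W2 @ W1).T - Y              # residuals, shape (n, d_out)
    G = E.T @ X / n                      # grad of 0.5*mean loss wrt (W2 @ W1)
    gW2 = G @ W1.T                       # chain rule through the two factors
    gW1 = W2.T @ G
    W2 -= lr * gW2
    W1 -= lr * gW1

final_loss = 0.5 * np.mean(np.sum((X @ (W2 @ W1).T - Y) ** 2, axis=1))
```

With noiseless targets and full-rank data, the end-to-end product recovers A_true up to numerical error.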
iclr_2018_rJm7VfZA- | Learning Parametric Closed-Loop Policies for Markov Potential Games | Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We study a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications when the agents share some common resource. We consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards. Previous analysis followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints); or considered deterministic dynamics and provided open-loop (OL) analysis, studying strategies that consist of predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider parametric policies that depend on the current state and where agents adapt to stochastic transitions. We provide easily verifiable, sufficient and necessary conditions for a stochastic game to be an MPG, even for complex parametric functions (e.g., deep neural networks); and show that a closed-loop Nash equilibrium (NE) can be found (or at least approximated) by solving a related optimal control problem (OCP). This is useful since solving an OCP---which is a single-objective problem---is usually much simpler than solving the original set of coupled OCPs that form the game---which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which gives no approximate solution if no NE belongs to the chosen parametric family, and which is practical only for simple parametric forms. We illustrate the theoretical contributions with an example by applying our approach to a noncooperative communications engineering game.
We then solve the game with a deep reinforcement learning algorithm that learns policies that closely approximate an exact variational NE of the game. | accepted-poster-papers | The paper considers Markov potential games (MPGs), where the agents share some common resource. They consider MPGs with continuous state-action variables, coupled constraints and nonconvex rewards, which is novel. The reviews are all positive and point out the novel contributions in the paper. | train | [
"BJLGKD8Mz",
"BkVvEP5gM",
"BJZ6A-clG",
"H1iE5-jQG",
"ry7CMFMmz",
"S1UU_tGQz",
"B1JnBtzXf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"While it is not very surprising that in a potential game it is easy to find Nash equilibria (compare to normal form static games, in which local maxima of the potential are pure Nash equilibria), the idea of approaching these stochastic games from this direction is novel and potentially (no pun intended) fruitful.... | [
7,
6,
6,
-1,
-1,
-1,
-1
] | [
2,
3,
1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJm7VfZA-",
"iclr_2018_rJm7VfZA-",
"iclr_2018_rJm7VfZA-",
"iclr_2018_rJm7VfZA-",
"BkVvEP5gM",
"BJZ6A-clG",
"BJLGKD8Mz"
] |
iclr_2018_SyProzZAW | The power of deeper networks for expressing natural functions | It is well-known that neural networks are universal approximators, but that deeper networks tend in practice to be more powerful than shallower ones. We shed light on this by proving that the total number of neurons m required to approximate natural classes of multivariate polynomials of n variables grows only linearly with n for deep neural networks, but grows exponentially when merely a single hidden layer is allowed. We also provide evidence that when the number of hidden layers is increased from 1 to k, the neuron requirement grows exponentially not with n but with n^{1/k}, suggesting that the minimum number of layers required for practical expressibility grows only logarithmically with n. | accepted-poster-papers | All the reviewers agree on the significance of the topic of understanding the expressivity of deep networks. This paper makes good progress in analyzing the ability of deep networks to fit multivariate polynomials. They show an exponential depth advantage for general sparse polynomials.
I am very surprised that the paper misses the original contribution of Andrew Barron. He analyzes the size of the shallow neural networks needed to fit a wide class of functions including polynomials. The deep learning community likes to think that everything has been invented in the current decade.
@article{barron1994approximation,
title={Approximation and estimation bounds for artificial neural networks},
author={Barron, Andrew R},
journal={Machine Learning},
volume={14},
number={1},
pages={115--133},
year={1994},
publisher={Springer}
} | train | [
"HJqsNbFez",
"S1z1Zf9xM",
"B1B65zqef",
"HkvXDF2XM",
"HyaCBK3XG",
"B1ebHYnXG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Experimental results have shown that deep networks (many hidden layers) can approximate more complicated functions with less neurons compared to shallow (single hidden layer) networks. \nThis paper gives an explicit proof when the function in question is a sparse polynomial, ie: a polynomial in n variables, which ... | [
7,
6,
6,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_SyProzZAW",
"iclr_2018_SyProzZAW",
"iclr_2018_SyProzZAW",
"HJqsNbFez",
"S1z1Zf9xM",
"B1B65zqef"
] |
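The depth advantage for polynomials in the abstract above can be made concrete with a standard gadget (my own illustration, not the paper's exact construction): a product can be computed by a constant number of quadratic units via x*y = ((x+y)^2 - (x-y)^2)/4, so a balanced binary tree of such gadgets computes x_1...x_n with O(n) units in O(log n) layers, whereas a single hidden layer provably needs exponentially many units.

```python
import numpy as np

def quad_mult(x, y):
    """Multiply via two quadratic 'neurons' acting on linear combinations."""
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0

def tree_product(values):
    """Binary tree of quad_mult gadgets; depth grows like log2(len(values))."""
    layer = list(values)
    while len(layer) > 1:
        nxt = [quad_mult(layer[i], layer[i + 1]) for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:            # odd element passes through to the next layer
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

vals = np.array([1.5, -2.0, 0.5, 3.0, -1.0, 2.0, 0.25, 4.0])
prod = tree_product(vals)             # equals np.prod(vals)
```

Eight inputs need 7 gadgets (14 quadratic units) in 3 multiplication layers; the count scales linearly with n.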
iclr_2018_B1QgVti6Z | Empirical Risk Landscape Analysis for Understanding Deep Neural Networks | This work aims to provide comprehensive landscape analysis of empirical risk in deep neural networks (DNNs), including the convergence behavior of its gradient, its stationary points and the empirical risk itself to their corresponding population counterparts, which reveals how various network parameters determine the convergence performance. In particular, for an l-layer linear neural network consisting of d_i neurons in the i-th layer, we prove the gradient of its empirical risk uniformly converges to the one of its population risk, at the rate of O(r^{2l} \sqrt{l \max_i d_i \, s \log(d/l)/n}). Here d is the total weight dimension, s is the number of nonzero entries of all the weights and the magnitude of weights per layer is upper bounded by r. Moreover, we prove the one-to-one correspondence of the non-degenerate stationary points between the empirical and population risks and provide a convergence guarantee for each pair. We also establish the uniform convergence of the empirical risk to its population counterpart and further derive the stability and generalization bounds for the empirical risk. In addition, we analyze these properties for deep \emph{nonlinear} neural networks with sigmoid activation functions. We prove similar results for the convergence behavior of their empirical risk gradients, non-degenerate stationary points as well as the empirical risk itself.
To the best of our knowledge, this work is the first to theoretically characterize the uniform convergence of the gradient and stationary points of the empirical risk of DNN models, which benefits the theoretical understanding of how the neural network depth l, the layer width d_i, the network size d, the sparsity in weight and the parameter magnitude r determine the neural network landscape. | accepted-poster-papers | Based on the positive reviews, I recommend acceptance. The paper analyzes when the empirical risk is close to the population version, when empirical saddle points are close to the population version and when empirical gradients are close to the population version. | train | [
"H1Wo7pKgM",
"BJGc-k9xG",
"r13F3TRbM",
"S1T8deYMz",
"S1mZuxYGf",
"S14tDgFGG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public"
] | [
"This paper studies empirical risk in deep neural networks. Results are provided in Section 4 for linear networks and in Section 5 for nonlinear networks.\nResults for deep linear neural networks are puzzling. Whatever the number of layers, a deep linear NN is simply a matrix multiplication and minimizing the MSE i... | [
3,
7,
7,
-1,
-1,
-1
] | [
3,
3,
3,
-1,
-1,
-1
] | [
"iclr_2018_B1QgVti6Z",
"iclr_2018_B1QgVti6Z",
"iclr_2018_B1QgVti6Z",
"r13F3TRbM",
"BJGc-k9xG",
"H1Wo7pKgM"
] |
iclr_2018_Hk9Xc_lR- | On the Discrimination-Generalization Tradeoff in GANs | Generative adversarial training can be generally understood as minimizing a certain moment matching loss defined by a set of discriminator functions, typically neural networks. The discriminator set should be large enough to be able to uniquely identify the true distribution (discriminative), and also be small enough to go beyond memorizing samples (generalizable). In this paper, we show that a discriminator set is guaranteed to be discriminative whenever its linear span is dense in the set of bounded continuous functions. This is a very mild condition satisfied even by neural networks with a single neuron. Further, we develop generalization bounds between the learned distribution and true distribution under different evaluation metrics. When evaluated with neural distance, our bounds show that generalization is guaranteed as long as the discriminator set is small enough, regardless of the size of the generator or hypothesis set. When evaluated with KL divergence, our bound provides an explanation of the counter-intuitive behaviors of testing likelihood in GAN training. Our analysis sheds light on understanding the practical performance of GANs. | accepted-poster-papers | I recommend acceptance. The two positive reviews point out the theoretical contributions. The authors have responded extensively to the negative review and I see no serious flaw as claimed by the negative review. | train | [
"HJjLXT4gM",
"ByyV3Atez",
"SyRq3ukMf",
"HySPX0V-f",
"BJJee_aXz",
"Skfk1PT7z",
"rkqxkP6XG",
"Hyz1hBp7M",
"rkwy_hVbG",
"S1FLYnVbf",
"S1eOo0sez",
"r16zP8jlG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"== Paper Summary ==\nThe paper addresses the problem of balancing capacities of generator and discriminator classes in generative adversarial nets (GANs) from purely theoretical (function analytical and statistical learning) perspective. In my point of view, the main *novel* contributions are: \n(a) Conditions on ... | [
6,
3,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Hk9Xc_lR-",
"iclr_2018_Hk9Xc_lR-",
"iclr_2018_Hk9Xc_lR-",
"ByyV3Atez",
"SyRq3ukMf",
"HJjLXT4gM",
"HJjLXT4gM",
"iclr_2018_Hk9Xc_lR-",
"ByyV3Atez",
"ByyV3Atez",
"r16zP8jlG",
"iclr_2018_Hk9Xc_lR-"
] |
iclr_2018_SyZI0GWCZ | Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models | Many machine learning algorithms are vulnerable to almost imperceptible perturbations of their inputs. So far it was unclear how much risk adversarial perturbations carry for the safety of real-world machine learning applications because most methods used to generate such perturbations rely either on detailed model information (gradient-based attacks) or on confidence scores such as class probabilities (score-based attacks), neither of which are available in most real-world scenarios. In many such cases one currently needs to retreat to transfer-based attacks which rely on cumbersome substitute models, need access to the training data and can be defended against. Here we emphasise the importance of attacks which solely rely on the final model decision. Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks. Previous attacks in this category were limited to simple models or simple datasets. Here we introduce the Boundary Attack, a decision-based attack that starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial. The attack is conceptually simple, requires close to no hyperparameter tuning, does not rely on substitute models and is competitive with the best gradient-based attacks in standard computer vision tasks like ImageNet. We apply the attack on two black-box algorithms from Clarifai.com. The Boundary Attack in particular and the class of decision-based attacks in general open new avenues to study the robustness of machine learning models and raise new questions regarding the safety of deployed machine learning systems. 
An implementation of the attack is available as part of Foolbox (https://github.com/bethgelab/foolbox). | accepted-poster-papers | The reviewers all agree this is a well-written and interesting paper describing a novel black-box adversarial attack. There were missing relevant references in the original submission, but these have been added. I would suggest the authors follow the reviewer suggestions on claims of generality beyond CNNs; although there may not be anything obvious stopping this method from working more generally, it hasn't been tested in this work. Even if you keep the title, you might be more careful to frame the body in the context of CNNs. | test | [
"BkP-T1qgM",
"SyWXJWqgf",
"HJ3OcT3gG",
"S1lew6QNf",
"rJFt8lM4f",
"SkqcaKqfM",
"Bk06LctMz",
"SJcYIcFzf",
"ryCQLcYzG",
"Bkw25o8GM",
"r1uU2GiZM",
"HkPCcEuWM",
"SJvgtpHeM",
"SySvBkSJf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author",
"public"
] | [
"The authors identify a new security threat for deep learning: Decision-based adversarial attacks. This new class of attacks on deep learning systems requires from an attacker only the knowledge of class labels (previous attacks required more information, e.g., access to a gradient oracle). Unsurprisingly, since th... | [
7,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SyZI0GWCZ",
"iclr_2018_SyZI0GWCZ",
"iclr_2018_SyZI0GWCZ",
"rJFt8lM4f",
"iclr_2018_SyZI0GWCZ",
"iclr_2018_SyZI0GWCZ",
"BkP-T1qgM",
"SyWXJWqgf",
"HJ3OcT3gG",
"r1uU2GiZM",
"iclr_2018_SyZI0GWCZ",
"SJvgtpHeM",
"SySvBkSJf",
"iclr_2018_SyZI0GWCZ"
] |
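The Boundary Attack described in the abstract above can be sketched in a few lines on a toy black-box classifier (the classifier, step sizes, and schedule here are my own simplifications, not the paper's tuned hyperparameters): start from a point that is already misclassified, alternate random near-orthogonal steps with small contractions toward the original input, and reject any proposal that leaves the adversarial region, querying only the model's final decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def decide(x):
    """Black-box decision oracle: class 1 iff the point lies above x0 + x1 = 1."""
    return int(x[0] + x[1] > 1.0)

original = np.array([0.2, 0.2])       # classified 0; we want a nearby class-1 point
adv = np.array([2.0, 2.0])            # large initial adversarial perturbation
assert decide(adv) != decide(original)

delta, eps = 0.3, 0.1                 # orthogonal / inward step sizes (illustrative)
for _ in range(2000):
    direction = original - adv
    dist = np.linalg.norm(direction)
    step = rng.normal(size=2)         # random proposal...
    step -= (step @ direction) / dist**2 * direction   # ...projected near-orthogonal
    candidate = adv + delta * dist * step / (np.linalg.norm(step) + 1e-12)
    candidate = candidate + eps * (original - candidate)  # contract toward original
    if decide(candidate) != decide(original):             # keep only adversarial points
        adv = candidate

final_dist = np.linalg.norm(adv - original)
```

The accepted iterates walk along the decision boundary toward the original input; for this toy oracle the true minimal perturbation is (1 - 0.4)/sqrt(2) ≈ 0.424, which the distance approaches from above but cannot cross.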
iclr_2018_rJQDjk-0b | Unbiased Online Recurrent Optimization | The novel \emph{Unbiased Online Recurrent Optimization} (UORO) algorithm allows for online learning of general recurrent computational graphs such as recurrent network models. It works in a streaming fashion and avoids backtracking through past activations and inputs. UORO is computationally as costly as \emph{Truncated Backpropagation Through Time} (truncated BPTT), a widespread algorithm for online learning of recurrent networks \cite{jaeger2002tutorial}. UORO is a modification of \emph{NoBackTrack} \cite{DBLP:journals/corr/OllivierC15} that bypasses the need for model sparsity and makes implementation easy in current deep learning frameworks, even for complex models. Like NoBackTrack, UORO provides unbiased gradient estimates; unbiasedness is the core hypothesis in stochastic gradient descent theory, without which convergence to a local optimum is not guaranteed. On the contrary, truncated BPTT does not provide this property, leading to possible divergence. On synthetic tasks where truncated BPTT is shown to diverge, UORO converges. For instance, when a parameter has a positive short-term but negative long-term influence, truncated BPTT diverges unless the truncation span is very significantly longer than the intrinsic temporal range of the interactions, while UORO performs well thanks to the unbiasedness of its gradients.
 | accepted-poster-papers | The reviewers agree that the proposed method is theoretically interesting, but disagree on whether it has been properly experimentally validated. My view is that the theoretical contribution is interesting enough to warrant inclusion in the conference, and so I will err on the side of accepting. | val | [
"B1Ud2yqgz",
"B1QPFb5eG",
"S1zt98K4z",
"r11STCqxG",
"rk7gF0cXG",
"SyaSrR5Xf",
"SJHpNAqQG",
"H1thm0c7G"
] | [
"official_reviewer",
"official_reviewer",
"public",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors introduce a novel approach to online learning of the parameters of recurrent neural networks from long sequences that overcomes the limitation of truncated backpropagation through time (BPTT) of providing biased gradient estimates.\n\nThe idea is to use a forward computation of the gradient as in Willi... | [
6,
7,
-1,
8,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"iclr_2018_rJQDjk-0b",
"B1Ud2yqgz",
"B1QPFb5eG",
"r11STCqxG"
] |
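The core trick that lets UORO stay unbiased with O(size) memory can be demonstrated in isolation (variable names and sizes are mine): a sum of rank-one terms sum_i v_i w_i^T is replaced by the single rank-one product (sum_i rho_i v_i)(sum_i rho_i w_i)^T with independent random signs rho_i = +/-1; the cross terms cancel in expectation, so the estimator is unbiased even though it never stores the full sum.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n, m = 5, 4, 3
V = rng.normal(size=(k, n))           # rows play the role of the v_i
W = rng.normal(size=(k, m))           # rows play the role of the w_i
exact = V.T @ W                       # sum_i v_i w_i^T, shape (n, m)

def rank_one_estimate():
    """Unbiased rank-one surrogate for the full sum of outer products."""
    rho = rng.choice([-1.0, 1.0], size=k)
    return np.outer(rho @ V, rho @ W)

avg = np.mean([rank_one_estimate() for _ in range(100000)], axis=0)
err = np.max(np.abs(avg - exact))     # shrinks like 1/sqrt(num_samples)
```

A single estimate is noisy, but averaging many of them recovers the exact matrix, which is the unbiasedness property the paper relies on for SGD convergence.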
iclr_2018_ryup8-WCW | Measuring the Intrinsic Dimension of Objective Landscapes | Many recently trained neural networks employ large numbers of parameters to achieve good performance. One may intuitively use the number of parameters required as a rough gauge of the difficulty of a problem. But how accurate are such notions? How many parameters are really needed? In this paper we attempt to answer this question by training networks not in their native parameter space, but instead in a smaller, randomly oriented subspace. We slowly increase the dimension of this subspace, note at which dimension solutions first appear, and define this to be the intrinsic dimension of the objective landscape. The approach is simple to implement, computationally tractable, and produces several suggestive conclusions. Many problems have smaller intrinsic dimensions than one might suspect, and the intrinsic dimension for a given dataset varies little across a family of models with vastly different sizes. This latter result has the profound implication that once a parameter space is large enough to solve a problem, extra parameters serve directly to increase the dimensionality of the solution manifold. Intrinsic dimension allows some quantitative comparison of problem difficulty across supervised, reinforcement, and other types of learning where we conclude, for example, that solving the inverted pendulum problem is 100 times easier than classifying digits from MNIST, and playing Atari Pong from pixels is about as hard as classifying CIFAR-10. In addition to providing new cartography of the objective landscapes wandered by parameterized models, the method is a simple technique for constructively obtaining an upper bound on the minimum description length of a solution. A byproduct of this construction is a simple approach for compressing networks, in some cases by more than 100 times. 
| accepted-poster-papers | The authors make an empirical study of the "dimension" of a neural net optimization problem, where the "dimension" is defined by the minimal random linear parameter subspace dimension where a (near) solution to the problem is likely to be found. I agree with reviewers that in light of the authors' revisions, the results are interesting enough to be presented at the conference. | train | [
"B1IwI-2xz",
"BkJsM2vgf",
"BJva6gOgM",
"SJohldaXz",
"HkDPl_aXG",
"S1e7luTQM",
"SJ1yeuTmM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes an empirical measure of the intrinsic dimensionality of a neural network problem. Taking the full dimensionality to be the total number of parameters of the network model, the authors assess intrinsic dimensionality by randomly projecting the network to a domain with fewer parameters (correspon... | [
7,
7,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ryup8-WCW",
"iclr_2018_ryup8-WCW",
"iclr_2018_ryup8-WCW",
"BkJsM2vgf",
"BJva6gOgM",
"SJ1yeuTmM",
"B1IwI-2xz"
] |
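The subspace-training measurement in the abstract above can be sketched on a toy quadratic problem (sizes and the planted "intrinsic dimension" are my own illustration): only d coordinates theta_d are trained, mapped through a fixed random matrix P so that theta = theta_0 + P @ theta_d. Here the loss 0.5*||A theta - b||^2 has a solution manifold of dimension D - 5 because A has rank 5, so a random subspace of dimension d >= 5 almost surely intersects it and training drives the loss to ~0.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d, r = 100, 10, 5

A = rng.normal(size=(r, D))                # rank-r problem: intrinsic dim ~ r
b = A @ rng.normal(size=D)                 # planted solution makes the loss attainable

theta0 = np.zeros(D)
P = rng.normal(size=(D, d)) / np.sqrt(d)   # fixed random basis, never trained
theta_d = np.zeros(d)                      # the only trainable parameters

def loss(t):
    res = A @ (theta0 + P @ t) - b
    return 0.5 * np.mean(res ** 2)

initial = loss(theta_d)
lr = 0.01
for _ in range(10000):
    res = A @ (theta0 + P @ theta_d) - b
    theta_d -= lr * P.T @ (A.T @ res) / r  # chain rule through the fixed projection

final = loss(theta_d)
```

Repeating this while sweeping d and noting where solutions first appear is the paper's intrinsic-dimension measurement; here 10 trained parameters suffice for a 100-parameter native space.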
iclr_2018_rkO3uTkAZ | Memorization Precedes Generation: Learning Unsupervised GANs with Memory Networks | We propose an approach to address two issues that commonly occur during training of unsupervised GANs. First, since GANs use only a continuous latent distribution to embed multiple classes or clusters of data, they often do not correctly handle the structural discontinuity between disparate classes in a latent space. Second, discriminators of GANs easily forget about past samples generated by generators, incurring instability during adversarial training. We argue that these two infamous problems of unsupervised GAN training can be largely alleviated by a learnable memory network which both generators and discriminators can access. Generators can effectively learn representations of training samples to understand the underlying cluster distributions of data, which eases the structural discontinuity problem. At the same time, discriminators can better memorize clusters of previously generated samples, which mitigates the forgetting problem. We propose a novel end-to-end GAN model named memoryGAN, which involves a memory network that is unsupervisedly trainable and integrable to many existing GAN models. With evaluations on multiple datasets such as Fashion-MNIST, CelebA, CIFAR10, and Chairs, we show that our model is probabilistically interpretable, and generates realistic image samples of high visual fidelity. The memoryGAN also achieves the state-of-the-art inception scores over unsupervised GAN models on the CIFAR10 dataset, without any optimization tricks and weaker divergences. | accepted-poster-papers |
I am going to recommend acceptance of this paper despite being worried about the issues raised by reviewer 1. In particular,
1: the best possible inception score would be obtained by copying the training dataset
2: the highest visual quality samples would be obtained by copying the training dataset
3: perturbations (in the hidden space of a convnet) of training data might not be perturbations in l2, and so one might not find a close nearest neighbor with an l2 search
4: it has been demonstrated in other works that perturbations of convnet features of training data (e.g. trained as auto-encoders) can make convincing "new samples"; or more generally, paths between nearby samples in the hidden space of a convnet can be convincing new samples.
These together suggest the possibility that the method presented is not necessarily doing a great job as a generative model or as a density model (it may be, we just can't tell...), but it is doing a good job at hacking the metrics (inception score, visual quality). This is not an issue with only this paper, and I do not want to punish the authors of this papers for the failings of the field; but this work, especially because of its explicit use of training examples in the memory, nicely exposes the deficiencies in our community's methodology for evaluating GANs and other generative models.
| train | [
"Bko3dzDlG",
"SyzkuzYxG",
"S1ck4rYxM",
"rkQUyPd7z",
"ryl76gumM",
"BJMceY9Mf",
"ryYDlFcfG",
"S16Vet5Mz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"In summary, the paper introduces a memory module to the GANs to address two existing problems: (1) no discrete latent structures and (2) the forgetting problem. The memory provides extra information for both the generation and the discrimination, compared with vanilla GANs. Based on my knowledge, the idea is novel... | [
6,
6,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkO3uTkAZ",
"iclr_2018_rkO3uTkAZ",
"iclr_2018_rkO3uTkAZ",
"ryl76gumM",
"BJMceY9Mf",
"Bko3dzDlG",
"SyzkuzYxG",
"S1ck4rYxM"
] |
iclr_2018_H1uR4GZRZ | Stochastic Activation Pruning for Robust Adversarial Defense | Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration. | accepted-poster-papers | This is a borderline paper. The reviewers are happy with the simplicity of the proposed method and the fact that it can be applied after training; but are concerned by the lack of theory explaining the results. I will recommend accepting, but I would ask the authors add the additional experiments they have promised, and would also suggest experiments on imagenet. | train | [
"ryrXQ4wyz",
"SJFnpOYxM",
"ry5D1Z5xf",
"HJvA3yQQG",
"rkk5517Qf",
"B1DQ5J77z",
"HJRkw1X7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper investigates a new approach to prevent a given classifier from adversarial examples. The most important contribution is that the proposed algorithm can be applied post-hoc to already trained networks. Hence, the proposed algorithm (Stochastic Activation Pruning) can be combined with algorithms which pre... | [
6,
7,
6,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1uR4GZRZ",
"iclr_2018_H1uR4GZRZ",
"iclr_2018_H1uR4GZRZ",
"ryrXQ4wyz",
"SJFnpOYxM",
"ry5D1Z5xf",
"iclr_2018_H1uR4GZRZ"
] |
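The SAP mechanism described in the abstract above is simple enough to sketch directly (the sampling scheme follows the abstract's description; the layer values and draw count are made up): activations are sampled with probability proportional to their magnitude, and survivors are rescaled by the inverse of their keep-probability, which makes the pruned layer an unbiased estimate of the original one.

```python
import numpy as np

rng = np.random.default_rng(0)

def sap(a, num_draws, rng):
    """Stochastic activation pruning of a 1-D activation vector."""
    p = np.abs(a) / np.sum(np.abs(a))            # sample proportional to magnitude
    keep_prob = 1.0 - (1.0 - p) ** num_draws     # P(unit drawn at least once)
    drawn = rng.choice(a.size, size=num_draws, p=p)
    mask = np.zeros(a.size)
    mask[np.unique(drawn)] = 1.0                 # prune everything never drawn
    return a * mask / keep_prob                  # inverse-probability rescaling

a = np.array([4.0, -2.0, 1.0, 0.5, -0.25, 3.0])
avg = np.mean([sap(a, 4, rng) for _ in range(100000)], axis=0)
max_err = np.max(np.abs(avg - a))                # unbiased: E[sap(a)] = a
```

Because pruning is re-sampled on every forward pass, the defended model plays a mixed strategy: an attacker's gradient query sees a different random subnetwork each time.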
iclr_2018_HkxF5RgC- | Sparse Persistent RNNs: Squeezing Large Recurrent Networks On-Chip | Recurrent Neural Networks (RNNs) are powerful tools for solving sequence-based problems, but their efficacy and execution time are dependent on the size of the network. Following recent work in simplifying these networks with model pruning and a novel mapping of work onto GPUs, we design an efficient implementation for sparse RNNs. We investigate several optimizations and tradeoffs: Lamport timestamps, wide memory loads, and a bank-aware weight layout. With these optimizations, we achieve speedups of over 6x over the next best algorithm for a hidden layer of size 2304, batch size of 4, and a density of 30%. Further, our technique allows for models of over 5x the size to fit on a GPU for a speedup of 2x, enabling larger networks to help advance the state-of-the-art. We perform case studies on NMT and speech recognition tasks in the appendix, accelerating their recurrent layers by up to 3x. | accepted-poster-papers | The reviewers find the work interesting and well made, but are concerned that ICLR is not the right venue for the work. I will recommend that the paper be accepted, but ask the authors to add the NMT results to the main paper (any other non-synthetic applications they could add would be helpful). | train | [
"rkoKvifef",
"BJ6cxWFlM",
"H1PcMAKeG",
"SkzfXdpmf",
"SJ5iMUt7f",
"BkwIocfQz",
"S1gFF5fmM",
"HJ4pB5M7f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper devises a sparse kernel for RNNs which is urgently needed because current GPU deep learning libraries (e.g., CuDNN) cannot exploit sparsity when it is presented and because a number of works have proposed to sparsify/prune RNNs so as to be able to run on devices with limited compute power (e.g., smartpho... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
2,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkxF5RgC-",
"iclr_2018_HkxF5RgC-",
"iclr_2018_HkxF5RgC-",
"iclr_2018_HkxF5RgC-",
"S1gFF5fmM",
"rkoKvifef",
"BJ6cxWFlM",
"H1PcMAKeG"
] |
iclr_2018_ByKWUeWA- | GANITE: Estimation of Individualized Treatment Effects using Generative Adversarial Nets | Estimating individualized treatment effects (ITE) is a challenging task due to the need for an individual's potential outcomes to be learned from biased data and without having access to the counterfactuals. We propose a novel method for inferring ITE based on the Generative Adversarial Nets (GANs) framework. Our method, termed Generative Adversarial Nets for inference of Individualized Treatment Effects (GANITE), is motivated by the possibility that we can capture the uncertainty in the counterfactual distributions by attempting to learn them using a GAN. We generate proxies of the counterfactual outcomes using a counterfactual generator, G, and then pass these proxies to an ITE generator, I, in order to train it. By modeling both of these using the GAN framework, we are able to infer based on the factual data, while still accounting for the unseen counterfactuals. We test our method on three real-world datasets (with both binary and multiple treatments) and show that GANITE outperforms state-of-the-art methods. | accepted-poster-papers | The reviewers agree that the method is original and mostly well communicated, but have some doubts about the significance of the work. | val | [
"rk3S-gKez",
"ryaoluFgG",
"SyIFK-9lG",
"HJAYMyyfz",
"rkl8zJyfM",
"S1VTby1fG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Summary:\nThis paper proposes to estimate the individual treatment effects (ITE) through\ntraining two separate conditional generative adversarial networks (GANs). \n\nFirst, a counterfactual GAN is trained to estimate the conditional distribution \nof the potential outcome vector, which consists of factual outcom... | [
6,
6,
6,
-1,
-1,
-1
] | [
4,
3,
3,
-1,
-1,
-1
] | [
"iclr_2018_ByKWUeWA-",
"iclr_2018_ByKWUeWA-",
"iclr_2018_ByKWUeWA-",
"rk3S-gKez",
"ryaoluFgG",
"SyIFK-9lG"
] |
iclr_2018_S18Su--CW | Thermometer Encoding: One Hot Way To Resist Adversarial Examples | It is well known that it is possible to construct "adversarial examples"
for neural networks: inputs which are misclassified by the network
yet indistinguishable from true data. We propose a simple
modification to standard neural network architectures, thermometer
encoding, which significantly increases the robustness of the network to
adversarial examples. We demonstrate this robustness with experiments
on the MNIST, CIFAR-10, CIFAR-100, and SVHN datasets, and show that
models with thermometer-encoded inputs consistently have higher accuracy
on adversarial examples, without decreasing generalization.
State-of-the-art accuracy under the strongest known white-box attack was
increased from 93.20% to 94.30% on MNIST and 50.00% to 79.16% on CIFAR-10.
We explore the properties of these networks, providing evidence
that thermometer encodings help neural networks to
find more non-linear decision boundaries. | accepted-poster-papers | This paper is borderline. The reviewers agree that the method is novel and interesting, but have concerns about scalability and weakness to attacks with larger epsilon. I will recommend accepting, but I think the paper would be well served by ImageNet experiments, and hope the authors are able to include these for the final version. | val | [
"ByzXBMDxf",
"HJDuim3lM",
"Bk9IXvzWf",
"H16--t2Xz",
"HkYs0dnXz",
"r1fzgYnQM",
"ryPcrG9xG",
"B19zAPflM",
"HkALrvckf",
"SJJbNyK0Z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"This paper studies input discretization and white-box attacks on it to make deep networks robust to adversarial examples. They propose one-hot and thermometer encodings as input discretization and \nalso propose DGA and LS-PGA as white-box attacks on it.\nRobustness to adversarial examples for thermometer encodin... | [
6,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S18Su--CW",
"iclr_2018_S18Su--CW",
"iclr_2018_S18Su--CW",
"ByzXBMDxf",
"Bk9IXvzWf",
"HJDuim3lM",
"HkALrvckf",
"iclr_2018_S18Su--CW",
"SJJbNyK0Z",
"iclr_2018_S18Su--CW"
] |
iclr_2018_HyrCWeWCb | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO. | accepted-poster-papers | This paper adapts (Nachum et al 2017) to continuous control via TRPO. The work is incremental (not in the dirty sense of the word popular amongst researchers, but rather in the sense of "building atop a closely related work"), nontrivial, and shows empirical promise. The reviewers would like more exploration of the sensitivity of the hyper-parameters. | train | [
"rJ-4JL_Vf",
"ByDPYkUxG",
"H11zfWQZf",
"B1tQ10rVG",
"H1ccXfmeG",
"HkF_6L6Qz",
"BJ--aUT7M",
"Hk772U6XM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"These comments continue to reveal some fundamental misunderstandings we should clarify.\n\nR2: \"Our paper does not present a policy gradient method\" <- This is obviously untrue.\n\n- To correct such a misunderstanding, one first needs to realize that policy gradient algorithms update model parameters along the g... | [
-1,
6,
5,
-1,
5,
-1,
-1,
-1
] | [
-1,
4,
4,
-1,
1,
-1,
-1,
-1
] | [
"B1tQ10rVG",
"iclr_2018_HyrCWeWCb",
"iclr_2018_HyrCWeWCb",
"BJ--aUT7M",
"iclr_2018_HyrCWeWCb",
"H1ccXfmeG",
"H11zfWQZf",
"ByDPYkUxG"
] |
iclr_2018_rk49Mg-CW | Stochastic Variational Video Prediction | Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of the natural world. Many existing methods tackle this problem by making simplifying assumptions about the environment. One common assumption is that the outcome is deterministic and there is only one plausible future. This can lead to low-quality predictions in real-world settings with stochastic dynamics. In this paper, we develop a stochastic variational video prediction (SV2P) method that predicts a different possible future for each sample of its latent variables. To the best of our knowledge, our model is the first to provide effective stochastic multi-frame prediction for real-world video. We demonstrate the capability of the proposed method in predicting detailed future frames of videos on multiple real-world datasets, both action-free and action-conditioned. We find that our proposed method produces substantially improved video predictions when compared to the same model without stochasticity, and to other stochastic video prediction methods. Our SV2P implementation will be open sourced upon publication. | accepted-poster-papers | Not quite enough for an oral but a very solid poster. | train | [
"SJHVI1WSG",
"S1gH28vgM",
"r17bOI8yG",
"S1riI7OxM",
"H1ajkrZEf",
"S1W4e5U7f",
"HJtZzEW7z",
"Hk7A25bmM",
"r1ULac-Qf",
"rkLWT5Z7f",
"HJ1NygMgM",
"H1i2t1egM"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"author",
"author",
"public"
] | [
"\nFor comparison with VPN, we did NOT train any model. Instead, the authors of Reed et al. 2017 provided their trained model which we used for evaluation.\n\nIn terms of numbers, the model from Reed et al. 2017 has 119,538,432 while our model has 8,378,497. Hopefully this helps to get a better understanding of the... | [
-1,
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1ajkrZEf",
"iclr_2018_rk49Mg-CW",
"iclr_2018_rk49Mg-CW",
"iclr_2018_rk49Mg-CW",
"iclr_2018_rk49Mg-CW",
"HJtZzEW7z",
"iclr_2018_rk49Mg-CW",
"S1riI7OxM",
"r17bOI8yG",
"S1gH28vgM",
"H1i2t1egM",
"iclr_2018_rk49Mg-CW"
] |
iclr_2018_HkXWCMbRW | Towards Image Understanding from Deep Compression Without Decoding | Motivated by recent work on deep neural network (DNN)-based image compression methods showing potential improvements in image quality, savings in storage, and bandwidth reduction, we propose to perform image understanding tasks such as classification and segmentation directly on the compressed representations produced by these compression methods. Since the encoders and decoders in DNN-based compression methods are neural networks with feature-maps as internal representations of the images, we directly integrate these with architectures for image understanding. This bypasses decoding of the compressed representation into RGB space and reduces computational cost. Our study shows that accuracies comparable to networks that operate on compressed RGB images can be achieved while reducing the computational complexity up to 2×. Furthermore, we show that synergies are obtained by jointly training compression networks with classification networks on the compressed representations, improving image quality, classification accuracy, and segmentation performance. We find that inference from compressed representations is particularly advantageous compared to inference from compressed RGB images for aggressive compression rates. | accepted-poster-papers | Some reviewers seem to assign novelty to the compression and classification formulation; however, semi-supervised autoencoders have been used for a long time. Taking the compression task more seriously as is done in this paper is less explored.
The paper provides extensive experimental evaluation and was edited to be more concise at the request of the reviewers. One reviewer gave a particularly strong positive rating, due to the quality of the presentation, experiments, and discussion. I think the community would like this work, and it should be accepted.
| train | [
"SkE6QMtlG",
"r1A9XDwgG",
"rJx_tnFeM",
"BkCWUB2zM",
"HJB3rBhzG",
"HkvLrShzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Thanks for addressing most of the issues. I changed my given score from 3 to 6.\n\nSummary:\nThis work explores the use of learned compressed image representation for solving 2 computer vision tasks without employing a decoding step. \n\nThe paper claims to be more computationally and memory efficient compared to ... | [
6,
9,
6,
-1,
-1,
-1
] | [
4,
5,
3,
-1,
-1,
-1
] | [
"iclr_2018_HkXWCMbRW",
"iclr_2018_HkXWCMbRW",
"iclr_2018_HkXWCMbRW",
"r1A9XDwgG",
"SkE6QMtlG",
"rJx_tnFeM"
] |
iclr_2018_ByJIWUnpW | Automatically Inferring Data Quality for Spatiotemporal Forecasting | Spatiotemporal forecasting has become an increasingly important prediction task in machine learning and statistics due to its vast applications, such as climate modeling, traffic prediction, video caching predictions, and so on. While numerous studies have been conducted, most existing works assume that the data from different sources or across different locations are equally reliable. Due to cost, accessibility, or other factors, it is inevitable that the data quality could vary, which introduces significant biases into the model and leads to unreliable prediction results. The problem could be exacerbated in black-box prediction models, such as deep neural networks. In this paper, we propose a novel solution that can automatically infer data quality levels of different sources through local variations of spatiotemporal signals without explicit labels. Furthermore, we integrate the estimate of data quality level with graph convolutional networks to exploit their efficient structures. We evaluate our proposed method on forecasting temperatures in Los Angeles. | accepted-poster-papers | With an 8-6-6 rating all reviewers agreed that this paper is past the threshold for acceptance.
The quality of the paper appears to have increased during the review cycle due to interactions with the reviewers. The paper addresses issues related to the quality of heterogeneous data sources, doing so through the framework of graph convolutional networks (GCNs). The work proposes a data quality level concept, defined at each vertex in a graph based on the local variation of the vertex. The quality level is used as a regularizer constant in the objective function. Experimental work shows that this formulation is important in the context of time-series prediction.
Experiments are performed on a dataset that is less prominent in the ML and ICLR community, from two commercial weather services Weather Underground and WeatherBug; however, experiments with reasonable baseline models using a "Forecasting mean absolute error (MAE)" metric seem to be well done.
The biggest weakness of this work was a lack of comparison with some more traditional time-series modelling approaches. However, the authors added an auto-regressive model into the baselines used for comparison. Some more details on this model would help.
I tend to agree with the author's assertion that: "there is limited work in ICLR on data quality, but it is definitely one essential hurdle for any representation learning model to work in practice. ".
For these reasons I recommend a poster.
| train | [
"Hk7kJzcxM",
"B1GH1Kd4f",
"r16AndOEf",
"S1GlLvu4G",
"rJDUzhtxf",
"ry07x_9xG",
"rJCAdVTQM",
"ByIwKojXM",
"rktRm4eQz",
"HyChXVl7G",
"S1eWfNlmz",
"ry6rgEgXG"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Update:\n\nI have read the rebuttal and the revised manuscript. Paper reads better and comparison to Auto-regression was added. This work presents a novel way of utilizing GCN and I believe it would be interesting to the community. In this regard, I have updated my rating.\n\nOn the downside, I still remain uncert... | [
6,
-1,
-1,
-1,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
-1,
-1,
-1,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByJIWUnpW",
"r16AndOEf",
"S1GlLvu4G",
"S1eWfNlmz",
"iclr_2018_ByJIWUnpW",
"iclr_2018_ByJIWUnpW",
"ByIwKojXM",
"HyChXVl7G",
"rJDUzhtxf",
"rJDUzhtxf",
"Hk7kJzcxM",
"ry07x_9xG"
] |
iclr_2018_Sy21R9JAW | Towards better understanding of gradient-based attribution methods for Deep Neural Networks | Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is more, no exhaustive empirical comparison has been performed in the past. In this work we analyze four gradient-based attribution methods and formally prove conditions of equivalence and approximation between them. By reformulating two of these methods, we construct a unified framework which enables a direct comparison, as well as an easier implementation. Finally, we propose a novel evaluation metric, called Sensitivity-n, and test the gradient-based attribution methods alongside a simple perturbation-based attribution method on several datasets in the domains of image and text classification, using various network architectures. | accepted-poster-papers | With scores of 7-7-6 and the justification below, the AC recommends acceptance.
One of the reviewers summarizes why this is a good paper as follows:
"This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances:
- This gives a more unified way of understanding, and implementing the methods.
- The paper points out situations when the methods are equivalent
- The paper analyses the methods' sensitivity to identifying single and joint regions of sensitivity
- The paper proposes a new objective function to measure joint sensitivity"
| train | [
"rJUrhpYxf",
"Byt56W9lM",
"SymYit2xf",
"H1QgktIGG",
"rJoXE_UMG",
"B1NvQ_Izz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper discusses several gradient based attribution methods, which have been popular for the fast computation of saliency maps for interpreting deep neural networks. The paper provides several advances:\n- \\epsilon-LRP and DeepLIFT are formulated in a way that can be calculated using the same back-propagation... | [
7,
6,
7,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1
] | [
"iclr_2018_Sy21R9JAW",
"iclr_2018_Sy21R9JAW",
"iclr_2018_Sy21R9JAW",
"Byt56W9lM",
"rJUrhpYxf",
"SymYit2xf"
] |
iclr_2018_SyJ7ClWCb | Countering Adversarial Images using Input Transformations | This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting before feeding the image to a convolutional network classifier. Our experiments on ImageNet show that total variance minimization and image quilting are very effective defenses in practice, in particular, when the network is trained on transformed images. The strength of those defenses lies in their non-differentiable nature and their inherent randomness, which makes it difficult for an adversary to circumvent the defenses. Our best defense eliminates 60% of strong gray-box and 90% of strong black-box attacks by a variety of major attack methods. | accepted-poster-papers | A well-written paper proposing some reasonable approaches to counter adversarial images. Proposed approaches include non-differentiable and randomized methods. Anonymous commentators pushed back on and cleared up some important issues regarding white, black and gray "box" settings. The approach appears to be a plausible defence strategy. One reviewer is a holdout on acceptance, but is open to the idea. The authors responded to the points of this reviewer sufficiently. The AC recommends accept. | train | [
"HJLVhosrz",
"SJR7osjSG",
"SyYA9jsBz",
"rkx25isrM",
"ryi9KVKBG",
"Bk1rU7YSz",
"HksQPiUVM",
"B1gREqS4f",
"HyulgtSVz",
"S1wXIrVgM",
"Sk47YIYlM",
"SJzYnEqef",
"rk5sqWNNG",
"HkKhk67Nf",
"rkqFThmEM",
"rJX6P09fz",
"r1XVPRqzM",
"SyNlDC9GM",
"r1-ST6olG",
"SJp9ze61G",
"H1mJgiqJG",
"... | [
"public",
"public",
"public",
"public",
"author",
"public",
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"public",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"public",
"author... | [
"There is a paper “Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods” (https://nicholas.carlini.com/papers/2017_aisec_breakingdetection.pdf) which explores how stochastic model could be de-randomized and successfully attacked.\n\nThus while randomness makes an attack harder, it does not ... | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
4,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HksQPiUVM",
"SyYA9jsBz",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"Bk1rU7YSz",
"iclr_2018_SyJ7ClWCb",
"B1gREqS4f",
"HyulgtSVz",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"iclr_2018_SyJ7ClWCb",
"HkKhk67Nf",
"rkqFThmEM",
"iclr_2018_SyJ7ClWCb",
"S1wXIrVgM"... |
iclr_2018_HkwVAXyCW | Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks | Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges like slow inference, vanishing gradients and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model which extends existing RNN models by learning to skip state updates and shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/. | accepted-poster-papers | This paper explores what might be characterized as an adaptive form of ZoneOut.
With the improvements and clarifications added during the rebuttal, the paper can be accepted.
| train | [
"HkmeI2vxM",
"rkKX3j7SM",
"SyUH1Qjez",
"HJ6Ve2MHz",
"BkfgO-FgG",
"BywMmCPzf",
"SyThG0DMz",
"HJwcGAPMG",
"HJoVzADMM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"UPDATE: Following the author's response I've increased my score from 5 to 6. The revised paper includes many of the additional references that I suggested, and the author response clarified my confusion over the Charades experiments; their results are indeed close to state-of-the-art on Charades activity localizat... | [
6,
-1,
6,
-1,
6,
-1,
-1,
-1,
-1
] | [
4,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkwVAXyCW",
"BywMmCPzf",
"iclr_2018_HkwVAXyCW",
"HJwcGAPMG",
"iclr_2018_HkwVAXyCW",
"HkmeI2vxM",
"BkfgO-FgG",
"SyUH1Qjez",
"iclr_2018_HkwVAXyCW"
] |
iclr_2018_rkPLzgZAZ | Modular Continual Learning in a Unified Visual Environment | A core aspect of human intelligence is the ability to learn new tasks quickly and switch between them flexibly. Here, we describe a modular continual reinforcement learning paradigm inspired by these abilities. We first introduce a visual interaction environment that allows many types of tasks to be unified in a single framework. We then describe a reward map prediction scheme that learns new tasks robustly in the very large state and action spaces required by such an environment. We investigate how properties of module architecture influence efficiency of task learning, showing that a module motif incorporating specific design principles (e.g. early bottlenecks, low-order polynomial nonlinearities, and symmetry) significantly outperforms more standard neural network motifs, needing fewer training examples and fewer neurons to achieve high levels of performance. Finally, we present a meta-controller architecture for task switching based on a dynamic neural voting scheme, which allows new modules to use information learned from previously-seen tasks to substantially improve their own learning efficiency. | accepted-poster-papers | Important problem (modular continual RL) and novel contributions. The initial submission was judged to be a little dense and hard to read, but the authors have been responsive in addressing feedback and updating the paper. I support accepting this paper. | test | [
"rJHuDgAlz",
"HkxSRZcxM",
"BJQgR1aef",
"BJPN4R3WM",
"H1qarChZG",
"ByXcBCnbf",
"BJVOBA3Wz",
"Bkht4Anbf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The authors propose a kind of framework for learning to solve elemental tasks and then learning task switching in a multitask scenario. The individual tasks are inspired by a number of psychological tasks. Specifically, the authors use a pretrained convnet as raw statespace encoding together with previous actions ... | [
6,
8,
8,
-1,
-1,
-1,
-1,
-1
] | [
2,
3,
2,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkPLzgZAZ",
"iclr_2018_rkPLzgZAZ",
"iclr_2018_rkPLzgZAZ",
"iclr_2018_rkPLzgZAZ",
"HkxSRZcxM",
"BJQgR1aef",
"BJQgR1aef",
"rJHuDgAlz"
] |
iclr_2018_BydLzGb0Z | Twin Networks: Matching the Future for Sequence Generation | We propose a simple technique for encouraging generative RNNs to plan ahead. We train a ``backward'' recurrent network to generate a given sequence in reverse order, and we encourage states of the forward model to predict cotemporal states of the backward model. The backward network is used only during training, and plays no role during sampling or inference. We hypothesize that our approach eases modeling of long-term dependencies by implicitly forcing the forward states to hold information about the longer-term future (as contained in the backward states). We show empirically that our approach achieves 9% relative improvement for a speech recognition task, and achieves significant improvement on a COCO caption generation task. | accepted-poster-papers | Simple idea (which is a positive) to regularize RNNs, broad applicability, well-written paper. Initially, there were concerns about comparisons, but the authors have provided additional experiments that have made the paper stronger. | train | [
"HyciX9dxM",
"B1Fe0Zqxz",
"Hy2zdEuNz",
"HJzYCPDlf",
"B1VAfclVG",
"HyCbe867z",
"r1SMZBT7z",
"BJ2TeHpXz",
"rJob2Ep7z",
"B1_9I7DzG",
"BJMXIjbGM",
"rk6_yF2-f",
"r11N5P2Zf"
] | [
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"author"
] | [
"\n1) Summary\nThis paper proposes a recurrent neural network (RNN) training formulation for encouraging RNN the hidden representations to contain information useful for predicting future timesteps reliably. The authors propose to train a forward and backward RNN in parallel. The forward RNN predicts forward in tim... | [
6,
7,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BydLzGb0Z",
"iclr_2018_BydLzGb0Z",
"HyCbe867z",
"iclr_2018_BydLzGb0Z",
"B1_9I7DzG",
"HyciX9dxM",
"HJzYCPDlf",
"B1Fe0Zqxz",
"iclr_2018_BydLzGb0Z",
"BJMXIjbGM",
"iclr_2018_BydLzGb0Z",
"B1Fe0Zqxz",
"iclr_2018_BydLzGb0Z"
] |
iclr_2018_S1J2ZyZ0Z | Interpretable Counting for Visual Question Answering | Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section of the image. In contrast, we treat counting as a sequential decision process and force our model to make discrete choices of what to count. Specifically, the model sequentially selects from detected objects and learns interactions between objects that influence subsequent selections. A distinction of our approach is its intuitive and interpretable output, as discrete counts are automatically grounded in the image. Furthermore, our method outperforms the state of the art architecture for VQA on multiple metrics that evaluate counting. | accepted-poster-papers | Important problem and all reviewers recommend acceptance. I agree. | train | [
"rkvdigi4f",
"Byv9HGFEG",
"SJdWxzoxz",
"ryq-8Y_lG",
"rJ1U7MKef",
"B1bTymoGf",
"S1iUyQjzM",
"H1bYCGofM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"I'm satisfied with the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of the paper.",
"After reading the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of the paper because it presents a novel, interesti... | [
-1,
-1,
6,
7,
7,
-1,
-1,
-1
] | [
-1,
-1,
3,
4,
4,
-1,
-1,
-1
] | [
"S1iUyQjzM",
"H1bYCGofM",
"iclr_2018_S1J2ZyZ0Z",
"iclr_2018_S1J2ZyZ0Z",
"iclr_2018_S1J2ZyZ0Z",
"ryq-8Y_lG",
"rJ1U7MKef",
"SJdWxzoxz"
] |
iclr_2018_H1UOm4gA- | Interactive Grounded Language Acquisition and Generalization in a 2D World | We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and question answering. It learns simultaneously the visual representations of the world, the language, and the action control. By disentangling language grounding from other computational routines and sharing a concept detection function between language grounding and prediction, the agent reliably interpolates and extrapolates to interpret sentences that contain new word combinations or new words missing from training sentences. The new words are transferred from the answers of language prediction. Such a language ability is trained and evaluated on a population of over 1.6 million distinct sentences consisting of 119 object words, 8 color words, 9 spatial-relation words, and 50 grammatical words. The proposed model significantly outperforms five comparison methods for interpreting zero-shot sentences. In addition, we demonstrate human-interpretable intermediate outputs of the model in the appendix. | accepted-poster-papers | This manuscript was reviewed by 3 expert reviewers, and their evaluation is generally positive. The authors have responded to the questions asked, and the reviewers are satisfied with the responses. Although the 2D environments are underwhelming (compared to 3D environments such as SUNCG, Doom, Thor, etc.), one thing that distinguishes this paper from other concurrent submissions on similar topics is the demonstration that "words learned only from a VQA-style supervision condition can be successfully interpreted in an instruction-following setting." | train | [
"B1JBH18gf",
"B1KF2Z5xf",
"HJLqaM7bz",
"r1-gIAMNM",
"SJGGiTcQM",
"HJi7u9TWf",
"Skq3gyaWf",
"SJnXZJpWz",
"HkQl1ypZf",
"ryVnJ16Wz",
"rkCvlKvkf",
"r1a6A4PJz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"This paper introduces a new task that combines elements of instruction following\nand visual question answering: agents must accomplish particular tasks in an\ninteractive environment while providing one-word answers to questions about\nfeatures of the environment. To solve this task, the paper also presents a new... | [
7,
6,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1UOm4gA-",
"iclr_2018_H1UOm4gA-",
"iclr_2018_H1UOm4gA-",
"HkQl1ypZf",
"iclr_2018_H1UOm4gA-",
"Skq3gyaWf",
"B1JBH18gf",
"r1a6A4PJz",
"HJLqaM7bz",
"B1KF2Z5xf",
"r1a6A4PJz",
"iclr_2018_H1UOm4gA-"
] |
iclr_2018_Sy0GnUxCb | Emergent Complexity via Multi-Agent Competition | Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself. We also point out that such environments come with a natural curriculum, because for any skill level, an environment full of agents of this level will have the right level of difficulty.
This work introduces several competitive multi-agent environments where agents compete in a 3D world with simulated physics. The trained agents learn a wide variety of complex and interesting skills, even though the environments themselves are relatively simple. The skills include behaviors such as running, blocking, ducking, tackling, fooling opponents, kicking, and defending using both arms and legs. A highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX | accepted-poster-papers | This paper received divergent reviews (7, 3, 9). The main contributions of the paper -- that multi-agent competition serves as a natural curriculum, opponent sampling strategies, and the characterization of emergent complex strategies -- are certainly of broad interest (although the first is essentially the same observation as AlphaZero, the different environment makes this of broader interest).
In the discussion between R2 and the authors, I am sympathetic to (a subset of) both viewpoints.
To be fair to the authors, discovery (in this case, characterization of emergent behavior) can often be difficult to quantify. R2's initial review was unnecessarily harsh and combative. The points presented by R2 as evidence of poor evaluation have clear answers from the authors. It would have been better to provide suggestions for what the authors could try, rather than raise philosophical objections that the authors cannot experimentally rebut.
On the other hand, I am disappointed that the authors were asked a reasonable, specific, quantifiable request by R2 --
"By the end of Section 5.2, you allude to transfer learning phenomena. It would be nice to study these transfer effects in your results with a quantitative methodology.”
-- and they chose to respond with informal and qualitative assessments. It doesn't matter if the results are obvious visually, why not provide quantitative evaluation when it is specifically asked?
Overall, we recommend this paper for acceptance, and ask the authors to incorporate feedback from R2. | train | [
"BJjEeunNf",
"SkFemC-lz",
"By9EwRPxG",
"SyCKd4clM",
"rJI9veRQz",
"B1XRb_6mG",
"r1gtW_p7G",
"S1VyeOTQG"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"We respond to the three points about the paper raised by the reviewer:\n\n1) There are two questions here, one about games being zero sum and another about the plots in Figure 3 being symmetric about 50%. For the first question the answer is games are not zero sum and a draw results in a “negative” reward for both... | [
-1,
3,
9,
7,
-1,
-1,
-1,
-1
] | [
-1,
3,
5,
4,
-1,
-1,
-1,
-1
] | [
"rJI9veRQz",
"iclr_2018_Sy0GnUxCb",
"iclr_2018_Sy0GnUxCb",
"iclr_2018_Sy0GnUxCb",
"iclr_2018_Sy0GnUxCb",
"SyCKd4clM",
"By9EwRPxG",
"SkFemC-lz"
] |
iclr_2018_B1mvVm-C- | Universal Agent for Disentangling Environments and Tasks | Recent state-of-the-art reinforcement learning algorithms are trained under the goal of excelling in one specific task. Hence, both environment and task specific knowledge are entangled into one framework. However, there are often scenarios where the environment (e.g. the physical world) is fixed while only the target task changes. Hence, borrowing the idea from hierarchical reinforcement learning, we propose a framework that disentangles task and environment specific knowledge by separating them into two units. The environment-specific unit handles how to move from one state to the target state; and the task-specific unit plans for the next target state given a specific task. The extensive results in simulators indicate that our method can efficiently separate and learn two independent units, and also adapt to a new task more efficiently than the state-of-the-art methods. | accepted-poster-papers | All reviewers recommend accepting this paper, and this AC agrees. | train | [
"rkHr_WFlz",
"rkx8qW9ez",
"H1wc2j2lM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors propose to decompose reinforcement learning into a PATH function that can learn how to solve reusable sub-goals an agent might have in a specific environment and a GOAL function that chooses subgoals in order to solve a specific task in the environment using path segments. So I guess it can be thought ... | [
6,
7,
6
] | [
3,
4,
3
] | [
"iclr_2018_B1mvVm-C-",
"iclr_2018_B1mvVm-C-",
"iclr_2018_B1mvVm-C-"
] |
iclr_2018_SJa9iHgAZ | Residual Connections Encourage Iterative Inference | Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we study Resnets both analytically and empirically. We formalize the notion of iterative refinement in Resnets by showing that residual architectures naturally encourage features to move along the negative gradient of loss during the feedforward phase. In addition, our empirical analysis suggests that Resnets are able to perform both representation learning and iterative refinement. In general, a Resnet block tends to concentrate representation learning behavior in the first few layers while higher layers perform iterative refinement of features. Finally we observe that sharing residual layers naively leads to representation explosion and hurts generalization performance, and show that simple existing strategies can help alleviate this problem. | accepted-poster-papers | The paper presents an interesting view of ResNets and the findings should be of broad interest. R1 did not update their score/review, but I am satisfied with the author response, and recommend this paper for acceptance. | train | [
"H1EPgaweG",
"HkeOU0qgf",
"HJyUi3sez",
"SkXbC02XM",
"BJyaPFbff",
"SyGjDt-fM",
"HkFKPt-Gf",
"HkzBwFWzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"This paper investigates residual networks (ResNets) in an empirical way. The authors argue that shallow layers are responsible for learning important feature representations, while deeper layers focus on refining the features. They validate this point by performing a series of lesion study on ResNet.\n\nOverall, t... | [
6,
5,
7,
-1,
-1,
-1,
-1,
-1
] | [
3,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJa9iHgAZ",
"iclr_2018_SJa9iHgAZ",
"iclr_2018_SJa9iHgAZ",
"iclr_2018_SJa9iHgAZ",
"H1EPgaweG",
"HJyUi3sez",
"HkeOU0qgf",
"HkeOU0qgf"
] |
iclr_2018_Hk6WhagRW | Emergent Communication through Negotiation | Multi-agent reinforcement learning offers a way to study how communication could emerge in communities of agents needing to solve specific problems. In this paper, we study the emergence of communication in the negotiation environment, a semi-cooperative model of agent interaction. We introduce two communication protocols - one grounded in the semantics of the game, and one which is a priori ungrounded. We show that self-interested agents can use the pre-grounded communication channel to negotiate fairly, but are unable to effectively use the ungrounded, cheap talk channel to do the same. However, prosocial agents do learn to use cheap talk to find an optimal negotiating strategy, suggesting that cooperation is necessary for language to emerge. We also study communication behaviour in a setting where one agent interacts with agents in a community with different levels of prosociality and show how agent identifiability can aid negotiation. | accepted-poster-papers | All reviewers agree the paper proposes an interesting setup and the main finding that "prosocial agents are able to learn to ground symbols using RL, but self-interested agents are not" progresses work in this area. R3 asked a number of detail-oriented questions and while they did not update their review based on the author response, I am satisfied by the answers. | train | [
"B1nNZKU4M",
"SJGBLcYxG",
"Bk7S9ZclM",
"BkoPxj3xz",
"ByxfIR9Xf",
"HyQaw09QG",
"B1RQDA9Qz",
"BkGlPCqQz",
"ryQO17PyM",
"HJ9FF6EAZ",
"r1Nwr02RW",
"HydzSWqCW",
"SyodmvDC-",
"SkM4N2SCW",
"SkW8mhN0-"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public",
"public"
] | [
"Thank you for responding to the mentioned concerns and addressing those in your latest revision. The topic is interesting and deserves visibility.",
"The authors describe a variant of the negotiation game in which agents of different type, selfish or prosocial, and with different preferences. The central feature... | [
-1,
6,
7,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyQaw09QG",
"iclr_2018_Hk6WhagRW",
"iclr_2018_Hk6WhagRW",
"iclr_2018_Hk6WhagRW",
"iclr_2018_Hk6WhagRW",
"SJGBLcYxG",
"Bk7S9ZclM",
"BkoPxj3xz",
"r1Nwr02RW",
"SkW8mhN0-",
"iclr_2018_Hk6WhagRW",
"HJ9FF6EAZ",
"SkM4N2SCW",
"HJ9FF6EAZ",
"iclr_2018_Hk6WhagRW"
] |
iclr_2018_SygwwGbRW | Semi-parametric topological memory for navigation | We introduce a new memory architecture for navigation in previously unseen environments, inspired by landmark-based navigation in animals. The proposed semi-parametric topological memory (SPTM) consists of a (non-parametric) graph with nodes corresponding to locations in the environment and a (parametric) deep network capable of retrieving nodes from the graph based on observations. The graph stores no metric information, only connectivity of locations corresponding to the nodes. We use SPTM as a planning module in a navigation system. Given only 5 minutes of footage of a previously unseen maze, an SPTM-based navigation agent can build a topological map of the environment and use it to confidently navigate towards goals. The average success rate of the SPTM agent in goal-directed navigation across test environments is higher than the best-performing baseline by a factor of three. | accepted-poster-papers | Important problem (navigation in unseen 3D environments, Doom in this case), interesting hybrid approach (mixing neural networks and path-planning). Initially, there were concerns about evaluation (proper baselines, ambiguous environments, etc). The authors have responded with updated experiments that are convincing to the reviewers. R1 did not participate in the discussion and their review has been ignored. I am supportive of this paper. | train | [
"BJIvZ_H4M",
"BkOtNnKxM",
"S1H89etgf",
"Hy3ONt2eM",
"S1qTBd6XM",
"ByrCE_aXf",
"HJsFdVbQG",
"SJMgFkxQG",
"rJPdc9JmM",
"rkVudelff",
"rkU3lxgff",
"ByFzpZxzf",
"SJzHaWlfG",
"ryI0-glfM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Taking into account the revision, this is an interesting idea whose limitations have been properly investigated.",
"*** Revision: based on the author's work, we have switched the score to accept (7) ***\n\nClever ideas but not end-to-end navigation.\n\nThis paper presents a hybrid architecture that mixes paramet... | [
-1,
7,
3,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"S1qTBd6XM",
"iclr_2018_SygwwGbRW",
"iclr_2018_SygwwGbRW",
"iclr_2018_SygwwGbRW",
"SJMgFkxQG",
"iclr_2018_SygwwGbRW",
"SJMgFkxQG",
"ByFzpZxzf",
"iclr_2018_SygwwGbRW",
"S1H89etgf",
"iclr_2018_SygwwGbRW",
"BkOtNnKxM",
"BkOtNnKxM",
"Hy3ONt2eM"
] |
iclr_2018_B12Js_yRb | Learning to Count Objects in Natural Images for Visual Question Answering | Visual Question Answering (VQA) models have struggled with counting objects in natural images so far. We identify a fundamental problem due to soft attention in these models as a cause. To circumvent this problem, we propose a neural network component that allows robust counting from object proposals. Experiments on a toy task show the effectiveness of this component and we obtain state-of-the-art accuracy on the number category of the VQA v2 dataset without negatively affecting other categories, even outperforming ensemble models with our single model. On a difficult balanced pair metric, the component gives a substantial improvement in counting over a strong baseline by 6.6%. | accepted-poster-papers | Initially this paper received mixed reviews. After reading the author response, R1 and R3 recommend acceptance.
R2, who recommended rejecting the paper, did not participate in discussions, did not respond to author explanations, did not respond to AC emails, and did not submit a final recommendation. This AC does not agree with the concerns raised by R2 (e.g. I don't find this model to be unprincipled).
The concerns raised by R1 and R3 were important (especially e.g. comparisons to NMS) and the authors have done a good job adding the required experiments and providing explanations.
Please update the manuscript incorporating all feedback received here, including comparisons reported to the concurrent ICLR submission on counting. | train | [
"r13as6sXf",
"HkRBLTo7G",
"S1dXkASrz",
"H1GhmwqgG",
"Hkwum9YgM",
"SJ11jzclf",
"B1bfGw-ff",
"HJt5Gsrbf",
"ByYnNiB-f",
"S15E4oHbM",
"Sy-67iHZz",
"B1-TZiS-M",
"Sk0dZsS-M",
"B17QZjB-M",
"rkkDeiB-M"
] | [
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Due to the length of our detailed point-by-point rebuttals, we would like to give a quick summary of our responses to the main concerns that the reviewers had.\n\n# Reviewer 3 (convinced by our rebuttal and increased the rating)\n\n- Too handcrafted\nThe current state-of-art in VQA on real images is nowhere near g... | [
-1,
-1,
-1,
6,
6,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"H1GhmwqgG",
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"iclr_2018_B12Js_yRb",
"Hkwum9YgM",
"S15E4oHbM",
"Sy-67iHZz",
"SJ11jzclf",
"Sk0dZsS-M",
"B17QZjB-M",
"rkkDeiB-M",
"H1GhmwqgG"
] |
iclr_2018_HJsjkMb0Z | i-RevNet: Deep Invertible Networks | It is widely believed that the success of deep convolutional networks is based on progressively discarding uninformative variability about the input with respect to the problem at hand. This is supported empirically by the difficulty of recovering images from their hidden representations, in most commonly used network architectures. In this paper we show via a one-to-one mapping that this loss of information is not a necessary condition to learn representations that generalize well on complicated problems, such as ImageNet. Via a cascade of homeomorphic layers, we build the i-RevNet, a network that can be fully inverted up to the final projection onto the classes, i.e. no information is discarded. Building an invertible architecture is difficult, for one, because the local inversion is ill-conditioned; we overcome this by providing an explicit inverse.
An analysis of i-RevNet’s learned representations suggests an alternative explanation for the success of deep networks by a progressive contraction and linear separation with depth. To shed light on the nature of the model learned by the i-RevNet we reconstruct linear interpolations between natural image representations. | accepted-poster-papers | This paper constructs a variant of deep CNNs which is provably invertible, by replacing spatial pooling with multiple shifted spatial downsampling, and capitalizing on residual layers to define a simple, invertible representation. The authors show that the resulting representation is equally effective at large-scale object classification, opening up a number of interesting questions.
Reviewers agreed this is a strong contribution, despite some comments about the significance of the result; i.e., why is invertibility a "surprising" property for learnability, in the sense that F(x) = {x, phi(x)}, where phi is a standard CNN, satisfies both properties: invertible and linear measurements of F producing good classification. All in all, this will be a great contribution to the conference.
"BJOsVtsNz",
"HyABCoKVz",
"BJzKguLVz",
"HJRdhx5eM",
"rJxrJe9eG",
"HkxP0bceM",
"SyyNsbbNz",
"H1ia7wJ4f",
"ByS7UBGQz",
"HyW2XBf7z",
"By9vGBMXM",
"H1dDbBfQf"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author"
] | [
"\n1) As mentioned in section 3.1 and detailed in the 4th paragraph of section 3.2, $\\tilde{S}$ splits the input into two tensors. In our case, we stick to the choice of Revnets and split the number of input channels in half. You will be able to check how this is done in detail in the code we will release alongsid... | [
-1,
-1,
-1,
8,
9,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyABCoKVz",
"iclr_2018_HJsjkMb0Z",
"HyW2XBf7z",
"iclr_2018_HJsjkMb0Z",
"iclr_2018_HJsjkMb0Z",
"iclr_2018_HJsjkMb0Z",
"H1ia7wJ4f",
"iclr_2018_HJsjkMb0Z",
"HkxP0bceM",
"HJRdhx5eM",
"rJxrJe9eG",
"iclr_2018_HJsjkMb0Z"
] |
iclr_2018_BkUHlMZ0b | Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach | The robustness of neural networks to adversarial examples has received great attention due to security implications. Despite various attack approaches to crafting visually imperceptible adversarial examples, little has been developed towards a comprehensive measure of robustness. In this paper, we provide theoretical justification for converting robustness analysis into a local Lipschitz constant estimation problem, and propose to use the Extreme Value Theory for efficient evaluation. Our analysis yields a novel robustness metric called CLEVER, which is short for Cross Lipschitz Extreme Value for nEtwork Robustness. The proposed CLEVER score is attack-agnostic and is computationally feasible for large neural networks. Experimental results on various networks, including ResNet, Inception-v3 and MobileNet, show that (i) CLEVER is aligned with the robustness indication measured by the ℓ2 and ℓ∞ norms of adversarial examples from powerful attacks, and (ii) defended networks using defensive distillation or bounded ReLU indeed give better CLEVER scores. To the best of our knowledge, CLEVER is the first attack-independent robustness metric that can be applied to any neural network classifier. | accepted-poster-papers | This paper proposes a new metric to evaluate the robustness of neural networks to adversarial attacks. This metric comes with theoretical guarantees and can be efficiently computed on large-scale neural networks.
Reviewers were generally positive about the strengths of the paper, especially after major revisions during the rebuttal process. The AC believes this paper will contribute to the growing body of literature in robust training of neural networks. | train | [
"rk8Ucb5gf",
"B1ZlEVXyf",
"BJiW7IkZM",
"BJdX_3LGz",
"H1i4bwD7f",
"Hyu463UzG",
"SyK9an8Gz",
"Hkt4hhIMM",
"Hk0A_3IGG",
"ry18wnIMz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"The work claims a measure of robustness of networks that is attack-agnostic. Robustness measure is turned into the problem of finding a local Lipschitz constant which is given by the maximum of the norm of the gradient of the associated function. That quantity is then estimated by sampling from the domain of maxim... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkUHlMZ0b",
"iclr_2018_BkUHlMZ0b",
"iclr_2018_BkUHlMZ0b",
"BJiW7IkZM",
"B1ZlEVXyf",
"B1ZlEVXyf",
"B1ZlEVXyf",
"B1ZlEVXyf",
"rk8Ucb5gf",
"iclr_2018_BkUHlMZ0b"
] |
iclr_2018_r1vuQG-CW | HexaConv | The effectiveness of Convolutional Neural Networks stems in large part from their ability to exploit the translation invariance that is inherent in many learning problems. Recently, it was shown that CNNs can exploit other invariances, such as rotation invariance, by using group convolutions instead of planar convolutions. However, for reasons of performance and ease of implementation, it has been necessary to limit the group convolution to transformations that can be applied to the filters without interpolation. Thus, for images with square pixels, only integer translations, rotations by multiples of 90 degrees, and reflections are admissible.
Whereas the square tiling provides a 4-fold rotational symmetry, a hexagonal tiling of the plane has a 6-fold rotational symmetry. In this paper we show how one can efficiently implement planar convolution and group convolution over hexagonal lattices, by re-using existing highly optimized convolution routines. We find that, due to the reduced anisotropy of hexagonal filters, planar HexaConv provides better accuracy than planar convolution with square filters, given a fixed parameter budget. Furthermore, we find that the increased degree of symmetry of the hexagonal grid increases the effectiveness of group convolutions, by allowing for more parameter sharing. We show that our method significantly outperforms conventional CNNs on the AID aerial scene classification dataset, even outperforming ImageNet pre-trained models. | accepted-poster-papers | This paper implements Group convolutions on inputs defined over hexagonal lattices instead of square lattices, using the roto-translation group. The internal symmetries of the hexagonal grid allow for a larger discrete rotation group than when using square pixels, leading to improved performance on CIFAR and aerial datasets.
The paper is well-written and the reviewers were positive about its results. That said, the AC wonders what is the main contribution of this work relative to existing related works (such as Group Equivariant CNNs, Cohen & Welling'16, or steerable CNNs, Cohen & Welling'17). While it is true that extending GCNNs to hexagonal lattices is a non-trivial implementation task, the contribution lacks significance on the mathematical/learning fronts, which are perhaps the ones the ICLR audience will care more about. Besides, the numerical results, while improved versus their square lattice counterparts, are not a major improvement over the state-of-the-art.
In summary, the AC believes this is a borderline paper. The unanimous favorable reviews tilt the decision towards acceptance. | train | [
"SysTGDdxf",
"rJnr8zdgM",
"Syvd8Qcgf",
"BkDvOSdQM",
"Sy1xuHOXM",
"SJrJPrdQG",
"Byo0SHu7z",
"HybbMN0Zf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public"
] | [
"\nThe authors took my comments nicely into account in their revision, and their answers are convincing. I increase my rating from 5 to 7. The authors could also integrate their discussion about their results on CIFAR in the paper, I think it would help readers understand better the advantage of the contribution.\n... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1vuQG-CW",
"iclr_2018_r1vuQG-CW",
"iclr_2018_r1vuQG-CW",
"HybbMN0Zf",
"Syvd8Qcgf",
"SysTGDdxf",
"rJnr8zdgM",
"iclr_2018_r1vuQG-CW"
] |
iclr_2018_rJzIBfZAb | Towards Deep Learning Models Resistant to Adversarial Attacks | Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against a well-defined class of adversaries. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest robustness against a first-order adversary as a natural security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models. | accepted-poster-papers | This paper presents new results on adversarial training, using the framework of robust optimization. Its minimax nature allows for principled methods of both training and attacking neural networks.
The reviewers were generally positive about its contributions, despite some concerns about 'overclaiming'. The AC recommends acceptance, and encourages the authors to also relate this work with the concurrent ICLR submission (https://openreview.net/forum?id=Hk6kPgZA-) which addresses the problem using a similar approach. | train | [
"rkdJB4SBG",
"Hy0j8ecgz",
"rkO53U_ez",
"SyRt7SoxG",
"BkTN7DaXz",
"SyUsGvpmG",
"HkRKZDTQM",
"HJJe50sez",
"rkj4yGsgf",
"rJw7T03yG",
"ryQ33hmkf",
"rJsu3TGyG"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"We have been performing an analysis of the robustness of many of the papers submitted here. This paper provides a substantially stronger defense than many of the other submissions, and we were not able to meaningfully invalidate any of the claims made. Given our analysis so far, it looks like this is the strongest... | [
-1,
7,
6,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJzIBfZAb",
"iclr_2018_rJzIBfZAb",
"iclr_2018_rJzIBfZAb",
"iclr_2018_rJzIBfZAb",
"rkO53U_ez",
"Hy0j8ecgz",
"SyRt7SoxG",
"rkj4yGsgf",
"iclr_2018_rJzIBfZAb",
"ryQ33hmkf",
"rJsu3TGyG",
"iclr_2018_rJzIBfZAb"
] |
iclr_2018_By4HsfWAZ | Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge | We consider the use of Deep Learning methods for modeling complex phenomena like those occurring in natural physical processes. With the large amount of data gathered on these phenomena the data intensive paradigm could begin to challenge more traditional approaches elaborated over the years in fields like maths or physics. However, despite considerable successes in a variety of application domains, the machine learning field is not yet ready to handle the level of complexity required by such problems. Using an example application, namely Sea Surface Temperature Prediction, we show how general background knowledge gained from the physics could be used as a guideline for designing efficient Deep Learning models. In order to motivate the approach and to assess its generality we demonstrate a formal link between the solution of a class of differential equations underlying a large family of physical phenomena and the proposed model. Experiments and comparison with series of baselines including a state of the art numerical approach is then provided. | accepted-poster-papers | This paper proposes to use data-driven deep convolutional architectures for modeling advection diffusion. It is well motivated and comes with convincing numerical experiments.
Reviewers agreed that this is a worthy contribution to ICLR with the potential to trigger further research in the interplay between deep learning and physics. | train | [
"BJBU32dgz",
"SyeN_yclz",
"Bk6nbuf-M",
"Skf3ipimM",
"r1saUCzXG",
"rkX5LRzmz",
"Skp8ICG7f",
"BkK-sP4xf",
"H1UFgYQlG",
"HJdRxVkgM",
"H1lXnCjyG",
"Sy569OJ1G",
"SykEv6KRW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"The paper ‘Deep learning for Physical Process: incorporating prior physical knowledge’ proposes\nto question the use of data-intensive strategies such as deep learning in solving physical \ninverse problems that are traditionally solved through assimilation strategies. They notably show\nhow physical priors on a g... | [
7,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
2,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_By4HsfWAZ",
"iclr_2018_By4HsfWAZ",
"iclr_2018_By4HsfWAZ",
"iclr_2018_By4HsfWAZ",
"SyeN_yclz",
"BJBU32dgz",
"Bk6nbuf-M",
"H1UFgYQlG",
"HJdRxVkgM",
"H1lXnCjyG",
"iclr_2018_By4HsfWAZ",
"SykEv6KRW",
"iclr_2018_By4HsfWAZ"
] |
iclr_2018_ryazCMbR- | Communication Algorithms via Deep Learning | Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the discovery of decoding algorithms via deep learning. We study a family of sequential codes parametrized by recurrent neural network (RNN) architectures. We show that creatively designed and trained RNN architectures can decode well known sequential codes such as the convolutional and turbo codes with close to optimal performance on the additive white Gaussian noise (AWGN) channel, which itself is achieved by breakthrough algorithms of our times (Viterbi and BCJR decoders, representing dynamic programming and forward-backward algorithms). We show strong generalizations, i.e., we train at a specific signal to noise ratio and block length but test at a wide range of these quantities, as well as robustness and adaptivity to deviations from the AWGN setting. | accepted-poster-papers | This paper studies trainable deep encoders/decoders in the context of coding theory, based on recurrent neural networks. It presents highly promising results showing that one may be able to use learnt encoders and decoders on channels where no predefined codes are known.
Besides these encouraging aspects, there are important concerns that the authors are encouraged to address; in particular, reviewers noted that the main contribution of this paper is mostly on the learnt encoding/decoding scheme rather than in the replacement of Viterbi/BCJR. Also, complexity should be taken into account when comparing different decoding schemes.
Overall, the AC leans towards acceptance, since this paper may trigger further research in this direction. | train | [
"BJKuTzwBf",
"HyBtZlVrf",
"r1u-g-JSf",
"ry10QYxSM",
"ByEN_hFgM",
"S1PB3Ocef",
"BkbcZjAgM",
"S1LKJamVf",
"rygrya7Ez",
"r1oRA2QVG",
"rkAzD4Z4f",
"SyDC5dgEG",
"BJwz5EyEz",
"B1lvp76mG",
"Bk79HmpQf",
"HyGycOnXM",
"SJkduKjXz",
"BydMVxizM",
"Hkm9QejMM",
"r11m9pcfG",
"HyOXBaqzG",
"... | [
"public",
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public",
"public",
"public",
"author",
"author",
"public",
"public",
"author",
"author",
"author",
"author",
"author",
"author"... | [
"Thanks!",
"Thanks for your comments. Indeed, the Turbo decoder is not nearest neighbor, and therefore there is no theorem that the turbo decoder will perform better on every other noise distribution with the same variance. Indeed, if such was the case, there would be no way for the turbo decoder to do worse (sin... | [
-1,
-1,
-1,
-1,
2,
6,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"HyBtZlVrf",
"r1u-g-JSf",
"rygrya7Ez",
"ByEN_hFgM",
"iclr_2018_ryazCMbR-",
"iclr_2018_ryazCMbR-",
"iclr_2018_ryazCMbR-",
"rkAzD4Z4f",
"SyDC5dgEG",
"BJwz5EyEz",
"iclr_2018_ryazCMbR-",
"B1lvp76mG",
"Bk79HmpQf",
"SJkduKjXz",
"HyGycOnXM",
"iclr_2018_ryazCMbR-",
"r11m9pcfG",
"ByEN_hFgM"... |
iclr_2018_rJYFzMZC- | Simulating Action Dynamics with Neural Process Networks | Understanding procedural language requires anticipating the causal effects of actions, even when they are not explicitly stated. In this work, we introduce Neural Process Networks to understand procedural text through (neural) simulation of action dynamics. Our model complements existing memory architectures with dynamic entity tracking by explicitly modeling actions as state transformers. The model updates the states of the entities by executing learned action operators. Empirical results demonstrate that our proposed model can reason about the unstated causal effects of actions, allowing it to provide more accurate contextual information for understanding and generating procedural text, all while offering more interpretable internal representations than existing alternatives. | accepted-poster-papers | this submission proposes a novel extension of existing recurrent networks that focus on capturing long-term dependencies via tracking entities/their states and tested it on a new task. there's a concern that the proposed approach is heavily engineered toward the proposed task and may not be applicable to other tasks, which i fully agree with. i however find the proposed approach and the authors' justification to be thorough enough, and for now, recommend it to be accepted. | test | [
"SJUEXlDxf",
"r1Hu15Kxz",
"rJQcSB5gG",
"S1d6eY-7f",
"HyGietbmz",
"ryq_xY-7M",
"Hy2mkt-7z",
"HyFjC_-Xz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"Summary\n\nThis paper presents Neural Process Networks, an architecture for capturing procedural knowledge stated in texts that makes use of a differentiable memory, a sentence and word attention mechanism, as well as learning action representations and their effect on entity representations. The architecture is t... | [
6,
9,
8,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rJYFzMZC-",
"iclr_2018_rJYFzMZC-",
"iclr_2018_rJYFzMZC-",
"HyGietbmz",
"ryq_xY-7M",
"SJUEXlDxf",
"r1Hu15Kxz",
"rJQcSB5gG"
] |
iclr_2018_BkeqO7x0- | Unsupervised Cipher Cracking Using Discrete GANs | This work details CipherGAN, an architecture inspired by CycleGAN used for inferring the underlying cipher mapping given banks of unpaired ciphertext and plaintext. We demonstrate that CipherGAN is capable of cracking language data enciphered using shift and Vigenere ciphers to a high degree of fidelity and for vocabularies much larger than previously achieved. We present how CycleGAN can be made compatible with discrete data and train in a stable way. We then prove that the technique used in CipherGAN avoids the common problem of uninformative discrimination associated with GANs applied to discrete data.
| accepted-poster-papers | this work adapts cycle GAN to the problem of decipherment with some success. it's still an early result, but all the reviewers have found it to be interesting and worthwhile for publication. | test | [
"S1skfxRxM",
"r1TBz6I4M",
"SykysFulM",
"ryn4mW9ef",
"HkGl-GcZf",
"By1ReMc-M",
"HkWfxz9WM",
"By7yez5Zz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"SUMMARY\n\nThe paper considers the problem of using cycle GANs to decipher text encrypted with historical ciphers. Also it presents some theory to address the problem that discriminating between the discrete data and continuous prediction is too simple. The model proposed is a variant of the cycle GAN in which in ... | [
7,
-1,
7,
8,
-1,
-1,
-1,
-1
] | [
4,
-1,
1,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkeqO7x0-",
"HkGl-GcZf",
"iclr_2018_BkeqO7x0-",
"iclr_2018_BkeqO7x0-",
"By1ReMc-M",
"S1skfxRxM",
"ryn4mW9ef",
"SykysFulM"
] |
iclr_2018_Sy-dQG-Rb | Neural Speed Reading via Skim-RNN | Inspired by the principles of speed reading, we introduce Skim-RNN, a recurrent neural network (RNN) that dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens. Skim-RNN gives a significant computational advantage over an RNN that always updates the entire hidden state. Skim-RNN uses the same input and output interfaces as a standard RNN and can be easily used instead of RNNs in existing models. In our experiments, we show that Skim-RNN can achieve significantly reduced computational cost without losing accuracy compared to standard RNNs across five different natural language tasks. In addition, we demonstrate that the trade-off between accuracy and speed of Skim-RNN can be dynamically controlled during inference time in a stable manner. Our analysis also shows that Skim-RNN running on a single CPU offers lower latency compared to standard RNNs on GPUs. | accepted-poster-papers | this submission proposes an efficient parametrization of a recurrent neural net by using two transition functions (one large and one small) to reduce the amount of computation (though without an actual speedup on GPU). the reviewers were very positive about the submission.
please, do not forget to include all the results and the discussion on the proposed approach's relationship to VCRNN, which was presented at the same conference just a year ago. | train | [
"BkjQVC8Sz",
"H1dhmCIrz",
"rJiTSpSSf",
"BkXOd1q4G",
"r1izCPYlG",
"HJpgrTKxf",
"rkZtyy5gf",
"HyDWxCXNf",
"SyMwQNmEG",
"SkC43pf4G",
"ByGiVpuQM",
"BynOEa_Qf",
"SJNL4ad7M",
"SJDcXGVJM",
"SytImi7kG",
"BycR50MyG"
] | [
"official_reviewer",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"NT",
"We note that we could not increase FLOP reduction of VCRNN by controlling the hyperparameters on SQuAD. Also, VCRNN performs worse than vanilla RNN (LSTM) without any gain in FLOP reduction, which we believe is due to the difficulty in training (biased gradient, etc.).\n\nWe believe that this supports our... | [
-1,
-1,
-1,
-1,
7,
7,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
-1,
3,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"H1dhmCIrz",
"rJiTSpSSf",
"iclr_2018_Sy-dQG-Rb",
"HyDWxCXNf",
"iclr_2018_Sy-dQG-Rb",
"iclr_2018_Sy-dQG-Rb",
"iclr_2018_Sy-dQG-Rb",
"SyMwQNmEG",
"SkC43pf4G",
"iclr_2018_Sy-dQG-Rb",
"rkZtyy5gf",
"HJpgrTKxf",
"r1izCPYlG",
"SytImi7kG",
"BycR50MyG",
"iclr_2018_Sy-dQG-Rb"
] |
iclr_2018_SyJS-OgR- | Multi-level Residual Networks from Dynamical Systems View | Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40\% with superior or on-par accuracy. | accepted-poster-papers | this submission proposes a learning algorithm for resnets based on their interpretation of them as a discrete approximation to a continuous-time dynamical system. all the reviewers found the submission to be clearly written and well motivated, and to propose an interesting and effective learning algorithm for resnets. | test | [
"rJiJWZtHz",
"rk40-nDlz",
"SyuwCCKlz",
"HJzVc2sxf",
"HkrMrL2mG",
"rk51SIhmz",
"ByShEI27z",
"SkSwN8n7M"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"> We are currently working on the experiments of ImageNet\n\nAny update on this front ?\nImproved ImageNet training time would significantly increase the impact of this paper.",
"This paper interprets deep residual network as a dynamic system, and proposes a novel training algorithm to train it in a constructive... | [
-1,
7,
7,
7,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
4,
-1,
-1,
-1,
-1
] | [
"ByShEI27z",
"iclr_2018_SyJS-OgR-",
"iclr_2018_SyJS-OgR-",
"iclr_2018_SyJS-OgR-",
"rk40-nDlz",
"SyuwCCKlz",
"HJzVc2sxf",
"iclr_2018_SyJS-OgR-"
] |
iclr_2018_HktJec1RZ | Towards Neural Phrase-based Machine Translation | In this paper, we present Neural Phrase-based Machine Translation (NPMT). Our method explicitly models the phrase structures in output sequences using Sleep-WAke Networks (SWAN), a recently proposed segmentation-based sequence modeling method. To mitigate the monotonic alignment requirement of SWAN, we introduce a new layer to perform (soft) local reordering of input sequences. Different from existing neural machine translation (NMT) approaches, NPMT does not use attention-based decoding mechanisms. Instead, it directly outputs phrases in a sequential order and can decode in linear time. Our experiments show that NPMT achieves superior performances on IWSLT 2014 German-English/English-German and IWSLT 2015 English-Vietnamese machine translation tasks compared with strong NMT baselines. We also observe that our method produces meaningful phrases in output languages. | accepted-poster-papers | this submission introduces soft local reordering to the recently proposed SWAN layer [Wang et al., 2017] to make it suitable for machine translation. although only in small-scale experiments, the results are convincing. | val | [
"Sy2fyR7gG",
"r1IRR2Yez",
"r1PrkGheM",
"rkMojlVzG",
"SyGiIgEff",
"SJzkqxNfz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper introduces a neural translation model that automatically discovers phrases. This idea is very interesting and tries to marry phrase-based statistical machine translation with neural methods in a principled way. However, the clarity of the paper could be improved.\n\nThe local reordering layer has the ab... | [
6,
6,
8,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1
] | [
"iclr_2018_HktJec1RZ",
"iclr_2018_HktJec1RZ",
"iclr_2018_HktJec1RZ",
"Sy2fyR7gG",
"r1PrkGheM",
"r1IRR2Yez"
] |
iclr_2018_ByJHuTgA- | On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
| accepted-poster-papers | this submission demonstrates an existing loophole (?) in rushing out new neural language models, by carefully (and expensively) running hyperparameter tuning of baseline approaches. i feel this is an important contribution, but as pointed out by some reviewers, i would have liked to see whether the conclusion stands even with more realistic data (as pointed out by some in the field quite harshly, perplexity on PTB should not be considered seriously, and i believe the same for the other two corpora used in this submission.) that said, it's an important paper in general which will work as an alarm to the current practice in the field, and i recommend it to be accepted. | train | [
"S1Mw8jBef",
"rJTcBCtxG",
"HkGW8A2gG",
"rJwjH7HzM",
"SkNXbLNzf",
"HJOf_f2Wz",
"HJfg_G2bz",
"BJM9Df2WM",
"BJXUZl2Wf",
"SJeVnLdbG",
"H14DMe7WG",
"HkGngSGZz",
"BkqLXA1Zz",
"r1naAI6eG",
"r19qHo2gM",
"HJV5juheG",
"Sk5vVdhez",
"HkATqf5xz",
"HksEzG5gf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"public",
"author",
"author",
"public",
"public",
"author",
"public"
] | [
"The submitted manuscript describes an exercise in performance comparison for neural language models under standardization of the hyperparameter tuning and model selection strategies and costs. This type of study is important to give perspective to non-standardized performance scores reported across separate publi... | [
7,
5,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
2,
5,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"iclr_2018_ByJHuTgA-",
"S1Mw8jBef",
"rJTcBCtxG",
"HkGW8A2gG",
"SJeVnLdbG",
"iclr_2018_ByJHuTgA-",
"HkGngSGZz",
"iclr_2018_ByJHuTgA-",
"rJTcBCtxG",
"Sk5vVdhez",
"HJV5juheG",
"HkATqf5xz",
"iclr_... |
iclr_2018_rkfOvGbCW | Memory-based Parameter Adaptation | Deep neural networks have excelled on a wide range of problems, from vision to language and game playing. Neural networks very gradually incorporate information into weights as they process data, requiring very low learning rates. If the training distribution shifts, the network is slow to adapt, and when it does adapt, it typically performs badly on the training distribution before the shift. Our method, Memory-based Parameter Adaptation, stores examples in memory and then uses a context-based lookup to directly modify the weights of a neural network. Much higher learning rates can be used for this local adaptation, removing the need for many iterations over similar data before good predictions can be made. As our method is memory-based, it alleviates several shortcomings of neural networks, such as catastrophic forgetting, while enabling fast, stable acquisition of new knowledge, learning with imbalanced class labels, and fast learning during evaluation. We demonstrate this on a range of supervised tasks: large-scale image classification and language modelling. | accepted-poster-papers | the proposed approach nicely incorporates various ideas from recent work into a single meta-learning (or domain adaptation or incremental learning or ...) framework. although better empirical comparison to existing (however recent they are) approaches would have made it stronger, the reviewers all found this submission to be worth publication, with which i agree. | val | [
"HylhQnUNG",
"HkJsPxmxG",
"rktPEKveG",
"ByEeLZ5xz",
"H1cEqB6mz",
"SJAlIN6mf",
"HJLvPs3mz",
"H1ymHgbmz",
"Skiov_-zz",
"HkbqcOZfG",
"BJQOOO-GG",
"H1rrLdWfM",
"rk86VuZGz",
"ByYMHubGz",
"ByINRE1zM",
"rJyW0n4eM",
"B1tJwiMef"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"author",
"author",
"author",
"official_reviewer",
"author",
"public"
] | [
"Dear Authors and AC\n\nThank you for your detailed answers -- having to split in two comments due to length shows how seriously you take it :)\nBetween them and the fact that my mind kept wandering back to the ideas in this paper during the holidays, I am happy to maintain my score of 8 - Top 50% papers.",
"Over... | [
-1,
6,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
5,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"ByEeLZ5xz",
"iclr_2018_rkfOvGbCW",
"iclr_2018_rkfOvGbCW",
"iclr_2018_rkfOvGbCW",
"SJAlIN6mf",
"H1ymHgbmz",
"iclr_2018_rkfOvGbCW",
"rJyW0n4eM",
"rktPEKveG",
"ByINRE1zM",
"HkJsPxmxG",
"rktPEKveG",
"ByEeLZ5xz",
"ByEeLZ5xz",
"iclr_2018_rkfOvGbCW",
"B1tJwiMef",
"iclr_2018_rkfOvGbCW"
] |
iclr_2018_HJJ23bW0b | Initialization matters: Orthogonal Predictive State Recurrent Neural Networks | Learning to predict complex time-series data is a fundamental challenge in a range of disciplines including Machine Learning, Robotics, and Natural Language Processing. Predictive State Recurrent Neural Networks (PSRNNs) (Downey et al.) are a state-of-the-art approach for modeling time-series data which combine the benefits of probabilistic filters and Recurrent Neural Networks into a single model. PSRNNs leverage the concept of Hilbert Space Embeddings of distributions (Smola et al.) to embed predictive states into a Reproducing Kernel Hilbert Space, then estimate, predict, and update these embedded states using Kernel Bayes Rule. Practical implementations of PSRNNs are made possible by the machinery of Random Features, where input features are mapped into a new space where dot products approximate the kernel well. Unfortunately PSRNNs often require a large number of RFs to obtain good results, resulting in large models which are slow to execute and slow to train. Orthogonal Random Features (ORFs) (Choromanski et al.) is an improvement on RFs which has been shown to decrease the number of RFs required for pointwise kernel approximation. Unfortunately, it is not clear that ORFs can be applied to PSRNNs, as PSRNNs rely on Kernel Ridge Regression as a core component of their learning algorithm, and the theoretical guarantees of ORF do not apply in this setting. In this paper, we extend the theory of ORFs to Kernel Ridge Regression and show that ORFs can be used to obtain Orthogonal PSRNNs (OPSRNNs), which are smaller and faster than PSRNNs. In particular, we show that OPSRNN models clearly outperform LSTMs and furthermore, can achieve accuracy similar to PSRNNs with an order of magnitude smaller number of features needed. 
| accepted-poster-papers | this submission presents the positive impact of using orthogonal random features instead of unstructured random features for predictive state recurrent neural nets. there's been some sentiment by the reviewers that the contribution is rather limited, but after further discussion with another AC and PC's, we have concluded that it may be limited but a solid follow-up on the previous work on predictive state RNN. | train | [
"ByofVOOgG",
"HJzahgcgf",
"rJujSJjgG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"I was very confused by some parts of the paper that are simple copy-past from the paper of Downey et al. which has been accepted for publication in NIPS. In particular, in section 3, several sentences are taken as they are from the Downey et al.’s paper. Some examples :\n\n« provide a compact representation of a ... | [
4,
8,
7
] | [
5,
4,
2
] | [
"iclr_2018_HJJ23bW0b",
"iclr_2018_HJJ23bW0b",
"iclr_2018_HJJ23bW0b"
] |
iclr_2018_rJUYGxbCW | PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples | Adversarial perturbations of normal images are usually imperceptible to humans, but they can seriously confuse state-of-the-art machine learning models. What makes them so special in the eyes of image classifiers? In this paper, we show empirically that adversarial examples mainly lie in the low probability regions of the training distribution, regardless of attack types and targeted models. Using statistical hypothesis testing, we find that modern neural density models are surprisingly good at detecting imperceptible image perturbations. Based on this discovery, we devised PixelDefend, a new approach that purifies a maliciously perturbed image by moving it back towards the distribution seen in the training data. The purified image is then run through an unmodified classifier, making our method agnostic to both the classifier and the attacking method. As a result, PixelDefend can be used to protect already deployed models and be combined with other model-specific defenses. Experiments show that our method greatly improves resilience across a wide variety of state-of-the-art attacking methods, increasing accuracy on the strongest attack from 63% to 84% for Fashion MNIST and from 32% to 70% for CIFAR-10. | accepted-poster-papers | The paper studies the use of PixelCNN density models for the detection of adversarial images, which tend to lie in low-probability parts of image space. The work is novel, relevant to the ICLR community, and appears to be technically sound.
A downside of the paper is its limited empirical evaluation: there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to much higher-dimensional datasets, for instance, ImageNet. The paper would, therefore, benefit from empirical evaluations of the defense on a dataset like ImageNet. | train | [
"HJ9WQx6JG",
"rJ4_WfuxM",
"rJbiu3lbM",
"BkMlqBTfG",
"rklJKHaMf",
"Sk-nvSTMf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"\nI read the rebuttal and thank the authors for the thoughtful responses and revisions. The updated Figure 2 and Section 4.4. addresses my primary concerns. Upwardly revising my review.\n\n====================\n\nThe authors describe a method for detecting adversarial examples by measuring the likelihood in terms ... | [
7,
7,
7,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1
] | [
"iclr_2018_rJUYGxbCW",
"iclr_2018_rJUYGxbCW",
"iclr_2018_rJUYGxbCW",
"rJ4_WfuxM",
"rJbiu3lbM",
"HJ9WQx6JG"
] |
iclr_2018_Bys4ob-Rb | Certified Defenses against Adversarial Examples | While neural networks have achieved high accuracy on standard image classification benchmarks, their accuracy drops to nearly zero in the presence of small adversarial perturbations to test inputs. Defenses based on regularization and adversarial training have been proposed, but often followed by new, stronger attacks that defeat these defenses. Can we somehow end this arms race? In this work, we study this problem for neural networks with one hidden layer. We first propose a method based on a semidefinite relaxation that outputs a certificate that for a given network and test input, no attack can force the error to exceed a certain value. Second, as this certificate is differentiable, we jointly optimize it with the network parameters, providing an adaptive regularizer that encourages robustness against all attacks. On MNIST, our approach produces a network and a certificate that no attack that perturbs each pixel by at most ϵ=0.1 can cause more than 35% test error.
| accepted-poster-papers | The paper presents a differentiable upper bound on the performance of a classifier on an adversarially perturbed example (with a small perturbation in the L-infinity sense). The paper presents novel ideas, is well-written, and appears technically sound. It will likely be of interest to the ICLR community.
The only downside of the paper is its limited empirical evaluation: there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to much higher-dimensional datasets, for instance, ImageNet. The paper would, therefore, benefit from empirical evaluations of the defenses on a dataset like ImageNet. | train | [
"SJlhZp8gf",
"BJVgLg9xf",
"SkwJQwogM",
"rJ3juc1MM",
"rkaLdqJMM",
"rkjMdqkGM",
"r1kqwc1Gz",
"By245E-Wf",
"r15EJeKyG",
"SyBzxMU1z",
"SJT_HG81G",
"Byvbr5WJM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"This paper develops a new differentiable upper bound on the performance of classifier when the adversarial input in l_infinity is assumed to be applied.\nWhile the attack model is quite general, the current bound is only valid for linear and NN with one hidden layer model, so the result is quite restrictive.\n\nHo... | [
8,
8,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"SJlhZp8gf",
"BJVgLg9xf",
"SkwJQwogM",
"By245E-Wf",
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"iclr_2018_Bys4ob-Rb",
"Byvbr5WJM",
"iclr_2018_Bys4ob-Rb"
] |
iclr_2018_BkJ3ibb0- | Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models | In recent years, deep neural network approaches have been widely adopted for machine learning tasks, including classification. However, they were shown to be vulnerable to adversarial perturbations: carefully crafted small perturbations can cause misclassification of legitimate images. We propose Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against such attacks. Defense-GAN is trained to model the distribution of unperturbed images. At inference time, it finds a close output to a given image which does not contain the adversarial changes. This output is then fed to the classifier. Our proposed method can be used with any classification model and does not modify the classifier structure or training procedure. It can also be used as a defense against any attack as it does not assume knowledge of the process for generating the adversarial examples. We empirically show that Defense-GAN is consistently effective against different attack methods and improves on existing defense strategies. | accepted-poster-papers | The paper studied defenses against adversarial examples by training a GAN and, at inference time, finding the GAN-generated sample that is nearest to the (adversarial) input example. Next, it classifies the generated example rather than the input example. This defense is interesting and novel. The CelebA experiments the authors added in their revision suggest that the defense can be effective on high-resolution RGB images. | train | [
"H17TwR4rM",
"By-CxBKgz",
"BympCwwgf",
"rJOVWxjez",
"SkbvmBamf",
"Hy120kU7f",
"Bkw8Ck8QG",
"r1MMCyImM",
"SyS0aJ8Xz",
"r1D5pJ87f",
"Bkbgpk87z",
"S1bEhkU7G",
"B1wgPVOzG",
"S1c64RJzz",
"ryW5rcl-f",
"SkdMUQaAZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public",
"public",
"public",
"public"
] | [
"B) C) Thanks for the additional experiments, I think they make the paper stronger. In particular they validate that scaling is proportional to L but not (linear in) to image size, and that the method works in RGB.\nD) OK.\nA) E) I still think that these additional experiments would help, but I am now marginally co... | [
-1,
6,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"Bkbgpk87z",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"SkdMUQaAZ",
"ryW5rcl-f",
"S1c64RJzz",
"B1wgPVOzG",
"BympCwwgf",
"By-CxBKgz",
"rJOVWxjez",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-",
"iclr_2018_BkJ3ibb0-"... |
iclr_2018_rkZvSe-RZ | Ensemble Adversarial Training: Attacks and Defenses | Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model's loss.
We show that this form of adversarial training converges to a degenerate global minimum, wherein small curvature artifacts near the data points obfuscate a linear approximation of the loss. The model thus learns to generate weak perturbations, rather than defend against strong ones. As a result, we find that adversarial training remains vulnerable to black-box attacks, where we transfer perturbations computed on undefended models, as well as to a powerful novel single-step attack that escapes the non-smooth vicinity of the input data via a small random step.
We further introduce Ensemble Adversarial Training, a technique that augments training data with perturbations transferred from other models. On ImageNet, Ensemble Adversarial Training yields models with strong robustness to black-box attacks. In particular, our most robust model won the first round of the NIPS 2017 competition on Defenses against Adversarial Attacks. | accepted-poster-papers | The paper studies a defense against adversarial examples that re-trains convolutional networks on adversarial examples constructed to attack pre-trained networks. Whilst the proposed approach is not very original, the paper does present a solid empirical baseline for these kinds of defenses. In particular, it goes beyond the "toy" experiments that most other studies in this space perform by experimenting on ImageNet. This is important as there is evidence suggesting that defenses against adversarial examples that work well on MNIST/CIFAR do not necessarily transfer well to ImageNet. The importance of the baseline method studied in this paper is underlined by its frequent application in the recent NIPS competition on adversarial examples. | train | [
"BkM3vGDlf",
"SJxF3VsxG",
"S1suPTx-G",
"rJgZKlFGf",
"rySuOxYGG",
"rynrIxYGz",
"r1LmIgFGM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper proposes ensemble adversarial training, in which adversarial examples crafted on other static pre-trained models are used in the training phase. Their method makes deep networks robust to black-box attacks, which was empirically demonstrated.\n\nThis is an empirical paper. The ideas are simple and not s... | [
6,
6,
6,
-1,
-1,
-1,
-1
] | [
2,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rkZvSe-RZ",
"iclr_2018_rkZvSe-RZ",
"iclr_2018_rkZvSe-RZ",
"iclr_2018_rkZvSe-RZ",
"S1suPTx-G",
"SJxF3VsxG",
"BkM3vGDlf"
] |
iclr_2018_SJyVzQ-C- | Fraternal Dropout | Recurrent neural networks (RNNs) are important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks. | accepted-poster-papers | The paper studies a dropout variant, called fraternal dropout. The paper is somewhat incremental in that the proposed approach is closely related to expectation linear dropout. Having said that, fraternal dropout does improve a state-of-the-art language model on PTB and WikiText2 by ~0.5-1.7 perplexity points. The paper is well-written and appears technically sound.
Some reviewers complain that the authors could have performed a more careful hyperparameter search on the fraternal dropout model. The authors appear to have partly addressed those concerns, which, frankly, I don't really agree with either. By doing only a limited hyperparameter optimization, the authors are putting their "own" method at a disadvantage. If anything, the fact that their method gets strong performance despite this disadvantage (compared to very strong baseline models) is an argument in favor of fraternal dropout. | train | [
"SJGZIlkSz",
"rkblPhrgf",
"SkmNLstxG",
"rJJ2RIigz",
"SkuiLrp7M",
"rJ6YkqBfz",
"ryNO0J2Wz",
"S1Yt6udbz",
"HJ4xA__bz",
"HkN3T__Wf"
] | [
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer",
"author",
"author",
"author"
] | [
"The proposed method, fraternal dropout, is the version of self-ensembles (Pi model) for RNNs. The authors proved the fact that the regularization term for self-ensemble is worth for learning RNN models. The results of the paper show incredible performances from the previous state-of-the-art performances on languag... | [
-1,
5,
5,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
3,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"iclr_2018_SJyVzQ-C-",
"ryNO0J2Wz",
"HJ4xA__bz",
"rJJ2RIigz",
"rkblPhrgf",
"SkmNLstxG"
] |
iclr_2018_SJcKhk-Ab | Can recurrent neural networks warp time? | Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use \emph{ad hoc} gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues.
We prove that learnable gates in a recurrent model formally provide \emph{quasi-invariance to general time transformations} in the input data. We recover part of the LSTM architecture from a simple axiomatic approach.
This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new \emph{chrono initialization} is shown to greatly improve learning of long term dependencies, with minimal implementation effort.
| accepted-poster-papers | All the reviewers like the theoretical result presented in the paper, which relates the gating mechanism of LSTMs (and GRUs) to time invariance / warping. The theoretical result is great and is used to propose a heuristic for setting biases when time invariance scales are known. The experiments are not mind-boggling, but none of the reviewers seem to think that's a show stopper. | test | [
"Sk2_qmcxf",
"rk10EE5Vf",
"HyqtEE54z",
"SkrzwsKeG",
"Hyb2BDI4G",
"HkIzPXqxM",
"ry_xKKQEM",
"S12wnhz4f",
"HyGJjR9QG",
"ryPwnxFXf",
"HJUl3etXf",
"HyuqsxtXG"
] | [
"official_reviewer",
"author",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Summary:\nThis paper shows that incorporating invariance to time transformations in recurrent networks naturally results in a gating mechanism used by LSTMs and their variants. This is then used to develop a simple bias initialization scheme for the gates when the range of temporal dependencies relevant for a prob... | [
8,
-1,
-1,
8,
-1,
8,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
-1,
-1,
4,
-1,
4,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJcKhk-Ab",
"iclr_2018_SJcKhk-Ab",
"Hyb2BDI4G",
"iclr_2018_SJcKhk-Ab",
"ry_xKKQEM",
"iclr_2018_SJcKhk-Ab",
"S12wnhz4f",
"HyuqsxtXG",
"iclr_2018_SJcKhk-Ab",
"SkrzwsKeG",
"HkIzPXqxM",
"Sk2_qmcxf"
] |
iclr_2018_HyUNwulC- | Parallelizing Linear Recurrent Neural Nets Over Sequence Length | Recurrent neural networks (RNNs) are widely used to model sequential data but
their non-linear dependencies between sequence elements prevent parallelizing
training over sequence length. We show the training of RNNs with only linear
sequential dependencies can be parallelized over the sequence length using the
parallel scan algorithm, leading to rapid training on long sequences even with
small minibatch size. We develop a parallel linear recurrence CUDA kernel and
show that it can be applied to immediately speed up training and inference of
several state of the art RNN architectures by up to 9x. We abstract recent work
on linear RNNs into a new framework of linear surrogate RNNs and develop a
linear surrogate model for the long short-term memory unit, the GILR-LSTM, that
utilizes parallel linear recurrence. We extend sequence learning to new
extremely long sequence regimes that were previously out of reach by
successfully training a GILR-LSTM on a synthetic sequence classification task
with a one million timestep dependency.
| accepted-poster-papers | Paper presents a way in which linear RNNs can be computed (fprop, bprop) using parallel scan. They show big improvements in speedups and show application on really long sequences. Reviews were generally favorable. | val | [
"Hkr1wGOeG",
"ry7sCqtgM",
"SyAgjAtgG",
"rJIWI2PWM",
"BycaVhwZz",
"r1T7-3PbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"This paper focuses on accelerating RNN by applying the method from Blelloch (1990). The application is straightforward and thus technical novelty of this paper is limited. But the results are impressive. \n\nOne concern is the proposed technique is only applied for few types of RNNs which may limit its application... | [
6,
7,
7,
-1,
-1,
-1
] | [
3,
2,
4,
-1,
-1,
-1
] | [
"iclr_2018_HyUNwulC-",
"iclr_2018_HyUNwulC-",
"iclr_2018_HyUNwulC-",
"Hkr1wGOeG",
"ry7sCqtgM",
"SyAgjAtgG"
] |
iclr_2018_HkTEFfZRb | Attacking Binarized Neural Networks | Neural networks with low-precision weights and activations offer compelling
efficiency advantages over their full-precision equivalents. The two most
frequently discussed benefits of quantization are reduced memory consumption,
and a faster forward pass when implemented with efficient bitwise
operations. We propose a third benefit of very low-precision neural networks:
improved robustness against some adversarial attacks, and in the worst case,
performance that is on par with full-precision models. We focus on the very
low-precision case where weights and activations are both quantized to ±1,
and note that stochastically quantizing weights in just one layer can sharply
reduce the impact of iterative attacks. We observe that non-scaled binary neural
networks exhibit a similar effect to the original \emph{defensive distillation}
procedure that led to \emph{gradient masking}, and a false notion of security.
We address this by conducting both black-box and white-box experiments with
binary models that do not artificially mask gradients. | accepted-poster-papers | Paper was well written and rebuttal was well thought out and convincing.
The reviewers agree that the paper showed BNNs were good (relatively speaking) at resisting adversarial examples. Some question was raised about whether the methods would work on larger datasets and models. The authors offered some experiments in this regard in the rebuttal to this end. Also, a public comment appeared to follow up on CIFAR and report correlated results. | train | [
"BkSH7A5Qz",
"Sy5hsUOlG",
"H1EWH1KxG",
"HkNP3wvWM",
"BJO40TcmM",
"ByKGyXMfz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"We thank the reviewers for their positive and constructive feedback. We believe that we have addressed all of the main questions and concerns in the most recent revision of the paper. These are detailed below:\n\nR2 - Higher dimensional data\n\nTo confirm that our findings hold for higher dimensional data, and fur... | [
-1,
7,
7,
6,
-1,
-1
] | [
-1,
3,
4,
5,
-1,
-1
] | [
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb",
"iclr_2018_HkTEFfZRb"
] |
iclr_2018_S1jBcueAb | Depthwise Separable Convolutions for Neural Machine Translation | Depthwise separable convolutions reduce the number of parameters and computation used in convolutional operations while increasing representational efficiency.
They have been shown to be successful in image classification models, both in obtaining better models than previously possible for a given parameter count (the Xception architecture) and considerably reducing the number of parameters required to perform at a given level (the MobileNets family of architectures). Recently, convolutional sequence-to-sequence networks have been applied to machine translation tasks with good results. In this work, we study how depthwise separable convolutions can be applied to neural machine translation. We introduce a new architecture inspired by Xception and ByteNet, called SliceNet, which enables a significant reduction of the parameter count and amount of computation needed to obtain results like ByteNet, and, with a similar parameter count, achieves better results.
In addition to showing that depthwise separable convolutions perform well for machine translation, we investigate the architectural changes that they enable: we observe that thanks to depthwise separability, we can increase the length of convolution windows, removing the need for filter dilation. We also introduce a new super-separable convolution operation that further reduces the number of parameters and computational cost of the models. | accepted-poster-papers | The paper explores depthwise separable convolutions for sequence-to-sequence models with convolutional encoders.
R1 and R3 liked the paper and the results. R3 thought the presentation of the convolutional space was nice, but the experiments were hurried. Other reviewers thought the paper as a whole had dense parts and needed cleaning up, but the authors seem to have only partially done this.
From the reviewers' comments, I'm giving this a borderline accept. I would have felt much more comfortable with the decision if the authors had incorporated the reviewers' suggestions more thoroughly. | train | [
"rJeAByPlf",
"BkCwTl9lG",
"rJ9-yZ9lM",
"BJkhyncQM",
"BJrg6o9mz",
"HyW_7BOZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"official_reviewer"
] | [
"Pros:\n- new module\n- good performances (not state-of-the-art)\nCons:\n- additional experiments\n\nThe paper is well motivated, and is purely experimental and proposes a new architecture. However, I believe that more experiments should be performed and the explanations could be more concise.\n\nThe section 3 is d... | [
5,
7,
7,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1
] | [
"iclr_2018_S1jBcueAb",
"iclr_2018_S1jBcueAb",
"iclr_2018_S1jBcueAb",
"rJeAByPlf",
"rJ9-yZ9lM",
"rJeAByPlf"
] |
iclr_2018_rywHCPkAW | Noisy Networks For Exploration | We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent’s policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and Dueling agents (entropy reward and epsilon-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub- to super-human performance. | accepted-poster-papers | The paper proposes to add noise to the weights of a policy network during learning in deep-RL settings and finds that this results in better performance with DQN, A3C and other algorithms that use other exploration strategies. Unfortunately, the paper does not do a thorough job of exploring the reasons, and doesn't offer a comparison to other methods that had been out on arXiv for several months before the submission, in spite of requests from reviewers and anonymous commenters. Otherwise I might have supported recommending the paper for a talk. | train | [
"Hyf0aUVeM",
"rJ6Z7prxf",
"H14gEaFxG",
"SJDBQS5mz",
"BJZhEy5Xf",
"B1o89OFXz",
"B1e_W8uMM",
"H1NaqYFmz",
"r14ytLuMM",
"ry7jv6OQf",
"HJlU1T9GM",
"S1cI98dGM",
"S1pPiLdMG",
"BJRR3paAZ"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author",
"official_reviewer",
"official_reviewer",
"author",
"author",
"public"
] | [
"In this paper, a new heuristic is introduced with the purpose of controlling the exploration in deep reinforcement learning. \n\nThe proposed approach, NoisyNet, seems very simple and smart: a noise of zero mean and unknown variance is added to each weight of the deep network. The matrices of unknown variances are... | [
5,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_rywHCPkAW",
"iclr_2018_rywHCPkAW",
"iclr_2018_rywHCPkAW",
"BJZhEy5Xf",
"H1NaqYFmz",
"HJlU1T9GM",
"iclr_2018_rywHCPkAW",
"ry7jv6OQf",
"H14gEaFxG",
"S1cI98dGM",
"r14ytLuMM",
"rJ6Z7prxf",
"Hyf0aUVeM",
"iclr_2018_rywHCPkAW"
] |
iclr_2018_Hkc-TeZ0W | A Hierarchical Model for Device Placement | We introduce a hierarchical model for efficient placement of computational graphs onto hardware devices, especially in heterogeneous environments with a mixture of CPUs, GPUs, and other computational devices. Our method learns to assign graph operations to groups and to allocate those groups to available devices. The grouping and device allocations are learned jointly. The proposed method is trained with policy gradient and requires no human intervention. Experiments with widely-used
computer vision and natural language models show that our algorithm can find optimized, non-trivial placements for TensorFlow computational graphs with over 80,000 operations. In addition, our approach outperforms placements by human
experts as well as a previous state-of-the-art placement method based on deep reinforcement learning. Our method achieves runtime reductions of up to 60.6% per training step when applied to models such as Neural Machine Translation. | accepted-poster-papers | The authors provide an alternative method to [1] for placing ops in blocks. The results are shown to be an improvement over the prior RL-based placement in [1] and superior to *some* (maybe not the best) earlier methods for operation placement. The paper seems to have benefited strongly from reviewer feedback and seems like a reasonable contribution. We hope that the implementation may be made available to the community.
[1] Mirhoseini A, Pham H, Le Q V, et al. Device Placement Optimization with Reinforcement Learning[J]. arXiv preprint arXiv:1706.04972, 2017. | train | [
"HytWY1DVG",
"ryazKvH4M",
"BJuGT9zez",
"Sk-qjGYlz",
"rkSREOYgM",
"r1yts_37f",
"HJMiRCdGz",
"SypbR0OGG",
"rJZNkyYzz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Thanks for your response!\n\nAlthough we cited Scotch papers from 2009, the software we used was developed in 2012: http://www.labri.fr/perso/pelegrin/scotch/. Thanks to your suggestion, we have found a more recent graph partitioning package called KaHIP, which has publications in 2017 as well as ongoing software ... | [
-1,
-1,
5,
5,
8,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
5,
-1,
-1,
-1,
-1
] | [
"ryazKvH4M",
"rJZNkyYzz",
"iclr_2018_Hkc-TeZ0W",
"iclr_2018_Hkc-TeZ0W",
"iclr_2018_Hkc-TeZ0W",
"iclr_2018_Hkc-TeZ0W",
"Sk-qjGYlz",
"rkSREOYgM",
"BJuGT9zez"
] |
iclr_2018_BJJLHbb0- | Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection | Unsupervised anomaly detection on multi- or high-dimensional data is of great importance in both fundamental machine learning research and industrial applications, for which density estimation lies at the core. Although previous approaches based on dimensionality reduction followed by density estimation have made fruitful progress, they mainly suffer from decoupled model learning with inconsistent optimization goals and incapability of preserving essential information in the low-dimensional space. In this paper, we present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. Our model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction error for each input data point, which is further fed into a Gaussian Mixture Model (GMM). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model simultaneously in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model. The joint optimization, which well balances autoencoding reconstruction, density estimation of latent representation, and regularization, helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors, avoiding the need of pre-training. Experimental results on several public benchmark datasets show that, DAGMM significantly outperforms state-of-the-art anomaly detection techniques, and achieves up to 14% improvement based on the standard F1 score. | accepted-poster-papers | + Empirically convincing and clearly explained application: a novel deep learning architecture and approach is shown to significantly outperform state-of-the-art in unsupervised anomaly detection.
- No clear theoretical foundation or justification is provided for the approach
- Connection to, and differentiation from, prior work on simultaneously learning a representation and fitting a Gaussian mixture to it would deserve a much more thorough discussion / treatment.
| train | [
"S1f48huxz",
"r1tvocFgf",
"B1aQ8_2ef",
"S11z079mf",
"B1wAzvEQf",
"Sk4EfvVQf",
"rJq9evE7G",
"HyFqWv4Xf",
"BJXb-w4Qz",
"rkkvR84XM",
"S1g4W3x-f"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"1. This is a good paper, makes an interesting algorithmic contribution in the sense of joint clustering-dimension reduction for unsupervised anomaly detection\n2. It demonstrates clear performance improvement via comprehensive comparison with state-of-the-art methods\n3. Is the number of Gaussian Mixtures 'K' a hy... | [
8,
8,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJJLHbb0-",
"iclr_2018_BJJLHbb0-",
"iclr_2018_BJJLHbb0-",
"rJq9evE7G",
"S1g4W3x-f",
"S1f48huxz",
"r1tvocFgf",
"r1tvocFgf",
"r1tvocFgf",
"B1aQ8_2ef",
"iclr_2018_BJJLHbb0-"
] |
iclr_2018_BySRH6CpW | Learning Discrete Weights Using the Local Reparameterization Trick | Recent breakthroughs in computer vision make use of large deep neural networks, utilizing the substantial speedup offered by GPUs. For applications running on limited hardware, however, high precision real-time processing can still be a challenge. One approach to solving this problem is training networks with binary or ternary weights, thus removing the need to calculate multiplications and significantly reducing memory size. In this work, we introduce LR-nets (Local reparameterization networks), a new method for training neural networks with discrete weights using stochastic parameters. We show how a simple modification to the local reparameterization trick, previously used to train Gaussian distributed weights, enables the training of discrete weights. Using the proposed training we test both binary and ternary models on MNIST, CIFAR-10 and ImageNet benchmarks and reach state-of-the-art results on most experiments. | accepted-poster-papers | Well-written paper on a novel application of the local reparametrisation trick to learn networks with discrete weights. The approach achieves state-of-the-art results.
Note: I appreciate that the authors added a comparison to the Gumbel-softmax continuous relaxation approach during the review period, following the suggestion of a reviewer. This additional comparison strengthens the paper. | train | [
"BJHcawFxM",
"SkOjP3Hlf",
"ryZHzH9gz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"This paper proposes training binary and ternary weight distribution networks through the local reparametrization trick and continuous optimization. The argument is that due to the central limit theorem (CLT) the distribution on the neuron pre-activations is approximately Gaussian, with a mean given by the inner pr... | [
6,
7,
6
] | [
4,
3,
3
] | [
"iclr_2018_BySRH6CpW",
"iclr_2018_BySRH6CpW",
"iclr_2018_BySRH6CpW"
] |
iclr_2018_BJ_wN01C- | Deep Rewiring: Training very sparse deep networks | Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. Generic hardware and software implementations of deep learning also run more efficiently for sparse networks. Several methods exist for pruning the connections of a neural network after it was trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to directly train a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are placed where they are most needed for the task, while their total number remains strictly bounded at all times. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior. | accepted-poster-papers | Clearly explained, well motivated and empirically supported algorithm for training deep networks while simultaneously learning their sparse connectivity.
The approach is similar to previous work (in particular Welling et al., Bayesian Learning via Stochastic Gradient Langevin Dynamics, ICML 2011) but is novel in that it satisfies a hard constraint on the network sparsity, which could be an advantage to match neuromorphic hardware limitations. | train | [
"Syx4zM9xM",
"H1aEoGAgG",
"r1UOC9lbf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"In this paper, the authors present an approach to implement deep learning directly on sparsely connected graphs. Previous approaches have focused on transferring trained deep networks to a sparse graph for fast or efficient utilization; using this approach, sparse networks can be trained efficiently online, allowi... | [
8,
5,
6
] | [
4,
5,
4
] | [
"iclr_2018_BJ_wN01C-",
"iclr_2018_BJ_wN01C-",
"iclr_2018_BJ_wN01C-"
] |
iclr_2018_SJQHjzZ0- | Quantitatively Evaluating GANs With Divergences Proposed for Training | Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application.
However, we currently lack quantitative methods for model assessment. Because of this, while many GAN variants are being proposed, we have relatively little understanding of their relative abilities. In this paper, we evaluate the performance of various types of GANs using divergence and distance functions typically used only for training. We observe consistency across the various proposed metrics and, interestingly, the test-time metrics do not favour networks that use the same training-time criterion. We also compare the proposed metrics to human perceptual scores. | accepted-poster-papers | + clearly written and thorough empirical comparison of several metrics/divergences for evaluating GANs, most prominently parametric-critic-based divergences.
- little technical novelty with respect to prior work. As noted by reviewers and an anonymous commentator, using an independent critic for evaluation has been proposed and used in practice before.
+ the contribution of the work thus lies primarily in its well-done and extensive empirical comparisons of multiple metrics and models | train | [
"ryX_FSexG",
"H1uFgwqeM",
"H1C2pZplz",
"HJBlWX6mM",
"SyqLlX6QM",
"S1Gex7a7z",
"Hyexk7TmM",
"S1mjkm67z",
"SJm-JvWfz",
"SJ7anZWlM",
"ry8IOs_yM",
"SkCF1x4kG",
"r1GaH_NAW"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public",
"author",
"public"
] | [
"Through evaluation of current popular GAN variants. \n * useful AIS figure\n * useful example of failure mode of inception scores\n * interesting to see that using a metric based on a model’s distance does not make the model better at that distance\nthe main criticism that can be given to the paper is that the... | [
7,
4,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJQHjzZ0-",
"iclr_2018_SJQHjzZ0-",
"iclr_2018_SJQHjzZ0-",
"iclr_2018_SJQHjzZ0-",
"SJm-JvWfz",
"H1C2pZplz",
"H1uFgwqeM",
"ryX_FSexG",
"iclr_2018_SJQHjzZ0-",
"ry8IOs_yM",
"iclr_2018_SJQHjzZ0-",
"r1GaH_NAW",
"iclr_2018_SJQHjzZ0-"
] |
iclr_2018_BkLhaGZRW | Improving GAN Training via Binarized Representation Entropy (BRE) Regularization | We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads out its model capacity in the right way, the learning signals given to the generator G are more informative and diverse, which helps G to explore better and discover the real data manifold while avoiding large unstable jumps due to the erroneous extrapolation made by D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity, by encouraging the binary activation patterns on selected internal layers of D to have a high joint entropy. Experimental results on both synthetic data and real datasets demonstrate improvements in stability and convergence speed of the GAN training, as well as higher sample quality. The approach also leads to higher classification accuracies in semi-supervised learning.
| accepted-poster-papers | + Original regularizer that encourages discriminator representation entropy is shown to improve GAN training.
+ good supporting empirical validation
- While intuitively reasonable, no compelling theory is given to justify the approach
- The regularizer used in practice is a heap of heuristic approximations (continuous relaxation of a rough approximate measure of the joint entropy of a binarized activation vector)
- The writing and the mathematical exposition could be clearer and more precise | train | [
"B1ssgfcgM",
"SJa7Mu9lf",
"By5aywgZf",
"Hk1j4_p7z",
"HyAkfOTQf",
"Skcy4dTXf",
"r1xcQdaXG",
"Sk8_EdpXz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper proposes a regularizer that encourages a GAN discriminator to focus its capacity in the region around the manifolds of real and generated data points, even when it would be easy to discriminate between these manifolds using only a fraction of its capacity, so that the discriminator provides a more inform... | [
6,
7,
4,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BkLhaGZRW",
"iclr_2018_BkLhaGZRW",
"iclr_2018_BkLhaGZRW",
"Sk8_EdpXz",
"iclr_2018_BkLhaGZRW",
"B1ssgfcgM",
"SJa7Mu9lf",
"By5aywgZf"
] |
iclr_2018_r1NYjfbR- | Generative networks as inverse problems with Scattering transforms | Generative Adversarial Nets (GANs) and Variational Auto-Encoders (VAEs) provide impressive image generations from Gaussian white noise, but the underlying mathematics are not well understood. We compute deep convolutional network generators by inverting a fixed embedding operator. Therefore, they do not need to be optimized with a discriminator or an encoder. The embedding is Lipschitz continuous to deformations, so that generators transform linear interpolations between input white noise vectors into deformations between output images. This embedding is computed with a wavelet Scattering transform. Numerical experiments demonstrate that the resulting Scattering generators have similar properties to GANs or VAEs, without learning a discriminative network or an encoder. | accepted-poster-papers | The paper got mixed scores of 4 (R1), 6 (R3), 8 (R2). R1 initially gave up after a few pages of reading, due to clarity problems, but was much happier after looking over the revised version, and so raised their score to 7. R2, who is knowledgeable about the area, was very positive about the paper, feeling it is a very interesting idea. R3 was also cautiously positive. The authors have absorbed the comments by the reviewers to make significant changes to the paper. The AC feels the idea is interesting, even if the experimental results aren't that compelling, so feels the paper can be accepted.
| train | [
"H1WORsdlG",
"SkJxZ1FeG",
"H1QWqHsgz",
"Hy83g4Gmz",
"S1d6aXzmM",
"ryTfQNMXM",
"SkP1GEGmM",
"SkG-ZEzQG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"After a first manuscript that needed majors edits, the revised version\noffers an interesting GAN approach based the scattering transform.\n\nApproach is well motivated with proper references to the recent literature.\n\nExperiments are not state of the art but clearly demonstrate that the\nproposed approach does ... | [
7,
8,
6,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_r1NYjfbR-",
"iclr_2018_r1NYjfbR-",
"iclr_2018_r1NYjfbR-",
"H1QWqHsgz",
"iclr_2018_r1NYjfbR-",
"H1WORsdlG",
"SkJxZ1FeG",
"Hy83g4Gmz"
] |
iclr_2018_BJGWO9k0Z | Critical Percolation as a Framework to Analyze the Training of Deep Networks | In this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to 128 layers. | accepted-poster-papers | The paper got generally positive scores of 6,7,7. The reviewers found the paper to be novel but hard to understand. The AC feels the paper should be accepted but the authors should revise their paper to take into account the comments from the reviewers to improve clarity. | val | [
"HJ1MEAYxG",
"HJraEkqlz",
"BkojC46xG",
"r13AbIaWG",
"Byo7BITbM",
"HyZXN8abM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The authors are motivated by two problems: Inputting non-Euclidean data (such as graphs) into deep CNNs, and analyzing optimization properties of deep networks. In particular, they look at the problem of maze testing, where, given a grid of black and white pixels, the goal is to answer whether there is a path from... | [
7,
7,
6,
-1,
-1,
-1
] | [
3,
3,
1,
-1,
-1,
-1
] | [
"iclr_2018_BJGWO9k0Z",
"iclr_2018_BJGWO9k0Z",
"iclr_2018_BJGWO9k0Z",
"BkojC46xG",
"HJ1MEAYxG",
"HJraEkqlz"
] |
iclr_2018_HkNGsseC- | On the Expressive Power of Overlapping Architectures of Deep Learning | Expressive efficiency refers to the relation between two architectures A and B, whereby any function realized by B could be replicated by A, but there exist functions realized by A which cannot be replicated by B unless its size grows significantly larger. For example, it is known that deep networks are exponentially efficient with respect to shallow networks, in the sense that a shallow network must grow exponentially large in order to approximate the functions represented by a deep network of polynomial size. In this work, we extend the study of expressive efficiency to the attribute of network connectivity and in particular to the effect of "overlaps" in the convolutional process, i.e., when the stride of the convolution is smaller than its filter size (receptive field).
To theoretically analyze this aspect of network design, we focus on a well-established surrogate for ConvNets called Convolutional Arithmetic Circuits (ConvACs), and then demonstrate empirically that our results hold for standard ConvNets as well. Specifically, our analysis shows that having overlapping local receptive fields, and more broadly denser connectivity, results in an exponential increase in the expressive capacity of neural networks. Moreover, while denser connectivity can increase the expressive capacity, we show that the most common types of modern architectures already exhibit an exponential increase in expressivity, without relying on fully-connected layers. | accepted-poster-papers | The paper received scores of 8 (R1), 6 (R2), 6 (R3). R1's review is brief and optimistic that these results, demonstrated on ConvACs, generalize to real convnets. R2 and R3 feel this might be a problem. R2 advocates weak accept, and given that R1 is keen on the paper, the AC feels it can be accepted.
| train | [
"BJZ4zdslf",
"HkopHcseG",
"BypvOtGZz",
"HyxMluZfz",
"S1rPmFqZM",
"SyfVBDMbG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"The paper studies the expressive power provided by \"overlap\" in convolution layers of DNNs. Instead of ReLU networks with average/max pooling (as is standard in practice), the authors consider linear activations with product pooling. Such networks, which have been known as convolutional arithmetic circuits, ar... | [
6,
8,
6,
-1,
-1,
-1
] | [
4,
3,
4,
-1,
-1,
-1
] | [
"iclr_2018_HkNGsseC-",
"iclr_2018_HkNGsseC-",
"iclr_2018_HkNGsseC-",
"BypvOtGZz",
"HkopHcseG",
"BJZ4zdslf"
] |
iclr_2018_HJ94fqApW | Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers | Model pruning has become a useful technique that improves the computational efficiency of deep learning, making it possible to deploy solutions in resource-limited scenarios. A widely-used practice in relevant work assumes that a smaller-norm parameter or feature plays a less informative role at the inference time. In this paper, we propose a channel pruning technique for accelerating the computations of deep convolutional neural networks (CNNs) that does not critically rely on this assumption. Instead, it focuses on direct simplification of the channel-to-channel computation graph of a CNN without the need of performing a computationally difficult and not-always-useful task of making high-dimensional tensors of CNN structured sparse. Our approach takes two stages: first to adopt an end-to-end stochastic training method that eventually forces the outputs of some channels to be constant, and then to prune those constant channels from the original neural network by adjusting the biases of their impacting layers such that the resulting compact model can be quickly fine-tuned. Our approach is mathematically appealing from an optimization perspective and easy to reproduce. We experimented our approach through several image learning benchmarks and demonstrate its interest- ing aspects and competitive performance. | accepted-poster-papers | The paper received scores either side of the borderline: 6 (R1), 5 (R2), 7 (R3). R1 and R3 felt the idea to be interesting, simple and effective. R2 raised a number of concerns which the rebuttal addressed satisfactorily. Therefore the AC feels the paper can be accepted. | val | [
"BJtJ3c_gG",
"B1rak-5eG",
"B1KcBUqlz",
"r1lVkUJGz",
"Skqs6Bh-G",
"r1ud3r3WM",
"HJklnrnbM",
"ByUWjHhZf",
"ByWHEgg-z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"author",
"author",
"author",
"author",
"public"
] | [
"In this paper, the authors propose a data-dependent channel pruning approach to simplify CNNs with batch-normalizations. The authors view CNNs as a network flow of information and applies sparsity regularization on the batch-normalization scaling parameter \\gamma which is seen as a “gate” to the information flow.... | [
5,
7,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
5,
3,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJ94fqApW",
"iclr_2018_HJ94fqApW",
"iclr_2018_HJ94fqApW",
"ByUWjHhZf",
"BJtJ3c_gG",
"B1rak-5eG",
"B1KcBUqlz",
"ByWHEgg-z",
"iclr_2018_HJ94fqApW"
] |
iclr_2018_SJiHXGWAZ | Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting | Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (3) inherent difficulty of long-term forecasting. To address these challenges, we propose to model the traffic flow as a diffusion process on a directed graph and introduce Diffusion Convolutional Recurrent Neural Network (DCRNN), a deep learning framework for traffic forecasting that incorporates both spatial and temporal dependency in the traffic flow. Specifically, DCRNN captures the spatial dependency using bidirectional random walks on the graph, and the temporal dependency using the encoder-decoder architecture with scheduled sampling. We evaluate the framework on two real-world large-scale road network traffic datasets and observe consistent improvement of 12% - 15% over state-of-the-art baselines. | accepted-poster-papers | The paper received highly diverging scores: 5 (R1), 9 (R2), 4 (R3). Both R1 and R3 complained about the comparisons to related methods. R3 suggested some kNN and GP baselines, while R1 mentioned concurrent work using deepnets for traffic prediction.
R3 is a real expert in the field; R2 and R1 are not.
R2's review is very positive, but vacuous.
The rebuttal seems to counter R1 and R3 well.
It's a close call, but the AC is inclined to accept since it's an interesting application of (graph-based) deepnets. | train | [
"r1zoeeFgf",
"r1pn22FeG",
"H1AlgBcxf",
"S1W9DvMmM",
"ryjAIwfQG",
"Hy2t8DfQz",
"B1IMLDfmf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper proposes to build a graph where the edge weight is defined using the road network distance which is shown to be more realistic than the Euclidean distance. The defined diffusion convolution operation is essentially conducting random walks over the road segment graph. To avoid the expensive matrix operati... | [
5,
4,
9,
-1,
-1,
-1,
-1
] | [
3,
5,
5,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJiHXGWAZ",
"iclr_2018_SJiHXGWAZ",
"iclr_2018_SJiHXGWAZ",
"H1AlgBcxf",
"r1zoeeFgf",
"B1IMLDfmf",
"r1pn22FeG"
] |
iclr_2018_SkHDoG-Cb | Simulated+Unsupervised Learning With Adaptive Data Generation and Bidirectional Mappings | Collecting a large dataset with high quality annotations is expensive and time-consuming. Recently, Shrivastava et al. (2017) propose Simulated+Unsupervised (S+U) learning: It first learns a mapping from synthetic data to real data, translates a large amount of labeled synthetic data to the ones that resemble real data, and then trains a learning model on the translated data. Bousmalis et al. (2017) propose a similar framework that jointly trains a translation mapping and a learning model.
While these algorithms are shown to achieve state-of-the-art performance on various tasks, there may be room for improvement, as they do not fully leverage the flexibility of the data simulation process and consider only the forward (synthetic to real) mapping. Inspired by this limitation, we propose a new S+U learning algorithm, which fully leverages the flexibility of data simulators and bidirectional mappings between synthetic data and real data. We show that our approach achieves improved performance on the gaze estimation task, outperforming (Shrivastava et al., 2017). | accepted-poster-papers | Split opinions on paper: 6 (R1), 3 (R2), 6 (R3). Much of the debate centered on the novelty of the algorithm. R2 felt that the paper was a straight-forward combination of CycleGAN with S+U, while R3 felt it made a significant contribution. The AC has looked at the paper and the reviews and discussion. The topic is very interesting and topical. The experiments are ok, but would be helped a lot by including the real/synth car data currently in appendix B: seeing the method work on natural images is much more compelling. The approach still seems a bit incremental: yes, it's not a straight combination but the extra stuff isn't so profound. The AC is inclined to accept, just because this is an interesting problem. | train | [
"S1uLIj8lG",
"BJ__hY9lz",
"BJ7oBjolf",
"HyXoBh3zM",
"Bk_imn2fz",
"ByisG2nMG",
"Hk-y9aJZz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"* sec.2.2 is about label-preserving translation and many notations are introduced. However, it is not clear what label here refers to, and it does not shown in the notation so far at all. Only until the end of sec.2.2, the function F(.) is introduced and its revelation - Google Search as label function is discusse... | [
6,
6,
3,
-1,
-1,
-1,
-1
] | [
3,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SkHDoG-Cb",
"iclr_2018_SkHDoG-Cb",
"iclr_2018_SkHDoG-Cb",
"iclr_2018_SkHDoG-Cb",
"S1uLIj8lG",
"BJ__hY9lz",
"BJ7oBjolf"
] |
iclr_2018_ryH20GbRW | Relational Neural Expectation Maximization: Unsupervised Discovery of Objects and their Interactions | Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real-world. For example, it can be used to simulate the environment, or to infer the state of parts of the world that are currently unobserved. In order to match real-world conditions this causal knowledge must be learned without access to supervised data. To address this problem we present a novel method that learns to discover objects and model their physical interactions from raw visual images in a purely unsupervised fashion. It incorporates prior knowledge about the compositional nature of human perception to factor interactions between object-pairs and learn efficiently. On videos of bouncing balls we show the superior modelling capabilities of our method compared to other unsupervised neural approaches that do not incorporate such prior knowledge. We demonstrate its ability to handle occlusion and show that it can extrapolate learned knowledge to scenes with different numbers of objects. | accepted-poster-papers | All three reviewers recommend acceptance. The authors did a good job at the rebuttal which swayed the first reviewer to increase the final rating. This is a clear accept. | train | [
"S1OZaPI4z",
"BkApAXalG",
"ByDGdV9ef",
"B1USI22xz",
"Skl0rrTmz",
"HyR2QHKzM",
"B18VfrYMM",
"HkrHQHFMf",
"ryE0frKzM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The rebuttal and revision addressed enough of my concerns for me to increase the score to 8. \nGood work on the additional experiments and the discussion of limitations in the conclusion!",
"Summary:\nThe manuscript extends the Neural Expectation Maximization framework by integrating an interaction function that... | [
-1,
8,
7,
7,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"B18VfrYMM",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"iclr_2018_ryH20GbRW",
"BkApAXalG",
"ByDGdV9ef",
"B1USI22xz"
] |
iclr_2018_HkCsm6lRb | Generative Models of Visually Grounded Imagination | It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way. We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good visual imagination, namely correctness, coverage, and compositionality (the 3 C’s). Finally, we perform a detailed comparison of our method with two existing joint image-attribute VAE methods (the JMVAE method of Suzuki et al., 2017 and the BiVCCA method of Wang et al., 2016) by applying them to two datasets: the MNIST-with-attributes dataset (which we introduce here), and the CelebA dataset (Liu et al., 2015). | accepted-poster-papers | All three reviewers recommend acceptance. Good work, accept | train | [
"BJDxbMvez",
"S1UuHvwgf",
"rky3x-5lG",
"HyzaUOvGz",
"HygQU_DMM",
"B1jeeYPMG",
"HkXT0_Pzf",
"HkvKuGGfG",
"r1W-W7fMM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"public",
"public"
] | [
"The authors propose a generative method that can produce images along a hierarchy of specificity, i.e. both when all relevant attributes are specified, and when some are left undefined, creating a more abstract generation task. \n\nPros:\n+ The results demonstrating the method's ability to generate results for (1)... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkCsm6lRb",
"iclr_2018_HkCsm6lRb",
"iclr_2018_HkCsm6lRb",
"rky3x-5lG",
"iclr_2018_HkCsm6lRb",
"BJDxbMvez",
"S1UuHvwgf",
"iclr_2018_HkCsm6lRb",
"iclr_2018_HkCsm6lRb"
] |
iclr_2018_r1wEFyWCW | Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions | Deep autoregressive models have shown state-of-the-art performance in density estimation for natural images on large-scale datasets such as ImageNet. However, such models require many thousands of gradient-based weight updates and unique image examples for training. Ideally, the models would rapidly learn visual concepts from only a handful of examples, similar to the manner in which humans learns across many vision tasks. In this paper, we show how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation. Our proposed modifications to PixelCNN result in state-of-the art few-shot density estimation on the Omniglot dataset. Furthermore, we visualize the learned attention policy and find that it learns intuitive algorithms for simple tasks such as image mirroring on ImageNet and handwriting on Omniglot without supervision. Finally, we extend the model to natural images and demonstrate few-shot image generation on the Stanford Online Products dataset. | accepted-poster-papers | This paper incorporates attention in the PixelCNN model and shows how to use MAML to enable few-shot density estimation. The paper received mixed reviews (7,6,4). After rebuttal the first reviewer updated the score to accept. The AC shares the concern of novelty with the first reviewer. However, it is also not trivial to incorporate attention and MAML in PixelCNN, thus the AC decided to accept the paper. | train | [
"rkHhxN2lG",
"HyKWS3KxM",
"rJ3vXv5xf",
"ryfVtml7f",
"ByMcMEGJG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public"
] | [
"This paper focuses on the density estimation when the amount of data available for training is low. The main idea is that a meta-learning model must be learnt, which learns to generate novel density distributions by learn to adapt a basic model on few new samples. The paper presents two independent method.\n\nThe ... | [
6,
7,
6,
-1,
-1
] | [
5,
4,
4,
-1,
-1
] | [
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW",
"iclr_2018_r1wEFyWCW"
] |
iclr_2018_rknt2Be0- | Compositional Obverter Communication Learning from Raw Visual Input | One of the distinguishing aspects of human language is its compositionality, which allows us to describe complex environments with limited vocabulary. Previously, it has been shown that neural network agents can learn to communicate in a highly structured, possibly compositional language based on disentangled input (e.g. hand- engineered features). Humans, however, do not learn to communicate based on well-summarized features. In this work, we train neural agents to simultaneously develop visual perception from raw image pixels, and learn to communicate with a sequence of discrete symbols. The agents play an image description game where the image contains factors such as colors and shapes. We train the agents using the obverter technique where an agent introspects to generate messages that maximize its own understanding. Through qualitative analysis, visualization and a zero-shot test, we show that the agents can develop, out of raw image pixels, a language with compositional properties, given a proper pressure from the environment. | accepted-poster-papers | This paper investigates emergence of language from raw pixels in a two-agent setting. The paper received divergent reviews, 3,6,9. Two ACs discussed this paper, due to a strong opinion from both positive and negative reviewers. The ACs agree that the score "9" is too high: the notion of compositionality is used in many places in the paper (and even in the title), but never explicitly defined. Furthermore, the zero-shot evaluation is somewhat disappointing. If the grammar extracted by the authors in sec. 3.2 did indeed indicate the compositional nature of the emergent communication, the authors should have shown that they could in fact build a message themselves, give it to the listener with an image and ask it to answer. On the other hand, "3" is also too low of a score. 
In this renaissance of emergent communication protocol with multi-agent deep learning systems, one missing piece has been an effort toward seriously analyzing the actual properties of the emergent communication protocol. This is one of the few papers that have tackled this aspect more carefully. The ACs decided to accept the paper. However, the authors should take the reviews and comments seriously when revising the paper for the camera ready. | val | [
"H1T0ZOBEG",
"rkKuB71xM",
"BJsMQVvxM",
"B11XUD_gM",
"B1TvQWpmz",
"rkV7ieaXf",
"BkS1LAiQf",
"B101HAiQG",
"SkTBrAjXz",
"rJcRXRjQf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Dear authors,\n\nThank you very much for the detailed response. I've spent a while thinking about this, and my score stays the same. 3 points:\n\n1. It is claimed that \"the messages in the gray boxes of Figure 3 do actually follow the patterns nicely\". I disagree. The central problem is that the paper provides n... | [
-1,
9,
3,
6,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"B101HAiQG",
"iclr_2018_rknt2Be0-",
"iclr_2018_rknt2Be0-",
"iclr_2018_rknt2Be0-",
"rkV7ieaXf",
"BkS1LAiQf",
"rkKuB71xM",
"BJsMQVvxM",
"BJsMQVvxM",
"B11XUD_gM"
] |
iclr_2018_rkN2Il-RZ | SCAN: Learning Hierarchical Compositional Visual Concepts | The seemingly infinite diversity of the natural world arises from a relatively small set of coherent rules, such as the laws of physics or chemistry. We conjecture that these rules give rise to regularities that can be discovered through primarily unsupervised experiences and represented as abstract concepts. If such representations are compositional and hierarchical, they can be recombined into an exponentially large set of new concepts. This paper describes SCAN (Symbol-Concept Association Network), a new framework for learning such abstractions in the visual domain. SCAN learns concepts through fast symbol association, grounding them in disentangled visual primitives that are discovered in an unsupervised manner. Unlike state of the art multimodal generative model baselines, our approach requires very few pairings between symbols and images and makes no assumptions about the form of symbol representations. Once trained, SCAN is capable of multimodal bi-directional inference, generating a diverse set of image samples from symbolic descriptions and vice versa. It also allows for traversal and manipulation of the implicit hierarchy of visual concepts through symbolic instructions and learnt logical recombination operations. Such manipulations enable SCAN to break away from its training data distribution and imagine novel visual concepts through symbolically instructed recombination of previously learnt concepts. | accepted-poster-papers | This paper initially received borderline reviews. The main concern raised by all reviewers was a limited experimental evaluation (synthetic only). In rebuttal, the authors provided new results on the CelebA dataset, which turned the first reviewer positive. The AC agrees there is merit to this approach, and generally appreciates the idea of compositional concept learning. | train | [
"Bkyw7hwrf",
"ByLBsIIBM",
"BkeoFCjgG",
"rkzoyZW-M",
"H1vrGEM-G",
"S1hTSkuXz",
"SkPdrTcMG",
"Hk-qNp5fz",
"r1m0QT9zz"
] | [
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Dear Reviewer,\n\nThank you for taking the time to comment on the updated version of our paper. You suggest that you do not find our additional experiments convincing enough because we do not train recombination operators on the celebA dataset. However, in our understanding your original review did not ask for the... | [
-1,
-1,
5,
6,
7,
-1,
-1,
-1,
-1
] | [
-1,
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"ByLBsIIBM",
"r1m0QT9zz",
"iclr_2018_rkN2Il-RZ",
"iclr_2018_rkN2Il-RZ",
"iclr_2018_rkN2Il-RZ",
"SkPdrTcMG",
"H1vrGEM-G",
"rkzoyZW-M",
"BkeoFCjgG"
] |
iclr_2018_HJCXZQbAZ | Hierarchical Density Order Embeddings | By representing words with probability densities rather than point vectors, proba- bilistic word embeddings can capture rich and interpretable semantic information and uncertainty (Vilnis & McCallum, 2014; Athiwaratkun & Wilson, 2017). The uncertainty information can be particularly meaningful in capturing entailment relationships – whereby general words such as “entity” correspond to broad distributions that encompass more specific words such as “animal” or “instrument”. We introduce density order embeddings, which learn hierarchical representations through encapsulation of probability distributions. In particular, we propose simple yet effective loss functions and distance metrics, as well as graph-based schemes to select negative samples to better learn hierarchical probabilistic representations. Our approach provides state-of-the-art performance on the WordNet hypernym relationship prediction task and the challenging HyperLex lexical entailment dataset – while retaining a rich and interpretable probabilistic representation. | accepted-poster-papers | This paper marries the idea of Gaussian word embeddings and order embeddings, by imposing order among probabilistic word embeddings. Two reviewers vote for acceptance, and one finds the novelty of the paper incremental. The reviewer stuck to this view even after rebuttal, however, acknowledges the improvement in results. The AC read the paper, and agrees that the novelty is somewhat limited, however, the idea is still quite interesting, and the results are promising. The AC was missing more experiments on other tasks originally presented by Vendrov et al. Overall, this paper is slightly over the bar. | val | [
"HyGYccUxz",
"SyRynl9eM",
"rk63KZixz",
"BJMKZCoXM",
"BJ73RqEXf",
"ryk5RqN7G",
"SJIsJCXQG",
"ryeRnpmmz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"The paper presents a method for hierarchical object embedding by Gaussian densities for lexical entailment tasks.Each word is represented by a diagonal Gaussian and the KL divergence is used as a directional distance measure. if D(f||g) < gamma then the concept represented by f entails the concept represented by ... | [
4,
6,
8,
-1,
-1,
-1,
-1,
-1
] | [
3,
4,
5,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJCXZQbAZ",
"iclr_2018_HJCXZQbAZ",
"iclr_2018_HJCXZQbAZ",
"iclr_2018_HJCXZQbAZ",
"ryk5RqN7G",
"HyGYccUxz",
"SyRynl9eM",
"rk63KZixz"
] |
iclr_2018_BkN_r2lR- | Identifying Analogies Across Domains | Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity many times does not suffice for identifying the matching sample from the other domain. In this paper, we tackle this very task of finding exact analogies between datasets i.e. for every image from domain A find an analogous image in domain B. We present a matching-by-synthesis approach: AN-GAN, and show that it outperforms current techniques. We further show that the cross-domain mapping task can be broken into two parts: domain alignment and learning the mapping function. The tasks can be iteratively solved, and as the alignment is improved, the unsupervised translation function reaches quality comparable to full supervision. | accepted-poster-papers | This paper builds on top of Cycle GAN ideas where the main idea is to jointly optimize the domain-level translation function with an instance-level matching objective. Initially the paper received two negative reviews (4,5) and a positive (7). After the rebuttal and several back and forth between the first reviewer and the authors, the reviewer was finally swayed by the new experiments. While not officially changing the score, the reviewer recommended acceptance. The AC agrees that the paper is interesting and of value to the ICLR audience. | train | [
"HyJww3cEz",
"BkyDnj5VG",
"ByECSWv4z",
"SkHatuolz",
"HJ08-bCef",
"ryhcYB-bG",
"rJ6aA85QG",
"Hyj4tk1GM",
"Sk0k9JkfG",
"rklEiy1Mz",
"Byqw2JyGf"
] | [
"official_reviewer",
"author",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author"
] | [
"I thank the authors for thoroughly responding to my concerns. The 3D alignment experiment looks great, and indeed I did miss the comment about the cell bio experiment. That experiment is also very compelling.\n\nI think with these two experiments added to the revision, along with all the other improvements, the pa... | [
-1,
-1,
-1,
7,
5,
4,
-1,
-1,
-1,
-1,
-1
] | [
-1,
-1,
-1,
4,
4,
3,
-1,
-1,
-1,
-1,
-1
] | [
"BkyDnj5VG",
"ByECSWv4z",
"Byqw2JyGf",
"iclr_2018_BkN_r2lR-",
"iclr_2018_BkN_r2lR-",
"iclr_2018_BkN_r2lR-",
"iclr_2018_BkN_r2lR-",
"SkHatuolz",
"HJ08-bCef",
"ryhcYB-bG",
"ryhcYB-bG"
] |
iclr_2018_B17JTOe0- | Emergence of grid-like representations by training recurrent neural networks to perform spatial localization | Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.
| accepted-poster-papers | This work shows how activation patterns of units reminiscent of grid and border cells emerge in RNNs trained on navigation tasks. While the ICLR audience is not mainly focused on neuroscience, the findings of the paper are quite intriguing, and grid cells are sufficiently well-known and "mainstream" that this may interest many people. | train | [
"SkDHZUXlG",
"rk3jvePlf",
"HyMQMl9eG",
"By7T3PpmM",
"HJ8d9D6Xz",
"ByubuDpmM",
"rktarP6mG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The authors train an RNN to perform deduced reckoning (ded reckoning) for spatial navigation, and then study the responses of the model neurons in the RNN. They find many properties reminiscent of neurons in the mammalian entorhinal cortex (EC): grid cells, border cells, etc. When regularization of the network is ... | [
8,
9,
8,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_B17JTOe0-",
"iclr_2018_B17JTOe0-",
"iclr_2018_B17JTOe0-",
"iclr_2018_B17JTOe0-",
"SkDHZUXlG",
"rk3jvePlf",
"HyMQMl9eG"
] |
iclr_2018_HJhIM0xAW | Learning a neural response metric for retinal prosthesis | Retinal prostheses for treating incurable blindness are designed to electrically stimulate surviving retinal neurons, causing them to send artificial visual signals to the brain. However, electrical stimulation generally cannot precisely reproduce normal patterns of neural activity in the retina. Therefore, an electrical stimulus must be selected that produces a neural response as close as possible to the desired response. This requires a technique for computing a distance between the desired response and the achievable response that is meaningful in terms of the visual signal being conveyed. Here we propose a method to learn such a metric on neural responses, directly from recorded light responses of a population of retinal ganglion cells (RGCs) in the primate retina. The learned metric produces a measure of similarity of RGC population responses that accurately reflects the similarity of the visual input. Using data from electrical stimulation experiments, we demonstrate that this metric may improve the performance of a prosthesis. | accepted-poster-papers | This work shows interesting potential applications of known machine learning techniques to the practical problem of how to devise a retina prosthesis that is the most perceptually useful. The paper suffers from a few methodological problems pointed out by the reviewers (e.g., not using the more powerful neural network encoding in the subsequent experiments of the paper), but is still interesting and inspiring in its current state. | val | [
"HyzRKw7xf",
"S1AQa7uxz",
"B1OVwz9ez"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer"
] | [
"The authors develop new spike train distance metrics that cluster together responses to the same stimulus, and push responses to different stimuli away from each other. Two such metrics are discussed: neural networks, and quadratic metrics. They then show that these metrics can be used to classify neural responses... | [
5,
6,
7
] | [
4,
3,
4
] | [
"iclr_2018_HJhIM0xAW",
"iclr_2018_HJhIM0xAW",
"iclr_2018_HJhIM0xAW"
] |
iclr_2018_BJj6qGbRW | Few-Shot Learning with Graph Neural Networks | We propose to study the problem of few-shot learning with the prism of inference on a partially observed graphical model, constructed from a collection of input images whose label can be either observed or not. By assimilating generic message-passing inference algorithms with their neural-network counterparts, we define a graph neural network architecture that generalizes several of the recently proposed few-shot learning models. Besides providing improved numerical performance, our framework is easily extended to variants of few-shot learning, such as semi-supervised or active learning, demonstrating the ability of graph-based models to operate well on ‘relational’ tasks. | accepted-poster-papers | All reviewers agree that the proposed method is novel and experiments do a good job in establishing its value for few-shot learning. Most the concerns raised by the reviewers on experimental protocols have been addressed in the author response and revised version. | train | [
"BJIp_k0xM",
"r1_szu5xM",
"By7ixJ9eG",
"SkzW_xPbG",
"HJ4l-figf",
"r1XiADHmz",
"HkmKSsqzz",
"ByG5VLHQG",
"HkH8YlDZz",
"Hy1Xgqbfz",
"HyGQQPbMM",
"r1CZZbezf",
"Syc4dgwWz",
"SJeQvN3eG",
"BkWNYcseM",
"rJ5B_ucef"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"public",
"author",
"author",
"public",
"author",
"author",
"author",
"public",
"author",
"public",
"public",
"public"
] | [
"This paper proposes to use graph neural networks for the purpose of few-shot learning, as well as semi-supervised learning and active learning. The paper first relies on convolutional neural networks to extract image features. Then, these image features are organized in a fully connected graph. Then, this graph is... | [
7,
7,
7,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJj6qGbRW",
"iclr_2018_BJj6qGbRW",
"iclr_2018_BJj6qGbRW",
"r1_szu5xM",
"rJ5B_ucef",
"ByG5VLHQG",
"By7ixJ9eG",
"HkmKSsqzz",
"BJIp_k0xM",
"HyGQQPbMM",
"r1CZZbezf",
"iclr_2018_BJj6qGbRW",
"r1_szu5xM",
"BkWNYcseM",
"HJ4l-figf",
"iclr_2018_BJj6qGbRW"
] |
iclr_2018_S1nQvfgA- | Semantically Decomposing the Latent Spaces of Generative Adversarial Networks | We propose a new algorithm for training generative adversarial networks to jointly learn latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). In practice, this means that by fixing the identity portion of latent codes, we can generate diverse images of the same subject, and by fixing the observation portion we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce images that are both photorealistic, distinct, and appear to depict the same person. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to accommodate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm’s ability to generate convincing, identity-matched photographs. | accepted-poster-papers | The paper proposes a GAN based approach for disentangling identity (or class information) from style. The supervision needed is the identity label for each image. Overall, the reviewers agree that the paper makes a novel contribution along the line of work on disentangling 'style' from 'content'. | train | [
"SkIH8vIef",
"ryEAzYOlM",
"BkAls-Kgf",
"H19BT53-f",
"SyGahq2WG",
"rkqOhq2bM",
"rJ7G3q3Zz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"Quality\nThe paper is well written and the model is simple and clearly explained. The idea for disentangling identity from other factors of variation using identity-matched image pairs is quite simple, but the experimental results on faces and shoes are impressive.\n\nClarity\nThe model and its training objective ... | [
6,
6,
7,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_S1nQvfgA-",
"iclr_2018_S1nQvfgA-",
"iclr_2018_S1nQvfgA-",
"ryEAzYOlM",
"SkIH8vIef",
"BkAls-Kgf",
"iclr_2018_S1nQvfgA-"
] |
iclr_2018_By-7dz-AZ | A Framework for the Quantitative Evaluation of Disentangled Representations | Recent AI research has emphasised the importance of learning disentangled representations of the explanatory factors behind data. Despite the growing interest in models which can learn such representations, visual inspection remains the standard evaluation metric. While various desiderata have been implied in recent definitions, it is currently unclear what exactly makes one disentangled representation better than another. In this work we propose a framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available. Three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis. To illustrate the appropriateness of the framework, we employ it to compare quantitatively the representations learned by recent state-of-the-art models. | accepted-poster-papers | The paper proposes evaluation metrics for quantifying the quality of disentangled representations. There is consensus among reviewers that the paper makes a useful contribution towards this end. Authors have addressed most of reviewers' concerns in their response. | train | [
"ByfkoCtlf",
"rk7pIjceG",
"H1YPgFhlf",
"rJnsAbTQG",
"S1-fkGaXf",
"rk4ik297f",
"H1PQy257z"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"****\nI acknowledge the author's comments and improve my score to 7.\n****\n\nSummary:\nThe authors propose an experimental framework and metrics for the quantitative evaluation of disentangling representations.\nThe basic idea is to use datasets with known factors of variation, z, and measure how well in an infor... | [
7,
6,
6,
-1,
-1,
-1,
-1
] | [
5,
5,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_By-7dz-AZ",
"iclr_2018_By-7dz-AZ",
"iclr_2018_By-7dz-AZ",
"rk7pIjceG",
"ByfkoCtlf",
"H1YPgFhlf",
"H1YPgFhlf"
] |
iclr_2018_HJcSzz-CZ | Meta-Learning for Semi-Supervised Few-Shot Classification | In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would. | accepted-poster-papers | The paper extends the earlier work on Prototypical networks to semi-supervised setting. Reviewers largely agree that the paper is well-written. 
There are some concerns about the incremental nature of the paper with respect to novelty, but in light of the reported empirical results, which show clear improvement over earlier work, and given the importance of the topic, I recommend acceptance. | train | [
"rJzcaGvgf",
"Hyx7bEPez",
"SkW9BQ9lG",
"Hyrfr91fG",
"SJ7s7qkzG",
"Hy4CMckMG",
"BkmDZ-ZZG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"This paper is an extension of the “prototypical network” which will be published in NIPS 2017. The classical few-shot learning has been limited to using the unlabeled data, while this paper considers employing the unlabeled examples available to help train each episode. The paper solves a new semi-supervised situa... | [
6,
6,
6,
-1,
-1,
-1,
-1
] | [
5,
4,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HJcSzz-CZ",
"iclr_2018_HJcSzz-CZ",
"iclr_2018_HJcSzz-CZ",
"rJzcaGvgf",
"Hyx7bEPez",
"SkW9BQ9lG",
"iclr_2018_HJcSzz-CZ"
] |
iclr_2018_H1q-TM-AW | A DIRT-T Approach to Unsupervised Domain Adaptation | Domain adaptation refers to the problem of leveraging labeled data in a source domain to learn an accurate model in a target domain where labels are scarce or unavailable. A recent approach for finding a common representation of the two domains is via domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: 1) if the feature extraction function has high-capacity, then feature distribution matching is a weak constraint, 2) in non-conservative domain adaptation (where no single classifier can perform well in both the source and target domains), training the model to do well on the source domain hurts performance on the target domain. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: 1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes the violation the cluster assumption; 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improve the state-of-the-art performance on the digit, traffic sign, and Wi-Fi recognition domain adaptation benchmarks. | accepted-poster-papers | Well motivated and well written, with extensive results. The paper also received positive comments from all reviewers. The AC recommends that the paper be accepted. | train | [
"rknCb19ef",
"Syaz3vINf",
"Hk19swKeM",
"BkhjhvQ-G",
"rJMJD0n7z",
"ryETB02XG",
"SknfSAnQf",
"S17uEA2QG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"This paper presents two complementary models for unsupervised domain adaptation (classification task): 1) the Virtual Adversarial Domain Adaptation (VADA) and 2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T). The authors make use of the so-called cluster assumption, i.e., decision bou... | [
8,
-1,
7,
7,
-1,
-1,
-1,
-1
] | [
4,
-1,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1q-TM-AW",
"rJMJD0n7z",
"iclr_2018_H1q-TM-AW",
"iclr_2018_H1q-TM-AW",
"Hk19swKeM",
"BkhjhvQ-G",
"rknCb19ef",
"iclr_2018_H1q-TM-AW"
] |
iclr_2018_r1Dx7fbCW | Generalizing Across Domains via Cross-Gradient Training | We present CROSSGRAD , a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD jointly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that
(1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and
(2) data augmentation is a more stable and accurate method than domain adversarial training. | accepted-poster-papers | Well motivated and well received by all of the expert reviewers. The AC recommends that the paper be accepted. | test | [
"SkBWbEhVz",
"rkJNQW5gM",
"Syaaxl0lf",
"SJbAMFl-z",
"rkDrknW-G",
"HyQ8rr6Xf",
"rJ13AnxXG",
"HyalvTemf",
"S1CwzpemG",
"SkE4Ang7M",
"SkHQ6TqWG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The rebuttal addresses my questions. The authors are recommended to explicitly use \"domain generalization\" in the paper and/or the title to make the language consistent with the literature. ",
"This paper proposed a domain generalization approach by domain-dependent data augmentation. The augmentation is guide... | [
-1,
7,
7,
8,
7,
-1,
-1,
-1,
-1,
-1,
-1
] | [
-1,
5,
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"rkJNQW5gM",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"iclr_2018_r1Dx7fbCW",
"Syaaxl0lf",
"rkDrknW-G",
"rkJNQW5gM",
"SJbAMFl-z",
"rkJNQW5gM"
] |
iclr_2018_ByRWCqvT- | Learning to cluster in order to transfer across domains and tasks | This paper introduces a novel method to perform transfer learning across domains and tasks, formulating it as a problem of learning to cluster. The key insight is that, in addition to features, we can transfer similarity information and this is sufficient to learn a similarity function and clustering network to perform both domain adaptation and cross-task transfer learning. We begin by reducing categorical information to pairwise constraints, which only considers whether two instances belong to the same class or not (pairwise semantic similarity). This similarity is category-agnostic and can be learned from data in the source domain using a similarity network. We then present two novel approaches for performing transfer learning using this similarity function. First, for unsupervised domain adaptation, we design a new loss function to regularize classification with a constrained clustering loss, hence learning a clustering network with the transferred similarity metric generating the training inputs. Second, for cross-task learning (i.e., unsupervised clustering with unseen categories), we propose a framework to reconstruct and estimate the number of semantic clusters, again using the clustering network. Since the similarity network is noisy, the key is to use a robust clustering algorithm, and we show that our formulation is more robust than the alternative constrained and unconstrained clustering approaches. Using this method, we first show state of the art results for the challenging cross-task problem, applied on Omniglot and ImageNet. Our results show that we can reconstruct semantic clusters with high accuracy. We then evaluate the performance of cross-domain transfer using images from the Office-31 and SVHN-MNIST tasks and present top accuracy on both datasets. Our approach doesn't explicitly deal with domain discrepancy. 
If combined with a domain adaptation loss, it shows further improvement. | accepted-poster-papers | Pros
-- A novel formulation for cross-task and cross-domain transfer learning.
-- Extensive evaluations.
Cons
-- Presentation a bit confusing, please improve.
The paper received positive reviews. However, the reviewers pointed out some issues with the presentation and flow of the paper. Even though the revised version has improved, the AC feels that it can be improved further. For example, as pointed out by the reviewers, different parts of the model are trained using different losses and/or are pre-trained; it would be worth clarifying that. It might help if the authors include a pseudocode / algorithm block in the final version of the paper. | train | [
"BJbSJYcgG",
"Byum0OYlG",
"BJZ2B0agf",
"B16XOpsXf",
"Sk5-cTImG",
"B1D0OaIQz",
"B1oBwTIXM",
"BJkB-Sczf",
"ryA6vo2gf",
"HktVdZdlG",
"rkVJuH4ez"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"public",
"author",
"public"
] | [
"The authors propose a method for performing transfer learning and domain adaptation via a clustering approach. The primary contribution is the introduction of a Learnable Clustering Objective (LCO) that is trained on an auxiliary set of labeled data to correctly identify whether pairs of data belong to the same cl... | [
7,
5,
9,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
5,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ByRWCqvT-",
"iclr_2018_ByRWCqvT-",
"iclr_2018_ByRWCqvT-",
"iclr_2018_ByRWCqvT-",
"Byum0OYlG",
"BJbSJYcgG",
"BJZ2B0agf",
"ryA6vo2gf",
"HktVdZdlG",
"rkVJuH4ez",
"iclr_2018_ByRWCqvT-"
] |
iclr_2018_H1T2hmZAb | Deep Complex Networks | At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggests that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks. More precisely, we rely on complex convolutions and present algorithms for complex batch-normalization, complex weight initialization strategies for complex-valued neural nets and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset and on Speech spectrum prediction using TIMIT. We achieve state-of-the-art performance on these audio-related tasks. | accepted-poster-papers | The paper received mostly positive comments from experts. To summarize:
Pros:
-- The paper provides complex counterparts for typical architectures / optimization strategies used by real valued networks.
Cons:
-- Although the authors include plots explaining how nonlinearities transform phase, intuition about how phase gets processed can be improved.
-- Improving evaluations: Wisdom et al. compute the log magnitude; real-valued networks may not be suited to computing real/complex numbers with a large dynamic range, like complex spectra. So please compare performance by estimating the magnitude as in Wisdom et al.
-- Please add computational cost, in terms of the number of multiplies and adds, to the final version of the paper.
I am recommending that the paper be accepted based on these reviews.
| train | [
"r1iYihLEf",
"SyJZuXjlG",
"rJH_dHjeG",
"BJ8VRRhgM",
"H1rRov6mz",
"HJUShwpQf",
"HJC5jDTmG",
"rJkJTEcXG"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"public",
"public",
"public",
"public"
] | [
"Unfortunately I'm not familiar with state of the art in music transcription.\n\nFrom description it sounds that test set is quite small (3 melodies). For a small test set, various hyper-parameters such as model architecture, learning rate schedule and choice of optimization algorithm are expected to have a strong ... | [
-1,
7,
8,
4,
-1,
-1,
-1,
-1
] | [
-1,
4,
4,
4,
-1,
-1,
-1,
-1
] | [
"HJC5jDTmG",
"iclr_2018_H1T2hmZAb",
"iclr_2018_H1T2hmZAb",
"iclr_2018_H1T2hmZAb",
"rJH_dHjeG",
"SyJZuXjlG",
"BJ8VRRhgM",
"iclr_2018_H1T2hmZAb"
] |
iclr_2018_HkwBEMWCZ | Skip Connections Eliminate Singularities | Skip connections made the training of very deep networks possible and have become an indispensable component in a variety of neural architectures. A completely satisfactory explanation for their success remains elusive. Here, we present a novel explanation for the benefits of skip connections in training very deep networks. The difficulty of training deep networks is partly due to the singularities caused by the non-identifiability of the model. Several such singularities have been identified in previous works: (i) overlap singularities caused by the permutation symmetry of nodes in a given layer, (ii) elimination singularities corresponding to the elimination, i.e. consistent deactivation, of nodes, (iii) singularities generated by the linear dependence of the nodes. These singularities cause degenerate manifolds in the loss landscape that slow down learning. We argue that skip connections eliminate these singularities by breaking the permutation symmetry of nodes, by reducing the possibility of node elimination and by making the nodes less linearly dependent. Moreover, for typical initializations, skip connections move the network away from the "ghosts" of these singularities and sculpt the landscape around them to alleviate the learning slow-down. These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets. | accepted-poster-papers | pros:
* novel explanation: skip connections <--> singularities
* thorough analysis
* significant topic in understanding deep nets
cons:
* more rigorous theoretical analysis would be better
overall, the committee feels this paper would be interesting to have at ICLR.
| train | [
"rJkUMYQgz",
"SJWa0g9xM",
"HJiCEsseG",
"rytZt1jXM",
"B1W8I1jmf",
"B11hBJsXM",
"H1JlpxZZf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"public"
] | [
"The authors show that two types of singularities impede learning in deep neural networks: elimination singularities (where a unit is effectively shut off by a loss of input or output weights, or by an overly-strong negative bias), and overlap singularities, where two or more units have very similar input or output... | [
8,
8,
6,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
-1,
-1,
-1
] | [
"iclr_2018_HkwBEMWCZ",
"iclr_2018_HkwBEMWCZ",
"iclr_2018_HkwBEMWCZ",
"HJiCEsseG",
"iclr_2018_HkwBEMWCZ",
"H1JlpxZZf",
"iclr_2018_HkwBEMWCZ"
] |
iclr_2018_H1cWzoxA- | Bi-Directional Block Self-Attention for Fast and Memory-Efficient Sequence Modeling | Recurrent neural networks (RNN), convolutional neural networks (CNN) and self-attention networks (SAN) are commonly used to produce context-aware representations. RNN can capture long-range dependency but is hard to parallelize and not time-efficient. CNN focuses on local dependency but does not perform well on some tasks. SAN can model both such dependencies via highly parallelizable computation, but memory requirement grows rapidly in line with sequence length. In this paper, we propose a model, called "bi-directional block self-attention network (Bi-BloSAN)", for RNN/CNN-free sequence encoding. It requires as little memory as RNN but with all the merits of SAN. Bi-BloSAN splits the entire sequence into blocks, and applies an intra-block SAN to each block for modeling local context, then applies an inter-block SAN to the outputs for all blocks to capture long-range dependency. Thus, each SAN only needs to process a short sequence, and only a small amount of memory is required. Additionally, we use feature-level attention to handle the variation of contexts around the same word, and use forward/backward masks to encode temporal order information. On nine benchmark datasets for different NLP tasks, Bi-BloSAN achieves or improves upon state-of-the-art accuracy, and shows better efficiency-memory trade-off than existing RNN/CNN/SAN. | accepted-poster-papers | The proposed Bi-BloSAN is a two-level block SAN, which has both parallelization efficiency and memory efficiency. The study is thoroughly conducted and well presented. | train | [
"SJz6VRFlG",
"rkcETx9lf",
"ryOYfeaef",
"BJjyKg-mM",
"SyboUpHzz",
"rkzLzZpbM",
"ryi_ebTWM",
"r1Fr-WT-f",
"rk9hKgTZz",
"S1Y3_lTZM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author"
] | [
"Pros: \nThe paper proposes a “bi-directional block self-attention network (Bi-BloSAN)” for sequence encoding, which inherits the advantages of multi-head (Vaswani et al., 2017) and DiSAN (Shen et al., 2017) network but is claimed to be more memory-efficient. The paper is written clearly and is easy to follow. The ... | [
6,
9,
6,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
4,
4,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_H1cWzoxA-",
"iclr_2018_H1cWzoxA-",
"iclr_2018_H1cWzoxA-",
"iclr_2018_H1cWzoxA-",
"SJz6VRFlG",
"iclr_2018_H1cWzoxA-",
"SJz6VRFlG",
"SJz6VRFlG",
"ryOYfeaef",
"rkcETx9lf"
] |
iclr_2018_ry8dvM-R- | Routing Networks: Adaptive Selection of Non-Linear Functions for Multi-Task Learning | Multi-task learning (MTL) with neural networks leverages commonalities in tasks to improve performance, but often suffers from task interference which reduces the benefits of transfer. To address this issue we introduce the routing network paradigm, a novel neural network and training algorithm. A routing network is a kind of self-organizing neural network consisting of two components: a router and a set of one or more function blocks. A function block may be any neural network – for example a fully-connected or a convolutional layer. Given an input the router makes a routing decision, choosing a function block to apply and passing the output back to the router recursively, terminating when a fixed recursion depth is reached. In this way the routing network dynamically composes different function blocks for each input. We employ a collaborative multi-agent reinforcement learning (MARL) approach to jointly train the router and function blocks. We evaluate our model against cross-stitch networks and shared-layer baselines on multi-task settings of the MNIST, mini-imagenet, and CIFAR-100 datasets. Our experiments demonstrate a significant improvement in accuracy, with sharper convergence. In addition, routing networks have nearly constant per-task training cost while cross-stitch networks scale linearly with the number of tasks. On CIFAR100 (20 tasks) we obtain cross-stitch performance levels with an 85% average reduction in training time.
| accepted-poster-papers | The proposed routing networks, which use RL to automatically learn the optimal network architecture, are very interesting. Solid experimental justification and comparisons. The authors also addressed reviewers' concerns on presentation clarity in revisions. | train | [
"HyNnyzceG",
"r1AHkVdgf",
"Hk65p-5lf",
"SJyFVJhXf",
"SJkOLCs7G",
"HyAwoOF7M",
"HJ2i7kjzf",
"BJIwfksMG",
"rJgSZyozG",
"ByzMZ1ozM",
"BkHpeJoGf",
"ry3NDBHGf"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"author",
"public"
] | [
"The paper introduces a routing network for multi-task learning. The routing network consists of a router and a set of function blocks. Router makes a routing decision by either passing the input to a function block or back to the router. This network paradigm is tested on multi-task settings of MNIST, mini-imagene... | [
7,
6,
8,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
3,
3,
4,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1
] | [
"iclr_2018_ry8dvM-R-",
"iclr_2018_ry8dvM-R-",
"iclr_2018_ry8dvM-R-",
"r1AHkVdgf",
"r1AHkVdgf",
"ry3NDBHGf",
"BJIwfksMG",
"r1AHkVdgf",
"Hk65p-5lf",
"HyNnyzceG",
"iclr_2018_ry8dvM-R-",
"iclr_2018_ry8dvM-R-"
] |
iclr_2018_rkhlb8lCZ | Wavelet Pooling for Convolutional Neural Networks | Convolutional Neural Networks continuously advance the progress of 2D and 3D image and object classification. The steadfast usage of this algorithm requires constant evaluation and upgrading of foundational concepts to maintain progress. Network regularization techniques typically focus on convolutional layer operations, while leaving pooling layer operations without suitable options. We introduce Wavelet Pooling as another alternative to traditional neighborhood pooling. This method decomposes features into a second level decomposition, and discards the first-level subbands to reduce feature dimensions. This method addresses the overfitting problem encountered by max pooling, while reducing features in a more structurally compact manner than pooling via neighborhood regions. Experimental results on four benchmark classification datasets demonstrate our proposed method outperforms or performs comparatively with methods like max, mean, mixed, and stochastic pooling. | accepted-poster-papers | The idea of using wavelet pooling is novel and will inspire much interesting research in this direction. But more thorough experimental justifications, such as those recommended by the reviewers, would make the paper better. Overall, the committee feels this paper will bring value to the conference. | train | [
"SJgEVADSz",
"Sk8IoTPSf",
"rJJWFNNef",
"B1zf5Uvxf",
"B1Q6kMqgM",
"r1JTSdEzf",
"SJv6OO4Mf",
"B1C-Nximz"
] | [
"author",
"public",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author"
] | [
"Short answer is yes. We would love to give access to the code. The longer answer is that it needs to be made more efficient so that the implementation time is reduced. When it was written it wasn't written for CUDA, or MEX, and thus doesn't have the speedups afforded by precompiling, etc. When that happens we will... | [
-1,
-1,
7,
9,
4,
-1,
-1,
-1
] | [
-1,
-1,
4,
3,
4,
-1,
-1,
-1
] | [
"Sk8IoTPSf",
"iclr_2018_rkhlb8lCZ",
"iclr_2018_rkhlb8lCZ",
"iclr_2018_rkhlb8lCZ",
"iclr_2018_rkhlb8lCZ",
"B1zf5Uvxf",
"rJJWFNNef",
"B1Q6kMqgM"
] |
iclr_2018_SJ1Xmf-Rb | FearNet: Brain-Inspired Model for Incremental Learning | Incremental class learning involves sequentially learning classes in bursts of examples from the same class. This violates the assumptions that underlie methods for training standard deep neural networks, and will cause them to suffer from catastrophic forgetting. Arguably, the best method for incremental class learning is iCaRL, but it requires storing training examples for each class, making it challenging to scale. Here, we propose FearNet for incremental class learning. FearNet is a generative model that does not store previous examples, making it memory efficient. FearNet uses a brain-inspired dual-memory system in which new memories are consolidated from a network for recent memories inspired by the mammalian hippocampal complex to a network for long-term storage inspired by medial prefrontal cortex. Memory consolidation is inspired by mechanisms that occur during sleep. FearNet also uses a module inspired by the basolateral amygdala for determining which memory system to use for recall. FearNet achieves state-of-the-art performance at incremental class learning on image (CIFAR-100, CUB-200) and audio classification (AudioSet) benchmarks.
| accepted-poster-papers | A novel brain-inspired dual-memory system for the important problem of incremental learning, with very good results. | test | [
"rJ96Jgclf",
"rkvqPjUVz",
"HyirgqDgM",
"HkvoWK6ef",
"SkTJreuzM",
"HkmO7eOGf",
"HyFfVxOzG",
"Bk1Cfe_MM"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"I quite liked the revival of the dual memory system ideas and the cognitive (neuro) science inspiration. The paper is overall well written and tackles serious modern datasets, which was impressive, even though it relies on a pre-trained, fixed ResNet (see point below).\n\nMy only complaint is that I felt I couldn’... | [
7,
-1,
7,
6,
-1,
-1,
-1,
-1
] | [
2,
-1,
4,
2,
-1,
-1,
-1,
-1
] | [
"iclr_2018_SJ1Xmf-Rb",
"SkTJreuzM",
"iclr_2018_SJ1Xmf-Rb",
"iclr_2018_SJ1Xmf-Rb",
"HyirgqDgM",
"HkvoWK6ef",
"rJ96Jgclf",
"iclr_2018_SJ1Xmf-Rb"
] |
iclr_2018_BJehNfW0- | Do GANs learn the distribution? Some Theory and Empirics | Do GANS (Generative Adversarial Nets) actually learn the target distribution? The foundational paper of Goodfellow et al. (2014) suggested they do, if they were given sufficiently large deep nets, sample size, and computation time. A recent theoretical analysis in Arora et al. (2017) raised doubts whether the same holds when discriminator has bounded size. It showed that the training objective can approach its optimum value even if the generated distribution has very low support. In other words, the training objective is unable to prevent mode collapse. The current paper makes two contributions. (1) It proposes a novel test for estimating support size using the birthday paradox of discrete probability. Using this evidence is presented that well-known GANs approaches do learn distributions of fairly low support. (2) It theoretically studies encoder-decoder GANs architectures (e.g., BiGAN/ALI), which were proposed to learn more meaningful features via GANs, and consequently to also solve the mode-collapse issue. Our result shows that such encoder-decoder training objectives also cannot guarantee learning of the full distribution because they cannot prevent serious mode collapse. More seriously, they cannot prevent learning meaningless codes for data, contrary to usual intuition. | accepted-poster-papers | * presents a novel way analyzing GANs using the birthday paradox and provides a theoretical construction that shows bidirectional GANs cannot escape specific cases of mode collapse
* significant contribution to the discussion of whether GANs learn the target distribution
* thorough justifications | train | [
"rkhhruYgM",
"B1jWee9eM",
"B1g5pBTxz",
"ByMIdi0mz",
"H1O1IXeff",
"BJImH7xfz",
"SJaoN7gfz"
] | [
"official_reviewer",
"official_reviewer",
"official_reviewer",
"author",
"author",
"author",
"author"
] | [
"The paper adds to the discussion on the question whether Generative Adversarial Nets (GANs) learn the target distribution. Recent theoretical analysis of GANs by Arora et al. show that of the discriminator capacity of is bounded, then there is a solution the closely meets the objective but the output distribution ... | [
7,
6,
7,
-1,
-1,
-1,
-1
] | [
4,
4,
3,
-1,
-1,
-1,
-1
] | [
"iclr_2018_BJehNfW0-",
"iclr_2018_BJehNfW0-",
"iclr_2018_BJehNfW0-",
"iclr_2018_BJehNfW0-",
"rkhhruYgM",
"B1jWee9eM",
"B1g5pBTxz"
] |