Columns:
- paper_id: string (length 19-21)
- paper_title: string (length 8-170)
- paper_abstract: string (length 8-5.01k)
- paper_acceptance: string (18 classes)
- meta_review: string (length 29-10k)
- label: string (3 classes)
- review_ids: list
- review_writers: list
- review_contents: list
- review_ratings: list
- review_confidences: list
- review_reply_tos: list
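The records below follow this schema, one field per line. A minimal sketch of a single record as a Python dict may make the layout clearer (values are abbreviated from the first record below; this is an illustration of the schema, not a loader for the actual dataset file):

```python
# Sketch of one record from this dump. The six review_* lists are parallel:
# entry i of each list describes the same review or comment; -1 in the
# rating/confidence lists marks entries with no score (e.g. author replies).
# Only the first three entries of each list are shown here.

record = {
    "paper_id": "iclr_2018_H1a37GWCZ",
    "paper_title": "UNSUPERVISED SENTENCE EMBEDDING USING "
                   "DOCUMENT STRUCTURE-BASED CONTEXT",
    "paper_abstract": "We present a new unsupervised method ...",
    "paper_acceptance": "rejected-papers",
    "meta_review": "The paper presents an interesting extension ...",
    "label": "train",
    "review_ids": ["HJUoksOxG", "HkMKdz9gz", "BkJBMfslf"],
    "review_writers": ["official_reviewer"] * 3,
    "review_contents": ["...", "...", "..."],
    "review_ratings": [7, 5, 5],
    "review_confidences": [4, 4, 4],
    "review_reply_tos": ["iclr_2018_H1a37GWCZ"] * 3,
}

# Sanity check: all review_* lists must have the same length.
parallel = [k for k in record if k.startswith("review_")]
assert len({len(record[k]) for k in parallel}) == 1
```

Note that `review_reply_tos` points either at the paper (top-level reviews) or at another review id (threaded replies), which is how the discussion structure is recovered.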
iclr_2018_H1a37GWCZ
UNSUPERVISED SENTENCE EMBEDDING USING DOCUMENT STRUCTURE-BASED CONTEXT
We present a new unsupervised method for learning general-purpose sentence embeddings. Unlike existing methods which rely on local contexts, such as words inside the sentence or immediately neighboring sentences, our method selects, for each target sentence, influential sentences in the entire document based on a document structure. We identify a dependency structure of sentences using metadata or text styles. Furthermore, we propose a novel out-of-vocabulary word handling technique to model many domain-specific terms, which were mostly discarded by existing sentence embedding methods. We validate our model on several tasks showing 30% precision improvement in coreference resolution in a technical domain, and 7.5% accuracy increase in paraphrase detection compared to baselines.
rejected-papers
The paper presents an interesting extension of the SkipThought idea by modeling sentence embeddings using several kinds of document-structure information. Of the various evaluations presented, the coreference results are interesting, but, as noted by Reviewer 2, they fall short because they do not compare with recent work by Kenton Lee et al. In summary, the idea is an interesting contribution to building sentence embeddings, but the experimental results could have been stronger.
train
[ "HJUoksOxG", "HkMKdz9gz", "BkJBMfslf", "SkhSHOa7z", "Sk9XSOaXf", "BkN-ruaXf", "r17sEOTmz", "ryrYVu6XM", "HJU4pLHzM", "B1mXQXi-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer" ]
[ "This paper presents simple but useful ideas for improving sentence embedding by drawing from more context. The authors build on the skip thought model where a sentence is predicted conditioned on the previous sentence; they posit that one can obtain more information about a sentence from other \"governing\" senten...
[ 7, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1a37GWCZ", "iclr_2018_H1a37GWCZ", "iclr_2018_H1a37GWCZ", "B1mXQXi-M", "B1mXQXi-M", "B1mXQXi-M", "HkMKdz9gz", "HJUoksOxG", "B1mXQXi-M", "BkJBMfslf" ]
iclr_2018_Sk03Yi10Z
An Ensemble of Retrieval-Based and Generation-Based Human-Computer Conversation Systems.
Human-computer conversation systems have attracted much attention in Natural Language Processing. Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (namely a query) in a large conversational repository and return a reply that best matches the query. Generative approaches synthesize new replies. Both ways have certain advantages but suffer from their own disadvantages. We propose a novel ensemble of retrieval-based and generation-based conversation systems. The retrieved candidates, in addition to the original query, are fed to a reply generator via a neural network, so that the model is aware of more information. The generated reply together with the retrieved ones then participates in a re-ranking process to find the final reply to output. Experimental results show that such an ensemble system outperforms each single module by a large margin.
rejected-papers
This paper presents an ensemble method for conversation systems, where a retrieval-based system is ensembled with a generation-based system. The combination is done via a reranker. Evaluation is done on one dataset containing query-reply pairs, with both BLEU and human evaluations. The experimental results for the ensemble model are good. Although the paper presents some novel ideas and may be useful for chatbots (though not for goal-oriented systems), the committee feels that the approach and the presented material do not have enough substance for publication at ICLR: it would be interesting to evaluate this system in a goal-oriented setting, and many prior papers have built (one-step) generation-based conversation systems with which this paper does not present any comparison. Addressing these issues may strengthen the paper for a future venue.
train
[ "SksrEW9eG", "rkQ2C8cxz", "S1EhNw2gz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary:\n\nThe paper proposes a new dialog model combining both retrieval-based and generation-based modules. Answers are produced in three phases: a retrieval-based model extracts candidate answers; a generator model, conditioned on retrieved answers, produces an additional candidate; a reranker outputs the best...
[ 5, 5, 6 ]
[ 3, 3, 3 ]
[ "iclr_2018_Sk03Yi10Z", "iclr_2018_Sk03Yi10Z", "iclr_2018_Sk03Yi10Z" ]
iclr_2018_rkaqxm-0b
Neural Compositional Denotational Semantics for Question Answering
Answering compositional questions requiring multi-step reasoning is challenging for current models. We introduce an end-to-end differentiable model for interpreting questions, which is inspired by formal approaches to semantics. Each span of text is represented by a denotation in a knowledge graph, together with a vector that captures ungrounded aspects of meaning. Learned composition modules recursively combine constituents, culminating in a grounding for the complete sentence which is an answer to the question. For example, to interpret ‘not green’, the model will represent ‘green’ as a set of entities, ‘not’ as a trainable ungrounded vector, and then use this vector to parametrize a composition function to perform a complement operation. For each sentence, we build a parse chart subsuming all possible parses, allowing the model to jointly learn both the composition operators and output structure by gradient descent. We show the model can learn to represent a variety of challenging semantic operators, such as quantifiers, negation, disjunctions and composed relations on a synthetic question answering task. The model also generalizes well to longer sentences than seen in its training data, in contrast to LSTM and RelNet baselines. We will release our code.
rejected-papers
This paper presents a neural compositional model for visual question answering. The overall idea may be exciting, but the committee agrees with Reviewer 1: the experimental section is thin, and the paper evaluates only on an artificial visual QA dataset that does not really need a knowledge base. It would have been better to evaluate on more traditional question answering settings where the answer can be retrieved from a knowledge base (WebQuestions, Free917, etc.) and then compare with the state of the art on those.
train
[ "B1uoZsYlM", "SyvmULHVf", "BJ-RO0meG", "ryx2q7_eG", "S1J7L_TQf", "rJsg8_TXf", "HkBCBuamf", "H1qKB_6Xf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper describes an end to end differentiable model to answer questions based on a knowledge base. They learn the composition modules which combine representations for parts of the question to generate a representation of the whole question. \n\nMy major complaint is the evaluation on a synthetically generate...
[ 4, -1, 5, 7, -1, -1, -1, -1 ]
[ 4, -1, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rkaqxm-0b", "H1qKB_6Xf", "iclr_2018_rkaqxm-0b", "iclr_2018_rkaqxm-0b", "iclr_2018_rkaqxm-0b", "B1uoZsYlM", "ryx2q7_eG", "BJ-RO0meG" ]
iclr_2018_r1kNDlbCb
Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks
Auto-encoders compress input data into a latent-space representation and reconstruct the original data from the representation. This latent representation is not easily interpreted by humans. In this paper, we propose training an auto-encoder that encodes input text into human-readable sentences. The auto-encoder is composed of a generator and a reconstructor. The generator encodes the input text into a shorter word sequence, and the reconstructor recovers the generator input from the generator output. To make the generator output human-readable, a discriminator restricts the output of the generator to resemble human-written sentences. By taking the generator output as the summary of the input text, abstractive summarization is achieved without document-summary pairs as training data. Promising results are shown on both English and Chinese corpora.
rejected-papers
As expressed by most reviewers, the idea of the paper is interesting: using summarization as an intermediate representation for an auto-encoder. In addition, a GAN is applied to the generator output to encourage it to look like human-written summaries; only unpaired summaries are needed. Even so, from the committee's perspective, important baselines are missing from the experimental section: why would one choose this method if it is not competitive with prior work in this vein? One reviewer points out that the method is significantly worse than a supervised baseline. Moreover, the authors mention the work of Miao and Blunsom but could have used one of their experimental setups to show that, at least in the semi-supervised scenario, this work empirically performs as well as or better than that baseline.
train
[ "B1JR9zaVz", "SyrgB9UEz", "HkSpi4cgz", "S1ms6cqxM", "B1_R9Digf", "Bkfuk3L7f", "H1HWq58Qz", "HyJUmo8XG" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Thank you for reading the paper again and giving us comment. We will improve the writing of later sections. If we want to apply dual learning in this text summarization task, the training is not only on “source text -> summary -> source text”, but also on “summary -> source text -> summary”. In the “source text ->...
[ -1, -1, 5, 4, 6, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1 ]
[ "SyrgB9UEz", "Bkfuk3L7f", "iclr_2018_r1kNDlbCb", "iclr_2018_r1kNDlbCb", "iclr_2018_r1kNDlbCb", "HkSpi4cgz", "B1_R9Digf", "S1ms6cqxM" ]
iclr_2018_Hy3MvSlRW
Adversarial reading networks for machine comprehension
Machine reading has recently shown remarkable progress thanks to differentiable reasoning models. In this context, End-to-End trainable Memory Networks (MemN2N) have demonstrated promising performance on simple natural language based reasoning tasks such as factual reasoning and basic deduction. However, the task of machine comprehension is currently bound to a supervised setting and to the available question answering datasets. In this paper we explore the paradigm of adversarial learning and self-play for the task of machine reading comprehension. Inspired by successful approaches in the domain of game learning, we present a novel training approach for this task based on a coupled attention-based memory model. On the one hand, a reader network is in charge of finding answers regarding a passage of text and a question. On the other hand, a narrator network is in charge of obfuscating spans of text in order to minimize the probability of success of the reader. We evaluated the model on several question-answering corpora. The proposed learning paradigm and the associated models show encouraging results.
rejected-papers
The paper presents an adversarial learning framework for reading comprehension. Although the idea is interesting and presents an approach that could make reading comprehension models more robust, the results are not sufficiently solid compared to other baselines (see Reviewer 3's comments) to warrant acceptance. Reviewer 2's comments are also noteworthy: adversarial perturbations to the context around an answer can alter the facts in that context, destroying the actual information present there, and the rebuttal does not seem to address this concern. Addressing these issues will strengthen the paper for a potential future venue.
train
[ "HynOT_5Jz", "BJggQbceG", "Sy0AiMnef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper aims to improve the accuracy of reading model on question answering dataset by playing against an adversarial agent (which is called narrator by the authors) that \"obfuscates\" the document, i.e. changing words in the document. The authors mention that word dropout can be considered as its special case ...
[ 4, 5, 5 ]
[ 5, 5, 4 ]
[ "iclr_2018_Hy3MvSlRW", "iclr_2018_Hy3MvSlRW", "iclr_2018_Hy3MvSlRW" ]
iclr_2018_r1QZ3zbAZ
Adversarial Examples for Natural Language Classification Problems
Modern machine learning algorithms are often susceptible to adversarial examples — maliciously crafted inputs that are undetectable by humans but that fool the algorithm into producing undesirable behavior. In this work, we show that adversarial examples exist in natural language classification: we formalize the notion of an adversarial example in this setting and describe algorithms that construct such examples. Adversarial perturbations can be crafted for a wide range of tasks — including spam filtering, fake news detection, and sentiment analysis — and affect different models — convolutional and recurrent neural networks as well as linear classifiers to a lesser degree. Constructing an adversarial example involves replacing 10-30% of words in a sentence with synonyms that don’t change its meaning. Up to 90% of input examples admit adversarial perturbations; furthermore, these perturbations retain a degree of transferability across models. Our findings demonstrate the existence of vulnerabilities in machine learning systems and hint at limitations in our understanding of classification algorithms.
rejected-papers
This paper presents a way to generate adversarial examples for text classification. The method is simple -- finding semantically similar words and replacing them in sentences with high language model score. The committee identifies weaknesses in this paper that resonate with the reviews below -- reviewer 1 suggests that the authors should closely compare with the work of Papernot et al, and the response to that suggestion is not satisfactory. Addressing such concerns would make the paper stronger for a future venue.
train
[ "Byk2NS9xf", "HJKBdUKJf", "B1CfWm1WG", "Bk7QGoEQz", "SJx-zjVXz", "SkvobiEQz", "B1dv-jEmG", "SkuiQMmff", "BkMNGG7Gz", "BJNs1wMGG", "SJKx9Lzff", "B1BAtUMMG", "ByX3tUGGM", "S1E7t8fGz", "BJC9P7fMz", "SkkTnVJbz", "rJ7D9bpC-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "official_reviewer", "author", "author", "author", "author", "public", "public", "public" ]
[ "This paper proposes a method to generate adversarial examples for text classification problems. They do this by iteratively replacing words in a sentence with words that are close in its embedding space and which cause a change in the predicted class of the text. To preserve correct grammar, they only change words...
[ 4, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ", "HJKBdUKJf", "Byk2NS9xf", "B1CfWm1WG", "BkMNGG7Gz", "BJNs1wMGG", "iclr_2018_r1QZ3zbAZ", "ByX3tUGGM", "rJ7D9bpC-", "HJKBdUKJf", "Byk2NS9xf", "B1CfWm1WG", "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3zbAZ", "iclr_2018_r1QZ3...
iclr_2018_rybDdHe0Z
Sequence Transfer Learning for Neural Decoding
A fundamental challenge in designing brain-computer interfaces (BCIs) is decoding behavior from time-varying neural oscillations. In typical applications, decoders are constructed for individual subjects and with limited data leading to restrictions on the types of models that can be utilized. Currently, the best performing decoders are typically linear models capable of utilizing rigid timing constraints with limited training data. Here we demonstrate the use of Long Short-Term Memory (LSTM) networks to take advantage of the temporal information present in sequential neural data collected from subjects implanted with electrocorticographic (ECoG) electrode arrays performing a finger flexion task. Our constructed models are capable of achieving accuracies that are comparable to existing techniques while also being robust to variation in sample data size. Moreover, we utilize the LSTM networks and an affine transformation layer to construct a novel architecture for transfer learning. We demonstrate that in scenarios where only the affine transform is learned for a new subject, it is possible to achieve results comparable to existing state-of-the-art techniques. The notable advantage is the increased stability of the model during training on novel subjects. Relaxing the constraint of only training the affine transformation, we establish our model as capable of exceeding performance of current models across all training data sizes. Overall, this work demonstrates that LSTMs are a versatile model that can accurately capture temporal patterns in neural data and can provide a foundation for transfer learning in neural decoding.
rejected-papers
This paper tries to establish that LSTMs are suitable for modeling neural signals from the brain. However, the committee and most reviewers find the results inconclusive; they are mixed across subjects. It would have been far more interesting to compare other types of sequence models on this task beyond the few simple baselines implemented here. It is also unclear what the LSTM learns beyond what the other models presented in the paper capture.
train
[ "HJ_bsmPxG", "HJmBCpKeG", "S1D3Hb9eM", "SyQDCnMNz", "H1S6vPofG", "HkJYOvsGG", "BkrakDsfz", "H1AvJwszG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper describes an approach to use LSTM’s for finger classification based on ECOG. and a transfer learning extension of which two variations exists. From the presented results, the LSTM model is not an improvement over a basic linear model. The transfer learning models performs better than subject specific mod...
[ 4, 6, 3, -1, -1, -1, -1, -1 ]
[ 5, 5, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rybDdHe0Z", "iclr_2018_rybDdHe0Z", "iclr_2018_rybDdHe0Z", "H1S6vPofG", "HJ_bsmPxG", "HJ_bsmPxG", "HJmBCpKeG", "S1D3Hb9eM" ]
iclr_2018_HJ_X8GupW
Multi-label Learning for Large Text Corpora using Latent Variable Model with Provable Guarantees
Here we study the problem of learning labels for large text corpora where each document can be assigned a variable number of labels. The problem is trivial when the label dimensionality is small and can be easily solved by a series of one-vs-all classifiers. However, as the label dimensionality increases, the parameter space of such one-vs-all classifiers becomes extremely large and outstrips the memory. Here we propose a latent variable model to reduce the size of the parameter space, but still efficiently learn the labels. We learn the model using spectral learning and show how to extract the parameters using only three passes through the training dataset. Further, we analyse the sample complexity of our model using PAC learning theory and then demonstrate the performance of our algorithm on several benchmark datasets in comparison with existing algorithms.
rejected-papers
There is overall consensus about the paper's lack of novelty and clarity. Reviewer 1 has detailed comments that can be used to strengthen the paper. Reviewer 3 suggests that this paper is very close to Anandkumar et al 2012, and it is not clear where the novelty lies. Addressing these concerns of the reviewers will make the paper more acceptable to future venues.
train
[ "S1xUzwOgz", "B1ctU0uez", "B1ZszE9lG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper studies the problem of multi-label learning for text copora. The paper proposed a latent variable model for the documents and their labels, and used spectral algorithms to provably learn the parameters.\n\nThe model is fairly simplistic: the topic can be one of k topics (pure topic model), based on the ...
[ 4, 3, 4 ]
[ 5, 4, 5 ]
[ "iclr_2018_HJ_X8GupW", "iclr_2018_HJ_X8GupW", "iclr_2018_HJ_X8GupW" ]
iclr_2018_r1AMITFaW
Dependent Bidirectional RNN with Extended-long Short-term Memory
In this work, we first conduct a mathematical analysis of the memory of three RNN cells, where memory is defined as a function that maps an element in a sequence to the current output; namely, the simple recurrent neural network (SRN), the long short-term memory (LSTM) and the gated recurrent unit (GRU). Based on the analysis, we propose a new design, called the extended-long short-term memory (ELSTM), to extend the memory length of a cell. Next, we present a multi-task RNN model that is robust to previous erroneous predictions, called the dependent bidirectional recurrent neural network (DBRNN), for the sequence-in-sequence-out (SISO) problem. Finally, the performance of the DBRNN model with the ELSTM cell is demonstrated by experimental results.
rejected-papers
The reviewers of the paper are not very enthusiastic about the new model proposed, nor are they happy with the experiments presented. It is unclear from both the POS tagging and dependency parsing results where they stand with respect to state-of-the-art methods that do not use RNNs. We understand that the idea is to compare various RNN architectures, but it is surprising that the authors do not show any comparisons with other methods in the literature. The choice to truncate sequences beyond a certain length is also strange. Addressing the concerns of the reviewers will lead to a much stronger paper in the future.
test
[ "SJ2iJmyeM", "ByNyuNYeM", "HyFOESYxM", "ryj3O097f", "SyWcVWC-M", "HyTiNZRZf", "HyI84ZCWG", "r1bUXZAWz", "rkM2fRtZf", "SkILbCF-z", "SJqHV0tZM", "HJ-W4RYZM", "S1fN-RtZz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "The paper proposes a new recurrent cell and a new way to make predictions for sequence tagging. It starts with a theoretical analysis of memory capabilities in different RNN cells and goes on with experiments on POS tagging and dependency parsing. There are serious presentation issues in the paper, which make it h...
[ 3, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_r1AMITFaW", "iclr_2018_r1AMITFaW", "iclr_2018_r1AMITFaW", "r1bUXZAWz", "ByNyuNYeM", "SJ2iJmyeM", "HyFOESYxM", "iclr_2018_r1AMITFaW", "ByNyuNYeM", "S1fN-RtZz", "HJ-W4RYZM", "SJ2iJmyeM", "HyFOESYxM" ]
iclr_2018_HJ39YKiTb
Associative Conversation Model: Generating Visual Information from Textual Information
In this paper, we propose the Associative Conversation Model, which generates visual information from textual information and uses it for generating sentences, in order to utilize visual information in a dialogue system without image input. In research on Neural Machine Translation, there are studies that generate translated sentences using both images and sentences, and these studies show that visual information improves translation performance. However, it is not possible to use sentence generation algorithms that require images in dialogue systems, since many text-based dialogue systems only accept text input. Our approach generates (associates) visual information from input text and generates response text using a context vector that fuses the associated visual information with textual sentence information. A comparative experiment between our proposed model and a model without association showed that our proposed model generates useful sentences by associating visual information related to the sentences. Furthermore, an analysis of the visual association showed that our proposed model generates (associates) visual information that is effective for sentence generation.
rejected-papers
None of the reviewers are enthusiastic about the paper, primarily due to the lack of proper evaluation. The authors' response to this criticism is also insufficient. The final results are mixed, which does not clearly show that the presented associative model performs better than the sole seq2seq baseline the authors use for comparison. We think that addressing these immediate concerns would improve the quality of this paper.
val
[ "SySoIWLgf", "SJ4THI9gG", "BkC87_Cgz", "SkQSCdCmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "\n\nThe authors describe a method to be used in text dialogue systems. The contribution of the paper relies on the usage of visual information to enhance the performance of a dialogue system. An input phrase is expanded with visual information (visual context vectors), next visual and textual information is merged...
[ 4, 3, 3, -1 ]
[ 5, 4, 5, -1 ]
[ "iclr_2018_HJ39YKiTb", "iclr_2018_HJ39YKiTb", "iclr_2018_HJ39YKiTb", "iclr_2018_HJ39YKiTb" ]
iclr_2018_HypkN9yRW
DDRprog: A CLEVR Differentiable Dynamic Reasoning Programmer
We present a generic dynamic architecture that employs a problem specific differentiable forking mechanism to leverage discrete logical information about the problem data structure. We adapt and apply our model to CLEVR Visual Question Answering, giving rise to the DDRprog architecture; compared to previous approaches, our model achieves higher accuracy in half as many epochs with five times fewer learnable parameters. Our model directly models underlying question logic using a recurrent controller that jointly predicts and executes functional neural modules; it explicitly forks subprocesses to handle logical branching. While FiLM and other competitive models are static architectures with less supervision, we argue that inclusion of program labels enables learning of higher level logical operations -- our architecture achieves particularly high performance on questions requiring counting and integer comparison. We further demonstrate the generality of our approach through DDRstack -- an application of our method to reverse Polish notation expression evaluation in which the inclusion of a stack assumption allows our approach to generalize to long expressions, significantly outperforming an LSTM with ten times as many learnable parameters.
rejected-papers
The reviewers generally agree that the DDRprog method is both novel and interesting, while also seeing merit in the empirical results, which outperform related methods. However, there were many complaints about the writing quality, the clarity of the exposition, and the unclear motivation of some of the work. The reviewers also noted insufficient comparisons and discussion of relevant prior art, including recursive NNs, Tree RNNs, IEP, etc. While the authors made substantial revisions to the manuscript, with several additional pages of exposition, the reviewers did not raise their scores or confidence in response.
train
[ "BJZGbH9lz", "Sk3LFlJZf", "Byd74WfbM", "HkibWuTQM", "SJ7lW_pmM", "By8feuaXz", "rJQ-xu6XM", "HkQ3ydpXM", "SJqSJOamM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "\nSummary: This paper leverages an explicit program format and proposes a stack based RNN to solve question answering. The paper shows state-of-the art performance on the CLEVR dataset.\n\nClarity:\n- The description of the model is vague: I have to looking into appendix on what are the Cell and Controller functio...
[ 5, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 2, 2, 2, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HypkN9yRW", "iclr_2018_HypkN9yRW", "iclr_2018_HypkN9yRW", "SJ7lW_pmM", "BJZGbH9lz", "rJQ-xu6XM", "Sk3LFlJZf", "Byd74WfbM", "iclr_2018_HypkN9yRW" ]
iclr_2018_r1TA9ZbA-
Learning to search with MCTSnets
Planning problems are among the most important and well-studied problems in artificial intelligence. They are most typically solved by tree search algorithms that simulate ahead into the future, evaluate future states, and back-up those evaluations to the root of a search tree. Among these algorithms, Monte-Carlo tree search (MCTS) is one of the most general, powerful and widely used. A typical implementation of MCTS uses cleverly designed rules, optimised to the particular characteristics of the domain. These rules control where the simulation traverses, what to evaluate in the states that are reached, and how to back-up those evaluations. In this paper we instead learn where, what and how to search. Our architecture, which we call an MCTSnet, incorporates simulation-based search inside a neural network, by expanding, evaluating and backing-up a vector embedding. The parameters of the network are trained end-to-end using gradient-based optimisation. When applied to small searches in the well-known planning problem Sokoban, the learned search algorithm significantly outperformed MCTS baselines.
rejected-papers
All reviewers agree that the contribution of this paper, a new way of training neural nets to execute Monte-Carlo Tree Search, is an appealing idea. For the most part, the reviewers found the exposition to be fairly clear, and the proposed architecture of good technical quality. Two of the reviewers point out the flaw of evaluating in only a single domain, 10x10 Sokoban with four boxes and four targets. Since the training methodology uses supervised training on approximate ground-truth trajectories derived from extensive plain MCTS trials, it seems unlikely that the trained DNN will generalize to other geometries (beyond 10x10x4) not seen during training. Sokoban also has a low branching ratio, so these experiments provide no insight into how the methodology will scale at much higher branching ratios. Pros: Good technical quality, an interesting novel idea, mostly clear exposition, and good empirical results in one very limited domain. Cons: A single 10x10x4 Sokoban domain is too limited to support any general conclusions. Point for improvement: The paper compares the performance of MCTSnet trials vs. plain MCTS trials based on the number of trials performed. This is not an appropriate comparison, because the NN trials will be much more heavyweight in terms of CPU time, and there is usually a time limit to cut off MCTS trials and execute an action. It would be much better to plot performance of MCTSnet and plain MCTS vs. CPU time used.
val
[ "ByL3DP9gf", "HyC90Zoez", "HkTkZHjlM", "BySoOGDff", "ByptdGwfz", "r1bluMwMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors introduce an approach for adding learning to search capability to Monte Carlo tree search. The proposed method incorporates simulation-based search inside a neural network by expanding, evaluating and backing-up a vector-embedding. The key is to represent the internal state of the search by a memory ve...
[ 7, 4, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_r1TA9ZbA-", "iclr_2018_r1TA9ZbA-", "iclr_2018_r1TA9ZbA-", "ByL3DP9gf", "HyC90Zoez", "HkTkZHjlM" ]
iclr_2018_BkPrDFgR-
Piecewise Linear Neural Networks verification: A comparative study
The success of Deep Learning and its potential use in many important safety-critical applications has motivated research on formal verification of Neural Network (NN) models. Despite the reputation of learned NN models to behave as black boxes and theoretical hardness results of the problem of proving their properties, researchers have been successful in verifying some classes of models by exploiting their piecewise linear structure. Unfortunately, most of these works test their algorithms on their own models and do not offer any comparison with other approaches. As a result, the advantages and downsides of the different algorithms are not well understood. Motivated by the need of accelerating progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming, Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework. We also propose a new data set of benchmarks, in addition to a collection of previously released testcases that can be used to compare existing methods. Our analysis not only allowed a comparison to be made between different strategies, the comparison of results from different solvers also revealed implementation bugs in published methods. We expect that the availability of our benchmark and the analysis of the different approaches will allow researchers to invent and evaluate promising approaches for making progress on this important topic.
rejected-papers
All three reviewers are in agreement that this paper is not ready for ICLR in its current state. Given the pros/cons, the committee feels the paper is not ready for acceptance in its current form.
train
[ "ByZNy3ggM", "H1c3wqQef", "rkq9KFDlz", "SJFIDzpbG", "B1EHxS_bG", "ryngtRP-G", "Sk39OCPWf", "B1ZzuCwbM", "S122D0PZM", "SJGAoJ--f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "The paper studies methods for verifying neural nets through their piecewise\nlinear structure. The authors survey different methods from the literature,\npropose a novel one, and evaluate them on a set of benchmarks.\n\nA major drawback of the evaluation of the different approaches is that\neverything was used wit...
[ 6, 5, 3, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 4, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkPrDFgR-", "iclr_2018_BkPrDFgR-", "iclr_2018_BkPrDFgR-", "B1EHxS_bG", "ryngtRP-G", "ByZNy3ggM", "H1c3wqQef", "rkq9KFDlz", "iclr_2018_BkPrDFgR-", "rkq9KFDlz" ]
iclr_2018_B1nxTzbRZ
Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger
In this paper, we present a defogger, a model that learns to predict future hidden information from partial observations. We formulate this model in the context of forward modeling and leverage spatial and sequential constraints and correlations via convolutional neural networks and long short-term memory networks, respectively. We evaluate our approach on a large dataset of human games of StarCraft: Brood War, a real-time strategy video game. Our models consistently beat strong rule-based baselines and qualitatively produce sensible future game states.
rejected-papers
The reviewer scores are fairly close, and the comments in their reviews are likewise similar. All reviewers indicate that they find this to be an interesting learning domain. However, they also agree in assessing the proposed method as having limited novelty and significance. They also critiqued the empirical evaluation as being too specific to Starcraft and not comprehensive, without providing evidence that the defogger contributes to winning at StarCraft. The authors wrote a substantial rebuttal to the reviews, but it did not convince anyone to increase their scores.
train
[ "SJ-MqkDVf", "HJr90mOlM", "BknmMUteM", "Sy46KdXfM", "rkoTsDaQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "I appreciate the authors responses to my review, and their emphasis on task definition, but my other main concern about the work (poor evaluation --- no actual gameplay using defogger vs no defogger) remains. Also, the authors do not mention any added discussion about how to generalize their \"defogging\" task to...
[ -1, 5, 4, 5, -1 ]
[ -1, 4, 1, 3, -1 ]
[ "BknmMUteM", "iclr_2018_B1nxTzbRZ", "iclr_2018_B1nxTzbRZ", "iclr_2018_B1nxTzbRZ", "iclr_2018_B1nxTzbRZ" ]
iclr_2018_H1LAqMbRW
Latent forward model for Real-time Strategy game planning with incomplete information
Model-free deep reinforcement learning approaches have shown superhuman performance in simulated environments (e.g., Atari games, Go, etc). During training, these approaches often implicitly construct a latent space that contains key information for decision making. In this paper, we learn a forward model on this latent space and apply it to model-based planning in miniature Real-time Strategy game with incomplete information (MiniRTS). We first show that the latent space constructed from existing actor-critic models contains relevant information of the game, and design training procedure to learn forward models. We also show that our learned forward model can predict meaningful future state and is usable for latent space Monte-Carlo Tree Search (MCTS), in terms of win rates against rule-based agents.
rejected-papers
There was certainly some interest in this paper which investigates learning latent models of the environment for model-based planning, particularly articulated by Reviewer3. However, the bulk of reviewer remarks focused on negatives, such as: --The model-based approach is disappointing compared to the model-free approach. --The idea of learning a model based on the features from a model-free agent seems novel but lacks significance in that the results are not very compelling. --I feel the paper overstates the results in saying that the learned forward model is usable in MCTS. --The paper in its current form is not written well and does not contain strong enough empirical results.
train
[ "BJ-32VOxf", "B1qenWKxM", "HJh2yfcgz", "rko8LBpXG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "The paper proposes to use a pretrained model-free RL agent to extract the developed state representation and further re-use it for learning forward model of the environment and planning.\nThe idea of re-using a pretrained agent has both pros and cons. On one hand, it can be simpler than learning a model from scrat...
[ 5, 4, 4, -1 ]
[ 4, 5, 3, -1 ]
[ "iclr_2018_H1LAqMbRW", "iclr_2018_H1LAqMbRW", "iclr_2018_H1LAqMbRW", "iclr_2018_H1LAqMbRW" ]
iclr_2018_r15kjpHa-
Reward Design in Cooperative Multi-agent Reinforcement Learning for Packet Routing
In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provides different local rewards to each agent based solely on individual behavior. Both of the two reward assignment approaches have some shortcomings: the former might encourage lazy agents, while the latter might produce selfish agents. In this paper, we study the reward design problem in cooperative MARL based on packet routing environments. Firstly, we show that the above two reward signals are prone to produce suboptimal policies. Then, inspired by some observations and considerations, we design some mixed reward signals, which can be used off-the-shelf to learn better policies. Finally, we turn the mixed reward signals into the adaptive counterparts, which achieve the best results in our experiments. Other reward signals are also discussed in this paper. As reward design is a very fundamental problem in RL and especially in MARL, we hope that MARL researchers can rethink the rewards used in their systems.
rejected-papers
All reviewers are unanimous that the paper is below threshold for acceptance. The authors have not provided rebuttals, but merely perfunctory generic responses. I think the most important criticism is that the approach is "very ad-hoc." I would encourage the authors to consider more principled ways of automatically designing reward functions, like for example, Inverse Reinforcement Learning, in which you start with a good agent behavior policy, and then estimate a reward function for which the behavior policy maximizes the reward function.
train
[ "r1OoL_Yxz", "rkY2B6KgM", "S1uz175xf", "HJ9oOJxZG", "r1JeKkgWM", "SkzCuyxZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The authors suggest using a mixture of shared and individual rewards within a MARL environment to induce cooperation among independent agents. They show that on their specific application this can lead to a better overall global performance than purely sharing the global signal, or using just the independent rewar...
[ 5, 2, 5, -1, -1, -1 ]
[ 3, 4, 2, -1, -1, -1 ]
[ "iclr_2018_r15kjpHa-", "iclr_2018_r15kjpHa-", "iclr_2018_r15kjpHa-", "S1uz175xf", "r1OoL_Yxz", "rkY2B6KgM" ]
iclr_2018_SJvrXqvaZ
Adversary A3C for Robust Reinforcement Learning
Asynchronous Advantage Actor Critic (A3C) is an effective Reinforcement Learning (RL) algorithm for a wide range of tasks, such as Atari games and robot control. The agent learns policies and value function through trial-and-error interactions with the environment until converging to an optimal policy. Robustness and stability are critical in RL; however, neural network can be vulnerable to noise from unexpected sources and is not likely to withstand very slight disturbances. We note that agents generated from mild environment using A3C are not able to handle challenging environments. Learning from adversarial examples, we proposed an algorithm called Adversary Robust A3C (AR-A3C) to improve the agent’s performance under noisy environments. In this algorithm, an adversarial agent is introduced to the learning process to make it more robust against adversarial disturbances, thereby making it more adaptive to noisy environments. Both simulations and real-world experiments are carried out to illustrate the stability of the proposed algorithm. The AR-A3C algorithm outperforms A3C in both clean and noisy environments.
rejected-papers
Reviewers are unanimous in scoring this paper below threshold for acceptance. The authors did not submit any rebuttals of the reviews. Pros: Paper is generally clear. Hardware results are valuable. Cons: Limited simulation results. Proposed method is not really novel. Insufficient empirical validation of the approach.
test
[ "S14kDbqlG", "r1tT8Ncez", "HJzp02rMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Positive:\n- Interesting approach\n- Hardware validation (the RL field needs more of this!)\n\nNegative:\n- Figure 2: what is the reward here? The one from Section 5.1?\n- No comparisons to other methods: Single pendulum swing-up is a very easy task that has been solved with various methods (mostly in a cart-pole ...
[ 4, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJvrXqvaZ", "iclr_2018_SJvrXqvaZ", "iclr_2018_SJvrXqvaZ" ]
iclr_2018_rJIN_4lA-
Maintaining cooperation in complex social dilemmas using deep reinforcement learning
Social dilemmas are situations where individuals face a temptation to increase their payoffs at a cost to total welfare. Building artificially intelligent agents that achieve good outcomes in these situations is important because many real world interactions include a tension between selfish interests and the welfare of others. We show how to modify modern reinforcement learning methods to construct agents that act in ways that are simple to understand, nice (begin by cooperating), provokable (try to avoid being exploited), and forgiving (try to return to mutual cooperation). We show both theoretically and experimentally that such agents can maintain cooperation in Markov social dilemmas. Our construction does not require training methods beyond a modification of self-play, thus if an environment is such that good strategies can be constructed in the zero-sum case (eg. Atari) then we can construct agents that solve social dilemmas in this environment.
rejected-papers
The reviewers found numerous issues in the paper, including unclear problem definitions, lack of motivation, no support for desiderata, clarity issues, points in discussion appearing to be technically incorrect, restrictive setting, sloppy definitions, and uninteresting experiments. Unfortunately, little note of positive aspects was mentioned. The authors wrote substantial rebuttals, including an extended exchange with Reviewer2, but this had no effect in terms of score changes. Given the current state of the paper, the committee feels the paper falls short of acceptance in its current form.
train
[ "HkdM2LS4z", "Sya_dLrVz", "Sk2SPUSNz", "BkHI88rEM", "Skso_BrVz", "ryocOrBVf", "By8cjAVNM", "HJb7qCN4M", "H1CLw0V4G", "rkhkuoNgf", "B1_TQ-clG", "Bk_1Ws3xf", "SkKkV2qMM", "ByIQVhqGG", "By2P8nqGz", "ByKVU35fG", "rJDtLncGf", "rJg0Nh5zf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "a...
[ "I feel like things are becoming more convoluted as we go along. Surely, agents \n\n\"remain on the equilibrium path because of what they anticipate would happen if\nthey were to deviate\" -- Binmore (1992)\n\nBut this is a statement that holds for general games, I don't see how this helps define a \"social dilemma...
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "ryocOrBVf", "Skso_BrVz", "Skso_BrVz", "Skso_BrVz", "ryocOrBVf", "By8cjAVNM", "rJDtLncGf", "By2P8nqGz", "ByKVU35fG", "iclr_2018_rJIN_4lA-", "iclr_2018_rJIN_4lA-", "iclr_2018_rJIN_4lA-", "Bk_1Ws3xf", "SkKkV2qMM", "ByKVU35fG", "rkhkuoNgf", "By2P8nqGz", "B1_TQ-clG" ]
iclr_2018_B1EGg7ZCb
Autonomous Vehicle Fleet Coordination With Deep Reinforcement Learning
Autonomous vehicles are becoming more common in city transportation. Companies will begin to find a need to teach these vehicles smart city fleet coordination. Currently, simulation based modeling along with hand coded rules dictate the decision making of these autonomous vehicles. We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning. In this paper, we discuss our work for solving this system by adapting the Deep Q-Learning (DQN) model to the multi-agent setting. Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observable to them. We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives such as navigating to a charging station when its energy level is low. The two evaluations presented show that we are successfully able to teach agents cooperation policies while balancing multiple objectives.
rejected-papers
The reviewers agree that the manuscript is below the acceptance threshold at ICLR. Many points of criticism were evident in the reviewer comments, including small artificial test domain, no new methods introduced, poor writing in some places, and dubious need for DeepRL in this domain. The reviews mentioned a number of constructive comments to improve the paper, and we hope this will provide useful guidance for the authors to rewrite and resubmit to a future venue.
train
[ "Hy73csVeG", "HyqzL3Ogz", "rJJxcdqeM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "\n\nThis paper proposes to use deep reinforcement learning to solve a multiagent coordination task. In particular, the paper introduces a benchmark domain to model fleet coordination problems as might be encountered in taxi companies. \n\nThe paper does not really introduce new methods, and as such, this paper sho...
[ 3, 3, 4 ]
[ 5, 3, 4 ]
[ "iclr_2018_B1EGg7ZCb", "iclr_2018_B1EGg7ZCb", "iclr_2018_B1EGg7ZCb" ]
iclr_2018_rye7IMbAZ
Explicit Induction Bias for Transfer Learning with Convolutional Networks
In inductive transfer learning, fine-tuning pre-trained convolutional networks substantially outperforms training from scratch. When using fine-tuning, the underlying assumption is that the pre-trained model extracts generic features, which are at least partially relevant for solving the target task, but would be difficult to extract from the limited amount of data available on the target task. However, besides the initialization with the pre-trained model and the early stopping, there is no mechanism in fine-tuning for retaining the features learned on the source task. In this paper, we investigate several regularization schemes that explicitly promote the similarity of the final solution with the initial model. We eventually recommend a simple L2 penalty using the pre-trained model as a reference, and we show that this approach behaves much better than the standard scheme using weight decay on a partially frozen network.
rejected-papers
This paper addresses the question of how to regularize when starting from a pre-trained convolutional network in the context of transfer learning. The authors propose to regularize toward the parameters of the pre-trained model and study multiple regularizers of this type. The experiments are thorough and convincing enough. This regularizer has been used quite a bit for shallow models (e.g. SVMs as the authors mention, but also e.g. more general MaxEnt models). There is at least some work on regularization toward a pre-trained model also in the context of domain adaptation with deep neural networks (e.g. for speaker adaptation in speech recognition). The only remaining novelty is the transfer learning context. This is not a sufficiently different setting to merit a new paper on the topic.
train
[ "ryD53e9xG", "Hku7RS6lf", "BJQD_I_eM", "S1A91_pmM", "BkoNYvaXf", "SylgFP6Xz", "BkbKdDT7G" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This work addresses the scenario of fine-tuning a pre-trained network for new data/tasks and empirically studies various regularization techniques. Overall, the evaluation concludes with recommending that all layers of a network whose weights are directly transferred during fine-tuning should be regularized agains...
[ 6, 7, 6, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rye7IMbAZ", "iclr_2018_rye7IMbAZ", "iclr_2018_rye7IMbAZ", "iclr_2018_rye7IMbAZ", "ryD53e9xG", "BJQD_I_eM", "Hku7RS6lf" ]
iclr_2018_rkZzY-lCb
Feat2Vec: Dense Vector Representation for Data with Arbitrary Features
Methods that calculate dense vector representations for features in unstructured data—such as words in a document—have proven to be very successful for knowledge representation. We study how to estimate dense representations when multiple feature types exist within a dataset for supervised learning where explicit labels are available, as well as for unsupervised learning where there are no labels. Feat2Vec calculates embeddings for data with multiple feature types enforcing that all different feature types exist in a common space. In the supervised case, we show that our method has advantages over recently proposed methods; such as enabling higher prediction accuracy, and providing a way to avoid the cold-start problem. In the unsupervised case, our experiments suggest that Feat2Vec significantly outperforms existing algorithms that do not leverage the structure of the data. We believe that we are the first to propose a method for learning unsupervised embeddings that leverage the structure of multiple feature types.
rejected-papers
The paper presents an approach for learning continuous-valued vector representations combining multiple input feature sets of different types, in both unsupervised and supervised settings. The revised paper is a merger of the original submission and another ICLR submission. This meta-review takes into account all of the comments on both submissions and revisions. The merged paper is an improvement over the two separate ones. However, the contribution over previous work is still a bit unclear. It still does not sufficiently compare to/discuss in the context of other recent work on combining multiple feature groups. The experiments are also quite limited. The idea is introduced as extremely general, but the experiments focus on a small number of specific tasks, some of them non-standard.
train
[ "HJoZquUSM", "rJygCYHHM", "r1_2VGLlz", "HJfRKPFeM", "ByQ1mb0xM", "H1cuqDpXf", "SJSx5wTmz", "Bk_CuPamf", "rkNuOP6XG" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for your insightful comments.\n\nI. NOVELTY\nAfter reviewing your two references, we believe that our novelty claims still stand:\n\n1) Regarding the \"exponential family embeddings,\" our claim refers to general-purpose embeddings, which we define as “embeddings of an unsupervised method that can be use...
[ -1, -1, 7, 2, 7, -1, -1, -1, -1 ]
[ -1, -1, 3, 2, 5, -1, -1, -1, -1 ]
[ "rJygCYHHM", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "iclr_2018_rkZzY-lCb", "HJfRKPFeM", "r1_2VGLlz", "ByQ1mb0xM" ]
iclr_2018_H18uzzWAZ
Correcting Nuisance Variation using Wasserstein Distance
Profiling cellular phenotypes from microscopic imaging can provide meaningful biological information resulting from various factors affecting the cells. One motivating application is drug development: morphological cell features can be captured from images, from which similarities between different drugs applied at different dosages can be quantified. The general approach is to find a function mapping the images to an embedding space of manageable dimensionality whose geometry captures relevant features of the input images. An important known issue for such methods is separating relevant biological signal from nuisance variation. For example, the embedding vectors tend to be more correlated for cells that were cultured and imaged during the same week than for cells from a different week, despite having identical drug compounds applied in both cases. In this case, the particular batch a set of experiments were conducted in constitutes the domain of the data; an ideal set of image embeddings should contain only the relevant biological information (e.g. drug effects). We develop a general framework for adjusting the image embeddings in order to `forget' domain-specific information while preserving relevant biological information. To do this, we minimize a loss function based on distances between marginal distributions (such as the Wasserstein distance) of embeddings across domains for each replicated treatment. For the dataset presented, the replicated treatment is the negative control. We find that for our transformed embeddings (1) the underlying geometric structure is not only preserved but the embeddings also carry improved biological signal (2) less domain-specific information is present.
rejected-papers
This is a nice but very narrow study of domain invariance in a microscopic imaging application. Since the problem is very general, the paper should include much more substantial context, e.g. discussion of various alternative methods (e.g. the ones cited in Sun et al. 2017). In order to contribute to the broader ICLR community, ideally the paper would also include application to more than just the one task.
val
[ "Hkyq_kqgz", "HJk2HZqxM", "SkvdG35xz", "ByRaH8pXM", "BJoM5IpXG", "Bk5rFIamG", "rkIzDL67M", "HyAuXL6Qf", "S1HeGU6Xf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "The paper discusses a method for adjusting image embeddings in order tease apart technical variation from biological signal. A loss function based on the Wasserstein distance is used. \nThe paper is interesting but could certainly do with more explanations. \n\nComments:\n1. It is difficult for the reader to under...
[ 5, 4, 7, -1, -1, -1, -1, -1, -1 ]
[ 3, 5, 3, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H18uzzWAZ", "iclr_2018_H18uzzWAZ", "iclr_2018_H18uzzWAZ", "HJk2HZqxM", "Hkyq_kqgz", "HJk2HZqxM", "HJk2HZqxM", "SkvdG35xz", "iclr_2018_H18uzzWAZ" ]
iclr_2018_BJvVbCJCb
Neural Clustering By Predicting And Copying Noise
We propose a neural clustering model that jointly learns both latent features and how they cluster. Unlike similar methods our model does not require a predefined number of clusters. Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids. To show the behavior of our model across different modalities we apply our model to both text and image data, achieving very competitive results on MNIST. Finally, we also provide results against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we ourselves create.
rejected-papers
The paper proposes an approach to jointly learning a data clustering and latent representation. The main selling point is that the number of clusters need not be pre-specified. However, there are other hyperparameters and it is not clear why trading # clusters for other hyperparameters is a win. The empirical results are not strong enough to overcome these concerns.
train
[ "ryMv4SZ7M", "rkX7hRmeM", "Hk2HKdrlz", "H1flxytxf", "SyR-e8uWG", "rJmxgLuZG", "r1l0JIdWz", "SyUwqSoJf", "HJyr5Hokf", "SkrcjjVJM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "We have made some changes and additions to the paper during this rebuttal/discussion period. Our main changes are to add further experiments to demonstrate the robustness of the NATAC training method, and to add more baselines to our text-based experiments. In full, we have:\n\n* Added other clustering methods int...
[ -1, 5, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb", "rkX7hRmeM", "Hk2HKdrlz", "H1flxytxf", "SkrcjjVJM", "iclr_2018_BJvVbCJCb", "iclr_2018_BJvVbCJCb" ]
iclr_2018_S191YzbRZ
Prototype Matching Networks for Large-Scale Multi-label Genomic Sequence Classification
One of the fundamental tasks in understanding genomics is the problem of predicting Transcription Factor Binding Sites (TFBSs). With more than hundreds of Transcription Factors (TFs) as labels, genomic-sequence based TFBS prediction is a challenging multi-label classification task. There are two major biological mechanisms for TF binding: (1) sequence-specific binding patterns on genomes known as “motifs” and (2) interactions among TFs known as co-binding effects. In this paper, we propose a novel deep architecture, the Prototype Matching Network (PMN) to mimic the TF binding mechanisms. Our PMN model automatically extracts prototypes (“motif”-like features) for each TF through a novel prototype-matching loss. Borrowing ideas from few-shot matching models, we use the notion of support set of prototypes and an LSTM to learn how TFs interact and bind to genomic sequences. On a reference TFBS dataset with 2.1 million genomic sequences, PMN significantly outperforms baselines and validates our design choices empirically. To our knowledge, this is the first deep learning architecture that introduces prototype learning and considers TF-TF interactions for large scale TFBS prediction. Not only is the proposed architecture accurate, but it also models the underlying biology.
rejected-papers
This paper proposes an approach for predicting transcription factor (TF) binding sites and TF-TF interaction. The approach is interesting and may ultimately be valuable for the intended application. But in its current state, the paper has insufficient technical novelty (e.g. relative to matching networks of Vinyals 2016), insufficient comparisons with prior work, and unclear benefit of the approach. The reviewers also had some concerns about clarity.
val
[ "ryoWUP5lz", "HkM_FfLxM", "H1yqn7qlM", "ByJ37P67M", "rJn6VDaQf", "HJa3NwT7z", "r1Tj4DTQG", "By6F4v6Qz", "ryJY4Dp7f", "ByMwEP67z", "rk-UVPTmM", "Hk2EVPTQG", "H11bNPamG", "SJjyVwpXz", "SJJsmwp7z", "SkGY7vTXz", "HkmwQwpXf", "rJZSXvpXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author" ]
[ "This work proposes an approach for transcription factor binding site prediction using a multi-label classification formulation. It is a very interesting problem and application and the approach is interesting. \n\nNovelty:\nThe method is quite similar to matching networks (Vinyals, 2016) with a few changes in the ...
[ 5, 5, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S191YzbRZ", "iclr_2018_S191YzbRZ", "iclr_2018_S191YzbRZ", "ryoWUP5lz", "HkM_FfLxM", "HkM_FfLxM", "HkM_FfLxM", "HkM_FfLxM", "HkM_FfLxM", "H1yqn7qlM", "H1yqn7qlM", "H1yqn7qlM", "H1yqn7qlM", "H1yqn7qlM", "ryoWUP5lz", "ryoWUP5lz", "ryoWUP5lz", "ryoWUP5lz" ]
iclr_2018_SJzmJEq6W
Learning non-linear transform with discriminative and minimum information loss priors
This paper proposes a novel approach for learning discriminative and sparse representations. It consists of utilizing two different models. A predefined number of non-linear transform models are used in the learning stage, and one sparsifying transform model is used at test time. The non-linear transform models have discriminative and minimum information loss priors. A novel measure related to the discriminative prior is proposed and defined on the support intersection for the transform representations. The minimum information loss prior is expressed as a constraint on the conditioning and the expected coherence of the transform matrix. An equivalence between the non-linear models and the sparsifying model is shown only when the measure that is used to define the discriminative prior goes to zero. An approximation of the measure used in the discriminative prior is addressed, connecting it to a similarity concentration. To quantify the discriminative properties of the transform representation, we introduce another measure and present its bounds. Reflecting the discriminative quality of the transform representation we name it as discrimination power. To support and validate the theoretical analysis a practical learning algorithm is presented. We evaluate the advantages and the potential of the proposed algorithm by a computer simulation. A favorable performance is shown considering the execution time, the quality of the representation, measured by the discrimination power and the recognition accuracy in comparison with the state-of-the-art methods of the same category.
rejected-papers
This paper proposes an approach for learning a sparsifying transform via a set of nonlinear transforms at learning time. The presentation needs a lot of work. The original paper was 17 pages long and very difficult to understand. The revised paper is 12 pages long, which is still too long for the content. The paper needs to better distinguish between the major and minor points. It is still too difficult to judge the contribution.
train
[ "BkTGJ-9EM", "rJiAXzOVM", "Sy8Kdltgz", "HykAaKDgf", "BJy3xb9xM", "Hy5qs4_ZM", "ry7q9yQZf", "r1kudkmbz", "H1mVDJ7Wf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "To all reviewers, we would like to extend the appreciation for taking the necessary time, involvement and effort in reading our initial and rebutted paper version, express our gratitude for all the taken considerations, raised comments and concerns about all aspects of this paper, contributing towards increasing t...
[ -1, -1, 5, 4, 5, -1, -1, -1, -1 ]
[ -1, -1, 2, 2, 1, -1, -1, -1, -1 ]
[ "iclr_2018_SJzmJEq6W", "Sy8Kdltgz", "iclr_2018_SJzmJEq6W", "iclr_2018_SJzmJEq6W", "iclr_2018_SJzmJEq6W", "iclr_2018_SJzmJEq6W", "HykAaKDgf", "Sy8Kdltgz", "BJy3xb9xM" ]
iclr_2018_r1tJKuyRZ
The Set Autoencoder: Unsupervised Representation Learning for Sets
We propose the set autoencoder, a model for unsupervised representation learning for sets of elements. It is closely related to sequence-to-sequence models, which learn fixed-sized latent representations for sequences, and have been applied to a number of challenging supervised sequence tasks such as machine translation, as well as unsupervised representation learning for sequences. In contrast to sequences, sets are permutation invariant. The proposed set autoencoder considers this fact, both with respect to the input as well as the output of the model. On the input side, we adapt a recently-introduced recurrent neural architecture using a content-based attention mechanism. On the output side, we use a stable marriage algorithm to align predictions to labels in the learning phase. We train the model on synthetic data sets of point clouds and show that the learned representations change smoothly with translations in the inputs, preserve distances in the inputs, and that the set size is represented directly. We apply the model to supervised tasks on the point clouds using the fixed-size latent representation. For a number of difficult classification problems, the results are better than those of a model that does not consider the permutation invariance. Especially for small training sets, the set-aware model benefits from unsupervised pretraining.
rejected-papers
The paper proposes an autoencoder for sets, an interesting and timely problem. The encoder here is based on prior related work (Vinyals et al. 2016) while the decoder uses a loss based on finding a matching between the input and output set elements. Experiments on multiple data sets are given, but none are realistic. The reviewers have also pointed out a number of experimental comparisons that would improve the contribution of the paper, such as considering multiple matching algorithms and more baselines. In the end the idea is reasonable and results are encouraging, but too preliminary at this point.
train
[ "rk7TfpBlG", "B1EnXjFxG", "Hk-Qowclf", "rkyAcO7NM", "r1WktiLGG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "This paper mostly extends Vinyals et al, 2015 paper (\"Order Matters\") on how to represent sets as input and/or output of a deep architecture.\n\nAs far as I understood, the set encoder is the same as the one in \"Order Matters\". If not, it would be useful to underline the differences.\n\nThe decoder, on the oth...
[ 4, 5, 5, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_r1tJKuyRZ", "iclr_2018_r1tJKuyRZ", "iclr_2018_r1tJKuyRZ", "rk7TfpBlG", "iclr_2018_r1tJKuyRZ" ]
iclr_2018_B1EVwkqTW
Make SVM great again with Siamese kernel for few-shot learning
While deep neural networks have shown outstanding results in a wide range of applications, learning from a very limited number of examples is still a challenging task. Despite the difficulties of few-shot learning, metric-learning techniques have shown the potential of neural networks for this task, but their results are still not fully satisfactory. In this work, the idea of metric learning is extended with the Support Vector Machine (SVM) working mechanism, which is well known for its generalization capabilities on small datasets. Furthermore, this paper presents an end-to-end learning framework for training adaptive-kernel SVMs, which eliminates the problem of choosing a correct kernel and good features for SVMs. Next, the one-shot learning problem is redefined for audio signals, and the model is tested on a vision task (using the Omniglot dataset) as well as on a speech task (using the TIMIT dataset). On Omniglot, the algorithm improves accuracy from 98.1% to 98.5% on the one-shot classification task and from 98.9% to 99.3% on the few-shot classification task.
rejected-papers
This paper proposes to pre-train a feature embedding, using Siamese networks, for use with few-shot learning for SVMs. The idea is not very novel since there is a fairly large body of work in the general setting of pre-trained features + simple predictor. In addition, the experimental results could be stronger -- there are stronger results in the literature (not cited), and better data sets for testing few-shot learning.
train
[ "B1jQdMSeG", "BkatreVxM", "SybmxPplz", "r1a99OpWz", "B1VE6DTZG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "After reading the rebuttal:\n\nThis paper does have encouraging results. But as mentioned earlier, it still lacks systematic comparisons with existing (and strongest) baselines, and perhaps a better understanding the differences between approaches and the pros and cons. The writing also needs to be improved. So I ...
[ 5, 3, 4, -1, -1 ]
[ 4, 4, 5, -1, -1 ]
[ "iclr_2018_B1EVwkqTW", "iclr_2018_B1EVwkqTW", "iclr_2018_B1EVwkqTW", "B1jQdMSeG", "SybmxPplz" ]
iclr_2018_BkVf1AeAZ
Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks
We propose a method, called Label Embedding Network, which can learn label representations (label embeddings) during the training process of deep networks. With the proposed method, the label embedding is adaptively and automatically learned through back propagation. The original loss function over one-hot label representations is converted into a new loss function over soft distributions, such that originally unrelated labels interact continuously with each other during training. As a result, the trained model achieves substantially higher accuracy and faster convergence. Experimental results on competitive tasks demonstrate the effectiveness of the proposed method, and the learned label embeddings are reasonable and interpretable. The proposed method achieves comparable or even better results than state-of-the-art systems.
rejected-papers
This paper proposes an approach for jointly learning a label embedding and prediction network, as a way of taking advantage of relationships between labels. This general idea is well-motivated, but the specifics of the proposed approach are not motivated or described well. More discussion of relationship with prior work (e.g. other ways of "softening" the softmax) is needed. The authors claim to have state-of-the-art results, but reviewers point out that much better results exist.
train
[ "Hk7pW6HlM", "SyZf4f5gM", "r1zEZ9ief", "By15nh6eG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public" ]
[ "The paper proposes to add an embedding layer for labels that constrains normal classifiers in order to find label representations that are semantically consistent. The approach is then experimented on various image and text tasks.\n\nThe description of the model is laborious and hard to follow. Figure 1 helps but ...
[ 4, 4, 3, -1 ]
[ 5, 3, 4, -1 ]
[ "iclr_2018_BkVf1AeAZ", "iclr_2018_BkVf1AeAZ", "iclr_2018_BkVf1AeAZ", "iclr_2018_BkVf1AeAZ" ]
iclr_2018_HJsk5-Z0W
Structured Deep Factorization Machine: Towards General-Purpose Architectures
In spite of their great success, traditional factorization algorithms typically do not support features (e.g., Matrix Factorization), or their complexity scales quadratically with the number of features (e.g., Factorization Machine). On the other hand, neural methods allow large feature sets, but are often designed for a specific application. We propose novel deep factorization methods that allow efficient and flexible feature representation. For example, we enable describing items with natural language with complexity linear in the vocabulary size—this enables prediction for unseen items and avoids the cold start problem. We show that our architecture can generalize some previously published single-purpose neural architectures. Our experiments suggest improved training times and accuracy compared to shallow methods.
rejected-papers
This paper has been withdrawn by the authors.
train
[ "rkSyVFDeG", "Bk0lEg6eG", "Sk6iGkvbG", "SJrX6Hpmz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author" ]
[ "This paper proposes to improve time complexity of factorization machine. Unfortunately, the paper's claim that FM's time complexity is quadratic to feature size is wrong. Specifically, the dot product can be computed as (which is linear to feature size)\n\n(\\sum x_i \\beta_i)^T (\\sum x_i \\beta_i) - \\sum_i x_i^...
[ 3, 4, 4, -1 ]
[ 5, 5, 3, -1 ]
[ "iclr_2018_HJsk5-Z0W", "iclr_2018_HJsk5-Z0W", "iclr_2018_HJsk5-Z0W", "iclr_2018_HJsk5-Z0W" ]
iclr_2018_SyGT_6yCZ
Simple Fast Convolutional Feature Learning
The quality of the features used in visual recognition is of fundamental importance for the overall system. For a long time, low-level hand-designed feature algorithms as SIFT and HOG have obtained the best results on image recognition. Visual features have recently been extracted from trained convolutional neural networks. Despite the high-quality results, one of the main drawbacks of this approach, when compared with hand-designed features, is the training time required during the learning process. In this paper, we propose a simple and fast way to train supervised convolutional models to feature extraction while still maintaining its high-quality. This methodology is evaluated on different datasets and compared with state-of-the-art approaches.
rejected-papers
The paper addresses the training time of CNNs, in the common setting where a CNN is trained on one domain and then used to extract features for another domain. The paper proposes to speed up the CNN training step via a particular proposed training schedule with a reduced number of epochs. Training time of the pre-trained CNN is not a huge concern, since this is only done once, but optimizing training schedules is a valid and interesting topic of study. However, the approach here does not seem novel; it is typical to adjust training schedules according to the desired tradeoff between training time and performance. The experimental validation is also thin, and the writing needs improvement.
val
[ "SyWq1JvlM", "rk0P3FdeM", "rJUMQfpeM", "rJXa_V5Zz", "r1L6779WM", "ByZqDFKWG", "HkTny32gM", "Bkou3Wjez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "public" ]
[ "This paper deals with early stopping but the contributions are limited. This work would fit better a workshop as a preliminary result, furthermore it is too short. Following a short review section per section.\n\nIntro: The name SFC is misleading as the method consists in stopping early the training with an optimi...
[ 3, 3, 2, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SyGT_6yCZ", "iclr_2018_SyGT_6yCZ", "iclr_2018_SyGT_6yCZ", "rJUMQfpeM", "ByZqDFKWG", "iclr_2018_SyGT_6yCZ", "Bkou3Wjez", "iclr_2018_SyGT_6yCZ" ]
iclr_2018_r1cLblgCZ
Recurrent Auto-Encoder Model for Multidimensional Time Series Representation
A recurrent auto-encoder model can summarise sequential data through an encoder structure into a fixed-length vector and then reconstruct it into its original sequential form through the decoder structure. The summarised information can be used to represent time series features. In this paper, we propose relaxing the dimensionality of the decoder output so that it performs partial reconstruction. The fixed-length vector can therefore represent features only in the selected dimensions. In addition, we propose using a rolling fixed-window approach to generate samples. The change of time series features over time can be summarised as a smooth trajectory path. The fixed-length vectors are further analysed through additional visualisation and unsupervised clustering techniques. The proposed method can be applied in large-scale industrial processes for sensor signal analysis, where clusters of the vector representations can reflect the operating states of selected aspects of the industrial system.
rejected-papers
This paper applies a form of recurrent autoencoder for a specific type of industrial sensor signal analysis. The application is very narrow and the data set is proprietary. The approach is not clearly described, but seems very straightforward and is not placed in context of prior work. It is therefore not clear how to evaluate the contribution of the method. The authors have revised the paper to include more details and prior work, but it still needs a lot more work on all of the above dimensions before it can make a significant contribution to the ICLR community.
train
[ "HJI6Rf1eG", "r1CkPdteG", "BkPl4O9xM", "S1XLye3QG", "SkLr9127f", "HkHE9y27G", "B1oGcJ27M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This writeup describes an application of recurrent autoencoder to analysis of multidimensional time series. The quality of writing, experimentation and scholarship is clearly below than what is expected from a scientific article. The method is explained in a very unclear way, there is no mention of any related wor...
[ 2, 2, 4, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1cLblgCZ", "iclr_2018_r1cLblgCZ", "iclr_2018_r1cLblgCZ", "iclr_2018_r1cLblgCZ", "HJI6Rf1eG", "r1CkPdteG", "BkPl4O9xM" ]
iclr_2018_H1uP7ebAW
Learning to diagnose from scratch by exploiting dependencies among labels
The field of medical diagnostics contains a wealth of challenges which closely resemble classical machine learning problems; practical constraints, however, complicate the translation of these endpoints naively into classical architectures. Many tasks in radiology, for example, are largely problems of multi-label classification wherein medical images are interpreted to indicate multiple present or suspected pathologies. Clinical settings drive the necessity for high accuracy simultaneously across a multitude of pathological outcomes and greatly limit the utility of tools which consider only a subset. This issue is exacerbated by a general scarcity of training data and maximizes the need to extract clinically relevant features from available samples -- ideally without the use of pre-trained models which may carry forward undesirable biases from tangentially related tasks. We present and evaluate a partial solution to these constraints in using LSTMs to leverage interdependencies among target labels in predicting 14 pathologic patterns from chest x-rays and establish state of the art results on the largest publicly available chest x-ray dataset from the NIH without pre-training. Furthermore, we propose and discuss alternative evaluation metrics and their relevance in clinical practice.
rejected-papers
Authors apply dense nets and LSTMs to model dependencies among labels and demonstrate new state-of-the-art performance on an X-Ray dataset. Pros: - Well written. - New improvement to state-of-the-art. Cons: - The novelty is limited: a combination of existing approaches is used to achieve state-of-the-art on what is still a relatively new dataset. (All Reviewers) - Using an LSTM to model dependencies is affected by the selected order of the disease states; in this sense, an LSTM seems like the wrong architecture for modeling dependencies among labels. This may be a drawback in comparison to other methods of modeling dependencies, but it is not thoroughly discussed or evaluated. (Reviewer 1 & 3) - There is a large body of work on multi-task learning with shared information which has not been evaluated for comparison. Because of this, the contribution of the LSTM for modeling dependencies between labels, relative to other available approaches, cannot be verified. (Reviewer 1 & 3) - Top AUC performance on this dataset does not carry much significance on its own, as the dataset is new (CVPR 2017) and few approaches have been tested against it. - Medical literature is not cited to justify with evidence the discovered dependencies among disease states. (Reviewer 1)
train
[ "HJZ2MKRbM", "S1KuIB5gz", "BkE5LPlZG", "SyAfEqafG", "BJoLLcTMM", "BkpwSqpMG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public" ]
[ "The paper proposes to combine the recently proposed DenseNet architecture with LSTMs to tackle the problem of predicting different pathologic patterns from chest x-rays. In particular, the use of LSTMs helps take into account interdependencies between pattern labels. \n\nStrengths:\n- The paper is very well writte...
[ 6, 6, 6, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1 ]
[ "iclr_2018_H1uP7ebAW", "iclr_2018_H1uP7ebAW", "iclr_2018_H1uP7ebAW", "HJZ2MKRbM", "S1KuIB5gz", "BkE5LPlZG" ]
iclr_2018_ryserbZR-
Classification and Disease Localization in Histopathology Using Only Global Labels: A Weakly-Supervised Approach
Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast whole-slide-images of extreme digital resolution (100,000^2 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization and classification using only the image-level (global) labels available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.
rejected-papers
Authors present a method for disease classification and localization in histopathology images. Standard image processing techniques are used to extract and normalize tiles of tissue, after which features are extracted from pretrained networks. A 1-D convolutional filter is applied to the bag of features from the tiles (along the tile dimension, kernel filter size equal to dimensionality of feature vector). The max R and min R values are kept as input into a neural network for classification, and thresholding of these values provides localization for disease / non-disease. Pro: - Potential to reduce annotation complexity of datasets while producing predictions and localization Con: - Results are not great. If anything, results re-affirm why strong annotations are necessary. - Several reviewer concerns regarding novelty of proposed method. While authors have made clear the distinctions from prior art, the significance of those changes is debated. Given the current pros/cons, the committee feels the paper is not ready for acceptance in its current form.
test
[ "S1O8uhkxf", "SkWQLvebf", "Bk72o4NWM", "BkH_ar6XM", "Byy2THTQG", "BJDr6HaXM", "BJ8riS6Qf", "Hy1koS67M", "rkmE5raXf", "r1WptBaXz", "rkjIwbqxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "public" ]
[ "This paper describes a semi-supervised method to classify and segment WSI histological images that are only labeled at the whole image level. Images are tiled and tiles are sampled and encoded into a feature vector via a ResNET-50 pretrained on ImageNET. A 1D convolutional layer followed by a min-max layer and 2 f...
[ 5, 6, 5, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 4, 3, 3, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-", "Bk72o4NWM", "Bk72o4NWM", "Bk72o4NWM", "S1O8uhkxf", "SkWQLvebf", "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-", "iclr_2018_ryserbZR-" ]
iclr_2018_rk1FQA0pW
End-to-End Abnormality Detection in Medical Imaging
Deep neural networks (DNN) have shown promising performance in computer vision. In medical imaging, encouraging results have been achieved with deep learning for applications such as segmentation, lesion detection and classification. Nearly all of the deep learning based image analysis methods work on reconstructed images, which are obtained from the original acquisitions via solving inverse problems (reconstruction). The reconstruction algorithms are designed for human observers, but not necessarily optimized for DNNs, which can often observe features that are incomprehensible to human eyes. Hence, it is desirable to train DNNs directly on the original data, which lie in a different domain than the images. In this paper, we propose an end-to-end DNN for abnormality detection in medical imaging. To align the acquisition with the annotations made by radiologists in the image domain, a DNN was built as the unrolled version of iterative reconstruction algorithms to map the acquisitions to images, followed by a 3D convolutional neural network (CNN) to detect abnormalities in the reconstructed images. The two networks were trained jointly in order to optimize the entire DNN for the detection task from the original acquisitions. The DNN was implemented for lung nodule detection in low-dose chest computed tomography (CT), where a numerical simulation was done to generate acquisitions from 1,018 chest CT images with radiologists' annotations. The proposed end-to-end DNN demonstrated better sensitivity and accuracy for the task compared to a two-step approach, in which the reconstruction and detection DNNs were trained separately. A significant reduction of the false positive rate on suspicious lesions was observed, which is crucial given the known over-diagnosis in low-dose lung CT imaging. The images reconstructed by the proposed end-to-end network also presented enhanced details in the region of interest.
rejected-papers
Authors present an evaluation of end-to-end training connecting reconstruction network with detection network for lung nodules. Pros: - Optimizing a mapping jointly with the task may preserve more information that is relevant to the task. Cons: - Reconstruction network is not "needed" to generate an image -- other algorithms exist for reconstructing images from raw data. Therefore, adding the reconstruction network serves to essentially add more parameters to the neural network. As a baseline, authors should compare to a detection-only framework with a comparable number of parameters to the end-to-end system. Since this is not provided, the true benefit of end-to-end training cannot be assessed. - Performance improvement presented is negligible - Novelty is not clear / significant
train
[ "SkoQMHqlG", "S1gaKDqlM", "Byyu-H4-f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper proposes a DNN for patch-based lung nodule detection, directly from the CT projection data. The two-component network, comprising of the reconstruction network and the nodule detection network, is trained end-to-end. The trained network was validated on a simulated dataset of 1018\tlow-dose chest CT imag...
[ 4, 5, 6 ]
[ 4, 4, 3 ]
[ "iclr_2018_rk1FQA0pW", "iclr_2018_rk1FQA0pW", "iclr_2018_rk1FQA0pW" ]
iclr_2018_HkJ1rgbCb
Using Deep Reinforcement Learning to Generate Rationales for Molecules
Deep learning algorithms are increasingly used in modeling chemical processes. However, black box predictions without rationales have limited use in practical applications, such as drug design. To this end, we learn to identify molecular substructures -- rationales -- that are associated with the target chemical property (e.g., toxicity). The rationales are learned in an unsupervised fashion, requiring no additional information beyond the end-to-end task. We formulate this problem as a reinforcement learning problem over the molecular graph, parametrized by two convolution networks corresponding to rationale selection and prediction based on it, where the latter induces the reward function. We evaluate the approach on two benchmark toxicity datasets. We demonstrate that our model sustains high performance under the additional constraint that predictions strictly follow the rationales. Additionally, we validate the extracted rationales through comparison against those described in the chemical literature and through synthetic experiments.
rejected-papers
Pro: - Interesting approach to tie together reinforcement Q-learning with CNN for prediction and reward function learning in predicting downstream effects of chemical structures, while providing relevant areas for decision-making. Con: - Datasets are small, generalizability not clear. - Performance is not high (although performance wasn't the goal necessarily) - Sometimes test performance is higher than training performance, making results questionable. - Should include comparison to other wrapper-based combinatorial approaches. - Too targeted an appeal/audience (better for chemical journal)
train
[ "r11LXabJz", "S1wvy15xz", "SyI8c-T-f", "ByLbXBdzM", "ByqZ04_GG", "H1iOTE_fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "\nThe paper proposes a feature learning technique for molecular prediction using reinforcement learning. The predictive model is an interesting two-step approach where important atoms of the molecule are added one-by-one with a reward given by a second Q-network that learns how well we can solve the prediction pro...
[ 5, 5, 5, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_HkJ1rgbCb", "iclr_2018_HkJ1rgbCb", "iclr_2018_HkJ1rgbCb", "S1wvy15xz", "r11LXabJz", "SyI8c-T-f" ]
iclr_2018_HytSvlWRZ
Subspace Network: Deep Multi-Task Censored Regression for Modeling Neurodegenerative Diseases
Over the past decade a wide spectrum of machine learning models have been developed to model neurodegenerative diseases, associating biomarkers, especially non-intrusive neuroimaging markers, with key clinical scores measuring the cognitive status of patients. Multi-task learning (MTL) has been extensively explored in these studies to address challenges associated with high dimensionality and small cohort size. However, most existing MTL approaches are based on linear models and suffer from two major limitations: 1) they cannot explicitly consider upper/lower bounds in these clinical scores; 2) they lack the capability to capture complicated non-linear effects among the variables. In this paper, we propose the Subspace Network, an efficient deep modeling approach for non-linear multi-task censored regression. Each layer of the subspace network performs a multi-task censored regression to improve upon the predictions from the last layer via sketching a low-dimensional subspace to perform knowledge transfer among learning tasks. We show that under mild assumptions, for each layer the parametric subspace can be recovered using only one pass of training data. In addition, empirical results demonstrate that the proposed subspace network quickly picks up the correct parameter subspaces, and outperforms state-of-the-art methods in predicting neurodegenerative clinical scores using information from brain imaging.
rejected-papers
Authors present a method for modeling neurodegenerative diseases using a multitask learning framework that considers "censored regression" problems (to model where the outputs have discrete values and ranges). Given the pros/cons, the committee feels this paper is not ready for acceptance in its current state. Pro: - This approach to modeling discrete regression problems is interesting and may hold potential, but the evaluation is not in a state where strong meaningful conclusions can be made. Con: - Reviewers raise multiple concerns regarding evaluation and comparison standards for tasks. While authors have added some model comparisons in response, in other areas comparisons don't appear complete. For example, when using MRI data, networks compared all use features derived from images, rather than systems that may learn from images themselves. Authors claim dataset is too small to learn directly from pixels in this data (in comments), but transfer learning and data augmentation have been successfully applied to learn from datasets of this size. In addition, new multitask techniques in the imaging domain have also been presented that dynamically learn the network structure, rather than relying on a hand-crafted neural network design. How this approach would compare is not addressed.
train
[ "SkwZAL4ef", "r1z2QSOlz", "B1r0SU9gz", "Skzmnk2mf", "SyFl9LxQf", "H189gvyQM", "ByQ7kvkmf", "BJl30LJ7G", "BkE02U1Xf", "rJ2_MDJXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author" ]
[ "This work proposes a multi task learning framework for the modeling of clinical data in neurodegenerative diseases. \nDifferently from previous applications of machine learning in neurodegeneration modeling, the proposed approach models the clinical data accounting for the bounded nature of cognitive tests scores....
[ 4, 5, 5, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "iclr_2018_HytSvlWRZ", "SkwZAL4ef", "BJl30LJ7G", "B1r0SU9gz", "r1z2QSOlz", "H189gvyQM" ]
iclr_2018_HkanP0lRW
Data-driven Feature Sampling for Deep Hyperspectral Classification and Segmentation
The high dimensionality of hyperspectral imaging poses unique challenges in scope, size and processing requirements. Motivated by the potential for an in-the-field cell sorting detector, we examine a Synechocystis sp. PCC 6803 dataset wherein cells are grown alternately in nitrogen-rich or nitrogen-deplete cultures. We use deep learning techniques to both successfully classify cells and generate a mask segmenting the cells/condition from the background. Further, we use the classification accuracy to guide a data-driven, iterative feature selection method, allowing the design of neural networks requiring 90% fewer input features with little accuracy degradation.
rejected-papers
Area chair is in agreement with reviewers: this is a good experiment that successfully applies specific machine learning techniques to the particular task. However, the authors have not discussed or studied the breadth of other possible methods that could also solve the given task ... besides those mentioned by the reviewers, U-Nets, and variants thereof, come to mind. Without these comparisons, the novelty and significance cannot be assessed. Authors are encouraged to study similar works, and perform a comparison among multiple possible approaches, before submission to another venue.
train
[ "HyrFUnNgf", "rkEn8swgG", "S12q91ZZM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Authors propose a greedy scheme to select a subset of (highly correlated) spectral features in a classification task. The selection criterion used is the average magnitude with which this feature contributes to the activation of a next-layer perceptron. Once validation accuracy drops too much, the pruned network i...
[ 3, 6, 4 ]
[ 5, 5, 5 ]
[ "iclr_2018_HkanP0lRW", "iclr_2018_HkanP0lRW", "iclr_2018_HkanP0lRW" ]
iclr_2018_H1K6Tb-AZ
TESLA: Task-wise Early Stopping and Loss Aggregation for Dynamic Neural Network Inference
For inference operations in deep neural networks on end devices, it is desirable to deploy a single pre-trained neural network model which can dynamically scale across a computation range without compromising accuracy. To achieve this goal, Incomplete Dot Product (IDP) has been proposed, which uses only a subset of terms in dot products during forward propagation. However, it has some limitations, including noticeable performance degradation in operating regions with low computational costs, and essential performance limitations since IDP uses hand-crafted profile coefficients. In this paper, we extend IDP by proposing new training algorithms involving a single profile, which may be trainable or pre-determined, to significantly improve overall performance, especially in operating regions with low computational costs. Specifically, we propose the Task-wise Early Stopping and Loss Aggregation (TESLA) algorithm, which, in our 3-layer multilayer perceptron on MNIST, outperforms the original IDP by 32% when only 10% of the dot-product terms are used and achieves 94.7% accuracy on average. By introducing trainable profile coefficients, TESLA further improves the accuracy to 95.5% without specifying coefficients in advance. In addition, TESLA is applied to the VGG-16 model, which achieves 80% accuracy using only 20% of the dot-product terms on CIFAR-10 and keeps 60% accuracy using only 30% of the dot-product terms on CIFAR-100, whereas the original IDP performs like a random guess on these two datasets at such low computation costs. Finally, we visualize the learned representations at different dot-product percentages by class activation map and show that, by applying TESLA, the learned representations can adapt over a wide range of operating regions.
rejected-papers
There is general consensus among the reviewers that the paper does not meet the criteria for publication. Pro: - Improvement over the original IDP proposal. - Some promising preliminary results. Con: - Insufficient comparison to other methods of network compression. - Insufficient comparison to other datasets (such as ImageNet). - Insufficient evaluation on a variety of other models. - Writing could be clearer.
train
[ "SJF0AbKgG", "rJyYwFhlz", "r1VT-4J-z", "SykDmN6Xz", "ryaL9Qp7G", "S11DgXamf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "An approach to adjust inference speed, power consumption or latency by using incomplete dot products McDanel et al. (2017) is investigated.\n\nThe approach is based on `profile coefficients’ which are learned for every channel in a convolution layer, or for every column in the fully connected layer. Based on the m...
[ 4, 5, 4, -1, -1, -1 ]
[ 4, 2, 2, -1, -1, -1 ]
[ "iclr_2018_H1K6Tb-AZ", "iclr_2018_H1K6Tb-AZ", "iclr_2018_H1K6Tb-AZ", "SJF0AbKgG", "rJyYwFhlz", "r1VT-4J-z" ]
iclr_2018_HkjL6MiTb
Siamese Survival Analysis with Competing Risks
Survival Analysis (time-to-event analysis) in the presence of multiple possible adverse events, i.e., competing risks, is a challenging, yet very important problem in medicine, finance, manufacturing, etc. Extending classical survival analysis to competing risks is not trivial since only one event (e.g. one cause of death) is observed and hence, the incidence of an event of interest is often obscured by other related competing events. This leads to the nonidentifiability of the event times’ distribution parameters, which makes the problem significantly more challenging. In this work we introduce Siamese Survival Prognosis Network, a novel Siamese Deep Neural Network architecture that is able to effectively learn from data in the presence of multiple adverse events. The Siamese Survival Network is especially crafted to issue pairwise concordant time-dependent risks, in which longer event times are assigned lower risks. Furthermore, our architecture is able to directly optimize an approximation to the C-discrimination index, rather than relying on well-known metrics such as cross-entropy, which are not able to capture the unique requirements of survival analysis with competing risks. Our results show consistent performance improvements on a number of publicly available medical datasets over both statistical and deep learning state-of-the-art methods.
rejected-papers
Reviewers unanimous in assessment that manuscript has merits, but does not satisfy criteria for publication. Pros: - Potentially novel application of neural networks to survival analysis with competing risks, where only one terminal event from one risk category may be observed. Cons: - Incomplete coverage of other literature. - Architecture novelty may not be significant. - Small performance gains (though statistically significant)
train
[ "SyJXpk5lG", "SkpfobogG", "rkOt8g2ef", "H106JBomz", "SyZNJHimz", "SksVpEimG", "ByYiANimf", "HkcPpViXf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "This paper introduces siamese neural networks to the competing risks framework of Fine and Gray. The authors optimize for the c-index by minimizing a loss function driven by the cumulative risk of competing risk m and correct ordering of comparable pairs. While the idea of optimizing directly for the c-index direc...
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ 4, 4, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HkjL6MiTb", "iclr_2018_HkjL6MiTb", "iclr_2018_HkjL6MiTb", "SyJXpk5lG", "ByYiANimf", "rkOt8g2ef", "SkpfobogG", "SksVpEimG" ]
iclr_2018_ByJbJwxCW
Relational Multi-Instance Learning for Concept Annotation from Medical Time Series
Recent advances in computing technology and sensor design have made it easier to collect longitudinal or time series data from patients, resulting in a gigantic amount of available medical data. Most of the medical time series lack annotations or even when the annotations are available they could be subjective and prone to human errors. Earlier works have developed natural language processing techniques to extract concept annotations and/or clinical narratives from doctor notes. However, these approaches are slow and do not use the accompanying medical time series data. To address this issue, we introduce the problem of concept annotation for the medical time series data, i.e., the task of predicting and localizing medical concepts by using the time series data as input. We propose Relational Multi-Instance Learning (RMIL) - a deep Multi Instance Learning framework based on recurrent neural networks, which uses pooling functions and attention mechanisms for the concept annotation tasks. Empirical results on medical datasets show that our proposed models outperform various multi-instance learning models.
rejected-papers
This paper presents a MIL method for medical time series data. General consensus among reviewers that work does not meet criteria for being accepted. Specifically: Pros: - A variety of meta-learning parameters are evaluated for the task at hand. - Minor novelty of the proposed method Cons: - Minor novelty of the proposed method - Rationale behind architectural design - Thoroughness of experimentation - Suboptimal choice of baseline methods - Lack of broad evaluation across applications for new design - Small dataset size - Significance of improvement
train
[ "Hk2mNy-gG", "rkcWXX9gf", "Hyu6DTogG", "Skeyh697G", "r1ozjTcQM", "SyYWc6qQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper addresses the classification of medical time-series data by formulating the problem as a multi-instance learning (MIL) task, where there is an instance for each timestep of each time series, labels are observed at the time-series level (i.e. for each bag), and the goal is to perform instance-level and se...
[ 6, 3, 3, -1, -1, -1 ]
[ 4, 3, 5, -1, -1, -1 ]
[ "iclr_2018_ByJbJwxCW", "iclr_2018_ByJbJwxCW", "iclr_2018_ByJbJwxCW", "Hk2mNy-gG", "rkcWXX9gf", "Hyu6DTogG" ]
iclr_2018_SJFM0ZWCb
Deep Temporal Clustering: Fully unsupervised learning of time-domain features
Unsupervised learning of timeseries data is a challenging problem in machine learning. Here, we propose a novel algorithm, Deep Temporal Clustering (DTC), a fully unsupervised method, to naturally integrate dimensionality reduction and temporal clustering into a single end-to-end learning framework. The algorithm starts with initial cluster estimates using an autoencoder for dimensionality reduction and a novel temporal clustering layer for cluster assignment. Then it jointly optimizes the clustering objective and the dimensionality reduction objective. Based on the requirements of the application, the temporal clustering layer can be customized with any temporal similarity metric. Several similarity metrics are considered and compared. To gain insight into features that the network has learned for its clustering, we apply a visualization method that generates a heat map of regions of interest in the timeseries. The viability of the algorithm is demonstrated using timeseries data from diverse domains, ranging from earthquakes to sensor data from spacecraft. In each case, we show that our algorithm outperforms traditional methods. This performance is attributed to the fully integrated temporal dimensionality reduction and clustering criterion.
rejected-papers
Joint optimization of dimensionality reduction and temporal clusters. Results suggest performance improvement in a variety of scenarios versus a baseline of a recent state-of-art clustering method. Pro: - Joint optimization may be new and results suggest performance improvement when done on NASA Magnetospheric Multiscale (MMS) Mission. Con: - Small datasets evaluated, impact unclear - Breadth of possible applications unclear - Similarities exist to prior works. Significance of novelty not clear. - Unanimous consensus among reviewers that work is not in a state to be accepted.
train
[ "ryMizdDef", "rkq18W9eG", "HyWGBr5lf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors proposed an algorithm named Deep Temporal Clustering (DTC) that integrates autoencoder with time-series data clustering. Compared to existing methods, DTC used a network structure (CNN + BiLSTM) that suits time-series data. In addition, a new clustering loss with different similarity measures are adopt...
[ 3, 5, 4 ]
[ 5, 4, 4 ]
[ "iclr_2018_SJFM0ZWCb", "iclr_2018_SJFM0ZWCb", "iclr_2018_SJFM0ZWCb" ]
iclr_2018_rJr4kfWCb
Lung Tumor Location and Identification with AlexNet and a Custom CNN
Lung cancer is the leading cause of cancer deaths in the world and early detection is a crucial part of increasing patient survival. Deep learning techniques provide us with a method of automated analysis of patient scans. In this work, we compare AlexNet, a multi-layered and highly flexible architecture, with a custom CNN to determine if lung nodules with patient scans are benign or cancerous. We have found our CNN architecture to be highly accurate (99.79%) and fast while maintaining low False Positive and False Negative rates (< 0.01% and 0.15% respectively). This is important as high false positive rates are a serious issue with lung cancer diagnosis. We have found that AlexNet is not well suited to the problem of nodule identification, though it is a good baseline comparison because of its flexibility.
rejected-papers
Pros: - Addresses an important medical imaging application - Uses an open dataset Con: - Authors do not cite original article describing challenge from which they use their data: https://arxiv.org/pdf/1612.08012.pdf , or the website for the corresponding challenge: https://luna16.grand-challenge.org/results/ - Authors either 1) do not follow the evaluation protocol set forth by the challenge, making it impossible to compare to other methods published on this dataset, or 2) incorrectly describe their use of that public dataset. - Compares only to AlexNet architecture, and not to any of the other multiple methods published on this dataset (see: https://arxiv.org/pdf/1612.08012.pdf). - Too much space is spent explaining well-understood evaluation functions. - As reviewers point out, no motivation for new architecture is given.
train
[ "HkQQ3IQxf", "B1dApr_lf", "SkOp9W5gf", "rk24sxMzf", "Hk36mQzfz", "rkWJCpgGM", "SkbN34JWz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "public", "public", "public" ]
[ "This paper compares 2 CNN architectures (Alexnet and a VGG variant) for the task of classifying images of lung cancer from CT scans. The comparison is trivial and does not go in depth to explain why one architecture works better than the other. Also, no effort is made to explain the data beyond some superficial de...
[ 2, 3, 3, -1, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb", "iclr_2018_rJr4kfWCb" ]
iclr_2018_HJqUtdOaZ
ENRICHMENT OF FEATURES FOR CLASSIFICATION USING AN OPTIMIZED LINEAR/NON-LINEAR COMBINATION OF INPUT FEATURES
Automatic classification of objects is one of the most important tasks in engineering and data mining applications. Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, it can be done by analyzing data sets and their features for a particular problem. Feature combination is the one which can improve the quality of the features. In this paper, a structure similar to Feed-Forward Neural Network (FFNN) is used to generate an optimized linear or non-linear combination of features for classification. Genetic Algorithm (GA) is applied to update weights and biases. Since nature of data sets and their features impact on the effectiveness of combination and classification system, linear and non-linear activation functions (or transfer function) are used to achieve more reliable system. Experiments of several UCI data sets and using minimum distance classifier as a simple classifier indicate that proposed linear and non-linear intelligent FFNN-based feature combination can present more reliable and promising results. By using such a feature combination method, there is no need to use more powerful and complex classifier anymore.
rejected-papers
The presented method essentially builds a model that remaps features into a new space that optimizes nearest-neighbor classification. The model is a neural network, and the optimization is carried out through a genetic algorithm. Pros: - One major issue with neural network classification is that of a lack of explainability. Many networks are currently "black box" approaches. By moving the optimization problem to that of building a feature space for nearest neighbor classification, one can, to a degree, alleviate the "black box" issue by providing the discovered nearest neighbor instances as "evidence" of the decision. - Authors use established datasets. Cons: - Authors do not properly cite previous work, as brought up by reviewers. There is much literature on optimization of feature spaces (such as the entire field of metric learning), as well as prior approaches using genetic optimization. The originality and significance here are therefore not clear.
val
[ "rkbO-pIgf", "rkvIrecgG", "r1o4FsqgM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents a method for feature projection which uses a two level neural network like structure to generate new features from the input features. The weights of the NN like structure are optimised using a genetic search algorithm which optimises the cross-validation error of a nearest neighbor classifier. ...
[ 1, 3, 2 ]
[ 5, 4, 3 ]
[ "iclr_2018_HJqUtdOaZ", "iclr_2018_HJqUtdOaZ", "iclr_2018_HJqUtdOaZ" ]
iclr_2018_S1m6h21Cb
The Cramer Distance as a Solution to Biased Wasserstein Gradients
The Wasserstein probability metric has received much attention from the machine learning community. Unlike the Kullback-Leibler divergence, which strictly measures change in probability, the Wasserstein metric reflects the underlying geometry between outcomes. The value of being sensitive to this geometry has been demonstrated, among others, in ordinal regression and generative modelling, and most recently in reinforcement learning. In this paper we describe three natural properties of probability divergences that we believe reflect requirements from machine learning: sum invariance, scale sensitivity, and unbiased sample gradients. The Wasserstein metric possesses the first two properties but, unlike the Kullback-Leibler divergence, does not possess the third. We provide empirical evidence suggesting this is a serious issue in practice. Leveraging insights from probabilistic forecasting we propose an alternative to the Wasserstein metric, the Cramér distance. We show that the Cramér distance possesses all three desired properties, combining the best of the Wasserstein and Kullback-Leibler divergences. We give empirical results on a number of domains comparing these three divergences. To illustrate the practical relevance of the Cramér distance we design a new algorithm, the Cramér Generative Adversarial Network (GAN), and show that it has a number of desirable properties over the related Wasserstein GAN.
rejected-papers
Pros: - The authors propose a new algorithm to train GANs based on the Cramer distance, arguing that this eases optimization compared to Wasserstein GAN. - Reviewers agree that the paper reads well and provides a good overview of the properties of divergence measures used for GAN training. Cons: - It is not clear how much the central arguments about scale sensitivity, sum invariance, and unbiased sample gradients of the distances hold true in practice and generalize. - The reviewers do not agree that the benefits of the new algorithm are clear from the experiments shown. Given the pros/cons, the committee feels the paper falls short of acceptance in its current form.
train
[ "SJrqRODeM", "B1tHGLTgG", "B1xzpQy-z", "Hy_hZL2WM", "S14Zi72ZG", "rJYu5XnbM", "Bk4m5X2bM", "S19g4lpAZ" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public" ]
[ "The manuscript proposes to use the Cramer distance as a measure between distributions (acting as a loss) when optimizing\nan objective function using stochastic gradient descent (SGD). Cramer distance is a Bregman divergence and is a member of the Lp family of divergences. Here a \"distance\" means a symmetric di...
[ 5, 4, 7, -1, -1, -1, -1, -1 ]
[ 3, 5, 2, -1, -1, -1, -1, -1 ]
[ "iclr_2018_S1m6h21Cb", "iclr_2018_S1m6h21Cb", "iclr_2018_S1m6h21Cb", "S19g4lpAZ", "SJrqRODeM", "B1tHGLTgG", "B1xzpQy-z", "iclr_2018_S1m6h21Cb" ]
iclr_2018_SJahqJZAW
Stabilizing GAN Training with Multiple Random Projections
Training generative adversarial networks is unstable in high-dimensions as the true data distribution tends to be concentrated in a small fraction of the ambient space. The discriminator is then quickly able to classify nearly all generated samples as fake, leaving the generator without meaningful gradients and causing it to deteriorate after a point in training. In this work, we propose training a single generator simultaneously against an array of discriminators, each of which looks at a different random low-dimensional projection of the data. Individual discriminators, now provided with restricted views of the input, are unable to reject generated samples perfectly and continue to provide meaningful gradients to the generator throughout training. Meanwhile, the generator learns to produce samples consistent with the full data distribution to satisfy all discriminators simultaneously. We demonstrate the practical utility of this approach experimentally, and show that it is able to produce image samples with higher quality than traditional training with a single discriminator.
rejected-papers
The paper proposes to use multiple discriminators to stabilize the GAN training process. Additionally, the discriminators only see randomly projected real and generated samples. Some valid concerns raised by the reviewers which makes the paper weak: - Multiple discriminators have been tried before and the authors do not clearly show experimentally / theoretically if the random projection is adding any value. - Authors compare only with DCGAN and the results are mostly subjective. How much improvement the proposed approach provides when compared to other GAN models that are developed with stability as the main goal is hence not clear.
val
[ "r1rx-5Oxf", "BJARkptxz", "rkhnnvolz", "rkk7fo37G", "BkwO6iHGf", "ry0wsiSGz", "SygZiiBMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "\nThe paper proposes to stabilize GAN training by using an ensemble of discriminators, each workin on a random projection of the input data, to provide the training signal for the generator model.\n\nQ1: “In relation to “Theorem 3.1. … will produce samples from a distribution whose marginals along each of the proj...
[ 5, 3, 8, -1, -1, -1, -1 ]
[ 4, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_SJahqJZAW", "iclr_2018_SJahqJZAW", "iclr_2018_SJahqJZAW", "iclr_2018_SJahqJZAW", "r1rx-5Oxf", "BJARkptxz", "rkhnnvolz" ]
iclr_2018_Hy7EPh10W
Novelty Detection with GAN
The ability of a classifier to recognize unknown inputs is important for many classification-based systems. We discuss the problem of simultaneous classification and novelty detection, i.e. determining whether an input is from the known set of classes and from which specific class, or from an unknown domain and does not belong to any of the known classes. We propose a method based on the Generative Adversarial Networks (GAN) framework. We show that a multi-class discriminator trained with a generator that generates samples from a mixture of nominal and novel data distributions is the optimal novelty detector. We approximate that generator with a mixture generator trained with the Feature Matching loss and empirically show that the proposed method outperforms conventional methods for novelty detection. Our findings demonstrate a simple, yet powerful new application of the GAN framework for the task of novelty detection.
rejected-papers
Pros: The paper aims to unify classification and novelty detection, which is interesting and challenging. Cons: - The reviewers find that the work is incremental and contains heuristics. Reviewers find the repurposing of the fake logit in the semi-supervised GAN discriminator for assigning novelty strange. - The experiments presented are weak and the authors do not compare with traditional/stronger approaches for novelty detection such as "learning with abstention" models and density models. Given the pros and cons, the committee finds the paper to fall short of acceptance in its current form.
train
[ "H18Yvh7xG", "HkMvZzYez", "HkCfpG5xf", "Hy-6lhjgG", "BJZYcBsxf", "rJrVZgdlG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "public", "author", "public" ]
[ "This paper proposed a GAN to unify classification and novelty detection. The technical difficulty is acceptable, but there are several issues. First of all, the motivation is clearly given in the 1st paragraph of the introduction: \"In fact for such novel input the algorithm will produce erroneous output and class...
[ 4, 5, 6, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1 ]
[ "iclr_2018_Hy7EPh10W", "iclr_2018_Hy7EPh10W", "iclr_2018_Hy7EPh10W", "BJZYcBsxf", "rJrVZgdlG", "iclr_2018_Hy7EPh10W" ]
iclr_2018_S1EfylZ0Z
Anomaly Detection with Generative Adversarial Networks
Many anomaly detection methods exist that perform well on low-dimensional problems however there is a notable lack of effective methods for high-dimensional spaces, such as images. Inspired by recent successes in deep learning we propose a novel approach to anomaly detection using generative adversarial networks. Given a sample under consideration, our method is based on searching for a good representation of that sample in the latent space of the generator; if such a representation is not found, the sample is deemed anomalous. We achieve state-of-the-art performance on standard image benchmark datasets and visual inspection of the most anomalous samples reveals that our method does indeed return anomalies.
rejected-papers
The authors propose to detect anomalies based on their representation quality in the latent space of a GAN trained on valid samples. Reviewers agree that: - The proposed solution lacks novelty and similar approaches have been tried before. - The baselines presented in the paper are primitive and hence do not demonstrate clear benefits over traditional approaches.
val
[ "By9QpjXlf", "ryxTDKPlz", "BJ1oIDYlG", "By8xh4T7z", "H1-oo4aQf", "Hy3EoV6QM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "In the paper, the authors proposed using GAN for anomaly detection.\nIn the method, we first train generator g_\\theta from a dataset consisting of only healthy data points.\nFor evaluating whether the data point x is anomalous or not, we search for a latent representation z such that x \\approx g_\\theta(z).\nIf ...
[ 4, 6, 4, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_S1EfylZ0Z", "iclr_2018_S1EfylZ0Z", "iclr_2018_S1EfylZ0Z", "By9QpjXlf", "ryxTDKPlz", "BJ1oIDYlG" ]
iclr_2018_rJHcpW-CW
NOVEL AND EFFECTIVE PARALLEL MIX-GENERATOR GENERATIVE ADVERSARIAL NETWORKS
In this paper, we propose a mix-generator generative adversarial networks (PGAN) model that works in parallel by mixing multiple disjoint generators to approximate a complex real distribution. In our model, we propose an adjustment component that collects all the generated data points from the generators, learns the boundary between each pair of generators, and provides error to separate the support of each of the generated distributions. To overcome the instability in a multiplayer game, a shrinkage adjustment component method is introduced to gradually reduce the boundary between generators during the training procedure. To address the linearly growing training time problem in a multiple generators model, we propose a method to train the generators in parallel. This means that our work can be scaled up to large parallel computation frameworks. We present an efficient loss function for the discriminator, an effective adjustment component, and a suitable generator. We also show how to introduce the decay factor to stabilize the training procedure. We have performed extensive experiments on synthetic datasets, MNIST, and CIFAR-10. These experiments reveal that the error provided by the adjustment component could successfully separate the generated distributions and each of the generators can stably learn a part of the real distribution even if only a few modes are contained in the real distribution.
rejected-papers
The paper aims to address the mode collapse issue in GANs by training multiple generators and forcing them to be diverse. Reviewers agree that the proposed solution is not novel and has disadvantages such as an increased parameter count due to multiple generator models. The authors do not provide convincing arguments as to why the proposed approach should work well. The experiments presented also fail to demonstrate this. The results are limited to the smaller MNIST and CIFAR-10 datasets. Comparisons with approaches that directly address the mode collapse problem are missing.
train
[ "HyqGENDgz", "ByyDCx9xf", "BkQh8t5gz", "B1n0E0vlf", "S19N17XlM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "public" ]
[ "Overall, the writing is very confusing at points and needs some attention to make the paper clearer. I’m not entirely sure the authors understand the material particularly well, as I found some of the arguments and narrative confusing or just incorrect. I don’t really see any significant contribution here except “...
[ 3, 6, 5, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_rJHcpW-CW", "iclr_2018_rJHcpW-CW", "iclr_2018_rJHcpW-CW", "S19N17XlM", "iclr_2018_rJHcpW-CW" ]
iclr_2018_S1FQEfZA-
A Classification-Based Perspective on GAN Distributions
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ classification-based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. They also indicate that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, the diversity of such synthetic data is orders of magnitude smaller than that of the original data.
rejected-papers
The paper proposes a new metric to measure GAN performance by training a classifier on the true labeled dataset and then comparing the distribution of the labels of the generated samples to the true label distribution. Reviewers find that the paper is well written but lacks novelty; it is largely experimental and does not present any new insights. The paper investigates well-known mode collapse and diversity issues. Reviewers are not convinced that this is a good metric to measure sample quality or diversity, as the generator can drop examples far away from the boundary and still achieve a good score on this metric.
train
[ "H1WAH_kxM", "BkTles9xM", "rJCxSIslz", "BJ-hES67G", "SJYJ1Ip7z", "SkMOSHp7M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "Overall comments: Trying to shed light at comparison between different GAN variants, but the metrics introduced are not very novel, results are not comparable with prior work and older version of certain models are used (WGAN instead of Improved WGAN)\n\nSection 2.1: quantifying mode collapse\n* This section shoul...
[ 5, 6, 3, -1, -1, -1 ]
[ 5, 4, 4, -1, -1, -1 ]
[ "iclr_2018_S1FQEfZA-", "iclr_2018_S1FQEfZA-", "iclr_2018_S1FQEfZA-", "rJCxSIslz", "H1WAH_kxM", "BkTles9xM" ]
iclr_2018_B1tExikAW
LatentPoison -- Adversarial Attacks On The Latent Space
Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
rejected-papers
The paper proposes to launch adversarial attacks in the latent space of a VAE such that a minimal change in the latent representation leads to the decoder producing an image with class predictions altered. Given the pros/cons, the paper in its current form falls short of acceptance. Pros: Reviewers agree that the paper is well written and easy to follow. Cons: - The paper lacks novelty and uses standard attacks and defense methodology. - Reviewers find the attack scenario presented is unrealistic and hence may not be useful. - Experiments lack rigorous comparisons with baselines and it is not clear if the attack in the latent space will be stronger than the attack in the input space.
train
[ "H1I-1LYxf", "HkCuu2YxG", "B1xzWeqgG", "rJUUQkn7f", "Sk9azy3mf", "BJ5KMJhmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The idea is clearly stated (but lacks some details) and I enjoyed reading the paper. \n\nI understand the difference between [Kos+17] and the proposed scheme but I could not understand in which situation the proposed scheme works better. From the adversary's standpoint, it would be easier to manipulate inputs than...
[ 5, 3, 4, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_B1tExikAW", "iclr_2018_B1tExikAW", "iclr_2018_B1tExikAW", "H1I-1LYxf", "HkCuu2YxG", "B1xzWeqgG" ]
iclr_2018_ryepFJbA-
On Convergence and Stability of GANs
We propose studying GAN training dynamics as regret minimization, which is in contrast to the popular view that there is consistent minimization of a divergence between real and generated distributions. We analyze the convergence of GAN training from this new point of view to understand why mode collapse happens. We hypothesize the existence of undesirable local equilibria in this non-convex game to be responsible for mode collapse. We observe that these local equilibria often exhibit sharp gradients of the discriminator function around some real data points. We demonstrate that these degenerate local equilibria can be avoided with a gradient penalty scheme called DRAGAN. We show that DRAGAN enables faster training, achieves improved stability with fewer mode collapses, and leads to generator networks with better modeling performance across a variety of architectures and objective functions.
rejected-papers
Pros: The proposed regularization for GAN training is interesting and simple to implement. Cons: - Reviewers agree that the methodology is incremental over WGAN with gradient penalty, and the modification is not well motivated. - Experimental results do not clearly demonstrate the benefits of the proposed algorithm and the paper also lacks comparisons with related works. Given the pros/cons, the committee feels the paper is not ready for acceptance in its current state.
train
[ "ByPQQOX1G", "SyYO2aIlG", "Hkd3vAUeG", "Bk9rWSD-f", "rJbz9QP-z", "H1HD-BDWf", "ryKUsoYbf", "S1wmdHPWf", "B1nRUuAgG", "S1GEAvq0Z", "S1EGL-uC-", "rJ91vcOCb", "ryIPRnuCb", "HkXuRauAZ", "HkxbyyKCb", "Sk8vLeK0b", "Hy1sFOsRW", "r184LvsCZ", "HyVsPP5CZ", "HkcJxbYRb", "r1nCXkK0Z", "...
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public", "author", "author", "author", "author", "author", "author", "author", "author", "public", "public", "public", "public", "public", "public", "public...
[ "Summary\n========\nThe authors present a new regularization term, inspired from game theory, which encourages the discriminator's gradient to have a norm equal to one. This leads to reduce the number of local minima, so that the behavior of the optimization scheme gets closer to the optimization of a zero-sum game...
[ 5, 4, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 5, 3, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryepFJbA-", "iclr_2018_ryepFJbA-", "iclr_2018_ryepFJbA-", "SyYO2aIlG", "Hkd3vAUeG", "Bk9rWSD-f", "B1nRUuAgG", "ByPQQOX1G", "iclr_2018_ryepFJbA-", "HyVsPP5CZ", "ByD_wguCW", "rJxYq4uAZ", "S1yaH2dAb", "Bk-qS6_RW", "BJlRh0dC-", "r1nCXkK0Z", "r184LvsCZ", "S1GEAvq0Z", "iclr_...
iclr_2018_ry4SNTe0-
Improve Training Stability of Semi-supervised Generative Adversarial Networks with Collaborative Training
Improved generative adversarial network (Improved GAN) is a successful method of using generative adversarial models to solve the problem of semi-supervised learning. However, it suffers from the problem of unstable training. In this paper, we found that the instability is mostly due to the vanishing gradients on the generator. To remedy this issue, we propose a new method to use collaborative training to improve the stability of semi-supervised GAN with the combination of Wasserstein GAN. The experiments have shown that our proposed method is more stable than the original Improved GAN and achieves comparable classification accuracy on different data sets.
rejected-papers
The paper aims to combine Wasserstein GAN with Improved GAN framework for semi-supervised learning. The reviewers unanimously agree that: - the paper lacks novelty and such approaches have been tried before. - the approach does not make sufficient gains over the baselines and stronger baselines are missing. - the paper is not well written and experimental results are not satisfactory.
test
[ "B1uXHzDeM", "SkaNEl9xM", "BJGtGM9eM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "* Summary *\nThe paper addresses the instability of GAN training. More precisely, the authors aim at improving the stability of the semi-supervised version of GANs presented in [1] (IGAN for short) . The paper presents a novel architecture for training adversarial networks in a semi-supervised settings (Algorithm ...
[ 3, 2, 3 ]
[ 4, 4, 5 ]
[ "iclr_2018_ry4SNTe0-", "iclr_2018_ry4SNTe0-", "iclr_2018_ry4SNTe0-" ]
iclr_2018_By9iRkWA-
Phase Conductor on Multi-layered Attentions for Machine Comprehension
Attention models have been intensively studied to improve NLP tasks such as machine comprehension, via both question-aware passage attention models and self-matching attention models. Our research proposes the phase conductor (PhaseCond), which advances attention models in two meaningful ways. First, PhaseCond, an architecture of multi-layered attention models, consists of multiple phases, each implementing a stack of attention layers producing passage representations and a stack of inner or outer fusion layers regulating the information flow. Second, we extend and improve the dot-product attention function for PhaseCond by simultaneously encoding multiple question and passage embedding layers from different perspectives. We demonstrate the effectiveness of our proposed model PhaseCond on the SQuAD dataset, showing that our model significantly outperforms both state-of-the-art single-layered and multi-layered attention models. We deepen our results with new findings via both detailed qualitative analysis and visualized examples showing the dynamic changes through multi-layered attention models.
rejected-papers
Generally solid engineering work but a bit lacking in terms of novelty and some issues with clarity. At the end of the day the empirical gains are not sufficient for acceptance - the results are state-of-the-art relative to published work, but not in the top 10 based on the official leaderboard (not even at time of submission). Since the technical contributions are small and the engineering contributions have been made obsolete by concurrent work, I suggest rejection.
train
[ "HkFCIBhgf", "HyrDvKC1z", "B1s84WMlM", "Bkhb6wpXM", "BJVf3P67G", "HykAjPT7G", "By6zYD6mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper introduces a fairly elaborate model for reading comprehension evaluated on the SQuAD dataset. The model is shown to improve on the published results but not as-of-submission leaderboard numbers.\n\nThe main weakness of the paper in my opinion is that the innovations seem to be incremental and not base...
[ 8, 5, 5, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_By9iRkWA-", "iclr_2018_By9iRkWA-", "iclr_2018_By9iRkWA-", "HyrDvKC1z", "B1s84WMlM", "B1s84WMlM", "HkFCIBhgf" ]
iclr_2018_S1Q79heRW
Unsupervised Learning of Entailment-Vector Word Embeddings
Entailment vectors are a principled way to encode in a vector what information is known and what is unknown. They are designed to model relations where one vector should include all the information in another vector, called entailment. This paper investigates the unsupervised learning of entailment vectors for the semantics of words. Using simple entailment-based models of the semantics of words in text (distributional semantics), we induce entailment-vector word embeddings which outperform the best previous results for predicting entailment between words, in unsupervised and semi-supervised experiments on hyponymy.
rejected-papers
Two knowledgeable and confident reviewers suggest rejection, while one not confident reviewer suggests acceptance. I agree with the confident reviewers. All reviewers also point out that the paper is confusingly written and difficult to understand.
train
[ "r1kr4BQgG", "SJ4HCiUgz", "BkjD4Eqxz", "ryK-BFqfM", "H1mMzY9Gf", "Sk_LeFqGf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I'm finding this paper really difficult to understand. The introduction is very abstract, and it is hard for me to understand the model as it is explained at the moment. Could the authors please clarify, perhaps in more algorithmic terms, how the model works?\n\nAs for the evaluation, BLESS is a nice dataset, but ...
[ 3, 7, 3, -1, -1, -1 ]
[ 5, 3, 5, -1, -1, -1 ]
[ "iclr_2018_S1Q79heRW", "iclr_2018_S1Q79heRW", "iclr_2018_S1Q79heRW", "r1kr4BQgG", "SJ4HCiUgz", "BkjD4Eqxz" ]
iclr_2018_ryOG3fWCW
Model Specialization for Inference Via End-to-End Distillation, Pruning, and Cascades
The availability of general-purpose reference and benchmark datasets such as ImageNet has spurred the development of general-purpose popular reference model architectures and pre-trained weights. However, in practice, neural networks are often employed to perform specific, more restrictive tasks that are narrower in scope and complexity. Thus, simply fine-tuning or transfer learning from a general-purpose network inherits a large computational cost that may not be necessary for a given task. In this work, we investigate the potential for model specialization, or reducing a model’s computational footprint by leveraging task-specific knowledge, such as a restricted inference distribution. We study three methods for model specialization—1) task-aware distillation, 2) task-aware pruning, and 3) specialized model cascades—and evaluate their performance on a range of classification tasks. Moreover, for the first time, we investigate how these techniques complement one another, enabling up to 5× speedups with no loss in accuracy and 9.8× speedups while remaining within 2.5% of a highly accurate ResNet on specialized image classification tasks. These results suggest that simple and easy-to-implement specialization procedures may benefit a large number of practical applications in which the representational power of general-purpose networks need not be inherited.
rejected-papers
This paper does not meet the bar for ICLR - neither in terms of the quality of the write-up, nor in experimental design. The two confident reviewers agree to reject the paper, the weak accept comes from a less confident reviewer who did not write a good review at all. The rebuttal does not change this assessment.
train
[ "rJ6qTo7gz", "r1wK-mFlM", "HJtb_2teM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper presents an approach to do task aware distillation, task-specific pruning and specialized cascades. The main result is that such methods can yield smaller, efficient and sometimes more accurate models.\n\nThe proposed approach is simple and easy to understand. The task aware distillation relies on the av...
[ 6, 4, 3 ]
[ 3, 4, 4 ]
[ "iclr_2018_ryOG3fWCW", "iclr_2018_ryOG3fWCW", "iclr_2018_ryOG3fWCW" ]
iclr_2018_rJ8rHkWRb
A Simple Fully Connected Network for Composing Word Embeddings from Characters
This work introduces a simple network for producing character aware word embeddings. Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despite using markedly fewer parameters, and without the need to apply dropout. A final experiment suggests that weight sharing contributes to sparsity, increases performance, and prevents overfitting.
rejected-papers
The paper presents yet another approach for modeling words based on their characters. Unfortunately the authors do not compare properly to previous approaches and the idea is very incremental.
val
[ "By7uW7Pef", "HkDiq0Flf", "Byp-dy9gG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a neural network architecture which takes the characters of a word as input along with their positions, and output a word embedding. They then use these as inputs to a GRU language model, which is evaluated on two medium size data sets made from a series of novels and the Project Gutenberg Cana...
[ 3, 5, 4 ]
[ 4, 4, 5 ]
[ "iclr_2018_rJ8rHkWRb", "iclr_2018_rJ8rHkWRb", "iclr_2018_rJ8rHkWRb" ]
iclr_2018_rkYgAJWCZ
One-shot and few-shot learning of word embeddings
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily. By contrast, humans have an incredible ability to do one-shot or few-shot learning. For instance, from just hearing a word used in a sentence, humans can infer a great deal about it, by leveraging what the syntax and semantics of the surrounding words tells us. Here, we draw inspiration from this to highlight a simple technique by which deep recurrent networks can similarly exploit their prior knowledge to learn a useful representation for a new word from little data. This could make natural language processing systems much more flexible, by allowing them to learn continually from the new words they encounter.
rejected-papers
The paper is looking at an interesting problem, but it seems too early. The approach requires training a new language model from scratch for each new word, rendering it completely impractical for real use. The main evaluation therefore only considers four words - "bonuses", "explained", "marketers", "strategist" (expanded to 20 during the rebuttal). This is not sufficient for ICLR.
train
[ "Sk9dBhUlG", "HJnrNAtlG", "Sybz8F9eG", "B1rxDNtmG", "r1i_IEtmG", "S1fm8Vt7z", "ByDSg2Ggz", "rJEcBKfxG", "rJcKVpa1f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "public", "author", "public" ]
[ "I am highly sympathetic to the goals of this paper, and the authors do a good job of contrasting human learning with current deep learning systems, arguing that the lack of a mechanism for few-shot learning in such systems is a barrier to applying them in realistic scenarios. However, the main evaluation only cons...
[ 4, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rkYgAJWCZ", "iclr_2018_rkYgAJWCZ", "iclr_2018_rkYgAJWCZ", "Sk9dBhUlG", "HJnrNAtlG", "Sybz8F9eG", "rJEcBKfxG", "rJcKVpa1f", "iclr_2018_rkYgAJWCZ" ]
iclr_2018_HJw8fAgA-
Learning Dynamic State Abstractions for Model-Based Reinforcement Learning
A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed models that learn predictive and compact state representations, also called state-space models, substantially reduce the computational costs for predicting outcomes of sequences of actions. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment (ALE) from raw pixels. Furthermore, RL agents that use Monte-Carlo rollouts of these models as features for decision making outperform strong model-free baselines on the game MS_PACMAN, demonstrating the benefits of planning using learned dynamic state abstractions.
rejected-papers
There was quite a bit of discussion about this paper but in the end the majority felt that, though the paper is interesting, the results are too limited and more needs to be done for publication. PROS: 1. Good comparison of state space model variations 2. Good writing (perhaps a bit dense in places) 3. Promising results, especially concerning speedup CONS: 1. The evaluation is quite limited
train
[ "HyV_TbIBf", "SyfTyqXSM", "H1KbSmYeM", "BkJknsz-f", "BkCqRjmZz", "B106AfgfG", "HyFH0GgGf", "HkDEpzlGz", "r1FwqfeGG" ]
[ "author", "public", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Would the author of the comment elaborate on their objection? \n\nThe title is justified in our opinion; we use the term \"dynamic state abstraction\" to emphasize the following:\n- we learn state presentations that are more compact than the the raw observations at a single time step, hence they constitute \"abst...
[ -1, -1, 6, 5, 8, -1, -1, -1, -1 ]
[ -1, -1, 4, 4, 4, -1, -1, -1, -1 ]
[ "SyfTyqXSM", "iclr_2018_HJw8fAgA-", "iclr_2018_HJw8fAgA-", "iclr_2018_HJw8fAgA-", "iclr_2018_HJw8fAgA-", "H1KbSmYeM", "HkDEpzlGz", "BkJknsz-f", "BkCqRjmZz" ]
iclr_2018_SJky6Ry0W
Learning Independent Causal Mechanisms
Independent causal mechanisms are a central concept in the study of causality with implications for machine learning tasks. In this work we develop an algorithm to recover a set of (inverse) independent mechanisms relating a distribution transformed by the mechanisms to a reference distribution. The approach is fully unsupervised and based on a set of experts that compete for data to specialize and extract the mechanisms. We test and analyze the proposed method on a series of experiments based on image transformations. Each expert successfully maps a subset of the transformed data to the original domain, and the learned mechanisms generalize to other domains. We discuss implications for domain transfer and links to recent trends in generative modeling.
rejected-papers
PROS: 1. All the reviewers thought that the work was interesting and showed promise 2. The paper is relatively well written CONS: 1. Limited experimental evaluation (just MNIST) The reviewers were all really on the fence about this but in the end felt that while the idea was a good one and the authors were responsive in their rebuttal, the experimental evaluation needed more work.
train
[ "rJAS034ez", "S12z02uez", "SkiVnWtxM", "Hy0bIIFQf", "ryQnSLtXM", "rkUzBUKXz", "rJR1r8FQf", "SkzjmUKXG", "HypuI-dxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "This paper presents a framework to recover a set of independent mechanisms. In order to do so it uses a set of experts each one made out of a GAN.\n\nMy main concern with this work is that I don't see any mechanism in the framework that prevents an expert (or few of them) to win all examples except its own learni...
[ 6, 5, 5, -1, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_SJky6Ry0W", "iclr_2018_SJky6Ry0W", "iclr_2018_SJky6Ry0W", "iclr_2018_SJky6Ry0W", "rJAS034ez", "S12z02uez", "S12z02uez", "SkiVnWtxM", "iclr_2018_SJky6Ry0W" ]
iclr_2018_HkepKG-Rb
A Semantic Loss Function for Deep Learning with Symbolic Knowledge
This paper develops a novel methodology for using symbolic knowledge in deep learning. From first principles, we derive a semantic loss function that bridges between neural output vectors and logical constraints. This loss function captures how close the neural network is to satisfying the constraints on its output. An experimental evaluation shows that our semantic loss function effectively guides the learner to achieve (near-)state-of-the-art results on semi-supervised multi-class classification. Moreover, it significantly increases the ability of the neural network to predict structured objects, such as rankings and shortest paths. These discrete concepts are tremendously difficult to learn, and benefit from a tight integration of deep learning and symbolic reasoning methods.
rejected-papers
This one was really on the fence. After some additional rounds of discussion post-rebuttal with the reviewers I think the general consensus is that it's a good paper and almost there but not quite ready for acceptance at this time. A detailed list of issues and concerns below. PROS: 1. good idea: an additional loss term that enforces semantic constraints on the network output (like exactly 1 output element must be 1). 2. well written generally 3. a nice variety of different experiments CONS: 1. paper organization. The authors start with the axioms they would like a semantic loss function to obey, then provide a general definition, then show it does obey the axioms. The general definition is intractable in a naive implementation. The authors use boolean circuits to tractably solve the problem but this isn't discussed enough and it's unreasonable to expect readers to just give a pass on it without some more background. I personally would prefer an organization that presented the motivation (in english) for the loss definition; then the definition with a description of its pieces and why they are there; then a short discussion of how to implement such a loss in practice using boolean circuits (or if this is too much put it in the appendix); and a pointer to the axiomatization in an appendix. 2. related to 1, I didn't see anything which discussed the training time of this approach. Given that the semantic loss has to be computed in a more involved way than usual, it's not clear whether it is practical.
train
[ "SkRvF__xf", "ByEQXA5lM", "SJJw0N0eM", "S1cu8ZLmf", "BJYV8WUmG", "Syd2rWUQM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "SUMMARY \n\nThe paper proposes a new form of regularization utilizing logical constraints. The semantic loss function is built on the exploitation of symbolic knowledge extracted from data and connecting the logical constraints to the outputs of a neural network. The use of Boolean logic as a constraint provides a...
[ 7, 5, 4, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1 ]
[ "iclr_2018_HkepKG-Rb", "iclr_2018_HkepKG-Rb", "iclr_2018_HkepKG-Rb", "SkRvF__xf", "ByEQXA5lM", "SJJw0N0eM" ]
iclr_2018_HJnQJXbC-
AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In particular, models that exploit structured input via complex and instance-dependent control flow are difficult to accelerate using existing algorithms and hardware that typically rely on minibatching. We present an asynchronous model-parallel (AMP) training algorithm that is specifically motivated by training on networks of interconnected devices. Through an implementation on multi-core CPUs, we show that AMP training converges to the same accuracy as conventional synchronous training algorithms in a similar number of epochs, but utilizes the available hardware more efficiently, even for small minibatch sizes, resulting in shorter overall training times. Our framework opens the door for scaling up a new class of deep learning models that cannot be efficiently trained today.
rejected-papers
The authors propose a system for asynchronous, model-parallel training, suitable for dynamic neural networks. To summarize the reviewers: PROS: 1. Paper contrasts well with existing work. 2. Positive results on dynamic neural network problems. 3. Well written and clear CONS: 1. Some concern about extrapolations/estimates to hardware other than that on CPU. 2. Comparisons with Dynet seem to suggest auto-batching results in a dynamic mode aren't very positive. For 1) the AC notes the author's objections to reviewer 1's views on the value of estimation/extrapolation to non-CPU hardware. However, reviewer 3 voiced a similar concern and both still feel that there is more to be done to be convincing in the experiments.
train
[ "B1vCBbYlz", "HJwWcV8gz", "HJKXoRdgf", "HkGay76mG", "BkBqKCvMz", "r1Gy453ZM", "Symd8cB-G", "HyWWIcHbf", "HJlT49r-M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author" ]
[ "This paper proposes new direction for asynchronous training. While many synchronous and asynchronous approaches for data parallelism have been proposed and implemented in the past, the space of asynchronous model parallelism hasn't really been explored before. This paper discusses an implementation of this approac...
[ 6, 6, 4, -1, -1, -1, -1, -1, -1 ]
[ 5, 4, 5, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJnQJXbC-", "iclr_2018_HJnQJXbC-", "iclr_2018_HJnQJXbC-", "iclr_2018_HJnQJXbC-", "r1Gy453ZM", "HyWWIcHbf", "HJwWcV8gz", "HJKXoRdgf", "B1vCBbYlz" ]
iclr_2018_B1CQGfZ0b
Learning to select examples for program synthesis
Program synthesis is a class of regression problems where one seeks a solution, in the form of a source-code program, that maps the inputs to their corresponding outputs exactly. Due to its precise and combinatorial nature, it is commonly formulated as a constraint satisfaction problem, where input-output examples are expressed as constraints, and solved with a constraint solver. A key challenge of this formulation is that of scalability: while constraint solvers work well with few well-chosen examples, constraining with the entire set of examples constitutes a significant overhead in both time and memory. In this paper we address this challenge by constructing a representative subset of examples that is both small and able to constrain the solver sufficiently. We build the subset one example at a time, using a trained discriminator to predict the probability of unchosen input-output examples conditioned on the chosen input-output examples, and adding the least probable example to the subset. Experiments on a diagram drawing domain show that our approach produces subsets of examples that are small and representative for the constraint solver.
rejected-papers
The reviewers were largely agreed that the paper presented an interesting idea and has potential but needs a better empirical evaluation. It seems that the authors largely agree and are working to improve it. PROS: 1. Improving the speed of program synthesis is a useful problem 2. Good treatment of related work, e.g. CEGIS CONS: 1. The approach likely does not scale 2. The architecture is underspecified making it hard to reproduce 3. Only 1 domain for evaluation
train
[ "SJC_Polgz", "Bycm6ytgf", "By8NGl0xG", "Syz7_waXf", "BycOZvTXf", "SkzWAUTQG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes a method for identifying representative examples for program\nsynthesis to increase the scalability of existing constraint programming\nsolutions. The authors present their approach and evaluate it empirically.\n\nThe proposed approach is interesting, but I feel that the experimental section\ndo...
[ 4, 5, 5, -1, -1, -1 ]
[ 4, 4, 3, -1, -1, -1 ]
[ "iclr_2018_B1CQGfZ0b", "iclr_2018_B1CQGfZ0b", "iclr_2018_B1CQGfZ0b", "SJC_Polgz", "Bycm6ytgf", "By8NGl0xG" ]
iclr_2018_r1kjEuHpZ
Learning Less-Overlapping Representations
In representation learning (RL), how to make the learned representations easy to interpret and less overfitted to training data are two important but challenging issues. To address these problems, we study a new type of regularization approach that encourages the supports of weight vectors in RL models to have small overlap, by simultaneously promoting near-orthogonality among vectors and sparsity of each vector. We apply the proposed regularizer to two models: neural networks (NNs) and sparse coding (SC), and develop an efficient ADMM-based algorithm for regularized SC. Experiments on various datasets demonstrate that weight vectors learned under our regularizer are more interpretable and have better generalization performance.
rejected-papers
Each of the reviewers had a slightly different set of issues with this paper but here is an attempt at a summary: PROS: 1. Paper is mostly clear and well structured. CONS: 1. Lack of novelty 2. Unsupported claims 3. Questionable methodology (using dropout confounds the goal of the experiment) The authors did not submit a rebuttal.
train
[ "B1taBfmlG", "BJ9J8G_ez", "ByL47G5lM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "*Summary*\nThe paper introduces a matrix regularizer to simultaneously induce both sparsity and (approximate) orthogonality. The definition of the regularizer mostly relies on the previous proposal from Xie et al. 2017b, to which a weighted L1 term is added.\nThe regularizer aims at reducing overlap among the lear...
[ 5, 4, 3 ]
[ 4, 4, 5 ]
[ "iclr_2018_r1kjEuHpZ", "iclr_2018_r1kjEuHpZ", "iclr_2018_r1kjEuHpZ" ]
iclr_2018_SJdCUMZAW
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation
Grasping an object and precisely stacking it on another is a difficult task for traditional robotic control or hand-engineered approaches. Here we examine the problem in simulation and provide techniques aimed at solving it via deep reinforcement learning. We introduce two straightforward extensions to the Deep Deterministic Policy Gradient algorithm (DDPG), which make it significantly more data-efficient and scalable. Our results show that by making extensive use of off-policy data and replay, it is possible to find high-performance control policies. Further, our results hint that it may soon be feasible to train successful stacking policies by collecting interactions on real robots.
rejected-papers
The reviewers were quite unanimous in their assessment of this paper. PROS: 1. The paper is relatively clear and the approach makes sense 2. The paper presents and evaluates a collection of approaches to speed learning of policies for manipulation tasks. 3. Improving the data efficiency of learning algorithms and enabling learning across multiple robots is important for practical use in robot manipulation. 4. The multi-stage structure of manipulation is nicely exploited in reward shaping and distribution of starting states for training. CONS 1. Lack of novelty, e.g. w.r.t. Finn et al. in "Deep Spatial Autoencoders for Visuomotor Learning" 2. The techniques of asynchronous update and multiple replay steps may have limited novelty, building closely on previous work and applying it to this new problem. 3. The contribution on reward shaping would benefit from a more detailed description and investigation. 4. There is concern that results may be specific to the chosen task. 5. Experiments using real robots are needed for practical evaluation.
train
[ "SJVDtoHef", "SyFsE_Def", "SkHZuZqxf", "SJEzPorez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "I already reviewed this paper for R:SS 2017. There were no significant updates in this version, see my largely identical detailed comment in \"Official Comment\"\n\nQuality\n======\nThe proposed approaches make sense but it is unclear how task specific they are.\n\nClarity\n=====\nThe paper reads well. The authors...
[ 4, 2, 3, -1 ]
[ 4, 5, 4, -1 ]
[ "iclr_2018_SJdCUMZAW", "iclr_2018_SJdCUMZAW", "iclr_2018_SJdCUMZAW", "iclr_2018_SJdCUMZAW" ]
iclr_2018_H1kMMmb0-
Sequential Coordination of Deep Models for Learning Visual Arithmetic
Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive. Consider a visual arithmetic task, where the goal is to carry out simple arithmetical algorithms on digits presented under natural conditions (e.g. hand-written, placed randomly). We propose a two-tiered architecture for tackling this kind of problem. The lower tier consists of a heterogeneous collection of information processing modules, which can include pre-trained deep neural networks for locating and extracting characters from the image, as well as modules performing symbolic transformations on the representations extracted by perception. The higher tier consists of a controller, trained using reinforcement learning, which coordinates the modules in order to solve the high-level task. For instance, the controller may learn in what contexts to execute the perceptual networks and what symbolic transformations to apply to their outputs. The resulting model is able to solve a variety of tasks in the visual arithmetic domain, and has several advantages over standard, architecturally homogeneous feedforward networks, including improved sample efficiency.
rejected-papers
The consensus among the reviewers is that this paper is not quite ready for publication for reasons I will summarize in more detail below. However, I think there are some things that are really nice about this approach, and worth calling out: PROS: 1. the idea of tackling tasks broadly all the way from perception through symbolic reasoning is an important direction. 2. It certainly would be useful to have a "plug and play" framework in which various knowledge sources or skills can be assembled behind a simple interface designed by the ML practitioner to solve a given problem or class of problems. 3. Clearly finding ways to increase sample efficiency -- especially in a deep net approach -- is of great importance practically. 4. The writing is good. CONS: 1. The comparison to feedforward networks needs to be made fair in order to disentangle the benefit of the architecture from the benefit of pre-training the modules. 2. Using the very limited 2x2 grid was too low a bar for the reviewers. The authors aim at a more general, efficient architecture useful for a variety of tasks, and perhaps you didn't want to devote too much time to this particular task, but I think having a slam-dunk example of the power of the approach is really necessary to be convincing. 3. Given the similarity, I think more has to be done to show the intellectual contribution over Zaremba et al, the difference in motivation notwithstanding. One way to do this is to really prove out the increased sample efficiency claim.
train
[ "ByFXl7Def", "BkQXYLhgz", "BJzWfyTlf", "SkqQIMvMf", "rkcJUMwGf", "H1njHMvzG", "SkoIrMwfG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Summary: This work is a variant of previous work (Zaremba et al. 2016) that enables the use of (noisy) operators that invoke pre-trained neural networks and is trained with Actor-Critic. In this regard it lacks a bit of originality. The quality of the experimental evaluation is not great. The clarity of the paper ...
[ 4, 3, 2, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_H1kMMmb0-", "iclr_2018_H1kMMmb0-", "iclr_2018_H1kMMmb0-", "ByFXl7Def", "ByFXl7Def", "BkQXYLhgz", "BJzWfyTlf" ]
iclr_2018_BkIkkseAZ
Theoretical properties of the global optimizer of two-layer Neural Network
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class includes most nonlinear functions and excludes piecewise linear functions), arbitrary first-order optimal solutions satisfy global optimality provided the hidden layer is non-singular. We essentially show that these non-singular hidden-layer matrices satisfy a "good" property for this broad class of activation functions. Techniques involved in proving this result inspire us to look at a new algorithmic framework, where in between two gradient steps on the hidden layer, we add a stochastic gradient descent (SGD) step on the output layer. In this new algorithmic framework, we extend our earlier result and show that for all finite iterations the hidden layer satisfies the "good" property mentioned earlier, therefore partially explaining the success of noisy gradient methods and addressing the data-independence issue of our earlier result. Both of these results extend easily from square to flat hidden-layer matrices. The results are applicable even if the network has more than one hidden layer, provided all inner hidden layers are arbitrary, satisfy non-singularity, all activations come from the given class of differentiable functions, and optimization is only with respect to the outermost hidden layer. Separately, we also study the smoothness properties of the objective function and show that it is actually Lipschitz smooth, i.e., its gradients do not change sharply. We use these smoothness properties to guarantee asymptotic convergence of O(1/number of iterations) to a first-order optimal solution.
rejected-papers
Understanding the quality of the solutions found by gradient descent for optimizing deep nets is certainly an important area of research. The reviewers found several intermediate results to be interesting. At the same time, the reviewers unanimously have pointed out various technical aspects of the paper that are unclear, particularly new contributions relative to recent prior work. As such, at this time, the paper is not ready for ICLR-2018 acceptance.
train
[ "SyQTA31bG", "ByASB9UEG", "BJUmm5_lz", "SJccyr0-M", "rkvgtaimf", "SyqV2dWfz", "rJSis_bfz", "r1XDiO-GG", "r1FKsfwWz", "ryRVo4QeM", "rkQ3TFMxG", "SJMDtxGlz", "rypGH4bgG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer" ]
[ "The paper studies the theoretical properties of the two-layer neural networks. \n\nTo summarize the result, let's use the theta to denote the layer closer to the label, and W to denote the layer closer to the data. \n\nThe paper shows that \na) if W is fixed, then with respect to the randomness of the data, with p...
[ 4, -1, 7, 7, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 5, -1, 5, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_BkIkkseAZ", "rJSis_bfz", "iclr_2018_BkIkkseAZ", "iclr_2018_BkIkkseAZ", "SJccyr0-M", "SyQTA31bG", "r1XDiO-GG", "BJUmm5_lz", "SyQTA31bG", "rkQ3TFMxG", "SJMDtxGlz", "rypGH4bgG", "iclr_2018_BkIkkseAZ" ]
iclr_2018_ryCM8zWRb
Recurrent Neural Networks with Top-k Gains for Session-based Recommendations
RNNs have been shown to be excellent models for sequential data and in particular for session-based user behavior. The use of RNNs provides impressive performance benefits over classical methods in session-based recommendations. In this work we introduce a novel ranking loss function tailored for RNNs in recommendation settings. The better performance of such a loss over alternatives, along with further tricks and improvements described in this work, allows us to achieve an overall improvement of up to 35% in terms of MRR and Recall@20 over previous session-based RNN solutions and up to 51% over classical collaborative filtering approaches. Unlike data augmentation-based improvements, our method does not increase training times significantly.
rejected-papers
While the use of RNNs for building session-based recommender systems is certainly an important class of applications, the main strength of the paper is to propose and benchmark practical modifications to prior RNN-based systems that lead to performance improvements. The reviewers have pointed out that the writing in the paper needs improvement, the modifications are somewhat straightforward, and some expected baselines, such as comparisons against state-of-the-art matrix-factorization-based methods, are missing. As such the paper could benefit from a revision and resubmission elsewhere.
val
[ "ryqETl9gG", "S1d1eXqlM", "r1QqAe3lG", "B1Qwgzlff", "B1VZezgff", "Hy35efeMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This is an interesting paper that analyzes existing loss functions for session-based recommendations. Based on the results of these analyses the authors propose two novel loss functions which add a weighting to existing ranking-based loss functions. These novelties are meant to improve issues related to vanishing...
[ 8, 4, 6, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1 ]
[ "iclr_2018_ryCM8zWRb", "iclr_2018_ryCM8zWRb", "iclr_2018_ryCM8zWRb", "S1d1eXqlM", "r1QqAe3lG", "ryqETl9gG" ]
iclr_2018_r1saNM-RW
Small Coresets to Represent Large Training Data for Support Vector Machines
Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis. Despite their popularity, even efficient implementations have proven to be computationally expensive to train at a large scale, especially in streaming settings. In this paper, we propose a novel coreset construction algorithm for efficiently generating compact representations of massive data sets to speed up SVM training. A coreset is a weighted subset of the original data points such that SVMs trained on the coreset are provably competitive with those trained on the original (massive) data set. We provide both lower and upper bounds on the number of samples required to obtain accurate approximations to the SVM problem as a function of the complexity of the input data. Our analysis also establishes sufficient conditions for the existence of sufficiently compact and representative coresets for the SVM problem. We empirically evaluate the practical effectiveness of our algorithm on synthetic and real-world data sets.
rejected-papers
While the paper shows some encouraging results for scaling up SVMs using coreset methods, it has fallen short of making a fully convincing case, particularly given the amount of intense interest in this topic back in the heyday of kernel methods. When it comes to scalability, it has become the norm now to benchmark results on far larger datasets using parallelism and specialized hardware in conjunction with algorithmic speedups (e.g., using random feature methods, low-rank approximations such as Nystrom, and other approaches). As such the paper is unlikely to generate much interest in the ICLR community in its current form.
train
[ "BylLVbdNG", "H11vg1wVf", "Byu50uOxf", "BJu6VdYlf", "rkydZLAef", "ByibM2BMz", "B1vBW3SGz", "Sk3SxYn-G", "HJwWeF3bM", "rJoTyFh-G", "SkwxyY2bM" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author" ]
[ "Thank you for the additional consideration.\n\n1) Regarding the *offline* running time of our algorithm, we include below the response that we had posted earlier regarding the runtime comparisons. In short, our algorithm, unlike prior approaches, can be applied to streaming settings where it may not be possible to...
[ -1, -1, 5, 7, 5, -1, -1, -1, -1, -1, -1 ]
[ -1, -1, 3, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "H11vg1wVf", "B1vBW3SGz", "iclr_2018_r1saNM-RW", "iclr_2018_r1saNM-RW", "iclr_2018_r1saNM-RW", "rkydZLAef", "iclr_2018_r1saNM-RW", "Byu50uOxf", "BJu6VdYlf", "rkydZLAef", "iclr_2018_r1saNM-RW" ]
iclr_2018_H1U_af-0-
Quadrature-based features for kernel approximation
We consider the problem of improving kernel approximation via feature maps. These maps arise as Monte Carlo approximations to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use a more efficient numerical integration technique to obtain better estimates of the integrals compared to the state-of-the-art methods. Our approach allows us to use information about the integrand to enhance approximation and facilitates fast computations. We derive the convergence behavior and conduct an extensive empirical study that supports our hypothesis.
rejected-papers
This is an interesting new contribution to the construction of random features for approximating kernel functions. While the empirical results look promising, the reviewers have raised concerns about the lack of insight into why the approach is more effective; the exposition of the quadrature method is difficult to follow; and the connection between the quadrature rules and the random feature map is never explicitly stated. Some comparisons are missing (e.g., QMC methods). As such the paper will benefit from a revision and is not ready for ICLR-2018 acceptance.
train
[ "H1--71dlz", "BkpB7yqxz", "SyGYH-ieG", "BymT16sXz", "SyjTxTsmz", "S1D-gTjmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper proposes to improve the kernel approximation of random features by using quadratures, in particular, stochastic spherical-radial rules. The quadrature rules have smaller variance given the same number of random features, and experiments show its reconstruction error and classification accuracies are bett...
[ 4, 7, 6, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1 ]
[ "iclr_2018_H1U_af-0-", "iclr_2018_H1U_af-0-", "iclr_2018_H1U_af-0-", "SyGYH-ieG", "H1--71dlz", "BkpB7yqxz" ]
iclr_2018_HJBhEMbRb
A Spectral Approach to Generalization and Optimization in Neural Networks
The recent success of deep neural networks stems from their ability to generalize well on real data; however, Zhang et al. have observed that neural networks can easily overfit random labels. This observation demonstrates that with the existing theory, we cannot adequately explain why gradient methods can find generalizable solutions for neural networks. In this work, we use a Fourier-based approach to study the generalization properties of gradient-based methods over 2-layer neural networks with sinusoidal activation functions. We prove that if the underlying distribution of data has nice spectral properties such as bandlimitedness, then the gradient descent method will converge to generalizable local minima. We also establish a Fourier-based generalization bound for bandlimited spaces, which generalizes to other activation functions. Our generalization bound motivates a grouped version of path norms for measuring the complexity of 2-layer neural networks with ReLU activation functions. We demonstrate numerically that regularization of this group path norm results in neural network solutions that can fit true labels without losing test accuracy while not overfitting random labels.
rejected-papers
Understanding the generalization behavior of deep networks is certainly an open problem. While this paper appears to develop some interesting new Fourier-based methods in this direction, the analysis in its current form is currently too restrictive, with somewhat limited empirical support, to broadly appeal to the ICLR community. Please see the reviews for more details.
train
[ "rkcAW6tJG", "SJk182OlM", "H1Mweadlz", "S1Zg-PTQG", "B1f8kP6QM", "BJg_JQeXz", "ryglRAbMG", "HkpNpRbMf", "SJhIn0bfM", "ByHaYRulf", "HJ815-Lxz", "HJ1KekUef", "ry3-q3Xez", "r1I4--WxG", "rJOFw1u1z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "public", "author", "author", "author", "author", "public", "author", "public", "author", "public" ]
[ "Deep neural networks have found great success in various applications. This paper presents a theoretical analysis for 2-layer neural networks (NNs) through a spectral approach. Specifically, the authors develop a Fourier-based generalization bound. Based on this, the authors show that the bandwidth, Fourier l_1 no...
[ 6, 6, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HJBhEMbRb", "iclr_2018_HJBhEMbRb", "iclr_2018_HJBhEMbRb", "BJg_JQeXz", "iclr_2018_HJBhEMbRb", "iclr_2018_HJBhEMbRb", "rkcAW6tJG", "SJk182OlM", "H1Mweadlz", "HJ815-Lxz", "HJ1KekUef", "ry3-q3Xez", "r1I4--WxG", "rJOFw1u1z", "iclr_2018_HJBhEMbRb" ]
iclr_2018_SJ8M9yup-
On Optimality Conditions for Auto-Encoder Signal Recovery
Auto-Encoders are unsupervised models that aim to learn patterns from observed data by minimizing a reconstruction cost. The useful representations learned are often found to be sparse and distributed. On the other hand, compressed sensing and sparse coding assume a data generating process, where the observed data is generated from some true latent signal source, and try to recover the corresponding signal from measurements. Looking at auto-encoders from this signal recovery perspective enables us to have a more coherent view of these techniques. In this paper, in particular, we show that the true hidden representation can be approximately recovered if the weight matrices are highly incoherent with unit ℓ2 row length and the bias vectors take values (approximately) equal to the negative of the data mean. The recovery also becomes more and more accurate as the sparsity in the hidden signals increases. Additionally, we empirically demonstrate that auto-encoders are capable of recovering the data generating dictionary when only data samples are given.
rejected-papers
- The paper is overall difficult to read and would benefit from a revised presentation. - The practical relevance of the recovery conditions and algorithmic consequences of the work is not sufficiently clear or convincing.
train
[ "HySa8MQgz", "B1NjQYOeG", "HkQOWPieM", "r1Nk9t2Xz", "BkNZsY27f", "r1bwcKhQz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "*Summary*\nThe paper studies recovery guarantees within the context of auto-encoders. Assuming a noise-corrupted linear model for the inputs x's, the paper looks at some sufficient properties (e.g., over the generating dictionary denoted by W) to recover the true underlying sparse signals (denoted by h). Several s...
[ 4, 5, 5, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1 ]
[ "iclr_2018_SJ8M9yup-", "iclr_2018_SJ8M9yup-", "iclr_2018_SJ8M9yup-", "HkQOWPieM", "HySa8MQgz", "B1NjQYOeG" ]
iclr_2018_SJu63o10b
UNSUPERVISED METRIC LEARNING VIA NONLINEAR FEATURE SPACE TRANSFORMATIONS
In this paper, we propose a nonlinear unsupervised metric learning framework to boost the performance of clustering algorithms. Under our framework, nonlinear distance metric learning and manifold embedding are integrated and conducted simultaneously to increase the natural separations among data samples. The metric learning component is implemented through feature space transformations, regulated by a nonlinear deformable model called Coherent Point Drifting (CPD). Driven by CPD, data points can reach a higher level of linear separability, which is subsequently picked up by the manifold embedding component to generate well-separable sample projections for clustering. Experimental results on synthetic and benchmark datasets show the effectiveness of our proposed approach over the state-of-the-art solutions in unsupervised metric learning.
rejected-papers
The paper is well written overall. However, the algorithmic framework has limited novelty and the reviewers unanimously are unconvinced by experimental results showing marginal improvements on smallish UCI datasets.
train
[ "HJAY2L-ez", "BJfgoy9xz", "rkX795cez" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposed an unsupervised metric learning method, which is designed for clustering and cannot be used for other problems. The authors argued that unsupervised metric learning should not be a pre-processing method for the following clustering method due to the lack of any similarity/dissimilarity constrai...
[ 6, 4, 4 ]
[ 4, 5, 4 ]
[ "iclr_2018_SJu63o10b", "iclr_2018_SJu63o10b", "iclr_2018_SJu63o10b" ]
iclr_2018_SJDYgPgCZ
Understanding Local Minima in Neural Networks by Loss Surface Decomposition
To provide principled ways of designing proper Deep Neural Network (DNN) models, it is essential to understand the loss surface of DNNs under realistic assumptions. We introduce interesting aspects for understanding the local minima and overall structure of the loss surface. The parameter domain of the loss surface can be decomposed into regions in which the activation values (zero or one for rectified linear units) are consistent. We found that, in each region, the loss surface has properties similar to those of linear neural networks, where every local minimum is a global minimum. This means that every differentiable local minimum is the global minimum of the corresponding region. We prove this for a neural network with one hidden layer using rectified linear units under realistic assumptions. There are poor regions that lead to poor local minima, and we explain why such regions exist even in overparameterized DNNs.
rejected-papers
The reviewers are unanimous in their opinion that the theoretical results in this paper are of limited novelty and significance. Several parts of the paper are not presented clearly enough. As such the paper is not ready for ICLR-2018 acceptance.
train
[ "HJfKMWtxz", "HJ_5MStgf", "B1kVUQjxM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "This paper proposes to study the loss surfaces of neural networks with ReLU activations by viewing the loss surface as a sum of piecewise linear functions at each point in parameter space, i.e. one piecewise linear function per sample. The main result is that every local minimum of the total surface is a global mi...
[ 4, 5, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SJDYgPgCZ", "iclr_2018_SJDYgPgCZ", "iclr_2018_SJDYgPgCZ" ]
iclr_2018_BJgd7m0xRZ
Unsupervised Adversarial Anomaly Detection using One-Class Support Vector Machines
Anomaly detection discovers regular patterns in unlabeled data and identifies the non-conforming data points, which in some cases are the result of malicious attacks by adversaries. Learners such as One-Class Support Vector Machines (OCSVMs) have been used successfully in anomaly detection, yet their performance may degrade significantly in the presence of sophisticated adversaries, who target the algorithm itself by compromising the integrity of the training data. With the rise in the use of machine learning in mission-critical day-to-day activities where errors may have significant consequences, it is imperative that machine learning systems are made secure. To address this, we propose a defense mechanism that is based on a contraction of the data, and we test its effectiveness using OCSVMs. The proposed approach introduces a layer of uncertainty on top of the OCSVM learner, making it infeasible for the adversary to guess the specific configuration of the learner. We theoretically analyze the effects of adversarial perturbations on the separating margin of OCSVMs and provide empirical evidence on several benchmark datasets, which shows that by carefully contracting the data in low-dimensional spaces, we can successfully identify adversarial samples that would not have been identifiable in the original dimensional space. The numerical results show that the proposed method improves OCSVM performance significantly (2-7%).
rejected-papers
The reviewers have unanimously expressed concerns about clarity, novelty, sound theoretical justification and intuitive motivation of the proposed approach.
test
[ "ByZrbWcxG", "BJLMRY2gG", "SkrFXoAef" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The authors propose a defense against attacks on the security of one-class SVM based anomaly detectors. The core idea is to perform a random projection of the data (which is supposed to decrease the impact from adversarial distortions). The approach is empirically tested on the following data: MNIST, CIFAR, and SV...
[ 4, 4, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_BJgd7m0xRZ", "iclr_2018_BJgd7m0xRZ", "iclr_2018_BJgd7m0xRZ" ]
iclr_2018_HyiRazbRb
Demystifying overcomplete nonlinear auto-encoders: fast SGD convergence towards sparse representation from random initialization
Auto-encoders are commonly used for unsupervised representation learning and for pre-training deeper neural networks. When the activation function is linear and the encoding dimension (width of the hidden layer) is smaller than the input dimension, it is well known that the auto-encoder is optimized to learn the principal components of the data distribution (Oja, 1982). However, when the activation is nonlinear and the width is larger than the input dimension (overcomplete), the auto-encoder behaves differently from PCA, and in fact is known to perform well empirically for sparse coding problems. We provide a theoretical explanation for this empirically observed phenomenon when the rectified linear unit (ReLU) is adopted as the activation function and the hidden-layer width is set to be large. In this case, we show that, with significant probability, initializing the weight matrix of an auto-encoder by sampling from a spherical Gaussian distribution and then training with stochastic gradient descent (SGD) converges towards the ground-truth representation for a class of sparse dictionary learning models. In addition, we show that, conditioned on convergence, the expected convergence rate is O(1/t), where t is the number of updates. Our analysis quantifies how increasing the hidden-layer width helps training performance when random initialization is used, and how the norm of the network weights influences the speed of SGD convergence.
rejected-papers
The reviewers have unanimously expressed strong concerns about the technical correctness of the theoretical results in the paper. The paper should be carefully revised and checked for technical errors. In its current form, the paper is not suitable for acceptance at ICLR 2018.
train
[ "HyutwEMJG", "HkErJ5eez", "SJyMoI5gG", "H1vdfKpXG", "rJwXE9amz", "rkofl9aXM", "H1wvdY6mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The authors study the convergence of a procedure for learning\nan autoencoder with a ReLu non-linearity. The procedure is akin\nto stochastic gradient descent, with some parameters updated at\neach iteration in a manner that performs optimization with respect\nto the population risk.\n\nThe autoencoders that they...
[ 2, 3, 2, -1, -1, -1, -1 ]
[ 4, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HyiRazbRb", "iclr_2018_HyiRazbRb", "iclr_2018_HyiRazbRb", "SJyMoI5gG", "iclr_2018_HyiRazbRb", "HyutwEMJG", "HkErJ5eez" ]
iclr_2018_rJSr0GZR-
Learning Priors for Adversarial Autoencoders
Most deep latent factor models choose simple priors for simplicity, for tractability, or because it is unclear what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate better image quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we present its ability to perform cross-domain translation in a text-to-image synthesis task.
rejected-papers
The paper proposes learning the prior for AAEs by training a code generator that is seeded by the standard Gaussian distribution and whose output is taken as the prior. The code generator is trained by minimizing the GAN loss between the distribution coming out of the decoder and the real image distribution. The paper also modifies the AAE by replacing the L2 loss in the pixel domain with a "learned similarity metric" loss inspired by earlier work (Larsen et al., 2015). The contribution of the paper is specific to AAEs, which makes the scope narrow. Even there, the benefits of learning the prior using the proposed method are not clear. Experiments make two claims: (i) improved image generation over AAE, (ii) improved "disentanglement". Towards (i), the paper compares images generated by AAE with those generated by their model. However, it is not clear if the improved generation quality is due to the use of a decoder loss on the learned similarity metric (Larsen et al., 2015), or due to the use of a GAN loss in the image space (i.e., just having a GAN loss over the decoder's output without having a code generator), or due to learning the prior, which is the main contribution of the paper. This has also been hinted at by AnonReviewer1. Hence, it's not clear if the sharper generated images are really due to the learned prior. Towards (ii), the paper uses an InfoGAN-inspired objective to generate class-conditional images. It shows the class-conditional generated images for AAE and the proposed method. Here AAE is also trained on the "learned similarity metric" and augmented with a similar InfoGAN-type objective, so the only difference is in the prior. The authors say the performance of both models is similar on MNIST and SVHN, but on CIFAR their model with the "learned prior" generates images that match the conditioned-upon labels better.
However, this claim is also subjective/qualitative, and even if true, it is not clear if this is due to the learned prior or due to the extra GAN discriminator loss in the image space -- in other words, how do the results look for AAE + a discriminator in the image space, just like in the proposed model but without a code generator? The t-SNE plots for the learned prior are also shown, but they are only shown when the InfoGAN loss is added. The same plots are not shown for AAE with an added InfoGAN loss, so it is difficult to know the benefits of learning the code generator as proposed. Overall, I feel the scope of the paper is narrow and the benefits of learning the prior using the method proposed in the paper are not clearly established by the reported experiments. I am hesitant to recommend acceptance to the main conference in its current form.
train
[ "ByzPtktlM", "BkD44d9gM", "ByjrTO5ef", "H1csCkCmz", "ryz-A1CXz", "HkKhMOgEM", "Hk2EakRmf", "SJvH3J0XM", "SkCykNYCb" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "public" ]
[ "This paper proposes an interesting idea--to learn a flexible prior from data by maximizing data likelihood.\n\nIt seems that in the prior improvement stage, what you do is training a GAN with CG+dec as the generator while D_I as the discriminator (since you also update dec at the prior improvement stage). So it ca...
[ 6, 5, 6, -1, -1, -1, -1, -1, -1 ]
[ 3, 3, 4, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_rJSr0GZR-", "iclr_2018_rJSr0GZR-", "iclr_2018_rJSr0GZR-", "SkCykNYCb", "ByzPtktlM", "Hk2EakRmf", "BkD44d9gM", "ByjrTO5ef", "iclr_2018_rJSr0GZR-" ]
iclr_2018_HyI6s40a-
Towards Safe Deep Learning: Unsupervised Defense Against Generic Adversarial Attacks
Recent advances in adversarial Deep Learning (DL) have opened up a new and largely unexplored surface for malicious attacks jeopardizing the integrity of autonomous DL systems. We introduce a novel automated countermeasure called Parallel Checkpointing Learners (PCL) to thwart potential adversarial attacks and significantly improve the reliability (safety) of a victim DL model. The proposed PCL methodology is unsupervised, meaning that no adversarial sample is leveraged to build/train the parallel checkpointing learners. We formalize the goal of preventing adversarial attacks as an optimization problem to minimize the rarely observed regions in the latent feature space spanned by a DL network. To solve the aforementioned minimization problem, a set of complementary but disjoint checkpointing modules are trained and leveraged to validate the victim model execution in parallel. Each checkpointing learner explicitly characterizes the geometry of the input data and the corresponding high-level data abstractions within a particular DL layer. As such, the adversary is required to simultaneously deceive all the defender modules in order to succeed. We extensively evaluate the performance of the PCL methodology against state-of-the-art attack scenarios, including Fast-Gradient-Sign (FGS), Jacobian Saliency Map Attack (JSMA), Deepfool, and the Carlini & Wagner L2 algorithm. Extensive proof-of-concept evaluations on various data collections, including MNIST, CIFAR10, and ImageNet, corroborate the effectiveness of our proposed defense mechanism against adversarial samples.
rejected-papers
The paper proposes a method to detect and correct adversarial examples at the input stage (using a sparse coding based model) and/or at a hidden layer (using a GMM). These detector/corrector models are trained using only the natural examples. While the proposed method is interesting and has some novelty with respect to the specific models used for detection/correction (i.e., sparse coding and GMMs), there are crucial gaps in the empirical studies: - It does not compare with the highly relevant prior work MagNet (Meng and Chen, 2017), which also detects and corrects adversarial examples by modeling the distribution of the natural examples - The attacks used in the evaluations do not consider the setting where the existence (and architecture) of the defender models is known to the attacker - It does not evaluate the method on the stronger PGD attack (also known as iterative FGSM)
train
[ "rJ7exuPgM", "SJHkBN_xM", "HkkqUpk-f", "BJ9P82pmz", "Hklf_pYzf", "rJL6vTYMf", "H1R6UTKzM", "rkSaXaKMz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper present a method for detecting adversarial examples in a deep learning classification setting. The idea is to characterize the latent feature space (a function of inputs) as observed vs unobserved, and use a module to fit a 'cluster-aware' loss that aims to cluster similar classes tighter in the latent...
[ 5, 7, 3, -1, -1, -1, -1, -1 ]
[ 3, 3, 5, -1, -1, -1, -1, -1 ]
[ "iclr_2018_HyI6s40a-", "iclr_2018_HyI6s40a-", "iclr_2018_HyI6s40a-", "rkSaXaKMz", "SJHkBN_xM", "rJ7exuPgM", "SJHkBN_xM", "HkkqUpk-f" ]
iclr_2018_ryZERzWCZ
The Information-Autoencoding Family: A Lagrangian Perspective on Latent Variable Generative Modeling
A variety of learning objectives have been recently proposed for training generative models. We show that many of them, including InfoGAN, ALI/BiGAN, ALICE, CycleGAN, VAE, β-VAE, adversarial autoencoders, AVB, and InfoVAE, are Lagrangian duals of the same primal optimization problem. This generalization reveals the implicit modeling trade-offs between flexibility and computational requirements being made by these models. Furthermore, we characterize the class of all objectives that can be optimized under certain computational constraints. Finally, we show how this new Lagrangian perspective can explain undesirable behavior of existing methods and provide new principled solutions.
rejected-papers
The paper provides a constrained mutual information objective function whose Lagrangian dual covers several existing generative models. However, reviewers are not convinced of the significance or usefulness of the proposed unifying framework (at least as the results are currently presented in the paper). The authors have not taken any steps towards revising the paper to address these concerns. The presentation needs to be improved to bring out the significance and utility of the proposed unifying framework.
val
[ "S1ufxZqlG", "SkugmHtgf", "BJ8bKuOlM", "B1A1_t67z", "SJ2PZA-XM", "BycXcabXz", "rkckcaWXM", "HJqtmElmf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author" ]
[ "EDIT: I have read the authors' rebuttals and other reviews. My opinion has not been changed. I recommend the authors significantly revise their work, streamlining the narrative and making clear what problems and solutions they solve. While I enjoy the perspective of unifying various paths, it's unclear what insigh...
[ 4, 5, 6, -1, -1, -1, -1, -1 ]
[ 4, 4, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ryZERzWCZ", "iclr_2018_ryZERzWCZ", "iclr_2018_ryZERzWCZ", "SJ2PZA-XM", "BycXcabXz", "BJ8bKuOlM", "SkugmHtgf", "S1ufxZqlG" ]
iclr_2018_H1wt9x-RW
Interpretable and Pedagogical Examples
Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iteratively, rather than jointly, can produce interpretable teaching strategies. We evaluate interpretability by (1) measuring the similarity of the teacher's emergent strategies to intuitive strategies in each domain and (2) conducting human experiments to evaluate how effective the teacher's strategies are at teaching humans. We show that the teacher network learns to select or generate interpretable, pedagogical examples to teach rule-based, probabilistic, boolean, and hierarchical concepts.
rejected-papers
The paper proposes iterative training strategies for learning teacher and student models. The authors show how iterative training, unlike joint training, can lead to interpretable strategies on multiple datasets. All the reviewers felt the idea was interesting, although one of the reviewers had concerns about the experimentation. However, there is a BIG problem with this submission: the author names appear in the manuscript, thereby violating anonymity.
train
[ "Bk8IzGblG", "Hk2pegqlf", "r1RtXlk-f", "B1U8nvY7G", "H1Et5DFmf", "BJYBqDFmG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This is a well written paper on a compelling topic: how to train \"an automated teacher\" to use intuitive strategies that would also apply to humans. \n\nThe introduction is fairly strong, but this reviewer wishes that the authors would have come up with an intuitive example that illustrates why the strategy \"1...
[ 8, 8, 4, -1, -1, -1 ]
[ 3, 4, 3, -1, -1, -1 ]
[ "iclr_2018_H1wt9x-RW", "iclr_2018_H1wt9x-RW", "iclr_2018_H1wt9x-RW", "Hk2pegqlf", "Bk8IzGblG", "r1RtXlk-f" ]
iclr_2018_B13EC5u6W
Thinking like a machine — generating visual rationales through latent space optimization
Interpretability and small labelled datasets are key issues in the practical application of deep learning, particularly in areas such as medicine. In this paper, we present a semi-supervised technique that addresses both these issues simultaneously. We learn dense representations from large unlabelled image datasets, then use those representations to both learn classifiers from small labeled sets and generate visual rationales explaining the predictions. Using chest radiography diagnosis as a motivating application, we show our method has good generalization ability by learning to represent our chest radiography dataset while training a classifier on a separate set from a different institution. Our method identifies heart failure and other thoracic diseases. For each prediction, we generate visual rationales for positive classifications by optimizing a latent representation to minimize the probability of disease while constrained by a similarity measure in image space. Decoding the resultant latent representation produces an image without apparent disease. The difference between the original and the altered image forms an interpretable visual rationale for the algorithm's prediction. Our method simultaneously produces visual rationales that compare favourably to previous techniques and a classifier that outperforms the current state-of-the-art.
rejected-papers
The paper proposes a semi-supervised method to make deep learning more interpretable while remaining accurate on small datasets. The main idea is to learn dense representations from unlabelled data and then use those representations to build classifiers on small datasets as well as to generate visual explanations. The idea is interesting; however, as one reviewer points out, the presentation is poor. For instance, Table 2 is not understandable. Given the high standards of ICLR, this cannot be ignored, especially since the authors had the benefit of updating the paper, which is a luxury for conference submissions.
test
[ "BJvBh8sEz", "HJ8fKmLVz", "SkjR3ZUNM", "r1MKh-INz", "rJIBvHSEz", "ByWcetHlG", "rygt6qdxM", "SkMSOTOlG", "S1Hw58XEz", "H1Qslp1XG", "BJY8gayXz", "r1QHxTJXM", "SJ2XQTJQf" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "Thank you for your reply - as per your request Table 2 has been updated to include some of our response to Reviewer 1 as a caption to help understand the table's contents. \n\n", "Thank you for updating the paper. I am satisfied with the changes.\n\nHowever, and as noted by the other reviewers, the description o...
[ -1, -1, -1, -1, -1, 4, 8, 7, -1, -1, -1, -1, -1 ]
[ -1, -1, -1, -1, -1, 3, 2, 4, -1, -1, -1, -1, -1 ]
[ "HJ8fKmLVz", "r1QHxTJXM", "rJIBvHSEz", "S1Hw58XEz", "BJY8gayXz", "iclr_2018_B13EC5u6W", "iclr_2018_B13EC5u6W", "iclr_2018_B13EC5u6W", "H1Qslp1XG", "ByWcetHlG", "rygt6qdxM", "SkMSOTOlG", "iclr_2018_B13EC5u6W" ]
iclr_2018_ByYPLJA6W
Distribution Regression Network
We introduce our Distribution Regression Network (DRN) which performs regression from input probability distributions to output probability distributions. Compared to existing methods, DRN learns with fewer model parameters and easily extends to multiple input and multiple output distributions. On synthetic and real-world datasets, DRN performs similarly or better than the state-of-the-art. Furthermore, DRN generalizes the conventional multilayer perceptron (MLP). In the framework of MLP, each node encodes a real number, whereas in DRN, each node encodes a probability distribution.
rejected-papers
The paper proposes a method to map input probability distributions to output probability distributions with few parameters. The authors show the efficacy of their method on synthetic and real stock data. After revision, they seem to have added another dataset; however, it is not analyzed as carefully as the stock data. More rigorous experimentation needs to be done to justify the method.
train
[ "B10sNZ9gM", "ByjbpsngM", "B1LsdVTlf", "HJtDnjDGG", "HJSgXjgGz", "SkS3fjgfG", "SJs8MoxfM", "Sym4GjlGM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author" ]
[ "The paper considers distribution to distribution regression with MLPs. The authors use an energy function based approach. They test on a few problems, showing similar performance to other distribution to distribution alternatives, but requiring fewer parameters.\n\nThis seems to be a nice treatment of distributi...
[ 5, 7, 7, -1, -1, -1, -1, -1 ]
[ 4, 2, 4, -1, -1, -1, -1, -1 ]
[ "iclr_2018_ByYPLJA6W", "iclr_2018_ByYPLJA6W", "iclr_2018_ByYPLJA6W", "SkS3fjgfG", "B10sNZ9gM", "ByjbpsngM", "B1LsdVTlf", "iclr_2018_ByYPLJA6W" ]
iclr_2018_ry9tUX_6-
Entropy-SGD optimizes the prior of a PAC-Bayes bound: Data-dependent PAC-Bayes priors via differential privacy
We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning algorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior) classifier, i.e., a randomized classifier obtained by a risk-sensitive perturbation of the weights of a learned classifier. Entropy-SGD works by optimizing the bound’s prior, violating the hypothesis of the PAC-Bayes theorem that the prior is chosen independently of the data. Indeed, available implementations of Entropy-SGD rapidly obtain zero training error on random labels and the same holds of the Gibbs posterior. In order to obtain a valid generalization bound, we show that an ε-differentially private prior yields a valid PAC-Bayes bound, a straightforward consequence of results connecting generalization with differential privacy. Using stochastic gradient Langevin dynamics (SGLD) to approximate the well-known exponential release mechanism, we observe that generalization error on MNIST (measured on held out data) falls within the (empirically nonvacuous) bounds computed under the assumption that SGLD produces perfect samples. In particular, Entropy-SGLD can be configured to yield relatively tight generalization bounds and still fit real labels, although these same settings do not obtain state-of-the-art performance.
rejected-papers
The paper proposes a new analysis of the optimization method called Entropy-SGD, which seemingly leads to more robust neural network classifiers; this is a very important problem if successful. The reviewers are on the fence about this paper. On the one hand, they appreciate the direction and theoretical contribution; on the other, they feel the assumptions are not clearly elucidated or justified. This is important for such a paper. The author responses have not helped in alleviating these concerns. As one of the reviewers points out, the writing needs a massive overhaul. I would suggest the authors clearly state their assumptions and the corresponding justifications in future submissions of this work.
train
[ "Skza1ggrG", "Bk1HygxSM", "ryq2cm9xG", "r1dNqr9xf", "Hy0bdarZG", "Hk9z76OfG", "BkbCG6dzM", "SJMcMTOMz", "rJXPfp_GM" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "We revised our paper considerably over a month ago. We have since had a long back and forth conversation with AnonReviewer3 discussing the privacy approximation, which seems to have addressed their misgivings. \n\nWe would much appreciate it if you could update your reviews and/or score. ", "Dear AnonReviewer1,\...
[ -1, -1, 6, 6, 6, -1, -1, -1, -1 ]
[ -1, -1, 3, 3, 3, -1, -1, -1, -1 ]
[ "ryq2cm9xG", "r1dNqr9xf", "iclr_2018_ry9tUX_6-", "iclr_2018_ry9tUX_6-", "iclr_2018_ry9tUX_6-", "iclr_2018_ry9tUX_6-", "ryq2cm9xG", "r1dNqr9xf", "Hy0bdarZG" ]
iclr_2018_HJUOHGWRb
Contextual Explanation Networks
We introduce contextual explanation networks (CENs)---a class of models that learn to predict by generating and leveraging intermediate explanations. CENs are deep networks that generate parameters for context-specific probabilistic graphical models which are further used for prediction and play the role of explanations. Contrary to the existing post-hoc model-explanation tools, CENs learn to predict and to explain jointly. Our approach offers two major advantages: (i) for each prediction, valid instance-specific explanations are generated with no computational overhead and (ii) prediction via explanation acts as a regularization and boosts performance in low-resource settings. We prove that local approximations to the decision boundary of our networks are consistent with the generated explanations. Our results on image and text classification and survival analysis tasks demonstrate that CENs are competitive with the state-of-the-art while offering additional insights behind each prediction, valuable for decision support.
rejected-papers
The paper proposes a method to learn and explain simultaneously. The explanations are generated as part of the learning and in some sense come for free. It also works the other way, in that the explanations help performance in low-resource settings. Reviewers found the paper easy to follow and the idea has some value; however, the discussion of related work is sparse, and consequently the comparison to existing state-of-the-art explanation methods is also sparse. These are nontrivial concerns which should have been addressed in the main article, not hidden away in the supplement.
test
[ "H1wsCJjez", "Bk-6h6Txz", "B1E57a-ZG", "SkESQBOMz", "ryOeXB_Mf", "r1SpfB_fG", "ryKwMH_Mz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "the paper is clearly written; it works on a popular idea of combining graphical models and neural nets.\n\nthis work could benefit from differentiating more from previous literature.\n\none key component is interpretability, which comes from the use of graphical models. the authors claim that the previous art dir...
[ 6, 6, 6, -1, -1, -1, -1 ]
[ 5, 2, 3, -1, -1, -1, -1 ]
[ "iclr_2018_HJUOHGWRb", "iclr_2018_HJUOHGWRb", "iclr_2018_HJUOHGWRb", "H1wsCJjez", "Bk-6h6Txz", "B1E57a-ZG", "iclr_2018_HJUOHGWRb" ]
iclr_2018_HyPpD0g0Z
Grouping-By-ID: Guarding Against Adversarial Domain Shifts
When training a deep neural network for supervised image classification, one can broadly distinguish between two types of latent features of images that will drive the classification of class Y. Following the notation of Gong et al. (2016), we can divide features broadly into the classes of (i) “core” or “conditionally invariant” features X^ci whose distribution P(X^ci | Y) does not change substantially across domains and (ii) “style” or “orthogonal” features X^orth whose distribution P(X^orth | Y) can change substantially across domains. These latter orthogonal features would generally include features such as position, rotation, image quality or brightness but also more complex ones like hair color or posture for images of persons. We try to guard against future adversarial domain shifts by ideally just using the “conditionally invariant” features for classification. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable. We can hence not directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called identifier or ID variable. We might know, for example, that two images show the same person, with ID referring to the identity of the person. In data augmentation, we generate several images from the same original image, with ID referring to the relevant original image. The method requires only a small fraction of images to have an ID variable. We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016). However, we are interested in settings where we cannot observe the domain directly and we treat domain as a latent variable. If two or more samples share the same class and identifier, (Y, ID)=(y,i), then we treat those samples as counterfactuals under different style interventions on the orthogonal or style features. 
Using this grouping-by-ID approach, we regularize the network to provide near constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This is shown to substantially improve performance in settings where domains change in terms of image quality, brightness, color changes, and more complex changes such as changes in movement and posture. We show links to questions of interpretability, fairness and transfer learning.
rejected-papers
The paper proposes a method to robustify neural networks, which is an important problem. The authors use ideas from causality to create a model that depends only on "stable" features, ignoring the easily manipulated ones. The paper has some interesting ideas; however, the main concern is the insufficient comparison to existing literature. One of the reviewers also has concerns regarding the novelty of the approach.
train
[ "rJqVoyclf", "SkKuc-Kef", "BkDbv9qlM", "HyyaEhBbG", "SJ0jX3Bbz", "rJmAGnrbG", "ryWwGhrWf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The paper discusses ways to guard against adversarial domain shifts with so-called counterfactual regularization. The main idea is that in several datasets there are many instances of images for the same object/person, and that taking this into account by learning a classifier that is invariant to the superficial ...
[ 7, 4, 5, -1, -1, -1, -1 ]
[ 3, 5, 4, -1, -1, -1, -1 ]
[ "iclr_2018_HyPpD0g0Z", "iclr_2018_HyPpD0g0Z", "iclr_2018_HyPpD0g0Z", "SkKuc-Kef", "rJqVoyclf", "BkDbv9qlM", "BkDbv9qlM" ]
iclr_2018_H1xJjlbAZ
INTERPRETATION OF NEURAL NETWORK IS FRAGILE
In order for machine learning to be deployed and trusted in many applications, it is crucial to be able to reliably explain why the machine learning algorithm makes certain predictions. For example, if an algorithm classifies a given pathology image to be a malignant tumor, then the doctor may need to know which parts of the image led the algorithm to this classification. How to interpret black-box predictors is thus an important and active area of research. A fundamental question is: how much can we trust the interpretation itself? In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptibly indistinguishable inputs with the same predicted label can be assigned very different interpretations. We systematically characterize the fragility of the interpretations generated by several widely-used feature-importance interpretation methods (saliency maps, integrated gradient, and DeepLIFT) on ImageNet and CIFAR-10. Our experiments show that even small random perturbation can change the feature importance and new systematic perturbations can lead to dramatically different interpretations without changing the label. We extend these results to show that interpretations based on exemplars (e.g. influence functions) are similarly fragile. Our analysis of the geometry of the Hessian matrix gives insight on why fragility could be a fundamental challenge to the current interpretation approaches.
rejected-papers
The paper tries to show that many state-of-the-art interpretability methods are brittle and do not provide consistent, stable explanations. The authors show this by perturbing (even randomly) the inputs so that the differences are imperceptible to a human observer but the interpretability methods provide completely different explanations. Although the output class is maintained before and after the perturbation, it is not clear to me or the reviewers why one shouldn't have different explanations. The difference in explanations can be attributed to the fragility of the learned models (highly non-smooth decision boundaries) rather than to the explanation methods. This is a critical point and has to come out more clearly in the paper.
test
[ "HJIRZCbeG", "S1Z4Lgqez", "Sk-uqZ9xG", "rkjVKPJGG", "H1XWYv1GM", "SkDAOvkfz", "HktDzGqgf", "BkgXE1qef", "SJBmRuQxG", "S1UARBXgG", "HkJSs4Xlf", "Sk7hGWhkM", "SJtR6e3kz", "BJhbOjiJG", "HyNgEDoJM", "SylgNFXJf", "ryEWYqGJM", "HJAUq8CA-", "r1g2UGCRZ", "HyA39DpR-" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "public", "public", "author", "public", "author", "public", "author", "public", "public", "author", "public", "author", "public" ]
[ "The authors study cases where interpretation of deep learning predictions is extremely fragile. They systematically characterize the fragility of several widely-used feature-importance interpretation methods. In general, questioning the reliability of the visualization techniques is interesting. Regarding the tech...
[ 6, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ 2, 4, 5, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1 ]
[ "iclr_2018_H1xJjlbAZ", "iclr_2018_H1xJjlbAZ", "iclr_2018_H1xJjlbAZ", "HJIRZCbeG", "S1Z4Lgqez", "Sk-uqZ9xG", "BkgXE1qef", "iclr_2018_H1xJjlbAZ", "S1UARBXgG", "HkJSs4Xlf", "iclr_2018_H1xJjlbAZ", "SJtR6e3kz", "BJhbOjiJG", "HyNgEDoJM", "iclr_2018_H1xJjlbAZ", "ryEWYqGJM", "HJAUq8CA-", "...
iclr_2018_S1EzRgb0W
Explaining the Mistakes of Neural Networks with Latent Sympathetic Examples
Neural networks make mistakes. The reason why a mistake is made often remains a mystery. As such neural networks often are considered a black box. It would be useful to have a method that can give an explanation that is intuitive to a user as to why an image is misclassified. In this paper we develop a method for explaining the mistakes of a classifier model by visually showing what must be added to an image such that it is correctly classified. Our work combines the fields of adversarial examples, generative modeling and a correction technique based on difference target propagation to create a technique that generates explanations of why an image is misclassified. In this paper we explain our method and demonstrate it on MNIST and CelebA. This approach could aid in demystifying neural networks for a user.
rejected-papers
The paper proposes a way to find out why a classifier misclassified a certain instance. It finds perturbations in the input space to identify the reasons for the misclassification. The reviewers feel that the idea is interesting; however, it is insufficiently evaluated. Even for the datasets they do evaluate on, not enough examples of success are provided. In fact, for CelebA the results are far from flattering.
train
[ "BJncxx9gf", "HJ3Gw8clz", "B1IHpaWWM", "BkKZxuaQz", "S1an1uaQz", "HyhKJ_pXM", "rJ072Pa7f" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a method for explaining the classification mistakes of neural networks. For a misclassified image, gradient descent is used to find the minimal change to the input image so that it will be correctly classified. \n\nMy understanding is that the proposed method does not explain why a classifier m...
[ 4, 4, 6, -1, -1, -1, -1 ]
[ 5, 3, 4, -1, -1, -1, -1 ]
[ "iclr_2018_S1EzRgb0W", "iclr_2018_S1EzRgb0W", "iclr_2018_S1EzRgb0W", "BJncxx9gf", "HJ3Gw8clz", "B1IHpaWWM", "iclr_2018_S1EzRgb0W" ]
iclr_2018_r1Oen--RW
The (Un)reliability of saliency methods
Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step ---adding a mean shift to the input data--- to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. We define input invariance as the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy an input invariance property are unreliable and can lead to misleading and inaccurate attribution.
rejected-papers
This paper shows how saliency methods are brittle and cannot be trusted to yield robust explanations. The authors define a property called input invariance that they claim all reliable explanation methods must possess. The reviewers have concerns regarding the motivation for this property, i.e., why it is needed; this is not clear from the exposition. Moreover, even after having the opportunity to update the manuscript, the authors seem not to have touched upon this issue other than providing a generic response.
train
[ "BJ6e_ttgf", "HyVpmntgG", "B1nPks1-f", "BkVa3BTQz", "By_xnrTQf", "S14SnSpmM", "rk1tiS67M" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "The scope of the paper is interesting i.e. taking a closer look at saliency methods in view of explaining deep learning neural networks. The authors state that saliency methods that do not satisfy an input invariance property can be misleading.\n\nOn the other hand the paper can be improved in my opinion in differ...
[ 5, 4, 4, -1, -1, -1, -1 ]
[ 3, 4, 4, -1, -1, -1, -1 ]
[ "iclr_2018_r1Oen--RW", "iclr_2018_r1Oen--RW", "iclr_2018_r1Oen--RW", "BJ6e_ttgf", "B1nPks1-f", "HyVpmntgG", "iclr_2018_r1Oen--RW" ]
iclr_2018_SJPpHzW0-
Influence-Directed Explanations for Deep Convolutional Networks
We study the problem of explaining a rich class of behavioral properties of deep neural networks. Our influence-directed explanations approach this problem by peering inside the network to identify neurons with high influence on the property of interest using an axiomatically justified influence measure, and then providing an interpretation for the concepts these neurons represent. We evaluate our approach by training convolutional neural networks on Pubfig, ImageNet, and Diabetic Retinopathy datasets. Our evaluation demonstrates that influence-directed explanations (1) localize features used by the network, (2) isolate features distinguishing related instances, (3) help extract the essence of what the network learned about the class, and (4) assist in debugging misclassifications.
rejected-papers
The paper defines a new measure of influence and uses it to highlight important features. The definition is novel; however, the reviewers have concerns regarding its significance and novelty, and a thorough empirical comparison to existing literature is missing.
train
[ "ryI_k_PeM", "r1ieZZcxf", "HyPSFK2gf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Notions of \"influence\" have become popular recently, and these notions try to understand how the output of a classifier or a learning algorithm is influenced by its training set. In this paper, the authors propose a way to measure influence that satisfies certain axioms. This notion of influence may be used to i...
[ 5, 4, 4 ]
[ 3, 5, 3 ]
[ "iclr_2018_SJPpHzW0-", "iclr_2018_SJPpHzW0-", "iclr_2018_SJPpHzW0-" ]
iclr_2018_rJhR_pxCZ
Interpretable Classification via Supervised Variational Autoencoders and Differentiable Decision Trees
As deep learning-based classifiers are increasingly adopted in real-world applications, the importance of understanding how a particular label is chosen grows. Single decision trees are an example of a simple, interpretable classifier, but are unsuitable for use with complex, high-dimensional data. On the other hand, the variational autoencoder (VAE) is designed to learn a factored, low-dimensional representation of data, but typically encodes high-likelihood data in an intrinsically non-separable way. We introduce the differentiable decision tree (DDT) as a modular component of deep networks and a simple, differentiable loss function that allows for end-to-end optimization of a deep network to compress high-dimensional data for classification by a single decision tree. We also explore the power of labeled data in a supervised VAE (SVAE) with a Gaussian mixture prior, which leverages label information to produce a high-quality generative model with improved bounds on log-likelihood. We combine the SVAE with the DDT to get our classifier+VAE (C+VAE), which is competitive in both classification error and log-likelihood, despite optimizing both simultaneously and using a very simple encoder/decoder architecture.
rejected-papers
The paper proposes a new model called the differentiable decision tree, which captures the benefits of decision trees and VAEs. The method is evaluated only on the MNIST dataset. The reviewers thus rightly complain that the evaluation is insufficient, and one also questions the technical novelty.
train
[ "SJCjWdiJG", "H1v-LprxG", "HktiHfugG", "Sy9KgIP-G", "rJ09X3Tlf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "\nSummary\n\nThis paper proposes a hybrid model (C+VAE)---a variational autoencoder (VAE) composed with a differentiable decision tree (DDT)---and an accompanying training scheme. Firstly, the prior is specified as a mixture distribution with one component per class (SVAE). During training, the ELBO’s KL term us...
[ 3, 4, 5, -1, -1 ]
[ 5, 4, 4, -1, -1 ]
[ "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ", "iclr_2018_rJhR_pxCZ" ]
iclr_2018_B1ydPgTpW
Predicting Auction Price of Vehicle License Plate with Deep Recurrent Neural Network
In Chinese societies, superstition is of paramount importance, and vehicle license plates with desirable numbers can fetch very high prices in auctions. Unlike other valuable items, license plates are not allocated an estimated price before auction. I propose that the task of predicting plate prices can be viewed as a natural language processing (NLP) task, as the value depends on the meaning of each individual character on the plate and its semantics. I construct a deep recurrent neural network (RNN) to predict the prices of vehicle license plates in Hong Kong, based on the characters on a plate. I demonstrate the importance of having a deep network and of retraining. Evaluated on 13 years of historical auction prices, the deep RNN's predictions can explain over 80 percent of price variations, outperforming previous models by a significant margin. I also demonstrate how the model can be extended to become a search engine for plates and to provide estimates of the expected price distribution.
rejected-papers
Reviewers concur that the paper and the application area are interesting but that the approaches are not sufficiently novel to justify presentation at ICLR.
val
[ "r1ffKGS4M", "rkMjOyqlM", "BJT4Wx5ez", "Bk3B7T5gf", "ryXwsKp7M", "S1xQ8F67M" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "Thank you for your detailed comments and suggestions. The following are improvements I have made:\n- The odd reference in the introduction was in response to a referee's inquiry in a previous submission. It has been removed. The introduction has also been shortened.\n- Citation has been added for Akita et al.\n- T...
[ -1, 6, 4, 4, -1, -1 ]
[ -1, 5, 4, 4, -1, -1 ]
[ "BJT4Wx5ez", "iclr_2018_B1ydPgTpW", "iclr_2018_B1ydPgTpW", "iclr_2018_B1ydPgTpW", "Bk3B7T5gf", "rkMjOyqlM" ]
iclr_2018_HJcjQTJ0W
PrivyNet: A Flexible Framework for Privacy-Preserving Deep Neural Network Training
Massive data exist among user local platforms that usually cannot support deep neural network (DNN) training due to computation and storage resource constraints. Cloud-based training schemes provide beneficial services but suffer from potential privacy risks due to excessive user data collection. To enable cloud-based DNN training while protecting the data privacy simultaneously, we propose to leverage the intermediate representations of the data, which is achieved by splitting the DNNs and deploying them separately onto local platforms and the cloud. The local neural network (NN) is used to generate the feature representations. To avoid local training and protect data privacy, the local NN is derived from pre-trained NNs. The cloud NN is then trained based on the extracted intermediate representations for the target learning task. We validate the idea of DNN splitting by characterizing the dependency of privacy loss and classification accuracy on the local NN topology for a convolutional NN (CNN) based image classification task. Based on the characterization, we further propose PrivyNet to determine the local NN topology, which optimizes the accuracy of the target learning task under the constraints on privacy loss, local computation, and storage. The efficiency and effectiveness of PrivyNet are demonstrated with the CIFAR-10 dataset.
rejected-papers
Reviews are marginal. I concur with the two less-favorable reviews that the metrics for privacy protection are not sufficiently strong for preserving privacy.
train
[ "HyAGOnOgM", "ryOaYRdez", "SJKRQb5lz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "1. This is an interesting paper - introduces useful concepts such as the formulation of the utility and privacy loss functions with respect to the learning paradigm\n2. From the initial part of the paper, it seems that the proposed PrivyNet is supposed to be a meta-learning framework to split a DNN in order to imp...
[ 6, 5, 3 ]
[ 5, 3, 3 ]
[ "iclr_2018_HJcjQTJ0W", "iclr_2018_HJcjQTJ0W", "iclr_2018_HJcjQTJ0W" ]
iclr_2018_H1DJFybC-
Learning to Infer Graphics Programs from Hand-Drawn Images
We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of \LaTeX.~The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitives are like a trace of the set of primitive commands issued by a graphics program. We learn a model that uses program synthesis techniques to recover a graphics program from that trace. These programs have constructs like variable bindings, iterative loops, or simple kinds of conditionals. With a graphics program in hand, we can correct errors made by the deep network and extrapolate drawings. Taken together these results are a step towards agents that induce useful, human-readable programs from perceptual input.
rejected-papers
The paper addresses an interesting problem, is novel and works. While the paper improved through reviews + rebuttal, the reviewers still find the presentation lacking.
train
[ "ryUoLK6VG", "Sk-ZlwcgG", "B1Te809gM", "HJR0yoJ-z", "B1Rzx_Zzf", "HyOFkdZMG", "ryAfyd-fz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "I think the paper became better. However, it still needs more work.\n\nOverall, it is not very clear what to be solved in the paper -- if they want to verify the trace hypothesis, or they want to show that the combination of the proposed components is important to build a system for the problem, or the improvement...
[ -1, 4, 6, 4, -1, -1, -1 ]
[ -1, 4, 4, 2, -1, -1, -1 ]
[ "ryAfyd-fz", "iclr_2018_H1DJFybC-", "iclr_2018_H1DJFybC-", "iclr_2018_H1DJFybC-", "Sk-ZlwcgG", "B1Te809gM", "HJR0yoJ-z" ]
iclr_2018_HJWGdbbCW
Reinforcement and Imitation Learning for Diverse Visuomotor Skills
We propose a general deep reinforcement learning method and apply it to robot manipulation tasks. Our approach leverages demonstration data to assist a reinforcement learning agent in learning to solve a wide range of tasks, mainly previously unsolved. We train visuomotor policies end-to-end to learn a direct mapping from RGB camera inputs to joint velocities. Our experiments indicate that our reinforcement and imitation approach can solve contact-rich robot manipulation tasks that neither the state-of-the-art reinforcement nor imitation learning method can solve alone. We also illustrate that these policies achieved zero-shot sim2real transfer by training with large visual and dynamics variations.
rejected-papers
While the reviewers agree that this paper does provide a contribution, it is small and does overlap with several concurrent works. It is a bit hand-engineered. The authors have provided a lengthy rebuttal, but the final reviews are not strong enough.
train
[ "rySp4xnBf", "Bkc_ExhHf", "ByNRqb5lz", "B1oZGo_ez", "ry6Xu6UEz", "r1C8JhrNz", "HJge1dvgz", "rJA2tLHQM", "BkVdKIrmf" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author" ]
[ "We thank this reviewer for the additional feedback. We would like to address the reviewer’s comments on the use of simulation and the amount of hand engineering in this work. We will also make an effort to clearly describe our engineering components in the next version of the draft.\n\nWe acknowledged that simulat...
[ -1, -1, 4, 4, -1, -1, 6, -1, -1 ]
[ -1, -1, 4, 4, -1, -1, 5, -1, -1 ]
[ "ry6Xu6UEz", "r1C8JhrNz", "iclr_2018_HJWGdbbCW", "iclr_2018_HJWGdbbCW", "B1oZGo_ez", "HJge1dvgz", "iclr_2018_HJWGdbbCW", "BkVdKIrmf", "iclr_2018_HJWGdbbCW" ]
iclr_2018_S1FFLWWCZ
LSD-Net: Look, Step and Detect for Joint Navigation and Multi-View Recognition with Deep Reinforcement Learning
Multi-view recognition is the task of classifying an object from multi-view image sequences. Instead of using a single-view for classification, humans generally navigate around a target object to learn its multi-view representation. Motivated by this human behavior, the next best view can be learned by combining object recognition with navigation in complex environments. Since deep reinforcement learning has proven successful in navigation tasks, we propose a novel multi-task reinforcement learning framework for joint multi-view recognition and navigation. Our method uses a hierarchical action space for multi-task reinforcement learning. The framework was evaluated with an environment created from the ModelNet40 dataset. Our results show improvements on object recognition and demonstrate human-like behavior on navigation.
rejected-papers
This paper describes active vision for object recognition learned in an RL framework. Reviewers think the paper is not of sufficient quality: insufficient detail and insufficient evaluation. While the authors have provided a lengthy rebuttal, the shortcomings have not yet been addressed in the paper.
train
[ "rJKiKBGef", "Bk9Z3ZQlG", "HJwAZZvxG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Paper Summary: The paper proposes an approach to perform object classification and changing the viewpoint simultaneously. The idea is that the viewpoint changes until the object is recognized. The results have been reported on ModelNet40.\n\nPaper Strength: The idea of combining active vision with object classific...
[ 4, 6, 3 ]
[ 4, 4, 4 ]
[ "iclr_2018_S1FFLWWCZ", "iclr_2018_S1FFLWWCZ", "iclr_2018_S1FFLWWCZ" ]
iclr_2018_SkmM6M_pW
Egocentric Spatial Memory Network
Inspired by neurophysiological discoveries of navigation cells in the mammalian brain, we introduce the first deep neural network architecture for modeling Egocentric Spatial Memory (ESM). It learns to estimate the pose of the agent and progressively construct top-down 2D global maps from egocentric views in a spatially extended environment. During the exploration, our proposed ESM network model updates belief of the global map based on local observations using a recurrent neural network. It also augments the local mapping with a novel external memory to encode and store latent representations of the visited places based on their corresponding locations in the egocentric coordinate. This enables the agents to perform loop closure and mapping correction. This work contributes in the following aspects: first, our proposed ESM network provides an accurate mapping ability which is vitally important for embodied agents to navigate to goal locations. In the experiments, we demonstrate the functionalities of the ESM network in random walks in complicated 3D mazes by comparing with several competitive baselines and state-of-the-art Simultaneous Localization and Mapping (SLAM) algorithms. Secondly, we faithfully hypothesize the functionality and the working mechanism of navigation cells in the brain. Comprehensive analysis of our model suggests the essential role of individual modules in our proposed architecture and demonstrates efficiency of communications among these modules. We hope this work would advance research in the collaboration and communications over both fields of computer science and computational neuroscience.
rejected-papers
Authors do not respond to significant criticism, e.g. the lack of a critical reference. Reviewers unanimously reject.
train
[ "r1hg0NjgG", "BkfIZxFlG", "Bk5nrSoeG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is well written, well-motivated and the idea is very interesting for the computer vision and robotic communities. The technical contribution is original. The vision-based agent localization approach is novel compared to the methods of the literature. However, the experimental validation of the proposed a...
[ 5, 3, 4 ]
[ 4, 4, 4 ]
[ "iclr_2018_SkmM6M_pW", "iclr_2018_SkmM6M_pW", "iclr_2018_SkmM6M_pW" ]
iclr_2018_rJqfKPJ0Z
Clipping Free Attacks Against Neural Networks
During the last years, a remarkable breakthrough has been made in AI domain thanks to artificial deep neural networks that achieved a great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easily crafted adversarial examples. Many investigations have pointed out this fact and different approaches have been proposed to generate attacks while adding a limited perturbation to the original data. The most robust known method so far is the so called C&W attack [1]. Nonetheless, a countermeasure known as feature squeezing coupled with ensemble defense showed that most of these attacks can be destroyed [6]. In this paper, we present a new method we call Centered Initial Attack (CIA) whose advantage is twofold: first, it ensures by construction the maximum perturbation to be smaller than a threshold fixed beforehand, without the clipping process that degrades the quality of attacks. Second, it is robust against recently introduced defenses such as feature squeezing, JPEG encoding and even against a voting ensemble of defenses. While its application is not limited to images, we illustrate this using five of the current best classifiers on ImageNet dataset among which two are adversarially retrained on purpose to be robust against attacks. With a fixed maximum perturbation of only 1.5% on any pixel, around 80% of attacks (targeted) fool the voting ensemble defense and nearly 100% when the perturbation is only 6%. While this shows how it is difficult to defend against CIA attacks, the last section of the paper gives some guidelines to limit their impact.
rejected-papers
The reviewers have various reservations. While the paper has interesting suggestions, it is slightly incremental and the results are not sufficiently compared to other techniques. We note that one reviewer revised his opinion.
train
[ "B1fZIQcxM", "ryU7ZMsgf", "BkrIo4ixG", "ryWp9xwGf", "SkWHqxDzf", "Byu9txvMM" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "The paper is not anonymized. In page 2, the first line, the authors revealed [15] is a self-citation and [15] is not anonumized in the reference list.\n\n", "This paper presents a reparametrization of the perturbation applied to features in adversarial examples based attacks. It tests this attack variation on ag...
[ 3, 4, 5, -1, -1, -1 ]
[ 3, 3, 2, -1, -1, -1 ]
[ "iclr_2018_rJqfKPJ0Z", "iclr_2018_rJqfKPJ0Z", "iclr_2018_rJqfKPJ0Z", "B1fZIQcxM", "ryU7ZMsgf", "BkrIo4ixG" ]
iclr_2018_ByCPHrgCW
Deep Learning Inferences with Hybrid Homomorphic Encryption
When deep learning is applied to sensitive data sets, many privacy-related implementation issues arise. These issues are especially evident in the healthcare, finance, law and government industries. Homomorphic encryption could allow a server to make inferences on inputs encrypted by a client, but to our best knowledge, there has been no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption. This paper demonstrates a novel approach, efficiently implementing many deep learning functions with bootstrapped homomorphic encryption. As part of our implementation, we demonstrate Single and Multi-Layer Neural Networks, for the Wisconsin Breast Cancer dataset, as well as a Convolutional Neural Network for MNIST. Our results give promising directions for privacy-preserving representation learning, and the return of data control to users.
rejected-papers
While the reviewers all seem to think this is interesting and basically good work, the Reviewers are consistent and unanimous in rejecting the paper. While the authors did provide a thorough rebuttal, the original paper did not meet the criteria and the reviewers have not changed their scores.
train
[ "HkCG-X5lG", "rJhC64Olf", "Sy52MUdgG", "S174cGs7z", "BJxS1hFXG", "ryMApotmG", "B1p3soF7z" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author" ]
[ "This paper proposes a hybrid Homomorphic encryption system that is well suited for privacy-sensitive data inference applications with the deep learning paradigm. \nThe paper presents a well laid research methodology that shows a good decomposition of the problem at hand and the approach foreseen to solve it. It is...
[ 4, 4, 4, -1, -1, -1, -1 ]
[ 4, 5, 5, -1, -1, -1, -1 ]
[ "iclr_2018_ByCPHrgCW", "iclr_2018_ByCPHrgCW", "iclr_2018_ByCPHrgCW", "iclr_2018_ByCPHrgCW", "rJhC64Olf", "Sy52MUdgG", "HkCG-X5lG" ]
iclr_2018_H1u8fMW0b
Toward predictive machine learning for active vision
We develop a comprehensive description of the active inference framework, as proposed by Friston (2010), under a machine-learning compliant perspective. Stemming from a biological inspiration and the auto-encoding principles, a sketch of a cognitive architecture is proposed that should provide ways to implement estimation-oriented control policies. Computer simulations illustrate the effectiveness of the approach through a foveated inspection of the input data. The pros and cons of the control policy are analyzed in detail, showing interesting promises in terms of processing compression. Though optimizing future posterior entropy over the actions set is shown enough to attain locally optimal action selection, offline calculation using class-specific saliency maps is shown better for it saves processing costs through saccades pathways pre-processing, with a negligible effect on the recognition/compression rates.
rejected-papers
All 3 reviewers consider the paper insufficiently good, including a post-rebuttal updated score. All reviewers + anonymous comment find that the paper isn't well-enough situated with the appropriate literature. Two reviewers cite poor presentation: spelling/grammar errors making the paper hard to read. Authors have revised the paper and promise further revisions for the final version.
train
[ "SyKJh-qlM", "Hy9KrjINf", "HJedzYOxf", "HkEi3Koxf", "BJLFk84-G", "rJenZIEbf", "Bk9Se8NZM" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author" ]
[ "This paper introduces a machine learning adaptation of the active inference framework proposed by Friston (2010), and applies it to the task of image classification on MNIST through a foveated inspection of images. It describes a cognitive architecture for the same, and provide analyses in terms of processing comp...
[ 5, -1, 3, 3, -1, -1, -1 ]
[ 2, -1, 4, 5, -1, -1, -1 ]
[ "iclr_2018_H1u8fMW0b", "HJedzYOxf", "iclr_2018_H1u8fMW0b", "iclr_2018_H1u8fMW0b", "HkEi3Koxf", "HJedzYOxf", "SyKJh-qlM" ]
iclr_2018_r1ayG7WRZ
Don't encrypt the data; just approximate the model: Towards Secure Transaction and Fair Pricing of Training Data
As machine learning becomes ubiquitous, deployed systems need to be as accurate as they can. As a result, machine learning service providers have a surging need for useful, additional training data that benefits training, without giving up all the details about the trained program. At the same time, data owners would like to trade their data for its value, without having to first give away the data itself before receiving compensation. It is difficult for data providers and model providers to agree on a fair price without first revealing the data or the trained model to the other side. Escrow systems only complicate this further, adding an additional layer of trust required of both parties. Currently, data owners and model owners don't have a fair pricing system that eliminates the need to trust a third party and training the model on the data, which 1) takes a long time to complete, 2) does not guarantee that useful data is paid valuably and that useless data isn't, without trusting in the third party with both the model and the data. Existing improvements to secure the transaction focus heavily on encrypting or approximating the data, such as training on encrypted data, and variants of federated learning. As powerful as the methods appear to be, we show them to be impractical in our use case with real world assumptions for preserving privacy for the data owners when facing black-box models. Thus, a fair pricing scheme that does not rely on secure data encryption and obfuscation is needed before the exchange of data. This paper proposes a novel method for fair pricing using data-model efficacy techniques such as influence functions, model extraction, and model compression methods, thus enabling secure data transactions.
We successfully show that without running the data through the model, one can approximate the value of the data; that is, if the data turns out redundant, the pricing is minimal, and if the data leads to proper improvement, its value is properly assessed, without placing strong assumptions on the nature of the model. Future work will be focused on establishing a system with stronger transactional security against adversarial attacks that will reveal details about the model or the data to the other party.
rejected-papers
The reviewers highlight a lack of technical content and poor writing. They all agree on rejection. There was no author rebuttal or pointer to a new version.
test
[ "Hy0ZkHuxG", "BktJHw_lM", "BJ8Ijatxz" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "Summary\n\nThe paper addresses the issues of fair pricing and secure transactions between model and data providers in the context of machine learning real-world application.\n\nMajor\n\nThe paper addresses an important issue regarding the real-world application of machine learning, that is, the transactions betwee...
[ 2, 4, 3 ]
[ 4, 5, 5 ]
[ "iclr_2018_r1ayG7WRZ", "iclr_2018_r1ayG7WRZ", "iclr_2018_r1ayG7WRZ" ]
iclr_2018_H1BHbmWCZ
TOWARDS ROBOT VISION MODULE DEVELOPMENT WITH EXPERIENTIAL ROBOT LEARNING
In this paper we present a thrust in three directions of visual development using supervised and semi-supervised techniques. The first is an implementation of semi-supervised object detection and recognition using the principles of Soft Attention and Generative Adversarial Networks (GANs). The second and the third are supervised networks that learn basic concepts of spatial locality and quantity respectively using Convolutional Neural Networks (CNNs). The three thrusts together are based on the approach of Experiential Robot Learning, introduced in previous publication. While the results are unripe for implementation, we believe they constitute a stepping stone towards autonomous development of robotic visual modules.
rejected-papers
Reviewers unanimous on rejection. Authors don't maintain anonymity. No rebuttal from authors. Poorly written.
train
[ "S1w1mX_xG", "B1IfKjYgM", "B1XsJ_3lf" ]
[ "official_reviewer", "official_reviewer", "official_reviewer" ]
[ "The paper is motivated with building robots that learn in an open-ended way, which is really interesting. What it actually investigates is the performance of existing image classifiers and object detectors. I could not find any technical contribution or something sufficiently mature and interesting for presenting ...
[ 3, 2, 2 ]
[ 4, 3, 4 ]
[ "iclr_2018_H1BHbmWCZ", "iclr_2018_H1BHbmWCZ", "iclr_2018_H1BHbmWCZ" ]