## **Teaching Machines to Read and Comprehend**
**Karl Moritz Hermann** *[†]* **Tomáš Kočiský** *[‡]* **Edward Grefenstette** *[†]*
**Lasse Espeholt** *[†]* **Will Kay** *[†]* **Mustafa Suleyman** *[†]* **Phil Blunsom** *[†‡]*
*†* Google DeepMind *‡* University of Oxford
{kmh,etg,lespeholt,wkay,mustafasul,pblunsom}@google.com
tomas@kocisky.eu
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": null,
"Header 2": "**Teaching Machines to Read and Comprehend**",
"Header 3": null,
"Header 4": null
}
|
{
"chunk_type": "title"
}
|
### **Abstract**
Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions
posed on the contents of documents that they have seen, but until now large scale
training and test datasets have been missing for this type of evaluation. In this
work we define a new methodology that resolves this bottleneck and provides
large scale supervised reading comprehension data. This allows us to develop a
class of attention based deep neural networks that learn to read real documents and
answer complex questions with minimal prior knowledge of language structure.
### **1 Introduction**
Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine
reading and comprehension have been based on either hand engineered grammars [1], or information
extraction methods of detecting predicate argument triples that can later be queried as a relational
database [2]. Supervised machine learning approaches have largely been absent from this space due
to both the lack of large scale training datasets, and the difficulty in structuring statistical models
flexible enough to learn to exploit document structure.
While obtaining supervised natural language reading comprehension data has proved difficult, some
researchers have explored generating synthetic narratives and queries [3, 4]. Such approaches allow
the generation of almost unlimited amounts of supervised data and enable researchers to isolate the
performance of their algorithms on individual simulated phenomena. Work on such data has shown
that neural network based models hold promise for modelling reading comprehension, something
that we will build upon here. Historically, however, many similar approaches in Computational
Linguistics have failed to manage the transition from synthetic data to real environments, as such
closed worlds inevitably fail to capture the complexity, richness, and noise of natural language [5].
In this work we seek to directly address the lack of real natural language training data by introducing a novel approach to building a supervised reading comprehension data set. We observe that
summary and paraphrase sentences, with their associated documents, can be readily converted to
context–query–answer triples using simple entity detection and anonymisation algorithms. Using
this approach we have collected two new corpora of roughly a million news stories with associated
queries from the CNN and Daily Mail websites.
We demonstrate the efficacy of our new corpora by building novel deep learning models for reading
comprehension. These models draw on recent developments for incorporating attention mechanisms
into recurrent neural network architectures [6, 7, 8]. This allows a model to focus on the aspects of a
document that it believes will help it answer a question, and also allows us to visualise its inference
process. We compare these neural models to a range of baselines and heuristic benchmarks based
upon a traditional frame semantic analysis provided by a state-of-the-art natural language processing (NLP) pipeline. Our results indicate that the neural models achieve a higher accuracy, and do so without any specific encoding of the document or query structure.

| | **CNN** train | **CNN** valid | **CNN** test | **Daily Mail** train | **Daily Mail** valid | **Daily Mail** test |
|---|---|---|---|---|---|---|
| # months | 95 | 1 | 1 | 56 | 1 | 1 |
| # documents | 107,582 | 1,210 | 1,165 | 195,461 | 11,746 | 11,074 |
| # queries | 438,148 | 3,851 | 3,446 | 837,944 | 60,798 | 55,077 |
| Max # entities | 456 | 190 | 398 | 424 | 247 | 250 |
| Avg # entities | 29.5 | 32.4 | 30.2 | 41.3 | 44.7 | 45.0 |
| Avg tokens/doc | 780 | 809 | 773 | 1044 | 1061 | 1066 |
| Vocab size | 124,814 | | | 274,604 | | |

Table 1: Corpus statistics. Articles were collected starting in April 2007 for CNN and June 2010 for the Daily Mail, both until the end of April 2015. Validation data is from March 2015, test data from April 2015. Articles of over 2000 tokens and queries whose answer entity did not appear in the context were filtered out.

| **Category** | **1** | **2** | *≥* **3** |
|---|---|---|---|
| Simple | 12 | 2 | 0 |
| Lexical | 14 | 0 | 0 |
| Coref | 0 | 8 | 2 |
| Coref/Lex | 10 | 8 | 4 |
| Complex | 8 | 8 | 14 |
| Unanswerable | 10 | | |

Table 2: Distribution (in percent) of queries over category and number of context sentences required to answer them, based on a subset of the CNN validation data.
### **2 Supervised training data for reading comprehension**
The reading comprehension task naturally lends itself to a formulation as a supervised learning
problem. Specifically we seek to estimate the conditional probability *p* ( *a|c, q* ), where *c* is a context
document, *q* a query relating to that document, and *a* the answer to that query. For a focused
evaluation we wish to be able to exclude additional information, such as world knowledge gained
from co-occurrence statistics, in order to test a model’s core capability to detect and understand the
linguistic relationships between entities in the context document.
Such an approach requires a large training corpus of document–query–answer triples, and until now
such corpora have been limited to hundreds of examples and thus mostly of use only for testing [9].
This limitation has meant that most work in this area has taken the form of unsupervised approaches
which use templates or syntactic/semantic analysers to extract relation tuples from the document to
form a knowledge graph that can be queried.
Here we propose a methodology for creating real-world, large scale supervised training data for
learning reading comprehension models. Inspired by work in summarisation [10, 11], we create two
machine reading corpora by exploiting online newspaper articles and their matching summaries. We
have collected 110k articles from the CNN [1] and 218k articles from the Daily Mail [2] websites. Both
news providers supplement their articles with a number of bullet points, summarising aspects of the
information contained in the article. Of key importance is that these summary points are abstractive
and do not simply copy sentences from the documents. We construct a corpus of document–query–
answer triples by turning these bullet points into Cloze [12] style questions by replacing one entity
at a time with a placeholder. This results in a combined corpus of roughly 1M data points (Table 1).
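The bullet-point-to-Cloze conversion can be sketched as follows; the `@placeholder` token and the function name are illustrative choices, not necessarily the released corpus format.

```python
def make_cloze_queries(bullet, entities, placeholder="@placeholder"):
    """Turn one summary bullet point into Cloze-style queries by
    replacing one entity at a time with a placeholder; the answer to
    each query is the entity that was removed."""
    queries = []
    for entity in entities:
        if entity in bullet:
            queries.append((bullet.replace(entity, placeholder), entity))
    return queries

pairs = make_cloze_queries(
    "Oisin Tymon will not press charges against Jeremy Clarkson",
    ["Oisin Tymon", "Jeremy Clarkson"],
)
# one (query, answer) pair per entity mentioned in the bullet point
```

Applied to every bullet point of every article, this style of conversion yields the roughly 1M context–query–answer data points summarised in Table 1.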
**2.1** **Qualitative Analysis of Answer Difficulty**
In order to gain a perspective on the difficulty of the task in absolute terms we conduct a small
qualitative analysis on a subset of the validation data. We classify queries by the depth of inference
required to solve them, sorting them into six categories. *Simple* queries can be answered without
any syntactic or semantic complexity. *Lexical* queries require some form of lexical generalisation—
something that comes naturally to embedding based systems, but causes difficulty for methods relying primarily on syntactic analysis. The *Coref* category describes queries that require coreference
resolution, with a joint category *Coref/Lex* for queries that require both coreference resolution as
well as lexical generalisation. *Complex* queries require some form of causality or other complex
inference in order to be answered. Finally, we group all unanswerable queries in another category
1 www.cnn.com
2 www.dailymail.co.uk
| | **Original Version** | **Anonymised Version** |
|---|---|---|
| **Context** | The BBC producer allegedly struck by Jeremy Clarkson will not press charges against the "Top Gear" host, his lawyer said Friday. Clarkson, who hosted one of the most-watched television shows in the world, was dropped by the BBC Wednesday after an internal investigation by the British broadcaster found he had subjected producer Oisin Tymon "to an unprovoked physical and verbal attack." . . . | the *ent381* producer allegedly struck by *ent212* will not press charges against the " *ent153* " host , his lawyer said friday . *ent212* , who hosted one of the most - watched television shows in the world , was dropped by the *ent381* wednesday after an internal investigation by the *ent180* broadcaster found he had subjected producer *ent193* " to an unprovoked physical and verbal attack . " . . . |
| **Query** | Producer **X** will not press charges against Jeremy Clarkson, his lawyer says. | Producer **X** will not press charges against *ent212*, his lawyer says. |
| **Answer** | Oisin Tymon | *ent193* |

Table 3: Original and anonymised version of a data point from the Daily Mail validation set. The anonymised entity markers are constantly permuted during training and testing.
*Unanswerable*. This includes ambiguous queries, those where the answer is not contained in the
context, and queries which are rendered unanswerable by the anonymisation strategy.
The results in Table 2 highlight the difficulty of the task with the majority of queries requiring at
least two context sentences to be answered. With 30% of queries needing complex inference and
another 10% ambiguous or unanswerable, it is clear that our machine reading setup is a hard task.
**2.2** **Entity replacement and permutation**
Note that the focus of this paper is to provide a corpus for evaluating a model’s ability to read
and comprehend a single document, not world knowledge or co-occurrence. To understand that
distinction consider for instance the following Cloze form queries (created from headlines in the
Daily Mail validation set): *a* ) The hi-tech bra that helps you beat breast **X** ; *b* ) Could Saccharin help
beat **X** ?; *c* ) Can fish oils help fight prostate **X** ? An n-gram language model trained on the Daily Mail
would easily correctly predict that ( **X** = *cancer* ), regardless of the contents of the context document,
simply because this is a very frequently cured entity in the Daily Mail corpus.
To prevent such degenerate solutions and create a focused task we anonymise and randomise our
corpora with the following procedure, *a* ) use a coreference system to establish coreferents in each
data point; *b* ) replace all entities with abstract entity markers according to coreference; *c* ) randomly
permute these entity markers whenever a data point is loaded.
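A minimal sketch of steps *b*) and *c*), assuming a coreference system has already grouped mentions into chains (step *a*); single-token mentions only, for brevity:

```python
import random

def anonymise(tokens, coref_chains):
    """Replace every mention with an abstract entity marker, one
    marker per coreference chain (step b). Marker ids are freshly
    permuted on every call, i.e. whenever a data point is loaded
    (step c), so markers carry no information across data points."""
    ids = list(range(len(coref_chains)))
    random.shuffle(ids)  # fresh permutation per load
    mapping = {mention: f"ent{i}"
               for chain, i in zip(coref_chains, ids)
               for mention in chain}
    return [mapping.get(tok, tok) for tok in tokens]
```

Because the permutation changes every time a data point is loaded, a model cannot memorise anything about a specific marker such as *ent212*.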
Compare the original and anonymised version of the example in Table 3. Clearly a human reader can
answer both queries correctly. However in the anonymised setup the context document is required
for answering the query, whereas the original version could also be answered by someone with the
requisite background knowledge.
Following this procedure, the only remaining strategy for answering questions is to do so by exploiting the context presented with each question. Thus performance on our two corpora truly measures
reading comprehension capability. Naturally a production system would benefit from using all available information sources, such as clues through language and co-occurrence statistics.
### **3 Models**
So far we have motivated the need for better datasets and tasks to evaluate the capabilities of machine
reading models. We proceed by describing a number of baselines, benchmarks and new models to
evaluate against this paradigm. We define two simple baselines: the majority baseline (maximum
frequency) picks the entity most frequently observed in the context document, whereas the exclusive
majority (exclusive frequency) chooses the entity most frequently observed in the context but not
observed in the query. The idea behind this exclusion is that the placeholder is unlikely to be
mentioned twice in a single Cloze form query.
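These two frequency baselines can be written down directly; the entity lists are assumed to be the anonymised markers extracted from each document and query:

```python
from collections import Counter

def maximum_frequency(context_entities):
    """Majority baseline: the entity observed most often in the context."""
    return Counter(context_entities).most_common(1)[0][0]

def exclusive_frequency(context_entities, query_entities):
    """Most frequent context entity not already present in the query,
    since the placeholder is unlikely to be mentioned twice in one
    Cloze form query."""
    counts = Counter(e for e in context_entities if e not in set(query_entities))
    return counts.most_common(1)[0][0]

ctx = ["ent1", "ent2", "ent1", "ent3", "ent1", "ent2"]
print(maximum_frequency(ctx))              # ent1
print(exclusive_frequency(ctx, ["ent1"]))  # ent2
```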
**3.1** **Symbolic Matching Models**
Traditionally, a pipeline of NLP models has been used for attempting question answering, that is,
models that make heavy use of linguistic annotation, structured world knowledge, semantic parsing,
and similar NLP pipeline outputs. Building on these approaches, we define a number of NLP-centric
models for our machine reading task.
**Frame-Semantic Parsing** Frame-semantic parsing attempts to identify predicates and their arguments, allowing models access to information about “who did what to whom”. Naturally this kind
of annotation lends itself to being exploited for question answering. We develop a benchmark that
makes use of frame-semantic annotations which we obtained by parsing our corpora with a state-of-the-art frame-semantic parser [13, 14]. As the parser makes extensive use of linguistic information
we run these benchmarks on the unanonymised version of our corpora. There is no significant
advantage in this as the frame-semantic approach used here does not possess the capability to generalise through a language model beyond exploiting one during the parsing phase. Thus, the key
objective of evaluating machine comprehension abilities is maintained. Extracting entity-predicate
triples—denoted as ( *e* 1 *, V, e* 2 )—from both the query and context document, we attempt to resolve
queries using a number of rules with an increasing recall/precision trade-off as follows (Table 4).
| Strategy | Pattern *∈* *Q* | Pattern *∈* *C* | Example (Cloze / Context) |
|---|---|---|---|
| 1 Exact match | (*p, V, y*) | (***x***, *V, y*) | X loves Suse / **Kim** loves Suse |
| 2 be.01.V match | (*p*, be.01.V, *y*) | (***x***, be.01.V, *y*) | X is president / **Mike** is president |
| 3 Correct frame | (*p, V, y*) | (***x***, *V, z*) | X won Oscar / **Tom** won Academy Award |
| 4 Permuted frame | (*p, V, y*) | (*y, V,* ***x***) | X met Suse / Suse met **Tom** |
| 5 Matching entity | (*p, V, y*) | (***x***, *Z, y*) | X likes candy / **Tom** loves candy |
| 6 Back-off strategy | *Pick the most frequent entity from the context that doesn't appear in the query* | | |

Table 4: Resolution strategies using PropBank triples. ***x*** denotes the entity proposed as answer, *V* is
a fully qualified PropBank frame (e.g. *give.01.V*). Strategies are ordered by precedence and answers
determined accordingly. This heuristic algorithm was iteratively tuned on the validation data set.

For reasons of clarity, we pretend that all PropBank triples are of the form (*e1, V, e2*). In practice,
we take the argument numberings of the parser into account and only compare like with like, except
in cases such as the permuted frame rule, where ordering is relaxed. In the case of multiple possible
answers from a single rule, we randomly choose one.
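A simplified sketch of this precedence scheme, treating frames as plain strings, folding rule 2 into rule 1, and returning the first candidate instead of sampling among ties:

```python
def resolve(query, context, fallback):
    """query: (placeholder, V, y); context: list of (e1, V, e2)
    PropBank-style triples. Strategies of Table 4, in precedence order."""
    _, V, y = query
    # 1/2: exact match: same frame, same second argument
    for e1, v, e2 in context:
        if v == V and e2 == y and e1 != y:
            return e1
    # 3: correct frame: same frame, any second argument
    for e1, v, e2 in context:
        if v == V and e1 != y:
            return e1
    # 4: permuted frame: same frame with the arguments swapped
    for e1, v, e2 in context:
        if v == V and e1 == y:
            return e2
    # 5: matching entity: same second argument, any frame
    for e1, v, e2 in context:
        if e2 == y and e1 != y:
            return e1
    # 6: back-off: most frequent context entity absent from the query
    return fallback
```

For example, with context triples `("Tom", "win.01.V", "AcademyAward")` and `("Suse", "meet.01.V", "Tom")`, the query `("X", "win.01.V", "Oscar")` resolves to `Tom` via rule 3.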
**Word Distance Benchmark** We consider another baseline that relies on word distance measurements. Here, we align the placeholder of the Cloze form question with each possible entity in the
context document and calculate a distance measure between the question and the context around the
aligned entity. This score is calculated by summing the distances of every word in *Q* to their nearest
aligned word in *C*, where alignment is defined by matching words either directly or as aligned by
the coreference system. We tune the maximum penalty per word ( *m* = 8) on the validation data.
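One simplified reading of this benchmark, using plain string identity for word alignment (the full benchmark also aligns words through the coreference system) and the tuned cap *m* = 8:

```python
def score_candidate(query_tokens, context_tokens, entity_pos, max_penalty=8):
    """Score one alignment of the Cloze placeholder to the context
    entity at entity_pos: sum, over query words, of the distance from
    entity_pos to the nearest matching context word, capped at
    max_penalty.  Lower is better."""
    score = 0
    for q in query_tokens:
        dists = [abs(i - entity_pos)
                 for i, c in enumerate(context_tokens) if c == q]
        score += min(min(dists), max_penalty) if dists else max_penalty
    return score

def word_distance_answer(query_tokens, context_tokens, entities):
    """Pick the entity whose best-scoring context occurrence minimises
    the distance score."""
    best = None
    for e in entities:
        for pos, tok in enumerate(context_tokens):
            if tok == e:
                s = score_candidate(query_tokens, context_tokens, pos)
                if best is None or s < best[0]:
                    best = (s, e)
    return best[1]
```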
**3.2** **Neural Network Models**
Neural networks have successfully been applied to a range of tasks in NLP. This includes classification tasks such as sentiment analysis [15] or POS tagging [16], as well as generative problems such
as language modelling or machine translation [17]. We propose three neural models for estimating
the probability of word type *a* from document *d* answering query *q* :
$$p(a \mid d, q) \propto \exp\big(W(a)\,g(d,q)\big), \qquad \text{s.t. } a \in d,$$
where *W* ( *a* ) indexes row *a* of weight matrix *W* and through a slight abuse of notation word types
double as indexes. Note that we do not privilege entities or variables, the model must learn to
differentiate these in the input sequence. The function *g* ( *d, q* ) returns a vector embedding of a
document and query pair.
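The restriction to word types appearing in the document amounts to a masked softmax over the vocabulary; a sketch with illustrative shapes and names:

```python
import numpy as np

def answer_distribution(W, g_dq, doc_word_ids):
    """p(a|d,q) proportional to exp(W(a) . g(d,q)), restricted to word
    types that occur in the document (s.t. a in d).
    W: |V| x k output matrix; g_dq: k-dim joint embedding;
    doc_word_ids: ids of the word types appearing in d."""
    logits = W @ g_dq                      # one score per word type
    mask = np.full(len(logits), -np.inf)
    mask[list(set(doc_word_ids))] = 0.0    # allow only words in d
    z = logits + mask
    z -= z.max()                           # numerical stability
    p = np.exp(z)
    return p / p.sum()
```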
Figure 1: Document and query embedding models. (a) Attentive Reader. (b) Impatient Reader. (c) A two layer Deep LSTM Reader with the question encoded following the document.

**The Deep LSTM Reader** Long short-term memory (LSTM, [18]) networks have recently seen
considerable success in tasks such as machine translation and language modelling [17]. When used
for translation, Deep LSTMs [19] have shown a remarkable ability to embed long sequences into
a vector representation which contains enough information to generate a full translation in another
language. Our first neural model for reading comprehension tests the ability of Deep LSTM encoders
to handle significantly longer sequences. We feed our documents one word at a time into a Deep
LSTM encoder, after a delimiter we then also feed the query into the encoder. Alternatively we also
experiment with processing the query then the document. The result is that this model processes
each document query pair as a single long sequence. Given the embedded document and query the
network predicts which token in the document answers the query.
We employ a Deep LSTM cell with skip connections from each input *x* ( *t* ) to every hidden layer,
and from every hidden layer to the output *y* ( *t* ):
$$
\begin{aligned}
x'(t,k) &= x(t)\,\|\,y'(t,k-1), \qquad y(t) = y'(t,1)\,\|\,\dots\,\|\,y'(t,K)\\
i(t,k) &= \sigma\big(W_{kxi}\,x'(t,k) + W_{khi}\,h(t-1,k) + W_{kci}\,c(t-1,k) + b_{ki}\big)\\
f(t,k) &= \sigma\big(W_{kxf}\,x(t) + W_{khf}\,h(t-1,k) + W_{kcf}\,c(t-1,k) + b_{kf}\big)\\
c(t,k) &= f(t,k)\,c(t-1,k) + i(t,k)\tanh\big(W_{kxc}\,x'(t,k) + W_{khc}\,h(t-1,k) + b_{kc}\big)\\
o(t,k) &= \sigma\big(W_{kxo}\,x'(t,k) + W_{kho}\,h(t-1,k) + W_{kco}\,c(t,k) + b_{ko}\big)\\
h(t,k) &= o(t,k)\tanh\big(c(t,k)\big)\\
y'(t,k) &= W_{ky}\,h(t,k) + b_{ky}
\end{aligned}
$$

where $\|$ indicates vector concatenation, $h(t,k)$ is the hidden state for layer $k$ at time $t$, and $i$, $f$, $o$ are the input, forget, and output gates respectively. Thus our Deep LSTM Reader is defined by $g^{\text{LSTM}}(d,q) = y(|d|+|q|)$ with input $x(t)$ the concatenation of $d$ and $q$ separated by the delimiter $|||$.
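A single time step of this cell can be sketched in NumPy. The peephole connections from the cell state to the gates are omitted for brevity, so this is a simplification of the equations above, with illustrative parameter names:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deep_lstm_step(x_t, h_prev, c_prev, params):
    """One step of a Deep LSTM with skip connections: x(t) is fed to
    every layer k (concatenated with the layer below's output
    y'(t, k-1)), and every layer's y'(t, k) is concatenated into the
    final output y(t).  params[k] holds that layer's weights."""
    new_h, new_c, ys = [], [], []
    y_below = np.zeros(0)                      # no layer below layer 1
    for k, p in enumerate(params):
        x_prime = np.concatenate([x_t, y_below])            # x'(t, k)
        z = p["Wx"] @ x_prime + p["Wh"] @ h_prev[k] + p["b"]
        i, f, g, o = np.split(z, 4)                         # gate pre-activations
        c = sigmoid(f) * c_prev[k] + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        y_below = p["Wy"] @ h + p["by"]                     # y'(t, k)
        new_h.append(h); new_c.append(c); ys.append(y_below)
    return np.concatenate(ys), new_h, new_c    # y(t) = y'(t,1) || ... || y'(t,K)
```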
**The Attentive Reader** The Deep LSTM Reader must propagate dependencies over long distances
in order to connect queries to their answers. The fixed width hidden vector forms a bottleneck for
this information flow that we propose to circumvent using an attention mechanism inspired by recent
results in translation and image recognition [6, 7]. This attention model first encodes the document
and the query using separate bidirectional single layer LSTMs [19].
We denote the outputs of the forward and backward LSTMs as $\overrightarrow{y}(t)$ and $\overleftarrow{y}(t)$ respectively. The encoding $u$ of a query of length $|q|$ is formed by the concatenation of the final forward and backward outputs, $u = \overrightarrow{y_q}(|q|)\,\|\,\overleftarrow{y_q}(1)$.
For the document the composite output for each token at position $t$ is $y_d(t) = \overrightarrow{y_d}(t)\,\|\,\overleftarrow{y_d}(t)$. The
representation $r$ of the document $d$ is formed by a weighted sum of these output vectors. These
weights are interpreted as the degree to which the network attends to a particular token in the document when answering the query:
$$
\begin{aligned}
m(t) &= \tanh\big(W_{ym}\,y_d(t) + W_{um}\,u\big),\\
s(t) &\propto \exp\big(w_{ms}^{\intercal}\,m(t)\big),\\
r &= y_d\,s,
\end{aligned}
$$
where we are interpreting *y* *d* as a matrix with each column being the composite representation *y* *d* ( *t* )
of document token *t* . The variable *s* ( *t* ) is the normalised attention at token *t* . Given this attention
score the embedding of the document *r* is computed as the weighted sum of the token embeddings.
The model is completed with the definition of the joint document and query embedding via a nonlinear combination:
$$g^{\text{AR}}(d,q) = \tanh\big(W_{rg}\,r + W_{ug}\,u\big).$$
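The full Attentive Reader computation, from token encodings to g^AR(d, q), can be sketched with explicit weights, treating y_d as a matrix with one column per document token; all shapes and names here are illustrative:

```python
import numpy as np

def attentive_reader_embed(y_d, u, Wym, Wum, w_ms, Wrg, Wug):
    """Joint embedding g^AR(d, q): attention scores s(t) over the
    document token encodings (columns of y_d), conditioned on the
    query encoding u, then a weighted sum r and a nonlinear combination."""
    m = np.tanh(Wym @ y_d + (Wum @ u)[:, None])   # m(t) for every token t
    s = np.exp(w_ms @ m)
    s = s / s.sum()                               # normalised attention s(t)
    r = y_d @ s                                   # document embedding r
    return np.tanh(Wrg @ r + Wug @ u), s
```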
**The Impatient Reader** The Attentive Reader is able to focus on the passages of a context document that are most likely to inform the answer to the query. We can go further by equipping the
model with the ability to reread from the document as each query token is read. At each token *i*
of the query *q* the model computes a document representation vector *r* ( *i* ) using the bidirectional
embedding $y_q(i) = \overrightarrow{y_q}(i)\,\|\,\overleftarrow{y_q}(i)$:
$$
\begin{aligned}
m(i,t) &= \tanh\big(W_{dm}\,y_d(t) + W_{rm}\,r(i-1) + W_{qm}\,y_q(i)\big), \qquad 1 \le i \le |q|,\\
s(i,t) &\propto \exp\big(w_{ms}^{\intercal}\,m(i,t)\big),\\
r(0) &= \mathbf{r_0}, \qquad r(i) = y_d^{\intercal}\,s(i), \qquad 1 \le i \le |q|.
\end{aligned}
$$
The result is an attention mechanism that allows the model to recurrently accumulate information
from the document as it sees each query token, ultimately outputting a final joint document query
representation for the answer prediction,
$$g^{\text{IR}}(d,q) = \tanh\big(W_{rg}\,r(|q|) + W_{qg}\,u\big).$$
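The rereading loop can be sketched as follows, treating y_d as a matrix with one column per document token (so the weighted sum is written y_d s(i)); weight names and shapes are illustrative:

```python
import numpy as np

def impatient_reader_embed(y_d, y_q, u, r0, Wdm, Wrm, Wqm, w_ms, Wrg, Wqg):
    """Recompute document attention after each query token i, feeding
    the previous document representation r(i-1) back into the attention
    scores, then combine the final r(|q|) with the query encoding u."""
    r = r0
    for i in range(y_q.shape[1]):                 # one pass per query token
        m = np.tanh(Wdm @ y_d + (Wrm @ r)[:, None] + (Wqm @ y_q[:, i])[:, None])
        s = np.exp(w_ms @ m)
        s = s / s.sum()                           # normalised attention s(i, t)
        r = y_d @ s                               # r(i)
    return np.tanh(Wrg @ r + Wqg @ u)             # g^IR(d, q)
```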
### **4 Empirical Evaluation**
Having described a number of models in the previous section, we next evaluate these models on our
reading comprehension corpora. Our hypothesis is that neural models should in principle be well
suited for this task. However, we argued that simple recurrent models such as the LSTM probably
have insufficient expressive power for solving tasks that require complex inference. We expect that
the attention-based models would therefore outperform the pure LSTM-based approaches.
Considering the second dimension of our investigation, the comparison of traditional versus neural
approaches to NLP, we do not have a strong prior favouring one approach over the other. While numerous publications in the past few years have demonstrated neural models outperforming classical
methods, it remains unclear how much of that is a side-effect of the language modelling capabilities
intrinsic to any neural model for NLP. The entity anonymisation and permutation aspect of the task
presented here may end up levelling the playing field in that regard, favouring models capable of
dealing with syntax rather than just semantics.
With these considerations in mind, the experimental part of this paper is designed with a threefold aim. First, we want to establish the difficulty of our machine reading task by applying a wide
range of models to it. Second, we compare the performance of parse-based methods versus that of
neural models. Third, within the group of neural models examined, we want to determine what each
component contributes to the end performance; that is, we want to analyse the extent to which an
LSTM can solve this task, and to what extent various attention mechanisms impact performance.
All model hyperparameters were tuned on the respective validation sets of the two corpora. [3] Our
experimental results are in Table 5, with the Impatient Reader performing best across both datasets.
3 For the Deep LSTM Reader, we consider hidden layer sizes [64, 128, 256], depths [1, 2, 4], initial learning rates [1E-3, 5E-4, 1E-4, 5E-4] and batch sizes [16, 32]. We evaluate two types of feeds. In the *cqa*
| | CNN valid | CNN test | Daily Mail valid | Daily Mail test |
|---|---|---|---|---|
| Maximum frequency | 26.3 | 27.9 | 22.5 | 22.7 |
| Exclusive frequency | 30.8 | 32.6 | 27.3 | 27.7 |
| Frame-semantic model | 32.2 | 33.0 | 30.7 | 31.1 |
| Word distance model | 46.2 | 46.9 | 55.6 | 54.8 |
| Deep LSTM Reader | 49.0 | 49.9 | 57.1 | 57.3 |
| Uniform attention | 31.1 | 33.6 | 31.0 | 31.7 |
| Attentive Reader | 56.5 | 58.9 | 64.5 | 63.7 |
| Impatient Reader | **57.0** | **60.6** | **64.8** | **63.9** |

Table 5: Results for all the models and benchmarks on the CNN and Daily Mail datasets. The Uniform attention baseline sets all of the *m*(*t*) parameters to be equal.

Figure 2: Precision@Recall for the attention models on the CNN validation data.
**Frame-semantic benchmark** Considering our qualitative analysis (Table 2) the relatively weak
performance of the frame-semantic approach is not entirely surprising. While this one model is
clearly a simplification of what could be achieved with annotations from an NLP pipeline, it does
highlight the difficulty of the task when approached from a symbolic NLP perspective.
Two issues stand out when analysing the results in detail. First, the frame-semantic pipeline has a
poor degree of coverage with many relations not being picked up by our PropBank parser as they
do not adhere to the default predicate-argument structure. This effect is exacerbated by the type
of language used in the highlights that form the basis of our datasets. The second issue is that
the frame-semantic approach does not trivially scale to situations where several sentences, and thus
frames, are required to answer a query. This was true for the majority of queries in the dataset.
**Word distance benchmark** More surprising perhaps is the relatively strong performance of the
word distance benchmark, particularly relative to the frame-semantic benchmark, which we had
expected to perform better. Here, again, the nature of the datasets used can explain aspects of this
result. Where the frame-semantic model suffered due to the language used in the highlights, the word
distance model benefited. Particularly in the case of the Daily Mail dataset, highlights frequently
have significant lexical overlap with passages in the accompanying article, which makes it easy for
the word distance benchmark. For instance the query “ *Tom Hanks is friends with* **X** *’s manager,*
*Scooter Brown* ” has the phrase “ *... turns out he is good friends with Scooter Brown, manager for*
*Carly Rae Jepson* ” in the context. The word distance benchmark correctly aligns these two while
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
*Scooter Brown* ” has the phrase “ *... turns out he is good friends with Scooter Brown, manager for*
*Carly Rae Jepson* ” in the context. The word distance benchmark correctly aligns these two while
the frame-semantic approach fails to pick up the friendship or management relations when parsing
the query. We expect that on other types of machine reading data, where questions rather than Cloze
queries are used, this particular model would perform significantly worse.
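The word distance baseline can be sketched in a few lines. This is a reconstruction from the description above, not the authors' implementation; the unmatched-word penalty `max_penalty` and the simple token-level matching are our assumptions.

```python
def word_distance_score(doc_tokens, query_tokens, candidate, max_penalty=8):
    """Score a candidate entity by aligning the Cloze query around each of its
    occurrences in the document; lower total distance is better."""
    positions = [i for i, t in enumerate(doc_tokens) if t == candidate]
    if not positions:
        return float("inf")
    best = float("inf")
    for pos in positions:
        total = 0
        for q in query_tokens:
            if q == "X":  # the Cloze placeholder aligns to the candidate itself
                continue
            dists = [abs(i - pos) for i, t in enumerate(doc_tokens) if t == q]
            total += min(dists) if dists else max_penalty
        best = min(best, total)
    return best

def predict(doc_tokens, query_tokens, candidates):
    """Return the candidate entity with the smallest word distance score."""
    return min(candidates, key=lambda c: word_distance_score(doc_tokens, query_tokens, c))

# e.g. a simplified, entity-anonymised version of the example in the text:
doc = "ent1 is good friends with ent2 manager for ent3".split()
query = "ent1 is friends with X 's manager".split()
best = predict(doc, query, ["ent2", "ent3"])  # -> "ent2"
```

Because the score rewards lexical overlap near the candidate, this baseline benefits exactly when highlights copy phrases from the article, as the discussion above notes.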
**Neural models** Within the group of neural models explored here, the results paint a clear picture
with the Impatient and the Attentive Readers outperforming all other models. This is consistent
with our hypothesis that attention is a key ingredient for machine reading and question answering
due to the need to propagate information over long distances. The Deep LSTM Reader performs
surprisingly well, once again demonstrating that this simple sequential architecture can do a reasonable job of learning to abstract long sequences, even when they are up to two thousand tokens in
length. However this model does fail to match the performance of the attention based models, even
though these only use single layer LSTMs. [4] It is notable that the best performing models obtain an
setup we first feed the context document and subsequently the question into the encoder, while the *qca* model
starts by feeding in the question followed by the context document. We report results on the best model (underlined hyperparameters, *qca* setup). For the attention models we consider hidden layer sizes [64, 128, 256]
(Attentive) and [64, 128] (Impatient), single layer, initial learning rates [1e-2, 5e-3, 1e-3, 5e-4], and batch
sizes [16, 32] (Attentive) and [8, 12] (Impatient). For all models we used asynchronous RmsProp [20] with a
momentum of 0.9 and a decay of 0.95.
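For concreteness, one common formulation of an RmsProp update with momentum is sketched below. The asynchronous, distributed variant used for training is not reproduced; the exact update equations here are an assumption based on standard RmsProp, shown with the stated momentum 0.9 and decay 0.95.

```python
import math

def rmsprop_momentum_step(w, g, state, lr=1e-3, decay=0.95, momentum=0.9, eps=1e-8):
    """One RmsProp-with-momentum update on a scalar parameter w with gradient g.
    state = (n, delta): running average of squared gradients and previous step."""
    n, delta = state
    n = decay * n + (1 - decay) * g * g                # running average of g^2
    delta = momentum * delta - lr * g / math.sqrt(n + eps)  # momentum on the scaled step
    return w + delta, (n, delta)

# sanity run: minimise f(w) = w^2, whose gradient is 2w
w, state = 5.0, (0.0, 0.0)
first_w, _ = rmsprop_momentum_step(w, 2 * w, state, lr=0.05)  # single-step check
for _ in range(500):
    w, state = rmsprop_momentum_step(w, 2 * w, state, lr=0.01)
```

The gradient is divided by the root of its running second moment, so step sizes are roughly scale-invariant; momentum then smooths the normalised steps.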
4 Memory constraints prevented us from experimenting with deeper Attentive Readers.
-----


Figure 3: Attention heat maps from the Attentive Reader for two correctly answered validation set
queries (the correct answers are *ent23* and *ent63*, respectively). Both examples require significant
lexical generalisation and co-reference resolution in order to be answered correctly by a given model.
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
queries (the correct answers are *ent23* and *ent63*, respectively). Both examples require significant
lexical generalisation and co-reference resolution in order to be answered correctly by a given model.
accuracy of close to 60% which, according to our analysis in Table 2, is the proportion of questions
answerable without requiring complex inference.
The poor results of the Uniform Reader support our hypothesis of the significance of the attention
mechanism in the Attentive model’s performance as the only difference between these models is that
the attention variables are ignored in the Uniform Reader. The precision@recall statistics in Figure 2
again highlight the strength of the Impatient Reader compared with the other attention models.
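Precision@recall curves of the kind shown in Figure 2 can be computed by ranking queries by the model's answer confidence and answering only the most confident fraction. The sketch below is an illustrative reconstruction, not the authors' evaluation code.

```python
def precision_at_recall(confidences, correct):
    """For each recall level k/N (answering only the k most confident queries),
    precision is the accuracy on those k queries. Returns a list of (recall, precision)."""
    order = sorted(range(len(confidences)), key=lambda i: -confidences[i])
    curve, hits = [], 0
    for k, i in enumerate(order, start=1):
        hits += correct[i]
        curve.append((k / len(order), hits / k))
    return curve

# toy run: four queries, the model is right on its two most confident answers
curve = precision_at_recall([0.9, 0.8, 0.1, 0.7], [1, 1, 0, 0])
```

A model whose confidence correlates with correctness shows high precision at low recall, degrading toward overall accuracy at full recall.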
We can visualise the attention mechanism as a heatmap over a context document to gain further
insight into the models’ performance. The highlighted words show which tokens in the document
were attended to by the model. In addition we must also take into account that the vectors at each
token integrate long range contextual information via the bidirectional LSTM encoders. Figure
3 depicts heat maps for two queries that were correctly answered by the Attentive Reader. [5] In both
cases confidently arriving at the correct answer requires the model to perform both significant lexical
generalisation, e.g. ‘killed’ *→* ‘deceased’, and co-reference or anaphora resolution, e.g. ‘ *ent119* was
killed’ *→* ‘he was identified.’ However, it is also clear that the model is able to integrate these signals
with rough heuristic indicators such as the proximity of query words to the candidate answer.
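The heat-map rendering itself amounts to normalising per-token attention scores into intensities. The sketch below uses a softmax over made-up scores; the model's actual attention values come from the bidirectional LSTM encodings and the *m*(*t*) computation, which are not reproduced here.

```python
import math

def attention_heat(scores):
    """Turn unnormalised per-token attention scores into heatmap intensities
    via softmax; each token's intensity is its share of the attention mass."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# illustrative scores over a tiny context (values invented for the sketch)
tokens = ["ent119", "was", "killed", "he", "was", "identified"]
heat = attention_heat([2.0, 0.1, 3.0, 0.5, 0.1, 2.5])
top = tokens[max(range(len(heat)), key=heat.__getitem__)]  # most attended token
```

Plotting `heat` as per-token background intensity reproduces the style of Figure 3.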
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **5 Conclusion**
The supervised paradigm for training machine reading and comprehension models provides a
promising avenue for making progress on the path to building full natural language understanding
systems. We have demonstrated a methodology for obtaining a large number of document-query-answer triples and shown that recurrent and attention based neural networks provide an effective
modelling framework for this task. Our analysis indicates that the Attentive and Impatient Readers are able to propagate and integrate semantic information over long distances. In particular we
believe that the incorporation of an attention mechanism is the key contributor to these results.
The attention mechanism that we have employed is just one instantiation of a very general idea
which can be further exploited. However, the incorporation of world knowledge and multi-document
queries will also require the development of attention and embedding mechanisms whose complexity to query does not scale linearly with the data set size. There are still many queries requiring
complex inference and long range reference resolution that our models are not yet able to answer.
As such our data provides a scalable challenge that should support NLP research into the future. Further, significantly bigger training data sets can be acquired using the techniques we have described,
undoubtedly allowing us to train more expressive and accurate models.
5 Note that these examples were chosen as they were short; the average CNN validation document contained
809 tokens and 32 entities, so most instances were significantly harder to answer than these examples.
-----
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**5 Conclusion**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **References**
[1] Ellen Riloff and Michael Thelen. A rule-based question answering system for reading comprehension tests. In *Proceedings of the 2000 ANLP/NAACL Workshop on Reading Comprehension Tests as Evaluation for Computer-based Language Understanding Systems - Volume 6*,
ANLP/NAACL-ReadingComp ’00, pages 13–19, 2000.
[2] Hoifung Poon, Janara Christensen, Pedro Domingos, Oren Etzioni, Raphael Hoffmann, Chloe
Kiddon, Thomas Lin, Xiao Ling, Mausam, Alan Ritter, Stefan Schoenmackers, Stephen Soderland, Dan Weld, Fei Wu, and Congle Zhang. Machine reading at the University of Washington. In *Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and*
*Methodology for Learning by Reading*, FAM-LbR ’10, pages 87–95, 2010.
[3] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. *CoRR*, abs/1410.3916,
2014.
[4] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. Weakly supervised memory networks. *CoRR*, abs/1503.08895, 2015.
[5] Terry Winograd. *Understanding Natural Language* . Academic Press, Inc., Orlando, FL, USA,
1972.
[6] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by
jointly learning to align and translate. *CoRR*, abs/1409.0473, 2014.
[7] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of
visual attention. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, *Advances in Neural Information Processing Systems 27*, pages 2204–2212.
2014.
[8] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural
network for image generation. *CoRR*, abs/1502.04623, 2015.
[9] Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. MCTest: A challenge dataset
for the open-domain machine comprehension of text. In *EMNLP*, pages 193–203. ACL, 2013.
[10] Krysta Svore, Lucy Vanderwende, and Christopher Burges. Enhancing single-document summarization by combining RankNet and third-party sources. In *Proceedings of the Joint Conference of EMNLP-CoNLL*, Prague, Czech Republic, June 2007.
[11] Kristian Woodsend and Mirella Lapata. Automatic generation of story highlights. In *Proceedings of ACL*, pages 565–574, 2010.
[12] Wilson L Taylor. “Cloze procedure”: a new tool for measuring readability. *Journalism Quarterly*, 30:415–433, 1953.
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**References**",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
*ings of ACL*, pages 565–574, 2010.
[12] Wilson L Taylor. “Cloze procedure”: a new tool for measuring readability. *Journalism Quarterly*, 30:415–433, 1953.
[13] Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. Frame-semantic parsing. *Computational Linguistics*, 40(1):9–56, 2013.
[14] Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. Semantic frame
identification with distributed word representations. In *Proceedings of ACL*, June 2014.
[15] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network
for modelling sentences. *Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics*, June 2014.
[16] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel
Kuksa. Natural language processing (almost) from scratch. *Journal of Machine Learning*
*Research*, 12:2493–2537, November 2011.
[17] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural
networks. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger,
editors, *Advances in Neural Information Processing Systems 27*, pages 3104–3112. 2014.
[18] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural Computation*,
9(8):1735–1780, November 1997.
[19] Alex Graves. *Supervised Sequence Labelling with Recurrent Neural Networks*, volume 385 of
*Studies in Computational Intelligence* . Springer, 2012.
[20] T. Tieleman and G. Hinton. Lecture 6.5—RmsProp: Divide the gradient by a running average
of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
-----
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**References**",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
### **A Additional Heatmap Analysis**
We expand on the analysis of the attention mechanism presented in the paper by including visualisations for additional queries from the CNN validation dataset below. We consider examples from
the Attentive Reader as well as the Impatient Reader in this appendix.
**A.1** **Attentive Reader**
**Positive Instances** Figure 4 shows two positive examples from the CNN validation set that require reasonable levels of lexical generalisation and co-reference in order to be answered. The first
query in Figure 5 contains strong lexical cues through the quote, but requires identifying the entity
quoted, which is non-trivial in the context document. The final positive example (also in Figure 5)


selection process. The correct entity, *ent15*, and the predicted one, *ent81*, both refer to the same
person, but were not clustered together. Arguably this is a difficult clustering, as one entity refers
-----


Figure 6: Attention heat maps from the Attentive Reader for two wrongly answered validation set
queries. In the left case the model returns *ent85* (correct: *ent67* ), in the right example it gives *ent24*
(correct: *ent64* ). In both cases the query is unanswerable due to its ambiguous nature and the model
selects a plausible answer.
to “Kate Middleton” and the other to “The Duchess of Cambridge”. The right example shows a
situation in which the model fails as it perhaps gets too little information from the short query and
then selects the wrong cue with the term “claims” near the wrongly identified entity *ent1* (correct:
*ent74* ).
-----





-----






Figure 11: Attention of the Impatient Reader at time steps 10, 11 and 12.
-----
|
{
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
}
|
{
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**A Additional Heatmap Analysis**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
## **Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**
**David Duvenaud** *[†]* **, Dougal Maclaurin** *[†]* **, Jorge Aguilera-Iparraguirre**
**Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, Ryan P. Adams**
Harvard University
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": null,
"Header 4": null
}
|
{
"chunk_type": "title"
}
|
### **Abstract**
Predicting properties of molecules requires functions that take graphs as inputs.
Molecular graphs are usually preprocessed using hash-based functions to produce
fixed-size fingerprint vectors, which are used as features for making predictions.
We introduce a convolutional neural network that operates directly on graphs, allowing end-to-end learning of the feature pipeline. This architecture generalizes
standard molecular fingerprints. We show that these data-driven features are more
interpretable, and have better predictive performance on a variety of tasks.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**Abstract**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **1 Introduction**
Recent work in materials design has applied neural networks to virtual screening, where the task is
to predict the properties of novel molecules by generalizing from examples. One difficulty with this
task is that the input to the predictor, a molecule, can be of arbitrary size and shape. Most machine
learning pipelines can only handle inputs of a fixed size. The current state of the art is to use off-the-shelf fingerprint software to compute fixed-dimensional feature vectors, and use those features
as inputs to a fully-connected deep neural network or other standard machine learning method. This
formula was followed by [28, 3, 19]. During training, the molecular fingerprints were treated as
fixed.
In this paper, we replace the bottom layer of this stack – the fixed molecular fingerprints – with a differentiable neural network whose input is a graph representing the original molecule. In this graph,
vertices represent individual atoms and edges represent bonds. The lower layers of this network are
convolutional in the sense that the same local filter is applied to each atom and its neighborhood.
After several such layers, a global pooling step combines features from all the atoms in the molecule.
Neural graph fingerprints offer several advantages over fixed fingerprints:
*•* **Predictive performance.** By using data to adapt to the task at hand, machine-optimized
fingerprints can provide substantially better predictive performance than fixed fingerprints.
We compare the effectiveness of neural graph fingerprints against standard fingerprints at
predicting solubility, drug efficacy, and organic photovoltaic efficiency.
*•* **Parsimony.** Fixed fingerprints must be extremely large to encode all possible substructures
without overlap. For example, [28] used a fingerprint vector of size 43,000, after having
removed rarely-occurring features. Differentiable fingerprints can be optimized to encode
only relevant features, reducing downstream computation and regularization requirements.
*•* **Interpretability.** Standard fingerprints encode each fragment differently (up to random
collisions), with no notion of similarity between fragments. Each feature of a neural graph
fingerprint can be activated by similar but distinct molecular fragments, making the feature
representation more meaningful.
*†* Equal contribution.
-----


|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**1 Introduction**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
representation more meaningful.
*†* Equal contribution.
-----


Figure 1: *Left* : A visual representation of the computational graph of both standard circular fingerprints and neural graph fingerprints. First, at bottom, a graph is constructed matching the topology of
the molecule being fingerprinted, in which edges represent bonds. At each layer, information flows
between neighbors in the graph. Finally, each node in the graph turns on one bit in the fixed-length
fingerprint vector. This is a simplified sketch – in reality, each layer writes to the fingerprint. *Right* :
A more detailed graph also including the bond information used in each operation.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**1 Introduction**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **2 Circular fingerprints**
The state of the art in molecular fingerprints is extended-connectivity circular fingerprints
(ECFP) [21]. Circular fingerprints [6] are a refinement of the Morgan algorithm [17], designed to
identify which substructures are present in a molecule in a way that is invariant to atom-relabeling.
Circular fingerprints generate each layer’s features by applying a fixed hash function to the concatenated features of the neighborhood in the previous layer. The results of these hashes are then treated
as integer indices, where a 1 is written to the fingerprint vector at the index given by the hashed feature
vector at each node in the graph. Figure 1(left) shows a sketch of this computational architecture.
Ignoring collisions, each index of the fingerprint denotes the presence of a particular substructure.
The size of the substructures represented by each index depends on the depth of the network. Thus
the number of layers is referred to as the ‘radius’ of the fingerprints.
Circular fingerprints are analogous to convolutional networks in that they apply the same operation
locally everywhere, and combine information in a global pooling step.
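A toy sketch of this hashing scheme on a labelled graph is shown below. Python's built-in `hash` stands in for the fixed hash function of real ECFP implementations, and the atom/bond feature details are omitted, so this illustrates only the layer-wise hash-and-index structure.

```python
def circular_fingerprint(atoms, neighbors, radius, size):
    """Toy circular fingerprint on a labelled graph.
    atoms: node -> initial feature label; neighbors: node -> list of nodes."""
    fp = [0] * size
    feats = {a: hash(lbl) for a, lbl in atoms.items()}    # lookup atom features
    for _ in range(radius):                               # one pass per layer
        new = {}
        for a in atoms:
            # concatenate own feature with sorted neighbour features (canonical order)
            v = (feats[a],) + tuple(sorted(feats[n] for n in neighbors[a]))
            new[a] = hash(v)                              # fixed hash of the neighbourhood
        feats = new
        for a in atoms:
            fp[feats[a] % size] = 1                       # write 1 at the hashed index
    return fp

# two relabellings of the same C-O-C chain give identical fingerprints
atoms1, nbrs1 = {0: "C", 1: "O", 2: "C"}, {0: [1], 1: [0, 2], 2: [1]}
atoms2, nbrs2 = {9: "C", 7: "O", 5: "C"}, {9: [7], 7: [9, 5], 5: [7]}
fp1 = circular_fingerprint(atoms1, nbrs1, 2, 64)
fp2 = circular_fingerprint(atoms2, nbrs2, 2, 64)
```

Sorting neighbour features before hashing is what makes the result invariant to atom relabelling, as the text above describes.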
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**2 Circular fingerprints**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **3 Creating a differentiable fingerprint**
The space of possible network architectures is large. In the spirit of starting from a known-good
configuration, we chose an architecture analogous to existing fingerprints. This section describes
our replacement of each discrete operation in circular fingerprints with a differentiable analog.
**Hashing** The purpose of the hash functions applied at each layer of circular fingerprints is to
combine information about each atom and its neighboring substructures. This ensures that any
change in a fragment, no matter how small, will lead to a different fingerprint index being activated.
We replace the hash operation with a single layer of a neural network. Using a smooth function
allows the activations to be similar when the local molecular structure varies in minor ways.
**Indexing** Circular fingerprints use an indexing operation to combine each atom’s feature vector
into a fingerprint of the whole molecule. Each atom sets a single bit of the fingerprint to one,
at an index determined by the hash of its feature vector. This pooling-like operation converts an
arbitrary-sized graph into a fixed-sized vector. For small molecules and a large fingerprint length,
the fingerprints are always sparse. We use the softmax operation as a differentiable analog of
indexing. In essence, each atom is asked to classify itself as belonging to a single category. The sum
of all these classification label vectors produces the final fingerprint. This operation is analogous to
the pooling operation in standard convolutional neural networks.
-----
**Algorithm 1** Circular fingerprints
1: **Input:** molecule, radius *R*, fingerprint length *S*
2: **Initialize:** fingerprint vector **f** ← **0**_*S*
3: **for** each atom *a* in molecule
4:   **r**_*a* ← *g*(*a*)  ▷ lookup atom features
5: **for** *L* = 1 to *R*  ▷ for each layer
6:   **for** each atom *a* in molecule
7:     **r**_1 … **r**_*N* = neighbors(*a*)
8:     **v** ← [**r**_*a*, **r**_1, …, **r**_*N*]  ▷ concatenate
9:     **r**_*a* ← hash(**v**)  ▷ hash function
10:    *i* ← mod(*r*_*a*, *S*)  ▷ convert to index
11:    **f**_*i* ← 1  ▷ write 1 at index
12: **Return:** binary vector **f**
**Algorithm 2** Neural graph fingerprints
1: **Input:** molecule, radius *R*, hidden weights *H*_1^1 … *H*_*R*^5, output weights *W*_1 … *W*_*R*
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**3 Creating a differentiable fingerprint**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
12: **Return:** binary vector **f**
**Algorithm 2** Neural graph fingerprints
1: **Input:** molecule, radius *R*, hidden weights *H*_1^1 … *H*_*R*^5, output weights *W*_1 … *W*_*R*
2: **Initialize:** fingerprint vector **f** ← **0**_*S*
3: **for** each atom *a* in molecule
4:   **r**_*a* ← *g*(*a*)  ▷ lookup atom features
5: **for** *L* = 1 to *R*  ▷ for each layer
6:   **for** each atom *a* in molecule
7:     **r**_1 … **r**_*N* = neighbors(*a*)
8:     **v** ← **r**_*a* + Σ_{*i*=1}^{*N*} **r**_*i*  ▷ sum
9:     **r**_*a* ← σ(**v** *H*_*L*^*N*)  ▷ smooth function
10:    **i** ← softmax(**r**_*a* *W*_*L*)  ▷ sparsify
11:    **f** ← **f** + **i**  ▷ add to fingerprint
12: **Return:** real-valued vector **f**
Figure 2: Pseudocode of circular fingerprints ( *left* ) and neural graph fingerprints ( *right* ). Differences
are highlighted in blue. Every non-differentiable operation is replaced with a differentiable analog.
**Canonicalization** Circular fingerprints are identical regardless of the ordering of atoms in each
neighborhood. This invariance is achieved by sorting the neighboring atoms according to their
features and bond features. We experimented with this sorting scheme, and also with applying the
local feature transform to all possible permutations of the local neighborhood. An alternative to
canonicalization is to apply a permutation-invariant function, such as summation. In the interests of
simplicity and scalability, we chose summation.
Algorithms 1 and 2 summarize these two algorithms and highlight their differences. Given a fingerprint length *L*, and *F* features at each layer, the parameters of neural graph fingerprints consist of
a separate output weight matrix of size *F × L* for each layer, as well as a set of hidden-to-hidden
weight matrices of size *F × F* at each layer, one for each possible number of bonds an atom can
have (up to 5 in organic molecules).
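Algorithm 2 can be sketched with NumPy as follows. The weight shapes follow the parameter description above (one *F* × *S* output matrix per layer, and one *F* × *F* hidden matrix per layer per neighbour count); initialisation and training are omitted, so this is a forward-pass sketch only, with invented sizes.

```python
import numpy as np

def neural_fingerprint(atom_feats, neighbors, H, W, radius):
    """Forward pass of Algorithm 2. atom_feats: (n_atoms, F). H[L][N]: (F, F)
    hidden weights for layer L, neighbour count N. W[L]: (F, S) output weights."""
    n_atoms, F = atom_feats.shape
    S = W[0].shape[1]
    f = np.zeros(S)
    r = atom_feats.copy()
    for L in range(radius):
        new_r = np.empty_like(r)
        for a in range(n_atoms):
            v = r[a] + r[neighbors[a]].sum(axis=0)              # line 8: sum self + neighbours
            new_r[a] = np.tanh(v @ H[L][len(neighbors[a])])     # line 9: smooth "hash"
        r = new_r
        z = r @ W[L]                                            # line 10: softmax as soft indexing
        z = np.exp(z - z.max(axis=1, keepdims=True))
        f += (z / z.sum(axis=1, keepdims=True)).sum(axis=0)     # line 11: pool into fingerprint
    return f

# tiny example: a 3-atom chain, F=4 features, fingerprint length S=8, radius 2
rng = np.random.default_rng(0)
neighbors = [[1], [0, 2], [1]]
H = [{N: rng.normal(size=(4, 4)) for N in (1, 2)} for _ in range(2)]
W = [rng.normal(size=(4, 8)) for _ in range(2)]
fp = neural_fingerprint(rng.normal(size=(3, 4)), neighbors, H, W, radius=2)
```

Since each atom's softmax row sums to one, the fingerprint's total mass is exactly `n_atoms * radius`, which makes the pooling step easy to sanity-check.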
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**3 Creating a differentiable fingerprint**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **4 Experiments**
Circular fingerprints can be interpreted as a special case of neural graph fingerprints having large
random weights. This is reasonable to expect, since in the limit of large input weights, tanh
nonlinearities approach step functions, which when concatenated resemble a hash function. Also,
in the limit of large input weights, the softmax operator approaches a one-hot-coded argmax
operator, which is analogous to an indexing operation.
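The limiting behaviour described here is easy to verify numerically: scaling the inputs drives tanh toward a step function and softmax toward a one-hot argmax. This is a generic illustration with invented scores, not tied to any trained fingerprint.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

scores = [0.3, 1.1, 0.7]
soft = softmax(scores)                        # diffuse distribution
sharp = softmax([10 * s for s in scores])     # large weights: nearly one-hot
steps = [math.tanh(10 * s) for s in scores]   # large weights: tanh saturates toward 1
```

The argmax position is unchanged by the scaling; only the sharpness of the distribution grows, which is exactly why large-weight neural fingerprints mimic the hash-and-index behaviour of circular fingerprints.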
One use for molecular fingerprints is to compute distances between molecules. We examined
whether ECFP-based distances were similar to random neural fingerprint-based distances. Figure 3 (left) shows a scatterplot of pairwise distances calculated using either circular or neural
fingerprints. Fingerprints had length 2048, and were calculated on molecules from the solubility
dataset [4]. Distance was measured using a continuous generalization of the Tanimoto (a.k.a. Jaccard) similarity measure, given by
$\mathrm{distance}(\mathbf{x}, \mathbf{y}) = 1 - \frac{\sum_i \min(x_i, y_i)}{\sum_i \max(x_i, y_i)}$ (1)
There is a correlation of *r* = 0.823 between the distances. The line of points on the far left shows
that for some pairs of molecules, binary ECFP fingerprints have exactly zero overlap.
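The continuous Tanimoto distance of Equation (1) is straightforward to implement; for 0/1 vectors it reduces to the familiar binary Jaccard distance.

```python
def tanimoto_distance(x, y):
    """Continuous Tanimoto (Jaccard) distance of Equation (1):
    1 - sum_i min(x_i, y_i) / sum_i max(x_i, y_i)."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return 1.0 if den == 0 else 1.0 - num / den

# binary fingerprints with partial overlap (invented toy vectors)
d = tanimoto_distance([1, 0, 1, 0], [1, 1, 0, 0])  # one shared bit of three set overall
```

A distance of exactly 1 corresponds to the zero-overlap pairs visible as the leftmost line of points in Figure 3.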
Figure 3 (right) shows that the predictive performance of random neural fingerprints is similar to that
of circular fingerprints. Shown are average predictive performance on the same dataset using linear
regression on top of fingerprints. Both curves follow a similar trajectory, suggesting that randomly
initialized neural fingerprints are similar to circular fingerprints. In contrast, the performance of
neural fingerprints with small random weights follows a different curve, and is substantially better.
This suggests the possibility that even for untrained neural weights, their relatively smooth activation
helps generalization.
-----
[Figure 3 panels: *left*, scatterplot of neural vs. circular fingerprint distances (*r* = 0.823); *right*, RMSE (log Mol/L) against fingerprint radius for circular fingerprints and random convolutions with large and small parameters.]
Figure 3: *Left:* Comparison of pairwise distances between molecules, measured using circular fingerprints and neural graph fingerprints with large random weights. *Right* : Predictive performance
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
|||
Figure 3: *Left:* Comparison of pairwise distances between molecules, measured using circular fingerprints and neural graph fingerprints with large random weights. *Right* : Predictive performance
of circular fingerprints (red), neural graph fingerprints with fixed large random weights (green) and
neural graph fingerprints with fixed small random weights (blue). The performance of neural graph
fingerprints with large random weights closely matches the performance of circular fingerprints.
**4.1** **Examining learned features**
To demonstrate that neural graph fingerprints are interpretable, we show examples of classes of
substructures which activate individual features in a fingerprint. Circular fingerprint features can
each only be activated by a single fragment of a single radius, except for accidental collisions. In
contrast, neural graph fingerprints can be activated by variations of the same structure, making them
more parsimonious and interpretable.
**Solubility features** Figure 4 shows the fragments that maximally activate the most predictive features of a fingerprint. The fingerprint network was trained as input to a linear model predicting
solubility, as measured in [4]. The top fingerprint has a positive predictive relationship with solubility, and is most activated by fragments containing a hydrophilic R-OH group, a standard indicator
of solubility. The bottom fingerprint, strongly predictive of insolubility, is activated by non-polar repeated ring structures.
Figure 4: Examining fingerprints optimized for predicting solubility. Shown here are representative
samples of molecular fragments (highlighted in blue) which most activate different features of the
fingerprint. *Top row:* The feature having the strongest predictive weight for solubility. *Bottom row:*
The feature having the strongest predictive weight against solubility.
-----
**Toxicity features** We trained the same model architecture to predict toxicity, as measured in two
different datasets in [26]. Figure 5 shows fragments which maximally activate the feature most
predictive of toxicity.
Fragments most activated by toxicity feature on SR-MMP dataset
Fragments most activated by toxicity feature on NR-AHR dataset
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
predictive of toxicity.
Fragments most activated by toxicity feature on SR-MMP dataset
Fragments most activated by toxicity feature on NR-AHR dataset
Figure 5: Visualizing fingerprints optimized for predicting toxicity. Shown here are representative
samples of molecular fragments (highlighted in red) which most activate the feature most predictive
of toxicity, on two different datasets. *Top row:* the most predictive feature identifies groups containing a sulphur atom attached to an aromatic ring. *Bottom row:* the most predictive feature identifies
fused aromatic rings, also known as polycyclic aromatic hydrocarbons, well-known carcinogens.
[27] constructed similar visualizations, but in a semi-manual way: to determine which toxic fragments activated a given neuron, they searched over a hand-made list of toxic substructures and chose
the one most correlated with a given neuron. In contrast, our visualizations are generated automatically, without the need to restrict the range of possible answers beforehand.
**4.2** **Predictive Performance**
We ran several experiments to compare the predictive performance of neural graph fingerprints to
that of the standard state-of-the-art setup: circular fingerprints fed into a fully-connected neural
network.
**Experimental setup** Our pipeline takes as input the SMILES [30] string encoding of each
molecule, which is then converted into a graph using RDKit [20]. We also used RDKit to produce
the extended circular fingerprints used in the baseline. Hydrogen atoms were treated implicitly.
In our convolutional networks, the initial atom and bond features were chosen to be similar to those
used by ECFP: initial atom features concatenated a one-hot encoding of the atom's element, its
degree, the number of attached hydrogen atoms, the implicit valence, and an aromaticity indicator. The bond features concatenated whether the bond type was single, double, triple,
or aromatic, whether the bond was conjugated, and whether the bond was part of a ring.
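The featurization described above can be sketched in plain Python. This is a hedged illustration: the element vocabulary, the dict-based atom/bond representation, and the scalar encoding of degree, hydrogen count, and valence are assumptions for clarity, not the authors' RDKit-based code.

```python
# Toy ECFP-like input featurization, roughly following the description above.
# Atoms and bonds are plain dicts rather than RDKit objects.

ELEMENTS = ["C", "N", "O", "S", "F", "Cl", "other"]  # assumed vocabulary

def one_hot(value, choices):
    """One-hot encode `value` over `choices`, mapping unknowns to the last slot."""
    vec = [0.0] * len(choices)
    idx = choices.index(value) if value in choices else len(choices) - 1
    vec[idx] = 1.0
    return vec

def atom_features(atom):
    # element one-hot + degree + attached-H count + implicit valence + aromaticity flag
    return (one_hot(atom["element"], ELEMENTS)
            + [float(atom["degree"]),
               float(atom["num_h"]),
               float(atom["implicit_valence"]),
               1.0 if atom["aromatic"] else 0.0])

def bond_features(bond):
    # bond-type one-hot (single/double/triple/aromatic) + conjugated + in-ring flags
    return (one_hot(bond["type"], ["single", "double", "triple", "aromatic"])
            + [1.0 if bond["conjugated"] else 0.0,
               1.0 if bond["in_ring"] else 0.0])

aromatic_carbon = {"element": "C", "degree": 3, "num_h": 1,
                   "implicit_valence": 4, "aromatic": True}
print(len(atom_features(aromatic_carbon)))  # 7 element slots + 4 scalars = 11
```

In the real pipeline these vectors would come from RDKit accessors on the parsed molecular graph; here the point is only the concatenation structure of the initial features.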
**Training and Architecture** Training used batch normalization [11]. We also experimented with
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
or aromatic, whether the bond was conjugated, and whether the bond was part of a ring.
**Training and Architecture** Training used batch normalization [11]. We also experimented with
tanh vs. relu activation functions for both the neural fingerprint network layers and the fully-connected network layers; relu had a slight but consistent performance advantage on the validation set. We also experimented with dropconnect [29], a variant of dropout in which weights are
randomly set to zero instead of hidden units, but found that it led to worse validation error in general. Each experiment optimized for 10,000 minibatches of size 100 using the Adam algorithm [13],
a variant of RMSprop that includes momentum.
**Hyperparameter Optimization** To optimize hyperparameters, we used random search. The hyperparameters of all methods were optimized using 50 trials for each cross-validation fold. The
following hyperparameters were optimized: log learning rate, log of the initial weight scale, the log
L2 penalty, fingerprint length, fingerprint depth (up to 6), and the size of the hidden layer in the
fully-connected network. Additionally, the size of the hidden feature vector in the convolutional
neural fingerprint networks was optimized.
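The random-search setup above can be sketched as follows. The sampling ranges and the toy validation objective are assumptions for illustration; the source only specifies which hyperparameters were searched and the 50-trials-per-fold budget.

```python
import math
import random

# Illustrative random search over log-scale hyperparameters, in the spirit of
# the setup described above (50 trials). Ranges and the quadratic toy objective
# are assumptions, not the authors' values.

random.seed(0)

def sample_hyperparams():
    return {
        "learning_rate": 10 ** random.uniform(-4, -1),  # sampled on a log scale
        "init_scale":    10 ** random.uniform(-3, 0),
        "l2_penalty":    10 ** random.uniform(-6, -2),
        "fp_length":     random.choice([16, 32, 64, 128]),
        "fp_depth":      random.randint(1, 6),           # "up to 6"
        "hidden_size":   random.choice([32, 64, 128]),
    }

def toy_validation_loss(hp):
    # Stand-in for training + validation; prefers learning rates near 1e-2.
    return (math.log10(hp["learning_rate"]) + 2) ** 2 + hp["l2_penalty"]

best = min((sample_hyperparams() for _ in range(50)), key=toy_validation_loss)
print(best["fp_depth"])  # some depth in 1..6
```

Sampling the learning rate and penalties on a log scale is the standard choice when plausible values span orders of magnitude, as the "log learning rate" wording above suggests.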
| Dataset | Solubility [4] | Drug efficacy [5] | Photovoltaic efficiency [8] |
|---|---|---|---|
| Units | log Mol/L | EC50 in nM | percent |
| Predict mean | 4.29 ± 0.40 | 1.47 ± 0.07 | 6.40 ± 0.09 |
| Circular FPs + linear layer | 1.84 ± 0.08 | **1.13 ± 0.03** | 2.62 ± 0.07 |
| Circular FPs + neural net | 1.40 ± 0.15 | 1.24 ± 0.03 | 2.04 ± 0.07 |
| Neural FPs + linear layer | 0.74 ± 0.09 | **1.16 ± 0.03** | 2.71 ± 0.13 |
| Neural FPs + neural net | **0.53 ± 0.07** | **1.17 ± 0.03** | **1.44 ± 0.11** |
Table 1: Mean predictive accuracy of neural fingerprints compared to standard circular fingerprints.
**Datasets** We compared the performance of standard circular fingerprints against neural graph fingerprints on a variety of domains:
*•* **Solubility:** The aqueous solubility of 1144 molecules as measured by [4].
*•* **Drug efficacy:** The half-maximal effective concentration (EC50) *in vitro* of 10,000
molecules against a sulfide-resistant strain of *P. falciparum*, the parasite that causes malaria,
as measured by [5].
*•* **Organic photovoltaic efficiency:** The Harvard Clean Energy Project [8] uses expensive
DFT simulations to estimate the photovoltaic efficiency of organic molecules. We used a
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
as measured by [5].
*•* **Organic photovoltaic efficiency:** The Harvard Clean Energy Project [8] uses expensive
DFT simulations to estimate the photovoltaic efficiency of organic molecules. We used a
subset of 20,000 molecules from this dataset.
**Predictive accuracy** We compared the performance of circular fingerprints and neural graph fingerprints under two conditions: In the first condition, predictions were made by a linear layer using
the fingerprints as input. In the second condition, predictions were made by a one-hidden-layer
neural network using the fingerprints as input. The neural graph fingerprints were simultaneously
optimized during training by backpropagating through the linear layer or neural network. Results are summarized in
Table 1. In all experiments, the neural graph fingerprints match or beat the accuracy of circular
fingerprints, and the methods with a neural network on top of the fingerprints typically outperformed
the linear layers.
**Software** Automatic differentiation (AD) software packages such as Theano [1] significantly
speed up development time by providing gradients automatically, but can only handle limited control
structures and indexing. Since we required relatively complex control flow and indexing in order
to implement variants of Algorithm 2, we used a more flexible automatic differentiation package
for Python called [Autograd](http://github.com/HIPS/autograd). This package handles standard
Numpy [18] code, and can differentiate code containing while loops, branches, and indexing.
Code for computing neural fingerprints and producing visualizations is available at
[github.com/HIPS/neural-fingerprint](http://github.com/HIPS/neural-fingerprint).
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **5 Limitations**
**Computational cost** Neural fingerprints have the same asymptotic complexity in the number of
atoms and the depth of the network as circular fingerprints, but have additional terms due to the
matrix multiplies necessary to transform the feature vector at each step. To be precise, computing
the neural fingerprint of depth *R* and fingerprint length *L* for a molecule with *N* atoms, using a molecular
convolutional net having *F* features at each layer, costs *O*(*RNFL* + *RNF*²). In practice, training
neural networks on top of circular fingerprints usually took several minutes, while training both the
fingerprints and the network on top took on the order of an hour.
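A back-of-the-envelope version of this cost expression can be written down directly; the constant factors here are illustrative only, and the example parameter values are assumptions, not the paper's settings.

```python
# Approximate multiply count for a depth-R neural fingerprint, following the
# O(RNFL + RNF^2) expression above.

def neural_fp_cost(R, N, F, L):
    """Per layer, each of N atoms does an F x F feature transform (the F^2
    term) and an F x L write into the fingerprint vector (the F*L term)."""
    return R * N * F * L + R * N * F ** 2

# Doubling the feature size roughly quadruples the F^2 term but only
# doubles the F*L term:
small = neural_fp_cost(R=4, N=30, F=20, L=512)
large = neural_fp_cost(R=4, N=30, F=40, L=512)
print(small, large)  # 1276800 2649600
```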
**Limited computation at each layer** How complicated should we make the function that goes
from one layer of the network to the next? In this paper we chose the simplest feasible architecture:
a single layer of a neural network. However, it may be fruitful to apply multiple layers of nonlinearities between each message-passing step (as in [22]), or to make information preservation easier by
adapting the Long Short-Term Memory [10] architecture to pass information upwards.
**Limited information propagation across the graph** The local message-passing architecture developed in this paper scales well in the size of the graph (due to the low degree of organic molecules),
but its ability to propagate information across the graph is limited by the depth of the network. This
may be appropriate for small graphs such as those representing the small organic molecules used in
this paper. However, in the worst case, it can take a network of depth *N*/2 to distinguish between graphs
of size *N*. To avoid this problem, [2] proposed a hierarchical clustering of graph substructures. A
tree-structured network could examine the structure of the entire graph using only log( *N* ) layers,
but would require learning to parse molecules. Techniques from natural language processing [25]
might be fruitfully adapted to this domain.
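The limited-propagation point can be illustrated with a toy reachability computation (not from the paper): after *R* message-passing rounds, a node's state depends only on its *R*-hop neighborhood, so on a path graph, information from one end needs *N* − 1 rounds to reach the other.

```python
# Toy illustration of limited information propagation under local
# message passing on a path graph.

def reachable(adj, start, rounds):
    """Nodes whose information can influence `start` after `rounds` steps
    of neighbor-to-neighbor message passing."""
    seen = {start}
    for _ in range(rounds):
        seen |= {nbr for node in seen for nbr in adj[node]}
    return seen

N = 8
# Path graph 0 - 1 - 2 - ... - 7
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}

print(len(reachable(path, 0, rounds=3)))      # only nodes 0..3 visible: 4
print(len(reachable(path, 0, rounds=N - 1)))  # the whole path: 8
```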
**Inability to distinguish stereoisomers** Special bookkeeping is required to distinguish between
stereoisomers, including enantiomers (mirror images of molecules) or *cis/trans* isomers (rotation
around double bonds). Most circular fingerprint implementations have the option to make these
distinctions. Neural fingerprints could be extended to be sensitive to stereoisomers, but this remains
a task for future work.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**5 Limitations**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **6 Related work**
This work is similar in spirit to the neural Turing machine [7], in the sense that we take an existing
discrete computational architecture, and make each part differentiable in order to do gradient-based
optimization.
**Convolutional neural networks** Convolutional neural networks have been used to model images,
speech, and time series [14]. However, standard convolutional architectures use a fixed computational graph, making them difficult to apply to objects of varying size or structure, such as molecules.
More recently, [12] and others have developed a convolutional neural network architecture for modeling sentences of varying length.
**Neural fingerprints** The most closely related work is [15], who build a neural network having
graph-valued inputs. Their approach is to remove all cycles and build the graph into a tree structure,
choosing one atom to be the root. A recursive neural network [23, 24] is then run from the leaves
to the root to produce a fixed-size representation. Because a graph having *N* nodes has *N* possible
roots, all *N* possible graphs are constructed. The final descriptor is a sum of the representations
computed by all distinct graphs. There are as many distinct graphs as there are atoms in the molecule.
The computational cost of this method thus grows as *O*(*F*²*N*²), where *F* is the size of the feature
vector and *N* is the number of atoms, making it less suitable for large molecules.
**Neural nets for quantitative structure-activity relationship (QSAR)** The modern standard for
predicting properties of novel molecules is to compose circular fingerprints with fully-connected
neural networks or other regression methods. [3] used circular fingerprints as inputs to an ensemble
of neural networks, Gaussian processes, and random forests. [19] used circular fingerprints (of depth
2) as inputs to a multitask neural network, showing that multiple tasks helped performance.
**Neural networks on fixed graphs** [2] introduce convolutional networks on graphs in the regime
where the graph structure is fixed, and each training example differs only in having different features
at the vertices of the same graph. In contrast, our networks address the situation where each training
input is a different graph.
**Neural networks on input-dependent graphs** [22] propose a neural network model for graphs
having an interesting training procedure. The forward pass consists of running a message-passing
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**6 Related work**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
input is a different graph.
**Neural networks on input-dependent graphs** [22] propose a neural network model for graphs
having an interesting training procedure. The forward pass consists of running a message-passing
scheme to equilibrium, a fact which allows the reverse-mode gradient to be computed without storing
the entire forward computation. They apply their network to predicting mutagenesis of molecular
compounds as well as web page rankings. [16] also propose a neural network model for graphs
with a learning scheme whose inner loop optimizes not the training loss, but rather the correlation
between each newly-proposed vector and the training error residual. They apply their model to a
dataset of boiling points of 150 molecular compounds. Our paper builds on these ideas, with the
following differences: Our method replaces their complex training algorithms with simple gradientbased optimization, generalizes existing circular fingerprint computations, and applies these networks in the context of modern QSAR pipelines which use neural networks on top of the fingerprints
to increase model capacity.
**Unrolled inference algorithms** [9] and others have noted that iterative inference procedures
sometimes resemble the feedforward computation of a recurrent neural network. One natural extension of these ideas is to parameterize each inference step, and train a neural network to approximately
match the output of exact inference using only a small number of iterations. The neural fingerprint,
when viewed in this light, resembles an unrolled message-passing algorithm on the original graph.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**6 Related work**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **7 Conclusion**
We generalized existing hand-crafted molecular features to allow their optimization for diverse tasks.
By making each operation in the feature pipeline differentiable, we can use standard neural-network
training methods to scalably optimize the parameters of these neural molecular fingerprints end-toend. We demonstrated the interpretability and predictive performance of these new fingerprints.
Data-driven features have already replaced hand-crafted features in speech recognition, machine
vision, and natural-language processing. Carrying out the same task for virtual screening, drug
design, and materials design is a natural next step.
**Acknowledgments**
We thank Edward Pyzer-Knapp, Jennifer Wei, and Samsung Advanced Institute of Technology for
their support.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**7 Conclusion**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### **References**
[1] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud
Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[2] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally
connected networks on graphs. *arXiv preprint arXiv:1312.6203*, 2013.
[3] George E. Dahl, Navdeep Jaitly, and Ruslan Salakhutdinov. Multi-task neural networks for
QSAR predictions. *arXiv preprint arXiv:1406.1231*, 2014.
[4] John S. Delaney. ESOL: Estimating aqueous solubility directly from molecular structure. *Jour-*
*nal of Chemical Information and Computer Sciences*, 44(3):1000–1005, 2004.
[5] Francisco-Javier Gamo, Laura M Sanz, Jaume Vidal, Cristina de Cozar, Emilio Alvarez,
Jose-Luis Lavandera, Dana E Vanderwall, Darren VS Green, Vinod Kumar, Samiul Hasan,
et al. Thousands of chemical starting points for antimalarial lead identification. *Nature*,
465(7296):305–310, 2010.
[6] Robert C. Glem, Andreas Bender, Catrin H. Arnby, Lars Carlsson, Scott Boyer, and James
Smith. Circular fingerprints: flexible molecular descriptors with applications from physical
chemistry to ADME. *IDrugs: the investigational drugs journal*, 9(3):199–204, 2006.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. *arXiv preprint*
*arXiv:1410.5401*, 2014.
[8] Johannes Hachmann, Roberto Olivares-Amaya, Sule Atahan-Evrenk, Carlos Amador-Bedolla,
Roel S Sánchez-Carrera, Aryeh Gold-Parker, Leslie Vogt, Anna M Brockway, and Alán
Aspuru-Guzik. The Harvard clean energy project: large-scale computational screening and
design of organic photovoltaics on the world community grid. *The Journal of Physical Chem-*
*istry Letters*, 2(17):2241–2251, 2011.
[9] John R Hershey, Jonathan Le Roux, and Felix Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. *arXiv preprint arXiv:1409.2574*, 2014.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*,
9(8):1735–1780, 1997.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. *arXiv preprint arXiv:1502.03167*, 2015.
[12] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**References**",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
by reducing internal covariate shift. *arXiv preprint arXiv:1502.03167*, 2015.
[12] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network
for modelling sentences. *Proceedings of the 52nd Annual Meeting of the Association for Com-*
*putational Linguistics*, June 2014.
[13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. *arXiv preprint*
*arXiv:1412.6980*, 2014.
[14] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series.
*The handbook of brain theory and neural networks*, 3361, 1995.
[15] Alessandro Lusci, Gianluca Pollastri, and Pierre Baldi. Deep architectures and deep learning
in chemoinformatics: the prediction of aqueous solubility for drug-like molecules. *Journal of*
*chemical information and modeling*, 53(7):1563–1575, 2013.
[16] Alessio Micheli. Neural network for graphs: A contextual constructive approach. *Neural*
*Networks, IEEE Transactions on*, 20(3):498–511, 2009.
[17] H.L. Morgan. The generation of a unique machine description for chemical structure. *Journal*
*of Chemical Documentation*, 5(2):107–113, 1965.
[18] Travis E Oliphant. Python for scientific computing. *Computing in Science & Engineering*,
9(3):10–20, 2007.
[19] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay
Pande. Massively multitask networks for drug discovery. *arXiv:1502.02072*, 2015.
[20] RDKit: Open-source cheminformatics. www.rdkit.org. [accessed 11-April-2013].
[21] David Rogers and Mathew Hahn. Extended-connectivity fingerprints. *Journal of Chemical*
*Information and Modeling*, 50(5):742–754, 2010.
[22] F. Scarselli, M. Gori, Ah Chung Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural
network model. *Neural Networks, IEEE Transactions on*, 20(1):61–80, Jan 2009.
[23] Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng.
Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In *Advances*
*in Neural Information Processing Systems*, pages 801–809, 2011.
[24] Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In *Pro-*
*ceedings of the Conference on Empirical Methods in Natural Language Processing*, pages
151–161. Association for Computational Linguistics, 2011.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**References**",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
*ceedings of the Conference on Empirical Methods in Natural Language Processing*, pages
151–161. Association for Computational Linguistics, 2011.
[25] Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic
representations from tree-structured long short-term memory networks. *arXiv preprint*
*arXiv:1503.00075*, 2015.
[26] Tox21 Challenge. National Center for Advancing Translational Sciences. [http://tripod.nih.gov/tox21/challenge](http://tripod.nih.gov/tox21/challenge), 2014. [Online; accessed 2-June-2015].
[27] Thomas Unterthiner, Andreas Mayr, Günter Klambauer, and Sepp Hochreiter. Toxicity prediction using deep learning. *arXiv preprint arXiv:1503.01445*, 2015.
[28] Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo
Ceulemans, and Sepp Hochreiter. Deep learning as an opportunity in virtual screening. In
*Advances in Neural Information Processing Systems*, 2014.
[29] Li Wan, Matthew Zeiler, Sixin Zhang, Yann L. Cun, and Rob Fergus. Regularization of neural
networks using dropconnect. In *International Conference on Machine Learning*, 2013.
[30] David Weininger. SMILES, a chemical language and information system. *Journal of chemical*
*information and computer sciences*, 28(1):31–36, 1988.
|
{
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
}
|
{
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**References**",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
## A Deep Reinforced Model for Abstractive Summarization
### Romain Paulus, Caiming Xiong and Richard Socher {rpaulus,cxiong,rsocher}@salesforce.com
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "Romain Paulus, Caiming Xiong and Richard Socher {rpaulus,cxiong,rsocher}@salesforce.com",
"Header 4": null
}
|
{
"chunk_type": "title"
}
|
### Abstract
Attentional, RNN-based encoder-decoder
models for abstractive summarization
have achieved good performance on short
input and output sequences. However, for
longer documents and summaries, these
models often include repetitive and incoherent phrases. We introduce a neural
network model with intra-attention and a
new training method. This method combines standard supervised word prediction
and reinforcement learning (RL). Models
trained only with the former often exhibit
“exposure bias” – they assume ground
truth is provided at each step during training. However, when standard word prediction is combined with the global sequence prediction training of RL, the resulting summaries become more readable.
We evaluate this model on the CNN/Daily
Mail and New York Times datasets. Our
model obtains a 41.16 ROUGE-1 score
on the CNN/Daily Mail dataset, a 5.7-point absolute improvement over previous
state-of-the-art models. It also performs
well as the first abstractive model on the
New York Times corpus. Human evaluation also shows that our model produces
higher quality summaries.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "Abstract",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 1 Introduction
Text summarization is the process of automatically generating natural language summaries from
an input document while retaining the important
points.
By condensing large quantities of information
into short, informative summaries, summarization
can aid many downstream applications such as
creating news digests, search, and report generation.
There are two prominent types of summarization algorithms. First, extractive summarization
systems form summaries by copying parts of the
input (Neto et al., 2002; Dorr et al., 2003). Second, abstractive summarization systems generate
new phrases, possibly rephrasing or using words
that were not in the original text (Chopra et al.,
2016; Nallapati et al., 2016; Zeng et al., 2016).
Recently, neural network models (Nallapati et al., 2016; Zeng et al., 2016),
based on the attentional encoder-decoder model
for machine translation (Bahdanau et al., 2014),
were able to generate abstractive summaries with
high ROUGE scores. However, these systems
have typically focused on summarizing short
input sequences (one or two sentences) to generate even shorter summaries. For example, the
summaries on the DUC-2004 dataset generated by
the state-of-the-art system by Zeng et al. (2016)
are limited to 75 characters.
Nallapati et al. (2016) also applied their abstractive summarization model on the CNN/Daily
Mail dataset (Hermann et al., 2015), which contains input sequences of up to 800 tokens and
multi-sentence summaries of up to 100 tokens.
The analysis by Nallapati et al. (2016) illustrates
a key problem with attentional encoder-decoder
models: they often generate unnatural summaries
consisting of repeated phrases.
We present a new abstractive summarization
model that achieves state-of-the-art results on the
CNN/Daily Mail and similarly good results on the
New York Times dataset (NYT) (Sandhaus, 2008).
To our knowledge, this is the first model for abstractive summarization on the NYT dataset. We
introduce a key attention mechanism and a new
learning objective to address the repeating phrase
problem: (i) we use an intra-temporal attention in
the encoder that records previous attention weights
for each of the input tokens while a sequential
intra-attention model in the decoder takes into account which words have already been generated
by the decoder. (ii) we propose a new objective
function by combining the maximum-likelihood
cross-entropy loss used in prior work with rewards
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "1 Introduction",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
intra-attention model in the decoder takes into account which words have already been generated
by the decoder. (ii) we propose a new objective
function by combining the maximum-likelihood
cross-entropy loss used in prior work with rewards
from policy gradient reinforcement learning to reduce exposure bias. We show that our model
achieves 41.16 ROUGE-1 on the CNN/Daily Mail
dataset, an absolute improvement of 5.70 points over the
previous state-of-the-art result. Moreover, we
show, through human evaluation of generated outputs, that our model generates more readable summaries compared to other techniques.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "1 Introduction",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 2 Neural Intra-attention Model
In this section, we present our intra-attention
model based on the encoder-decoder network
(Sutskever et al., 2014). In all our equations, $x = \{x_1, x_2, \ldots, x_n\}$ represents the sequence of input (article) tokens, $y = \{y_1, y_2, \ldots, y_{n'}\}$ the sequence of output (summary) tokens, and $\|$ denotes
the vector concatenation operator.

Our model reads the input sequence with a bi-directional LSTM encoder $\{\text{RNN}^e_{\text{fwd}}, \text{RNN}^e_{\text{bwd}}\}$, computing hidden
states $h^e_i = [h^{e\,\text{fwd}}_i \| h^{e\,\text{bwd}}_i]$ from the embedding
vectors of $x_i$. We use a single LSTM decoder $\text{RNN}^d$, computing hidden states $h^d_t$ from the
embedding vectors of $y_t$. Both input and output
embeddings are taken from the same matrix $W_{\text{emb}}$. We initialize the decoder hidden state with
$h^d_0 = h^e_n$.
2.1 Intra-temporal attention on input sequence
At each decoding step t, we use an intra-temporal
attention function to attend over specific parts of
the encoded input sequence in addition to the
decoder’s own hidden state and the previouslygenerated word (Sankaran et al., 2016). This kind
of attention prevents the model from attending
over the same parts of the input on different decoding steps. Nallapati et al. (2016) have shown
that such an intra-temporal attention can reduce
the amount of repetitions when attending over
long documents.
We define $e_{ti}$ as the attention score of the hidden input state $h^e_i$ at decoding time step $t$:

$$e_{ti} = f(h^d_t, h^e_i), \qquad (1)$$

where $f$ can be any function returning a scalar $e_{ti}$ from the $h^d_t$ and $h^e_i$ vectors. While some attention models use functions as simple as the dot-product between the two vectors, we choose to use a bilinear function:

$$f(h^d_t, h^e_i) = {h^d_t}^T W^e_{\text{attn}} h^e_i. \qquad (2)$$

We normalize the attention weights with the following temporal attention function, penalizing input tokens that have obtained high attention scores in past decoding steps. We define new temporal scores $e'_{ti}$:

$$e'_{ti} = \begin{cases} \exp(e_{ti}) & \text{if } t = 1 \\ \dfrac{\exp(e_{ti})}{\sum_{j=1}^{t-1} \exp(e_{ji})} & \text{otherwise.} \end{cases} \qquad (3)$$

Finally, we compute the normalized attention scores $\alpha^e_{ti}$ across the inputs and use these weights to obtain the input context vector $c^e_t$:

$$\alpha^e_{ti} = \frac{e'_{ti}}{\sum_{j=1}^{n} e'_{tj}} \qquad (4)$$

$$c^e_t = \sum_{i=1}^{n} \alpha^e_{ti} h^e_i. \qquad (5)$$
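A minimal numpy sketch of the intra-temporal attention of Eqs. (1)-(5). The random weights, toy dimensions, and inputs are assumptions for illustration; in the model $W^e_{\text{attn}}$ is learned.

```python
import numpy as np

# Intra-temporal attention over a fixed set of encoder states.

rng = np.random.default_rng(0)
n, d = 5, 4                      # input length, hidden size
H_enc = rng.normal(size=(n, d))  # encoder states h^e_1 .. h^e_n
W_attn = rng.normal(size=(d, d))
past_exp = np.zeros(n)           # running sum of exp(e_ji) over past steps j < t

def attend(h_dec):
    """One decoding step: returns the input context vector c^e_t."""
    global past_exp
    e = (H_enc @ W_attn.T) @ h_dec          # Eqs. (1)-(2): bilinear scores e_ti
    exp_e = np.exp(e)
    if past_exp.max() == 0.0:               # t = 1
        e_prime = exp_e                     # Eq. (3), first branch
    else:
        e_prime = exp_e / past_exp          # Eq. (3), temporal penalty
    past_exp = past_exp + exp_e             # record this step's scores
    alpha = e_prime / e_prime.sum()         # Eq. (4)
    return alpha @ H_enc                    # Eq. (5)

c1 = attend(rng.normal(size=d))  # first decoding step
c2 = attend(rng.normal(size=d))  # second step: past attention now penalized
print(c1.shape, c2.shape)        # (4,) (4,)
```

The key design point is the running `past_exp` sum: tokens that received large attention at earlier steps have their raw scores divided down, which is what discourages repeated attention over long documents.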
2.2 Intra-decoder attention
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
Finally, we compute the normalized attention scores $\alpha^e_{ti}$ across the inputs and use these weights to obtain the input context vector $c^e_t$:

$$\alpha^e_{ti} = \frac{e'_{ti}}{\sum_{j=1}^{n} e'_{tj}} \qquad (4)$$
2.2 Intra-decoder attention
While this intra-temporal attention function ensures that different parts of the encoded input sequence are used, our decoder can still generate repeated phrases based on its own hidden states, especially when generating long sequences. To prevent that, we want to incorporate more information about the previously decoded sequence into
the decoder. Looking back at previous decoding
steps will allow our model to make more structured predictions and avoid repeating the same information, even if that information was generated
many steps away. To achieve this, we introduce
an intra-decoder attention mechanism. This mechanism is not present in current encoder-decoder
models.
For each decoding step t, our model computes a
new decoder context vector c [d] t [. We set][ c] [d] 1 [to a vec-]
tor of zeros since the generated sequence is empty
on the first decoding step. For t > 1, we use the
following equations:
e^d_tt′ = (h^d_t)^T W^d_attn h^d_t′   (6)

α^d_tt′ = exp(e^d_tt′) / Σ_{j=1}^{t−1} exp(e^d_tj)   (7)

c^d_t = Σ_{j=1}^{t−1} α^d_tj h^d_j   (8)
A closely related intra-RNN attention function has been introduced by Cheng et al. (2016), but their implementation works by modifying the underlying LSTM function, and they do not apply it to long sequence generation problems. This is a major difference from our method, which makes no assumptions about the type of decoder RNN and is thus simpler and more widely applicable to other types of recurrent networks.
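The computation in Equations 6–8 can be sketched as follows (a minimal NumPy illustration under our own naming, not the paper's implementation):

```python
import numpy as np

def intra_decoder_context(h_past, h_t, W_d):
    """Sketch of Equations 6-8 (names are ours, not the paper's).

    h_past: array [t-1, d] of previous decoder states h^d_1 .. h^d_{t-1}
    h_t:    array [d], current decoder state h^d_t
    W_d:    array [d, d], the bilinear weights W^d_attn
    Returns the intra-decoder context vector c^d_t.
    """
    if len(h_past) == 0:
        return np.zeros_like(h_t)        # c^d_1 is a zero vector (empty prefix)
    e = h_past @ (W_d.T @ h_t)           # Eq. 6: e^d_{tt'} = h^d_t^T W^d_attn h^d_{t'}
    alpha = np.exp(e - e.max())
    alpha = alpha / alpha.sum()          # Eq. 7: softmax over past steps t' < t
    return alpha @ h_past                # Eq. 8: weighted sum of past states
```

Because the weights are a softmax over only the previously decoded steps, the context summarizes what the decoder has already said, which the model can then use to avoid saying it again.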
2.3 Token generation and pointer
To generate a token, our decoder uses either a token-generation softmax layer or a pointer mechanism to copy rare or unseen tokens from the input sequence. We use a switch function that decides at each decoding step whether to use the token generation or the pointer (Gulcehre et al., 2016; Nallapati et al., 2016). We define u_t as a binary value, equal to 1 if the pointer mechanism is used to output y_t, and 0 otherwise. In the following equations, all probabilities are conditioned on y_1, . . ., y_{t−1}, x, even when not explicitly stated.
Our token-generation layer generates the following probability distribution:
p(y_t | u_t = 0) = softmax(W_out [h^d_t ‖ c^e_t ‖ c^d_t] + b_out)   (9)
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
y_1, . . ., y_{t−1}, x, even when not explicitly stated.
Our token-generation layer generates the following probability distribution:
p(y_t | u_t = 0) = softmax(W_out [h^d_t ‖ c^e_t ‖ c^d_t] + b_out)   (9)
On the other hand, the pointer mechanism uses the temporal attention weights α^e_ti as the probability distribution to copy the input token x_i:

p(y_t = x_i | u_t = 1) = α^e_ti   (10)
We also compute the probability of using the copy mechanism for the decoding step t:

p(u_t = 1) = σ(W_u [h^d_t ‖ c^e_t ‖ c^d_t] + b_u),   (11)

where σ is the sigmoid activation function.
Putting Equations 9, 10 and 11 together, we obtain our final probability distribution for the output token y_t:

p(y_t) = p(u_t = 1) p(y_t | u_t = 1) + p(u_t = 0) p(y_t | u_t = 0).   (12)
The ground-truth value for u_t and the corresponding index i of the target input token when u_t = 1 are provided at every decoding step during training. We set u_t = 1 either when y_t is an out-of-vocabulary token or when it is a pre-defined named entity (see Section 5).
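The switch of Equations 9–12 amounts to mixing a generation distribution with a copy distribution. A minimal sketch (argument names are ours, not the paper's):

```python
import numpy as np

def output_distribution(p_gen, alpha_enc, src_ids, p_copy):
    """Sketch of Equations 9-12 (names are ours).

    p_gen:     [V] token-generation softmax output, p(y_t | u_t = 0)
    alpha_enc: [n] temporal attention weights alpha^e_t (Eq. 4)
    src_ids:   [n] vocabulary id of each input token x_i
    p_copy:    scalar p(u_t = 1) from the sigmoid switch (Eq. 11)
    Returns the final distribution p(y_t) of Eq. 12.
    """
    p = (1.0 - p_copy) * np.asarray(p_gen, dtype=float)  # generation branch
    for a, tok in zip(alpha_enc, src_ids):               # copy branch (Eq. 10)
        p[tok] += p_copy * a
    return p
```

Since both branches are proper distributions, the mixture sums to one, and probability mass accumulates on input tokens that appear several times in the source.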
2.4 Sharing decoder weights
In addition to using the same embedding matrix
W emb for the encoder and the decoder sequences,
we introduce some weight-sharing between this
embedding matrix and the W out matrix of the
token-generation layer:
W out = tanh(W emb W proj ) (13)
The goal of this weight-sharing is to use the syntactic and semantic information contained in the embedding matrix to improve the token-generation function. Similar weight-sharing methods have been applied to language modeling (Inan et al., 2016; Press and Wolf, 2016). We believe this method is even more applicable to sequence-to-sequence tasks like summarization, where the input and output sequences are tightly related, sharing the same vocabulary and a similar syntax. In practice, we found that a summarization model using such shared weights converges much faster than one using separate W_out and W_emb matrices.
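Equation 13 can be illustrated with a small NumPy sketch; the dimensions below are our own illustration (apart from the 100-dimensional embeddings and 400-dimensional decoder mentioned in Section 6):

```python
import numpy as np

# Sketch of Equation 13: tying the output projection to the embeddings.
rng = np.random.default_rng(0)
vocab_size, emb_dim, hidden_dim = 1000, 100, 400
W_emb = rng.normal(size=(vocab_size, emb_dim))    # shared embedding matrix
W_proj = rng.normal(size=(emb_dim, hidden_dim))   # small projection, the only new weights
W_out = np.tanh(W_emb @ W_proj)                   # Eq. 13: W_out = tanh(W_emb W_proj)
```

The tied output layer only adds emb_dim × hidden_dim free parameters instead of the vocab_size × hidden_dim an untied W_out would need, which is one plausible reason for the faster convergence reported above.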
2.5 Repetition avoidance at test time
Another way to avoid repetitions comes from
our observation that in both the CNN/Daily Mail
and NYT datasets, ground-truth summaries almost
never contain the same trigram twice. Based on
this observation, we force our decoder to never
output the same trigram more than once during
testing. We do this by setting p(y t ) = 0 during
beam search, when outputting y t would create a
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
never contain the same trigram twice. Based on
this observation, we force our decoder to never
output the same trigram more than once during
testing. We do this by setting p(y t ) = 0 during
beam search, when outputting y t would create a
trigram that already exists in the previously decoded sequence of the current beam. Even though
this method makes assumptions about the output
format and the dataset at hand, we believe that the
majority of abstractive summarization tasks would
benefit from this hard constraint. We apply this
method to all our models in the experiments section.
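The trigram rule above can be sketched as a simple check run for each beam candidate (function name is ours, not the paper's):

```python
def blocks_repeated_trigram(decoded, candidate):
    """Return True if appending `candidate` to the already-decoded token
    list would repeat a trigram, in which case beam search would set
    p(y_t) = 0 for this candidate. (A sketch; name is ours.)
    """
    if len(decoded) < 2:
        return False
    new_trigram = tuple(decoded[-2:]) + (candidate,)
    seen = {tuple(decoded[i:i + 3]) for i in range(len(decoded) - 2)}
    return new_trigram in seen
```

In a beam-search loop, a candidate flagged by this check simply gets zero probability before the beam is re-ranked.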
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 3 Hybrid Learning Objective
In this section, we explore different ways of training our encoder-decoder model. In particular, we
propose reinforcement learning-based algorithms
and their application to our summarization task.
3.1 Supervised learning with teacher forcing

The most widely used method to train a decoder RNN for sequence generation, called the "teacher forcing" algorithm (Williams and Zipser, 1989), minimizes a maximum-likelihood loss at each decoding step. We define y* = {y*_1, y*_2, . . ., y*_{n′}} as the ground-truth output sequence for a given input sequence x. The maximum-likelihood training objective is the minimization of the following loss:

L_ml = − Σ_{t=1}^{n′} log p(y*_t | y*_1, . . ., y*_{t−1}, x)   (14)
3.2 Policy learning

In our model, we use the self-critical policy gradient training algorithm (Rennie et al., 2016). For this training algorithm, we produce two separate output sequences at each training iteration: y^s, obtained by sampling from the p(y^s_t | y^s_1, . . ., y^s_{t−1}, x) probability distribution at each decoding time step, and ŷ, the baseline output, obtained by maximizing the output probability distribution at each time step, essentially performing a greedy search. We define r(y) as the reward function for an output sequence y, comparing it with the ground-truth sequence y* with the evaluation metric of our choice:

L_rl = (r(ŷ) − r(y^s)) Σ_{t=1}^{n′} log p(y^s_t | y^s_1, . . ., y^s_{t−1}, x)   (15)
We can see that minimizing L_rl is equivalent to maximizing the conditional likelihood of the sampled sequence y^s if it obtains a higher reward than the baseline ŷ, thus increasing the reward expectation of our model.
3.3 Mixed training objective function
One potential issue of this reinforcement training objective is that optimizing for a specific discrete metric like ROUGE does not guarantee an increase in the quality and readability of the output. It is possible to game such discrete metrics and increase their score without an actual increase in readability or relevance (Liu et al., 2016). While ROUGE measures the n-gram overlap between our generated summary and a reference sequence, human readability is better captured by a language model, which is usually measured by perplexity.
Since our maximum-likelihood training objective (Equation 14) is essentially a conditional language model, calculating the probability of a token y_t based on the previously predicted sequence {y_1, . . ., y_{t−1}} and the input sequence x, we hypothesize that it can assist our policy learning algorithm to generate more natural summaries. This motivates us to define a mixed learning objective function that combines Equations 14 and 15:
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "3 Hybrid Learning Objective",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
L_ml = − Σ_{t=1}^{n′} log p(y*_t | y*_1, . . ., y*_{t−1}, x)   (14)

However, minimizing L_ml does not always produce the best results on discrete evaluation metrics such as ROUGE (Lin, 2004). This phenomenon has been observed with similar sequence generation tasks like image captioning with CIDEr (Rennie et al., 2016) and machine translation with BLEU (Wu et al., 2016; Norouzi et al., 2016). There are two main reasons for this discrepancy. The first one, called exposure bias (Ranzato et al., 2015), comes from the fact that the network is fully supervised at each output token during training, always knowing the ground truth sequence up to the next token to predict, but does not have such supervision when testing, hence accumulating errors as it predicts the sequence. The second reason is more specific to our summarization task: while we only have one ground truth sequence per example during training, a summary can still be considered valid by a human even if it is not equal to the reference summary word for word. The number of potentially valid summaries increases as sequences get longer, since there are more ways to arrange tokens to produce paraphrases or different sentence orders. The ROUGE metrics take some of this flexibility into account, but the maximum-likelihood objective does not.

3.2 Policy learning

One way to remedy this is to learn a policy that maximizes a specific discrete metric instead of minimizing the maximum-likelihood loss, which is made possible with reinforcement learning. In our model, we use the self-critical policy gradient training algorithm (Rennie et al., 2016). For this training algorithm, we produce two separate output sequences at each training iteration: y^s, which is obtained by sampling from the p(y^s_t | y^s_1, . . ., y^s_{t−1}, x) probability distribution at each decoding time step, and ŷ, the baseline output, obtained by maximizing the output probability distribution at each time step, essentially performing a greedy search.

Since our maximum-likelihood training objective (Equation 14) is essentially a conditional language model, we hypothesize that it can assist our policy learning algorithm to generate more natural summaries. This motivates us to define a mixed learning objective function that combines Equations 14 and 15:

L_mixed = γ L_rl + (1 − γ) L_ml,   (16)

where γ is a scaling factor accounting for the difference in magnitude between L_rl and L_ml. A similar mixed-objective learning function has been used by Wu et al. (2016) for machine translation on short sequences, but this is its first use in combination with self-critical policy learning for long summarization sequences, to explicitly improve readability in addition to evaluation metrics.
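As a toy illustration (our own sketch, not the paper's code), Equation 16 is a simple weighted combination of the two losses; γ = 0.9984 is the value reported for the ML+RL runs in Section 6:

```python
def mixed_objective(loss_rl, loss_ml, gamma=0.9984):
    """Sketch of Equation 16: L_mixed = gamma * L_rl + (1 - gamma) * L_ml.
    The default gamma matches the value reported in Section 6."""
    return gamma * loss_rl + (1.0 - gamma) * loss_ml
```

The large γ reflects the magnitude difference between the two losses: even a tiny ML weight is enough to keep the outputs language-model-like.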
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "3 Hybrid Learning Objective",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 4 Related Work
4.1 Neural encoder-decoder sequence models
Neural encoder-decoder models are widely used
in NLP applications such as machine translation (Sutskever et al., 2014), summarization
(Chopra et al., 2016; Nallapati et al., 2016), and
question answering (Hermann et al., 2015). These
models use recurrent neural networks (RNNs), such as the long short-term memory (LSTM) network (Hochreiter and Schmidhuber, 1997), to encode an input sentence into a fixed vector, and create a new output sequence from that vector using another RNN. To apply this sequence-to-sequence
approach to natural language, word embeddings
(Mikolov et al., 2013; Pennington et al., 2014) are
used to convert language tokens to vectors that
can be used as inputs for these networks. Attention mechanisms (Bahdanau et al., 2014) make
these models more performant and scalable, allowing them to look back at parts of the encoded input sequence while the output is generated. These models often use a fixed input
and output vocabulary, which prevents them from
learning representations for new words. One
way to fix this is to allow the decoder network
to point back to some specific words or subsequences of the input and copy them onto the
output sequence (Vinyals et al., 2015; Ling et al.,
2016). Gulcehre et al. (2016) and Merity et al.
(2016) combine this pointer mechanism with the
original word generation layer in the decoder to
allow the model to use either method at each de
coding step.
4.2 Reinforcement learning for sequence
generation
Reinforcement learning (RL) is a way of training
an agent to interact with a given environment in
order to maximize a reward. RL has been used to
solve a wide variety of problems, usually when an
agent has to perform discrete actions before obtaining a reward, or when the metric to optimize is
not differentiable and traditional supervised learning methods cannot be used. This is applicable
to sequence generation tasks, because many of the
metrics used to evaluate these tasks (like BLEU,
ROUGE or METEOR) are not differentiable.
In order to optimize that metric directly,
Ranzato et al. (2015) have applied the REINFORCE algorithm (Williams, 1992) to train various RNN-based models for sequence generation
tasks, leading to significant improvements compared to previous supervised learning methods.
While their method requires an additional neural
network, called a critic model, to predict the expected reward and stabilize the objective function
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "4 Related Work",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
tasks, leading to significant improvements compared to previous supervised learning methods.
While their method requires an additional neural
network, called a critic model, to predict the expected reward and stabilize the objective function
gradients, Rennie et al. (2016) designed a self-critical sequence training method that does not require this critic model and leads to further improvements on image captioning tasks.
4.3 Text summarization
Most summarization models studied in the past are extractive in nature (Neto et al., 2002; Dorr et al., 2003; Filippova and Altun, 2013; Colmenares et al., 2015); they usually work by identifying the most important phrases of an input document and re-arranging them into a new summary sequence. The more recent abstractive
summarization models have more degrees of
freedom and can create more novel sequences.
Many abstractive models such as Rush et al.
(2015), Chopra et al. (2016), Zeng et al. (2016)
and Nallapati et al. (2016) are all based on the
neural encoder-decoder architecture (Section 4.1).
A well-studied set of summarization tasks is
the Document Understanding Conference (DUC)
1 . These summarization tasks are varied, including short summaries of a single document and
long summaries of multiple documents categorized by subject. Most abstractive summarization models have been evaluated on the DUC-2004
dataset, and outperform extractive models on that
task (Dorr et al., 2003). However, models trained
on the DUC-2004 task can only generate very
short summaries up to 75 characters, and are usually used with one or two input sentences. Only
Nallapati et al. (2016) have tried to apply abstractive summarization techniques to longer input and
outputs with the CNN/Daily Mail dataset.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "4 Related Work",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 5 Datasets
5.1 CNN/Daily Mail
We evaluate our model on a modified version
of the CNN/Daily Mail dataset (Hermann et al.,
2015), following the same pre-processing steps
described in Nallapati et al. (2016). We refer the
reader to that paper for a detailed description. The
final dataset contains 286,817 training examples,
13,368 validation examples and 11,487 testing examples. After limiting the input length to 800 tokens and the output length to 100 tokens, the average input and output lengths are respectively 632 and 53 tokens.

1 http://duc.nist.gov/
5.2 New York Times
The New York Times (NYT) dataset (Sandhaus, 2008) is a large collection of articles published between 1996 and 2007. Even though this dataset has been used to train extractive summarization systems (Hong and Nenkova, 2014; Li et al., 2016) and closely related models for predicting the importance of a phrase in an article (Yang and Nenkova, 2014; Nye and Nenkova, 2015; Hong et al., 2015), we are the first group to run an end-to-end abstractive summarization model on the article-abstract pairs of this dataset.
While CNN/Daily Mail summaries have a similar wording to their corresponding articles, NYT
abstracts are more varied, are shorter and can use
a higher level of abstraction and paraphrase. We
believe that these two formats are a great complement to each other for abstractive summarization
models.
Preprocessing: We remove all documents that do
not have a full article text, abstract or headline.
We concatenate the headline, byline and full article text, separated by special tokens, to produce a
single input sequence for each example. We tokenize the input and abstract pairs with the Stanford tokenizer (Manning et al., 2014). We convert all tokens to lower-case, replace all numbers with "0", remove the "(s)" and "(m)" marks in the abstracts, and remove all occurrences of the following words, singular or plural, if they are surrounded by semicolons or at the end of the abstract: "photo", "graph", "chart", "map", "table" and "drawing".
Since the NYT abstracts almost never contain periods, we split them into sentences based on semicolons and treat them as multi-sentence summaries. This allows us to make the summary format and evaluation procedure similar to those of the CNN/Daily Mail dataset. These pre-processing steps give us an average of 549 input tokens and 40 output tokens per example, after limiting the input and output lengths to 800 and 100 tokens.
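The semicolon-based splitting and layout-marker removal described above can be sketched as follows (our own minimal version, not the paper's exact pipeline):

```python
def split_nyt_abstract(abstract):
    """Sketch of the semicolon-based sentence splitting described above.
    Splits an NYT abstract on semicolons and drops stand-alone layout
    markers such as "photo" or "map". (Function name is ours.)
    """
    parts = [s.strip() for s in abstract.split(";")]
    drop = {"photo", "photos", "graph", "graphs", "chart", "charts",
            "map", "maps", "table", "tables", "drawing", "drawings"}
    return [p for p in parts if p and p.lower() not in drop]
```

Each returned part is then treated as one "sentence" of the multi-sentence reference summary.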
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "5 Datasets",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
dataset. These pre-processing steps give us an average of 549 input tokens and 40 output tokens
per example, after limiting the input and output
lengths to 800 and 100 tokens.
Pointer supervision: We run each input and abstract sequence through the Stanford named entity recognizer (NER) (Manning et al., 2014). For all named-entity tokens in the abstract of type "PERSON", "LOCATION", "ORGANIZATION" or "MISC", we find their first occurrence in the input sequence. We use this information to supervise p(u_t) (Equation 11) and α^e_ti (Equation 4) during training. Note that the NER tagger is only used to create the dataset and is no longer needed during testing, so we are not adding any dependencies to our model. We also add pointer supervision for out-of-vocabulary output tokens if they are present in the input.
Dataset splits: We created our own training, validation, and testing splits for this dataset. Instead of producing random splits, we sorted the documents by their publication date in chronological order and used the first 90% (589,284 examples) for training, the next 5% (32,736) for validation, and the remaining 5% (32,739) for testing. This makes our dataset splits easily reproducible and follows the intuition that, if used in a production environment, such a summarization model would be applied to recent articles rather than random ones.
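The chronological 90/5/5 split can be sketched in a few lines (names and the example tuple layout are our own illustration):

```python
def chronological_split(examples, train_frac=0.90, valid_frac=0.05):
    """Sketch of the date-ordered 90/5/5 split described above.
    `examples` is a list of (publication_date, article, abstract) tuples.
    """
    ordered = sorted(examples, key=lambda ex: ex[0])  # oldest first
    n = len(ordered)
    n_train = int(n * train_frac)
    n_valid = int(n * valid_frac)
    train = ordered[:n_train]
    valid = ordered[n_train:n_train + n_valid]
    test = ordered[n_train + n_valid:]
    return train, valid, test
```

Because the split depends only on publication dates, anyone with the same corpus reproduces exactly the same partition.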
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "5 Datasets",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 6 Results
6.1 Experiments
Setup: We evaluate the intra-decoder attention mechanism and the mixed-objective learning by running the following experiments on both datasets. We first run maximum-likelihood (ML) training with and without intra-decoder attention (removing c^d_t from Equations 9 and 11 to disable intra-attention) and select the best-performing architecture. Next, we initialize our model with the best ML parameters and we compare reinforcement learning (RL) with our mixed-objective
learning (ML+RL), following our objective functions in Equations 15 and 16. For ML training, we use the teacher forcing algorithm, with the only difference that at each decoding step we choose, with 25% probability, the previously generated token instead of the ground-truth token as the decoder input token y_{t−1}, which reduces exposure bias (Venkatraman et al., 2015). We use γ = 0.9984 for the ML+RL loss function.
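The 25% sampling trick for the decoder input can be sketched as a per-step coin flip (our own sketch; function names are not from the paper):

```python
import random

def next_decoder_input(ground_truth_prev, generated_prev, p_sample=0.25, rng=random):
    """Sketch of the exposure-bias reduction used during ML training:
    with probability p_sample, feed the model's previously generated
    token instead of the ground-truth one as y_{t-1}."""
    if rng.random() < p_sample:
        return generated_prev
    return ground_truth_prev
```

Over a long training run this exposes the decoder to its own predictions about a quarter of the time, narrowing the train/test mismatch.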
Implementation details: We use two 200-dimensional LSTMs for the bidirectional encoder and one 400-dimensional LSTM for the decoder.
We limit the input vocabulary size to 150,000
tokens, and the output vocabulary to 50,000 tokens by selecting the most frequent tokens in
the training set. Input word embeddings are
100-dimensional and are initialized with GloVe
(Pennington et al., 2014). We train all our models
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| words-lvt2k-temp-att (Nallapati et al., 2016) | 35.46 | 13.30 | 32.65 |
| ML, no intra-attention | 37.86 | 14.69 | 34.99 |
| ML, with intra-attention | 38.30 | 14.81 | 35.49 |
| RL, with intra-attention | 41.16 | 15.75 | 39.08 |
| ML+RL, with intra-attention | 39.87 | 15.82 | 36.90 |

Table 1: Quantitative results for various models on the CNN/Daily Mail test dataset
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
| --- | --- | --- | --- |
| ML, no intra-attention | 44.26 | 27.43 | 40.41 |
| ML, with intra-attention | 43.86 | 27.10 | 40.11 |
| RL, no intra-attention | 47.22 | 30.51 | 43.27 |
| ML+RL, no intra-attention | 47.03 | 30.72 | 43.10 |

Table 2: Quantitative results for various models on the New York Times test dataset
Source document
Jenson Button was denied his 100th race for McLaren after an ERS prevented him from making it to the start-line. It
capped a miserable weekend for the Briton; his time in Bahrain plagued by reliability issues. Button spent much of the
race on Twitter delivering his verdict as the action unfolded. ’Kimi is the man to watch,’ and ’loving the sparks’, were
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
capped a miserable weekend for the Briton; his time in Bahrain plagued by reliability issues. Button spent much of the
race on Twitter delivering his verdict as the action unfolded. ’Kimi is the man to watch,’ and ’loving the sparks’, were
among his pearls of wisdom, but the tweet which courted the most attention was a rather mischievous one: ’Ooh is Lewis
backing his team mate into Vettel?’ he quizzed after Rosberg accused Hamilton of pulling off such a manoeuvre in China.
Jenson Button waves to the crowd ahead of the Bahrain Grand Prix which he failed to start Perhaps a career in the media
beckons Lewis Hamilton has out-qualified and finished ahead of Nico Rosberg at every race this season. Indeed Rosberg
has now beaten his Mercedes team-mate only once in the 11 races since the pair infamously collided in Belgium last year.
Hamilton secured the 36th win of his career in Bahrain and his 21st from pole position. Only Michael Schumacher (40),
Ayrton Senna (29) and Sebastian Vettel (27) have more. He also became only the sixth F1 driver to lead 2,000 laps. Nico
Rosberg has been left in the shade by Lewis Hamilton who celebrates winning his third race of the year Kimi Raikkonen
secured a record seventh podium finish in Bahrain following his superb late salvo, although the Ferrari driver has never
won in the Gulf Kingdom. It was the Finn’s first trip to the rostrum since the 2013 Korean Grand Prix, but his triumph
brought a typically deadpan response: ’You’re never happy when you finish second... I’m a bit pleased to get a result.’
Sparks fly off the back of Kimi Raikkonen’s Ferrari en route to finishing second in Bahrain Bernie Ecclestone was in the
Bahrain paddock this weekend. He denied trying to engineer a deal for Hamilton, out of contract at the end of the season,
to join Ferrari despite earlier insisting that such a move would be ’great’ for the sport. The 84-year-old also confirmed
that F1 would be in Azerbaijan for the first time next year, even with concerns surrounding the country's human rights record. 'I think everybody seems to be happy,' Ecclestone said. 'There doesn't seem to be any big problem there. There's
no question of it not being on the calendar. It’s going to be another good race. Formula One supremo Bernie Ecclestone
speaks to Nico Rosberg ahead of the Bahrain Grand Prix
Ground truth summary
Button denied 100th race start for McLaren after ERS failure. Button then spent much of the Bahrain Grand Prix
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
speaks to Nico Rosberg ahead of the Bahrain Grand Prix
Ground truth summary
Button denied 100th race start for McLaren after ERS failure. Button then spent much of the Bahrain Grand Prix
on Twitter delivering his verdict on the action as it unfolded. Lewis Hamilton has out-qualified and finished ahead
of Mercedes team-mate Nico Rosberg at every race this season. Bernie Ecclestone confirms F1 will make its bow in
Azerbaijan next season.
ML, with intra-attention (ROUGE-1 41.58)
Button was denied his 100th race for McLaren. ERS prevented him from making it to the start-line. The Briton. He
quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a manoeuvre in China. Button has been in
Azerbaijan for the first time since 2013.
RL, with intra-attention (ROUGE-1 50.00)
Button was denied his 100th race for McLaren after an ERS prevented him from making it to the start-line. It capped a
miserable weekend for the Briton. Button has out-qualified. Finished ahead of Nico Rosberg at Bahrain. Lewis Hamilton
has. In 11 races. . The race. To lead 2,000 laps. . In. . . And. .
ML+RL, with intra-attention (ROUGE-1 44.00)
Button was denied his 100th race for McLaren. The ERS prevented him from making it to the start-line. Button was
his team mate in the 11 races in Bahrain. He quizzed after Nico Rosberg accused Lewis Hamilton of pulling off such a
manoeuvre in China.
Table 3: Example from the CNN/Daily Mail test dataset showing the outputs of our three best models after de-tokenization, re
capitalization, replacing anonymized entities, and replacing numbers. The ROUGE score corresponds to the specific example.
| Model | Average readability |
| --- | --- |
| ML | 7.88 |
| RL | 5.43 |
| ML+RL | 8.15 |
| Ground truth | 9.85 |

Table 4: Comparison of human readability scores on a random subset of the CNN/Daily Mail test dataset. All models are with intra-decoder attention.
with Adam (Kingma and Ba, 2014) with a batch
size of 50 and a learning rate α of 0.001 for ML
training and 0.0001 for RL and ML+RL training.
At test time, we use beam search of width 5 on all
our models to generate our final predictions.
ROUGE metrics and options: We report the full-length F-1 scores of the ROUGE-1, ROUGE-2 and ROUGE-L metrics with the Porter stemmer option. For RL and ML+RL training, we use the ROUGE-L score as the reinforcement reward. We also tried ROUGE-2, but we found that it created summaries that almost always reached the maximum length, often ending sentences abruptly.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
ROUGE-L score as the reinforcement reward. We also tried ROUGE-2, but we found that it created summaries that almost always reached the maximum length, often ending sentences abruptly.
6.2 Quantitative analysis
Our results for the CNN/Daily Mail dataset are shown in Table 1, and for the NYT dataset in Table 2. We observe that the intra-decoder attention function helps our model achieve better ROUGE scores on the CNN/Daily Mail dataset but not on the NYT dataset. We believe that the difference in summary lengths between the CNN/Daily Mail and NYT datasets is one of the main reasons for this difference in outcome, given that our intra-decoder attention was designed to improve performance over long output sequences. Further differences in the nature of the summaries and in the level of complexity and abstraction between these datasets could also explain these intra-attention results, as well as the absolute ROUGE score differences between the CNN/Daily Mail and NYT results.
In addition, we can see that on both datasets, the RL and ML+RL models obtain much higher scores than the ML model. In particular, these methods clearly surpass the state-of-the-art model from Nallapati et al. (2016) on the CNN/Daily Mail dataset.
6.3 Qualitative analysis
We perform human evaluation to ensure that our
increase in ROUGE scores is also followed by
an increase in human readability and quality. In
particular, we want to know whether the ML+RL
training objective did improve readability compared to RL.
Evaluation setup: To perform this evaluation,
we randomly select 100 test examples from the
CNN/Daily Mail dataset. For each example, we
show the ground truth summary as well as summaries generated by different models side by side
to a human evaluator. The human evaluator does
not know which summaries come from which
model or which one is the ground truth. A score from 1 to 10 is then assigned to each summary, 1 corresponding to the lowest level of readability and 10 to the highest.
Results: Our human evaluation results are shown
in Table 4. We can see that even though RL has
the highest ROUGE-1 and ROUGE-L scores, it
produces the least readable summaries among our
experiments. The most common readability issue observed in our RL results, as shown in the
example of Table 3, is the presence of short and
truncated sentences towards the end of sequences.
This confirms that optimizing for a single discrete evaluation metric such as ROUGE with RL can be
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
example of Table 3, is the presence of short and
truncated sentences towards the end of sequences.
This confirms that optimizing for a single discrete evaluation metric such as ROUGE with RL can be detrimental to the model quality.
On the other hand, our ML+RL summaries obtain the highest readability scores among our models, hence solving the readability issues of the RL model while also achieving a higher ROUGE score than ML. This demonstrates the usefulness and value of our ML+RL training method for abstractive summarization.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 7 Conclusion
We presented a new model and training procedure that obtains state-of-the-art results in text summarization for the CNN/Daily Mail dataset, improves the readability of the generated summaries, and is better suited to long output sequences. We also ran our abstractive model on the NYT dataset for the first time. We saw that despite their common use for evaluation, ROUGE scores have their shortcomings and should not be the only metric to optimize when training summarization models for long sequences. We believe that our intra-attention decoder and combined training objective could be applied to other sequence-to-sequence tasks with long inputs and outputs, which is an interesting direction for further research.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "7 Conclusion",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. arXiv preprint
arXiv:1409.0473.
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016.
Long short-term memory-networks for machine
reading. arXiv preprint arXiv:1601.06733.
Sumit Chopra, Michael Auli, Alexander M Rush, and
SEAS Harvard. 2016. Abstractive sentence summarization with attentive recurrent neural networks.
Proceedings of NAACL-HLT16 pages 93–98.
Carlos A Colmenares, Marina Litvak, Amin Mantrach,
and Fabrizio Silvestri. 2015. Heads: Headline generation as sequence prediction using an abstract
feature-rich space. In HLT-NAACL. pages 133–142.
Bonnie Dorr, David Zajic, and Richard Schwartz. 2003.
Hedge trimmer: A parse-and-trim approach to headline generation. In Proceedings of the HLT-NAACL
03 on Text summarization workshop-Volume 5. Association for Computational Linguistics, pages 1–8.
Katja Filippova and Yasemin Altun. 2013. Overcoming the lack of parallel data in sentence compression.
In EMNLP. Citeseer, pages 1481–1491.
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016.
Pointing the unknown words. arXiv preprint
arXiv:1603.08148.
Karl Moritz Hermann, Tomas Kocisky, Edward
Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems. pages 1693–
1701.
Sepp Hochreiter and Jürgen Schmidhuber. 1997.
Long short-term memory. Neural computation
9(8):1735–1780.
Kai Hong, Mitchell Marcus, and Ani Nenkova. 2015.
System combination for multi-document summarization. In EMNLP. pages 107–117.
Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multi-document summarization: extended technical report.
Hakan Inan, Khashayar Khosravi, and Richard Socher.
2016. Tying word vectors and word classifiers:
A loss framework for language modeling. arXiv
preprint arXiv:1611.01462.
Diederik Kingma and Jimmy Ba. 2014. Adam: A
method for stochastic optimization. arXiv preprint
arXiv:1412.6980.
Junyi Jessy Li, Kapil Thadani, and Amanda Stent.
2016. The role of discourse units in near-extractive
summarization. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page
137.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "References",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
arXiv:1412.6980.
Junyi Jessy Li, Kapil Thadani, and Amanda Stent.
2016. The role of discourse units in near-extractive
summarization. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page
137.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text summarization branches out: Proceedings of the ACL-04 workshop. Barcelona, Spain, volume 8.
Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomáš Kočiský, Andrew Senior, Fumin
Wang, and Phil Blunsom. 2016. Latent predictor networks for code generation. arXiv preprint
arXiv:1603.06744.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael
Noseworthy, Laurent Charlin, and Joelle Pineau.
2016. How not to evaluate your dialogue system:
An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint
arXiv:1603.08023.
Christopher D Manning, Mihai Surdeanu, John Bauer,
Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations). pages 55–60.
Stephen Merity, Caiming Xiong, James Bradbury, and
Richard Socher. 2016. Pointer sentinel mixture
models. arXiv preprint arXiv:1609.07843.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing
systems. pages 3111–3119.
Ramesh Nallapati, Bowen Zhou, Çağlar Gülçehre,
Bing Xiang, et al. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023.
Joel Larocca Neto, Alex A Freitas, and Celso AA
Kaestner. 2002. Automatic text summarization using a machine learning approach. In Brazilian Symposium on Artificial Intelligence. Springer, pages
205–215.
Mohammad Norouzi, Samy Bengio, Navdeep Jaitly,
Mike Schuster, Yonghui Wu, Dale Schuurmans,
et al. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances
In Neural Information Processing Systems. pages
1723–1731.
Benjamin Nye and Ani Nenkova. 2015. Identification
and characterization of newsworthy verbs in world
news. In HLT-NAACL. pages 1440–1445.
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word
representation. In EMNLP. volume 14, pages 1532–
1543.
Ofir Press and Lior Wolf. 2016. Using the output
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "References",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word
representation. In EMNLP. volume 14, pages 1532–
1543.
Ofir Press and Lior Wolf. 2016. Using the output
embedding to improve language models. arXiv
preprint arXiv:1608.05859.
Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli,
and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint
arXiv:1511.06732.
Steven J Rennie, Etienne Marcheret, Youssef Mroueh,
Jarret Ross, and Vaibhava Goel. 2016. Self-critical
sequence training for image captioning. arXiv
preprint arXiv:1612.00563.
Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. arXiv preprint
arXiv:1509.00685.
Evan Sandhaus. 2008. The New York Times annotated corpus. Linguistic Data Consortium, Philadelphia 6(12):e26752.
Baskaran Sankaran, Haitao Mi, Yaser Al-Onaizan, and
Abe Ittycheriah. 2016. Temporal attention model
for neural machine translation. arXiv preprint
arXiv:1608.02927.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014.
Sequence to sequence learning with neural networks. In Advances in neural information processing systems. pages 3104–3112.
Arun Venkatraman, Martial Hebert, and J Andrew
Bagnell. 2015. Improving multi-step prediction of
learned time series models. In AAAI. pages 3024–
3030.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly.
2015. Pointer networks. In Advances in Neural Information Processing Systems. pages 2692–2700.
Ronald J Williams. 1992. Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. Machine learning 8(3-4):229–256.
Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent
neural networks. Neural computation 1(2):270–280.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V
Le, Mohammad Norouzi, Wolfgang Macherey,
Maxim Krikun, Yuan Cao, Qin Gao, Klaus
Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between
human and machine translation. arXiv preprint
arXiv:1609.08144 .
Yinfei Yang and Ani Nenkova. 2014. Detecting
information-dense texts in multiple news domains.
In AAAI. pages 1650–1656.
Wenyuan Zeng, Wenjie Luo, Sanja Fidler, and Raquel
Urtasun. 2016. Efficient summarization with
read-again and copy mechanism. arXiv preprint
arXiv:1611.03382.
|
{
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
}
|
{
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "References",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
Journal of Machine Learning Research 12 (2011) 2825-2830 Submitted 3/11; Revised 8/11; Published 10/11
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": null,
"Header 3": null,
"Header 4": null
}
|
{
"chunk_type": "title"
}
|
## Scikit-learn: Machine Learning in Python
Fabian Pedregosa fabian.pedregosa@inria.fr
Gaël Varoquaux gael.varoquaux@normalesup.org
Alexandre Gramfort alexandre.gramfort@inria.fr
Vincent Michel vincent.michel@logilab.fr
Bertrand Thirion bertrand.thirion@inria.fr
Parietal, INRIA Saclay
Neurospin, Bât 145, CEA Saclay
91191 Gif sur Yvette – France
Olivier Grisel olivier.grisel@ensta.fr
Nuxeo
20 rue Soleillet
75 020 Paris – France
Mathieu Blondel mblondel@ai.cs.kobe-u.ac.jp
Kobe University
1-1 Rokkodai, Nada
Kobe 657-8501 – Japan
Peter Prettenhofer peter.prettenhofer@gmail.com
Bauhaus-Universität Weimar
Bauhausstr. 11
99421 Weimar – Germany
Ron Weiss ronweiss@gmail.com
Google Inc
76 Ninth Avenue
New York, NY 10011 – USA
Vincent Dubourg vincent.dubourg@gmail.com
Clermont Université, IFMA, EA 3867, LaMI
BP 10448, 63000 Clermont-Ferrand – France
Jake Vanderplas vanderplas@astro.washington.edu
Astronomy Department
University of Washington, Box 351580
Seattle, WA 98195 – USA
Alexandre Passos alexandre.tp@gmail.com
IESL Lab
UMass Amherst
Amherst MA 01002 – USA
David Cournapeau cournape@gmail.com
Enthought
21 J.J. Thompson Avenue
Cambridge, CB3 0FA – UK
© 2011 Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel,
Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David
Cournapeau, Matthieu Brucher, Matthieu Perrot and Édouard Duchesnay
Pedregosa, Varoquaux, Gramfort et al.
Matthieu Brucher matthieu.brucher@gmail.com
Total SA, CSTJF
avenue Larribau
64000 Pau – France
Matthieu Perrot matthieu.perrot@cea.fr
Édouard Duchesnay edouard.duchesnay@cea.fr
LNAO
Neurospin, Bât 145, CEA Saclay
91191 Gif sur Yvette – France
Editor: Mikio Braun
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": null,
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### Abstract
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package
focuses on bringing machine learning to non-specialists using a general-purpose high-level
language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license,
encouraging its use in both academic and commercial settings. Source code, binaries, and
[documentation can be downloaded from http://scikit-learn.sourceforge.net.](http://scikit-learn.sourceforge.net)
Keywords: Python, supervised learning, unsupervised learning, model selection
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "Abstract",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 1. Introduction
The Python programming language is establishing itself as one of the most popular languages for scientific computing. Thanks to its high-level interactive nature and its maturing
ecosystem of scientific libraries, it is an appealing choice for algorithmic development and
exploratory data analysis (Dubois, 2007; Milmann and Avaizis, 2011). Yet, as a general-purpose language, it is increasingly used not only in academic settings but also in industry.
Scikit-learn harnesses this rich environment to provide state-of-the-art implementations
of many well known machine learning algorithms, while maintaining an easy-to-use interface
tightly integrated with the Python language. This answers the growing need for statistical
data analysis by non-specialists in the software and web industries, as well as in fields
outside of computer-science, such as biology or physics. Scikit-learn differs from other
machine learning toolboxes in Python for various reasons: i) it is distributed under the
BSD license, ii) it incorporates compiled code for efficiency, unlike MDP (Zito et al., 2008)
and pybrain (Schaul et al., 2010), iii) it depends only on numpy and scipy to facilitate easy
distribution, unlike pymvpa (Hanke et al., 2009) that has optional dependencies such as
R and shogun, and iv) it focuses on imperative programming, unlike pybrain which uses
a data-flow framework. While the package is mostly written in Python, it incorporates
the C++ libraries LibSVM (Chang and Lin, 2001) and LibLinear (Fan et al., 2008) that
provide reference implementations of SVMs and generalized linear models with compatible
licenses. Binary packages are available on a rich set of platforms including Windows and any
POSIX platforms. Furthermore, thanks to its liberal license, it has been widely distributed
as part of major free software distributions such as Ubuntu, Debian, Mandriva, NetBSD and
Macports and in commercial distributions such as the “Enthought Python Distribution”.
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "1. Introduction",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 2. Project Vision
Code quality. Rather than providing as many features as possible, the project’s goal has
been to provide solid implementations. Code quality is ensured with unit tests—as of release
0.8, test coverage is 81%—and the use of static analysis tools such as pyflakes and pep8.
Finally, we strive to use consistent naming for the functions and parameters used throughout,
with strict adherence to the Python coding guidelines and numpy-style documentation.
BSD licensing. Most of the Python ecosystem is licensed with non-copyleft licenses. While
such policy is beneficial for adoption of these tools by commercial projects, it does impose
some restrictions: we are unable to use some existing scientific code, such as the GSL.
Bare-bone design and API. To lower the barrier of entry, we avoid framework code and keep
the number of different objects to a minimum, relying on numpy arrays for data containers.
Community-driven development. We base our development on collaborative tools such as
git, github and public mailing lists. External contributions are welcome and encouraged.
Documentation. Scikit-learn provides a ∼300 page user guide including narrative documentation, class references, a tutorial, installation instructions, as well as more than 60
examples, some featuring real-world applications. We try to minimize the use of machine-learning jargon, while maintaining precision with regards to the algorithms employed.
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "2. Project Vision",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 3. Underlying Technologies
Numpy: the base data structure used for data and model parameters. Input data is presented as numpy arrays, thus integrating seamlessly with other scientific Python libraries.
Numpy’s view-based memory model limits copies, even when binding with compiled code
(Van der Walt et al., 2011). It also provides basic arithmetic operations.
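Numpy's view-based memory model mentioned above can be observed directly; this snippet is an illustration added here, not part of the paper:

```python
import numpy as np

data = np.arange(12, dtype=np.float64).reshape(3, 4)

# Basic slicing returns a view: no data is copied, and writes through
# the view are visible in the original array.
col = data[:, 1]
col[:] = -1.0
assert np.shares_memory(col, data)
assert data[0, 1] == -1.0

# Fancy indexing, by contrast, materializes a copy.
rows = data[[0, 2]]
assert not np.shares_memory(rows, data)
```

This copy-avoiding behavior is what lets compiled extensions operate on the caller's buffer without duplicating it.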
Scipy: efficient algorithms for linear algebra, sparse matrix representation, special functions
and basic statistical functions. Scipy has bindings for many Fortran-based standard numerical packages, such as LAPACK. This is important for ease of installation and portability,
as providing libraries around Fortran code can prove challenging on various platforms.
Cython: a language for combining C and Python. Cython makes it easy to reach the performance of compiled languages with Python-like syntax and high-level operations. It is also
used to bind compiled libraries, eliminating the boilerplate code of Python/C extensions.
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "3. Underlying Technologies",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 4. Code Design
Objects specified by interface, not by inheritance. To facilitate the use of external objects
with scikit-learn, inheritance is not enforced; instead, code conventions provide a consistent
interface. The central object is an estimator, that implements a fit method, accepting as
arguments an input data array and, optionally, an array of labels for supervised problems.
Supervised estimators, such as SVM classifiers, can implement a predict method.

| | scikit-learn | mlpy | pybrain | pymvpa | mdp | shogun |
|---|---|---|---|---|---|---|
| Support Vector Classification | 5.2 | 9.47 | 17.5 | 11.52 | 40.48 | 5.63 |
| Lasso (LARS) | 1.17 | 105.3 | - | 37.35 | - | - |
| Elastic Net | 0.52 | 73.7 | - | 1.44 | - | - |
| k-Nearest Neighbors | 0.57 | 1.41 | - | 0.56 | 0.58 | 1.36 |
| PCA (9 components) | 0.18 | - | - | 8.93 | 0.47 | 0.33 |
| k-Means (9 clusters) | 1.34 | 0.79 | ⋆ | - | 35.75 | 0.68 |
| License | BSD | GPL | BSD | BSD | BSD | GPL |

-: Not implemented. ⋆: Does not converge within 1 hour.

Table 1: Time in seconds on the Madelon data set for various machine learning libraries exposed in Python: MLPy (Albanese et al., 2008), PyBrain (Schaul et al., 2010), pymvpa (Hanke et al., 2009), MDP (Zito et al., 2008) and Shogun (Sonnenburg et al., 2010). For more benchmarks see [http://github.com/scikit-learn](http://github.com/scikit-learn).

Some estimators, that we call transformers, for example, PCA, implement a transform method,
returning modified input data. Estimators may also provide a score method, which is an
increasing evaluation of goodness of fit: a log-likelihood, or a negated loss function. The
other important object is the cross-validation iterator, which provides pairs of train and test
indices to split input data, for example K-fold, leave one out, or stratified cross-validation.
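The duck-typed estimator convention can be sketched with a toy classifier; this is an illustration following the fit/predict convention, not code from scikit-learn itself:

```python
import numpy as np

class MeanClassifier:
    """Toy estimator following the scikit-learn convention: fit(X, y)
    learns state, predict(X) uses it. No base class is required --
    the interface alone makes it interoperable."""

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(y)
        # Learn one mean vector (centroid) per class.
        self.means_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self  # returning self allows chained calls

    def predict(self, X):
        X = np.asarray(X)
        # Assign each sample to the nearest class centroid.
        d = ((X[:, None, :] - self.means_[None, :, :]) ** 2).sum(axis=2)
        return self.classes_[d.argmin(axis=1)]
```

Because the object exposes only fit and predict over numpy arrays, any external class written this way plugs into the same tooling.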
Model selection. Scikit-learn can evaluate an estimator’s performance or select parameters
using cross-validation, optionally distributing the computation to several cores. This is accomplished by wrapping an estimator in a GridSearchCV object, where the “CV” stands for
“cross-validated”. During the call to fit, it selects the parameters on a specified parameter
grid, maximizing a score (the score method of the underlying estimator). predict, score,
or transform are then delegated to the tuned estimator. This object can therefore be used
transparently as any other estimator. Cross validation can be made more efficient for certain
estimators by exploiting specific properties, such as warm restarts or regularization paths
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "4. Code Design",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
transparently as any other estimator. Cross validation can be made more efficient for certain
estimators by exploiting specific properties, such as warm restarts or regularization paths
(Friedman et al., 2010). This is supported through special objects, such as the LassoCV.
Finally, a Pipeline object can combine several transformers and an estimator to create
a combined estimator to, for example, apply dimension reduction before fitting. It behaves
as a standard estimator, and GridSearchCV therefore tunes the parameters of all steps.
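Assuming a scikit-learn installation, the Pipeline-plus-GridSearchCV pattern described above reads roughly as follows; the dataset, step names, and parameter grid are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A Pipeline chains transformers with a final estimator; the combined
# object is itself an estimator, so GridSearchCV can tune any step.
pipe = Pipeline([("reduce", PCA(n_components=2)), ("clf", SVC())])
search = GridSearchCV(
    pipe,
    param_grid={"clf__C": [0.1, 1.0, 10.0]},  # step-name prefix "clf__"
    cv=3,
)
search.fit(X, y)  # cross-validated fit over the grid
```

After fitting, `search` delegates predict and score to the best tuned pipeline, so it can be used transparently like any other estimator.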
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "4. Code Design",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 5. High-level yet Efficient: Some Trade Offs
While scikit-learn focuses on ease of use, and is mostly written in a high level language, care
has been taken to maximize computational efficiency. In Table 1, we compare computation
time for a few algorithms implemented in the major machine learning toolkits accessible
in Python. We use the Madelon data set (Guyon et al., 2004), with 4400 instances and 500
attributes. The data set is quite large, but small enough for most algorithms to run.
SVM. While all of the packages compared call libsvm in the background, the performance of
scikit-learn can be explained by two factors. First, our bindings avoid memory copies and
have up to 40% less overhead than the original libsvm Python bindings. Second, we patch
libsvm to improve efficiency on dense data, use a smaller memory footprint, and better use
memory alignment and pipelining capabilities of modern processors. This patched version
also provides unique features, such as setting weights for individual samples.
LARS. Iteratively refining the residuals instead of recomputing them gives performance
gains of 2–10 times over the reference R implementation (Hastie and Efron, 2004). Pymvpa
uses this implementation via the Rpy R bindings and pays a heavy price to memory copies.
Elastic Net. We benchmarked the scikit-learn coordinate descent implementations of Elastic
Net. It achieves the same order of performance as the highly optimized Fortran version
glmnet (Friedman et al., 2010) on medium-scale problems, but performance on very large
problems is limited since we do not use the KKT conditions to define an active set.
kNN. The k-nearest neighbors classifier implementation constructs a ball tree (Omohundro,
1989) of the samples, but uses a more efficient brute force search in large dimensions.
PCA. For medium to large data sets, scikit-learn provides an implementation of a truncated
PCA based on random projections (Rokhlin et al., 2009).
k-means. scikit-learn’s k-means algorithm is implemented in pure Python. Its performance
is limited by the fact that numpy’s array operations take multiple passes over data.
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "5. High-level yet Efficient: Some Trade Offs",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### 6. Conclusion
Scikit-learn exposes a wide variety of machine learning algorithms, both supervised and
unsupervised, using a consistent, task-oriented interface, thus enabling easy comparison
of methods for a given application. Since it relies on the scientific Python ecosystem, it
can easily be integrated into applications outside the traditional range of statistical data
analysis. Importantly, the algorithms, implemented in a high-level language, can be used
as building blocks for approaches specific to a use case, for example, in medical imaging
(Michel et al., 2011). Future work includes online learning, to scale to large data sets.
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "6. Conclusion",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
### References
D. Albanese, S. Merler, G. Jurman, and R. Visintainer. MLPy: high-performance
Python package for predictive modeling. In NIPS, MLOSS workshop, 2008.
C.C. Chang and C.J. Lin. LIBSVM: a library for support vector machines.
[http://www.csie.ntu.edu.tw/cjlin/libsvm, 2001.](http://www.csie.ntu.edu.tw/cjlin/libsvm)
P.F. Dubois, editor. Python: batteries included, volume 9 of Computing in Science &
Engineering. IEEE/AIP, May 2007.
R.E. Fan, K.W. Chang, C.J. Hsieh, X.R. Wang, and C.J. Lin. LIBLINEAR: A library for
large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear
models via coordinate descent. Journal of statistical software, 33(1):1, 2010.
I Guyon, S. R. Gunn, A. Ben-Hur, and G. Dror. Result analysis of the NIPS 2003 feature
selection challenge, 2004.
M. Hanke, Y.O. Halchenko, P.B. Sederberg, S.J. Hanson, J.V. Haxby, and S. Pollmann.
PyMVPA: A Python toolbox for multivariate pattern analysis of fMRI data. Neuroinformatics, 7(1):37–53, 2009.
T. Hastie and B. Efron. Least Angle Regression, Lasso and Forward Stagewise.
[http://cran.r-project.org/web/packages/lars/lars.pdf, 2004.](http://cran.r-project.org/web/packages/lars/lars.pdf)
V. Michel, A. Gramfort, G. Varoquaux, E. Eger, C. Keribin, and B. Thirion. A supervised
clustering approach for fMRI-based inference of brain states. Patt Rec, page epub ahead
of print, April 2011. doi: 10.1016/j.patcog.2011.04.006.
K.J. Milmann and M. Avaizis, editors. Scientific Python, volume 11 of Computing in Science
& Engineering. IEEE/AIP, March 2011.
S.M. Omohundro. Five balltree construction algorithms. ICSI Technical Report TR-89-063,
1989.
V. Rokhlin, A. Szlam, and M. Tygert. A randomized algorithm for principal component
analysis. SIAM Journal on Matrix Analysis and Applications, 31(3):1100–1124, 2009.
T. Schaul, J. Bayer, D. Wierstra, Y. Sun, M. Felder, F. Sehnke, T. Rückstieß, and J. Schmidhuber. PyBrain. The Journal of Machine Learning Research, 11:743–746, 2010.
S. Sonnenburg, G. Rätsch, S. Henschel, C. Widmer, J. Behr, A. Zien, F. de Bona, A. Binder,
C. Gehl, and V. Franc. The SHOGUN Machine Learning Toolbox. Journal of Machine
Learning Research, 11:1799–1802, 2010.
S. Van der Walt, S.C Colbert, and G. Varoquaux. The NumPy array: a structure for
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "References",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
C. Gehl, and V. Franc. The SHOGUN Machine Learning Toolbox. Journal of Machine
Learning Research, 11:1799–1802, 2010.
S. Van der Walt, S.C Colbert, and G. Varoquaux. The NumPy array: a structure for
efficient numerical computation. Computing in Science and Engineering, 11, 2011.
T. Zito, N. Wilbert, L. Wiskott, and P. Berkes. Modular toolkit for Data Processing (MDP):
a Python data processing framework. Frontiers in neuroinformatics, 2, 2008.
|
{
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
}
|
{
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "References",
"Header 4": null
}
|
{
"chunk_type": "references"
}
|
## Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
##### Feng Niu, Benjamin Recht, Christopher Ré and Stephen J. Wright
Computer Sciences Department, University of Wisconsin-Madison
1210 W Dayton St, Madison, WI 53706
June 2011
**Abstract**
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art
performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and
synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented *without any locking*. We present an update scheme
called Hogwild! which allows processors access to shared memory with the possibility of overwriting each other’s work. We show that when the associated optimization problem is *sparse*,
meaning most gradient updates only modify small parts of the decision variable, then Hogwild!
achieves a nearly optimal rate of convergence. We demonstrate experimentally that Hogwild!
outperforms alternative schemes that use locking by an order of magnitude.
**Keywords.** Stochastic gradient descent, Incremental gradient methods, Machine learning,
Parallel computing, Multicore
|
{
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
}
|
{
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": null,
"Header 4": null
}
|
{
"chunk_type": "title"
}
|
### **1 Introduction**
With its small memory footprint, robustness against noise, and rapid learning rates, Stochastic Gradient Descent (SGD) has proved to be well suited to data-intensive machine learning tasks [3,5,27].
However, SGD’s scalability is limited by its inherently sequential nature; it is difficult to parallelize. Nevertheless, the recent emergence of inexpensive multicore processors and mammoth,
web-scale data sets has motivated researchers to develop several clever parallelization schemes for
SGD [4, 10, 12, 17, 31]. As many large data sets are currently pre-processed in a MapReduce-like
parallel-processing framework, much of the recent work on parallel SGD has focused naturally on
MapReduce implementations. MapReduce is a powerful tool developed at Google for extracting
information from huge logs (e.g., “find all the urls from a 100TB of Web data”) that was designed
to ensure fault tolerance and to simplify the maintenance and programming of large clusters of
machines [9]. But MapReduce is not ideally suited for online, numerically intensive data analysis. Iterative computation is difficult to express in MapReduce, and the overhead to ensure fault
tolerance can result in dismal throughput. Indeed, even Google researchers themselves suggest
that other systems, for example Dremel, are more appropriate than MapReduce for data analysis
tasks [22].
For some data sets, the sheer size of the data dictates that one use a cluster of machines. However, there are a host of problems in which, after appropriate preprocessing, the data necessary for
statistical analysis may consist of a few terabytes or less. For such problems, one can use a single
inexpensive work station as opposed to a hundred thousand dollar cluster. Multicore systems have
significant performance advantages, including (1) low latency and high throughput shared main
memory (a processor in such a system can write and read the shared physical memory at over
12GB/s with latency in the tens of nanoseconds); and (2) high bandwidth off multiple disks (a
thousand-dollar RAID can pump data into main memory at over 1GB/s). In contrast, a typical
MapReduce setup will read incoming data at rates less than tens of MB/s due to frequent checkpointing for fault tolerance. The high rates achievable by multicore systems move the bottlenecks
in parallel computation to synchronization (or locking) amongst the processors [2, 13]. Thus, to
|
{
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
}
|
{
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**1 Introduction**",
"Header 4": null
}
|
{
"chunk_type": "body"
}
|
in parallel computation to synchronization (or locking) amongst the processors [2, 13]. Thus, to
enable scalable data analysis on a multicore machine, any performant solution must minimize the
overhead of locking.
In this work, we propose a simple strategy for eliminating the overhead associated with locking:
*run SGD in parallel without locks*, a strategy that we call Hogwild!. In Hogwild!, processors are
allowed equal access to shared memory and are able to update individual components of memory
at will. Such a lock-free scheme might appear doomed to fail as processors could overwrite each
other’s progress. However, when the data access is *sparse*, meaning that individual SGD steps only
modify a small part of the decision variable, we show that memory overwrites are rare and that
they introduce barely any error into the computation when they do occur. We demonstrate both
theoretically and experimentally a near linear speedup with the number of processors on commonly
occurring sparse learning problems.
In Section 2, we formalize a notion of sparsity that is sufficient to guarantee such a speedup
and provide canonical examples of sparse machine learning problems in classification, collaborative
filtering, and graph cuts. Our notion of sparsity allows us to provide theoretical guarantees of
linear speedups in Section 4. As a by-product of our analysis, we also derive rates of convergence
for algorithms with constant stepsizes. We demonstrate that robust $1/k$ convergence rates are
possible with constant stepsize schemes that implement an exponential back-off in the constant
over time. This result is interesting in and of itself and shows that one need not settle for $1/\sqrt{k}$ rates
to ensure robustness in SGD algorithms.
In practice, we find that computational performance of a lock-free procedure exceeds even our
theoretical guarantees. We experimentally compare lock-free SGD to several recently proposed
methods. We show that all methods that propose memory locking are significantly slower than
their respective lock-free counterparts on a variety of machine learning applications.
### **2 Sparse Separable Cost Functions**
Our goal throughout is to minimize a function $f : X \subseteq \mathbb{R}^n \to \mathbb{R}$ of the form

$$f(x) = \sum_{e \in E} f_e(x_e). \qquad (2.1)$$

Here $e$ denotes a small subset of $\{1, \dots, n\}$ and $x_e$ denotes the values of the vector $x$ on the
coordinates indexed by $e$. The key observation that underlies our lock-free approach is that the
natural cost functions associated with many machine learning problems of interest are *sparse* in the
sense that $|E|$ and $n$ are both very large but each individual $f_e$ acts only on a very small number
of components of $x$. That is, each subvector $x_e$ contains just a few components of $x$.
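As a concrete illustration (ours, not from the paper), a sparse separable cost can be represented by storing each term as its coordinate subset $e$ together with a function of $x_e$:

```python
# Hypothetical illustration (not the paper's code): represent
# f(x) = sum_e f_e(x_e), where each term touches only the few
# coordinates listed in its edge e.

def make_term(edge, weights):
    # A toy quadratic term f_e(x_e) = sum over v in e of w_v * x_v**2.
    def f_e(x):
        return sum(w * x[v] ** 2 for v, w in zip(edge, weights))
    return edge, f_e

terms = [
    make_term([0, 2], [1.0, 2.0]),         # touches coordinates {0, 2}
    make_term([1, 3, 5], [0.5, 0.5, 0.5]), # touches coordinates {1, 3, 5}
]

def f(x):
    # Evaluate the separable cost; each term reads only its own coordinates.
    return sum(f_e(x) for _, f_e in terms)

x = [1.0] * 6
total = f(x)
```

Evaluating (or differentiating) any single term never touches coordinates outside its edge, which is exactly the sparsity the lock-free analysis exploits.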
**Figure 1:** Example graphs induced by cost functions. (a) A sparse SVM induces a hypergraph
where each hyperedge corresponds to one example. (b) A matrix completion example induces
a bipartite graph between the rows and columns, with an edge between two nodes if an entry is
revealed. (c) The induced hypergraph in a graph-cut problem is simply the graph whose cuts
we aim to find.
The cost function (2.1) induces a *hypergraph* $G = (V, E)$ whose nodes are the individual components of $x$. Each subvector $x_e$ induces an edge $e \in E$ in the graph, consisting of some subset of
nodes. A few examples illustrate this concept.
**Sparse SVM.** Suppose our goal is to fit a support vector machine to some data pairs $E =
\{(z_1, y_1), \dots, (z_{|E|}, y_{|E|})\}$, where $z \in \mathbb{R}^n$ and $y$ is a label for each $(z, y) \in E$:

$$\text{minimize}_x \; \sum_{\alpha \in E} \max(1 - y_\alpha x^T z_\alpha, 0) + \lambda \|x\|_2^2, \qquad (2.2)$$

and we know *a priori* that the examples $z_\alpha$ are very sparse (see for example [14]). To write this
cost function in the form of (2.1), let $e_\alpha$ denote the components which are non-zero in $z_\alpha$ and let
$d_u$ denote the number of training examples which are non-zero in component $u$ ($u = 1, 2, \dots, n$).
Then we can rewrite (2.2) as

$$\text{minimize}_x \; \sum_{\alpha \in E} \left( \max(1 - y_\alpha x^T z_\alpha, 0) + \lambda \sum_{u \in e_\alpha} \frac{x_u^2}{d_u} \right). \qquad (2.3)$$

Each term in the sum (2.3) depends only on the components of $x$ indexed by the set $e_\alpha$.
**Matrix Completion.** In the matrix completion problem, we are provided entries of a low-rank,
$n_r \times n_c$ matrix $\boldsymbol{Z}$ from the index set $E$. Such problems arise in collaborative filtering, Euclidean
distance estimation, and clustering [8, 18, 25]. Our goal is to reconstruct $\boldsymbol{Z}$ from this sparse sampling
of data. A popular heuristic recovers the estimate of $\boldsymbol{Z}$ as a product $\boldsymbol{L}\boldsymbol{R}^*$ of factors obtained from
the following minimization:

$$\text{minimize}_{(\boldsymbol{L}, \boldsymbol{R})} \; \sum_{(u,v) \in E} (\boldsymbol{L}_u \boldsymbol{R}_v^* - Z_{uv})^2 + \frac{\mu}{2}\|\boldsymbol{L}\|_F^2 + \frac{\mu}{2}\|\boldsymbol{R}\|_F^2, \qquad (2.4)$$

where $\boldsymbol{L}$ is $n_r \times r$, $\boldsymbol{R}$ is $n_c \times r$, and $\boldsymbol{L}_u$ (resp. $\boldsymbol{R}_v$) denotes the $u$th (resp. $v$th) row of $\boldsymbol{L}$ (resp. $\boldsymbol{R}$)
[18, 25, 28]. To put this problem in sparse form, i.e., as (2.1), we write (2.4) as
$$\text{minimize}_{(\boldsymbol{L}, \boldsymbol{R})} \; \sum_{(u,v) \in E} \left( (\boldsymbol{L}_u \boldsymbol{R}_v^* - Z_{uv})^2 + \frac{\mu}{2|E_{u-}|}\|\boldsymbol{L}_u\|_F^2 + \frac{\mu}{2|E_{-v}|}\|\boldsymbol{R}_v\|_F^2 \right),$$

where $E_{u-} = \{v : (u, v) \in E\}$ and $E_{-v} = \{u : (u, v) \in E\}$.
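To make the sparsity of the updates concrete, here is a hypothetical per-entry SGD step for this factored objective: processing a revealed entry $(u, v)$ reads and writes only row $u$ of $\boldsymbol{L}$ and row $v$ of $\boldsymbol{R}$. This is our own sketch, not the paper's code.

```python
import numpy as np

def sgd_step_entry(L, R, u, v, z_uv, mu, n_obs_row, n_obs_col, gamma):
    """One SGD step for a single revealed entry (u, v) of Z, following
    the separable form of (2.4): only row u of L and row v of R are
    read and written. (Illustrative sketch, not the paper's code.)"""
    residual = L[u] @ R[v] - z_uv
    grad_Lu = 2 * residual * R[v] + (mu / n_obs_row) * L[u]
    grad_Rv = 2 * residual * L[u] + (mu / n_obs_col) * R[v]
    L[u] -= gamma * grad_Lu
    R[v] -= gamma * grad_Rv

L = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
R = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
other_row = L[1].copy()
sgd_step_entry(L, R, u=0, v=2, z_uv=1.0, mu=0.1,
               n_obs_row=2, n_obs_col=2, gamma=0.01)
# Rows not indexed by the revealed entry are untouched.
untouched = bool(np.allclose(L[1], other_row))
touched = not np.allclose(L[0], np.array([1.0, 0.0]))
```

Because steps on disjoint entries write disjoint rows, they can run concurrently without collisions, which is the property the lock-free scheme relies on.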
**Graph Cuts.** Problems involving minimum cuts in graphs frequently arise in machine learning
(see [6] for a comprehensive survey). In such problems, we are given a sparse, nonnegative matrix $W$
which indexes similarity between entities. Our goal is to find a partition of the index set $\{1, \dots, n\}$
that best conforms to this similarity matrix. Here the graph structure is explicitly determined by
the similarity matrix $W$; arcs correspond to nonzero entries in $W$. For example, we may have
a list of $n$ strings, $W_{uv}$ might index the similarity of strings $u$ and $v$, and we want to match each string
to some list of $D$ entities. Each node is associated with a vector $x_i$ in the $D$-dimensional simplex
$S_D = \{\zeta \in \mathbb{R}^D : \zeta_v \geq 0, \; \sum_{v=1}^D \zeta_v = 1\}$. Here, two-way cuts use $D = 2$, but multiway cuts with
tens of thousands of classes also arise in entity resolution problems [19]. Several authors (e.g., [7])
propose to minimize the cost function

$$\text{minimize}_x \; \sum_{(u,v) \in E} w_{uv} \|x_u - x_v\|_1 \quad \text{subject to } x_v \in S_D \text{ for } v = 1, \dots, n. \qquad (2.5)$$
In all three of the preceding examples, the number of components involved in a particular term
$f_e$ is a small fraction of the total number of entries. We formalize this notion by defining the
following statistics of the hypergraph $G$:

$$\Omega := \max_{e \in E} |e|, \quad \Delta := \frac{\max_{1 \leq v \leq n} |\{e \in E : v \in e\}|}{|E|}, \quad \rho := \frac{\max_{e \in E} |\{\hat{e} \in E : \hat{e} \cap e \neq \emptyset\}|}{|E|}. \qquad (2.6)$$

The quantity $\Omega$ simply quantifies the size of the hyperedges, $\rho$ determines the maximum fraction of
edges that intersect any given edge, and $\Delta$ determines the maximum fraction of edges that intersect any variable.
$\rho$ is a measure of the sparsity of the hypergraph, while $\Delta$ measures the node-regularity.
For our examples, we can make the following observations about $\rho$ and $\Delta$.

1. **Sparse SVM.** $\Delta$ is simply the maximum frequency that any feature appears in an example,
while $\rho$ measures how clustered the hypergraph is. If some features are very common across
the data set, then $\rho$ will be close to one.

2. **Matrix Completion.** If we assume that the provided examples are sampled uniformly at
random and we see more than $n_c \log(n_c)$ of them, then $\Delta \approx \frac{\log(n_r)}{n_r}$ and $\rho \approx \frac{2\log(n_r)}{n_r}$. This
follows from a *coupon collector* argument [8].

3. **Graph Cuts.** $\Delta$ is the maximum degree divided by $|E|$, and $\rho$ is at most $2\Delta$.
We now describe a simple protocol that achieves a linear speedup in the number of processors
when Ω, ∆, and *ρ* are relatively small.
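For a small hypergraph, the statistics in (2.6) can be counted directly; the following brute-force sketch (ours, not from the paper) does exactly that:

```python
def hypergraph_stats(edges, n):
    """Brute-force computation of the statistics in (2.6) for a small
    hypergraph whose edges are subsets of range(n). (Our sketch.)"""
    E = [frozenset(e) for e in edges]
    # Omega: the largest hyperedge size.
    omega = max(len(e) for e in E)
    # Delta: max over variables of the fraction of edges containing it.
    delta = max(sum(1 for e in E if v in e) for v in range(n)) / len(E)
    # rho: max over edges of the fraction of edges intersecting it
    # (every edge intersects itself).
    rho = max(sum(1 for f in E if e & f) for e in E) / len(E)
    return omega, delta, rho

edges = [{0, 1}, {1, 2}, {3, 4}, {4, 0}]
omega, delta, rho = hypergraph_stats(edges, n=5)
```

On this toy hypergraph every edge has two nodes ($\Omega = 2$), each variable appears in at most half the edges ($\Delta = 0.5$), and the most "clustered" edge intersects three of the four edges, itself included ($\rho = 0.75$).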
### **3 The Hogwild! Algorithm**
Here we discuss the parallel processing setup. We assume a shared memory model with $p$ processors. The decision variable $x$ is accessible to all processors. Each processor can read $x$, and can
contribute an update vector to $x$. The vector $x$ is stored in shared memory, and we assume that
the componentwise addition operation

$$x_v \leftarrow x_v + a$$
**Algorithm 1** Hogwild! update for individual processors
1: **loop**
2: Sample $e$ uniformly at random from $E$
3: Read current state $x_e$ and evaluate $G_e(x)$
4: **for** $v \in e$ **do** $x_v \leftarrow x_v - \gamma b_v^T G_e(x)$
5: **end loop**
can be performed atomically by any processor for a scalar $a$ and $v \in \{1, \dots, n\}$. This operation
does not require a separate locking structure on most modern hardware: such an operation is
a single atomic instruction on GPUs and DSPs, and it can be implemented via a compare-and-exchange operation on a general purpose multicore processor like the Intel Nehalem. In contrast,
the operation of updating many components at once requires an auxiliary locking structure.
Each processor then follows the procedure in Algorithm 1. To fully describe the algorithm, let
$b_v$ denote one of the standard basis elements in $\mathbb{R}^n$, with $v$ ranging from $1, \dots, n$. That is, $b_v$ is
equal to 1 on the $v$th component and 0 otherwise. Let $P_v$ denote the Euclidean projection matrix
onto the $v$th coordinate, i.e., $P_v = b_v b_v^T$. $P_v$ is a diagonal matrix equal to 1 on the $v$th diagonal entry and
zeros elsewhere. Let $G_e(x) \in \mathbb{R}^n$ denote a gradient or subgradient of the function $f_e$ multiplied by
$|E|$. That is, we extend $f_e$ from a function on the coordinates of $e$ to all of $\mathbb{R}^n$ simply by ignoring
the components in $\neg e$ (i.e., not in $e$). Then

$$|E|^{-1} G_e(x) \in \partial f_e(x).$$

Here, $G_e$ is equal to zero on the components in $\neg e$. Using a sparse representation, we can calculate
$G_e(x)$ knowing only the values of $x$ in the components indexed by $e$. Note that as a consequence
of the uniform random sampling of $e$ from $E$, we have

$$\mathbb{E}[G_e(x_e)] \in \partial f(x).$$

In Algorithm 1, each processor samples a term $e \in E$ uniformly at random, computes the
gradient of $f_e$ at $x_e$, and then writes
$$x_v \leftarrow x_v - \gamma b_v^T G_e(x), \quad \text{for each } v \in e. \qquad (3.1)$$

Importantly, note that the processor modifies only the variables indexed by $e$, leaving all of the
components in $\neg e$ (i.e., not in $e$) alone. We assume that the stepsize $\gamma$ is a fixed constant. Even
though the processors have no knowledge as to whether any of the other processors have modified $x$,
we define $x_j$ to be the state of the decision variable $x$ after $j$ updates have been performed.[1] Since
two processors can write to $x$ at the same time, we need to be a bit careful with this definition, but
we simply break ties at random. Note that $x_j$ is generally updated with a stale gradient, which
is based on a value of $x$ read many clock cycles earlier. We use $x_{k(j)}$ to denote the value of the
decision variable used to compute the gradient or subgradient that yields the state $x_j$.
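The access pattern of Algorithm 1 can be sketched in a few lines of Python. This is our own illustration only: CPython's GIL serializes the arithmetic, so a real lock-free implementation would live in C/C++ over shared memory, but the sketch shows exactly which coordinates each thread reads and writes without synchronization.

```python
import random
import threading

def hogwild(x, edges, grad_e, gamma, n_updates, n_threads=4):
    """Lock-free SGD sketch of Algorithm 1 (illustrative only).
    Each thread samples a term e, evaluates the gradient of f_e at the
    current state, and writes back only the coordinates in e, with no
    locks around the reads or the writes."""
    def worker(steps):
        for _ in range(steps):
            e = random.choice(edges)     # sample e uniformly from E
            g = grad_e(e, x)             # dict: coordinate -> gradient entry
            for v, gv in g.items():
                x[v] -= gamma * gv       # unsynchronized componentwise write
    threads = [threading.Thread(target=worker, args=(n_updates // n_threads,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

# Toy separable cost: f(x) = sum over edges of sum_{v in e} (x_v - 1)^2,
# whose unique minimizer is the all-ones vector.
edges = [(0, 1), (2, 3), (1, 2), (3, 0)]
grad = lambda e, x: {v: 2.0 * (x[v] - 1.0) for v in e}
x = hogwild([0.0] * 4, edges, grad, gamma=0.05, n_updates=4000)
```

Even with occasional stale reads and overwrites, the sparse updates contract every coordinate toward the minimizer, which is the behavior the analysis below makes precise.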
In what follows, we provide conditions under which this asynchronous, incremental gradient
algorithm converges. Moreover, we show that if the hypergraph induced by *f* is isotropic and
sparse, then this algorithm converges in nearly the same number of gradient steps as its serial
counterpart. Since we are running in parallel and without locks, this means that we get a nearly
linear speedup in terms of the number of processors.
[1] Our notation overloads subscripts of $x$. For clarity throughout, subscripts $i$, $j$, and $k$ refer to iteration counts,
and $v$ and $e$ refer to components or subsets of components.
### **4 Fast Rates for Lock-Free Parallelism**
We now turn to our theoretical analysis of Hogwild! protocols. To make the analysis tractable, we
assume that we update with the following "with replacement" procedure: each processor samples
an edge $e$ uniformly at random and computes a subgradient of $f_e$ at the current value of the decision
variable. Then it chooses a $v \in e$ uniformly at random and updates

$$x_v \leftarrow x_v - \gamma |e| b_v^T G_e(x).$$

Note that the stepsize is a factor $|e|$ larger than the step in (3.1). Also note that this update is
completely equivalent to

$$x \leftarrow x - \gamma |e| P_v G_e(x). \qquad (4.1)$$
This notation will be more convenient for the subsequent analysis.
This with replacement scheme assumes that a gradient is computed and then only one of its
components is used to update the decision variable. Such a scheme is computationally wasteful as
the rest of the components of the gradient carry information for decreasing the cost. Consequently,
in practice and in our experiments, we perform a modification of this procedure. We partition out
the edges *without replacement* to all of the processors at the beginning of each epoch. The processors
then perform *full updates* of all of the components of each edge in their respective queues. However,
we emphasize again that we do not implement any locking mechanisms on any of the variables. We
do not analyze this “without replacement” procedure because no one has achieved tractable analyses
for SGD in any without replacement sampling models. Indeed, to our knowledge, all analysis of
without-replacement sampling yields rates that are comparable to a standard subgradient descent
algorithm which takes steps along the full gradient of (2.1) (see, for example [23]). That is, these
analyses suggest that without-replacement sampling should require a factor of *|E|* more steps than
with-replacement sampling. In practice, this worst case behavior is never observed. In fact, it is
conventional wisdom in machine learning that without-replacement sampling in stochastic gradient
descent actually outperforms the with-replacement variants on which all of the analysis is based.
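The epoch scheme described above, partitioning the terms across processors without replacement and letting each processor perform full (still lock-free) updates over its own queue, can be sketched as follows (our illustration):

```python
import random

def partition_epoch(edges, n_procs, seed=None):
    """Shuffle the terms once per epoch and deal them out to the
    processors without replacement; each processor then performs full
    updates over its own queue, with no locks on any variable.
    (Our sketch of the practical variant described above.)"""
    rng = random.Random(seed)
    order = list(edges)
    rng.shuffle(order)
    return [order[p::n_procs] for p in range(n_procs)]

queues = partition_epoch(range(10), n_procs=3, seed=0)
covered = sorted(e for q in queues for e in q)
```

Every term is assigned to exactly one processor per epoch, so within an epoch no term is visited twice, which is what "without replacement" means here.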
To state our theoretical results, we must describe several quantities that are important in the
analysis of our parallel stochastic gradient descent scheme. We follow the notation and assumptions
of Nemirovski *et al.* [24]. To simplify the analysis, we will assume that each $f_e$ in (2.1) is a convex
function. We assume Lipschitz continuous differentiability of $f$ with Lipschitz constant $L$:

$$\|\nabla f(x') - \nabla f(x)\| \leq L \|x' - x\|, \quad \forall x', x \in X. \qquad (4.2)$$

We also assume $f$ is strongly convex with modulus $c$. By this we mean that

$$f(x') \geq f(x) + (x' - x)^T \nabla f(x) + \frac{c}{2}\|x' - x\|^2, \quad \text{for all } x', x \in X. \qquad (4.3)$$

When $f$ is strongly convex, there exists a unique minimizer $x_\star$ and we denote $f_\star = f(x_\star)$. We
additionally assume that there exists a constant $M$ such that

$$\|G_e(x_e)\|_2 \leq M \quad \text{almost surely for all } x \in X. \qquad (4.4)$$

We assume throughout that $\gamma c < 1$. (Indeed, when $\gamma c > 1$, even the ordinary gradient descent
algorithms will diverge.)
Our main results are summarized by the following proposition.
**Proposition 4.1** *Suppose in Algorithm 1 that the lag between when a gradient is computed and*
*when it is used in step $j$ — namely, $j - k(j)$ — is always less than or equal to $\tau$, and $\gamma$ is defined*
*to be*

$$\gamma = \frac{\vartheta \epsilon c}{2 L M^2 \Omega \left(1 + 6\rho\tau + 4\tau^2 \Omega \Delta^{1/2}\right)} \qquad (4.5)$$

*for some $\epsilon > 0$ and $\vartheta \in (0, 1)$. Define $D_0 := \|x_0 - x_\star\|^2$ and let $k$ be an integer satisfying*

$$k \geq \frac{2 L M^2 \Omega \left(1 + 6\tau\rho + 6\tau^2 \Omega \Delta^{1/2}\right) \log(L D_0 / \epsilon)}{c^2 \vartheta \epsilon}. \qquad (4.6)$$

*Then after $k$ component updates of $x$, we have $\mathbb{E}[f(x_k) - f_\star] \leq \epsilon$.*
In the case that $\tau = 0$, this reduces to precisely the rate achieved by the serial SGD protocol.
A similar rate is achieved if $\tau = o(n^{1/4})$, as $\rho$ and $\Delta$ are typically both $o(1/n)$. In our setting, $\tau$
is proportional to the number of processors, and hence as long as the number of processors is less
than $n^{1/4}$, we get nearly the same recursion as in the linear rate.
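Plugging constants into (4.5) and (4.6) is mechanical; the helpers below simply transcribe the two formulas (our transcription, with symbol names mirroring the proposition):

```python
import math

def hogwild_stepsize(epsilon, c, L, M, omega, rho, delta, tau, theta=0.5):
    """Stepsize gamma from (4.5)."""
    return (theta * epsilon * c) / (
        2 * L * M**2 * omega
        * (1 + 6 * rho * tau + 4 * tau**2 * omega * math.sqrt(delta)))

def hogwild_iterations(epsilon, c, L, M, omega, rho, delta, tau, D0, theta=0.5):
    """Lower bound on the number of component updates k from (4.6)."""
    return (2 * L * M**2 * omega
            * (1 + 6 * tau * rho + 6 * tau**2 * omega * math.sqrt(delta))
            * math.log(L * D0 / epsilon)) / (c**2 * theta * epsilon)

# With tau = 0 (no lag), (4.5) collapses to the serial constant stepsize
# theta * epsilon * c / (2 L M^2 Omega).
serial_gamma = hogwild_stepsize(epsilon=0.01, c=1.0, L=1.0, M=1.0,
                                omega=2, rho=0.01, delta=0.01, tau=0)
lagged_gamma = hogwild_stepsize(epsilon=0.01, c=1.0, L=1.0, M=1.0,
                                omega=2, rho=0.01, delta=0.01, tau=4)
```

Increasing the lag bound $\tau$ shrinks the admissible stepsize and inflates the iteration bound, which quantifies the price of asynchrony.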
We prove Proposition 4.1 in two steps in the Appendix. First, we demonstrate that the sequence
$a_j = \frac{1}{2}\mathbb{E}[\|x_j - x_\star\|^2]$ satisfies a recursion of the form $a_{j+1} \leq (1 - c_r \gamma)(a_j - a_\infty) + a_\infty$ for some
constant $a_\infty$ that depends on many of the algorithm parameters but not on the state, and some
constant $c_r < c$. This $c_r$ is an "effective curvature" for the problem, which is smaller than the true
curvature $c$ because of the errors introduced by our update rule. Using the fact that $c_r \gamma < 1$, we
will show in Section 5 how to determine an upper bound on $k$ for which $a_k \leq \epsilon / L$. Proposition 4.1
then follows because $\mathbb{E}[f(x_k) - f(x_\star)] \leq L a_k$ since the gradient of $f$ is Lipschitz. A full proof is
provided in the appendix.
Note that up to the $\log(1/\epsilon)$ term in (4.6), our analysis nearly provides a $1/k$ rate of convergence
for a constant stepsize SGD scheme, both in the serial and parallel cases. Moreover, note that our
rate of convergence is fairly robust to error in the value of $c$; we pay linearly for our underestimate
of the curvature of $f$. In contrast, Nemirovski *et al.* demonstrate that when the stepsize is inversely
proportional to the iteration counter, an overestimate of $c$ can result in exponential slow-down [24]!
We now turn to demonstrating that we can eliminate the log term from (4.6) by a slightly more
complicated protocol where the stepsize is slowly decreased after a large number of iterations.