| markdown_text (string, 1–2.5k chars) | pdf_metadata (dict) | header_metadata (dict) | chunk_metadata (dict) |
|---|---|---|---|
## **Teaching Machines to Read and Comprehend**
**Karl Moritz Hermann**† **Tomáš Kočiský**‡ **Edward Grefenstette**†
**Lasse Espeholt**† **Will Kay**† **Mustafa Suleyman**† **Phil Blunsom**†‡
†Google DeepMind ‡University of Oxford
*{* kmh,etg,lespeholt,wkay,mustafasul,... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": null,
"Header 2": "**Teaching Machines to Read and Comprehend**",
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "title"
} |
### **Abstract**
Teaching machines to read natural language documents remains an elusive challenge. Machine reading systems can be tested on their ability to answer questions
posed on the contents of documents that they have seen, but until now large scale
training and test datasets have been missing for this type of... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": null,
"Header 2": "**Teaching Machines to Read and Comprehend**",
"Header 3": "**Abstract**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **1 Introduction**
Progress on the path from shallow bag-of-words information retrieval algorithms to machines capable of reading and understanding documents has been slow. Traditional approaches to machine
reading and comprehension have been based on either hand-engineered grammars [1] or information
extraction... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": null,
"Header 2": "**Teaching Machines to Read and Comprehend**",
"Header 3": "**1 Introduction**",
"Header 4": null
} | {
"chunk_type": "body"
} |
document that it believes will help it answer a question, and also allows us to visualise its inference
process. We compare these neural models to a range of baselines and heuristic benchmarks based
upon a traditional frame semantic analysis provided by a state-of-the-art natural language processing
**CN... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": null,
"Header 2": "**Teaching Machines to Read and Comprehend**",
"Header 3": "**1 Introduction**",
"Header 4": null
} | {
"chunk_type": "body"
} |
# months (train/valid/test): CNN 95 / 1 / 1; Daily Mail 56 / 1 / 1
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "months 95 1 1 56 1 1",
"Header 2": null,
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "body"
} |
# documents (train/valid/test): CNN 107,582 / 1,210 / 1,165; Daily Mail 195,461 / 11,746 / 11,074
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "documents 107,582 1,210 1,165 195,461 11,746 11,074",
"Header 2": null,
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "body"
} |
| | CNN train | CNN valid | CNN test | Daily Mail train | Daily Mail valid | Daily Mail test |
|---|---|---|---|---|---|---|
| # queries | 438,148 | 3,851 | 3,446 | 837,944 | 60,798 | 55,077 |
| Max # entities | 456 | 190 | 398 | 424 | 247 | 250 |
| Avg # entities | 29.5 | 32.4 | 30.2 | 41.3 | 44.7 | 45.0 |
| Avg tokens/doc | 780 | 809 | 773 | 1044 | 1061 | 1066 |
| Vocab size | 124,814 (CNN) | | | 274,604 (Daily Mail) | | |
Table 1: Corpus statistics. Articles were collected starting in
April 2007 for CNN and June 2010 for the Daily Ma... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "body"
} |
### **2 Supervised training data for reading comprehension**
The reading comprehension task naturally lends itself to a formulation as a supervised learning
problem. Specifically we seek to estimate the conditional probability *p* ( *a|c, q* ), where *c* is a context
document, *q* a query relating to that document, a... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**2 Supervised training data for reading comprehension**",
"Header 4": null
} | {
"chunk_type": "body"
} |
required to solve them, sorting them into six categories. *Simple* queries can be answered without
any syntactic or semantic complexity. *Lexical* queries require some form of lexical generalisation—
something that comes naturally to embedding based systems, but causes difficulty for methods relying primarily on syntac... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**2 Supervised training data for reading comprehension**",
"Header 4": null
} | {
"chunk_type": "body"
} |
least two context sentences to be answered. With 30% of queries needing complex inference and
another 10% ambiguous or unanswerable, it is clear that our machine reading setup is a hard task.
**2.2** **Entity replacement and permutation**
Note that the focus of this paper is to provide a corpus for evaluating a mod... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**2 Supervised training data for reading comprehension**",
"Header 4": null
} | {
"chunk_type": "body"
} |
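The entity replacement scheme sketched above can be made concrete in a few lines: each entity string is swapped for an abstract marker, and the marker assignment is permuted per example so that a model cannot exploit world knowledge about specific names. This is a minimal illustration under those assumptions, not the authors' actual preprocessing pipeline; the function and marker names are invented for the sketch.

```python
import random

def anonymise(text, entities, seed=0):
    """Replace entity strings with abstract markers (ent0, ent1, ...),
    permuting the marker assignment per example. Illustrative only --
    not the paper's actual preprocessing code."""
    markers = [f"ent{i}" for i in range(len(entities))]
    random.Random(seed).shuffle(markers)          # per-example permutation
    mapping = dict(zip(entities, markers))
    for name, marker in mapping.items():
        text = text.replace(name, marker)
    return text, mapping

doc = "Mary visited London before Mary met John ."
out, mapping = anonymise(doc, ["Mary", "London", "John"])
```

Because the permutation changes from example to example, the only way to answer a query is to read the context document rather than recall facts about the named entities.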
### **3 Models**
So far we have motivated the need for better datasets and tasks to evaluate the capabilities of machine
reading models. We proceed by describing a number of baselines, benchmarks and new models to
evaluate against this paradigm. We define two simple baselines, the majority baseline (maximum
frequency... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**3 Models**",
"Header 4": null
} | {
"chunk_type": "body"
} |
| # | Strategy | Query pattern | Document pattern | Example |
|---|---|---|---|---|
| 2 | be.01.V match | (p, be.01.V, y) | (**x**, be.01.V, y) | X is president / **Mike** is president |
| 3 | Correct frame | (p, V, y) | (**x**, V, z) | X won Oscar / **Tom** won Academy Award |
| 4 | Permuted frame | (p, V, y) | (y, V, **x**) | X met Suse / Suse met **Tom** |
| 5 | Matching entity | (p, V, y) | (**x**, Z... | |
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**3 Models**",
"Header 4": null
} | {
"chunk_type": "body"
} |
where *W* ( *a* ) indexes row *a* of weight matrix *W* and through a slight abuse of notation word types
double as indexes. Note that we do not privilege entities or variables; the model must learn to
differentiate these in the input sequence. The function *g* ( *d, q* ) returns a vector embedding of a
document and que... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**3 Models**",
"Header 4": null
} | {
"chunk_type": "body"
} |
$$c(t,k) = f(t,k)\,c(t-1,k) + i(t,k)\tanh\big(W_{kxc}\,x'(t,k) + W_{khc}\,h(t-1,k) + b_{kc}\big)$$
$$o(t,k) = \sigma\big(W_{kxo}\,x'(t,k) + W_{kho}\,h(t-1,k) + W_{kco}\,c(t,k) + b_{ko}\big)$$
$$h(t,k) = o(t,k)\tanh\big(c(t,k)\big)$$
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**3 Models**",
"Header 4": null
} | {
"chunk_type": "body"
} |
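The per-layer cell updates above can be made concrete with a scalar toy version, reducing the paper's vectors and weight matrices to plain floats. The input and forget gates follow the standard peephole formulation consistent with the `c` and `o` updates shown; all weight names and values here are illustrative, not the authors' parameters.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM step mirroring the per-layer equations: gates i, f,
    cell update c, output gate o (with a peephole on c), and hidden h."""
    i = sigmoid(w["xi"] * x + w["hi"] * h_prev + w["ci"] * c_prev + w["bi"])
    f = sigmoid(w["xf"] * x + w["hf"] * h_prev + w["cf"] * c_prev + w["bf"])
    c = f * c_prev + i * math.tanh(w["xc"] * x + w["hc"] * h_prev + w["bc"])
    o = sigmoid(w["xo"] * x + w["ho"] * h_prev + w["co"] * c + w["bo"])
    h = o * math.tanh(c)
    return h, c

keys = ["xi", "hi", "ci", "bi", "xf", "hf", "cf", "bf",
        "xc", "hc", "bc", "xo", "ho", "co", "bo"]
w = {k: 0.5 for k in keys}           # arbitrary illustrative weights
h, c = lstm_step(1.0, 0.0, 0.0, w)   # first step from a zero state
```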
score the embedding of the document *r* is computed as the weighted sum of the token embeddings.
The model is completed with the definition of the joint document and query embedding via a nonlinear combination:
$$g^{\text{AR}}(d,q) = \tanh\big(W_{rg}\,r + W_{ug}\,u\big)$$
**The Impatient Reader** The Attentive Rea... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**3 Models**",
"Header 4": null
} | {
"chunk_type": "body"
} |
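The Attentive Reader's combination step can be sketched with scalars standing in for the embeddings $r$ and $u$ and for the matrices $W_{rg}$ and $W_{ug}$. The attention weights below are assumed to be given; in the model they come from a learned scoring function, so this is only a sketch of the final combination.

```python
import math

def attentive_reader_embed(token_embs, weights, u, w_rg, w_ug):
    """Weighted document embedding r followed by the joint
    document/query embedding tanh(W_rg r + W_ug u), scalar case."""
    total = sum(weights)
    norm = [a / total for a in weights]               # attention sums to 1
    r = sum(a * e for a, e in zip(norm, token_embs))  # weighted token sum
    return math.tanh(w_rg * r + w_ug * u)

g = attentive_reader_embed([0.2, -0.4, 0.9], [1.0, 2.0, 1.0],
                           u=0.3, w_rg=1.0, w_ug=1.0)
```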
### **4 Empirical Evaluation**
Having described a number of models in the previous section, we next evaluate these models on our
reading comprehension corpora. Our hypothesis is that neural models should in principle be well
suited for this task. However, we argued that simple recurrent models such as the LSTM probab... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
} | {
"chunk_type": "body"
} |
| Model | CNN valid | CNN test | Daily Mail valid | Daily Mail test |
|---|---|---|---|---|
| Maximum frequency | 26.3 | 27.9 | 22.5 | 22.7 |
| Exclusive frequency | 30.8 | 32.6 | 27.3 | 27.7 |
| Frame-semantic model | 32.2 | 33.0 | 30.7 | 31.1 |
| Word distance model | 46.2 | 46.9 | 55.6 | 54.8 |
| Deep LSTM Reader | 49.0 | 49.9 | 57.1 | 57.3 |
| Uniform attention | 31.1 | 33.6 | 31.0 | 31.7 |
| Attentive Reader | 56.5 | 58.9 | 64.5 | 63.7 |
| Impatient Reader | **57.0** | **60.6** | **64.8**... | |
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
} | {
"chunk_type": "body"
} |
*Scooter Brown* ” has the phrase “ *... turns out he is good friends with Scooter Brown, manager for*
*Carly Rae Jepson* ” in the context. The word distance benchmark correctly aligns these two while
the frame-semantic approach fails to pick up the friendship or management relations when parsing
the query. We expect tha... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
} | {
"chunk_type": "body"
} |
queries (the correct answers are *ent23* and *ent63*, respectively). Both examples require significant
lexical generalisation and co-reference resolution in order to be answered correctly by a given model.
accuracy of close to 60% which, according to our analysis in Table 2, is the proportion of questions
answerable ... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**4 Empirical Evaluation**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **5 Conclusion**
The supervised paradigm for training machine reading and comprehension models provides a
promising avenue for making progress on the path to building full natural language understanding
systems. We have demonstrated a methodology for obtaining a large number of document-queryanswer triples and sh... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**5 Conclusion**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **References**
[1] Ellen Riloff and Michael Thelen. A rule-based question answering system for reading comprehension tests. In *Proceedings of the 2000 ANLP/NAACL Workshop on Reading Compre-*
*hension Tests As Evaluation for Computer-based Language Understanding Systems - Volume 6*,
ANLP/NAACL-ReadingComp ’00, pag... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**References**",
"Header 4": null
} | {
"chunk_type": "references"
} |
*ings of ACL*, pages 565–574, 2010.
[12] Wilson L Taylor. “Cloze procedure”: a new tool for measuring readability. *Journalism Quar-*
*terly*, 30:415–433, 1953.
[13] Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. Frame-semantic parsing. *Computational Linguistics*, 40(1):9–56, 2...
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**References**",
"Header 4": null
} | {
"chunk_type": "references"
} |
### **A Additional Heatmap Analysis**
We expand on the analysis of the attention mechanism presented in the paper by including visualisations for additional queries from the CNN validation dataset below. We consider examples from
the Attentive Reader as well as the Impatient Reader in this appendix.
**A.1** **Atten... | {
"id": "1506.03340",
"title": "Teaching Machines to Read and Comprehend",
"categories": [
"cs.CL",
"cs.AI",
"cs.NE"
]
} | {
"Header 1": "queries 438,148 3,851 3,446 837,944 60,798 55,077",
"Header 2": null,
"Header 3": "**A Additional Heatmap Analysis**",
"Header 4": null
} | {
"chunk_type": "body"
} |
## **Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**
**David Duvenaud**†**, Dougal Maclaurin**†**, Jorge Aguilera-Iparraguirre**
**Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, Ryan P. Adams**
Harvard University | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "title"
} |
### **Abstract**
Predicting properties of molecules requires functions that take graphs as inputs.
Molecular graphs are usually preprocessed using hash-based functions to produce
fixed-size fingerprint vectors, which are used as features for making predictions.
We introduce a convolutional neural network that operate... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**Abstract**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **1 Introduction**
Recent work in materials design has applied neural networks to virtual screening, where the task is
to predict the properties of novel molecules by generalizing from examples. One difficulty with this
task is that the input to the predictor, a molecule, can be of arbitrary size and shape. Most ... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**1 Introduction**",
"Header 4": null
} | {
"chunk_type": "body"
} |
representation more meaningful.
*†* Equal contribution.
Figure 1: *Left* : A visual representation of the computational graph of both standard circular fingerprints and neural graph fingerprints. First, at bo... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**1 Introduction**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **2 Circular fingerprints**
The state of the art in molecular fingerprints is extended-connectivity circular fingerprints
(ECFP) [21]. Circular fingerprints [6] are a refinement of the Morgan algorithm [17], designed to
identify which substructures are present in a molecule in a way that is invariant to atom-rel... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**2 Circular fingerprints**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **3 Creating a differentiable fingerprint**
The space of possible network architectures is large. In the spirit of starting from a known-good
configuration, we chose an architecture analogous to existing fingerprints. This section describes
our replacement of each discrete operation in circular fingerprints with ... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**3 Creating a differentiable fingerprint**",
"Header 4": null
} | {
"chunk_type": "body"
} |
12: **Return:** binary vector **f**
**Algorithm 2** Neural graph fingerprints
1: **Input:** molecule, radius $R$, hidden weights $H_1^1 \ldots H_R^5$, output weights $W_1 \ldots W_R$
2: **Initialize:** fingerprint vector $\mathbf{f} \leftarrow \mathbf{0}_S$
3: **for** each atom *a* in molec... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**3 Creating a differentiable fingerprint**",
"Header 4": null
} | {
"chunk_type": "body"
} |
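Algorithm 2 can be sketched end-to-end in plain Python: at each radius, every atom softmax-writes its current feature into a fixed-length fingerprint, then pools its neighbours' features through a smooth nonlinearity. Random scalars stand in for the learned hidden weights $H$ and output weights $W$, and the scalar atom-feature and adjacency encodings are invented for the sketch; the real model operates on feature vectors and matrices.

```python
import math
import random

def neural_fingerprint(adjacency, features, fp_len, radius, seed=1):
    """Toy version of Algorithm 2 with scalar atom features. Each
    softmax write adds a probability distribution (summing to 1) to f."""
    rng = random.Random(seed)
    # Random projections standing in for learned output weights W_r.
    W = [[rng.uniform(-1.0, 1.0) for _ in range(fp_len)]
         for _ in range(radius + 1)]
    f = [0.0] * fp_len
    h = list(features)
    for r in range(radius + 1):
        for v in h:
            # Softmax write of this atom's feature into the fingerprint.
            scores = [W[r][s] * v for s in range(fp_len)]
            m = max(scores)
            exps = [math.exp(s - m) for s in scores]
            z = sum(exps)
            for s in range(fp_len):
                f[s] += exps[s] / z
        # Message passing: each atom pools its neighbours, then squashes.
        h = [math.tanh(v + sum(h[n] for n in adjacency[a]))
             for a, v in enumerate(h)]
    return f

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # a toy triangle "molecule"
fp = neural_fingerprint(adj, [0.1, -0.2, 0.3], fp_len=4, radius=2)
```

Since every write adds a distribution summing to one, the fingerprint entries total (number of atoms) × (radius + 1); the learned version differs only in that the projections are trained rather than random.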
### **4 Experiments**
Circular fingerprints can be interpreted as a special case of neural graph fingerprints having large
random weights. This is reasonable to expect, since in the limit of large input weights, tanh
nonlinearities approach step functions, which when concatenated resemble a hash function. Also,
in th... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
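The claim above, that circular fingerprints correspond to neural fingerprints with large random weights, rests on tanh saturating toward a step function as the input weights grow. That limit is easy to check numerically; the inputs below are arbitrary nonzero values.

```python
import math

def step(x):
    # Hard threshold that a saturated tanh approximates.
    return 1.0 if x > 0 else -1.0

# As the weight w grows, tanh(w * x) approaches the step function, so the
# smooth pooling degenerates into a hash-like binary code.
xs = [-0.9, -0.3, 0.2, 0.8]
gap_small = max(abs(math.tanh(1.0 * x) - step(x)) for x in xs)
gap_large = max(abs(math.tanh(50.0 * x) - step(x)) for x in xs)
```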
Figure 3: *Left:* Comparison of pairwise distances between molecules, measured using circular fingerprints and neural graph fingerprints with large random weights. *Right* : Predictive performance
of circular fingerprints (red), neural graph fingerprints with fixed large random weights (green) and
neural graph fi... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
predictive of toxicity.
[Figure panels: fragments most activated by the toxicity feature on the SR-MMP dataset, and fragments most activated by the toxicity feature on the NR-AHR dataset.]
Figure 5: Visual... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
or aromatic, whether the bond was conjugated, and whether the bond was part of a ring.
**Training and Architecture** Training used batch normalization [11]. We also experimented with
tanh vs relu activation functions for both the neural fingerprint network layers and the fully-connected network layers. relu had a slig...
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
as measured by [5].
*•* **Organic photovoltaic efficiency:** The Harvard Clean Energy Project [8] uses expensive
DFT simulations to estimate the photovoltaic efficiency of organic molecules. We used a
subset of 20,000 molecules from this dataset.
**Predictive accuracy** We compared the performance of circular finge... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**4 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **5 Limitations**
**Computational cost** Neural fingerprints have the same asymptotic complexity in the number of
atoms and the depth of the network as circular fingerprints, but have additional terms due to the
matrix multiplies necessary to transform the feature vector at each step. To be precise, computing
the... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**5 Limitations**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **6 Related work**
This work is similar in spirit to the neural Turing machine [7], in the sense that we take an existing
discrete computational architecture, and make each part differentiable in order to do gradient-based
optimization.
**Convolutional neural networks** Convolutional neural networks have been u... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**6 Related work**",
"Header 4": null
} | {
"chunk_type": "body"
} |
input is a different graph.
**Neural networks on input-dependent graphs** [22] propose a neural network model for graphs
having an interesting training procedure. The forward pass consists of running a message-passing
scheme to equilibrium, which allows the reverse-mode gradient to be computed without storing
... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**6 Related work**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **7 Conclusion**
We generalized existing hand-crafted molecular features to allow their optimization for diverse tasks.
By making each operation in the feature pipeline differentiable, we can use standard neural-network
training methods to scalably optimize the parameters of these neural molecular fingerprints en... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**7 Conclusion**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **References**
[1] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud
Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. In *Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop*, 2012.
[2] Joan Bruna, Wojciech Zaremba,... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**References**",
"Header 4": null
} | {
"chunk_type": "references"
} |
by reducing internal covariate shift. *arXiv preprint arXiv:1502.03167*, 2015.
[12] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network
for modelling sentences. *Proceedings of the 52nd Annual Meeting of the Association for Com-*
*putational Linguistics*, June 2014.
[13] Diederik... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**References**",
"Header 4": null
} | {
"chunk_type": "references"
} |
*ceedings of the Conference on Empirical Methods in Natural Language Processing*, pages
151–161. Association for Computational Linguistics, 2011.
[25] Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic
representations from tree-structured long short-term memory networks. *arXiv preprint*
*arX... | {
"id": "1509.09292",
"title": "Convolutional Networks on Graphs for Learning Molecular Fingerprints",
"categories": [
"cs.LG",
"cs.NE",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**Convolutional Networks on Graphs** **for Learning Molecular Fingerprints**",
"Header 3": "**References**",
"Header 4": null
} | {
"chunk_type": "references"
} |
## A Deep Reinforced Model for Abstractive Summarization
### Romain Paulus, Caiming Xiong and Richard Socher {rpaulus,cxiong,rsocher}@salesforce.com | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "Romain Paulus, Caiming Xiong and Richard Socher {rpaulus,cxiong,rsocher}@salesforce.com",
"Header 4": null
} | {
"chunk_type": "title"
} |
### Abstract
Attentional, RNN-based encoder-decoder
models for abstractive summarization
have achieved good performance on short
input and output sequences. However, for
longer documents and summaries, these
models often include repetitive and incoherent phrases. We introduce a neural
network model with intra-att... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "Abstract",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 1 Introduction
Text summarization is the process of automatically generating natural language summaries from
an input document while retaining the important
points.
By condensing large quantities of information
into short, informative summaries, summarization
can aid many downstream applications such as
creatin... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "1 Introduction",
"Header 4": null
} | {
"chunk_type": "body"
} |
intra-attention model in the decoder takes into account which words have already been generated
by the decoder. (ii) we propose a new objective
function by combining the maximum-likelihood
cross-entropy loss used in prior work with rewards
from policy gradient reinforcement learning to reduce exposure bias. We show th... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "1 Introduction",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 2 Neural Intra-attention Model
In this section, we present our intra-attention
model based on the encoder-decoder network
(Sutskever et al., 2014). In all our equations, $x = \{x_1, x_2, \ldots, x_n\}$ represents the sequence of input (article) tokens and $y = \{y_1, y_2, \ldots, y_{n'}\}$ the sequence of output (summary) ...
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
} | {
"chunk_type": "body"
} |
Finally, we compute the normalized attention scores $\alpha^e_{ti}$ across the inputs and use these weights to obtain the input context vector $c^e_t$:

$$\alpha^e_{ti} = \frac{e'_{ti}}{\sum_{j=1}^{n} e'_{tj}} \qquad (4)$$
2.2 Intra-decoder attention
While this intra-temporal attention function ensures that different parts of the encoded inp... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
} | {
"chunk_type": "body"
} |
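Equation 4 is a plain normalization of the (already exponentiated) temporal attention scores, followed by a weighted sum of the encoder states for the context vector. A scalar sketch with made-up scores and encoder states:

```python
import math

def temporal_attention_weights(scores):
    """Normalize the temporal scores e'_ti across input positions
    (equation 4); the result sums to 1."""
    total = sum(scores)
    return [s / total for s in scores]

def context_vector(weights, enc_states):
    """Input context vector c^e_t as the weighted sum of encoder states."""
    return sum(a * h for a, h in zip(weights, enc_states))

alphas = temporal_attention_weights([math.exp(0.5), math.exp(1.0), math.exp(-0.2)])
c = context_vector(alphas, [0.1, 0.4, -0.3])
```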
$y_1, \ldots, y_{t-1}, x$, even when not explicitly stated.
Our token-generation layer generates the following probability distribution:

$$p(y_t \mid u_t = 0) = \operatorname{softmax}\big(W_{\text{out}}\,[h^d_t \,\|\, c^e_t \,\|\, c^d_t] + b_{\text{out}}\big) \qquad (9)$$

On the other hand, the pointer mechanism uses
the temporal attention weights $\alpha^e_{ti}$ as t...
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
} | {
"chunk_type": "body"
} |
never contain the same trigram twice. Based on
this observation, we force our decoder to never
output the same trigram more than once during
testing. We do this by setting p(y t ) = 0 during
beam search, when outputting y t would create a
trigram that already exists in the previously decoded sequence of the current bea... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "2 Neural Intra-attention Model",
"Header 4": null
} | {
"chunk_type": "body"
} |
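The hard trigram constraint described above can be sketched as a check applied to each beam candidate: if appending the token would recreate a trigram already present in the decoded prefix, the decoder sets that token's probability to zero. The helper below is illustrative, not the authors' decoder.

```python
def blocks_repeat_trigram(prefix, candidate):
    """Return True if appending `candidate` to the decoded prefix would
    repeat a trigram, in which case beam search sets p(y_t) = 0."""
    if len(prefix) < 2:
        return False
    new_tri = tuple(prefix[-2:]) + (candidate,)
    seen = {tuple(prefix[i:i + 3]) for i in range(len(prefix) - 2)}
    return new_tri in seen

prefix = "the cat sat on the cat".split()
blocked = blocks_repeat_trigram(prefix, "sat")    # would repeat "the cat sat"
allowed = blocks_repeat_trigram(prefix, "slept")  # introduces a new trigram
```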
### 3 Hybrid Learning Objective
In this section, we explore different ways of training our encoder-decoder model. In particular, we
propose reinforcement learning-based algorithms
and their application to our summarization task.
3.1 Supervised learning with teacher forcing
The most widely used method to t... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "3 Hybrid Learning Objective",
"Header 4": null
} | {
"chunk_type": "body"
} |
{y 1, . . ., y t−1 } and the input sequence x, we hypothesize that it can assist our policy learning algorithm to generate more natural summaries. This
motivates us to define a mixed learning objective
function that combines equations 14 and 15:
$$L_{\text{mixed}} = \gamma L_{\text{rl}} + (1-\gamma)L_{\text{ml}} \qquad (16)$$
where γ is a scaling factor accoun... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "3 Hybrid Learning Objective",
"Header 4": null
} | {
"chunk_type": "body"
} |
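A minimal numeric sketch of the mixed objective in equation 16, paired with a self-critical policy-gradient term of the kind the paper builds on; the function names and scalar interface are assumptions for illustration:

```python
def rl_loss(sample_logprobs, reward_sample, reward_baseline):
    """Self-critical policy-gradient loss: sampled sequences that beat
    the baseline (reward_sample > reward_baseline) get their
    log-likelihood increased, and vice versa."""
    return (reward_baseline - reward_sample) * sum(sample_logprobs)


def mixed_loss(l_rl, l_ml, gamma):
    """Equation 16: L_mixed = gamma * L_rl + (1 - gamma) * L_ml."""
    return gamma * l_rl + (1.0 - gamma) * l_ml
```

In practice γ is chosen close to 1, so the ML term acts as a small language-model regularizer on top of the RL objective.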
### 4 Related Work
4.1 Neural encoder-decoder sequence models
Neural encoder-decoder models are widely used
in NLP applications such as machine translation (Sutskever et al., 2014), summarization
(Chopra et al., 2016; Nallapati et al., 2016), and
question answering (Hermann et al., 2015). These
models use recurrent... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "4 Related Work",
"Header 4": null
} | {
"chunk_type": "body"
} |
tasks, leading to significant improvements compared to previous supervised learning methods.
While their method requires an additional neural
network, called a critic model, to predict the expected reward and stabilize the objective function
gradients, Rennie et al. (2016) designed a self-critical sequence training meth... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "4 Related Work",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 5 Datasets
5.1 CNN/Daily Mail
We evaluate our model on a modified version
of the CNN/Daily Mail dataset (Hermann et al.,
2015), following the same pre-processing steps
described in Nallapati et al. (2016). We refer the
reader to that paper for a detailed description. The
final dataset contains 286,817 training ... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "5 Datasets",
"Header 4": null
} | {
"chunk_type": "body"
} |
dataset. These pre-processing steps give us an average of 549 input tokens and 40 output tokens
per example, after limiting the input and output
lengths to 800 and 100 tokens.
Pointer supervision: We run each input and abstract sequence through the Stanford named entity recognizer (NER) (Manning et al., 2014). For
al... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "5 Datasets",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 6 Results
6.1 Experiments
Setup: We evaluate the intra-decoder attention mechanism and the mixed-objective learning by running the following experiments on both
datasets. We first run maximum-likelihood (ML)
training with and without intra-decoder attention
(removing c^d_t from Equations 9 and 11 to ... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
} | {
"chunk_type": "body"
} |
capped a miserable weekend for the Briton; his time in Bahrain plagued by reliability issues. Button spent much of the
race on Twitter delivering his verdict as the action unfolded. ’Kimi is the man to watch,’ and ’loving the sparks’, were
among his pearls of wisdom, but the tweet which courted the most attention was a... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
} | {
"chunk_type": "body"
} |
speaks to Nico Rosberg ahead of the Bahrain Grand Prix
Ground truth summary
Button denied 100th race start for McLaren after ERS failure. Button then spent much of the Bahrain Grand Prix
on Twitter delivering his verdict on the action as it unfolded. Lewis Hamilton has out-qualified and finished ahead
of Mercedes team-... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
} | {
"chunk_type": "body"
} |
ROUGE-L score as a reinforcement reward. We
also tried ROUGE-2 but we found that it created
summaries that almost always reached the maximum length, often ending sentences abruptly.
6.2 Quantitative analysis
Our results for the CNN/Daily Mail dataset are
shown in Table 1, and for the NYT dataset in Table 2. We... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
} | {
"chunk_type": "body"
} |
example of Table 3, is the presence of short and
truncated sentences towards the end of sequences.
This confirms that optimizing for a single discrete
evaluation metric such as ROUGE with RL can be
detrimental to the model quality.
On the other hand, our RL+ML summaries obtain the highest readability scores among our... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "6 Results",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 7 Conclusion
We presented a new model and training procedure
that obtains state-of-the-art results in text summarization for the CNN/Daily Mail, improves the
readability of the generated summaries and is better suited to long output sequences. We also run
our abstractive model on the NYT dataset for the
first ... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "7 Conclusion",
"Header 4": null
} | {
"chunk_type": "body"
} |
### References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly
learning to align and translate. arXiv preprint
arXiv:1409.0473 .
Jianpeng Cheng, Li Dong, and Mirella Lapata. 2016.
Long short-term memory-networks for machine
reading. arXiv preprint arXiv:1601.06733 . ... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "References",
"Header 4": null
} | {
"chunk_type": "references"
} |
arXiv:1412.6980 .
Junyi Jessy Li, Kapil Thadani, and Amanda Stent.
2016. The role of discourse units in near-extractive
summarization. In 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue. page
137.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summari... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "References",
"Header 4": null
} | {
"chunk_type": "references"
} |
Jeffrey Pennington, Richard Socher, and Christopher D
Manning. 2014. Glove: Global vectors for word
representation. In EMNLP. volume 14, pages 1532–
1543.
Ofir Press and Lior Wolf. 2016. Using the output
embedding to improve language models. arXiv
preprint arXiv:1608.05859 .
Marc’Aurelio Ranzato, Sumit Chop... | {
"id": "1705.04304",
"title": "A Deep Reinforced Model for Abstractive Summarization",
"categories": [
"cs.CL"
]
} | {
"Header 1": null,
"Header 2": "A Deep Reinforced Model for Abstractive Summarization",
"Header 3": "References",
"Header 4": null
} | {
"chunk_type": "references"
} |
Journal of Machine Learning Research 12 (2011) 2825-2830 Submitted 3/11; Revised 8/11; Published 10/11 | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": null,
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "title"
} |
## Scikit-learn: Machine Learning in Python
Fabian Pedregosa fabian.pedregosa@inria.fr
Gaël Varoquaux gael.varoquaux@normalesup.org
Alexandre Gramfort alexandre.gramfort@inria.fr
Vincent Michel vincent.michel@logilab.fr
Bertrand Thirion bertrand.thirion@inria.fr
Parietal, INRIA Saclay
Neurospin, Bât 145, CEA ... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "body"
} |
### Abstract
Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package
focuses on bringing machine learning to non-specialists using a general-purpose high-level
language. Emphasis is put on ease of use,... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "Abstract",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 1. Introduction
The Python programming language is establishing itself as one of the most popular languages for scientific computing. Thanks to its high-level interactive nature and its maturing
ecosystem of scientific libraries, it is an appealing choice for algorithmic development and
exploratory data analysis ... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "1. Introduction",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 2. Project Vision
Code quality. Rather than providing as many features as possible, the project’s goal has
been to provide solid implementations. Code quality is ensured with unit tests—as of release
0.8, test coverage is 81%—and the use of static analysis tools such as pyflakes and pep8.
Finally, we strive to us... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "2. Project Vision",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 3. Underlying Technologies
Numpy: the base data structure used for data and model parameters. Input data is presented as numpy arrays, thus integrating seamlessly with other scientific Python libraries.
Numpy’s view-based memory model limits copies, even when binding with compiled code
(Van der Walt et al., 2011)... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "3. Underlying Technologies",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 4. Code Design
Objects specified by interface, not by inheritance. To facilitate the use of external objects
with scikit-learn, inheritance is not enforced; instead, code conventions provide a consistent
interface. The central object is an estimator that implements a fit method, accepting as
arguments an input d... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "4. Code Design",
"Header 4": null
} | {
"chunk_type": "body"
} |
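The estimator convention described in the row above — hyperparameters in the constructor, a `fit` method that learns from data, duck typing in place of enforced inheritance — can be illustrated with a toy estimator (not part of scikit-learn itself):

```python
class MeanRegressor:
    """Toy estimator following the scikit-learn interface convention."""

    def __init__(self, shrinkage=0.0):
        # Hyperparameters are set in the constructor, never in fit.
        self.shrinkage = shrinkage

    def get_params(self, deep=True):
        return {"shrinkage": self.shrinkage}

    def fit(self, X, y):
        # Attributes learned from data carry a trailing underscore.
        self.mean_ = (1.0 - self.shrinkage) * sum(y) / len(y)
        return self  # returning self enables est.fit(X, y).predict(X)

    def predict(self, X):
        return [self.mean_ for _ in X]
```

Nothing here inherits from a scikit-learn base class; any object exposing this interface can be used wherever an estimator is expected.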
transparently as any other estimator. Cross validation can be made more efficient for certain
estimators by exploiting specific properties, such as warm restarts or regularization paths
(Friedman et al., 2010). This is supported through special objects, such as the LassoCV.
Finally, a Pipeline object can combine severa... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "4. Code Design",
"Header 4": null
} | {
"chunk_type": "body"
} |
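The chaining behaviour the row above attributes to the Pipeline object can be sketched with a pure-Python stand-in (the real class lives in `sklearn.pipeline`; the toy transformer and estimator below are illustrative):

```python
class Scaler:
    """Toy transformer: centers 1-D samples on the training mean."""

    def fit(self, X, y=None):
        self.mean_ = sum(X) / len(X)
        return self

    def transform(self, X):
        return [x - self.mean_ for x in X]


class MeanModel:
    """Toy final estimator: predicts the mean of the training targets."""

    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]


class Pipeline:
    """Chains transformers, then hands the result to a final estimator,
    so the whole chain behaves like a single estimator."""

    def __init__(self, steps):
        self.steps = steps

    def fit(self, X, y):
        for step in self.steps[:-1]:
            X = step.fit(X, y).transform(X)
        self.steps[-1].fit(X, y)
        return self

    def predict(self, X):
        for step in self.steps[:-1]:
            X = step.transform(X)
        return self.steps[-1].predict(X)
```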
### 5. High-level yet Efficient: Some Trade Offs
While scikit-learn focuses on ease of use, and is mostly written in a high-level language, care
has been taken to maximize computational efficiency. In Table 1, we compare computation
time for a few algorithms implemented in the major machine learning toolkits accessib... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "5. High-level yet Efficient: Some Trade Offs",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 6. Conclusion
Scikit-learn exposes a wide variety of machine learning algorithms, both supervised and
unsupervised, using a consistent, task-oriented interface, thus enabling easy comparison
of methods for a given application. Since it relies on the scientific Python ecosystem, it
can easily be integrated into ap... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "6. Conclusion",
"Header 4": null
} | {
"chunk_type": "body"
} |
### References
D. Albanese, S. Merler, G. Jurman, and R. Visintainer. MLPy: high-performance
Python package for predictive modeling. In NIPS, MLOSS workshop, 2008.
C.C. Chang and C.J. Lin. LIBSVM: a library for support vector machines.
[http://www.csie.ntu.edu.tw/cjlin/libsvm, 2001.](http://www.csie.ntu.edu.tw/c... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "References",
"Header 4": null
} | {
"chunk_type": "references"
} |
C. Gehl, and V. Franc. The SHOGUN Machine Learning Toolbox. Journal of Machine
Learning Research, 11:1799–1802, 2010.
S. Van der Walt, S. C. Colbert, and G. Varoquaux. The NumPy array: a structure for
efficient numerical computation. Computing in Science and Engineering, 11, 2011.
T. Zito, N. Wilbert, L. Wiskott, and... | {
"id": "1201.0490",
"title": "Scikit-learn: Machine Learning in Python",
"categories": [
"cs.LG",
"cs.MS"
]
} | {
"Header 1": null,
"Header 2": "Scikit-learn: Machine Learning in Python",
"Header 3": "References",
"Header 4": null
} | {
"chunk_type": "references"
} |
## Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent
##### Feng Niu, Benjamin Recht, Christopher Ré and Stephen J. Wright Computer Sciences Department, University of Wisconsin-Madison 1210 W Dayton St, Madison, WI 53706 June 2011
**Abstract**
Stochastic Gradient Descent (SGD) is a popular... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": null,
"Header 4": null
} | {
"chunk_type": "title"
} |
### **1 Introduction**
With its small memory footprint, robustness against noise, and rapid learning rates, Stochastic Gradient Descent (SGD) has proved to be well suited to data-intensive machine learning tasks [3,5,27].
However, SGD’s scalability is limited by its inherently sequential nature; it is difficult to pa... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**1 Introduction**",
"Header 4": null
} | {
"chunk_type": "body"
} |
in parallel computation to synchronization (or locking) amongst the processors [2, 13]. Thus, to
enable scalable data analysis on a multicore machine, any performant solution must minimize the
overhead of locking.
In this work, we propose a simple strategy for eliminating the overhead associated with locking:
*run SGD ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**1 Introduction**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **2 Sparse Separable Cost Functions**
Our goal throughout is to minimize a function f : X ⊆ R^n → R of the form
f(x) = Σ_{e∈E} f_e(x_e). (2.1)
Here e denotes a small subset of {1, . . ., n} and x_e denotes the values of the vector x on the
coordinates indexed by ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**2 Sparse Separable Cost Functions**",
"Header 4": null
} | {
"chunk_type": "body"
} |
#### (a) (b) (c)
**Figure 1:** Example graphs induced by cost function. (a) A sparse SVM induces a hypergraph
where each hyperedge corresponds to one example. (b) A matrix completion example induces
a bipartite graph between the rows and columns with an edge between two nodes if an entry is
revealed. (c) The induced ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**2 Sparse Separable Cost Functions**",
"Header 4": "(a) (b) (c)"
} | {
"chunk_type": "body"
} |
where L is n_r × r, R is n_c × r and L_u (resp. R_v) denotes the u-th (resp. v-th) row of L (resp. R)
[18,25,28]. To put this problem in sparse form, i.e., as (2.1), we write (2.4) as
minimize_{(L,R)} Σ (L_u R_v^* − ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**2 Sparse Separable Cost Functions**",
"Header 4": "(a) (b) (c)"
} | {
"chunk_type": "body"
} |
Ω := max_{e∈E} |e|, ρ := max_{e∈E} |{ê ∈ E : ê ∩ e ≠ ∅}| / |E|, Δ := max_{v} |{e ∈ E : v ∈ e}| / |E|.
The quantity Ω simply quantifies the size of the hyper edges. ρ determines the maximum fraction of
edges that intersect any given edge. Δ determines the maximum fraction of edges that intersect any
variable. ρ is a measure of the sparsity of the hypergraph, while Δ measures the node-... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**2 Sparse Separable Cost Functions**",
"Header 4": "(a) (b) (c)"
} | {
"chunk_type": "body"
} |
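The sparsity statistics described in the row above can be computed directly from their verbal definitions: ρ is the maximum fraction of edges intersecting any given edge, and Δ is the maximum fraction of edges touching any given variable. A small helper, with names of my own choosing:

```python
def hypergraph_stats(edges, n):
    """Compute (rho, delta) for a hypergraph on variables 0..n-1.

    rho:   max over edges e of the fraction of edges sharing a variable
           with e (including e itself).
    delta: max over variables v of the fraction of edges containing v.
    """
    E = [set(e) for e in edges]
    m = len(E)
    rho = max(sum(1 for f in E if e & f) / m for e in E)
    delta = max(sum(1 for e in E if v in e) / m for v in range(n))
    return rho, delta
```

For the sparse problems in the paper's experiments both quantities are tiny (e.g. around 1e-7 for the "Jumbo" matrix), which is what makes lock-free updates rarely collide.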
### **3 The Hogwild! Algorithm**
Here we discuss the parallel processing setup. We assume a shared memory model with *p* processors. The decision variable *x* is accessible to all processors. Each processor can read *x*, and can
contribute an update vector to *x* . The vector *x* is stored in shared memory, and we as... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**3 The Hogwild! Algorithm**",
"Header 4": null
} | {
"chunk_type": "body"
} |
of the uniform random sampling of e from E, we have
E[G_e(x_e)] ∈ ∂f(x).
In Algorithm 1, each processor samples a term e ∈ E uniformly at random, computes the
gradient of f_e at x_e, and then writes
x_v ← x_v − γ b_v^T G_e(x), for... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**3 The Hogwild! Algorithm**",
"Header 4": null
} | {
"chunk_type": "body"
} |
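The per-edge update in the row above writes only to the coordinates x_v with v ∈ e, which is why writes rarely collide when the cost is sparse. A serial sketch of that access pattern (the lock-free multi-threaded execution is the paper's actual contribution; here a single process samples edges, and the helper names are illustrative):

```python
import random


def sparse_sgd(x, edges, grad_e, gamma=0.1, steps=2000, seed=0):
    """Serial sketch of Algorithm 1's update rule.

    grad_e(e, x) returns the subgradient of f_e at x as a dict
    {v: g_v} that is nonzero only on the coordinates in e, so each
    write-back touches only x_v for v in e.
    """
    rng = random.Random(seed)
    for _ in range(steps):
        e = rng.choice(edges)              # sample a term uniformly
        for v, g_v in grad_e(e, x).items():
            x[v] -= gamma * g_v            # x_v <- x_v - gamma * g_v
    return x
```

For the separable cost f(x) = Σ_e Σ_{v∈e} (x_v − t_v)^2, each sampled edge pulls only its own coordinates toward the target t.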
### **4 Fast Rates for Lock-Free Parallelism**
We now turn to our theoretical analysis of Hogwild! protocols. To make the analysis tractable, we
assume that we update with the following “with replacement” procedure: each processor samples
an edge *e* uniformly at random and computes a subgradient of *f* *e* at the cu... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**4 Fast Rates for Lock-Free Parallelism**",
"Header 4": null
} | {
"chunk_type": "body"
} |
To state our theoretical results, we must describe several quantities that are important in the
analysis of our parallel stochastic gradient descent scheme. We follow the notation and assumptions
of Nemirovski *et al* [24]. To simplify the analysis, we will assume that each *f* *e* in (2.1) is a convex
function. We assume ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**4 Fast Rates for Lock-Free Parallelism**",
"Header 4": null
} | {
"chunk_type": "body"
} |
n^{1/4}, we get nearly the same recursion as in the linear rate.
We prove Proposition 4.1 in two steps in the Appendix. First, we demonstrate that the sequence
a_j = (1/2) E[∥x_j − x_⋆∥²] satisfies a recursion of the form a_j ≤ (1 − c_r... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**4 Fast Rates for Lock-Free Parallelism**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### 5 Robust 1 /k rates.
Suppose we run Algorithm 1 for a fixed number of gradient updates K with stepsize γ < 1/c. Then,
we wait for the threads to coalesce, reduce γ by a constant factor β ∈ (0, 1), and run for β^{−1} K
iterations. In some sense, this piecewise constant stepsize protocol app... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "5 Robust 1 /k rates.",
"Header 4": null
} | {
"chunk_type": "body"
} |
a constant factor after several iterations. This backoff scheme will have two phases: the first phase
will consist of converging to the ball about x_⋆ of squared radius less than 2B/c_r at an exponential
rate. Then we will converge to x_⋆ by shrinking the stepsize.
To calculate the number of iter... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "5 Robust 1 /k rates.",
"Header 4": null
} | {
"chunk_type": "body"
} |
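The piecewise-constant stepsize protocol described in the row above (run with a constant γ, wait for the threads to coalesce, multiply γ by β, repeat) can be sketched in serial form; the quadratic objective and parameter values below are illustrative:

```python
def sgd_with_backoff(grad, x0, gamma0, beta, rounds, iters):
    """Constant-stepsize rounds with geometric stepsize decay.

    After each round of `iters` updates, gamma shrinks by the factor
    beta in (0, 1).  (The protocol in the text also lengthens each
    round by a factor 1/beta; a fixed round length keeps this sketch
    short.)
    """
    x, gamma = x0, gamma0
    for _ in range(rounds):
        for _ in range(iters):
            x = x - gamma * grad(x)
        gamma *= beta
    return x
```

On f(x) = x²/2 (so grad(x) = x) the iterates contract toward the minimizer at 0 in every round, matching the exponential first phase described above.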
k ≥ ϑ^{−1} log(ϑB/ϵ) + (2 log(2/β))/(1 − β) · B/(c_r ϵ)
are sufficient to guarantee that a_k ≤ ϵ.
Rearranging terms, the following two expressions give ϵ in terms of all of the algorithm parameters:
ϵ ≤ (2 log(2/β))/(1 − β) · B/c_r · 1/(k − ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "5 Robust 1 /k rates.",
"Header 4": null
} | {
"chunk_type": "body"
} |
overestimates the curvature (corresponding to a small value of Θ), one can get arbitrarily slow
rates of convergence. A simple, one-dimensional example in [24] shows that Θ = 0.2 can yield a
convergence rate of k^{−1/5}. In our scheme, ϑ = 0.2 simply increases the number of iterations by a
fa... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "5 Robust 1 /k rates.",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **6 Related Work**
Most schemes for parallelizing stochastic gradient descent are variants of ideas presented in the
seminal text by Bertsekas and Tsitsiklis [4]. For instance, in this text, they describe using stale
gradient updates computed across many computers in a master-worker setting and describe settings
... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**6 Related Work**",
"Header 4": null
} | {
"chunk_type": "body"
} |
by several authors [10, 12]. While these methods do achieve linear speedups, they are difficult
to implement efficiently on multicore machines as they require massive communication overhead.
Distributed averaging of gradients requires message passing between the cores, and the cores need
to synchronize frequently in or... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**6 Related Work**",
"Header 4": null
} | {
"chunk_type": "body"
} |
### **7 Experiments**
We ran numerical experiments on a variety of machine learning tasks, and compared against a
round-robin approach proposed in [16] and implemented in Vowpal Wabbit [15]. We refer to this
approach as RR. To be as fair as possible to prior art, we hand coded RR to be nearly identical
to the Hogwild... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**7 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
All of the experiments use a constant stepsize *γ* which is diminished by a factor *β* at the end of
each pass over the training set. We run all experiments for 20 such passes, even though fewer epochs
are often sufficient for convergence. We show results for the largest value of the learning rate *γ*
which converges an... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**7 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
KDD Cup 2011 (task 2) data set has 624,961 rows, 1,000,990 columns and 252,800,275 revealed
entries. We also synthesized a low-rank matrix with rank 10, 1e7 rows and columns, and 2e9
revealed entries. We refer to this instance as “Jumbo.” In this synthetic example, *ρ* and ∆are
both around 1e-7. These values contrast ... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**7 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
maximum capacity 10. Both ρ and Δ are equal to 9.2e-4. We see that Hogwild! speeds up the cut
problem by more than a factor of 4 with 10 threads, while RR is twice as slow as the serial version.
Our second graph cut problem sought a multi-way cut to determine entity recognition in a
large database of web data. We creat... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**7 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |
**Figure 5:** (a) Speedup for the three matrix completion problems with Hogwild!. In all three
cases, massive speedup is achieved via parallelism. (b) The training error at the end of each
epoch of SVM training on RCV1 for the averaging algorithm [31]. (c) Speedup... | {
"id": "1106.5730",
"title": "HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient\n Descent",
"categories": [
"math.OC",
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent",
"Header 3": "**7 Experiments**",
"Header 4": null
} | {
"chunk_type": "body"
} |