• TEMPORAL
  – Asynchronous
  – Synchronous: precedence, succession
• CONTINGENCY
  – Cause: result, reason
  – Pragmatic cause: justification
  – Condition: hypothetical, general, unreal present, unreal past, real present, real past
  – Pragmatic condition: relevance, implicit assertion
• COMPARISON
  – Contrast: juxtaposition, opposition
  – Pragmatic contrast
  – Concession: expectation, contra-expectation
  – Pragmatic concession
• EXPANSION
  – Conjunction
  – Instantiation
  – Restatement: specification, equivalence, generalization
  – Alternative: conjunctive, disjunctive, chosen alternative
  – Exception
  – List

Table 16.1: The hierarchy of discourse relations in the Penn Discourse Treebank annotations (Prasad et al., 2008). For example, PRECEDENCE is a subtype of SYNCHRONOUS, which is a type of TEMPORAL relation.

Examples of Penn Discourse Treebank annotations are shown in Figure 16.4. In (16.4), the word therefore acts as an explicit discourse connective, linking the two adjacent units of text. The Treebank annotations also specify the "sense" of each relation, linking the connective to a relation in the sense inventory shown in Table 16.1: in (16.4), the relation is PRAGMATIC CAUSE:JUSTIFICATION because it relates to the author's communicative intentions. The word therefore can also signal causes in the external world (e.g., He was therefore forced to relinquish his plan). In discourse sense classification, the goal is to determine which discourse relation, if any, is expressed by each connective. A related task is the classification of implicit discourse relations, as in (16.5). In this example, the relationship between the adjacent sentences could be expressed by the connective because, indicating a CAUSE:REASON relationship.

Classifying explicit discourse relations and their arguments

As suggested by the examples above, many connectives can be used to invoke multiple types of discourse relations. Similarly, some connectives have senses that are unrelated to discourse: for example, and functions as a discourse connective when it links propositions, but not when it links noun phrases (Lin et al., 2014).
(16.4) . . . as this business of whaling has somehow come to be regarded among landsmen as a rather unpoetical and disreputable pursuit; therefore, I am all anxiety to convince ye, ye landsmen, of the injustice hereby done to us hunters of whales.

(16.5) But a few funds have taken other defensive steps. Some have raised their cash positions to record levels. Implicit = BECAUSE High cash positions help buffer a fund when the market falls.

(16.6) Michelle lives in a hotel room, and although she drives a canary-colored Porsche, she hasn't time to clean or repair it.

(16.7) Most oil companies, when they set exploration and production budgets for this year, forecast revenue of $15 for each barrel of crude produced.

Figure 16.4: Example annotations of discourse relations. In the style of the Penn Discourse Treebank, the discourse connective is underlined, the first argument is shown in italics, and the second argument is shown in bold. Examples (16.5-16.7) are quoted from Prasad et al. (2008).

Nonetheless, the senses of explicitly-marked discourse relations in the Penn Treebank are relatively easy to classify, at least at the coarse-grained level. When classifying the four top-level PDTB relations, 90% accuracy can be obtained simply by selecting the most common relation for each connective (Pitler and Nenkova, 2009). At the more fine-grained levels of the discourse relation hierarchy, connectives are more ambiguous. This fact is reflected both in the accuracy of automatic sense classification (Versley, 2011) and in interannotator agreement, which falls to 80% for level-3 discourse relations (Prasad et al., 2008).

A more challenging task for explicitly-marked discourse relations is to identify the scope of the arguments. Discourse connectives need not be adjacent to ARG1, as shown in (16.6), where ARG1 follows ARG2; furthermore, the arguments need not be contiguous, as shown in (16.7). For these reasons, recovering the arguments of each discourse connective is a challenging subtask. Because intra-sentential arguments are often syntactic constituents (see chapter 10), many approaches train a classifier to predict whether each constituent is an appropriate argument for each explicit discourse connective (e.g., Wellner and Pustejovsky, 2007; Lin et al., 2014).
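As a concrete illustration of the most-common-relation baseline mentioned above, the following sketch simply assigns each connective the sense it most frequently signals in training data. The annotated pairs shown here are invented for illustration; a real implementation would read (connective, sense) pairs from the PDTB annotations.

```python
from collections import Counter, defaultdict

def train_baseline(annotated):
    """annotated: iterable of (connective, relation) pairs from a treebank."""
    counts = defaultdict(Counter)
    for connective, relation in annotated:
        counts[connective.lower()][relation] += 1
    # map each connective to its most frequent relation
    return {c: relation_counts.most_common(1)[0][0]
            for c, relation_counts in counts.items()}

baseline = train_baseline([("but", "COMPARISON"), ("but", "COMPARISON"),
                           ("But", "EXPANSION"), ("because", "CONTINGENCY")])
print(baseline["but"])   # COMPARISON
```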
Classifying implicit discourse relations

Implicit discourse relations are considerably more difficult to classify and to annotate.⁴ Most approaches are based on an encoding of each argument, which is then used as input to a nonlinear classifier:

$z^{(i)} = \text{Encode}(w^{(i)})$   [16.7]
$z^{(i+1)} = \text{Encode}(w^{(i+1)})$   [16.8]
$\hat{y}_i = \underset{y}{\text{argmax}} \ \Psi(y, z^{(i)}, z^{(i+1)})$.   [16.9]

This basic framework can be instantiated in several ways, including both feature-based and neural encoders.

Feature-based approaches  Each argument can be encoded into a vector of surface features. The encoding typically includes lexical features (all words, or all content words, or a subset of words such as the first three and the main verb), Brown clusters of individual words (§ 14.4), and syntactic features such as terminal productions and dependency arcs (Pitler et al., 2009; Lin et al., 2009; Rutherford and Xue, 2014). The classification function then has two parts. First, it creates a joint feature vector by combining the encodings of each argument, typically by computing the cross-product of all features in each encoding:

$f(y, z^{(i)}, z^{(i+1)}) = \{(a \times b \times y) : (z^{(i)}_a, z^{(i+1)}_b)\}$   [16.10]

The size of this feature set grows with the square of the size of the vocabulary, so it can be helpful to select a subset of features that are especially useful on the training data (Park and Cardie, 2012). After f is computed, any classifier can be trained to compute the final score, $\Psi(y, z^{(i)}, z^{(i+1)}) = \theta \cdot f(y, z^{(i)}, z^{(i+1)})$.

Neural network approaches  In neural network architectures, the encoder is learned jointly with the classifier as an end-to-end model. Each argument can be encoded using a variety of neural architectures (surveyed in § 14.8): recursive (§ 10.6.1; Ji and Eisenstein, 2015), recurrent (§ 6.3; Ji et al., 2016), and convolutional (§ 3.4; Qin et al., 2017). The classification function can then be implemented as a feedforward neural network on the two encodings (chapter 3; for examples, see Rutherford et al., 2017; Qin et al., 2017), or as a simple bilinear product, $\Psi(y, z^{(i)}, z^{(i+1)}) = (z^{(i)})^\top \Theta_y z^{(i+1)}$ (Ji and Eisenstein, 2015). The encoding model can be trained by backpropagation from the classification objective, such as the margin loss. Rutherford et al. (2017) show that neural architectures outperform feature-based approaches in most settings. While neural approaches require engineering the network architecture (e.g., embedding size, number of hidden units in the classifier), feature-based approaches also require significant engineering to incorporate linguistic resources such as Brown clusters and parse trees, and to select a subset of relevant features.

⁴In the dataset for the 2015 shared task on shallow discourse parsing, the interannotator agreement was 91% for explicit discourse relations and 81% for implicit relations, across all levels of detail (Xue et al., 2015).
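To make this framework concrete, the sketch below instantiates Equation 16.9 with a stand-in averaging encoder and the bilinear scoring function. The embedding table, the relation label set, and the parameter tensor Theta are placeholders for illustration, not components of any particular published system.

```python
import numpy as np

def encode(tokens, embeddings, dim=50):
    """Average word embeddings for one argument (a stand-in for the
    recursive, recurrent, or convolutional encoders discussed above)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def score(y, z_i, z_j, Theta):
    """Bilinear compatibility of relation y with the two argument encodings."""
    return z_i @ Theta[y] @ z_j

def classify(arg1, arg2, embeddings, Theta, labels):
    """Return the highest-scoring relation label, as in Equation 16.9."""
    z_i = encode(arg1, embeddings)
    z_j = encode(arg2, embeddings)
    return max(labels, key=lambda y: score(y, z_i, z_j, Theta))
```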
16.3.2 Hierarchical discourse relations

In sentence parsing, adjacent phrases combine into larger constituents, ultimately producing a single constituent for the entire sentence. The resulting tree structure enables structured analysis of the sentence, with subtrees that represent syntactically coherent chunks of meaning. Rhetorical Structure Theory (RST) extends this style of hierarchical analysis to the discourse level (Mann and Thompson, 1988). The basic element of RST is the discourse unit, which refers to a contiguous span of text. Elementary discourse units (EDUs) are the atomic elements in this framework, and are typically (but not always) clauses.⁵ Each discourse relation combines two or more adjacent discourse units into a larger, composite discourse unit; this process ultimately unites the entire text into a tree-like structure.⁶

Nuclearity  In many discourse relations, one argument is primary. For example:

(16.8) [LaShawn loves animals]N [She has nine dogs and one pig]S

In this example, the second sentence provides EVIDENCE for the point made in the first sentence. The first sentence is thus the nucleus of the discourse relation, and the second sentence is the satellite. The notion of nuclearity is similar to the head-modifier structure of dependency parsing (see § 11.1.1). However, in RST, some relations have multiple nuclei. For example, the arguments of the CONTRAST relation are equally important:

(16.9) [The clash of ideologies survives this treatment]N [but the nuance and richness of Gorky's individual characters have vanished in the scuffle]N ⁷

Relations that have multiple nuclei are called coordinating; relations with a single nucleus are called subordinating. Subordinating relations are constrained to have only two arguments, while coordinating relations (such as CONJUNCTION) may have more than two.

⁵Details of discourse segmentation can be found in the RST annotation manual (Carlson and Marcu, 2001).
⁶While RST analyses are typically trees, this should not be taken as a strong theoretical commitment to the principle that all coherent discourses have a tree structure. Taboada and Mann (2006) write: "It is simply the case that trees are convenient, easy to represent, and easy to understand. There is, on the other hand, no theoretical reason to assume that trees are the only possible representation of discourse structure and of coherence relations." The appropriateness of tree structures to discourse has been challenged, e.g., by Wolf and Gibson (2005), who propose a more general graph-structured representation.
⁷From the RST Treebank (Carlson et al., 2002).
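To make the notions of discourse unit and nuclearity concrete, here is one possible in-memory representation; the class and field names are illustrative choices, not a standard API.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DiscourseUnit:
    """A (possibly composite) RST-style discourse unit."""
    text: str = ""                        # surface text, for elementary units
    relation: Optional[str] = None        # e.g. "EVIDENCE", "CONTRAST"
    children: List["DiscourseUnit"] = field(default_factory=list)
    nuclei: List[int] = field(default_factory=list)  # indices of nuclear children

    @property
    def is_elementary(self) -> bool:
        return not self.children

# (16.8) as a subordinating relation: one nucleus, one satellite.
edu1 = DiscourseUnit(text="LaShawn loves animals")
edu2 = DiscourseUnit(text="She has nine dogs and one pig")
evidence = DiscourseUnit(relation="EVIDENCE", children=[edu1, edu2], nuclei=[0])
```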
[Figure 16.5: A rhetorical structure theory analysis of a short movie review, adapted from Voll and Taboada (2007). The review is segmented into elementary discourse units, which are combined by CONCESSION, JUSTIFY, CONJUNCTION, and ELABORATION relations: [It could have been a great movie]1A [It does have beautiful scenery,]1B [some of the best since Lord of the Rings.]1C [The acting is well done,]1D [and I really liked the son of the leader of the Samurai.]1E [He was a likable chap,]1F [and I hated to see him die.]1G [But, other than all that, this movie is nothing more than hidden rip-offs.]1H Positive and negative sentiment words are underlined, indicating RST's potential utility in document-level sentiment analysis.]

RST Relations  Rhetorical structure theory features a large inventory of discourse relations, which are divided into two high-level groups: subject matter relations, and presentational relations. Presentational relations are organized around the intended beliefs of the reader. For example, in (16.8), the second discourse unit provides evidence intended to increase the reader's belief in the proposition expressed by the first discourse unit, that LaShawn loves animals. In contrast, subject-matter relations are meant to communicate additional facts about the propositions contained in the discourse units that they relate:

(16.10) [the debt plan was rushed to completion]N [in order to be announced at the meeting]S ⁸

In this example, the satellite describes a world state that is realized by the action described in the nucleus. This relationship is about the world, and not about the author's communicative intentions.

Example  Figure 16.5 depicts an RST analysis of a paragraph from a movie review. Asymmetric (subordinating) relations are depicted with an arrow from the satellite to the nucleus; symmetric (coordinating) relations are depicted with lines. The elementary discourse units 1F and 1G are combined into a larger discourse unit with the symmetric CONJUNCTION relation. The resulting discourse unit is then the satellite in a JUSTIFY relation with 1E.

⁸From the RST Treebank (Carlson et al., 2002).
Hierarchical discourse parsing

The goal of discourse parsing is to recover a hierarchical structural analysis from a document text, such as the analysis in Figure 16.5. For now, let's assume a segmentation of the document into elementary discourse units (EDUs); segmentation algorithms are discussed below. After segmentation, discourse parsing can be viewed as a combination of two components: the discourse relation classification techniques discussed in § 16.3.1, and algorithms for phrase-structure parsing, such as chart parsing and shift-reduce, which were discussed in chapter 10.

Both chart parsing and shift-reduce require encoding composite discourse units, either in a discrete feature vector or a dense neural representation. Some discourse parsers rely on the strong compositionality criterion (Marcu, 1996), which states the assumption that a composite discourse unit can be represented by its nucleus. This criterion is used in feature-based discourse parsing to determine the feature vector for a composite discourse unit (Hernault et al., 2010); it is used in neural approaches to set the vector encoding for a composite discourse unit equal to the encoding of its nucleus (Ji and Eisenstein, 2014). An alternative neural approach is to learn a composition function over the components of a composite discourse unit (Li et al., 2014), using a recursive neural network (see § 14.8.3).

Bottom-up discourse parsing  Assume a segmentation of the text into N elementary discourse units with base representations $\{z^{(i)}\}_{i=1}^N$, and assume a composition function COMPOSE . . .
. . . spanning i + 1 : j, and this violates the locality assumption that underlies CKY's optimality guarantee. Bottom-up parsing with recursively constructed span representations is generally not guaranteed to find the best-scoring discourse parse. This problem is explored in an exercise at the end of the chapter.

Transition-based discourse parsing  One drawback of bottom-up parsing is its cubic time complexity in the length of the input. For long documents, transition-based parsing is an appealing alternative. The shift-reduce algorithm (see § 10.6.2) can be applied to discourse parsing fairly directly (Sagae, 2009): the stack stores a set of discourse units and their representations, and each action is chosen by a function of these representations. This function could be a linear product of weights and features, or it could be a neural network applied to encodings of the discourse units. The REDUCE action then performs composition on the two discourse units at the top of the stack, yielding a larger composite discourse unit, which goes on top of the stack. All of the techniques for integrating learning and transition-based parsing, described in § 11.3, are applicable to discourse parsing.

Segmenting discourse units  In rhetorical structure theory, elementary discourse units do not cross the sentence boundary, so discourse segmentation can be performed within sentences, assuming the sentence segmentation is given. The segmentation of sentences into elementary discourse units is typically performed using features of the syntactic analysis (Braud et al., 2017). One approach is to train a classifier to determine whether each syntactic constituent is an EDU, using features such as the production, tree structure, and head words (Soricut and Marcu, 2003; Hernault et al., 2010). Another approach is to train a sequence labeling model, such as a conditional random field (Sporleder and Lapata, 2005; Xuan Bach et al., 2012; Feng et al., 2014). This is done using the BIO formalism for segmentation by sequence labeling, described in § 8.3.
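Returning to the transition-based approach above, the following is a minimal sketch of a shift-reduce loop over pre-segmented EDUs. The action-scoring function and the composition function are placeholders for the learned components described in the text.

```python
def shift_reduce_parse(edus, score_action, compose):
    """edus: list of encoded elementary discourse units.
    score_action(action, stack, buffer): scores 'shift' and 'reduce'.
    compose(left, right): combines two discourse units into a composite unit."""
    stack, buffer = [], list(edus)
    while buffer or len(stack) > 1:
        actions = []
        if buffer:
            actions.append("shift")
        if len(stack) >= 2:
            actions.append("reduce")
        action = max(actions, key=lambda a: score_action(a, stack, buffer))
        if action == "shift":
            stack.append(buffer.pop(0))
        else:  # reduce: compose the top two discourse units on the stack
            right = stack.pop()
            left = stack.pop()
            stack.append(compose(left, right))
    return stack[0]  # a single composite unit spanning the document
```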
16.3.3 Argumentation

An alternative view of text-level relational structure focuses on argumentation (Stab and Gurevych, 2014b). Each segment (typically a sentence or clause) may support or rebut another segment, creating a graph structure over the text. In the following example (from Peldszus and Stede, 2013), segment S2 provides argumentative support for the proposition in the segment S1:

(16.11) [We should tear the building down,]S1 [because it is full of asbestos]S2.

Assertions may also support or rebut proposed links between two other assertions, creating a hypergraph, which is a generalization of a graph to the case in which edges can join any number of vertices. This can be seen by introducing another sentence into the example:

(16.12) [In principle it is possible to clean it up,]S3 [but according to the mayor that is too expensive.]S4

S3 acknowledges the validity of S2, but undercuts its support of S1. This can be represented by introducing a hyperedge, (S3, S2, S1)_undercut, indicating that S3 undercuts the proposed relationship between S2 and S1. S4 then undercuts the relevance of S3. Argumentation mining is the task of recovering such structures from raw texts. At present, annotations of argumentation structure are relatively small: Stab and Gurevych (2014a) have annotated a collection of 90 persuasive essays, and Peldszus and Stede (2015) have solicited and annotated a set of 112 paragraph-length "microtexts" in German.

16.3.4 Applications of discourse relations

The predominant application of discourse parsing is to select content within a document. In rhetorical structure theory, the nucleus is considered the more important element of the relation, and is more likely to be part of a summary of the document; it may also be more informative for document classification. The D-LTAG theory that underlies the Penn Discourse Treebank lacks this notion of nuclearity, but arguments may have varying importance, depending on the relation type. For example, the span of text constituting ARG1 of an expansion relation is more likely to appear in a summary, while the sentence constituting ARG2 of an implicit relation is less likely (Louis et al., 2010). Discourse relations may also signal segmentation points in the document structure. Explicit discourse markers have been shown to correlate with changes in subjectivity, and identifying such change points can improve document-level sentiment classification, by helping the classifier to focus on the subjective parts of the text (Trivedi and Eisenstein, 2013; Yang and Cardie, 2014).

Extractive Summarization  Text summarization is the problem of converting a longer text into a shorter one, while still conveying the key facts, events, ideas, and sentiments from the original. In extractive summarization, the summary is a subset of the original text; in abstractive summarization, the summary is produced de novo, by paraphrasing the original, or by first encoding it into a semantic representation (see § 19.2). The main strategy for extractive summarization is to maximize coverage, choosing a subset of the document that best covers the concepts mentioned in the document as a whole; typically, coverage is approximated by bag-of-words overlap (Nenkova and McKeown, 2012). Coverage-based objectives can be supplemented by hierarchical discourse relations, using the principle of nuclearity: in any subordinating discourse relation, the nucleus is more critical to the overall meaning
of the text, and is therefore more important to include in an extractive summary (Marcu, 1997a).¹⁰ This insight can be generalized from individual relations using the concept of discourse depth (Hirao et al., 2013): for each elementary discourse unit e, the discourse depth d_e is the number of relations in which a discourse unit containing e is the satellite.

Both discourse depth and nuclearity can be incorporated into extractive summarization, using constrained optimization. Let x_n be a bag-of-words vector representation of elementary discourse unit n, let y_n ∈ {0, 1} indicate whether n is included in the summary, and let d_n be the depth of unit n. Furthermore, let each discourse unit have a "head" h, which is defined recursively:

• if a discourse unit is produced by a subordinating relation, then its head is the head of the (unique) nucleus;
• if a discourse unit is produced by a coordinating relation, then its head is the head of the left-most nucleus;
• for each elementary discourse unit, its parent π(n) ∈ {∅, 1, 2, . . . , N} is the head of the smallest discourse unit containing n whose head is not n;
• if n is the head of the discourse unit spanning the whole document, then π(n) = ∅.

With these definitions in place, discourse-driven extractive summarization can be formalized as (Hirao et al., 2013),

$\max_{y \in \{0,1\}^N} \sum_{n=1}^N y_n \frac{\Psi(x_n, \{x_{1:N}\})}{d_n}$
$\text{s.t.} \quad \sum_{n=1}^N y_n \left( \sum_{j=1}^V x_{n,j} \right) \le L$
$\qquad \ y_{\pi(n)} \ge y_n, \quad \forall n \ \text{s.t.} \ \pi(n) \ne \emptyset$   [16.11]

where $\Psi(x_n, \{x_{1:N}\})$ measures the coverage of elementary discourse unit n with respect to the rest of the document, and $\sum_{j=1}^V x_{n,j}$ is the number of tokens in x_n. The first constraint ensures that the number of tokens in the summary has an upper bound L. The second constraint ensures that no elementary discourse unit is included unless its parent is also included. In this way, the discourse structure is used twice: to downweight the contributions of elementary discourse units that are not central to the discourse, and to ensure that the resulting structure is a subtree of the original discourse parse.

¹⁰Conversely, the arguments of a multi-nuclear relation should either both be included in the summary, or both excluded (Durrett et al., 2016).
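To make the objective concrete, the following brute-force sketch enumerates subsets of EDUs under the length and parent constraints of 16.11. The coverage scores, depths, parents, and token counts are assumed to be given; this enumeration is only illustrative, since a real system would instead pass 16.11 to an integer linear programming solver, as discussed next.

```python
from itertools import combinations

def best_summary(coverage, depth, parent, length, budget):
    """coverage[n]: Psi(x_n, {x_1:N}); depth[n]: d_n (assumed >= 1);
    parent[n]: pi(n), or None for the document head; length[n]: token count
    of EDU n; budget: the length limit L."""
    N = len(coverage)
    best, best_score = set(), float("-inf")
    for k in range(N + 1):
        for subset in combinations(range(N), k):
            chosen = set(subset)
            if sum(length[n] for n in chosen) > budget:
                continue  # violates the length constraint
            if any(parent[n] is not None and parent[n] not in chosen
                   for n in chosen):
                continue  # a chosen unit's parent must also be chosen
            score = sum(coverage[n] / depth[n] for n in chosen)
            if score > best_score:
                best, best_score = chosen, score
    return sorted(best), best_score
```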
[Figure 16.6: A discourse depth tree (Hirao et al., 2013) for the discourse parse from Figure 16.5, in which each elementary discourse unit is connected to its parent. The discourse units in one valid summary are underlined.]

The optimization problem in 16.11 can be solved with integer linear programming, described in § 13.2.2.¹¹ Figure 16.6 shows a discourse depth tree for the RST analysis from Figure 16.5, in which each elementary discourse unit is connected to (and below) its parent. The underlined discourse units in the figure constitute the following summary:

(16.13) It could have been a great movie, and I really liked the son of the leader of the Samurai. But, other than all that, this movie is nothing more than hidden rip-offs.

¹¹Formally, 16.11 is a special case of the knapsack problem, in which the goal is to find a subset of items with maximum value, constrained by some maximum weight (Cormen et al., 2009).

Document classification  Hierarchical discourse structures lend themselves naturally to text classification: in a subordinating discourse relation, the nucleus should play a stronger role in the classification decision than the satellite. Various implementations of this idea have been proposed.

• Focusing on within-sentence discourse relations and lexicon-based classification (see § 4.1.2), Voll and Taboada (2007) simply ignore the text in the satellites of each discourse relation.
• At the document level, elements of each discourse relation argument can be reweighted, favoring words in the nucleus, and disfavoring words in the satellite (Heerschop et al., 2011; Bhatia et al., 2015). This approach can be applied recursively, computing weights across the entire document. The weights can be relation-specific, so that the features from the satellites of contrastive relations are discounted or even reversed.
• Alternatively, the hierarchical discourse structure can define the structure of a recursive neural network (see § 10.6.1). In this network, the representation of each
discourse unit is computed from its arguments and from a parameter corresponding to the discourse relation (Ji and Smith, 2017).

Shallow, non-hierarchical discourse relations have also been applied to document classification. One approach is to impose a set of constraints on the analyses of individual discourse units, so that adjacent units have the same polarity when they are connected by a discourse relation indicating agreement, and opposite polarity when connected by a contrastive discourse relation, indicating disagreement (Somasundaran et al., 2009; Zirn et al., 2011). Yang and Cardie (2014) apply explicitly-marked relations from the Penn Discourse Treebank to the problem of sentence-level sentiment polarity classification (see § 4.1). They impose the following soft constraints:

• When a CONTRAST relation appears at the beginning of a sentence, the sentence should have the opposite sentiment polarity as its predecessor.
• When an EXPANSION or CONTINGENCY appears at the beginning of a sentence, it should have the same polarity as its predecessor.
• When a CONTRAST relation appears within a sentence, the sentence should have neutral polarity, since it is likely to express both sentiments.

These discourse-driven constraints are shown to improve performance on two datasets of product reviews.

Coherence  Just as grammaticality is the property shared by well-structured sentences, coherence is the property shared by well-structured discourses. One application of discourse processing is to measure (and maximize) the coherence of computer-generated texts like translations and summaries (Kibble and Power, 2004). Coherence assessment is also used to evaluate human-generated texts, such as student essays (e.g., Miltsakaki and Kukich, 2004; Burstein et al., 2013). Coherence subsumes a range of phenomena, many of which have been highlighted earlier in this chapter: e.g., that adjacent sentences should be lexically cohesive (Foltz et al., 1998; Ji et al., 2015; Li and Jurafsky, 2017), and that entity references should follow the principles of centering theory (Barzilay and Lapata, 2008; Nguyen and Joty, 2017). Discourse relations also bear on the coherence of a text in a variety of ways:

• Hierarchical discourse relations tend to have a "canonical ordering" of the nucleus and satellite (Mann and Thompson, 1988): for example, in the ELABORATION relation from rhetorical structure theory, the nucleus always comes first, while in the JUSTIFICATION relation, the satellite tends to be first (Marcu, 1997b).
• Discourse relations should be signaled by connectives that are appropriate to the semantic or functional relationship between the arguments: for example, a coherent text would be more likely to use however to signal a COMPARISON relation than a temporal relation (Kibble and Power, 2004).
• Discourse relations tend to appear in predictable sequences: for example, COMPARISON relations tend to immediately precede CONTINGENCY relations (Pitler et al., 2008). This observation can be formalized by generalizing the entity grid model (§ 16.2.2), so that each cell (i, j) provides information about the role of the discourse argument containing a mention of entity j in sentence i (Lin et al., 2011). For example, if the first sentence is ARG1 of a comparison relation, then any entity mentions in the sentence would be labeled COMP.ARG1. This approach can also be applied to RST discourse relations (Feng et al., 2014).

Datasets  One difficulty with evaluating metrics of discourse coherence is that human-generated texts usually meet some minimal threshold of coherence. For this reason, much of the research on measuring coherence has focused on synthetic data. A typical setting is to permute the sentences of a human-written text, and then determine whether the original sentence ordering scores higher according to the proposed coherence measure (Barzilay and Lapata, 2008). There are also small datasets of human evaluations of the coherence of machine summaries: for example, human judgments of the summaries from the participating systems in the 2003 Document Understanding Conference are available online.¹² Researchers from the Educational Testing Service (an organization which administers several national exams in the United States) have studied the relationship between discourse coherence and student essay quality (Burstein et al., 2003, 2010). A public dataset of essays from second-language learners, with quality annotations, has been made available by researchers at Cambridge University (Yannakoudakis et al., 2011). At the other extreme, Louis and Nenkova (2013) analyze the structure of professionally written scientific essays, finding that discourse relation transitions help to distinguish prize-winning essays from other articles in the same genre.

Additional resources  For a manuscript-length discussion of discourse processing, see Stede (2011). Article-length surveys are offered by Webber et al. (2012) and Webber and Joshi (2012).

¹²http://homepages.inf.ed.ac.uk/mlap/coherence/
Exercises

1. Some discourse connectives tend to occur between their arguments; others can precede both arguments, and a few can follow both arguments. Indicate whether the following connectives can occur between, before, and after their arguments: however, but, while (contrastive, not temporal), although, therefore, nonetheless.

2. This exercise is to be done in pairs. Each participant selects an article from today's news, and replaces all mentions of individual people with special tokens like PERSON1, PERSON2, and so on. The other participant should then use the rules of centering theory to guess each type of referring expression: full name (Captain Ahab), partial name (e.g., Ahab), nominal (e.g., the ship's captain), or pronoun. Check whether the predictions match the original text, and whether the text conforms to the rules of centering theory.

3. In this exercise, you will produce a figure similar to Figure 16.1.
   a) Implement the smoothed cosine similarity metric from Equation 16.2, using the smoothing kernel k = [.5, .3, .15, .05].
   b) Download the text of a news article with at least ten paragraphs.
   c) Compute and plot the smoothed similarity s over the length of the article.
   d) Identify local minima in s as follows: first find all sentences m such that $s_m < s_{m \pm 1}$. Then search among these points to find the five sentences with the lowest $s_m$.
   e) How often do the five local minima correspond to paragraph boundaries?
      • The fraction of local minima that are paragraph boundaries is the precision-at-k, where in this case, k = 5.
      • The fraction of paragraph boundaries which are local minima is the recall-at-k.
      • Compute precision-at-k and recall-at-k for k = 3 and k = 10.

4. One way to formulate text segmentation as a probabilistic model is through the use of the Dirichlet Compound Multinomial (DCM) distribution, which computes the probability of a bag-of-words, DCM(x; α), where the parameter α is a vector of positive reals. This distribution can be configured to assign high likelihood to bag-of-words vectors that are internally coherent, such that individual words appear repeatedly: for example, this behavior can be observed for simple parameterizations, such as α = α1 with α < 1.
   Let $\psi_\alpha(i, j)$ represent the log-probability of a segment $w_{i+1:j}$ under a DCM distribution with parameter α. Give a dynamic program for segmenting a text into a total
of K segments maximizing the sum of log-probabilities $\sum_{k=1}^K \psi_\alpha(s_{k-1}, s_k)$, where $s_k$ indexes the last token of segment k, and $s_0 = 0$. The time complexity of your dynamic program should not be worse than quadratic in the length of the input and linear in the number of segments.

5. Building on the previous problem, you will now adapt the CKY algorithm to perform hierarchical segmentation. Define a hierarchical segmentation as a set of segmentations $\{\{s_k^{(\ell)}\}_{k=1}^{K^{(\ell)}}\}_{\ell=1}^{L}$, where L is the segmentation depth. To ensure that the segmentation is hierarchically valid, we require that each segmentation point $s_k^{(\ell)}$ at level ℓ is also a segmentation point at level ℓ − 1, where ℓ > 1.
   For simplicity, this problem focuses on binary hierarchical segmentation, so that each segment at level ℓ > 1 has exactly 2 subsegments. Define the score of a hierarchical segmentation as the sum of the scores of all segments (at all levels), using the DCM log-probabilities from the previous problem as the segment scores. Give a CKY-like recurrence such that the optimal "parse" of the text is the maximum log-probability binary segmentation with exactly L levels.

6. The entity grid representation of centering theory can be used to compute a score for adjacent sentences, as described in § 16.2.2. Given a set of sentences, these scores can be used to compute an optimal ordering. Show that finding the ordering with the maximum log probability is NP-complete, by reduction from a well-known problem.

7. In § 16.3.2, it is noted that bottom-up parsing with compositional vector representations of each span is not guaranteed to be optimal. In this exercise, you will construct a minimal example proving this point. Consider a discourse with four units, with base representations $\{z^{(i)}\}_{i=1}^4$. Construct a scenario in which the parse selected by bottom-up parsing is not optimal, and give the precise mathematical conditions under which this suboptimal parse is selected. You may ignore the relation labels ℓ for the purpose of this example.

8. As noted in § 16.3.3, arguments can be described by hypergraphs, in which a segment may undercut a proposed edge between two other segments. Extend the model of extractive summarization described in § 16.3.4 to arguments, adding the following constraint: if segment i undercuts an argumentative relationship between j and k, then i cannot be included in the summary unless both j and k are included. Your solution should take the form of a set of linear constraints on an integer linear program — that is, each constraint can only involve addition and subtraction of variables.

In the next two exercises, you will explore the use of discourse connectives in a real corpus. Using NLTK, acquire the Brown corpus, and identify sentences that begin with any of the following connectives: however, nevertheless, moreover, furthermore, thus.
9. Both lexical consistency and discourse connectives contribute to the cohesion of a text. We might therefore expect adjacent sentences that are joined by explicit discourse connectives to also have higher word overlap. Using the Brown corpus, test this theory by computing the average cosine similarity between adjacent sentences that are connected by one of the connectives mentioned above. Compare this to the average cosine similarity of all other adjacent sentences. If you know how, perform a two-sample t-test to determine whether the observed difference is statistically significant.

10. Group the above connectives into the following three discourse relations:
    • Expansion: moreover, furthermore
    • Comparison: however, nevertheless
    • Contingency: thus
    Focusing on pairs of sentences which are joined by one of these five connectives, build a classifier to predict the discourse relation from the text of the two adjacent sentences — taking care to ignore the connective itself. Use the first 30000 sentences of the Brown corpus as the training set, and the remaining sentences as the test set. Compare the performance of your classifier against simply choosing the most common class. Using a bag-of-words classifier, it is hard to do much better than this baseline, so consider more sophisticated alternatives!
Part IV: Applications
Chapter 17: Information extraction

Computers offer powerful capabilities for searching and reasoning about structured records and relational data. Some have argued that the most important limitation of artificial intelligence is not inference or learning, but simply having too little knowledge (Lenat et al., 1990). Natural language processing provides an appealing solution: automatically construct a structured knowledge base by reading natural language text.

For example, many Wikipedia pages have an "infobox" that provides structured information about an entity or event. An example is shown in Figure 17.1a: each row represents one or more properties of the entity IN THE AEROPLANE OVER THE SEA, a record album. The set of properties is determined by a predefined schema, which applies to all record albums in Wikipedia. As shown in Figure 17.1b, the values for many of these fields are indicated directly in the first few sentences of text on the same Wikipedia page.

The task of automatically constructing (or "populating") an infobox from text is an example of information extraction. Much of information extraction can be described in terms of entities, relations, and events.

• Entities are uniquely specified objects in the world, such as people (JEFF MANGUM), places (ATHENS, GEORGIA), organizations (MERGE RECORDS), and times (FEBRUARY 10, 1998). Chapter 8 described the task of named entity recognition, which labels tokens as parts of entity spans. Now we will see how to go further, linking each entity mention to an element in a knowledge base.
• Relations include a predicate and two arguments: for example, CAPITAL(GEORGIA, ATLANTA).
• Events involve multiple typed arguments. For example, the production and release
of the album described in Figure 17.1 is described by the event,

⟨TITLE : IN THE AEROPLANE OVER THE SEA, ARTIST : NEUTRAL MILK HOTEL, RELEASE-DATE : 1998-FEB-10, . . .⟩

The set of arguments for an event type is defined by a schema. Events often refer to time-delimited occurrences: weddings, protests, purchases, terrorist attacks.

(a) A Wikipedia infobox

(17.1) In the Aeroplane Over the Sea is the second and final studio album by the American indie rock band Neutral Milk Hotel.
(17.2) It was released in the United States on February 10, 1998 on Merge Records and May 1998 on Blue Rose Records in the United Kingdom.
(17.3) Jeff Mangum moved from Athens, Georgia to Denver, Colorado to prepare the bulk of the album's material with producer Robert Schneider, this time at Schneider's newly created Pet Sounds Studio at the home of Jim McIntyre.

(b) The first few sentences of text. Strings that match fields or field names in the infobox are underlined; strings that mention other entities are wavy-underlined.

Figure 17.1: From the Wikipedia page for the album "In the Aeroplane Over the Sea", retrieved October 26, 2017.

Information extraction is similar to semantic role labeling (chapter 13): we may think of predicates as corresponding to events, and the arguments as defining slots in the event representation. However, the goals of information extraction are different. Rather than accurately parsing every sentence, information extraction systems often focus on recognizing a few key relation or event types, or on the task of identifying all properties of a given entity. Information extraction is often evaluated by the correctness of the resulting knowledge base, and not by how many sentences were accurately parsed. The goal is sometimes described as macro-reading, as opposed to micro-reading, in which each sentence must be analyzed correctly. Macro-reading systems are not penalized for ignoring difficult sentences, as long as they can recover the same information from other, easier-to-read sources. However, macro-reading systems must resolve apparent inconsistencies
(was the album released on MERGE RECORDS or BLUE ROSE RECORDS?), requiring reasoning across the entire dataset.

In addition to the basic tasks of recognizing entities, relations, and events, information extraction systems must handle negation, and must be able to distinguish statements of fact from hopes, fears, hunches, and hypotheticals. Finally, information extraction is often paired with the problem of question answering, which requires accurately parsing a query, and then selecting or generating a textual answer. Question answering systems can be built on knowledge bases that are extracted from large text corpora, or may attempt to identify answers directly from the source texts.

17.1 Entities

The starting point for information extraction is to identify mentions of entities in text. Consider the following example:

(17.4) The United States Army captured a hill overlooking Atlanta on May 14, 1864.

For this sentence, there are two goals:

1. Identify the spans United States Army, Atlanta, and May 14, 1864 as entity mentions. (The hill is not uniquely identified, so it is not a named entity.) We may also want to recognize the named entity types: organization, location, and date. This is named entity recognition, and is described in chapter 8.
2. Link these spans to entities in a knowledge base: U.S. ARMY, ATLANTA, and 1864-MAY-14. This task is known as entity linking.

The strings to be linked to entities are mentions — similar to the use of this term in coreference resolution. In some formulations of the entity linking task, only named entities are candidates for linking. This is sometimes called named entity linking (Ling et al., 2015). In other formulations, such as Wikification (Milne and Witten, 2008), any string can be a mention. The set of target entities often corresponds to Wikipedia pages, and Wikipedia is the basis for more comprehensive knowledge bases such as YAGO (Suchanek et al., 2007), DBPedia (Auer et al., 2007), and Freebase (Bollacker et al., 2008). Entity linking may also be performed in more "closed" settings, where a much smaller list of targets is provided in advance. The system must also determine if a mention does not refer to any entity in the knowledge base, sometimes called a NIL entity (McNamee and Dang, 2009).

Returning to (17.4), the three entity mentions may seem unambiguous. But the Wikipedia disambiguation page for the string Atlanta says otherwise:¹ there are more than twenty

¹https://en.wikipedia.org/wiki/Atlanta_(disambiguation), retrieved November 1, 2017.
different towns and cities, five United States Navy vessels, a magazine, a television show, a band, and a singer — each prominent enough to have its own Wikipedia page. We now consider how to choose among these dozens of possibilities. In this chapter we will focus on supervised approaches. Unsupervised entity linking is closely related to the problem of cross-document coreference resolution, where the task is to identify pairs of mentions that corefer, across document boundaries (Bagga and Baldwin, 1998b; Singh et al., 2011).

17.1.1 Entity linking by learning to rank

Entity linking is often formulated as a ranking problem,

$\hat{y} = \underset{y \in \mathcal{Y}(x)}{\text{argmax}} \ \Psi(y, x, c)$,   [17.1]

where y is a target entity, x is a description of the mention, $\mathcal{Y}(x)$ is a set of candidate entities, and c is a description of the context — such as the other text in the document, or its metadata. The function Ψ is a scoring function, which could be a linear model, Ψ(y, x, c) = θ · f(y, x, c), or a more complex function such as a neural network. In either case, the scoring function can be learned by minimizing a margin-based ranking loss,

$\ell(\hat{y}, y^{(i)}, x^{(i)}, c^{(i)}) = \left( \Psi(\hat{y}, x^{(i)}, c^{(i)}) - \Psi(y^{(i)}, x^{(i)}, c^{(i)}) + 1 \right)_+$,   [17.2]

where $y^{(i)}$ is the ground truth and $\hat{y} \ne y^{(i)}$ is the predicted target for mention $x^{(i)}$ in context $c^{(i)}$ (Joachims, 2002; Dredze et al., 2010).

Candidate identification  For computational tractability, it is helpful to restrict the set of candidates, $\mathcal{Y}(x)$. One approach is to use a name dictionary, which maps from strings to the entities that they might mention. This mapping is many-to-many: a string such as Atlanta can refer to multiple entities, and conversely, an entity such as ATLANTA can be referenced by multiple strings. A name dictionary can be extracted from Wikipedia, with links between each Wikipedia entity page and the anchor text of all hyperlinks that point to the page (Bunescu and Pasca, 2006; Ratinov et al., 2011). To improve recall, the name dictionary can be augmented by partial and approximate matching (Dredze et al., 2010), but as the set of candidates grows, the risk of false positives increases. For example, the string Atlanta is a partial match to the Atlanta Fed (a name for the FEDERAL RESERVE BANK OF ATLANTA), and a noisy match (edit distance of one) from Atalanta (a heroine in Greek mythology and an Italian soccer team).
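Here is a toy sketch of name-dictionary candidate generation. The dictionary entries are invented for illustration; a real dictionary would be harvested from Wikipedia anchor text as just described, possibly augmented with partial and approximate matches.

```python
NAME_DICTIONARY = {
    "atlanta": ["ATLANTA_GEORGIA", "ATLANTA_OHIO", "ATLANTA_MAGAZINE", "USS_ATLANTA"],
    "atlanta hawks": ["ATLANTA_HAWKS"],
}

def candidates(mention):
    """Return the candidate set Y(x) for a mention string, plus NIL."""
    return NAME_DICTIONARY.get(mention.lower(), []) + ["NIL"]

print(candidates("Atlanta"))
# ['ATLANTA_GEORGIA', 'ATLANTA_OHIO', 'ATLANTA_MAGAZINE', 'USS_ATLANTA', 'NIL']
```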
Features  Feature-based approaches to entity ranking rely on three main types of local information (Dredze et al., 2010):

• The similarity of the mention string to the canonical entity name, as quantified by string similarity. This feature would elevate the city ATLANTA over the basketball team ATLANTA HAWKS for the string Atlanta.
• The popularity of the entity, which can be measured by Wikipedia page views or PageRank in the Wikipedia link graph. This feature would elevate ATLANTA, GEORGIA over the unincorporated community of ATLANTA, OHIO.
• The entity type, as output by the named entity recognition system. This feature would elevate the city of ATLANTA over the magazine ATLANTA in contexts where the mention is tagged as a location.

In addition to these local features, the document context can also help. If Jamaica is mentioned in a document about the Caribbean, it is likely to refer to the island nation; in the context of New York, it is likely to refer to the neighborhood in Queens; in the context of a menu, it might refer to a hibiscus tea beverage. Such hints can be formalized by computing the similarity between the Wikipedia page describing each candidate entity and the mention context $c^{(i)}$, which may include the bag-of-words representing the document (Dredze et al., 2010; Hoffart et al., 2011) or a smaller window of text around the mention (Ratinov et al., 2011). For example, we can compute the cosine similarity between bag-of-words vectors for the context and entity description, typically weighted using inverse document frequency to emphasize rare words.²

²The document frequency of word j is $\text{DF}(j) = \sum_{i=1}^N \delta(x_j^{(i)} > 0)$, equal to the number of documents in which the word appears. The contribution of each word to the cosine similarity of two bag-of-words vectors can be weighted by the inverse document frequency $\frac{1}{\text{DF}(j)}$ or $\log \frac{1}{\text{DF}(j)}$, to emphasize rare words (Spärck Jones, 1972).

Neural entity linking  An alternative approach is to compute the score for each entity candidate using distributed vector representations of the entities, mentions, and context. For example, for the task of entity linking in Twitter, Yang et al. (2016) employ the bilinear scoring function,

$\Psi(y, x, c) = v_y^\top \Theta^{(y,x)} x + v_y^\top \Theta^{(y,c)} c$,   [17.3]

with $v_y \in \mathbb{R}^{K_y}$ as the vector embedding of entity y, $x \in \mathbb{R}^{K_x}$ as the embedding of the mention, $c \in \mathbb{R}^{K_c}$ as the embedding of the context, and the matrices $\Theta^{(y,x)}$ and $\Theta^{(y,c)}$ as parameters that score the compatibility of each entity with respect to the mention and context. Each of the vector embeddings can be learned from an end-to-end objective, or pre-trained on unlabeled data.
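The following numpy sketch instantiates Equation 17.3 with randomly initialized placeholders for the entity vectors, the mention and context embeddings, and the two bilinear matrices; in a real system these would be pretrained and/or fine-tuned as described below.

```python
import numpy as np

rng = np.random.default_rng(0)
K_y, K_x, K_c = 8, 6, 6
entity_vecs = {"ATLANTA_GEORGIA": rng.normal(size=K_y),
               "ATLANTA_MAGAZINE": rng.normal(size=K_y)}
Theta_yx = rng.normal(size=(K_y, K_x))   # entity-mention compatibility
Theta_yc = rng.normal(size=(K_y, K_c))   # entity-context compatibility

def score(entity, x, c):
    """Bilinear score of Equation 17.3 for one candidate entity."""
    v_y = entity_vecs[entity]
    return v_y @ Theta_yx @ x + v_y @ Theta_yc @ c

x = rng.normal(size=K_x)   # placeholder embedding of the mention "Atlanta"
c = rng.normal(size=K_c)   # placeholder embedding of the surrounding document
best = max(entity_vecs, key=lambda e: score(e, x, c))
```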
• Pretrained entity embeddings can be obtained from an existing knowledge base (Bordes et al., 2011, 2013), or by running a word embedding algorithm such as WORD2VEC on the text of Wikipedia, with hyperlinks substituted for the anchor text.³
• The embedding of the mention x can be computed by averaging the embeddings of the words in the mention (Yang et al., 2016), or by the compositional techniques described in § 14.8.
• The embedding of the context c can also be computed from the embeddings of the words in the context. A denoising autoencoder learns a function from raw text to dense K-dimensional vector encodings by minimizing a reconstruction loss (Vincent et al., 2010),

  $\min_{\theta_g, \theta_h} \sum_{i=1}^N || x^{(i)} - g(h(\tilde{x}^{(i)}; \theta_h); \theta_g) ||^2$,   [17.4]

  where $\tilde{x}^{(i)}$ is a noisy version of the bag-of-words counts $x^{(i)}$, which is produced by randomly setting some counts to zero; $h : \mathbb{R}^V \to \mathbb{R}^K$ is an encoder with parameters $\theta_h$; and $g : \mathbb{R}^K \to \mathbb{R}^V$, with parameters $\theta_g$. The encoder and decoder functions are typically implemented as feedforward neural networks. To apply this model to entity linking, each entity and context are initially represented by the encoding of their bag-of-words vectors, h(e) and g(c), and these encodings are then fine-tuned from labeled data (He et al., 2013). The context vector c can also be obtained by convolution (§ 3.4) on the embeddings of words in the document (Sun et al., 2015), or by examining metadata such as the author's social network (Yang et al., 2016).

The remaining parameters $\Theta^{(y,x)}$ and $\Theta^{(y,c)}$ can be trained by backpropagation from the margin loss in Equation 17.2.

³Pre-trained entity embeddings can be downloaded from https://code.google.com/archive/p/word2vec/.

17.1.2 Collective entity linking

Entity linking can be more accurate when it is performed jointly across a document. To see why, consider the following lists:

(17.5) a. California, Oregon, Washington
       b. Baltimore, Washington, Philadelphia
       c. Washington, Adams, Jefferson

In each case, the term Washington refers to a different entity, and this reference is strongly suggested by the other entries on the list. In the last list, all three names are highly ambiguous — there are dozens of other Adams and Jefferson entities in Wikipedia.
But a preference for coherence motivates collectively linking these references to the first three U.S. presidents.

A general approach to collective entity linking is to introduce a compatibility score $\Psi_c(y)$. Collective entity linking is then performed by optimizing the global objective,

$\hat{y} = \underset{y \in \mathcal{Y}(x)}{\text{argmax}} \ \Psi_c(y) + \sum_{i=1}^N \Psi_\ell(y^{(i)}, x^{(i)}, c^{(i)})$,   [17.5]

where $\mathcal{Y}(x)$ is the set of all possible collective entity assignments for the mentions in x, and $\Psi_\ell$ is the local scoring function for each entity i. The compatibility function is typically decomposed into a sum of pairwise scores, $\Psi_c(y) = \sum_{i=1}^N \sum_{j \ne i}^N \Psi_c(y^{(i)}, y^{(j)})$. These scores can be computed in a number of different ways:

• Wikipedia defines high-level categories for entities (e.g., living people, Presidents of the United States, States of the United States), and $\Psi_c$ can reward entity pairs for the number of categories that they have in common (Cucerzan, 2007).
• Compatibility can be measured by the number of incoming hyperlinks shared by the Wikipedia pages for the two entities (Milne and Witten, 2008).
• In a neural architecture, the compatibility of two entities can be set equal to the inner product of their embeddings, $\Psi_c(y^{(i)}, y^{(j)}) = v_{y^{(i)}} \cdot v_{y^{(j)}}$.
• A non-pairwise compatibility score can be defined using a type of latent variable model known as a probabilistic topic model (Blei et al., 2003; Blei, 2012). In this framework, each latent topic is a probability distribution over entities, and each document has a probability distribution over topics. Each entity helps to determine the document's distribution over topics, and in turn these topics help to resolve ambiguous entity mentions (Newman et al., 2006). Inference can be performed using the sampling techniques described in chapter 5.

Unfortunately, collective entity linking is NP-hard even for pairwise compatibility functions, so exact optimization is almost certainly intractable. Various approximate inference techniques have been proposed, including integer linear programming (Cheng and Roth, 2013), Gibbs sampling (Han and Sun, 2012), and graph-based algorithms (Hoffart et al., 2011; Han et al., 2011).
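The following brute-force sketch makes the global objective in Equation 17.5 concrete by enumerating joint assignments. Because the problem is NP-hard, this enumeration is only viable for a handful of mentions; the local and pairwise scoring functions are placeholders for the methods listed above.

```python
from itertools import product

def collective_link(candidate_sets, local_score, compatibility):
    """candidate_sets: one candidate list per mention.
    local_score(i, y): score of assigning entity y to mention i.
    compatibility(y1, y2): pairwise compatibility of two entities."""
    best, best_score = None, float("-inf")
    for assignment in product(*candidate_sets):
        score = sum(local_score(i, y) for i, y in enumerate(assignment))
        score += sum(compatibility(y1, y2)
                     for i, y1 in enumerate(assignment)
                     for j, y2 in enumerate(assignment) if i != j)
        if score > best_score:
            best, best_score = assignment, score
    return best
```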
17.1.3 *Pairwise ranking loss functions

The loss function defined in Equation 17.2 considers only the highest-scoring prediction $\hat{y}$, but in fact, the true entity $y^{(i)}$ should outscore all other entities. A loss function based on this idea would give a gradient against the features or representations of several entities, not just the top-scoring prediction. Usunier et al. (2009) define a general ranking error function,

$L_{rank}(k) = \sum_{j=1}^k \alpha_j$, with $\alpha_1 \ge \alpha_2 \ge \cdots \ge 0$,   [17.6]

where k is equal to the number of labels ranked higher than the correct label $y^{(i)}$. This function defines a class of ranking errors: if $\alpha_j = 1$ for all j, then the ranking error is equal to the rank of the correct entity; if $\alpha_1 = 1$ and $\alpha_{j>1} = 0$, then the ranking error is one whenever the correct entity is not ranked first; if $\alpha_j$ decreases smoothly with j, as in $\alpha_j = \frac{1}{j}$, then the error is between these two extremes.

This ranking error can be integrated into a margin objective. Remember that large margin classification requires not only the correct label, but also that the correct label outscores other labels by a substantial margin. A similar principle applies to ranking: we want a high rank for the correct entity, and we want it to be separated from other entities by a substantial margin. We therefore define the margin-augmented rank,

$r(y^{(i)}, x^{(i)}) \triangleq \sum_{y \in \mathcal{Y}(x^{(i)}) \setminus y^{(i)}} \delta\left( 1 + \psi(y, x^{(i)}) \ge \psi(y^{(i)}, x^{(i)}) \right)$,   [17.7]

where $\delta(\cdot)$ is a delta function, and $\mathcal{Y}(x^{(i)}) \setminus y^{(i)}$ is the set of all entity candidates minus the true entity $y^{(i)}$. The margin-augmented rank is the rank of the true entity, after augmenting every other candidate with a margin of one, under the current scoring function ψ. (The context c is omitted for clarity, and can be considered part of x.)

Algorithm 18  WARP approximate ranking loss
 1: procedure WARP(y⁽ⁱ⁾, x⁽ⁱ⁾)
 2:   N ← 0
 3:   repeat
 4:     Randomly sample y ∼ Y(x⁽ⁱ⁾)
 5:     N ← N + 1
 6:     if ψ(y, x⁽ⁱ⁾) + 1 > ψ(y⁽ⁱ⁾, x⁽ⁱ⁾) then            ▷ check for margin violation
 7:       r ← ⌊|Y(x⁽ⁱ⁾)| / N⌋                             ▷ compute approximate rank
 8:       return L_rank(r) × (ψ(y, x⁽ⁱ⁾) + 1 − ψ(y⁽ⁱ⁾, x⁽ⁱ⁾))
 9:   until N ≥ |Y(x⁽ⁱ⁾)| − 1                             ▷ no violation found
10:   return 0                                            ▷ return zero loss
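A Python rendering of the margin-augmented rank (Equation 17.7) and of the WARP sampling procedure in Algorithm 18 might look as follows; the scoring function psi and the candidate set are placeholders, and L_rank uses the choice alpha_j = 1/j mentioned above.

```python
import random

def margin_augmented_rank(y_true, x, candidates, psi):
    """Equation 17.7: rank of the true entity after a margin of one."""
    return sum(1 for y in candidates
               if y != y_true and psi(y, x) + 1 >= psi(y_true, x))

def L_rank(k):
    """Ranking error of Equation 17.6 with alpha_j = 1/j."""
    return sum(1.0 / j for j in range(1, k + 1))

def warp_loss(y_true, x, candidates, psi):
    """Algorithm 18: sample until a margin violation is found."""
    others = [y for y in candidates if y != y_true]
    n = 0
    while n < len(others):
        y = random.choice(others)
        n += 1
        if psi(y, x) + 1 > psi(y_true, x):       # margin violation found
            r = len(candidates) // n             # approximate rank
            return L_rank(r) * (psi(y, x) + 1 - psi(y_true, x))
    return 0.0                                   # no violation: zero loss
```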
For each instance, a hinge loss is computed from the ranking error associated with this margin-augmented rank, and the violation of the margin constraint,

$\ell(y^{(i)}, x^{(i)}) = \frac{L_{rank}(r(y^{(i)}, x^{(i)}))}{r(y^{(i)}, x^{(i)})} \sum_{y \in \mathcal{Y}(x) \setminus y^{(i)}} \left( \psi(y, x^{(i)}) - \psi(y^{(i)}, x^{(i)}) + 1 \right)_+$.   [17.8]

The sum in Equation 17.8 includes non-zero values for every label that is ranked at least as high as the true entity, after applying the margin augmentation. Dividing by the margin-augmented rank of the true entity thus gives the average violation.

The objective in Equation 17.8 is expensive to optimize when the label space is large, as is usually the case for entity linking against large knowledge bases. This motivates a randomized approximation called WARP (Weston et al., 2011), shown in Algorithm 18. In this procedure, we sample random entities until one violates the pairwise margin constraint, $\psi(y, x^{(i)}) + 1 \ge \psi(y^{(i)}, x^{(i)})$. The number of samples N required to find such a violation yields an approximation of the margin-augmented rank of the true entity, $r(y^{(i)}, x^{(i)}) \approx \left\lfloor \frac{|\mathcal{Y}(x)|}{N} \right\rfloor$. If a violation is found immediately, N = 1, the correct entity probably ranks below many others, $r \approx |\mathcal{Y}(x)|$. If many samples are required before a violation is found, $N \to |\mathcal{Y}(x)|$, then the correct entity is probably highly ranked, $r \to 1$. A computational advantage of WARP is that it is not necessary to find the highest-scoring label, which can impose a non-trivial computational cost when $\mathcal{Y}(x^{(i)})$ is large. The objective is conceptually similar to the negative sampling objective in WORD2VEC (chapter 14), which compares the observed word against randomly sampled alternatives.

17.2 Relations

After identifying the entities that are mentioned in a text, the next step is to determine how they are related. Consider the following example:

(17.6) George Bush traveled to France on Thursday for a summit.

This sentence introduces a relation between the entities referenced by George Bush and France. In the Automatic Content Extraction (ACE) ontology (Linguistic Data Consortium, 2005), the type of this relation is PHYSICAL, and the subtype is LOCATED. This relation would be written,

PHYSICAL.LOCATED(GEORGE BUSH, FRANCE).   [17.9]

Relations take exactly two arguments, and the order of the arguments matters.

In the ACE datasets, relations are annotated between entity mentions, as in the example above. Relations can also hold between nominals, as in the following example from the SemEval-2010 shared task (Hendrickx et al., 2009):
CAUSE-EFFECT           those cancers were caused by radiation exposures
INSTRUMENT-AGENCY      phone operator
PRODUCT-PRODUCER       a factory manufactures suits
CONTENT-CONTAINER      a bottle of honey was weighed
ENTITY-ORIGIN          letters from foreign countries
ENTITY-DESTINATION     the boy went to bed
COMPONENT-WHOLE        my apartment has a large kitchen
MEMBER-COLLECTION      there are many trees in the forest
COMMUNICATION-TOPIC    the lecture was about semantics

Table 17.1: Relations and example sentences from the SemEval-2010 dataset (Hendrickx et al., 2009)

(17.7) The cup contained tea from dried ginseng.

This sentence describes a relation of type ENTITY-ORIGIN between tea and ginseng. Nominal relation extraction is closely related to semantic role labeling (chapter 13). The main difference is that relation extraction is restricted to a relatively small number of relation types; for example, Table 17.1 shows the ten relation types from SemEval-2010.

17.2.1 Pattern-based relation extraction

Early work on relation extraction focused on hand-crafted patterns (Hearst, 1992). For example, the appositive Starbuck, a native of Nantucket signals the relation ENTITY-ORIGIN between Starbuck and Nantucket. This pattern can be written as,

PERSON, a native of LOCATION  ⇒  ENTITY-ORIGIN(PERSON, LOCATION).    [17.10]

This pattern will be "triggered" whenever the literal string , a native of occurs between an entity of type PERSON and an entity of type LOCATION. Such patterns can be generalized beyond literal matches using techniques such as lemmatization, which would enable the words (buy, buys, buying) to trigger the same patterns (see § 4.3.1). A more aggressive strategy would be to group all words in a WordNet synset (§ 4.2), so that, e.g., buy and purchase trigger the same patterns.

Relation extraction patterns can be implemented in finite-state automata (§ 9.1). If the named entity recognizer is also a finite-state machine, then the systems can be combined by finite-state transduction (Hobbs et al., 1997). This makes it possible to propagate uncertainty through the finite-state cascade, and disambiguate from higher-level context. For example, suppose the entity recognizer cannot decide whether Starbuck refers to a PERSON or a LOCATION; in the composed transducer, the relation extractor would be free to select the PERSON annotation when it appears in the context of an appropriate pattern.
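As a minimal illustration of how a pattern like [17.10] might be applied to the output of a named entity recognizer, the following Python sketch matches the literal , a native of trigger between single-token entity mentions. The tagged-token format and the restriction to single-token mentions are simplifying assumptions made for this example, not the output format of any particular NER library.

    from typing import List, Tuple

    # Each token is paired with a BIO-style entity tag, e.g. ("Starbuck", "B-PER")
    TaggedToken = Tuple[str, str]

    def extract_entity_origin(tokens: List[TaggedToken]):
        """Apply PERSON , a native of LOCATION => ENTITY-ORIGIN(PERSON, LOCATION)."""
        relations = []
        words = [w for w, _ in tokens]
        tags = [t for _, t in tokens]
        for i in range(len(tokens) - 5):
            # literal trigger string between the two entity mentions
            if words[i + 1:i + 5] == [",", "a", "native", "of"]:
                if tags[i].endswith("PER") and tags[i + 5].endswith("LOC"):
                    relations.append(("ENTITY-ORIGIN", words[i], words[i + 5]))
        return relations

    example = [("Starbuck", "B-PER"), (",", "O"), ("a", "O"), ("native", "O"),
               ("of", "O"), ("Nantucket", "B-LOC"), (",", "O")]
    print(extract_entity_origin(example))
    # [('ENTITY-ORIGIN', 'Starbuck', 'Nantucket')]

Generalizing the trigger with lemmatization or synonym sets, as described above, would amount to replacing the literal string comparison with a lookup against the normalized forms.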
17.2.2 Relation extraction as a classification task

Relation extraction can be formulated as a classification problem,

\hat{r}_{(i,j),(m,n)} = \text{argmax}_{r \in \mathcal{R}} \; \Psi(r, (i, j), (m, n), \boldsymbol{w}),    [17.11]

where r ∈ R is a relation type (possibly NIL), w_{i+1:j} is the span of the first argument, and w_{m+1:n} is the span of the second argument. The argument w_{m+1:n} may appear before or after w_{i+1:j} in the text, or they may overlap; we stipulate only that w_{i+1:j} is the first argument of the relation. We now consider three alternatives for computing the scoring function.

Feature-based classification

In a feature-based classifier, the scoring function is defined as,

\Psi(r, (i, j), (m, n), \boldsymbol{w}) = \boldsymbol{\theta} \cdot \boldsymbol{f}(r, (i, j), (m, n), \boldsymbol{w}),    [17.12]

with θ representing a vector of weights, and f(·) a vector of features. The pattern-based methods described in § 17.2.1 suggest several features:

• Local features of w_{i+1:j} and w_{m+1:n}, including: the strings themselves; whether they are recognized as entities, and if so, which type; whether the strings are present in a gazetteer of entity names; each string's syntactic head (§ 9.2.2).

• Features of the span between the two arguments, w_{j+1:m} or w_{n+1:i} (depending on which argument appears first): the length of the span; the specific words that appear in the span, either as a literal sequence or a bag-of-words; the WordNet synsets (§ 4.2) that appear in the span between the arguments.

• Features of the syntactic relationship between the two arguments, typically the dependency path between the arguments (§ 13.2.1). Example dependency paths are shown in Table 17.2. A sketch of a feature function along these lines appears below.
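The following sketch shows one possible feature function in this style, operating on a tokenized sentence with candidate argument spans. The feature names, the simplified inputs (no parse tree, no gazetteer), and the Python slice indexing are assumptions made for the example; they are not a prescribed feature set.

    from typing import Dict, List, Tuple

    def relation_features(relation: str,
                          arg1: Tuple[int, int],
                          arg2: Tuple[int, int],
                          tokens: List[str],
                          entity_types: Dict[Tuple[int, int], str]) -> Dict[str, float]:
        """Sparse features f(r, (i,j), (m,n), w) for a linear relation classifier.

        arg1 and arg2 are token spans (Python slice conventions); entity_types
        maps spans to NER labels. Dependency-path features are omitted."""
        i, j = arg1
        m, n = arg2
        f = {}
        # local features of each argument, conjoined with the relation label
        f[f"{relation}:arg1={' '.join(tokens[i:j])}"] = 1.0
        f[f"{relation}:arg2={' '.join(tokens[m:n])}"] = 1.0
        f[f"{relation}:type1={entity_types.get(arg1, 'NONE')}"] = 1.0
        f[f"{relation}:type2={entity_types.get(arg2, 'NONE')}"] = 1.0
        # features of the span between the two arguments
        lo, hi = (j, m) if j <= m else (n, i)
        f[f"{relation}:span-length={hi - lo}"] = 1.0
        for word in tokens[lo:hi]:
            f[f"{relation}:between={word}"] = 1.0   # bag-of-words between arguments
        return f

    def score(weights: Dict[str, float], features: Dict[str, float]) -> float:
        # Psi(r, ...) = theta . f(...)
        return sum(weights.get(k, 0.0) * v for k, v in features.items())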
Kernels

Suppose that the first line of Table 17.2 is a labeled example, and the remaining lines are instances to be classified. A feature-based approach would have to decompose the dependency paths into features that capture individual edges, with or without their labels, and then learn weights for each of these features: for example, the second line contains identical dependencies, but different arguments; the third line contains a different inflection of the word travel; the fourth and fifth lines each contain an additional edge on the dependency path; and the sixth example uses an entirely different path. Rather than attempting to create local features that capture all of the ways in which these dependency paths are similar and different, we can instead define a similarity function κ, which computes a score for any pair of instances, κ : X × X → R+. The score for any pair of instances (i, j) is κ(x^(i), x^(j)) ≥ 0, with κ(i, j) being large when instances x^(i) and x^(j) are similar. If the function κ obeys a few key properties, it is a valid kernel function.4

1. George Bush traveled to France             George Bush ←NSUBJ traveled →OBL France
2. Ahab traveled to Nantucket                 Ahab ←NSUBJ traveled →OBL Nantucket
3. George Bush will travel to France          George Bush ←NSUBJ travel →OBL France
4. George Bush wants to travel to France      George Bush ←NSUBJ wants →XCOMP travel →OBL France
5. Ahab traveled to a city in France          Ahab ←NSUBJ traveled →OBL city →NMOD France
6. We await Ahab 's visit to France           Ahab ←NMOD:POSS visit →NMOD France

Table 17.2: Candidate instances for the PHYSICAL.LOCATED relation, and their dependency paths

Given a valid kernel function, we can build a non-linear classifier without explicitly defining a feature vector or neural network architecture. For a binary classification problem y ∈ {−1, 1}, we have the decision function,

\hat{y} = \text{Sign}\left( b + \sum_{i=1}^{N} y^{(i)} \alpha^{(i)} \kappa(x^{(i)}, x) \right),    [17.13]

where b and {α^(i)}_{i=1}^{N} are parameters that must be learned from the training set, under the constraint ∀i, α^(i) ≥ 0. Intuitively, each α^(i) specifies the importance of the instance x^(i) towards the classification rule. Kernel-based classification can be viewed as a weighted form of the nearest-neighbor classifier (Hastie et al., 2009), in which test instances are assigned the most common label among their near neighbors in the training set. This results in a non-linear classification boundary. The parameters are typically learned from a margin-based objective (see § 2.4), leading to the kernel support vector machine. To generalize to multi-class classification, we can train separate binary classifiers for each label (sometimes called one-versus-all), or train binary classifiers for each pair of possible labels (one-versus-one).

Dependency kernels are particularly effective for relation extraction, due to their ability to capture syntactic properties of the path between the two candidate arguments. One class of dependency tree kernels is defined recursively, with the score for a pair of trees equal to the similarity of the root nodes and the sum of similarities of matched pairs of child subtrees (Zelenko et al., 2003; Culotta and Sorensen, 2004). Alternatively, Bunescu and Mooney (2005) define a kernel function over sequences of unlabeled dependency edges, in which the score is computed as a product of scores for each pair of words in the sequence: identical words receive a high score, words that share a synset or part-of-speech receive a small non-zero score (e.g., travel / visit), and unrelated words receive a score of zero.

4 The Gram matrix K arises from computing the kernel function between all pairs in a set of instances. For a valid kernel, the Gram matrix must be symmetric (K = K⊤) and positive semi-definite (∀a, a⊤Ka ≥ 0). For more on kernel-based classification, see chapter 14 of Murphy (2012).
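The following sketch gives one possible implementation of a kernel in the spirit of Bunescu and Mooney (2005), scoring pairs of dependency paths represented as sequences of words and edge labels; exercise 7 at the end of this chapter formalizes the same scoring scheme. The specific scores (2, 1, 0) follow that scheme, while the part-of-speech lookup table is an assumption made for the example.

    from typing import Dict, Tuple

    # A dependency path is the sequence of items between the two arguments,
    # e.g. ("<-NSUBJ", "traveled", "OBL->") for example 1 in Table 17.2.
    Path = Tuple[str, ...]

    def item_score(a: str, b: str, pos: Dict[str, str]) -> int:
        """2 for identical items, 1 for distinct words sharing a part-of-speech, else 0."""
        if a == b:
            return 2
        if pos.get(a) is not None and pos.get(a) == pos.get(b):
            return 1
        return 0

    def path_kernel(x1: Path, x2: Path, pos: Dict[str, str]) -> int:
        if len(x1) != len(x2):
            return 0                     # paths of different lengths are dissimilar
        k = 1
        for a, b in zip(x1, x2):
            k *= item_score(a, b, pos)   # product of per-position scores
        return k

    pos = {"traveled": "VERB", "travel": "VERB", "visit": "NOUN"}
    p1 = ("<-NSUBJ", "traveled", "OBL->")
    p2 = ("<-NSUBJ", "travel", "OBL->")
    print(path_kernel(p1, p2, pos))      # 2 * 1 * 2 = 4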
Neural relation extraction

Convolutional neural networks (§ 3.4) were an early neural architecture for relation extraction (Zeng et al., 2014; dos Santos et al., 2015). For the sentence (w_1, w_2, ..., w_M), obtain a matrix of word embeddings X, where x_m ∈ R^K is the embedding of w_m. Now, suppose the candidate arguments appear at positions a_1 and a_2; then for each word in the sentence, its position with respect to each argument is m − a_1 and m − a_2. (Following Zeng et al. (2014), this is a restricted version of the relation extraction task in which the arguments are single tokens.) To capture any information conveyed by these positions, the word embeddings are concatenated with vector encodings of the positional offsets, x^{(p)}_{m−a_1} and x^{(p)}_{m−a_2}. (For more on positional encodings, see § 18.3.2.) The complete base representation of the sentence is,

X(a_1, a_2) = \begin{bmatrix} x_1 & x_2 & \cdots & x_M \\ x^{(p)}_{1-a_1} & x^{(p)}_{2-a_1} & \cdots & x^{(p)}_{M-a_1} \\ x^{(p)}_{1-a_2} & x^{(p)}_{2-a_2} & \cdots & x^{(p)}_{M-a_2} \end{bmatrix},    [17.14]

where each column is a vertical concatenation of a word embedding, represented by the column vector x_m, and two positional encodings, specifying the position with respect to a_1 and a_2. The matrix X(a_1, a_2) is then taken as input to a convolutional layer (see § 3.4), and max-pooling is applied to obtain a vector. The final scoring function is then,

\Psi(r, i, j, X) = \theta_r \cdot \text{MaxPool}(\text{ConvNet}(X(i, j); \phi)),    [17.15]

where φ defines the parameters of the convolutional operator, and θ_r defines a set of weights for relation r. The model can be trained using a margin objective,

\hat{r} = \text{argmax}_r \; \Psi(r, i, j, X)    [17.16]

\ell = \left( 1 + \Psi(\hat{r}, i, j, X) - \Psi(r, i, j, X) \right)_{+}.    [17.17]
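A minimal PyTorch sketch of this architecture is shown below. The hyperparameters (embedding sizes, filter count, window width, offset clipping) are illustrative assumptions, and additional components described by Zeng et al. (2014), such as lexical features of the arguments, are omitted; a margin or cross-entropy loss would be applied to the returned relation scores during training.

    import torch
    import torch.nn as nn

    class ConvRelationScorer(nn.Module):
        """Score Psi(r, a1, a2, w) with word + positional embeddings,
        a 1-d convolution, and max-pooling over token positions."""
        def __init__(self, vocab_size, n_relations, word_dim=50, pos_dim=5,
                     n_filters=100, width=3, max_offset=60):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim)
            # offsets m - a are clipped and shifted into [0, 2*max_offset] for lookup
            self.pos_emb = nn.Embedding(2 * max_offset + 1, pos_dim)
            self.max_offset = max_offset
            self.conv = nn.Conv1d(word_dim + 2 * pos_dim, n_filters, width,
                                  padding=width // 2)
            self.out = nn.Linear(n_filters, n_relations)  # rows play the role of theta_r

        def forward(self, tokens, a1, a2):
            # tokens: LongTensor of shape (M,); a1, a2: argument positions
            M = tokens.size(0)
            positions = torch.arange(M)
            off1 = (positions - a1).clamp(-self.max_offset, self.max_offset) + self.max_offset
            off2 = (positions - a2).clamp(-self.max_offset, self.max_offset) + self.max_offset
            x = torch.cat([self.word_emb(tokens),
                           self.pos_emb(off1),
                           self.pos_emb(off2)], dim=-1)    # (M, word_dim + 2*pos_dim)
            h = torch.relu(self.conv(x.t().unsqueeze(0)))  # (1, n_filters, M)
            z = h.max(dim=2).values.squeeze(0)             # max-pooling over positions
            return self.out(z)                             # one score per relation r

    scorer = ConvRelationScorer(vocab_size=1000, n_relations=10)
    scores = scorer(torch.tensor([4, 17, 23, 5, 89]), a1=0, a2=4)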
Recurrent neural networks (§ 6.3) have also been applied to relation extraction, using a network such as a bidirectional LSTM to encode the words or dependency path between the two arguments. Xu et al. (2015) segment each dependency path into left and right subpaths: the path George Bush ←NSUBJ wants →XCOMP travel →OBL France is segmented into the subpaths, George Bush ←NSUBJ wants and wants →XCOMP travel →OBL France. In each path, a recurrent neural network is run from the argument to the root word (in this case, wants). The final representation is computed by max pooling (§ 3.4) across all the recurrent states along each path. This process can be applied across separate "channels", in which the inputs consist of embeddings for the words, parts-of-speech, dependency relations, and WordNet hypernyms (e.g., France-nation; see § 4.2).

To define the model formally, let s(m) define the successor of word m in either the left or right subpath (in a dependency path, each word can have a successor in at most one subpath). Let x^{(c)}_m indicate the embedding of word (or relation) m in channel c, and let \overleftarrow{h}^{(c)}_m and \overrightarrow{h}^{(c)}_m indicate the associated recurrent states in the left and right subpaths respectively. Then the complete model is specified as follows,

h^{(c)}_{s(m)} = \text{RNN}(x^{(c)}_{s(m)}, h^{(c)}_m)    [17.18]

z^{(c)} = \text{MaxPool}\left( \overleftarrow{h}^{(c)}_i, \overleftarrow{h}^{(c)}_{s(i)}, \ldots, \overleftarrow{h}^{(c)}_{\text{root}}, \; \overrightarrow{h}^{(c)}_j, \overrightarrow{h}^{(c)}_{s(j)}, \ldots, \overrightarrow{h}^{(c)}_{\text{root}} \right)    [17.19]

\Psi(r, i, j) = \theta \cdot \left[ z^{(\text{word})}; z^{(\text{POS})}; z^{(\text{dependency})}; z^{(\text{hypernym})} \right].    [17.20]

Note that z is computed by applying max-pooling to the matrix of horizontally concatenated vectors h, while Ψ is computed from the vector of vertically concatenated vectors z. Xu et al. (2015) pass the score Ψ through a softmax layer to obtain a probability p(r | i, j, w), and train the model by regularized cross-entropy. Miwa and Bansal (2016) show that a related model can solve the more challenging "end-to-end" relation extraction task, in which the model must simultaneously detect entities and then extract their relations.

17.2.3 Knowledge base population

In many applications, what matters is not what fraction of sentences are analyzed correctly, but how much accurate knowledge can be extracted. Knowledge base population (KBP) refers to the task of filling in Wikipedia-style infoboxes, as shown in Figure 17.1a. Knowledge base population can be decomposed into two subtasks: entity linking (described in § 17.1), and slot filling (Ji and Grishman, 2011). Slot filling has two key differences from the formulation of relation extraction presented above: the relations hold between entities rather than spans of text, and the performance is evaluated at the type level (on entity pairs), rather than at the token level (on individual sentences). From a practical standpoint, there are three other important differences between slot filling and per-sentence relation extraction.

• KBP tasks are often formulated from the perspective of identifying attributes of a few "query" entities. As a result, these systems often start with an information
17.2. RELATIONS 417 retrieval phase, in which relevant passages of text are obtained by search. • For many entity pairs, there will be multiple passages of text that provide evidence. Slot filling systems must aggregate this evidence to predict a single relation type (or set of relations). • Labeled data is usually available in the form of pairs of related entities, rather than annotated passages of text. Training from such type-level annotations is a challenge: two entities may be linked by several relations, or they may appear together in a passage of text that nonetheless does not describe their relation to each other. Information retrieval is beyond the scope of this text (see Manning et al., 2008). The re- mainder of this section describes approaches to information fusion and learning from type-level annotations. Information fusion In knowledge base population, there will often be multiple pieces of evidence for (and sometimes against) a single relation. For example, a search for the entity MAYNARD JACK- SON, JR. may return several passages that reference the entity ATLANTA:5 (17.8) a. Elected mayor of Atlanta in 1973, Maynard Jackson was the first African American to serve as mayor of a major southern city. b. Atlanta’s airport will be renamed to honor Maynard Jackson, the city’s first Black mayor. c. Born in Dallas, Texas in 1938, Maynard Holbrook Jackson, Jr. moved to Atlanta when he was 8. d. Maynard Jackson has gone from one of the worst high schools in Atlanta to one of the best. The first and second examples provide evidence for the relation MAYOR holding between the entities ATLANTA and MAYNARD JACKSON, JR.. The third example provides evidence for a different relation between these same entities, LIVED-IN. The fourth example poses an entity linking problem, referring to MAYNARD JACKSON HIGH SCHOOL. Knowledge base population requires aggregating this sort of textual evidence, and predicting the re- lations that are most likely to hold. One approach is to run a single-document relation extraction system (using the tech- niques described in § 17.2.2), and then aggregate the results (Li et al., 2011). Relations 5First three examples from: http://www.georgiaencyclopedia.org/articles/ government-politics/maynard-jackson-1938-2003; JET magazine, November 10, 2003; www.todayingeorgiahistory.org/content/maynard-jackson-elected Under contract with MIT Press, shared under CC-BY-NC-ND license.
that are detected with high confidence in multiple documents are more likely to be valid, motivating the heuristic,

\psi(r, e_1, e_2) = \sum_{i=1}^{N} \left( p(r(e_1, e_2) \mid \boldsymbol{w}^{(i)}) \right)^{\alpha},    [17.21]

where p(r(e_1, e_2) | w^(i)) is the probability of relation r between entities e_1 and e_2 conditioned on the text w^(i), and α ≫ 1 is a tunable hyperparameter. Using this heuristic, it is possible to rank all candidate relations, and trace out a precision-recall curve as more relations are extracted.6 Alternatively, features can be aggregated across multiple passages of text, feeding a single type-level relation extraction system (Wolfe et al., 2017).

6 The precision-recall curve is similar to the ROC curve shown in Figure 4.4, but it includes the precision TP/(TP+FP) rather than the false positive rate FP/(FP+TN).

Precision can be improved by introducing constraints across multiple relations. For example, if we are certain of the relation PARENT(e_1, e_2), then it cannot also be the case that PARENT(e_2, e_1). Integer linear programming makes it possible to incorporate such constraints into a global optimization (Li et al., 2011). Other pairs of relations have positive correlations, such as MAYOR(e_1, e_2) and LIVED-IN(e_1, e_2). Compatibility across relation types can be incorporated into probabilistic graphical models (e.g., Riedel et al., 2010).

Distant supervision

Relation extraction is "annotation hungry," because each relation requires its own labeled data. Rather than relying on annotations of individual documents, it would be preferable to use existing knowledge resources — such as the many facts that are already captured in knowledge bases like DBPedia. However, such annotations raise the inverse of the information fusion problem considered above: the existence of the relation MAYOR(MAYNARD JACKSON JR., ATLANTA) provides only distant supervision for the example texts in which this entity pair is mentioned.

One approach is to treat the entity pair as the instance, rather than the text itself (Mintz et al., 2009). Features are then aggregated across all sentences in which both entities are mentioned, and labels correspond to the relation (if any) between the entities in a knowledge base, such as FreeBase. Negative instances are constructed from entity pairs that are not related in the knowledge base. In some cases, two entities are related, but the knowledge base is missing the relation; however, because the number of possible entity pairs is huge, these missing relations are presumed to be relatively rare. This approach is shown in Figure 17.2.

In multiple instance learning, labels are assigned to sets of instances, of which only an unknown subset are actually relevant (Dietterich et al., 1997; Maron and Lozano-Pérez, 1998). This formalizes the framework of distant supervision: the relation REL(A, B) acts
17.2. RELATIONS 419 • Label : MAYOR(ATLANTA, MAYNARD JACKSON) – Elected mayor of Atlanta in 1973, Maynard Jackson ... – Atlanta’s airport will be renamed to honor Maynard Jackson, the city’s first Black mayor – Born in Dallas, Texas in 1938, Maynard Holbrook Jackson, Jr. moved to Atlanta when he was 8. • Label : MAYOR(NEW YORK, FIORELLO LA GUARDIA) – Fiorello La Guardia was Mayor of New York for three terms ... – Fiorello La Guardia, then serving on the New York City Board of Aldermen... • Label : BORN-IN(DALLAS, MAYNARD JACKSON) – Born in Dallas, Texas in 1938, Maynard Holbrook Jackson, Jr. moved to Atlanta when he was 8. – Maynard Jackson was raised in Dallas ... • Label : NIL(NEW YORK, MAYNARD JACKSON) – Jackson married Valerie Richardson, whom he had met in New York... – Jackson was a member of the Georgia and New York bars ... Figure 17.2: Four training instances for relation classification using distant supervi- sion Mintz et al. (2009). The first two instances are positive for the MAYOR relation, and the third instance is positive for the BORN-IN relation. The fourth instance is a negative ex- ample, constructed from a pair of entities (NEW YORK, MAYNARD JACKSON) that do not appear in any Freebase relation. Each instance’s features are computed by aggregating across all sentences in which the two entities are mentioned. as a label for the entire set of sentences mentioning entities A and B, even when only a subset of these sentences actually describes the relation. One approach to multi-instance learning is to introduce a binary latent variable for each sentence, indicating whether the sentence expresses the labeled relation (Riedel et al., 2010). A variety of inference tech- niques have been employed for this probabilistic model of relation extraction: Surdeanu et al. (2012) use expectation maximization, Riedel et al. (2010) use sampling, and Hoff- mann et al. (2011) use a custom graph-based algorithm. Expectation maximization and sampling are surveyed in chapter 5, and are covered in more detail by Murphy (2012); graph-based methods are surveyed by Mihalcea and Radev (2011). 17.2.4 Open information extraction In classical relation extraction, the set of relations is defined in advance, using a schema. The relation for any pair of entities can then be predicted using multi-class classification. In open information extraction (OpenIE), a relation can be any triple of text. The example Under contract with MIT Press, shared under CC-BY-NC-ND license.
420 CHAPTER 17. INFORMATION EXTRACTION Task Relation ontology Supervision PropBank semantic role labeling VerbNet sentence FrameNet semantic role labeling FrameNet sentence Relation extraction ACE, TAC, SemEval, etc sentence Slot filling ACE, TAC, SemEval, etc relation Open Information Extraction open seed relations or patterns Table 17.3: Various relation extraction tasks and their properties. VerbNet and FrameNet are described in chapter 13. ACE (Linguistic Data Consortium, 2005), TAC (McNamee and Dang, 2009), and SemEval (Hendrickx et al., 2009) refer to shared tasks, each of which involves an ontology of relation types. sentence (17.8a) instantiates several “relations” of this sort, e.g., • (mayor of, Maynard Jackson, Atlanta), • (elected, Maynard Jackson, mayor of Atlanta), • (elected in, Maynard Jackson, 1973). Extracting such tuples can be viewed as a lightweight version of semantic role labeling (chapter 13), with only two argument types: first slot and second slot. The task is gen- erally evaluated on the relation level, rather than on the level of sentences: precision is measured by the number of extracted relations that are accurate, and recall is measured by the number of true relations that were successfully extracted. OpenIE systems are trained from distant supervision or bootstrapping, rather than from labeled sentences. An early example is the TEXTRUNNER system (Banko et al., 2007), which identifies relations with a set of handcrafted syntactic rules. The examples that are acquired from the handcrafted rules are then used to train a classification model that uses part-of-speech patterns as features. Finally, the relations that are extracted by the classifier are aggre- gated, removing redundant relations and computing the number of times that each rela- tion is mentioned in the corpus. TEXTRUNNER was the first in a series of systems that performed increasingly accurate open relation extraction by incorporating more precise linguistic features (Etzioni et al., 2011), distant supervision from Wikipedia infoboxes (Wu and Weld, 2010), and better learning algorithms (Zhu et al., 2009). 17.3 Events Relations link pairs of entities, but many real-world situations involve more than two enti- ties. Consider again the example sentence (17.8a), which describes the event of an election, Jacob Eisenstein. Draft of November 13, 2018.
17.3. EVENTS 421 with four properties: the office (MAYOR), the district (ATLANTA), the date (1973), and the person elected (MAYNARD JACKSON, JR.). In event detection, a schema is provided for each event type (e.g., an election, a terrorist attack, or a chemical reaction), indicating all the possible properties of the event. The system is then required to fill in as many of these properties as possible (Doddington et al., 2004). Event detection systems generally involve a retrieval component (finding relevant documents and passages of text) and an extraction component (determining the proper- ties of the event based on the retrieved texts). Early approaches focused on finite-state pat- terns for identify event properties (Hobbs et al., 1997); such patterns can be automatically induced by searching for patterns that are especially likely to appear in documents that match the event query (Riloff, 1996). Contemporary approaches employ techniques that are similar to FrameNet semantic role labeling (§ 13.2), such as structured prediction over local and global features (Li et al., 2013) and bidirectional recurrent neural networks (Feng et al., 2016). These methods detect whether an event is described in a sentence, and if so, what are its properties. Event coreference Because multiple sentences may describe unique properties of a sin- gle event, event coreference is required to link event mentions across a single passage of text, or between passages (Humphreys et al., 1997). Bejan and Harabagiu (2014) de- fine event coreference as the task of identifying event mentions that share the same event participants (i.e., the slot-filling entities) and the same event properties (e.g., the time and location), within or across documents. Event coreference resolution can be performed us- ing supervised learning techniques in a similar way to entity coreference, as described in chapter 15: move left-to-right through the document, and use a classifier to decide whether to link each event reference to an existing cluster of coreferent events, or to cre- ate a new cluster (Ahn, 2006). Each clustering decision is based on the compatibility of features describing the participants and properties of the event. Due to the difficulty of annotating large amounts of data for entity coreference, unsupervised approaches are es- pecially desirable (Chen and Ji, 2009; Bejan and Harabagiu, 2014). Relations between events Just as entities are related to other entities, events may be related to other events: for example, the event of winning an election both precedes and causes the event of serving as mayor; moving to Atlanta precedes and enables the event of becoming mayor of Atlanta; moving from Dallas to Atlanta prevents the event of later be- coming mayor of Dallas. As these examples show, events may be related both temporally and causally. The TimeML annotation scheme specifies a set of six temporal relations between events (Pustejovsky et al., 2005), derived in part from interval algebra (Allen, 1984). The TimeBank corpus provides TimeML annotations for 186 documents (Puste- jovsky et al., 2003). Methods for detecting these temporal relations combine supervised Under contract with MIT Press, shared under CC-BY-NC-ND license.
422 CHAPTER 17. INFORMATION EXTRACTION Positive (+) Negative (-) Underspecified (u) Certain (CT) Fact: CT+ Counterfact: CT- Certain, but unknown: CTU Probable (PR) Probable: PR+ Not probable: PR- (NA) Possible (PS) Possible: PS+ Not possible: PS- (NA) Underspecified (U) (NA) (NA) Unknown or uncommitted: UU Table 17.4: Table of factuality values from the FactBank corpus (Saur´ı and Pustejovsky, 2009). The entry (NA) indicates that this combination is not annotated. machine learning with temporal constraints, such as transitivity (e.g. Mani et al., 2006; Chambers and Jurafsky, 2008). More recent annotation schemes and datasets combine temporal and causal relations (Mirza et al., 2014; Dunietz et al., 2017): for example, the CaTeRS dataset includes annotations of 320 five-sentence short stories (Mostafazadeh et al., 2016). Abstracting still further, pro- cesses are networks of causal relations between multiple events. A small dataset of bi- ological processes is annotated in the ProcessBank dataset (Berant et al., 2014), with the goal of supporting automatic question answering on scientific textbooks. 17.4 Hedges, denials, and hypotheticals The methods described thus far apply to propositions about the way things are in the real world. But natural language can also describe events and relations that are likely or unlikely, possible or impossible, desired or feared. The following examples hint at the scope of the problem (Prabhakaran et al., 2010): (17.9) a. GM will lay off workers. b. A spokesman for GM said GM will lay off workers. c. GM may lay off workers. d. The politician claimed that GM will lay off workers. e. Some wish GM would lay off workers. f. Will GM lay off workers? g. Many wonder whether GM will lay off workers. Accurate information extraction requires handling these extra-propositional aspects of meaning, which are sometimes summarized under the terms modality and negation.7 7The classification of negation as extra-propositional is controversial: Packard et al. (2014) argue that negation is a “core part of compositionally constructed logical-form representations.” Negation is an element of the semantic parsing tasks discussed in chapter 12 and chapter 13 — for example, negation markers are Jacob Eisenstein. Draft of November 13, 2018.
17.4. HEDGES, DENIALS, AND HYPOTHETICALS 423 Modality refers to expressions of the speaker’s attitude towards her own statements, in- cluding “degree of certainty, reliability, subjectivity, sources of information, and perspec- tive” (Morante and Sporleder, 2012). Various systematizations of modality have been proposed (e.g., Palmer, 2001), including categories such as future, interrogative, imper- ative, conditional, and subjective. Information extraction is particularly concerned with negation and certainty. For example, Saur´ı and Pustejovsky (2009) link negation with a modal calculus of certainty, likelihood, and possibility, creating the two-dimensional schema shown in Table 17.4. This is the basis for the FactBank corpus, with annotations of the factuality of all sentences in 208 documents of news text. A related concept is hedging, in which speakers limit their commitment to a proposi- tion (Lakoff, 1973): (17.10) a. These results suggest that expression of c-jun, jun B and jun D genes might be involved in terminal granulocyte differentiation... (Morante and Daelemans, 2009) b. A whale is technically a mammal (Lakoff, 1973) In the first example, the hedges suggest and might communicate uncertainty; in the second example, there is no uncertainty, but the hedge technically indicates that the evidence for the proposition will not fully meet the reader’s expectations. Hedging has been studied extensively in scientific texts (Medlock and Briscoe, 2007; Morante and Daelemans, 2009), where the goal of large-scale extraction of scientific facts is obstructed by hedges and spec- ulation. Still another related aspect of modality is evidentiality, in which speakers mark the source of their information. In many languages, it is obligatory to mark evidentiality through affixes or particles (Aikhenvald, 2004); while evidentiality is not grammaticalized in English, authors are expected to express this information in contexts such as journal- ism (Kovach and Rosenstiel, 2014) and Wikipedia.8 Methods for handling negation and modality generally include two phases: 1. detecting negated or uncertain events; 2. identifying scope of the negation or modal operator. A considerable body of work on negation has employed rule-based techniques such as regular expressions (Chapman et al., 2001) to detect negated events. Such techniques treated as adjuncts in PropBank semantic role labeling. However, many of the relation extraction methods mentioned in this chapter do not handle negation directly. A further consideration is that negation inter- acts closely with aspects of modality that are generally not considered in propositional semantics, such as certainty and subjectivity. 8https://en.wikipedia.org/wiki/Wikipedia:Verifiability Under contract with MIT Press, shared under CC-BY-NC-ND license.
424 CHAPTER 17. INFORMATION EXTRACTION match lexical cues (e.g., Norwood was not elected Mayor), while avoiding “double nega- tives” (e.g., surely all this is not without meaning). Supervised techniques involve classi- fiers over lexical and syntactic features (Uzuner et al., 2009) and sequence labeling (Prab- hakaran et al., 2010). The scope refers to the elements of the text whose propositional meaning is negated or modulated (Huddleston and Pullum, 2005), as elucidated in the following example from Morante and Sporleder (2012): (17.11) [ After his habit he said ] nothing, and after mine I asked no questions. After his habit he said nothing, and [ after mine I asked ] no [ questions ]. In this sentence, there are two negation cues (nothing and no). Each negates an event, in- dicated by the underlined verbs said and asked, and each occurs within a scope: after his habit he said and after mine I asked questions. Scope identification is typically formal- ized as sequence labeling problems, with each word token labeled as beginning, inside, or outside of a cue, focus, or scope span (see § 8.3). Conventional sequence labeling ap- proaches can then be applied, using surface features as well as syntax (Velldal et al., 2012) and semantic analysis (Packard et al., 2014). Labeled datasets include the BioScope corpus of biomedical texts (Vincze et al., 2008) and a shared task dataset of detective stories by Arthur Conan Doyle (Morante and Blanco, 2012). 17.5 Question answering and machine reading The victory of the Watson question-answering system against three top human players on the game show Jeopardy! was a landmark moment for natural language processing (Fer- rucci et al., 2010). Game show questions are usually answered by factoids: entity names and short phrases.9 The task of factoid question answering is therefore closely related to information extraction, with the additional problem of accurately parsing the question. 17.5.1 Formal semantics Semantic parsing is an effective method for question-answering in restricted domains such as questions about geography and airline reservations (Zettlemoyer and Collins, 2005), and has also been applied in “open-domain” settings such as question answering on Freebase (Berant et al., 2013) and biomedical research abstracts (Poon and Domingos, 2009). One approach is to convert the question into a lambda calculus expression that returns a boolean value: for example, the question who is the mayor of the capital of Georgia? 9The broader landscape of question answering includes “why” questions (Why did Ahab continue to pursue the white whale?), “how questions” (How did Queequeg die?), and requests for summaries (What was Ishmael’s attitude towards organized religion?). For more, see Hirschman and Gaizauskas (2001). Jacob Eisenstein. Draft of November 13, 2018.
17.5. QUESTION ANSWERING AND MACHINE READING 425 would be converted to, λx.∃y CAPITAL(GEORGIA, y) ∧MAYOR(y, x). [17.22] This lambda expression can then be used to query an existing knowledge base, returning “true” for all entities that satisfy it. 17.5.2 Machine reading Recent work has focused on answering questions about specific textual passages, similar to the reading comprehension examinations for young students (Hirschman et al., 1999). This task has come to be known as machine reading. Datasets The machine reading problem can be formulated in a number of different ways. The most important distinction is what form the answer should take. • Multiple-choice question answering, as in the MCTest dataset of stories (Richard- son et al., 2013) and the New York Regents Science Exams (Clark, 2015). In MCTest, the answer is deducible from the text alone, while in the science exams, the system must make inferences using an existing model of the underlying scientific phenom- ena. Here is an example from MCTest: (17.12) James the turtle was always getting into trouble. Sometimes he’d reach into the freezer and empty out all the food ... Q: What is the name of the trouble making turtle? (a) Fries (b) Pudding (c) James (d) Jane • Cloze-style “fill in the blank” questions, as in the CNN/Daily Mail comprehension task (Hermann et al., 2015), the Children’s Book Test (Hill et al., 2016), and the Who- did-What dataset (Onishi et al., 2016). In these tasks, the system must guess which word or entity completes a sentence, based on reading a passage of text. Here is an example from Who-did-What: (17.13) Q: Tottenham manager Juande Ramos has hinted he will allow to leave if the Bulgaria striker makes it clear he is unhappy. (Onishi et al., 2016) The query sentence may be selected either from the story itself, or from an external summary. In either case, datasets can be created automatically by processing large Under contract with MIT Press, shared under CC-BY-NC-ND license.
quantities of existing documents. An additional constraint is that the missing element from the cloze must appear in the main passage of text: for example, in Who-did-What, the candidates include all entities mentioned in the main passage. In the CNN/Daily Mail dataset, each entity name is replaced by a unique identifier, e.g., ENTITY37. This ensures that correct answers can only be obtained by accurately reading the text, and not from external knowledge about the entities.

• Extractive question answering, in which the answer is drawn from the original text. In WikiQA, answers are sentences (Yang et al., 2015). In the Stanford Question Answering Dataset (SQuAD), answers are words or short phrases (Rajpurkar et al., 2016):

(17.14) In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls under gravity.
Q: What causes precipitation to fall?
A: gravity

In both WikiQA and SQuAD, the original texts are Wikipedia articles, and the questions are generated by crowdworkers.

Methods

A baseline method is to search the text for sentences or short passages that overlap with both the query and the candidate answer (Richardson et al., 2013). In example (17.12), this baseline would select the correct answer, since James appears in a sentence that includes the query terms trouble and turtle.

This baseline can be implemented as a neural architecture, using an attention mechanism (see § 18.3.1), which scores the similarity of the query to each part of the source text (Chen et al., 2016). The first step is to encode the passage w^(p) and the query w^(q), using two bidirectional LSTMs (§ 7.6),

h^{(q)} = \text{BiLSTM}(w^{(q)}; \Theta^{(q)})    [17.23]

h^{(p)} = \text{BiLSTM}(w^{(p)}; \Theta^{(p)}).    [17.24]

The query is represented by vertically concatenating the final states of the left-to-right and right-to-left passes:

u = [\overrightarrow{h}^{(q)}_{M_q}; \; \overleftarrow{h}^{(q)}_{0}].    [17.25]
The attention vector is computed as a softmax over a vector of bilinear products, and the expected representation is computed by summing over attention values,

\tilde{\alpha}_m = (u^{(q)})^{\top} W_a h^{(p)}_m    [17.26]

\alpha = \text{SoftMax}(\tilde{\alpha})    [17.27]

o = \sum_{m=1}^{M} \alpha_m h^{(p)}_m.    [17.28]

Each candidate answer c is represented by a vector x_c. Assuming the candidate answers are spans from the original text, these vectors can be set equal to the corresponding elements of h^{(p)}. The score for each candidate answer is computed by the inner product,

\hat{c} = \text{argmax}_c \; o \cdot x_c.    [17.29]

This architecture can be trained end-to-end from a loss based on the log-likelihood of the correct answer. A number of related architectures have been proposed (e.g., Hermann et al., 2015; Kadlec et al., 2016; Dhingra et al., 2017; Cui et al., 2017), and these methods are surveyed by Wang et al. (2017).
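The attention and scoring steps above can be sketched compactly in PyTorch. The tensor dimensions and the randomly initialized inputs are assumptions made for illustration; in practice the passage states and the query vector would come from the bidirectional LSTM encoders of Equations 17.23-17.25.

    import torch

    def attentive_read(H_p: torch.Tensor, u: torch.Tensor, W_a: torch.Tensor):
        """Bilinear attention over passage states, as in Equations 17.26-17.28.

        H_p: (M, d_p) passage states h^(p)_m; u: (d_q,) query vector;
        W_a: (d_q, d_p) bilinear attention parameters."""
        scores = H_p @ (W_a.t() @ u)           # alpha~_m = u^T W_a h^(p)_m, shape (M,)
        alpha = torch.softmax(scores, dim=0)   # attention distribution over tokens
        o = alpha @ H_p                        # expected representation, shape (d_p,)
        return o, alpha

    # Toy example: a 6-token passage with 8-dimensional states and a 4-dim query
    M, d_p, d_q = 6, 8, 4
    H_p, u, W_a = torch.randn(M, d_p), torch.randn(d_q), torch.randn(d_q, d_p)
    o, alpha = attentive_read(H_p, u, W_a)

    # Score each token as a candidate answer by the inner product o . h^(p)_m
    answer = torch.argmax(H_p @ o)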
Additional resources

The field of information extraction is surveyed in course notes by Grishman (2012), and more recently in a short survey paper (Grishman, 2015). Shen et al. (2015) survey the task of entity linking, and Ji and Grishman (2011) survey work on knowledge base population. This chapter's discussion of non-propositional meaning was strongly influenced by Morante and Sporleder (2012), who introduced a special issue of the journal Computational Linguistics dedicated to recent work on modality and negation.

Exercises

1. Go to the Wikipedia page for your favorite movie. For each record in the info box (e.g., Screenplay by: Stanley Kubrick), report whether there is a sentence in the article containing both the field and value (e.g., The screenplay was written by Stanley Kubrick). If not, is there a sentence in the article containing just the value? (For records with more than one value, just use the first value.)

2. Building on your answer in the previous question, report the dependency path between the head words of the field and value for at least three records.

3. Consider the following heuristic for entity linking:

   • Among all entities that have the same type as the mention (e.g., LOC, PER), choose the one whose name has the lowest edit distance from the mention.
   • If more than one entity has the right type and the lowest edit distance from the mention, choose the most popular one.
   • If no candidate entity has the right type, choose NIL.

   Now suppose you have the following feature function:

   f(y, x) = [edit-dist(name(y), x), same-type(y, x), popularity(y), δ(y = NIL)]

   Design a set of ranking weights θ that match the heuristic. You may assume that edit distance and popularity are always in the range [0, 100], and that the NIL entity has values of zero for all features except δ(y = NIL).

4. Now consider another heuristic:

   • Among all candidate entities that have edit distance zero from the mention, and are the right type, choose the most popular one.
   • If no entity has edit distance zero from the mention, choose the one with the right type that is most popular, regardless of edit distance.
   • If no entity has the right type, choose NIL.

   Using the same features and assumptions from the previous problem, prove that there is no set of weights that could implement this heuristic. Then show that the heuristic can be implemented by adding a single feature. Your new feature should consider only the edit distance.

5. Download the Reuters corpus in NLTK, and iterate over the tokens in the corpus:

   import nltk
   nltk.download('reuters')
   from nltk.corpus import reuters
   for word in reuters.words():
       # your code here

   a) Apply the pattern ___, such as ___ to obtain candidates for the IS-A relation, e.g., IS-A(ROMANIA, COUNTRY). What are three pairs that this method identifies correctly? What are three different pairs that it gets wrong?

   b) Design a pattern for the PRESIDENT relation, e.g., PRESIDENT(PHILIPPINES, CORAZON AQUINO). In this case, you may want to augment your pattern matcher with the ability to match multiple token wildcards, perhaps using case information to detect proper names. Again, list three correct and three incorrect pairs.
   c) Preprocess the Reuters data by running a named entity recognizer, replacing tokens with named entity spans when applicable — e.g., your pattern can now match on the United States if the NER system tags it. Apply your PRESIDENT matcher to this preprocessed data. Does the accuracy improve? Compare 20 randomly-selected pairs from this pattern and the one you designed in the previous part.

6. Using the same NLTK Reuters corpus, apply distant supervision to build a training set for detecting the relation between nations and their capitals. Start with the following known relations: (JAPAN, TOKYO), (FRANCE, PARIS), (ITALY, ROME). How many positive and negative examples are you able to extract?

7. Represent the dependency path x^(i) as a sequence of words and dependency arcs of length M_i, ignoring the endpoints of the path. In example 1 of Table 17.2, the dependency path is,

   x^{(1)} = (←NSUBJ, traveled, →OBL)    [17.30]

   If x^{(i)}_m is a word, then let pos(x^{(i)}_m) be its part-of-speech, using the tagset defined in chapter 8. We can define the following kernel function over pairs of dependency paths (Bunescu and Mooney, 2005):

   \kappa(x^{(i)}, x^{(j)}) = \begin{cases} 0, & M_i \neq M_j \\ \prod_{m=1}^{M_i} c(x^{(i)}_m, x^{(j)}_m), & M_i = M_j \end{cases}

   c(x^{(i)}_m, x^{(j)}_m) = \begin{cases} 2, & x^{(i)}_m = x^{(j)}_m \\ 1, & x^{(i)}_m \neq x^{(j)}_m \text{ and } pos(x^{(i)}_m) = pos(x^{(j)}_m) \\ 0, & \text{otherwise.} \end{cases}

   Using this kernel function, compute the kernel similarities of example 1 from Table 17.2 with the other five examples.

8. Continuing from the previous problem, suppose that the instances have the following labels:

   y_2 = 1, \quad y_3 = -1, \quad y_4 = -1, \quad y_5 = 1, \quad y_6 = 1    [17.31]

   Equation 17.13 defines a kernel-based classification in terms of parameters α and b. Using the above labels for y_2, ..., y_6, identify the values of α and b under which ŷ_1 = 1. Remember the constraint that α_i ≥ 0 for all i.
9. Consider the neural QA system described in § 17.5.2, but restrict the set of candidate answers to words in the passage, and set each candidate answer embedding x equal to the vector h^{(p)}_m representing token m in the passage, so that m̂ = argmax_m o · h^{(p)}_m. Suppose the system selects answer m̂, but the correct answer is m*. Consider the gradient of the margin loss with respect to the attention:

   a) Prove that \frac{\partial \ell}{\partial \alpha_{\hat{m}}} \geq \frac{\partial \ell}{\partial \alpha_{m^*}}.

   b) Assuming that ||h_{\hat{m}}|| = ||h_{m^*}||, prove that \frac{\partial \ell}{\partial \alpha_{\hat{m}}} \geq 0 and \frac{\partial \ell}{\partial \alpha_{m^*}} \leq 0.

   Explain in words what this means about how the attention is expected to change after a gradient-based update.
Chapter 18 Machine translation Machine translation (MT) is one of the “holy grail” problems in artificial intelligence, with the potential to transform society by facilitating communication between people anywhere in the world. As a result, MT has received significant attention and funding since the early 1950s. However, it has proved remarkably challenging, and while there has been substantial progress towards usable MT systems — especially for high-resource language pairs like English-French — we are still far from translation systems that match the nuance and depth of human translations. 18.1 Machine translation as a task Machine translation can be formulated as an optimization problem: ˆw(t) = argmax w(t) Ψ(w(s), w(t)), [18.1] where w(s) is a sentence in a source language, w(t) is a sentence in the target language, and Ψ is a scoring function. As usual, this formalism requires two components: a decod- ing algorithm for computing ˆw(t), and a learning algorithm for estimating the parameters of the scoring function Ψ. Decoding is difficult for machine translation because of the huge space of possible translations. We have faced large label spaces before: for example, in sequence labeling, the set of possible label sequences is exponential in the length of the input. In these cases, it was possible to search the space quickly by introducing locality assumptions: for ex- ample, that each tag depends only on its predecessor, or that each production depends only on its parent. In machine translation, no such locality assumptions seem possible: human translators reword, reorder, and rearrange words; they replace single words with multi-word phrases, and vice versa. This flexibility means that in even relatively simple 431
432 CHAPTER 18. MACHINE TRANSLATION source target text syntax semantics interlingua Figure 18.1: The Vauquois Pyramid translation models, decoding is NP-hard (Knight, 1999). Approaches for dealing with this complexity are described in § 18.4. Estimating translation models is difficult as well. Labeled translation data usually comes in the form parallel sentences, e.g., w(s) =A Vinay le gusta las manzanas. w(t) =Vinay likes apples. A useful feature function would note the translation pairs (gusta, likes), (manzanas, apples), and even (Vinay, Vinay). But this word-to-word alignment is not given in the data. One solution is to treat this alignment as a latent variable; this is the approach taken by clas- sical statistical machine translation (SMT) systems, described in § 18.2. Another solution is to model the relationship between w(t) and w(s) through a more complex and expres- sive function; this is the approach taken by neural machine translation (NMT) systems, described in § 18.3. The Vauquois Pyramid is a theory of how translation should be done. At the lowest level, the translation system operates on individual words, but the horizontal distance at this level is large, because languages express ideas differently. If we can move up the triangle to syntactic structure, the distance for translation is reduced; we then need only produce target-language text from the syntactic representation, which can be as simple as reading off a tree. Further up the triangle lies semantics; translating between semantic representations should be easier still, but mapping between semantics and surface text is a difficult, unsolved problem. At the top of the triangle is interlingua, a semantic represen- tation that is so generic that it is identical across all human languages. Philosophers de- bate whether such a thing as interlingua is really possible (e.g., Derrida, 1985). While the first-order logic representations discussed in chapter 12 might be thought to be language independent, they are built on an inventory of predicates that are suspiciously similar to English words (Nirenburg and Wilks, 2001). Nonetheless, the idea of linking translation Jacob Eisenstein. Draft of November 13, 2018.
and semantic understanding may still be a promising path, if the resulting translations better preserve the meaning of the original text.

18.1.1 Evaluating translations

There are two main criteria for a translation, summarized in Table 18.1.

Translation                     Adequate?   Fluent?
To Vinay it like Python         yes         no
Vinay debugs memory leaks       no          yes
Vinay likes Python              yes         yes

Table 18.1: Adequacy and fluency for translations of the Spanish sentence A Vinay le gusta Python.

• Adequacy: The translation w^(t) should adequately reflect the linguistic content of w^(s). For example, if w^(s) = A Vinay le gusta Python, the reference translation is w^(t) = Vinay likes Python. However, the gloss, or word-for-word translation, w^(t) = To Vinay it like Python is also considered adequate because it contains all the relevant content. The output w^(t) = Vinay debugs memory leaks is not adequate.

• Fluency: The translation w^(t) should read like fluent text in the target language. By this criterion, the gloss w^(t) = To Vinay it like Python will score poorly, and w^(t) = Vinay debugs memory leaks will be preferred.

Automated evaluations of machine translations typically merge both of these criteria, by comparing the system translation with one or more reference translations, produced by professional human translators. The most popular quantitative metric is BLEU (bilingual evaluation understudy; Papineni et al., 2002), which is based on n-gram precision: what fraction of n-grams in the system translation appear in the reference? Specifically, for each n-gram length, the precision is defined as,

p_n = \frac{\text{number of } n\text{-grams appearing in both reference and hypothesis translations}}{\text{number of } n\text{-grams appearing in the hypothesis translation}}.    [18.2]

The n-gram precisions for three hypothesis translations are shown in Figure 18.2. The BLEU score is then based on the average, \exp\left( \frac{1}{N} \sum_{n=1}^{N} \log p_n \right). Two modifications of Equation 18.2 are necessary: (1) to avoid computing log 0, all precisions are smoothed to ensure that they are positive; (2) each n-gram in the reference can be used at most once, so that to to to to to to does not achieve p_1 = 1 against the reference to be or not to be.
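A minimal sketch of clipped n-gram precision and a BLEU-style score is shown below, including the brevity penalty discussed next. The smoothing constant and the whitespace tokenization are assumptions made for this example, and the resulting numbers are not intended to reproduce any published implementation, which handles smoothing and corpus-level aggregation more carefully.

    import math
    from collections import Counter

    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    def clipped_precision(hypothesis, reference, n):
        """p_n with each reference n-gram usable at most once (BLEU 'clipping')."""
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        overlap = sum(min(count, ref_counts[g]) for g, count in hyp_counts.items())
        return overlap / max(sum(hyp_counts.values()), 1)

    def bleu(hypothesis, reference, max_n=4, smooth=1e-2):
        hyp, ref = hypothesis.split(), reference.split()
        # geometric mean of smoothed n-gram precisions
        log_p = [math.log(max(clipped_precision(hyp, ref, n), smooth))
                 for n in range(1, max_n + 1)]
        score = math.exp(sum(log_p) / max_n)
        # brevity penalty for hypotheses shorter than the reference
        bp = 1.0 if len(hyp) >= len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
        return bp * score

    print(bleu("Vinay likes Python", "Vinay likes programming in Python"))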
Furthermore, precision-based metrics are biased in favor of short translations, which can achieve high scores by minimizing the denominator in [18.2]. To avoid this issue, a brevity penalty is applied to translations that are shorter than the reference. This penalty is indicated as "BP" in Figure 18.2.

Translation                                         p1    p2    p3    p4    BP    BLEU
Reference: Vinay likes programming in Python
Sys1: To Vinay it like to program Python            2/7   0     0     0     1     .21
Sys2: Vinay likes Python                            3/3   1/2   0     0     .51   .33
Sys3: Vinay likes programming in his pajamas        4/6   3/5   2/4   1/3   1     .76

Figure 18.2: A reference translation and three system outputs. For each output, p_n indicates the precision at each n-gram, and BP indicates the brevity penalty.

Automated metrics like BLEU have been validated by correlation with human judgments of translation quality. Nonetheless, it is not difficult to construct examples in which the BLEU score is high, yet the translation is disfluent or carries a completely different meaning from the original. To give just one example, consider the problem of translating pronouns. Because pronouns refer to specific entities, a single incorrect pronoun can obliterate the semantics of the original sentence. Existing state-of-the-art systems generally do not attempt the reasoning necessary to correctly resolve pronominal anaphora (Hardmeier, 2012). Despite the importance of pronouns for semantics, they have a marginal impact on BLEU, which may help to explain why existing systems do not make a greater effort to translate them correctly.

Fairness and bias The problem of pronoun translation intersects with issues of fairness and bias. In many languages, such as Turkish, the third person singular pronoun is gender neutral. Today's state-of-the-art systems produce the following Turkish-English translations (Caliskan et al., 2017):

(18.1) O bir doktor.
       He is a doctor.

(18.2) O bir hemşire.
       She is a nurse.

The same problem arises for other professions that have stereotypical genders, such as engineers, soldiers, and teachers, and for other languages that have gender-neutral pronouns. This bias was not directly programmed into the translation model; it arises from statistical tendencies in existing datasets. This highlights a general problem with data-driven approaches, which can perpetuate biases that negatively impact disadvantaged
18.1. MACHINE TRANSLATION AS A TASK 435 groups. Worse, machine learning can amplify biases in data (Bolukbasi et al., 2016): if a dataset has even a slight tendency towards men as doctors, the resulting translation model may produce translations in which doctors are always he, and nurses are always she. Other metrics A range of other automated metrics have been proposed for machine translation. One potential weakness of BLEU is that it only measures precision; METEOR is a weighted F -MEASURE, which is a combination of recall and precision (see § 4.4.1). Translation Error Rate (TER) computes the string edit distance (see § 9.1.4) between the reference and the hypothesis (Snover et al., 2006). For language pairs like English and Japanese, there are substantial differences in word order, and word order errors are not sufficiently captured by n-gram based metrics. The RIBES metric applies rank correla- tion to measure the similarity in word order between the system and reference transla- tions (Isozaki et al., 2010). 18.1.2 Data Data-driven approaches to machine translation rely primarily on parallel corpora, which are translations at the sentence level. Early work focused on government records, in which fine-grained official translations are often required. For example, the IBM translation sys- tems were based on the proceedings of the Canadian Parliament, called Hansards, which are recorded in English and French (Brown et al., 1990). The growth of the European Union led to the development of the EuroParl corpus, which spans 21 European lan- guages (Koehn, 2005). While these datasets helped to launch the field of machine transla- tion, they are restricted to narrow domains and a formal speaking style, limiting their ap- plicability to other types of text. As more resources are committed to machine translation, new translation datasets have been commissioned. This has broadened the scope of avail- able data to news,1 movie subtitles,2 social media (Ling et al., 2013), dialogues (Fordyce, 2007), TED talks (Paul et al., 2010), and scientific research articles (Nakazawa et al., 2016). Despite this growing set of resources, the main bottleneck in machine translation data is the need for parallel corpora that are aligned at the sentence level. Many languages have sizable parallel corpora with some high-resource language, but not with each other. The high-resource language can then be used as a “pivot” or “bridge” (Boitet, 1988; Utiyama and Isahara, 2007): for example, De Gispert and Marino (2006) use Spanish as a bridge for translation between Catalan and English. For most of the 6000 languages spoken today, the only source of translation data remains the Judeo-Christian Bible (Resnik et al., 1999). While relatively small, at less than a million tokens, the Bible has been translated into more than 2000 languages, far outpacing any other corpus. Some research has explored 1https://catalog.ldc.upenn.edu/LDC2010T10, http://www.statmt.org/wmt15/ translation-task.html 2http://opus.nlpl.eu/ Under contract with MIT Press, shared under CC-BY-NC-ND license.
436 CHAPTER 18. MACHINE TRANSLATION the possibility of automatically identifying parallel sentence pairs from unaligned parallel texts, such as web pages and Wikipedia articles (Kilgarriff and Grefenstette, 2003; Resnik and Smith, 2003; Adafre and De Rijke, 2006). Another approach is to create large parallel corpora through crowdsourcing (Zaidan and Callison-Burch, 2011). 18.2 Statistical machine translation The previous section introduced adequacy and fluency as the two main criteria for ma- chine translation. A natural modeling approach is to represent them with separate scores, Ψ(w(s), w(t)) = ΨA(w(s), w(t)) + ΨF (w(t)). [18.3] The fluency score ΨF need not even consider the source sentence; it only judges w(t) on whether it is fluent in the target language. This decomposition is advantageous because it makes it possible to estimate the two scoring functions on separate data. While the adequacy model must be estimated from aligned sentences — which are relatively expen- sive and rare — the fluency model can be estimated from monolingual text in the target language. Large monolingual corpora are now available in many languages, thanks to resources such as Wikipedia. An elegant justification of the decomposition in Equation 18.3 is provided by the noisy channel model, in which each scoring function is a log probability: ΨA(w(s), w(t)) ≜log pS|T (w(s) | w(t)) [18.4] ΨF (w(t)) ≜log pT (w(t)) [18.5] Ψ(w(s), w(t)) = log pS|T (w(s) | w(t)) + log pT (w(t)) = log pS,T (w(s), w(t)). [18.6] By setting the scoring functions equal to the logarithms of the prior and likelihood, their sum is equal to log pS,T , which is the logarithm of the joint probability of the source and target. The sentence ˆw(t) that maximizes this joint probability is also the maximizer of the conditional probability pT|S, making it the most likely target language sentence, condi- tioned on the source. The noisy channel model can be justified by a generative story. The target text is orig- inally generated from a probability model pT . It is then encoded in a “noisy channel” pS|T , which converts it to a string in the source language. In decoding, we apply Bayes’ rule to recover the string w(t) that is maximally likely under the conditional probability pT|S. Under this interpretation, the target probability pT is just a language model, and can be estimated using any of the techniques from chapter 6. The only remaining learning problem is to estimate the translation model pS|T . Jacob Eisenstein. Draft of November 13, 2018.
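The noisy channel decomposition can be made concrete with a short sketch of scoring and decoding with separately estimated adequacy and fluency models. The log-probability tables and candidate translations below are toy values invented purely for illustration; in practice the adequacy model pS|T would be estimated from parallel text and the fluency model pT from a monolingual language model (chapter 6).

```python
from math import log

# Toy log-probability tables, invented purely for illustration.
LOG_P_S_GIVEN_T = {("a vinay le gusta python", "vinay likes python"): log(0.3),
                   ("a vinay le gusta python", "to vinay it like python"): log(0.4)}
LOG_P_T = {"vinay likes python": log(1e-4),
           "to vinay it like python": log(1e-7)}

def score(source, target):
    """Noisy channel score: log p(source | target) + log p(target)."""
    adequacy = LOG_P_S_GIVEN_T.get((source, target), float("-inf"))
    fluency = LOG_P_T.get(target, float("-inf"))
    return adequacy + fluency

def decode(source, candidates):
    """Return the candidate translation with the highest joint score."""
    return max(candidates, key=lambda target: score(source, target))

print(decode("a vinay le gusta python",
             ["vinay likes python", "to vinay it like python"]))
```

In this toy example the disfluent candidate has the higher adequacy score, but the language model term tips the decision toward the fluent translation, which is exactly the division of labor that the decomposition is intended to achieve.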
18.2. STATISTICAL MACHINE TRANSLATION 437

A   Vinay   le   gusta   Python
    Vinay   likes        Python

Figure 18.3: An example word-to-word alignment

18.2.1 Statistical translation modeling

The simplest decomposition of the translation model is word-to-word: each word in the source should be aligned to a word in the translation. This approach presupposes an alignment A(w(s), w(t)), which contains a list of pairs of source and target tokens. For example, given w(s) = A Vinay le gusta Python and w(t) = Vinay likes Python, one possible word-to-word alignment is,

A(w(s), w(t)) = {(A, ∅), (Vinay, Vinay), (le, likes), (gusta, likes), (Python, Python)}.   [18.7]

This alignment is shown in Figure 18.3. Another, less promising, alignment is:

A(w(s), w(t)) = {(A, Vinay), (Vinay, likes), (le, Python), (gusta, ∅), (Python, ∅)}.   [18.8]

Each alignment contains exactly one tuple for each word in the source, which serves to explain how the source word could be translated from the target, as required by the translation probability pS|T. If no appropriate word in the target can be identified for a source word, it is aligned to ∅, as is the case for the Spanish function word a in the example, which glosses to the English word to. Words in the target can align with multiple words in the source, so that the target word likes can align to both le and gusta in the source. The joint probability of the alignment and the translation can be defined conveniently as,

p(w(s), A | w(t)) = ∏_{m=1}^{M(s)} p(w(s)_m, a_m | w(t)_{a_m}, m, M(s), M(t))   [18.9]

                  = ∏_{m=1}^{M(s)} p(a_m | m, M(s), M(t)) × p(w(s)_m | w(t)_{a_m}).   [18.10]

This probability model makes two key assumptions:

Under contract with MIT Press, shared under CC-BY-NC-ND license.
438 CHAPTER 18. MACHINE TRANSLATION • The alignment probability factors across tokens, p(A | w(s), w(t)) = M(s) Y m=1 p(am | m, M(s), M(t)). [18.11] This means that each alignment decision is independent of the others, and depends only on the index m, and the sentence lengths M(s) and M(t). • The translation probability also factors across tokens, p(w(s) | w(t), A) = M(s) Y m=1 p(w(s) m | w(t) am), [18.12] so that each word in w(s) depends only on its aligned word in w(t). This means that translation is word-to-word, ignoring context. The hope is that the target language model p(w(t)) will correct any disfluencies that arise from word-to-word translation. To translate with such a model, we could sum or max over all possible alignments, p(w(s), w(t)) = X A p(w(s), w(t), A) [18.13] =p(w(t)) X A p(A) × p(w(s) | w(t), A) [18.14] ≥p(w(t)) max A p(A) × p(w(s) | w(t), A). [18.15] The term p(A) defines the prior probability over alignments. A series of alignment models with increasingly relaxed independence assumptions was developed by researchers at IBM in the 1980s and 1990s, known as IBM Models 1-6 (Och and Ney, 2003). IBM Model 1 makes the strongest independence assumption: p(am | m, M(s), M(t)) = 1 M(t) . [18.16] In this model, every alignment is equally likely. This is almost surely wrong, but it re- sults in a convex learning objective, yielding a good initialization for the more complex alignment models (Brown et al., 1993; Koehn, 2009). 18.2.2 Estimation Let us define the parameter θu→v as the probability of translating target word u to source word v. If word-to-word alignments were annotated, these probabilities could be com- puted from relative frequencies, ˆθu→v = count(u, v) count(u) , [18.17] Jacob Eisenstein. Draft of November 13, 2018.
18.2. STATISTICAL MACHINE TRANSLATION 439 where count(u, v) is the count of instances in which word v was aligned to word u in the training set, and count(u) is the total count of the target word u. The smoothing techniques mentioned in chapter 6 can help to reduce the variance of these probability estimates. Conversely, if we had an accurate translation model, we could estimate the likelihood of each alignment decision, qm(am | w(s), w(t)) ∝p(am | m, M(s), M(t)) × p(w(s) m | w(t) am), [18.18] where qm(am | w(s), w(t)) is a measure of our confidence in aligning source word w(s) m to target word w(t) am. The relative frequencies could then be computed from the expected counts, ˆθu→v =Eq [count(u, v)] count(u) [18.19] Eq [count(u, v)] = X m qm(am | w(s), w(t)) × δ(w(s) m = v) × δ(w(t) am = u). [18.20] The expectation-maximization (EM) algorithm proceeds by iteratively updating qm and ˆΘ. The algorithm is described in general form in chapter 5. For statistical machine translation, the steps of the algorithm are: 1. E-step: Update beliefs about word alignment using Equation 18.18. 2. M-step: Update the translation model using Equations 18.19 and 18.20. As discussed in chapter 5, the expectation maximization algorithm is guaranteed to con- verge, but not to a global optimum. However, for IBM Model 1, it can be shown that EM optimizes a convex objective, and global optimality is guaranteed. For this reason, IBM Model 1 is often used as an initialization for more complex alignment models. For more detail, see Koehn (2009). 18.2.3 Phrase-based translation Real translations are not word-to-word substitutions. One reason is that many multiword expressions are not translated literally, as shown in this example from French: (18.3) Nous We allons will prendre take un a verre glass We’ll have a drink Under contract with MIT Press, shared under CC-BY-NC-ND license.
440 CHAPTER 18. MACHINE TRANSLATION Nous allons prendre une verre We’ll have a drink Figure 18.4: A phrase-based alignment between French and English, corresponding to example (18.3) The line we will take a glass is the word-for-word gloss of the French sentence; the transla- tion we’ll have a drink is shown on the third line. Such examples are difficult for word-to- word translation models, since they require translating prendre to have and verre to drink. These translations are only correct in the context of these specific phrases. Phrase-based translation generalizes on word-based models by building translation tables and alignments between multiword spans. (These “phrases” are not necessarily syntactic constituents like the noun phrases and verb phrases described in chapters 9 and 10.) The generalization from word-based translation is surprisingly straightforward: the translation tables can now condition on multi-word units, and can assign probabilities to multi-word units; alignments are mappings from spans to spans, ((i, j), (k, ℓ)), so that p(w(s) | w(t), A) = Y ((i,j),(k,ℓ))∈A pw(s)|w(t)({w(s) i+1, w(s) i+2, . . . , w(s) j } | {w(t) k+1, w(t) k+2, . . . , w(t) ℓ}). [18.21] The phrase alignment ((i, j), (k, ℓ)) indicates that the span w(s) i+1:j is the translation of the span w(t) k+1:ℓ. An example phrasal alignment is shown in Figure 18.4. Note that the align- ment set A is required to cover all of the tokens in the source, just as in word-based trans- lation. The probability model pw(s)|w(t) must now include translations for all phrase pairs, which can be learned from expectation-maximization just as in word-based statistical ma- chine translation. Jacob Eisenstein. Draft of November 13, 2018.
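The expectation-maximization procedure of § 18.2.2, which applies to word-based and phrase-based models alike, can be sketched in a few lines. This is a minimal implementation for IBM Model 1, assuming a uniform alignment prior, no null word, and no smoothing; the sentence pairs are toy data invented for illustration.

```python
from collections import defaultdict

def em_ibm1(pairs, iterations=5):
    """EM for IBM Model 1: estimate theta[(u, v)], the probability of
    translating target word u to source word v, from unaligned sentence
    pairs. The alignment prior is uniform, as in Equation [18.16]."""
    source_vocab = {v for src, _ in pairs for v in src}
    theta = defaultdict(lambda: 1.0 / len(source_vocab))  # uniform initialization

    for _ in range(iterations):
        counts = defaultdict(float)   # expected count(u, v)
        totals = defaultdict(float)   # expected count(u)
        # E-step: posterior probability q that source word v aligns to target word u.
        for src, tgt in pairs:
            for v in src:
                norm = sum(theta[(u, v)] for u in tgt)
                for u in tgt:
                    q = theta[(u, v)] / norm
                    counts[(u, v)] += q
                    totals[u] += q
        # M-step: relative frequencies of the expected counts, as in Equation [18.19].
        theta = defaultdict(float,
                            {(u, v): c / totals[u] for (u, v), c in counts.items()})
    return theta

# Toy parallel corpus (Spanish source, English target), invented for illustration.
pairs = [("el gato negro".split(), "the black cat".split()),
         ("el perro negro".split(), "the black dog".split())]
theta = em_ibm1(pairs)
print(round(theta[("cat", "gato")], 2))   # probability of translating cat -> gato
```

Each iteration alternates the E-step of Equation [18.18] with the M-step of Equations [18.19-18.20]; because the IBM Model 1 objective is convex, EM converges to a global optimum.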
18.2. STATISTICAL MACHINE TRANSLATION 441

18.2.4 *Syntax-based translation

The Vauquois Pyramid (Figure 18.1) suggests that translation might be easier if we take a higher-level view. One possibility is to incorporate the syntactic structure of the source, the target, or both. This is particularly promising for language pairs that have consistent syntactic differences. For example, English adjectives almost always precede the nouns that they modify, while in Romance languages such as French and Spanish, the adjective often follows the noun: thus, angry fish would translate to pez (fish) enojado (angry) in Spanish. In word-to-word translation, these reorderings cause the alignment model to be overly permissive. It is not the case that any pair of English words can be reordered when translating into Spanish; the reordering applies only to adjectives and nouns within a noun phrase. Similar issues arise when translating between verb-final languages such as Japanese (in which verbs usually follow the subject and object), verb-initial languages like Tagalog and classical Arabic, and verb-medial languages such as English.

An elegant solution is to link parsing and translation in a synchronous context-free grammar (SCFG; Chiang, 2007).3 An SCFG is a set of productions of the form X → (α, β, ∼), where X is a non-terminal, α and β are sequences of terminals or non-terminals, and ∼ is a one-to-one alignment of items in α with items in β. English-Spanish adjective-noun ordering can be handled by a set of synchronous productions, e.g.,

NP → (DET1 NN2 JJ3, DET1 JJ3 NN2),   [18.22]

with subscripts indicating the alignment between the Spanish (left) and English (right) parts of the right-hand side. Terminal productions yield translation pairs,

JJ → (enojado1, angry1).   [18.23]

A synchronous derivation begins with the start symbol S, and derives a pair of sequences of terminal symbols.

Given an SCFG in which each production yields at most two symbols in each language (Chomsky Normal Form; see § 9.2.1), a sentence can be parsed using only the CKY algorithm (chapter 10). The resulting derivation also includes productions in the other language, all the way down to the surface form. Therefore, SCFGs make translation very similar to parsing. In a weighted SCFG, the log probability log pS|T can be computed from the sum of the log-probabilities of the productions. However, combining SCFGs with a target language model is computationally expensive, necessitating approximate search algorithms (Huang and Chiang, 2007).

Synchronous context-free grammars are an example of tree-to-tree translation, because they model the syntactic structure of both the target and source language. In string-to-tree translation, string elements are translated into constituent tree fragments, which

3Earlier approaches to syntactic machine translation include syntax-driven transduction (Lewis II and Stearns, 1968) and stochastic inversion transduction grammars (Wu, 1997).

Under contract with MIT Press, shared under CC-BY-NC-ND license.
442 CHAPTER 18. MACHINE TRANSLATION are then assembled into a translation (Yamada and Knight, 2001; Galley et al., 2004); in tree-to-string translation, the source side is parsed, and then transformed into a string on the target side (Liu et al., 2006). A key question for syntax-based translation is the extent to which we phrasal constituents align across translations (Fox, 2002), because this gov- erns the extent to which we can rely on monolingual parsers and treebanks. For more on syntax-based machine translation, see the monograph by Williams et al. (2016). 18.3 Neural machine translation Neural network models for machine translation are based on the encoder-decoder archi- tecture (Cho et al., 2014). The encoder network converts the source language sentence into a vector or matrix representation; the decoder network then converts the encoding into a sentence in the target language. z =ENCODE(w(s)) [18.24] w(t) | w(s) ∼DECODE(z), [18.25] where the second line means that the function DECODE(z) defines the conditional proba- bility p(w(t) | w(s)). The decoder is typically a recurrent neural network, which generates the target lan- guage sentence one word at a time, while recurrently updating a hidden state. The en- coder and decoder networks are trained end-to-end from parallel sentences. If the output layer of the decoder is a logistic function, then the entire architecture can be trained to maximize the conditional log-likelihood, log p(w(t) | w(s)) = M(t) X m=1 p(w(t) m | w(t) 1:m−1, z) [18.26] p(w(t) m | w(t) 1:m−1, w(s)) ∝exp βw(t) m · h(t) m−1 [18.27] where the hidden state h(t) m−1 is a recurrent function of the previously generated text w(t) 1:m−1 and the encoding z. The second line is equivalent to writing, w(t) m | w(t) 1:m−1, w(s) ∼SoftMax β · h(t) m−1 , [18.28] where β ∈R(V (t)×K) is the matrix of output word vectors for the V (t) words in the target language vocabulary. The simplest encoder-decoder architecture is the sequence-to-sequence model (Sutskever et al., 2014). In this model, the encoder is set to the final hidden state of a long short-term Jacob Eisenstein. Draft of November 13, 2018.
18.3. NEURAL MACHINE TRANSLATION 443 h(s,D) m−1 h(s,D) m h(s,D) m+1 . . . . . . . . . h(s,2) m−1 h(s,2) m h(s,2) m+1 h(s,1) m−1 h(s,1) m h(s,1) m+1 x(s) m−1 x(s) m x(s) m+1 Figure 18.5: A deep bidirectional LSTM encoder memory (LSTM) (see § 6.3.3) on the source sentence: h(s) m =LSTM(x(s) m , h(s) m−1) [18.29] z ≜h(s) M(s), [18.30] where x(s) m is the embedding of source language word w(s) m . The encoding then provides the initial hidden state for the decoder LSTM: h(t) 0 =z [18.31] h(t) m =LSTM(x(t) m , h(t) m−1), [18.32] where x(t) m is the embedding of the target language word w(t) m . Sequence-to-sequence translation is nothing more than wiring together two LSTMs: one to read the source, and another to generate the target. To make the model work well, some additional tweaks are needed: • Most notably, the model works much better if the source sentence is reversed, read- ing from the end of the sentence back to the beginning. In this way, the words at the beginning of the source have the greatest impact on the encoding z, and therefore impact the words at the beginning of the target sentence. Later work on more ad- vanced encoding models, such as neural attention (see § 18.3.1), has eliminated the need for reversing the source sentence. • The encoder and decoder can be implemented as deep LSTMs, with multiple layers of hidden states. As shown in Figure 18.5, each hidden state h(s,i) m at layer i is treated Under contract with MIT Press, shared under CC-BY-NC-ND license.
444 CHAPTER 18. MACHINE TRANSLATION as the input to an LSTM at layer i + 1: h(s,1) m =LSTM(x(s) m , h(s) m−1) [18.33] h(s,i+1) m =LSTM(h(s,i) m , h(s,i+1) m−1 ), ∀i ≥1. [18.34] The original work on sequence-to-sequence translation used four layers; in 2016, Google’s commercial machine translation system used eight layers (Wu et al., 2016).4 • Significant improvements can be obtained by creating an ensemble of translation models, each trained from a different random initialization. For an ensemble of size N, the per-token decoding probability is set equal to, p(w(t) | z, w(t) 1:m−1) = 1 N N X i=1 pi(w(t) | z, w(t) 1:m−1), [18.35] where pi is the decoding probability for model i. Each translation model in the ensemble includes its own encoder and decoder networks. • The original sequence-to-sequence model used a fairly standard training setup: stochas- tic gradient descent with an exponentially decreasing learning rate after the first five epochs; mini-batches of 128 sentences, chosen to have similar length so that each sentence on the batch will take roughly the same amount of time to process; gradi- ent clipping (see § 3.3.4) to ensure that the norm of the gradient never exceeds some predefined value. 18.3.1 Neural attention The sequence-to-sequence model discussed in the previous section was a radical depar- ture from statistical machine translation, in which each word or phrase in the target lan- guage is conditioned on a single word or phrase in the source language. Both approaches have advantages. Statistical translation leverages the idea of compositionality — transla- tions of large units should be based on the translations of their component parts — and this seems crucial if we are to scale translation to longer units of text. But the translation of each word or phrase often depends on the larger context, and encoder-decoder models capture this context at the sentence level. Is it possible for translation to be both contextualized and compositional? One ap- proach is to augment neural translation with an attention mechanism. The idea of neural attention was described in § 17.5, but its application to translation bears further discus- sion. In general, attention can be thought of as using a query to select from a memory of key-value pairs. However, the query, keys, and values are all vectors, and the entire 4Google reports that this system took six days to train for English-French translation, using 96 NVIDIA K80 GPUs, which would have cost roughly half a million dollars at the time. Jacob Eisenstein. Draft of November 13, 2018.
18.3. NEURAL MACHINE TRANSLATION 445 Output activation α Query ψα Key Value Figure 18.6: A general view of neural attention. The dotted box indicates that each αm→n can be viewed as a gate on value n. operation is differentiable. For each key n in the memory, we compute a score ψα(m, n) with respect to the query m. That score is a function of the compatibility of the key and the query, and can be computed using a small feedforward neural network. The vector of scores is passed through an activation function, such as softmax. The output of this activation function is a vector of non-negative numbers [αm→1, αm→2, . . . , αm→N]⊤, with length N equal to the size of the memory. Each value in the memory vn is multiplied by the attention αm→n; the sum of these scaled values is the output. This process is shown in Figure 18.6. In the extreme case that αm→n = 1 and αm→n′ = 0 for all other n′, then the attention mechanism simply selects the value vn from the memory. Neural attention makes it possible to integrate alignment into the encoder-decoder ar- chitecture. Rather than encoding the entire source sentence into a fixed length vector z, it can be encoded into a matrix Z ∈RK×M(S), where K is the dimension of the hidden state, and M(S) is the number of tokens in the source input. Each column of Z represents the state of a recurrent neural network over the source sentence. These vectors are con- structed from a bidirectional LSTM (see § 7.6), which can be a deep network as shown in Figure 18.5. These columns are both the keys and the values in the attention mechanism. At each step m in decoding, the attentional state is computed by executing a query, which is equal to the state of the decoder, h(t) m . The resulting compatibility scores are, ψα(m, n) =vα · tanh(Θα[h(t) m ; h(s) n ]). [18.36] The function ψ is thus a two layer feedforward neural network, with weights vα on the output layer, and weights Θα on the input layer. To convert these scores into atten- tion weights, we apply an activation function, which can be vector-wise softmax or an element-wise sigmoid: Softmax attention αm→n = exp ψα(m, n) PM(s) n′=1 exp ψα(m, n′) [18.37] Under contract with MIT Press, shared under CC-BY-NC-ND license.
446 CHAPTER 18. MACHINE TRANSLATION Sigmoid attention αm→n = σ (ψα(m, n)) [18.38] The attention α is then used to compute a context vector cm by taking a weighted average over the columns of Z, cm = M(s) X n=1 αm→nzn, [18.39] where αm→n ∈[0, 1] is the amount of attention from word m of the target to word n of the source. The context vector can be incorporated into the decoder’s word output probability model, by adding another layer to the decoder (Luong et al., 2015): ˜h(t) m = tanh Θc[h(t) m ; cm] [18.40] p(w(t) m+1 | w(t) 1:m, w(s)) ∝exp βw(t) m+1 · ˜h(t) m . [18.41] Here the decoder state h(t) m is concatenated with the context vector, forming the input to compute a final output vector ˜h(t) m . The context vector can be incorporated into the decoder recurrence in a similar manner (Bahdanau et al., 2014). 18.3.2 *Neural machine translation without recurrence In the encoder-decoder model, attention’s “keys and values” are the hidden state repre- sentations in the encoder network, z, and the “queries” are state representations in the decoder network h(t). It is also possible to completely eliminate recurrence from neural translation, by applying self-attention (Lin et al., 2017; Kim et al., 2017) within the en- coder and decoder, as in the transformer architecture (Vaswani et al., 2017). For level i, the basic equations of the encoder side of the transformer are: z(i) m = M(s) X n=1 α(i) m→n(Θvh(i−1) n ) [18.42] h(i) m =Θ2 ReLU Θ1z(i) m + b1 + b2. [18.43] For each token m at level i, we compute self-attention over the entire source sentence. The keys, values, and queries are all projections of the vector h(i−1): for example, in Equa- tion 18.42, the value vn is the projection Θvh(i−1) n . The attention scores α(i) m→n are com- puted using a scaled form of softmax attention, αm→n ∝exp(ψα(m, n)/M), [18.44] Jacob Eisenstein. Draft of November 13, 2018.
18.3. NEURAL MACHINE TRANSLATION 447 z(i) α(i) m→ ψ(i) α (m, ·) h(i−1) m −1 m m + 1 k q v Figure 18.7: The transformer encoder’s computation of z(i) m from h(i−1). The key, value, and query are shown for token m −1. For example, ψ(i) α (m, m −1) is computed from the key Θkh(i−1) m−1 and the query Θqh(i−1) m , and the gate α(i) m→m−1 operates on the value Θvh(i−1) m−1 . The figure shows a minimal version of the architecture, with a single atten- tion head. With multiple heads, it is possible to attend to different properties of multiple words. where M is the length of the input. This encourages the attention to be more evenly dispersed across the input. Self-attention is applied across multiple “heads”, each using different projections of h(i−1) to form the keys, values, and queries. This architecture is shown in Figure 18.7. The output of the self-attentional layer is the representation z(i) m , which is then passed through a two-layer feed-forward network, yielding the input to the next layer, h(i). This self-attentional architecture can be applied in the decoder as well, but this requires that there is zero attention to future words: αm→n = 0 for all n > m. To ensure that information about word order in the source is integrated into the model, the encoder augments the base layer of the network with positional encodings of the indices of each word in the source. These encodings are vectors for each position m ∈ {1, 2, . . . , M}. The transformer sets these encodings equal to a set of sinusoidal functions of m, e2i−1(m) = sin(m/(10000 2i Ke )) [18.45] e2i(m) = cos(m/(10000 2i Ke )), ∀i ∈{1, 2, . . . , Ke/2} [18.46] where e2i(m) is the value at element 2i of the encoding for index m. As we progress through the encoding, the sinusoidal functions have progressively narrower bandwidths. This enables the model to learn to attend by relative positions of words. The positional encodings are concatenated with the word embeddings xm at the base layer of the model.5 5The transformer architecture relies on several additional tricks, including layer normalization (see Under contract with MIT Press, shared under CC-BY-NC-ND license.
448 CHAPTER 18. MACHINE TRANSLATION Source: The ecotax portico in Pont-de-buis was taken down on Thursday morning Reference: Le portique ´ecotaxe de Pont-de-buis a ´et´e d´emont´e jeudi matin System: Le unk de unk `a unk a ´et´e pris le jeudi matin Figure 18.8: Translation with unknown words. The system outputs unk to indicate words that are outside its vocabulary. Figure adapted from Luong et al. (2015). Convolutional neural networks (see § 3.4) have also been applied as encoders in neu- ral machine translation (Gehring et al., 2017). For each word w(s) m , a convolutional network computes a representation h(s) m from the embeddings of the word and its neighbors. This procedure is applied several times, creating a deep convolutional network. The recurrent decoder then computes a set of attention weights over these convolutional representa- tions, using the decoder’s hidden state h(t) as the queries. This attention vector is used to compute a weighted average over the outputs of another convolutional neural network of the source, yielding an averaged representation cm, which is then fed into the decoder. As with the transformer, speed is the main advantage over recurrent encoding models; another similarity is that word order information is approximated through the use of po- sitional encodings.6 18.3.3 Out-of-vocabulary words Thus far, we have treated translation as a problem at the level of words or phrases. For words that do not appear in the training data, all such models will struggle. There are two main reasons for the presence of out-of-vocabulary (OOV) words: • New proper nouns, such as family names or organizations, are constantly arising — particularly in the news domain. The same is true, to a lesser extent, for technical terminology. This issue is shown in Figure 18.8. • In many languages, words have complex internal structure, known as morphology. An example is German, which uses compounding to form nouns like Abwasserbe- handlungsanlage (sewage water treatment plant; example from Sennrich et al. (2016)). § 3.3.4), residual connections around the nonlinear activations (see § 3.2.2), and a non-monotonic learning rate schedule. 6A recent evaluation found that best performance was obtained by using a recurrent network for the decoder, and a transformer for the encoder (Chen et al., 2018). The transformer was also found to significantly outperform a convolutional neural network. Jacob Eisenstein. Draft of November 13, 2018.
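The morphology problem above motivates splitting words into subword units; byte-pair encoding, introduced in the remainder of this section, is a popular way to learn such units, and can be sketched as follows. This is a minimal implementation that starts from whole words split into characters, each with frequency one, and stops merging once no symbol pair appears more than once, matching the toy dictionary example discussed below; a real system would instead run a fixed number of merges over corpus-level counts.

```python
from collections import Counter

def byte_pair_encoding(word_freqs, num_merges=1000):
    """Learn BPE merges from a dictionary mapping words to frequencies."""
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs or max(pairs.values()) < 2:
            break   # no pair appears more than once
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Replace every occurrence of the best pair with the merged symbol.
        new_vocab = {}
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

words = ["fish", "fished", "want", "wanted", "bike", "biked"]
merges, segmented = byte_pair_encoding({w: 1 for w in words})
```

Running this on the dictionary {fish, fished, want, wanted, bike, biked} should reproduce the segmentation described below, with fish, want, bik, and ed emerging as subword units.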
18.4. DECODING 449 While compounds could in principle be addressed by better tokenization (see § 8.4), other morphological processes involve more complex transformations of subword units. Names and technical terms can be handled in a postprocessing step: after first identi- fying alignments between unknown words in the source and target, we can look up each aligned source word in a dictionary, and choose a replacement (Luong et al., 2015). If the word does not appear in the dictionary, it is likely to be a proper noun, and can be copied directly from the source to the target. This approach can also be integrated directly into the translation model, rather than applying it as a postprocessing step (Jean et al., 2015). Words with complex internal structure can be handled by translating subword units rather than entire words. A popular technique for identifying subword units is byte-pair encoding (BPE; Gage, 1994; Sennrich et al., 2016). The initial vocabulary is defined as the set of characters used in the text. The most common character bigram is then merged into a new symbol, the vocabulary is updated, and the merging operation is applied again. For example, given the dictionary {fish, fished, want, wanted, bike, biked}, we would first form the subword unit ed, since this character bigram appears in three of the six words. Next, there are several bigrams that each appear in a pair of words: fi, is, sh, wa, an, etc. These can be merged in any order. By iterating this process, we eventually reach the segmentation, {fish, fish+ed, want, want+ed, bik+e, bik+ed}. At this point, there are no bigrams that appear more than once. In real data, merging is performed until the number of subword units reaches some predefined threshold, such as 104. Each subword unit is treated as a token for translation, in both the encoder (source side) and decoder (target side). BPE can be applied jointly to the union of the source and target vocabularies, identifying subword units that appear in both languages. For lan- guages that have different scripts, such as English and Russian, transliteration between the scripts should be applied first.7 18.4 Decoding Given a trained translation model, the decoding task is: ˆw(t) = argmax w∈V∗Ψ(w, w(s)), [18.47] where w(t) is a sequence of tokens from the target vocabulary V. It is not possible to efficiently obtain exact solutions to the decoding problem, for even minimally effective 7Transliteration is crucial for converting names and other foreign words between languages that do not share a single script, such as English and Japanese. It is typically approached using the finite-state methods discussed in chapter 9 (Knight and Graehl, 1998). Under contract with MIT Press, shared under CC-BY-NC-ND license.
450 CHAPTER 18. MACHINE TRANSLATION models in either statistical or neural machine translation. Today’s state-of-the-art transla- tion systems use beam search (see § 11.3.1), which is an incremental decoding algorithm that maintains a small constant number of competitive hypotheses. Such greedy approxi- mations are reasonably effective in practice, and this may be in part because the decoding objective is only loosely correlated with measures of translation quality, so that exact op- timization of [18.47] may not greatly improve the resulting translations. Decoding in neural machine translation is simpler than in phrase-based statistical ma- chine translation.8 The scoring function Ψ is defined, Ψ(w(t), w(s)) = M(t) X m=1 ψ(w(t) m ; w(t) 1:m−1, z) [18.48] ψ(w(t); w(t) 1:m−1, z) =βw(t) m · h(t) m −log X w∈V exp βw · h(t) m , [18.49] where z is the encoding of the source sentence w(s), and h(t) m is a function of the encoding z and the decoding history w(t) 1:m−1. This formulation subsumes the attentional translation model, where z is a matrix encoding of the source. Now consider the incremental decoding algorithm, ˆw(t) m = argmax w∈V ψ(w; ˆw(t) 1:m−1, z), m = 1, 2, . . . [18.50] This algorithm selects the best target language word at position m, assuming that it has already generated the sequence ˆw(t) 1:m−1. (Termination can be handled by augmenting the vocabulary V with a special end-of-sequence token, ■.) The incremental algorithm is likely to produce a suboptimal solution to the optimization problem defined in Equa- tion 18.47, because selecting the highest-scoring word at position m can set the decoder on a “garden path,” in which there are no good choices at some later position n > m. We might hope for some dynamic programming solution, as in sequence labeling (§ 7.3). But the Viterbi algorithm and its relatives rely on a Markov decomposition of the objective function into a sum of local scores: for example, scores can consider locally adjacent tags (ym, ym−1), but not the entire tagging history y1:m. This decomposition is not applicable to recurrent neural networks, because the hidden state h(t) m is impacted by the entire his- tory w(t) 1:m; this sensitivity to long-range context is precisely what makes recurrent neural networks so effective.9 In fact, it can be shown that decoding from any recurrent neural network is NP-complete (Siegelmann and Sontag, 1995; Chen et al., 2018). 8For more on decoding in phrase-based statistical models, see Koehn (2009). 9Note that this problem does not impact RNN-based sequence labeling models (see § 7.6). This is because the tags produced by these models do not affect the recurrent state. Jacob Eisenstein. Draft of November 13, 2018.
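The incremental decoder of Equation [18.50], and the beam search strategy discussed next, can be sketched as follows. The decoder itself is abstracted as a function returning next-token log-probabilities given the target prefix; in a real system this function would run the recurrent decoder conditioned on the source encoding z. The beam size, maximum length, and end-of-sequence token are placeholder choices, and setting the beam size to one recovers greedy incremental decoding.

```python
import numpy as np

def beam_search(log_prob_fn, beam_size=4, max_len=20, end_token=0):
    """Left-to-right decoding with a beam of K hypotheses.

    `log_prob_fn(prefix)` returns a vector of log-probabilities over the next
    token, given the prefix generated so far (a list of token ids).
    """
    beam = [([], 0.0)]            # (prefix, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beam:
            log_probs = log_prob_fn(prefix)
            # Expand only the top-scoring children of each hypothesis.
            for w in np.argsort(log_probs)[-beam_size:]:
                candidates.append((prefix + [int(w)], score + log_probs[int(w)]))
        # Keep the K best candidates; set completed hypotheses aside.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beam = []
        for prefix, score in candidates[:beam_size]:
            (finished if prefix[-1] == end_token else beam).append((prefix, score))
        if not beam:
            break
    return max(finished + beam, key=lambda c: c[1])
```

This sketch only illustrates the search strategy; a real decoder would also normalize hypothesis scores by length and batch the calls to the underlying network.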
18.5. TRAINING TOWARDS THE EVALUATION METRIC 451

Beam search Beam search is a general technique for avoiding search errors when exhaustive search is impossible; it was first discussed in § 11.3.1. Beam search can be seen as a variant of the incremental decoding algorithm sketched in Equation 18.50, but at each step m, a set of K different hypotheses are kept on the beam. For each hypothesis k ∈ {1, 2, . . . , K}, we compute both the current score ∑_{m=1}^{M(t)} ψ(w(t)_{k,m}; w(t)_{k,1:m−1}, z) and the current hidden state h(t)_k. At each step in the beam search, the K top-scoring children of each hypothesis currently on the beam are “expanded”, and the beam is updated. For a detailed description of beam search for RNN decoding, see Graves (2012).

Learning and search Conventionally, the learning algorithm is trained to predict the right token in the translation, conditioned on the translation history being correct. But if decoding must be approximate, then we might do better by modifying the learning algorithm to be robust to errors in the translation history. Scheduled sampling does this by training on histories that sometimes come from the ground truth, and sometimes come from the model’s own output (Bengio et al., 2015).10 As training proceeds, the training wheels come off: we increase the fraction of tokens that come from the model rather than the ground truth. Another approach is to train on an objective that relates directly to beam search performance (Wiseman et al., 2016). Reinforcement learning has also been applied to decoding of RNN-based translation models, making it possible to directly optimize translation metrics such as BLEU (Ranzato et al., 2016).

18.5 Training towards the evaluation metric

In likelihood-based training, the objective is to maximize the probability of a parallel corpus. However, translations are not evaluated in terms of likelihood: metrics like BLEU consider only the correctness of a single output translation, and not the range of probabilities that the model assigns. It might therefore be better to train translation models to achieve the highest BLEU score possible, to the extent that we believe BLEU measures translation quality. Unfortunately, BLEU and related metrics are not friendly for optimization: they are discontinuous, non-differentiable functions of the parameters of the translation model.

Consider an error function ∆(ŵ(t), w(t)), which measures the discrepancy between the system translation ŵ(t) and the reference translation w(t); this function could be based on BLEU or any other metric on translation quality. One possible criterion would be to select

10Scheduled sampling builds on earlier work on learning to search (Daumé III et al., 2009; Ross et al., 2011), which are also described in § 15.2.4.

Under contract with MIT Press, shared under CC-BY-NC-ND license.
452 CHAPTER 18. MACHINE TRANSLATION the parameters θ that minimize the error of the system’s preferred translation, ˆw(t) = argmax w(t) Ψ(w(t), w(s); θ) [18.51] ˆθ = argmin θ ∆( ˆw(t), w(s)) [18.52] However, identifying the top-scoring translation ˆw(t) is usually intractable, as described in the previous section. In minimum error-rate training (MERT), ˆw(t) is selected from a set of candidate translations Y(w(s)); this is typically a strict subset of all possible transla- tions, so that it is only possible to optimize an approximation to the true error rate (Och and Ney, 2003). A further issue is that the objective function in Equation 18.52 is discontinuous and non-differentiable, due to the argmax over translations: an infinitesimal change in the parameters θ could cause another translation to be selected, with a completely different error. To address this issue, we can instead minimize the risk, which is defined as the expected error rate, R(θ) =E ˆ w(t)|w(s);θ[∆( ˆw(t), w(t))] [18.53] = X ˆ w(t)∈Y(w(s)) p( ˆw(t) | w(s)) × ∆( ˆw(t), w(t)). [18.54] Minimum risk training minimizes the sum of R(θ) across all instances in the training set. The risk can be generalized by exponentiating the translation probabilities, ˜p(w(t); θ, α) ∝ p(w(t) | w(s); θ) α [18.55] ˜R(θ) = X ˆ w(t)∈Y(w(s)) ˜p( ˆw(t) | w(s); α, θ) × ∆( ˆw(t), w(t)) [18.56] where Y(w(s)) is now the set of all possible translations for w(s). Exponentiating the prob- abilities in this way is known as annealing (Smith and Eisner, 2006). When α = 1, then ˜R(θ) = R(θ); when α = ∞, then ˜R(θ) is equivalent to the sum of the errors of the maxi- mum probability translations for each sentence in the dataset. Clearly the set of candidate translations Y(w(s)) is too large to explicitly sum over. Because the error function ∆generally does not decompose into smaller parts, there is no efficient dynamic programming solution to sum over this set. We can approximate the sum P ˆ w(t)∈Y(w(s)) with a sum over a finite number of samples, {w(t) 1 , w(t) 2 , . . . , w(t) K }. If these samples were drawn uniformly at random, then the (annealed) risk would be Jacob Eisenstein. Draft of November 13, 2018.
18.5. TRAINING TOWARDS THE EVALUATION METRIC 453 approximated as (Shen et al., 2016), ˜R(θ) ≈1 Z K X k=1 ˜p(w(t) k | w(s); θ, α) × ∆(w(t) k , w(t)) [18.57] Z = K X k=1 ˜p(w(t) k | w(s); θ, α). [18.58] Shen et al. (2016) report that performance plateaus at K = 100 for minimum risk training of neural machine translation. Uniform sampling over the set of all possible translations is undesirable, because most translations have very low probability. A solution from Monte Carlo estimation is impor- tance sampling, in which we draw samples from a proposal distribution q(w(s)). This distribution can be set equal to the current translation model p(w(t) | w(s); θ). Each sam- ple is then weighted by an importance score, ωk = ˜p(w(t) k |w(s)) q(w(t) k ;w(s)). The effect of this weighting is to correct for any mismatch between the proposal distribution q and the true distribu- tion ˜p. The risk can then be approximated as, w(t) k ∼q(w(s)) [18.59] ωk = ˜p(w(t) k | w(s)) q(w(t) k ; w(s)) [18.60] ˜R(θ) ≈ 1 PK k=1 ωk K X k=1 ωk × ∆(w(t) k , w(t)). [18.61] Importance sampling will generally give a more accurate approximation than uniform sampling. The only formal requirement is that the proposal assigns non-zero probability to every w(t) ∈Y(w(s)). For more on importance sampling and related methods, see Robert and Casella (2013). Additional resources A complete textbook on machine translation is available from Koehn (2009). While this book precedes recent work on neural translation, a more recent draft chapter on neural translation models is also available (Koehn, 2017). Neubig (2017) provides a compre- hensive tutorial on neural machine translation, starting from first principles. The course notes from Cho (2015) are also useful. Several neural machine translation libraries are available: LAMTRAM is an implementation of neural machine translation in DYNET (Neu- big et al., 2017); OPENNMT (Klein et al., 2017) and FAIRSEQ are available in PYTORCH; Under contract with MIT Press, shared under CC-BY-NC-ND license.
454 CHAPTER 18. MACHINE TRANSLATION TENSOR2TENSOR is an implementation of several of the Google translation models in TEN- SORFLOW (Abadi et al., 2016). Literary translation is especially challenging, even for expert human translators. Mes- sud (2014) describes some of these issues in her review of an English translation of L’´etranger, the 1942 French novel by Albert Camus.11 She compares the new translation by Sandra Smith against earlier translations by Stuart Gilbert and Matthew Ward, focusing on the difficulties presented by a single word in the first sentence: Then, too, Smith has reconsidered the book’s famous opening. Camus’s original is deceptively simple: “Aujourd’hui, maman est morte.” Gilbert influ- enced generations by offering us “Mother died today”—inscribing in Meur- sault [the narrator] from the outset a formality that could be construed as heartlessness. But maman, after all, is intimate and affectionate, a child’s name for his mother. Matthew Ward concluded that it was essentially untranslatable (“mom” or “mummy” being not quite apt), and left it in the original French: “Maman died today.” There is a clear logic in this choice; but as Smith has explained, in an interview in The Guardian, maman “didn’t really tell the reader anything about the connotation.” She, instead, has translated the sentence as “My mother died today.” I chose “My mother” because I thought about how someone would tell another person that his mother had died. Meursault is speaking to the reader directly. “My mother died today” seemed to me the way it would work, and also implied the closeness of “maman” you get in the French. Elsewhere in the book, she has translated maman as “mama” — again, striving to come as close as possible to an actual, colloquial word that will carry the same connotations as maman does in French. The passage is a reminder that while the quality of machine translation has improved dramatically in recent years, expert human translations draw on considerations that are beyond the ken of any contemporary computational approach. Exercises 1. Using Google translate or another online service, translate the following example into two different languages of your choice: 11The book review is currently available online at http://www.nybooks.com/articles/2014/06/ 05/camus-new-letranger/. Jacob Eisenstein. Draft of November 13, 2018.
18.5. TRAINING TOWARDS THE EVALUATION METRIC 455 (18.4) It is not down on any map; true places never are. Then translate each result back into English. Which is closer to the original? Can you explain the differences? 2. Compute the unsmoothed n-gram precisions p1 . . . p4 for the two back-translations in the previous problem, using the original source as the reference. Your n-grams should include punctuation, and you should segment conjunctions like it’s into two tokens. 3. You are given the following dataset of translations from “simple” to “difficult” En- glish: (18.5) a. Kids Children like adore cats. felines. b. Cats Felines hats. fedoras. Estimate a word-to-word statistical translation model from simple English (source) to difficult English (target), using the expectation-maximization as described in § 18.2.2. Compute two iterations of the algorithm by hand, starting from a uniform transla- tion model, and using the simple alignment model p(am | m, M(s), M(t)) = 1 M(t) . Hint: in the final M-step, you will want to switch from fractions to decimals. 4. Building on the previous problem, what will be the converged translation proba- bility table? Can you state a general condition about the data, under which this translation model will fail in the way that it fails here? 5. Propose a simple alignment model that would make it possible to recover the correct translation probabilities from the toy dataset in the previous two problems. 6. Let ℓ(t) m+1 represent the loss at word m+1 of the target, and let h(s) n represent the hid- den state at word n of the source. Write the expression for the derivative ∂ℓ(t) m+1 ∂h(s) n in the sequence-to-sequence translation model expressed in Equations [18.29-18.32]. You may assume that both the encoder and decoder are one-layer LSTMs. In general, how many terms are on the shortest backpropagation path from ℓ(t) m+1 to h(s) n ? 7. Now consider the neural attentional model from § 18.3.1, with sigmoid attention. The derivative ∂ℓ(t) m+1 ∂zn is the sum of many paths through the computation graph; identify the shortest such path. You may assume that the initial state of the decoder recurrence h(t) 0 is not tied to the final state of the encoder recurrence h(s) M(s). Under contract with MIT Press, shared under CC-BY-NC-ND license.
456 CHAPTER 18. MACHINE TRANSLATION 8. Apply byte-pair encoding for the vocabulary it, unit, unite, until no bigram appears more than once. 9. This problem relates to the complexity of machine translation. Suppose you have an oracle that returns the list of words to include in the translation, so that your only task is to order the words. Furthermore, suppose that the scoring function over orderings is a sum over bigrams, PM m=1 ψ(w(t) m , w(t) m−1). Show that the problem of finding the optimal translation is NP-complete, by reduction from a well-known problem. 10. Hand-design an attentional recurrent translation model that simply copies the input from the source to the target. You may assume an arbitrarily large hidden state, and you may assume that there is a finite maximum input length M. Specify all the weights such that the maximum probability translation of any source is the source itself. Hint: it is simplest to use the Elman recurrence hm = f(Θhm−1 + xm) rather than an LSTM. 11. Give a synchronized derivation (§ 18.2.4) for the Spanish-English translation, (18.6) El The pez fish enojado angry atacado. attacked. The angry fish attacked. As above, the second line shows a word-for-word gloss, and the third line shows the desired translation. Use the synchronized production rule in [18.22], and design the other production rules necessary to derive this sentence pair. You may derive (atacado, attacked) directly from VP. Jacob Eisenstein. Draft of November 13, 2018.
Chapter 19

Text generation

In many of the most interesting problems in natural language processing, language is the output. The previous chapter described the specific case of machine translation, but there are many other applications, from summarization of research articles, to automated journalism, to dialogue systems. This chapter emphasizes three main scenarios: data-to-text, in which text is generated to explain or describe a structured record or unstructured perceptual input; text-to-text, which typically involves fusing information from multiple linguistic sources into a single coherent summary; and dialogue, in which text is generated as part of an interactive conversation with one or more human participants.

19.1 Data-to-text generation

In data-to-text generation, the input ranges from structured records, such as the description of a weather forecast (as shown in Figure 19.1), to unstructured perceptual data, such as a raw image or video; the output may be a single sentence, such as an image caption, or a multi-paragraph argument. Despite this diversity of conditions, all data-to-text systems share some of the same challenges (Reiter and Dale, 2000):

• determining what parts of the data to describe;
• planning a presentation of this information;
• lexicalizing the data into words and phrases;
• organizing words and phrases into well-formed sentences and paragraphs.

The earlier stages of this process are sometimes called content selection and text planning; the later stages are often called surface realization.

Early systems for data-to-text generation were modular, with separate software components for each task. Artificial intelligence planning algorithms can be applied to both

457
458 CHAPTER 19. TEXT GENERATION Temperature time min mean max 06:00-21:00 9 15 21 Cloud sky cover time percent (%) 06:00-09:00 25-50 09:00-12:00 50-75 Wind speed time min mean max 06:00-21:00 15 20 30 Wind direction time mode 06:00-21:00 S Cloudy, with temperatures between 10 and 20 degrees. South wind around 20 mph. Figure 19.1: An example input-output pair for the task of generating text descriptions of weather forecasts (adapted from Konstas and Lapata, 2013). the high-level information structure and the organization of individual sentences, ensur- ing that communicative goals are met (McKeown, 1992; Moore and Paris, 1993). Surface realization can be performed by grammars or templates, which link specific types of data to candidate words and phrases. A simple example template is offered by Wiseman et al. (2017), for generating descriptions of basketball games: (19.1) The <team1> (<wins1>-losses1) defeated the <team2> (<wins2>-<losses2>), <pts1>-<pts2>. The New York Knicks (45-5) defeated the Boston Celtics (11-38), 115-79. For more complex cases, it may be necessary to apply morphological inflections such as pluralization and tense marking — even in the simple example above, languages such as Russian would require case marking suffixes for the team names. Such inflections can be applied as a postprocessing step. Another difficult challenge for surface realization is the generation of varied referring expressions (e.g., The Knicks, New York, they), which is critical to avoid repetition. As discussed in § 16.2.1, the form of referring expressions is constrained by the discourse and information structure. An example at the intersection of rule-based and statistical techniques is the NITRO- GEN system (Langkilde and Knight, 1998). The input to NITROGEN is an abstract meaning representation (AMR; see § 13.3) of semantic content to be expressed in a single sentence. In data-to-text scenarios, the abstract meaning representation is the output of a higher- level text planning stage. A set of rules then converts the abstract meaning representation into various sentence plans, which may differ in both the high-level structure (e.g., active versus passive voice) as well as the low-level details (e.g., word and phrase choice). Some examples are shown in Figure 19.2. To control the combinatorial explosion in the number of possible realizations for any given meaning, the sentence plans are unified into a single finite-state acceptor, in which word tokens are represented by arcs (see § 9.1.1). A bigram Jacob Eisenstein. Draft of November 13, 2018.
19.1. DATA-TO-TEXT GENERATION 459 (a / admire-01 :ARG0 (v / visitor :ARG1-of (c / arrive-01 :ARG4 (j / Japan))) :ARG1 (m / "Mount Fuji")) • Visitors who came to Japan admire Mount Fuji. • Visitors who came in Japan admire Mount Fuji. • Mount Fuji is admired by the visitor who came in Japan. Figure 19.2: Abstract meaning representation and candidate surface realizations from the NITROGEN system. Example adapted from Langkilde and Knight (1998). language model is then used to compute weights on the arcs, so that the shortest path is also the surface realization with the highest bigram language model probability. More recent systems are unified models that are trained end-to-end using backpropa- gation. Data-to-text generation shares many properties with machine translation, includ- ing a problem of alignment: labeled examples provide the data and the text, but they do not specify which parts of the text correspond to which parts of the data. For example, to learn from Figure 19.1, the system must align the word cloudy to records in CLOUD SKY COVER, the phrases 10 and 20 degrees to the MIN and MAX fields in TEMPERATURE, and so on. As in machine translation, both latent variables and neural attention have been proposed as solutions. 19.1.1 Latent data-to-text alignment Given a dataset of texts and associated records {(w(i), y(i))}N i=1, our goal is to learn a model Ψ, so that ˆw = argmax w∈V∗Ψ(w, y; θ), [19.1] where V∗is the set of strings over a discrete vocabulary, and θ is a vector of parameters. The relationship between w and y is complex: the data y may contain dozens of records, and w may extend to several sentences. To facilitate learning and inference, it would be helpful to decompose the scoring function Ψ into subcomponents. This would be possi- ble if given an alignment, specifying which element of y is expressed in each part of w. Specifically, let zm indicates the record aligned to word m. For example, in Figure 19.1, z1 might specify that the word cloudy is aligned to the record cloud-sky-cover:percent. The score for this alignment would then be given by the weight on features such as (cloudy, cloud-sky-cover:percent). [19.2] In general, given an observed set of alignments, the score for a generation can be Under contract with MIT Press, shared under CC-BY-NC-ND license.
460 CHAPTER 19. TEXT GENERATION written as sum of local scores (Angeli et al., 2010): Ψ(w, y; θ) = M X m=1 ψw,y(wm, yzm) + ψw(wm, wm−1) + ψz(zm, zm−1), [19.3] where ψw can represent a bigram language model, and ψz can be tuned to reward coher- ence, such as the use of related records in nearby words. 1 The parameters of this model could be learned from labeled data {(w(i), y(i), z(i))}N i=1. However, while several datasets include structured records and natural language text (Barzilay and McKeown, 2005; Chen and Mooney, 2008; Liang and Klein, 2009), the alignments between text and records are usually not available.2 One solution is to model the problem probabilistically, treating the alignment as a latent variable (Liang et al., 2009; Konstas and Lapata, 2013). The model can then be estimated using expectation maximization or sampling (see chapter 5). 19.1.2 Neural data-to-text generation The encoder-decoder model and neural attention were introduced in § 18.3 as methods for neural machine translation. They can also be applied to data-to-text generation, with the data acting as the source language (Mei et al., 2016). In neural machine translation, the attention mechanism linked words in the source to words in the target; in data-to- text generation, the attention mechanism can link each part of the generated text back to a record in the data. The biggest departure from translation is in the encoder, which depends on the form of the data. Data encoders In some types of structured records, all values are drawn from discrete sets. For example, the birthplace of an individual is drawn from a discrete set of possible locations; the diag- nosis and treatment of a patient are drawn from an exhaustive list of clinical codes (John- son et al., 2016). In such cases, vector embeddings can be estimated for each field and possible value: for example, a vector embedding for the field BIRTHPLACE, and another embedding for the value BERKELEY CALIFORNIA (Bordes et al., 2011). The table of such embeddings serves as the encoding of a structured record (He et al., 2017). It is also possi- ble to compress the entire table into a single vector representation, by pooling across the embeddings of each field and value (Lebret et al., 2016). 1More expressive decompositions of Ψ are possible. For example, Wong and Mooney (2007) use a syn- chronous context-free grammar (see § 18.2.4) to “translate” between a meaning representation and natural language text. 2An exception is a dataset of records and summaries from American football games, containing annota- tions of alignments between sentences and records (Snyder and Barzilay, 2007). Jacob Eisenstein. Draft of November 13, 2018.
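The field-value encoder described above can be sketched in a few lines: each field and each value gets a vector embedding, a record is represented by concatenating the two, and the whole table can be pooled into a single vector. The field names, values, dimensions, and random initialization below are placeholders; in a trained system these embeddings would be learned end-to-end along with the decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy embedding tables for fields and values, randomly initialized.
field_embed = {f: rng.normal(size=DIM) for f in ["birthplace", "occupation"]}
value_embed = {v: rng.normal(size=DIM) for v in ["berkeley_california", "doctor"]}

def encode_record(field, value):
    """Embed one (field, value) pair as the concatenation of two vectors."""
    return np.concatenate([field_embed[field], value_embed[value]])

def encode_table(records):
    """Encode a table of records: one vector per record, plus a pooled summary."""
    Z = np.stack([encode_record(f, v) for f, v in records])   # shape (R, 2 * DIM)
    pooled = Z.mean(axis=0)                                    # single table vector
    return Z, pooled

Z, z_pooled = encode_table([("birthplace", "berkeley_california"),
                            ("occupation", "doctor")])
```

In a full system, the per-record vectors z_r serve as the keys and values for the attention mechanism discussed in the remainder of this section, while the pooled vector offers the compressed single-vector alternative mentioned above.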
19.1. DATA-TO-TEXT GENERATION 461 Figure 19.3: Examples of the image captioning task, with attention masks shown for each of the underlined words (Xu et al., 2015). Sequences Some types of structured records have a natural ordering, such as events in a game (Chen and Mooney, 2008) and steps in a recipe (Tutin and Kittredge, 1992). For example, the following records describe a sequence of events in a robot soccer match (Mei et al., 2016): PASS(arg1 = PURPLE6, arg2 = PURPLE3) KICK(arg1 = PURPLE3) BADPASS(arg1 = PURPLE3, arg2 = PINK9). Each event is a single record, and can be encoded by a concatenation of vector represen- tations for the event type (e.g., PASS), the field (e.g., arg1), and the values (e.g., PURPLE3), e.g., X = uPASS, uarg1, uPURPLE6, uarg2, uPURPLE3 . [19.4] This encoding can then act as the input layer for a recurrent neural network, yielding a sequence of vector representations {zr}R r=1, where r indexes over records. Interestingly, this sequence-based approach can work even in cases where there is no natural ordering over the records, such as the weather data in Figure 19.1 (Mei et al., 2016). Images Another flavor of data-to-text generation is the generation of text captions for images. Examples from this task are shown in Figure 19.3. Images are naturally repre- sented as tensors: a color image of 320 × 240 pixels would be stored as a tensor with 320 × 240 × 3 intensity values. The dominant approach to image classification is to en- code images as vectors using a combination of convolution and pooling (Krizhevsky et al., Under contract with MIT Press, shared under CC-BY-NC-ND license.
462 CHAPTER 19. TEXT GENERATION

Figure 19.4: Neural attention in text generation, linking each word of the generated forecast (a 20 % chance of showers and thunderstorms after noon . mostly cloudy with a high near 71 .) to weather records such as id-0: temperature(min=52,max=71,mean=63), id-2: windSpeed(min=8,mean=17,max=23), id-5: skyCover(mode=50-75), id-10: precipChance(min=19,mean=32,max=73), and id-15: thunderChance(mode=SChc). Figure adapted from Mei et al. (2016).

2012). Chapter 3 explains how to use convolutional networks for text; for images, convolution is applied across the vertical, horizontal, and color dimensions. By pooling the results of successive convolutions, the image is converted to a vector representation, which can then be fed directly into the decoder as the initial state (Vinyals et al., 2015), just as in the sequence-to-sequence translation model (see § 18.3). Alternatively, one can apply a set of convolutional networks, yielding vector representations for different parts of the image, which can then be combined using neural attention (Xu et al., 2015).

Attention Given a set of embeddings of the data {z_r}_{r=1}^R and a decoder state h_m, an attention vector over the data can be computed using the same techniques as in machine translation (see § 18.3.1). When generating word m of the output, attention is computed over the records,

ψ_α(m, r) = β_α · f(Θ_α [h_m; z_r]) [19.5]
α_m = g([ψ_α(m, 1), ψ_α(m, 2), . . . , ψ_α(m, R)]) [19.6]
c_m = Σ_{r=1}^R α_{m→r} z_r, [19.7]

where f is an elementwise nonlinearity such as tanh or ReLU, and g is either a softmax or an elementwise sigmoid. The weighted sum c_m can then be included in the recurrent update to the decoder state, or in the emission probabilities, as described in § 18.3.1. Figure 19.4 shows the attention to components of a weather record, while generating the text shown on the x-axis.

Adapting this architecture to image captioning is straightforward. A convolutional neural network is applied to a set of image locations, and the output at each location ℓ is represented with a vector z_ℓ. Attention can then be computed over the image locations, as shown in the right panels of each pair of images in Figure 19.3.

Jacob Eisenstein. Draft of November 13, 2018.
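As a rough sketch of Equations 19.5–19.7, the following computes attention over a set of record encodings using a feedforward scoring function and a softmax. The dimensions and random parameter values are arbitrary placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
R, d_z, d_h, d_a = 5, 16, 16, 8        # number of records and layer sizes (arbitrary)
Z = rng.normal(size=(R, d_z))          # encodings z_r of the records
h_m = rng.normal(size=d_h)             # current decoder state

Theta_a = rng.normal(scale=0.1, size=(d_a, d_h + d_z))  # attention parameters
beta_a = rng.normal(scale=0.1, size=d_a)

def attend(h, Z):
    # Equation 19.5: unnormalized score for each record
    scores = np.array([beta_a @ np.tanh(Theta_a @ np.concatenate([h, z])) for z in Z])
    # Equation 19.6, with g chosen to be the softmax
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # Equation 19.7: attention-weighted sum of record encodings
    c = alpha @ Z
    return alpha, c

alpha_m, c_m = attend(h_m, Z)
print(alpha_m.round(3), c_m.shape)
```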
19.1. DATA-TO-TEXT GENERATION 463

Various modifications to this basic mechanism have been proposed. In coarse-to-fine attention (Mei et al., 2016), each record receives a global attention a_r ∈ [0, 1], which is independent of the decoder state. This global attention, which represents the overall importance of the record, is multiplied with the decoder-based attention scores, before computing the final normalized attentions. In structured attention, the attention vector α_{m→·} can include structural biases, which can favor assigning higher attention values to contiguous segments or to dependency subtrees (Kim et al., 2017). Structured attention vectors can be computed by running the forward-backward algorithm to obtain marginal attention probabilities (see § 7.5.3). Because each step in the forward-backward algorithm is differentiable, it can be encoded in a computation graph, and end-to-end learning can be performed by backpropagation.

Decoder Given the encoding, the decoder can function just as in neural machine translation (see § 18.3.1), using the attention-weighted encoder representation in the decoder recurrence and/or output computation. As in machine translation, beam search can help to avoid search errors (Lebret et al., 2016).

Many applications require generating words that do not appear in the training vocabulary. For example, a weather record may contain a previously unseen city name; a sports record may contain a previously unseen player name. Such tokens can be generated in the text by copying them over from the input (e.g., Gulcehre et al., 2016).3 First introduce an additional variable s_m ∈ {gen, copy}, indicating whether token w^(t)_m should be generated or copied. The decoder probability is then,

p(w^(t) | w^(t)_{1:m−1}, Z, s_m) =
  SoftMax(β_{w^(t)} · h^(t)_{m−1}),              s_m = gen
  Σ_{r=1}^R δ(w^(s)_r = w^(t)) × α_{m→r},        s_m = copy,   [19.8]

where δ(w^(s)_r = w^(t)) is an indicator function, taking the value 1 iff the text of the record w^(s)_r is identical to the target word w^(t). The probability of copying record r from the source is δ(s_m = copy) × α_{m→r}, the product of the copy probability and the local attention. Note that in this model, the attention weights α_m are computed from the previous decoder state h_{m−1}. The computation graph therefore remains a feedforward network, with recurrent paths such as h^(t)_{m−1} → α_m → w^(t)_m → h^(t)_m.

To facilitate end-to-end training, the switching variable s_m can be represented by a gate π_m, which is computed from a two-layer feedforward network, whose input consists of the concatenation of the decoder state h^(t)_{m−1} and the attention-weighted representation

3A number of variants of this strategy have been proposed (e.g., Gu et al., 2016; Merity et al., 2017). See Wiseman et al. (2017) for an overview.

Under contract with MIT Press, shared under CC-BY-NC-ND license.
464 CHAPTER 19. TEXT GENERATION

of the data, c_m = Σ_{r=1}^R α_{m→r} z_r,

π_m = σ(Θ^(2) f(Θ^(1) [h^(t)_{m−1}; c_m])). [19.9]

The full generative probability at token m is then,

p(w^(t) | w^(t)_{1:m}, Z) = π_m × [exp(β_{w^(t)} · h^(t)_{m−1}) / Σ_{j=1}^V exp(β_j · h^(t)_{m−1})]   (generate)
                          + (1 − π_m) × Σ_{r=1}^R δ(w^(s)_r = w^(t)) × α_{m→r}.   (copy)   [19.10]

19.2 Text-to-text generation

Text-to-text generation includes problems of summarization and simplification:

• reading a novel and outputting a paragraph-long summary of the plot;4
• reading a set of blog posts about politics, and outputting a bullet list of the various issues and perspectives;
• reading a technical research article about the long-term health consequences of drinking kombucha, and outputting a summary of the article in language that non-experts can understand.

These problems can be approached in two ways: through the encoder-decoder architecture discussed in the previous section, or by operating directly on the input text.

19.2.1 Neural abstractive summarization

Sentence summarization is the task of shortening a sentence while preserving its meaning, as in the following examples (Knight and Marcu, 2000; Rush et al., 2015):

(19.2) a. The documentation is typical of Epson quality: excellent.
Documentation is excellent.
b. Russian defense minister Ivanov called sunday for the creation of a joint front for combating global terrorism.
Russia calls for joint front against terrorism.

4In § 16.3.4, we encountered a special case of single-document summarization, which involved extracting the most important sentences or discourse units. We now consider the more challenging problem of abstractive summarization, in which the summary can include words that do not appear in the original text.

Jacob Eisenstein. Draft of November 13, 2018.
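Returning to the copy mechanism of Equations 19.8–19.10, the sketch below mixes the generation distribution and the copy distribution using the gate π_m. The vocabulary, attention weights, and scores are made-up inputs; in a real system they would come from the decoder and attention computations described above, and attention over records with identical text would be summed.

```python
import numpy as np

vocab = ["the", "high", "will", "be", "<unk>"]
source_tokens = ["71", "Berkeley"]       # record values that can be copied (invented)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def output_distribution(gen_scores, alpha, pi):
    """Equation 19.10: mixture of generating from the vocabulary and copying a
    source token, weighted by the gate pi in [0, 1]."""
    p_gen = pi * softmax(gen_scores)     # probability mass on vocabulary words
    p_copy = (1 - pi) * alpha            # probability mass on copyable source tokens
    return dict(zip(vocab, p_gen)), dict(zip(source_tokens, p_copy))

gen_scores = np.array([1.0, 2.0, 0.5, 0.2, -1.0])  # stand-in for beta_w . h
alpha = np.array([0.9, 0.1])                       # attention over source tokens
p_vocab, p_source = output_distribution(gen_scores, alpha, pi=0.3)
print(p_vocab)
print(p_source)   # most of the copy mass goes to "71"
```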
19.2. TEXT-TO-TEXT GENERATION 465

Sentence summarization is closely related to sentence compression, in which the summary is produced by deleting words or phrases from the original (Clarke and Lapata, 2008). But as shown in (19.2b), a sentence summary can also introduce new words, such as against, which replaces the phrase for combating.

Sentence summarization can be treated as a machine translation problem, using the attentional encoder-decoder translation model discussed in § 18.3.1 (Rush et al., 2015). The longer sentence is encoded into a sequence of vectors, one for each token. The decoder then computes attention over these vectors when updating its own recurrent state. As with data-to-text generation, it can be useful to augment the encoder-decoder model with the ability to copy words directly from the source. Rush et al. (2015) train this model by building four million sentence pairs from news articles. In each pair, the longer sentence is the first sentence of the article, and the summary is the article headline. Sentence summarization can also be trained in a semi-supervised fashion, using a probabilistic formulation of the encoder-decoder model called a variational autoencoder (Miao and Blunsom, 2016, also see § 14.8.2).

When summarizing longer documents, an additional concern is that the summary not be repetitive: each part of the summary should cover new ground. This can be addressed by maintaining a vector of the sum total of all attention values thus far, t_m = Σ_{n=1}^m α_n. This total can be used as an additional input to the computation of the attention weights,

α_{m→n} ∝ exp(v_α · tanh(Θ_α [h^(t)_m; h^(s)_n; t_m])), [19.11]

which enables the model to learn to prefer parts of the source which have not been attended to yet (Tu et al., 2016). To further encourage diversity in the generated summary, See et al. (2017) introduce a coverage loss to the objective function,

ℓ_m = Σ_{n=1}^{M^(s)} min(α_{m→n}, t_{m→n}). [19.12]

This loss will be low if α_m assigns little attention to words that already have large values in t_m. Coverage loss is similar to the concept of marginal relevance, in which the reward for adding new content is proportional to the extent to which it increases the overall amount of information conveyed by the summary (Carbonell and Goldstein, 1998).

Under contract with MIT Press, shared under CC-BY-NC-ND license.
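A small sketch of the coverage idea in Equation 19.12: the coverage vector t_m accumulates past attention, and the loss penalizes re-attending to already-covered source positions. The attention values below are invented for illustration.

```python
import numpy as np

# Attention over 4 source positions at each of 3 previous decoding steps (invented).
attention_history = np.array([[0.7, 0.2, 0.1, 0.0],
                              [0.6, 0.3, 0.1, 0.0],
                              [0.1, 0.1, 0.4, 0.4]])

def coverage_loss(alpha_m, history):
    """Equation 19.12: sum over positions of min(current attention, coverage)."""
    t_m = history.sum(axis=0)       # coverage vector t_m = sum of past attention
    return np.minimum(alpha_m, t_m).sum()

repetitive = np.array([0.7, 0.2, 0.1, 0.0])  # re-attends to position 0
diverse = np.array([0.0, 0.1, 0.2, 0.7])     # attends to a less-covered position
print(coverage_loss(repetitive, attention_history))  # larger loss (1.0)
print(coverage_loss(diverse, attention_history))     # smaller loss (0.7)
```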
466 CHAPTER 19. TEXT GENERATION (19.3) a. Palin actually turned against the bridge project only after it became a national symbol of wasteful spending. b. Ms. Palin supported the bridge project while running for governor, and abandoned it after it became a national scandal. An intersection preserves only the content that is present in both sentences: (19.4) Palin turned against the bridge project after it became a national scandal. A union includes information from both sentences: (19.5) Ms. Palin supported the bridge project while running for governor, but turned against it when it became a national scandal and a symbol of wasteful spending. Dependency parsing is often used as a technique for sentence fusion. After parsing each sentence, the resulting dependency trees can be aggregated into a lattice (Barzilay and McKeown, 2005) or a graph structure (Filippova and Strube, 2008), in which identical or closely related words (e.g., Palin, bridge, national) are fused into a single node. The resulting graph can then be pruned back to a tree by solving an integer linear program (see § 13.2.2), max y X i,j,r ψ(i r−→j, w; θ) × yi,j,r [19.13] s.t. y ∈C, [19.14] where the variable yi,j,r ∈{0, 1} indicates whether there is an edge from i to j of type r, the score of this edge is ψ(i r−→j, w; θ), and C is a set of constraints, which ensures that y forms a valid dependency graph. As usual, w is the list of words in the graph, and θ is a vector of parameters. The score ψ(i r−→j, w; θ) reflects the “importance” of the modifier j to the overall meaning: in intersective fusion, this score indicates the extent to which the content in this edge is expressed in all sentences; in union fusion, the score indicates whether the content in the edge is expressed in any sentence. The constraint set C can impose additional linguistic constraints: for example, ensuring that coordinated nouns are sufficiently similar. The resulting tree must then be linearized into a sentence. Lin- earization is like the inverse of dependency parsing: instead of parsing from a sequence of tokens into a tree, we must convert the tree back into a sequence of tokens. This is typically done by generating a set of candidate linearizations, and choosing the one with the highest score under a language model (Langkilde and Knight, 1998; Song et al., 2016). 19.3 Dialogue Dialogue systems are capable of conversing with a human interlocutor, often to per- form some task (Grosz, 1979), but sometimes just to chat (Weizenbaum, 1966). While re- Jacob Eisenstein. Draft of November 13, 2018.
19.3. DIALOGUE 467

(19.6) A: I want to order a pizza.
B: What toppings?
A: Anchovies.
B: Ok, what address?
A: The College of Computing building.
B: Please confirm: one pizza with artichokes, to be delivered to the College of Computing building.
A: No.
B: What toppings?
. . .

Figure 19.5: An example dialogue and the associated finite-state model (a transducer with states q0 through q6, and transitions labeled What toppings?, TOPPING, What address?, ADDRESS, Confirm?, Yes, and No). In the finite-state model, SMALL CAPS indicates that the user must provide information of this type in their answer.

search on dialogue systems goes back several decades (Carbonell, 1970; Winograd, 1972), commercial systems such as Alexa and Siri have recently brought this technology into widespread use. Nonetheless, there is a significant gap between research and practice: many practical dialogue systems remain scripted and inflexible, while research systems emphasize abstractive text generation, "on-the-fly" decision making, and probabilistic reasoning about the user's intentions.

19.3.1 Finite-state and agenda-based dialogue systems

Finite-state automata were introduced in chapter 9 as a formal model of computation, in which string inputs and outputs are linked to transitions between a finite number of discrete states. This model naturally fits simple task-oriented dialogues, such as the one shown in the left panel of Figure 19.5. This (somewhat frustrating) dialogue can be represented with a finite-state transducer, as shown in the right panel of the figure. The accepting state is reached only when the two needed pieces of information are provided, and the human user confirms that the order is correct. In this simple scenario, the TOPPING and ADDRESS are the two slots associated with the activity of ordering a pizza, which is called a frame. Frame representations can be hierarchical: for example, an ADDRESS could have slots of its own, such as STREET and CITY.

In the example dialogue in Figure 19.5, the user provides the precise inputs that are needed in each turn (e.g., anchovies; the College of Computing building). Some users may

Under contract with MIT Press, shared under CC-BY-NC-ND license.
468 CHAPTER 19. TEXT GENERATION prefer to communicate more naturally, with phrases like I’d, uh, like some anchovies please. One approach to handling such utterances is to design a custom grammar, with non- terminals for slots such as TOPPING and LOCATION. However, context-free parsing of unconstrained speech input is challenging. A more lightweight alternative is BIO-style sequence labeling (see § 8.3), e.g.: (19.7) I’d O like O anchovies B-TOPPING , O and O please O bring O it O to O the B-ADDR College I-ADDR of I-ADDR Computing I-ADDR Building I-ADDR . O The tagger can be driven by a bi-directional recurrent neural network, similar to recurrent approaches to semantic role labeling described in § 13.2.3. The input in (19.7) could not be handled by the finite-state system from Figure 19.5, which forces the user to provide the topping first, and then the location. In this sense, the “initiative” is driven completely by the system. Agenda-based dialogue systems extend finite-state architectures by attempting to recognize all slots that are filled by the user’s re- ply, thereby handling these more complex examples. Agenda-based systems dynamically pose additional questions until the frame is complete (Bobrow et al., 1977; Allen et al., 1995; Rudnicky and Xu, 1999). Such systems are said to be mixed-initiative, because both the user and the system can drive the direction of the dialogue. 19.3.2 Markov decision processes The task of dynamically selecting the next move in a conversation is known as dialogue management. This problem can be framed as a Markov decision process, which is a theoretical model that includes a discrete set of states, a discrete set of actions, a function that computes the probability of transitions between states, and a function that computes the cost or reward of action-state pairs. Let’s see how each of these elements pertains to the pizza ordering dialogue system. • Each state is a tuple of information about whether the topping and address are known, and whether the order has been confirmed. For example, (KNOWN TOPPING, UNKNOWN ADDRESS, NOT CONFIRMED) [19.15] is a possible state. Any state in which the pizza order is confirmed is a terminal state, and the Markov decision process stops after entering such a state. • The set of actions includes querying for the topping, querying for the address, and requesting confirmation. Each action induces a probability distribution over states, p(st | at, st−1). For example, requesting confirmation of the order is not likely to Jacob Eisenstein. Draft of November 13, 2018.
19.3. DIALOGUE 469

result in a transition to the terminal state if the topping is not yet known. This probability distribution over state transitions may be learned from data, or it may be specified in advance.

• Each state-action-state tuple earns a reward, r_a(s_t, s_{t+1}). In the context of the pizza ordering system, a simple reward function would be,

r_a(s_{t−1}, s_t) =
  0,    a = CONFIRM, s_t = (*, *, CONFIRMED)
  −10,  a = CONFIRM, s_t = (*, *, NOT CONFIRMED)
  −1,   a ≠ CONFIRM   [19.16]

This function assigns zero reward for successful transitions to the terminal state, a large negative reward to a rejected request for confirmation, and a small negative reward for every other type of action. The system is therefore rewarded for reaching the terminal state in few steps, and penalized for prematurely requesting confirmation.

In a Markov decision process, a policy is a function π : S → A that maps from states to actions (see § 15.2.4). The value of a policy is the expected sum of discounted rewards, E_π[Σ_{t=1}^T γ^t r_{a_t}(s_t, s_{t+1})], where γ is the discount factor, γ ∈ [0, 1). Discounting has the effect of emphasizing rewards that can be obtained immediately over less certain rewards in the distant future. An optimal policy can be obtained by dynamic programming, by iteratively updating the value function V(s), which is the expectation of the cumulative reward from s under the optimal action a,

V(s) ← max_{a∈A} Σ_{s′∈S} p(s′ | s, a) [r_a(s, s′) + γV(s′)]. [19.17]

The value function V(s) is computed in terms of V(s′) for all states s′ ∈ S. A series of iterative updates to the value function will eventually converge to a stationary point. This algorithm is known as value iteration. Given the converged value function V(s), the optimal action at each state is the argmax,

π(s) = argmax_{a∈A} Σ_{s′∈S} p(s′ | s, a) [r_a(s, s′) + γV(s′)]. [19.18]

Value iteration and related algorithms are described in detail by Sutton and Barto (1998). For applications to dialogue systems, see Levin et al. (1998) and Walker (2000).

The Markov decision process framework assumes that the current state of the dialogue is known. In reality, the system may misinterpret the user's statements — for example, believing that a specification of the delivery location (PEACHTREE) is in fact a specification

Under contract with MIT Press, shared under CC-BY-NC-ND license.
470 CHAPTER 19. TEXT GENERATION of the topping (PEACHES). In a partially observable Markov decision process (POMDP), the system receives an observation o, which is probabilistically conditioned on the state, p(o | s). It must therefore maintain a distribution of beliefs about which state it is in, with qt(s) indicating the degree of belief that the dialogue is in state s at time t. The POMDP formulation can help to make dialogue systems more robust to errors, particularly in the context of spoken language dialogues, where the speech itself may be misrecognized (Roy et al., 2000; Williams and Young, 2007). However, finding the optimal policy in a POMDP is computationally intractable, requiring additional approximations. 19.3.3 Neural chatbots It’s easier to talk when you don’t need to get anything done. Chatbots are systems that parry the user’s input with a response that keeps the conversation going. They can be built from the encoder-decoder architecture discussed in § 18.3 and § 19.1.2: the encoder converts the user’s input into a vector, and the decoder produces a sequence of words as a response. For example, Shang et al. (2015) apply the attentional encoder-decoder transla- tion model, training on a dataset of posts and responses from the Chinese microblogging platform Sina Weibo.5 This approach is capable of generating replies that relate themati- cally to the input, as shown in the following examples (translated from Chinese by Shang et al., 2015). (19.8) a. A: High fever attacks me every New Year’s day. B: Get well soon and stay healthy! b. A: I gain one more year. Grateful to my group, so happy. B: Getting old now. Time has no mercy. While encoder-decoder models can generate responses that make sense in the con- text of the immediately preceding turn, they struggle to maintain coherence over longer conversations. One solution is to model the dialogue context recurrently. This creates a hierarchical recurrent network, including both word-level and turn-level recurrences. The turn-level hidden state is then used as additional context in the decoder (Serban et al., 2016). An open question is how to integrate the encoder-decoder architecture into task-oriented dialogue systems. Neural chatbots can be trained end-to-end: the user’s turn is analyzed by the encoder, and the system output is generated by the decoder. This architecture can be trained by log-likelihood using backpropagation (e.g., Sordoni et al., 2015; Serban et al., 2016), or by more elaborate objectives, using reinforcement learning (Li et al., 2016). In contrast, the task-oriented dialogue systems described in § 19.3.1 typically involve a 5Twitter is also frequently used for construction of dialogue datasets (Ritter et al., 2011; Sordoni et al., 2015). Another source is technical support chat logs from the Ubuntu linux distribution (Uthus and Aha, 2013; Lowe et al., 2015). Jacob Eisenstein. Draft of November 13, 2018.
19.3. DIALOGUE 471 set of specialized modules: one for recognizing the user input, another for deciding what action to take, and a third for arranging the text of the system output. Recurrent neural network decoders can be integrated into Markov Decision Process dialogue systems, by conditioning the decoder on a representation of the information that is to be expressed in each turn (Wen et al., 2015). Specifically, the long short-term memory (LSTM; § 6.3) architecture is augmented so that the memory cell at turn m takes an additional input dm, which is a representation of the slots and values to be expressed in the next turn. However, this approach still relies on additional modules to recognize the user’s utterance and to plan the overall arc of the dialogue. Another promising direction is to create embeddings for the elements in the domain: for example, the slots in a record and the entities that can fill them. The encoder then encodes not only the words of the user’s input, but the embeddings of the elements that the user mentions. Similarly, the decoder is endowed with the ability to refer to specific elements in the knowledge base. He et al. (2017) show that such a method can learn to play a collaborative dialogue game, in which both players are given a list of entities and their properties, and the goal is to find an entity that is on both players’ lists. Additional resources Gatt and Krahmer (2018) provide a comprehensive recent survey on text generation. For a book-length treatment of earlier work, see Reiter and Dale (2000). For a survey on image captioning, see Bernardi et al. (2016); for a survey of pre-neural approaches to dialogue systems, see Rieser and Lemon (2011). Dialogue acts were introduced in § 8.6 as a label- ing scheme for human-human dialogues; they also play a critical in task-based dialogue systems (e.g., Allen et al., 1996). The incorporation of theoretical models of dialogue into computational systems is reviewed by Jurafsky and Martin (2009, chapter 24). While this chapter has focused on the informative dimension of text generation, an- other line of research aims to generate text with configurable stylistic properties (Walker et al., 1997; Mairesse and Walker, 2011; Ficler and Goldberg, 2017; Hu et al., 2017). This chapter also does not address the generation of creative text such as narratives (Riedl and Young, 2010), jokes (Ritchie, 2001), poems (Colton et al., 2012), and song lyrics (Gonc¸alo Oliveira et al., 2007). Exercises 1. Find an article about a professional basketball game, with an associated “box score” of statistics. Which are the first three elements in the box score that are expressed in the article? Can you identify template-based patterns that express these elements of the record? Now find a second article about a different basketball game. Does it Under contract with MIT Press, shared under CC-BY-NC-ND license.
472 CHAPTER 19. TEXT GENERATION

mention the same first three elements of the box score? Do your templates capture how these elements are expressed in the text?

2. This exercise is to be done by a pair of students. One student should choose an article from the news or from Wikipedia, and manually perform semantic role labeling (SRL) on three short sentences or clauses. (See chapter 13 for a review of SRL.) Identify the main semantic relation and its arguments and adjuncts. Pass this structured record — but not the original sentence — to the other student, whose job is to generate a sentence expressing the semantics. Then reverse roles, and try to regenerate three sentences from another article, based on the predicate-argument semantics.

3. Compute the BLEU scores (see § 18.1.1) for the generated sentences in the previous problem, using the original article text as the reference.

4. Align each token in the text of Figure 19.1 to a specific single record in the database, or to the null record ∅. For example, the tokens south wind would align to the record wind direction: 06:00-21:00: mode=S. How often is each token aligned to the same record as the previous token? How many transitions are there? How might a system learn to output 10 degrees for the record min=9?

5. In sentence compression and fusion, we may wish to preserve contiguous sequences of tokens (n-grams) and/or dependency edges. Find five short news articles with headlines. For each headline, compute the fraction of bigrams that appear in the main text of the article. Then do a manual dependency parse of the headline. For each dependency edge, count how often it appears as a dependency edge in the main text. You may use an automatic dependency parser to assist with this exercise, but check the output, and focus on UD 2.0 dependency grammar, as described in chapter 11.

6. § 19.2.2 presents the idea of generating text from dependency trees, which requires linearization. Sometimes there are multiple ways that a dependency tree can be linearized. For example:

(19.9) a. The sick kids stayed at home in bed.
b. The sick kids stayed in bed at home.

Both sentences have an identical dependency parse: both home and bed are (oblique) dependents of stayed. Identify two more English dependency trees that can each be linearized in more than one way, and try to use a different pattern of variation in each tree. As usual, specify your trees in the Universal Dependencies 2 style, which is described in chapter 11.

Jacob Eisenstein. Draft of November 13, 2018.
19.3. DIALOGUE 473

7. In § 19.3.2, we considered a pizza delivery service. Let's simplify the problem to take-out, where it is only necessary to determine the topping and confirm the order. The state is a tuple in which the first element is T if the topping is specified and ? otherwise, and the second element is either YES or NO, depending on whether the order has been confirmed. The actions are TOPPING? (request information about the topping) and CONFIRM? (request confirmation). The state transition function is:

p(s_t | s_{t−1} = (?, NO), a = TOPPING?) = 0.9 if s_t = (T, NO); 0.1 if s_t = (?, NO). [19.19]
p(s_t | s_{t−1} = (?, NO), a = CONFIRM?) = 1 if s_t = (?, NO). [19.20]
p(s_t | s_{t−1} = (T, NO), a = TOPPING?) = 1 if s_t = (T, NO). [19.21]
p(s_t | s_{t−1} = (T, NO), a = CONFIRM?) = 0.9 if s_t = (T, YES); 0.1 if s_t = (T, NO). [19.22]

Using the reward function defined in Equation 19.16, the discount γ = 0.9, and the initialization V(s) = 0, execute three iterations of Equation 19.17. After these three iterations, compute the optimal action in each state. You can assume that for the terminal states, V(*, YES) = 0, so you only need to compute the values for non-terminal states, V(?, NO) and V(T, NO).

8. There are several toolkits that allow you to train encoder-decoder translation models "out of the box", such as FAIRSEQ (Gehring et al., 2017), XNMT (Neubig et al., 2018), TENSOR2TENSOR (Vaswani et al., 2018), and OPENNMT (Klein et al., 2017).6 Use one of these toolkits to train a chatbot dialogue system, using either the NPS dialogue corpus that comes with NLTK (Forsyth and Martell, 2007), or, if you are feeling more ambitious, the Ubuntu dialogue corpus (Lowe et al., 2015).

6https://github.com/facebookresearch/fairseq; https://github.com/neulab/xnmt; https://github.com/tensorflow/tensor2tensor; http://opennmt.net/

Under contract with MIT Press, shared under CC-BY-NC-ND license.
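For readers who want to check their hand computations in Exercise 7, here is a generic sketch of the value iteration update in Equation 19.17. The dictionaries restate Equations 19.16 and 19.19–19.22, under the assumption that the reward depends only on the action and the resulting state; treat it as a checker, not as the unique intended solution.

```python
# Generic value iteration (Equation 19.17) for the take-out MDP in Exercise 7.
states = ["(?,NO)", "(T,NO)", "(T,YES)"]           # (T,YES) is terminal
actions = ["TOPPING?", "CONFIRM?"]

# p[(state, action)] = {next_state: probability}, from Equations 19.19-19.22.
p = {("(?,NO)", "TOPPING?"): {"(T,NO)": 0.9, "(?,NO)": 0.1},
     ("(?,NO)", "CONFIRM?"): {"(?,NO)": 1.0},
     ("(T,NO)", "TOPPING?"): {"(T,NO)": 1.0},
     ("(T,NO)", "CONFIRM?"): {"(T,YES)": 0.9, "(T,NO)": 0.1}}

def reward(a, s_next):
    """Equation 19.16, assuming the reward depends on the action and resulting state."""
    if a == "CONFIRM?":
        return 0.0 if s_next == "(T,YES)" else -10.0
    return -1.0

gamma = 0.9
V = {s: 0.0 for s in states}
for _ in range(3):                                  # three iterations, as in the exercise
    V_new = dict(V)
    for s in ["(?,NO)", "(T,NO)"]:                  # the terminal state stays at 0
        V_new[s] = max(sum(prob * (reward(a, s2) + gamma * V[s2])
                           for s2, prob in p[(s, a)].items())
                       for a in actions)
    V = V_new
print(V)
```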
Appendix A Probability Probability theory provides a way to reason about random events. The sorts of random events that are typically used to explain probability theory include coin flips, card draws, and the weather. It may seem odd to think about the choice of a word as akin to the flip of a coin, particularly if you are the type of person to choose words carefully. But random or not, language has proven to be extremely difficult to model deterministically. Probability offers a powerful tool for modeling and manipulating linguistic data. Probability can be thought of in terms of random outcomes: for example, a single coin flip has two possible outcomes, heads or tails. The set of possible outcomes is the sample space, and a subset of the sample space is an event. For a sequence of two coin flips, there are four possible outcomes, {HH, HT, TH, TT}, representing the ordered sequences heads-head, heads-tails, tails-heads, and tails-tails. The event of getting exactly one head includes two outcomes: {HT, TH}. Formally, a probability is a function from events to the interval between zero and one: Pr : F →[0, 1], where F is the set of possible events. An event that is certain has proba- bility one; an event that is impossible has probability zero. For example, the probability of getting fewer than three heads on two coin flips is one. Each outcome is also an event (a set with exactly one element), and for two flips of a fair coin, the probability of each outcome is, Pr({HH}) = Pr({HT}) = Pr({TH}) = Pr({TT}) = 1 4. [A.1] A.1 Probabilities of event combinations Because events are sets of outcomes, we can use set-theoretic operations such as comple- ment, intersection, and union to reason about the probabilities of events and their combi- nations. 475
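These definitions can be made concrete with a few lines of Python. The sketch below enumerates the sample space for two flips of a fair coin and computes the probability of an event as the sum of its outcome probabilities.

```python
from itertools import product

# Sample space for two flips of a fair coin; each outcome is equally likely.
outcomes = ["".join(flips) for flips in product("HT", repeat=2)]  # HH, HT, TH, TT
p_outcome = {o: 1 / len(outcomes) for o in outcomes}

def prob(event):
    """The probability of an event (a set of outcomes)."""
    return sum(p_outcome[o] for o in event)

exactly_one_head = {o for o in outcomes if o.count("H") == 1}
print(prob(exactly_one_head))   # 0.5
print(prob(set(outcomes)))      # 1.0: the certain event
```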
476 APPENDIX A. PROBABILITY For any event A, there is a complement ¬A, such that: • The probability of the union A ∪¬A is Pr(A ∪¬A) = 1; • The intersection A ∩¬A = ∅is the empty set, and Pr(A ∩¬A) = 0. In the coin flip example, the event of obtaining a single head on two flips corresponds to the set of outcomes {HT, TH}; the complement event includes the other two outcomes, {TT, HH}. A.1.1 Probabilities of disjoint events When two events have an empty intersection, A ∩B = ∅, they are disjoint. The probabil- ity of the union of two disjoint events is equal to the sum of their probabilities, A ∩B = ∅ ⇒ Pr(A ∪B) = Pr(A) + Pr(B). [A.2] This is the third axiom of probability, and it can be generalized to any countable sequence of disjoint events. In the coin flip example, this axiom can derive the probability of the event of getting a single head on two flips. This event is the set of outcomes {HT, TH}, which is the union of two simpler events, {HT, TH} = {HT} ∪{TH}. The events {HT} and {TH} are disjoint. Therefore, Pr({HT, TH}) = Pr({HT} ∪{TH}) = Pr({HT}) + Pr({TH}) [A.3] =1 4 + 1 4 = 1 2. [A.4] In the general, the probability of the union of two events is, Pr(A ∪B) = Pr(A) + Pr(B) −Pr(A ∩B). [A.5] This can be seen visually in Figure A.1, and it can be derived from the third axiom of probability. Consider an event that includes all outcomes in B that are not in A, denoted as B −(A ∩B). By construction, this event is disjoint from A. We can therefore apply the additive rule, Pr(A ∪B) = Pr(A) + Pr(B −(A ∩B)). [A.6] Furthermore, the event B is the union of two disjoint events: A ∩B and B −(A ∩B). Pr(B) = Pr(B −(A ∩B)) + Pr(A ∩B). [A.7] Reorganizing and subtituting into Equation A.6 gives the desired result: Pr(B −(A ∩B)) = Pr(B) −Pr(A ∩B) [A.8] Pr(A ∪B) = Pr(A) + Pr(B) −Pr(A ∩B). [A.9] Jacob Eisenstein. Draft of November 13, 2018.
A.2. CONDITIONAL PROBABILITY AND BAYES’ RULE 477 A B A ∩B Figure A.1: A visualization of the probability of non-disjoint events A and B. A.1.2 Law of total probability A set of events B = {B1, B2, . . . , BN} is a partition of the sample space iff each pair of events is disjoint (Bi ∩Bj = ∅), and the union of the events is the entire sample space. The law of total probability states that we can marginalize over these events as follows, Pr(A) = X Bn∈B Pr(A ∩Bn). [A.10] For any event B, the union B ∪¬B is a partition of the sample space. Therefore, a special case of the law of total probability is, Pr(A) = Pr(A ∩B) + Pr(A ∩¬B). [A.11] A.2 Conditional probability and Bayes’ rule A conditional probability is an expression like Pr(A | B), which is the probability of the event A, assuming that event B happens too. For example, we may be interested in the probability of a randomly selected person answering the phone by saying hello, conditioned on that person being a speaker of English. Conditional probability is defined as the ratio, Pr(A | B) = Pr(A ∩B) Pr(B) . [A.12] The chain rule of probability states that Pr(A ∩B) = Pr(A | B) × Pr(B), which is just Under contract with MIT Press, shared under CC-BY-NC-ND license.
478 APPENDIX A. PROBABILITY a rearrangement of terms from Equation A.12. The chain rule can be applied repeatedly: Pr(A ∩B ∩C) = Pr(A | B ∩C) × Pr(B ∩C) = Pr(A | B ∩C) × Pr(B | C) × Pr(C). Bayes’ rule (sometimes called Bayes’ law or Bayes’ theorem) gives us a way to convert between Pr(A | B) and Pr(B | A). It follows from the definition of conditional probability and the chain rule: Pr(A | B) = Pr(A ∩B) Pr(B) = Pr(B | A) × Pr(A) Pr(B) [A.13] Each term in Bayes rule has a name, which we will occasionally use: • Pr(A) is the prior, since it is the probability of event A without knowledge about whether B happens or not. • Pr(B | A) is the likelihood, the probability of event B given that event A has oc- curred. • Pr(A | B) is the posterior, the probability of event A with knowledge that B has occurred. Example The classic examples for Bayes’ rule involve tests for rare diseases, but Man- ning and Sch¨utze (1999) reframe this example in a linguistic setting. Suppose that you are is interested in a rare syntactic construction, such as parasitic gaps, which occur on average once in 100,000 sentences. Here is an example of a parasitic gap: (A.1) Which class did you attend without registering for ? Lana Linguist has developed a complicated pattern matcher that attempts to identify sentences with parasitic gaps. It’s pretty good, but it’s not perfect: • If a sentence has a parasitic gap, the pattern matcher will find it with probability 0.95. (This is the recall, which is one minus the false negative rate.) • If the sentence doesn’t have a parasitic gap, the pattern matcher will wrongly say it does with probability 0.005. (This is the false positive rate, which is one minus the precision.) Suppose that Lana’s pattern matcher says that a sentence contains a parasitic gap. What is the probability that this is true? Jacob Eisenstein. Draft of November 13, 2018.
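The arithmetic worked out on the next page can also be checked directly; the following sketch applies Bayes' rule and the law of total probability to the rates given above.

```python
# Posterior probability of a parasitic gap given a positive pattern-match,
# using the rates stated in the example.
p_gap = 1e-5            # prior: one sentence in 100,000
recall = 0.95           # Pr(T | G)
false_positive = 0.005  # Pr(T | not G)

p_positive = recall * p_gap + false_positive * (1 - p_gap)  # law of total probability
posterior = recall * p_gap / p_positive                     # Bayes' rule
print(round(posterior, 4))  # approximately 0.002
```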
A.3. INDEPENDENCE 479 Let G be the event of a sentence having a parasitic gap, and T be the event of the test being positive. We are interested in the probability of a sentence having a parasitic gap given that the test is positive. This is the conditional probability Pr(G | T), and it can be computed by Bayes’ rule: Pr(G | T) =Pr(T | G) × Pr(G) Pr(T) . [A.14] We already know both terms in the numerator: Pr(T | G) is the recall, which is 0.95; Pr(G) is the prior, which is 10−5. We are not given the denominator, but it can be computed using tools developed ear- lier in this section. First apply the law of total probability, using the partition {G, ¬G}: Pr(T) = Pr(T ∩G) + Pr(T ∩¬G). [A.15] This says that the probability of the test being positive is the sum of the probability of a true positive (T ∩G) and the probability of a false positive (T ∩¬G). The probability of each of these events can be computed using the chain rule: Pr(T ∩G) = Pr(T | G) × Pr(G) = 0.95 × 10−5 [A.16] Pr(T ∩¬G) = Pr(T | ¬G) × Pr(¬G) = 0.005 × (1 −10−5) ≈0.005 [A.17] Pr(T) = Pr(T ∩G) + Pr(T ∩¬G) [A.18] =0.95 × 10−5 + 0.005. [A.19] Plugging these terms into Bayes’ rule gives the desired posterior probability, Pr(G | T) =Pr(T | G) Pr(G) Pr(T) [A.20] = 0.95 × 10−5 0.95 × 10−5 + 0.005 × (1 −10−5) [A.21] ≈0.002. [A.22] Lana’s pattern matcher seems accurate, with false positive and false negative rates below 5%. Yet the extreme rarity of the phenomenon means that a positive result from the detector is most likely to be wrong. A.3 Independence Two events are independent if the probability of their intersection is equal to the product of their probabilities: Pr(A ∩B) = Pr(A) × Pr(B). For example, for two flips of a fair Under contract with MIT Press, shared under CC-BY-NC-ND license.
480 APPENDIX A. PROBABILITY coin, the probability of getting heads on the first flip is independent of the probability of getting heads on the second flip: Pr({HT, HH}) = Pr(HT) + Pr(HH) = 1 4 + 1 4 = 1 2 [A.23] Pr({HH, TH}) = Pr(HH) + Pr(TH) = 1 4 + 1 4 = 1 2 [A.24] Pr({HT, HH}) × Pr({HH, TH}) =1 2 × 1 2 = 1 4 [A.25] Pr({HT, HH} ∩{HH, TH}) = Pr(HH) = 1 4 [A.26] = Pr({HT, HH}) × Pr({HH, TH}). [A.27] If Pr(A ∩B | C) = Pr(A | C) × Pr(B | C), then the events A and B are conditionally independent, written A ⊥B | C. Conditional independence plays a important role in probabilistic models such as Na¨ıve Bayes chapter 2. A.4 Random variables Random variables are functions from events to Rn, where R is the set of real numbers. This subsumes several useful special cases: • An indicator random variable is a function from events to the set {0, 1}. In the coin flip example, we can define Y as an indicator random variable, taking the value 1 when the coin has come up heads on at least one flip. This would include the outcomes {HH, HT, TH}. The probability Pr(Y = 1) is the sum of the probabilities of these outcomes, Pr(Y = 1) = 1 4 + 1 4 + 1 4 = 3 4. • A discrete random variable is a function from events to a discrete subset of R. Con- sider the coin flip example: the number of heads on two flips, X, can be viewed as a discrete random variable, X ∈0, 1, 2. The event probability Pr(X = 1) can again be computed as the sum of the probabilities of the events in which there is one head, {HT, TH}, giving Pr(X = 1) = 1 4 + 1 4 = 1 2. Each possible value of a random variable is associated with a subset of the sample space. In the coin flip example, X = 0 is associated with the event {TT}, X = 1 is associated with the event {HT, TH}, and X = 2 is associated with the event {HH}. Assuming a fair coin, the probabilities of these events are, respectively, 1/4, 1/2, and 1/4. This list of numbers represents the probability distribution over X, written pX, which maps from the possible values of X to the non-negative reals. For a specific value x, we write pX(x), which is equal to the event probability Pr(X = x).1 The function pX is called 1In general, capital letters (e.g., X) refer to random variables, and lower-case letters (e.g., x) refer to specific values. When the distribution is clear from context, I will simply write p(x). Jacob Eisenstein. Draft of November 13, 2018.
A.5. EXPECTATIONS 481 a probability mass function (pmf) if X is discrete; it is called a probability density function (pdf) if X is continuous. In either case, the function must sum to one, and all values must be non-negative: Z x pX(x)dx =1 [A.28] ∀x, pX(x) ≥0. [A.29] Probabilities over multiple random variables can written as joint probabilities, e.g., pA,B(a, b) = Pr(A = a ∩B = b). Several properties of event probabilities carry over to probability distributions over random variables: • The marginal probability distribution is pA(a) = P b pA,B(a, b). • The conditional probability distribution is pA|B(a | b) = pA,B(a,b) pB(b) . • Random variables A and B are independent iff pA,B(a, b) = pA(a) × pB(b). A.5 Expectations Sometimes we want the expectation of a function, such as E[g(x)] = P x∈X g(x)p(x). Expectations are easiest to think about in terms of probability distributions over discrete events: • If it is sunny, Lucia will eat three ice creams. • If it is rainy, she will eat only one ice cream. • There’s a 80% chance it will be sunny. • The expected number of ice creams she will eat is 0.8 × 3 + 0.2 × 1 = 2.6. If the random variable X is continuous, the expectation is an integral: E[g(x)] = Z X g(x)p(x)dx [A.30] For example, a fast food restaurant in Quebec has a special offer for cold days: they give a 1% discount on poutine for every degree below zero. Assuming a thermometer with infinite precision, the expected price would be an integral over all possible temperatures, E[price(x)] = Z X min(1, 1 + x) × original-price × p(x)dx. [A.31] Under contract with MIT Press, shared under CC-BY-NC-ND license.
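Both kinds of expectation can be checked with a short sketch: the discrete case sums over outcomes, and the continuous case can be approximated by Monte Carlo sampling. The temperature distribution assumed below is an arbitrary choice for illustration, and the price multiplier follows the verbal description of a 1% discount per degree below zero.

```python
import numpy as np

# Discrete expectation: ice creams as a function of the weather.
p_weather = {"sunny": 0.8, "rainy": 0.2}
ice_creams = {"sunny": 3, "rainy": 1}
print(sum(p_weather[w] * ice_creams[w] for w in p_weather))  # 2.6

# Continuous expectation: Monte Carlo estimate of the expected poutine price,
# assuming (for illustration only) temperatures drawn from a normal distribution.
rng = np.random.default_rng(0)
original_price = 10.0
temps = rng.normal(loc=-5.0, scale=10.0, size=100_000)
prices = np.minimum(1.0, 1.0 + 0.01 * temps) * original_price
print(prices.mean())
```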
482 APPENDIX A. PROBABILITY A.6 Modeling and estimation Probabilistic models provide a principled way to reason about random events and ran- dom variables. Let’s consider the coin toss example. Each toss can be modeled as a ran- dom event, with probability θ of the event H, and probability 1 −θ of the complementary event T. If we write a random variable X as the total number of heads on three coin flips, then the distribution of X depends on θ. In this case, X is distributed as a binomial random variable, meaning that it is drawn from a binomial distribution, with parameters (θ, N = 3). This is written, X ∼Binomial(θ, N = 3). [A.32] The properties of the binomial distribution enable us to make statements about the X, such as its expected value and the likelihood that its value will fall within some interval. Now suppose that θ is unknown, but we have run an experiment, in which we exe- cuted N trials, and obtained x heads. We can estimate θ by the principle of maximum likelihood: ˆθ = argmax θ pX(x; θ, N). [A.33] This says that the estimate ˆθ should be the value that maximizes the likelihood of the data. The semicolon indicates that θ and N are parameters of the probability function. The likelihood pX(x; θ, N) can be computed from the binomial distribution, pX(x; θ, N) = N! x!(N −x)!θx(1 −θ)N−x. [A.34] This likelihood is proportional to the product of the probability of individual out- comes: for example, the sequence T, H, H, T, H would have probability θ3(1 −θ)2. The term N! x!(N−x)! arises from the many possible orderings by which we could obtain x heads on N trials. This term does not depend on θ, so it can be ignored during estimation. In practice, we maximize the log-likelihood, which is a monotonic function of the like- lihood. Under the binomial distribution, the log-likelihood is a convex function of θ (see Jacob Eisenstein. Draft of November 13, 2018.
A.6. MODELING AND ESTIMATION 483

§ 2.4), so it can be maximized by taking the derivative and setting it equal to zero.

ℓ(θ) = x log θ + (N − x) log(1 − θ) [A.35]
∂ℓ(θ)/∂θ = x/θ − (N − x)/(1 − θ) [A.36]
(N − x)/(1 − θ) = x/θ [A.37]
(N − x)/x = (1 − θ)/θ [A.38]
N/x − 1 = 1/θ − 1 [A.39]
θ̂ = x/N. [A.40]

In this case, the maximum likelihood estimate is equal to x/N, the fraction of trials that came up heads. This intuitive solution is also known as the relative frequency estimate, since it is equal to the relative frequency of the outcome.

Is maximum likelihood estimation always the right choice? Suppose you conduct one trial, and get heads. Would you conclude that θ = 1, meaning that the coin is guaranteed to come up heads? If not, then you must have some prior expectation about θ. To incorporate this prior information, we can treat θ as a random variable, and use Bayes' rule:

p(θ | x; N) = p(x | θ) × p(θ) / p(x) [A.41]
            ∝ p(x | θ) × p(θ) [A.42]
θ̂ = argmax_θ p(x | θ) × p(θ). [A.43]

This is the maximum a posteriori (MAP) estimate. Given a form for p(θ), you can derive the MAP estimate using the same approach that was used to derive the maximum likelihood estimate.

Additional resources

A good introduction to probability theory is offered by Manning and Schütze (1999), which helped to motivate this section. For more detail, Sharon Goldwater provides another useful reference, http://homepages.inf.ed.ac.uk/sgwater/teaching/general/probability.pdf. A historical and philosophical perspective on probability is offered by Diaconis and Skyrms (2017).

Under contract with MIT Press, shared under CC-BY-NC-ND license.
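The closed-form solution can be verified numerically; the sketch below evaluates the binomial log-likelihood on a grid of θ values and confirms that it peaks at x/N. The counts are arbitrary.

```python
import numpy as np

x, N = 7, 10                       # 7 heads in 10 trials (arbitrary counts)
thetas = np.linspace(0.01, 0.99, 999)

# Log-likelihood up to a constant that does not depend on theta (Equation A.35).
log_lik = x * np.log(thetas) + (N - x) * np.log(1 - thetas)
print(thetas[np.argmax(log_lik)])  # approximately 0.7 = x / N
```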
Appendix B Numerical optimization

Unconstrained numerical optimization involves solving problems of the form,

min_{x∈R^D} f(x), [B.1]

where x ∈ R^D is a vector of D real numbers.

Differentiation is fundamental to numerical optimization. Suppose that at some x∗, every partial derivative of f is equal to 0: formally, ∂f/∂x_i |_{x∗} = 0. Then x∗ is said to be a critical point of f. If f is a convex function (defined in § 2.4), then the value of f(x∗) is equal to the global minimum of f iff x∗ is a critical point of f.

As an example, consider the convex function f(x) = (x − 2)^2 + 3, shown in Figure B.1a. The derivative is ∂f/∂x = 2x − 4. A unique minimum can be obtained by setting the derivative equal to zero and solving for x, obtaining x∗ = 2. Now consider the multivariate convex function f(x) = 1/2 ||x − [2, 1]⊤||^2, where ||x||^2 is the squared Euclidean norm. The partial

Figure B.1: Two functions with unique global minima: (a) the function f(x) = (x − 2)^2 + 3; (b) the function f(x) = |x| − 2 cos(x).

485
486 APPENDIX B. NUMERICAL OPTIMIZATION derivatives are, ∂d ∂x1 = x1 −2 [B.2] ∂d ∂x2 = x2 −1 [B.3] The unique minimum is x∗= [2, 1]⊤. For non-convex functions, critical points are not necessarily global minima. A local minimum x∗is a point at which the function takes a smaller value than at all nearby neighbors: formally, x∗is a local minimum if there is some positive ϵ such that f(x∗) ≤ f(x) for all x within distance ϵ of x∗. Figure B.1b shows the function f(x) = |x|−2 cos(x), which has many local minima, as well as a unique global minimum at x = 0. A critical point may also be the local or global maximum of the function; it may be a saddle point, which is a minimum with respect to at least one coordinate, and a maximum with respect at least one other coordinate; it may be an inflection point, which is neither or a minimum nor maximum. When available, the second derivative of f can help to distinguish these cases. B.1 Gradient descent For many convex functions, it is not possible to solve for x∗in closed form. In gradient descent, we compute a series of solutions, x(0), x(1), . . . by taking steps along the local gradient ∇x(t)f, which is the vector of partial derivatives of the function f, evaluated at the point x(t). Each solution x(t+1) is computed, x(t+1) ←x(t) −η(t)∇x(t)f. [B.4] where η(t) > 0 is a step size. If the step size is chosen appropriately, this procedure will find the global minimum of a differentiable convex function. For non-convex functions, gradient descent will find a local minimum. The extension to non-differentiable convex functions is discussed in § 2.4. B.2 Constrained optimization Optimization must often be performed under constraints: for example, when optimizing the parameters of a probability distribution, the probabilities of all events must sum to one. Constrained optimization problems can be written, min x f(x) [B.5] s.t. gc(x) ≤0, ∀c = 1, 2, . . . , C [B.6] Jacob Eisenstein. Draft of November 13, 2018.
B.3. EXAMPLE: PASSIVE-AGGRESSIVE ONLINE LEARNING 487

where each g_c(x) is a scalar function of x. For example, suppose that x must be non-negative, and that its sum cannot exceed a budget b. Then there are D + 1 inequality constraints,

g_i(x) = −x_i, ∀i = 1, 2, . . . , D [B.7]
g_{D+1}(x) = −b + Σ_{i=1}^D x_i. [B.8]

Inequality constraints can be combined with the original objective function f by forming a Lagrangian,

L(x, λ) = f(x) + Σ_{c=1}^C λ_c g_c(x), [B.9]

where λ_c is a Lagrange multiplier. For any Lagrangian, there is a corresponding dual form, which is a function of λ:

D(λ) = min_x L(x, λ). [B.10]

The Lagrangian L can be referred to as the primal form.

B.3 Example: Passive-aggressive online learning

Sometimes it is possible to solve a constrained optimization problem by manipulating the Lagrangian. One example is maximum-likelihood estimation of a Naïve Bayes probability model, as described in § 2.2.3. In that case, it is unnecessary to explicitly compute the Lagrange multiplier. Another example is illustrated by the passive-aggressive algorithm for online learning (Crammer et al., 2006). This algorithm is similar to the perceptron, but the goal at each step is to make the most conservative update that gives zero margin loss on the current example.1 Each update can be formulated as a constrained optimization over the weights θ:

min_θ 1/2 ||θ − θ^(i−1)||^2 [B.11]
s.t. ℓ^(i)(θ) = 0 [B.12]

where θ^(i−1) is the previous set of weights, and ℓ^(i)(θ) is the margin loss on instance i. As in § 2.4.1, this loss is defined as,

ℓ^(i)(θ) = 1 − θ · f(x^(i), y^(i)) + max_{y≠y^(i)} θ · f(x^(i), y). [B.13]

1This is the basis for the name of the algorithm: it is passive when the loss is zero, but it aggressively moves to make the loss zero when necessary.

Under contract with MIT Press, shared under CC-BY-NC-ND license.
488 APPENDIX B. NUMERICAL OPTIMIZATION

When the margin loss is zero for θ^(i−1), the optimal solution is θ∗ = θ^(i−1), so we will focus on the case where ℓ^(i)(θ^(i−1)) > 0. The Lagrangian for this problem is,

L(θ, λ) = 1/2 ||θ − θ^(i−1)||^2 + λ ℓ^(i)(θ), [B.14]

Holding λ constant, we can solve for θ by differentiating,

∇_θ L = θ − θ^(i−1) + λ ∂ℓ^(i)(θ)/∂θ [B.15]
θ∗ = θ^(i−1) + λδ, [B.16]

where δ = f(x^(i), y^(i)) − f(x^(i), ŷ) and ŷ = argmax_{y≠y^(i)} θ · f(x^(i), y). The Lagrange multiplier λ acts as the learning rate in a perceptron-style update to θ. We can solve for λ by plugging θ∗ back into the Lagrangian, obtaining the dual function,

D(λ) = 1/2 ||θ^(i−1) + λδ − θ^(i−1)||^2 + λ(1 − (θ^(i−1) + λδ) · δ) [B.17]
     = (λ^2/2) ||δ||^2 − λ^2 ||δ||^2 + λ(1 − θ^(i−1) · δ) [B.18]
     = −(λ^2/2) ||δ||^2 + λ ℓ^(i)(θ^(i−1)). [B.19]

Differentiating and solving for λ,

∂D/∂λ = −λ ||δ||^2 + ℓ^(i)(θ^(i−1)) [B.20]
λ∗ = ℓ^(i)(θ^(i−1)) / ||δ||^2. [B.21]

The complete update equation is therefore:

θ∗ = θ^(i−1) + [ℓ^(i)(θ^(i−1)) / ||f(x^(i), y^(i)) − f(x^(i), ŷ)||^2] (f(x^(i), y^(i)) − f(x^(i), ŷ)). [B.22]

This learning rate makes intuitive sense. The numerator grows with the loss; the denominator grows with the norm of the difference between the feature vectors associated with the correct and predicted label. If this norm is large, then the step with respect to each feature should be small, and vice versa.

Jacob Eisenstein. Draft of November 13, 2018.
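A compact sketch of the passive-aggressive update in Equation B.22, using a bag-of-words feature function f(x, y) that conjoins each word with the label. The toy instance, label set, and initial weights are invented for illustration.

```python
from collections import Counter

labels = ["POS", "NEG"]

def features(words, y):
    """Conjoin each word with the label, as in a bag-of-words classifier."""
    return Counter((w, y) for w in words)

def score(theta, feats):
    return sum(theta.get(k, 0.0) * v for k, v in feats.items())

def pa_update(theta, words, y_true):
    """One passive-aggressive update (Equation B.22)."""
    f_true = features(words, y_true)
    y_hat = max((y for y in labels if y != y_true),
                key=lambda y: score(theta, features(words, y)))
    f_hat = features(words, y_hat)
    loss = max(0.0, 1.0 - score(theta, f_true) + score(theta, f_hat))
    if loss == 0.0:
        return theta                  # passive: the constraint already holds
    delta = Counter(f_true)
    delta.subtract(f_hat)             # delta = f(x, y_true) - f(x, y_hat)
    rate = loss / sum(v * v for v in delta.values())   # lambda* from Equation B.21
    new_theta = dict(theta)
    for k, v in delta.items():
        new_theta[k] = new_theta.get(k, 0.0) + rate * v
    return new_theta

theta = pa_update({}, ["great", "movie"], "POS")
margin = (score(theta, features(["great", "movie"], "POS"))
          - score(theta, features(["great", "movie"], "NEG")))
print(margin)   # 1.0: the margin loss on this example is now zero
```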