Chapter 12

Logical semantics

The previous few chapters have focused on building systems that reconstruct the syntax of natural language — its structural organization — through tagging and parsing. But some of the most exciting and promising potential applications of language technology involve going beyond syntax to semantics — the underlying meaning of the text:

• Answering questions, such as where is the nearest coffeeshop? or what is the middle name of the mother of the 44th President of the United States?
• Building a robot that can follow natural language instructions to execute tasks.
• Translating a sentence from one language into another, while preserving the underlying meaning.
• Fact-checking an article by searching the web for contradictory evidence.
• Logic-checking an argument by identifying contradictions, ambiguity, and unsupported assertions.

Semantic analysis involves converting natural language into a meaning representation. To be useful, a meaning representation must meet several criteria:

• c1: it should be unambiguous: unlike natural language, there should be exactly one meaning per statement;
• c2: it should provide a way to link language to external knowledge, observations, and actions;
• c3: it should support computational inference, so that meanings can be combined to derive additional knowledge;
• c4: it should be expressive enough to cover the full range of things that people talk about in natural language.
Much more than this can be said about the question of how best to represent knowledge for computation (e.g., Sowa, 2000), but this chapter will focus on these four criteria.

12.1 Meaning and denotation

The first criterion for a meaning representation is that statements in the representation should be unambiguous — they should have only one possible interpretation. Natural language does not have this property: as we saw in chapter 10, sentences like cats scratch people with claws have multiple interpretations.

But what does it mean for a statement to be unambiguous? Programming languages provide a useful example: the output of a program is completely specified by the rules of the language and the properties of the environment in which the program is run. For example, the python code 5 + 3 will have the output 8, as will the codes (4*4)-(3*3)+1 and ((8)). This output is known as the denotation of the program, and can be written as,

⟦5+3⟧ = ⟦(4*4)-(3*3)+1⟧ = ⟦((8))⟧ = 8.  [12.1]

The denotations of these arithmetic expressions are determined by the meaning of the constants (e.g., 5, 3) and the relations (e.g., +, *, (, )).

Now let's consider another snippet of python code, double(4). The denotation of this code could be ⟦double(4)⟧ = 8, or it could be ⟦double(4)⟧ = 44 — it depends on the meaning of double. This meaning is defined in a world model M as an infinite set of pairs. We write the denotation with respect to model M as ⟦·⟧M, e.g.,

⟦double⟧M = {(0, 0), (1, 2), (2, 4), . . .}.

The world model would also define the (infinite) list of constants, e.g., {0, 1, 2, . . .}. As long as the denotation of a string φ in model M can be computed unambiguously, the language can be said to be unambiguous.

This approach to meaning is known as model-theoretic semantics, and it addresses not only criterion c1 (no ambiguity), but also c2 (connecting language to external knowledge, observations, and actions).
For example, we can connect a representation of the meaning of a statement like the capital of Georgia with a world model that includes a knowledge base of geographical facts, obtaining the denotation Atlanta. We might populate a world model by detecting and analyzing the objects in an image, and then use this world model to evaluate propositions like a man is riding a moose. Another desirable property of model-theoretic semantics is that when the facts change, the denotations change too: the meaning representation of President of the USA would have a different denotation in the model M2014 than it would in M2022.

Jacob Eisenstein. Draft of November 13, 2018.
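The world model and denotation function can be sketched in a few lines of python. This is purely illustrative: the names (WORLD_MODEL, denote_function) and the finite truncation of the model are assumptions of this sketch, not anything defined in the chapter.

```python
# A toy world model, mapping each symbol to its denotation. In the text the
# model is infinite; here it is truncated to a finite fragment.
WORLD_MODEL = {
    "double": {(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)},        # relation as a set of pairs
    "capital": {("georgia", "atlanta"), ("france", "paris")},
}

def denote_function(name, arg, model):
    """Return the unique output paired with `arg` in the relation `name`."""
    outputs = {y for (x, y) in model[name] if x == arg}
    assert len(outputs) == 1, "denotation must be unambiguous"
    return outputs.pop()

print(denote_function("double", 4, WORLD_MODEL))           # 8
print(denote_function("capital", "georgia", WORLD_MODEL))  # atlanta
```

Under this model, ⟦double(4)⟧ = 8; swapping in a different set of pairs for double would change the denotation, just as the text describes for President of the USA.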
12.2 Logical representations of meaning

Criterion c3 requires that the meaning representation support inference — for example, automatically deducing new facts from known premises. While many representations have been proposed that meet these criteria, the most mature is the language of first-order logic.¹

12.2.1 Propositional logic

The bare bones of logical meaning representation are Boolean operations on propositions:

Propositional symbols. Greek symbols like φ and ψ will be used to represent propositions, which are statements that are either true or false. For example, φ may correspond to the proposition, bagels are delicious.

Boolean operators. We can build up more complex propositional formulas from Boolean operators. These include:

• Negation, ¬φ, which is true if φ is false.
• Conjunction, φ ∧ ψ, which is true if both φ and ψ are true.
• Disjunction, φ ∨ ψ, which is true if at least one of φ and ψ is true.
• Implication, φ ⇒ ψ, which is true unless φ is true and ψ is false. Implication has identical truth conditions to ¬φ ∨ ψ.
• Equivalence, φ ⇔ ψ, which is true if φ and ψ are both true or both false. Equivalence has identical truth conditions to (φ ⇒ ψ) ∧ (ψ ⇒ φ).

It is not strictly necessary to have all five Boolean operators: readers familiar with Boolean logic will know that it is possible to construct all other operators from either the NAND (not-and) or NOR (not-or) operators. Nonetheless, it is clearest to use all five operators.

From the truth conditions for these operators, it is possible to define a number of "laws" for these Boolean operators, such as,

• Commutativity: φ ∧ ψ = ψ ∧ φ, φ ∨ ψ = ψ ∨ φ
• Associativity: φ ∧ (ψ ∧ χ) = (φ ∧ ψ) ∧ χ, φ ∨ (ψ ∨ χ) = (φ ∨ ψ) ∨ χ
• Complementation: φ ∧ ¬φ = ⊥, φ ∨ ¬φ = ⊤,

where ⊤ indicates a true proposition and ⊥ indicates a false proposition.
¹Alternatives include the "variable-free" representation used in semantic parsing of geographical queries (Zelle and Mooney, 1996) and robotic control (Ge and Mooney, 2005), and dependency-based compositional semantics (Liang et al., 2013).

Under contract with MIT Press, shared under CC-BY-NC-ND license.
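The truth-condition identities stated above can be verified mechanically by enumerating all assignments. A quick sketch (the helper names implies and iff are mine, not the chapter's):

```python
from itertools import product

def implies(p, q):
    # p ⇒ q: true unless p is true and q is false
    return (not p) or q

def iff(p, q):
    # p ⇔ q: true if both true or both false
    return p == q

for phi, psi in product([True, False], repeat=2):
    assert implies(phi, psi) == ((not phi) or psi)                     # φ ⇒ ψ  ≡  ¬φ ∨ ψ
    assert iff(phi, psi) == (implies(phi, psi) and implies(psi, phi))  # φ ⇔ ψ  ≡  (φ⇒ψ)∧(ψ⇒φ)
print("identities hold for all truth assignments")
```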
These laws can be combined to derive further equivalences, which can support logical inferences. For example, suppose φ = The music is loud and ψ = Max can't sleep. Then if we are given,

φ ⇒ ψ    If the music is loud, Max can't sleep.
φ        The music is loud.

we can derive ψ (Max can't sleep) by application of modus ponens, which is one of a set of inference rules that can be derived from more basic laws and used to manipulate propositional formulas. Automated theorem provers are capable of applying inference rules to a set of premises to derive desired propositions (Loveland, 2016).

12.2.2 First-order logic

Propositional logic is so named because it treats propositions as its base units. However, criterion c4 states that our meaning representation should be sufficiently expressive. Now consider the sentence pair,

(12.1) If anyone is making noise, then Max can't sleep.
       Abigail is making noise.

People are capable of making inferences from this sentence pair, but such inferences require formal tools that are beyond propositional logic. To understand the relationship between the statement anyone is making noise and the statement Abigail is making noise, our meaning representation requires the additional machinery of first-order logic (FOL).

In FOL, logical propositions can be constructed from relationships between entities. Specifically, FOL extends propositional logic with the following classes of terms:

Constants. These are elements that name individual entities in the model, such as MAX and ABIGAIL. The denotation of each constant in a model M is an element in the model, e.g., ⟦MAX⟧ = m and ⟦ABIGAIL⟧ = a.

Relations. Relations can be thought of as sets of entities, or sets of tuples. For example, the relation CAN-SLEEP is defined as the set of entities who can sleep, and has the denotation ⟦CAN-SLEEP⟧ = {a, m, . . .}. To test the truth value of the proposition CAN-SLEEP(MAX), we ask whether ⟦MAX⟧ ∈ ⟦CAN-SLEEP⟧.
Logical relations that are defined over sets of entities are sometimes called properties. Relations may also be ordered tuples of entities. For example, BROTHER(MAX, ABIGAIL) expresses the proposition that MAX is the brother of ABIGAIL. The denotation of such relations is a set of tuples, ⟦BROTHER⟧ = {(m, a), (x, y), . . .}. To test the truth value of the proposition BROTHER(MAX, ABIGAIL), we ask whether the tuple (⟦MAX⟧, ⟦ABIGAIL⟧) is in the denotation ⟦BROTHER⟧.
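These set-theoretic truth conditions translate directly into code. In the following sketch, the entities m and a mirror the running example; the dictionaries and the holds helper are assumptions of this illustration:

```python
# A finite first-order model: constants denote entities, relations denote
# sets of entities (properties) or sets of tuples.
CONSTANTS = {"MAX": "m", "ABIGAIL": "a"}
RELATIONS = {
    "CAN-SLEEP": {"a"},        # a property: the set of entities who can sleep
    "BROTHER": {("m", "a")},   # an ordered relation: Max is the brother of Abigail
}

def holds(relation, *constants):
    """Truth of a proposition: is the tuple of denotations in the relation?"""
    args = tuple(CONSTANTS[c] for c in constants)
    denotation = RELATIONS[relation]
    return (args[0] in denotation) if len(args) == 1 else (args in denotation)

print(holds("BROTHER", "MAX", "ABIGAIL"))  # True
print(holds("CAN-SLEEP", "MAX"))           # False
```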
Using constants and relations, it is possible to express statements like Max can't sleep and Max is Abigail's brother:

¬CAN-SLEEP(MAX)
BROTHER(MAX, ABIGAIL).

These statements can also be combined using Boolean operators, such as,

(BROTHER(MAX, ABIGAIL) ∨ BROTHER(MAX, STEVE)) ⇒ ¬CAN-SLEEP(MAX).

This fragment of first-order logic permits only statements about specific entities. To support inferences about statements like If anyone is making noise, then Max can't sleep, two more elements must be added to the meaning representation:

Variables. Variables are mechanisms for referring to entities that are not locally specified. We can then write CAN-SLEEP(x) or BROTHER(x, ABIGAIL). In these cases, x is a free variable, meaning that we have not committed to any particular assignment.

Quantifiers. Variables are bound by quantifiers. There are two quantifiers in first-order logic.²

• The existential quantifier ∃, which indicates that there must be at least one entity to which the variable can bind. For example, the statement ∃x MAKES-NOISE(x) indicates that there is at least one entity for which MAKES-NOISE is true.

• The universal quantifier ∀, which indicates that the variable must be able to bind to any entity in the model. For example, the statement,

MAKES-NOISE(ABIGAIL) ⇒ (∀x ¬CAN-SLEEP(x))  [12.3]

asserts that if Abigail makes noise, no one can sleep.

The expressions ∃x and ∀x make x into a bound variable. A formula that contains no free variables is a sentence.

Functions. Functions map from entities to entities, e.g., ⟦CAPITAL-OF(GEORGIA)⟧ = ⟦ATLANTA⟧. With functions, it is convenient to add an equality operator, supporting statements like,

∀x∃y MOTHER-OF(x) = DAUGHTER-OF(y).  [12.4]

²In first-order logic, it is possible to quantify only over entities. In second-order logic, it is possible to quantify over properties.
This makes it possible to represent statements like Butch has every property that a good boxer has (example from Blackburn and Bos, 2005),

∀P∀x ((GOOD-BOXER(x) ⇒ P(x)) ⇒ P(BUTCH)).  [12.2]
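Over a finite model, the truth conditions for the two quantifiers reduce to any and all. A sketch (the entities and relations are toy data; exists and forall are hypothetical helpers, not the book's code):

```python
# Quantification over a small finite domain of entities.
ENTITIES = {"m", "a", "s"}
MAKES_NOISE = {"a"}
CAN_SLEEP = set()   # suppose no one can sleep

def exists(pred):
    """∃x pred(x): at least one entity satisfies pred."""
    return any(pred(e) for e in ENTITIES)

def forall(pred):
    """∀x pred(x): every entity satisfies pred."""
    return all(pred(e) for e in ENTITIES)

print(exists(lambda x: x in MAKES_NOISE))   # ∃x MAKES-NOISE(x): True
# MAKES-NOISE(ABIGAIL) ⇒ (∀x ¬CAN-SLEEP(x))
abigail_noisy = "a" in MAKES_NOISE
print((not abigail_noisy) or forall(lambda x: x not in CAN_SLEEP))  # True
```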
Note that MOTHER-OF is a functional analogue of the relation MOTHER, so that MOTHER-OF(x) = y if MOTHER(x, y). Any logical formula that uses functions can be rewritten using only relations and quantification. For example,

MAKES-NOISE(MOTHER-OF(ABIGAIL))  [12.5]

can be rewritten as ∃x MAKES-NOISE(x) ∧ MOTHER(x, ABIGAIL).

An important property of quantifiers is that the order can matter. Unfortunately, natural language is rarely clear about this! The issue is demonstrated by examples like everyone speaks a language, which has the following interpretations:

∀x∃y SPEAKS(x, y)   [12.6]
∃y∀x SPEAKS(x, y).  [12.7]

In the first case, y may refer to several different languages, while in the second case, there is a single y that is spoken by everyone.

Truth-conditional semantics

One way to look at the meaning of an FOL sentence φ is as a set of truth conditions, or models under which φ is satisfied. But how to determine whether a sentence is true or false in a given model? We will approach this inductively, starting with a predicate applied to a tuple of constants. The truth of such a sentence depends on whether the tuple of denotations of the constants is in the denotation of the predicate. For example, CAPITAL(GEORGIA, ATLANTA) is true in model M iff,

(⟦GEORGIA⟧M, ⟦ATLANTA⟧M) ∈ ⟦CAPITAL⟧M.  [12.8]

The Boolean operators ∧, ∨, . . . provide ways to construct more complicated sentences, and the truth of such statements can be assessed based on the truth tables associated with these operators. The statement ∃xφ is true if there is some assignment of the variable x to an entity in the model such that φ is true; the statement ∀xφ is true if φ is true under all possible assignments of x. More formally, we would say that φ is satisfied under M, written as M |= φ.

Truth conditional semantics allows us to define several other properties of sentences and pairs of sentences.
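This inductive truth definition can be implemented as a tiny model checker over a finite model. Everything below (the tuple encoding of formulas, the model's contents, the satisfies function) is an illustrative sketch, not the book's code; real FOL models may be infinite, so this only demonstrates the truth conditions.

```python
# Formulas as nested tuples: ("not", f), ("and", f, g), ("exists", var, f),
# ("forall", var, f), or an atomic predicate like ("CAPITAL", "GEORGIA", "x").
ENTITIES = {"atl", "geo", "sav"}
MODEL = {"CAPITAL": {("geo", "atl")}}
CONSTANTS = {"GEORGIA": "geo", "ATLANTA": "atl"}

def satisfies(formula, assignment):
    op = formula[0]
    if op == "not":
        return not satisfies(formula[1], assignment)
    if op == "and":
        return satisfies(formula[1], assignment) and satisfies(formula[2], assignment)
    if op in ("exists", "forall"):
        _, var, body = formula
        results = (satisfies(body, {**assignment, var: e}) for e in ENTITIES)
        return any(results) if op == "exists" else all(results)
    # atomic predicate: look up each argument as a variable, then as a constant
    args = tuple(assignment.get(a, CONSTANTS.get(a, a)) for a in formula[1:])
    return args in MODEL[op]

print(satisfies(("CAPITAL", "GEORGIA", "ATLANTA"), {}))             # True
print(satisfies(("exists", "x", ("CAPITAL", "GEORGIA", "x")), {}))  # True
```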
Suppose that in every M under which φ is satisfied, another formula ψ is also satisfied; then φ entails ψ, which is also written as φ |= ψ. For example,

CAPITAL(GEORGIA, ATLANTA) |= ∃x CAPITAL(GEORGIA, x).  [12.9]

A statement that is satisfied under any model, such as φ ∨ ¬φ, is valid, written |= (φ ∨ ¬φ). A statement that is not satisfied under any model, such as φ ∧ ¬φ, is unsatisfiable,
or inconsistent. A model checker is a program that determines whether a sentence φ is satisfied in M. A model builder is a program that constructs a model in which φ is satisfied. The problems of checking for consistency and validity in first-order logic are undecidable, meaning that there is no algorithm that can automatically determine whether an FOL formula is valid or inconsistent.

Inference in first-order logic

Our original goal was to support inferences that combine general statements like If anyone is making noise, then Max can't sleep with specific statements like Abigail is making noise. We can now represent such statements in first-order logic, but how are we to perform the inference that Max can't sleep? One approach is to use "generalized" versions of propositional inference rules like modus ponens, which can be applied to FOL formulas. By repeatedly applying such inference rules to a knowledge base of facts, it is possible to produce proofs of desired propositions. To find the right sequence of inferences to derive a desired theorem, classical artificial intelligence search algorithms like backward chaining can be applied. Such algorithms are implemented in interpreters for the prolog logic programming language (Pereira and Shieber, 2002).

12.3 Semantic parsing and the lambda calculus

The previous section laid out a lot of formal machinery; the remainder of this chapter links these formalisms back to natural language. Given an English sentence like Alex likes Brit, how can we obtain the desired first-order logical representation, LIKES(ALEX, BRIT)? This is the task of semantic parsing. Just as a syntactic parser is a function from a natural language sentence to a syntactic structure such as a phrase structure tree, a semantic parser is a function from natural language to logical formulas.
As in syntactic analysis, semantic parsing is difficult because the space of inputs and outputs is very large, and their interaction is complex. Our best hope is that, like syntactic parsing, semantic parsing can somehow be decomposed into simpler sub-problems. This idea, usually attributed to the German philosopher Gottlob Frege, is called the principle of compositionality: the meaning of a complex expression is a function of the meanings of that expression's constituent parts. We will define these "constituent parts" as syntactic constituents: noun phrases and verb phrases. These constituents are combined using function application: if the syntactic parse contains the production x → y z, then the semantics of x, written x.sem, will be computed as a function of the semantics of the
constituents, y.sem and z.sem.³ ⁴

    S : likes(alex, brit)
      NP : alex        (Alex)
      VP : ?
        V : ?          (likes)
        NP : brit      (Brit)

Figure 12.1: The principle of compositionality requires that we identify meanings for the constituents likes and likes Brit that will make it possible to compute the meaning for the entire sentence.

12.3.1 The lambda calculus

Let's see how this works for a simple sentence like Alex likes Brit, whose syntactic structure is shown in Figure 12.1. Our goal is the formula, LIKES(ALEX, BRIT), and it is clear that the meaning of the constituents Alex and Brit should be ALEX and BRIT. That leaves two more constituents: the verb likes, and the verb phrase likes Brit. The meanings of these units must be defined in a way that makes it possible to recover the desired meaning for the entire sentence by function application. If the meanings of Alex and Brit are constants, then the meanings of likes and likes Brit must be functional expressions, which can be applied to their siblings to produce the desired analyses.

Modeling these partial analyses requires extending the first-order logic meaning representation. We do this by adding lambda expressions, which are descriptions of anonymous functions,⁵ e.g.,

λx.LIKES(x, BRIT).  [12.10]

This functional expression is the meaning of the verb phrase likes Brit; it takes a single argument, and returns the result of substituting that argument for x in the expression

³§ 9.3.2 briefly discusses Combinatory Categorial Grammar (CCG) as an alternative to a phrase-structure analysis of syntax. CCG is argued to be particularly well-suited to semantic parsing (Hockenmaier and Steedman, 2007), and is used in much of the contemporary work on machine learning for semantic parsing, summarized in § 12.4.
⁴The approach of algorithmically building up meaning representations from a series of operations on the syntactic structure of a sentence is generally attributed to the philosopher Richard Montague, who published a series of influential papers on the topic in the early 1970s (e.g., Montague, 1973).

⁵Formally, all first-order logic formulas are lambda expressions; in addition, if φ is a lambda expression, then λx.φ is also a lambda expression. Readers who are familiar with functional programming will recognize lambda expressions from their use in programming languages such as Lisp and Python.
LIKES(x, BRIT). We write this substitution as,

(λx.LIKES(x, BRIT))@ALEX = LIKES(ALEX, BRIT),  [12.11]

with the symbol "@" indicating function application. Function application in the lambda calculus is sometimes called β-reduction or β-conversion. The expression φ@ψ indicates a function application to be performed by β-reduction, and φ(ψ) indicates a function or predicate in the final logical form.

Equation 12.11 shows how to obtain the desired semantics for the sentence Alex likes Brit: by applying the lambda expression λx.LIKES(x, BRIT) to the logical constant ALEX. This rule of composition can be specified in a syntactic-semantic grammar, in which syntactic productions are paired with semantic operations. For the syntactic production S → NP VP, we have the semantic rule VP.sem@NP.sem.

The meaning of the transitive verb phrase likes Brit can also be obtained by function application on its syntactic constituents. For the syntactic production VP → V NP, we apply the semantic rule,

VP.sem = (V.sem)@NP.sem              [12.12]
       = (λy.λx.LIKES(x, y))@(BRIT)  [12.13]
       = λx.LIKES(x, BRIT).          [12.14]

Thus, the meaning of the transitive verb likes is a lambda expression whose output is another lambda expression: it takes y as an argument to fill in one of the slots in the LIKES relation, and returns a lambda expression that is ready to take an argument to fill in the other slot.⁶

Table 12.1 shows a minimal syntactic-semantic grammar fragment, G1. The complete derivation of Alex likes Brit in G1 is shown in Figure 12.2. In addition to the transitive verb likes, the grammar also includes the intransitive verb sleeps; it should be clear how to derive the meaning of sentences like Alex sleeps. For verbs that can be either transitive or intransitive, such as eats, we would have two terminal productions, one for each sense (terminal productions are also called the lexical entries).
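The composition rules of G1 can be mimicked directly with python lambdas, building logical forms as strings; β-reduction is then just function application. This is a toy sketch (the dictionary names and string encoding are mine, not the book's):

```python
# Lexical entries of G1 as functions (semantics only; syntax is hard-coded).
Vt = {"likes": lambda y: lambda x: f"LIKES({x},{y})"}  # λy.λx.LIKES(x, y)
Vi = {"sleeps": lambda x: f"SLEEPS({x})"}              # λx.SLEEPS(x)
NP = {"Alex": "ALEX", "Brit": "BRIT"}

# VP -> Vt NP : Vt.sem @ NP.sem   (Equations 12.12-12.14)
vp = Vt["likes"](NP["Brit"])       # λx.LIKES(x, BRIT)
# S -> NP VP : VP.sem @ NP.sem    (Equation 12.11)
print(vp(NP["Alex"]))              # LIKES(ALEX,BRIT)
print(Vi["sleeps"](NP["Alex"]))    # SLEEPS(ALEX)
```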
Indeed, most of the grammar is in the lexicon (the terminal productions), since these productions select the basic units of the semantic interpretation.

12.3.2 Quantification

Things get more complicated when we move from sentences about named entities to sentences that involve more general noun phrases. Let's consider the example, A dog sleeps,

⁶This can be written in a few different ways. The notation λy, x.LIKES(x, y) is a somewhat informal way to indicate a lambda expression that takes two arguments; this would be acceptable in functional programming. Logicians (e.g., Carpenter, 1997) often prefer the more formal notation λy.λx.LIKES(x)(y), indicating that each lambda expression takes exactly one argument.
    S : likes(alex, brit)
      NP : alex                   (Alex)
      VP : λx.likes(x, brit)
        Vt : λy.λx.likes(x, y)    (likes)
        NP : brit                 (Brit)

Figure 12.2: Derivation of the semantic representation for Alex likes Brit in the grammar G1.

S  → NP VP     VP.sem@NP.sem
VP → Vt NP     Vt.sem@NP.sem
VP → Vi        Vi.sem
Vt → likes     λy.λx.LIKES(x, y)
Vi → sleeps    λx.SLEEPS(x)
NP → Alex      ALEX
NP → Brit      BRIT

Table 12.1: G1, a minimal syntactic-semantic context-free grammar

which has the meaning ∃x DOG(x) ∧ SLEEPS(x). Clearly, the DOG relation will be introduced by the word dog, and the SLEEPS relation will be introduced by the word sleeps. The existential quantifier ∃ must be introduced by the lexical entry for the determiner a.⁷ However, this seems problematic for the compositional approach taken in the grammar G1: if the semantics of the noun phrase a dog is an existentially quantified expression, how can it be the argument to the semantics of the verb sleeps, which expects an entity? And where does the logical conjunction come from?

There are a few different approaches to handling these issues.⁸ We will begin by reversing the semantic relationship between subject NPs and VPs, so that the production S → NP VP has the semantics NP.sem@VP.sem: the meaning of the sentence is now the semantics of the noun phrase applied to the verb phrase. The implications of this change are best illustrated by exploring the derivation of the example, shown in Figure 12.3. Let's

⁷Conversely, the sentence Every dog sleeps would involve a universal quantifier, ∀x DOG(x) ⇒ SLEEPS(x). The definite article the requires more consideration, since the dog must refer to some dog which is uniquely identifiable, perhaps from contextual information external to the sentence. Carpenter (1997, pp. 96-100) summarizes recent approaches to handling definite descriptions.

⁸Carpenter (1997) offers an alternative treatment based on combinatory categorial grammar.
start with the indefinite article a, to which we assign the rather intimidating semantics,

λP.λQ.∃x P(x) ∧ Q(x).  [12.15]

This is a lambda expression that takes two relations as arguments, P and Q. The relation P is scoped to the outer lambda expression, so it will be provided by the immediately adjacent noun, which in this case is DOG. Thus, the noun phrase a dog has the semantics,

NP.sem = DET.sem@NN.sem                 [12.16]
       = (λP.λQ.∃x P(x) ∧ Q(x))@(DOG)  [12.17]
       = λQ.∃x DOG(x) ∧ Q(x).          [12.18]

This is a lambda expression that is expecting another relation, Q, which will be provided by the verb phrase, SLEEPS. This gives the desired analysis, ∃x DOG(x) ∧ SLEEPS(x).⁹

    S : ∃x dog(x) ∧ sleeps(x)
      NP : λQ.∃x dog(x) ∧ Q(x)
        DT : λP.λQ.∃x P(x) ∧ Q(x)    (A)
        NN : dog                     (dog)
      VP : λx.sleeps(x)
        Vi : λx.sleeps(x)            (sleeps)

Figure 12.3: Derivation of the semantic representation for A dog sleeps, in grammar G2

If noun phrases like a dog are interpreted as lambda expressions, then proper nouns like Alex must be treated in the same way. This is achieved by type-raising from constants to lambda expressions, x ⇒ λP.P(x). After type-raising, the semantics of Alex is λP.P(ALEX) — a lambda expression that expects a relation to tell us something about ALEX.¹⁰ Again, make sure you see how the analysis in Figure 12.3 can be applied to the sentence Alex sleeps.

⁹When applying β-reduction to arguments that are themselves lambda expressions, be sure to use unique variable names to avoid confusion. For example, it is important to distinguish the x in the semantics for a from the x in the semantics for likes. Variable names are abstractions, and can always be changed — this is known as α-conversion. For example, λx.P(x) can be converted to λy.P(y), etc.

¹⁰Compositional semantic analysis is often supported by type systems, which make it possible to check whether a given function application is valid. The base types are entities e and truth values t.
A property, such as DOG, is a function from entities to truth values, so its type is written ⟨e, t⟩. A transitive verb has type ⟨e, ⟨e, t⟩⟩: after receiving the first entity (the direct object), it returns a function from entities to truth values, which will be applied to the subject of the sentence. The type-raising operation x ⇒ λP.P(x) corresponds to a change in type from e to ⟨⟨e, t⟩, t⟩: it expects a function from entities to truth values, and returns a truth value.
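The quantified-NP semantics of G2 can likewise be sketched with python lambdas over string-valued logical forms, with predicates encoded as functions from a variable name to a formula. The encoding is my own illustration, not the book's implementation:

```python
# Determiners and predicates in the style of G2, building logical forms as strings.
A     = lambda P: lambda Q: f"∃x {P('x')} ∧ {Q('x')}"      # λP.λQ.∃x P(x) ∧ Q(x)
EVERY = lambda P: lambda Q: f"∀x ({P('x')} ⇒ {Q('x')})"    # λP.λQ.∀x (P(x) ⇒ Q(x))
DOG    = lambda v: f"DOG({v})"
SLEEPS = lambda v: f"SLEEPS({v})"

a_dog = A(DOG)                 # NP -> DT NN : DT.sem @ NN.sem
print(a_dog(SLEEPS))           # S -> NP VP : NP.sem @ VP.sem  ->  ∃x DOG(x) ∧ SLEEPS(x)

ALEX = lambda P: P("ALEX")     # type-raised proper noun: λP.P(ALEX)
print(ALEX(SLEEPS))            # SLEEPS(ALEX)
print(EVERY(DOG)(SLEEPS))      # ∀x (DOG(x) ⇒ SLEEPS(x))
```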
Direct objects are handled by applying the same type-raising operation to transitive verbs: the meaning of verbs such as likes is raised to,

λP.λx.P(λy.LIKES(x, y))  [12.19]

As a result, we can keep the verb phrase production VP.sem = V.sem@NP.sem, knowing that the direct object will provide the function P in Equation 12.19. To see how this works, let's analyze the verb phrase likes a dog. After uniquely relabeling each lambda variable,

VP.sem = V.sem@NP.sem
       = (λP.λx.P(λy.LIKES(x, y)))@(λQ.∃z DOG(z) ∧ Q(z))
       = λx.(λQ.∃z DOG(z) ∧ Q(z))@(λy.LIKES(x, y))
       = λx.∃z DOG(z) ∧ (λy.LIKES(x, y))@z
       = λx.∃z DOG(z) ∧ LIKES(x, z).

These changes are summarized in the revised grammar G2, shown in Table 12.2. Figure 12.4 shows a derivation that involves a transitive verb, an indefinite noun phrase, and a proper noun.

    S : ∃x dog(x) ∧ likes(x, alex)
      NP : λQ.∃x dog(x) ∧ Q(x)
        DT : λP.λQ.∃x P(x) ∧ Q(x)     (A)
        NN : dog                      (dog)
      VP : λx.likes(x, alex)
        Vt : λP.λx.P(λy.likes(x, y))  (likes)
        NP : λP.P(alex)
          NNP : alex                  (Alex)

Figure 12.4: Derivation of the semantic representation for A dog likes Alex.

12.4 Learning semantic parsers

As with syntactic parsing, any syntactic-semantic grammar with sufficient coverage risks producing many possible analyses for any given sentence. Machine learning is the dominant approach to selecting a single analysis. We will focus on algorithms that learn to score logical forms by attaching weights to features of their derivations (Zettlemoyer and Collins, 2005). Alternative approaches include transition-based parsing (Zelle and Mooney, 1996; Misra and Artzi, 2016) and methods inspired by machine translation (Wong and Mooney, 2006). Methods also differ in the form of supervision used for learning, which can range from complete derivations to much more limited training signals. We will begin with the case of complete supervision, and then consider how learning is still possible even when seemingly key information is missing.
S   → NP VP      NP.sem@VP.sem
VP  → Vt NP      Vt.sem@NP.sem
VP  → Vi         Vi.sem
NP  → DET NN     DET.sem@NN.sem
NP  → NNP        λP.P(NNP.sem)
DET → a          λP.λQ.∃x P(x) ∧ Q(x)
DET → every      λP.λQ.∀x (P(x) ⇒ Q(x))
Vt  → likes      λP.λx.P(λy.LIKES(x, y))
Vi  → sleeps     λx.SLEEPS(x)
NN  → dog        DOG
NNP → Alex       ALEX
NNP → Brit       BRIT

Table 12.2: G2, a syntactic-semantic context-free grammar fragment, which supports quantified noun phrases

Datasets

Early work on semantic parsing focused on natural language expressions of geographical database queries, such as What states border Texas. The GeoQuery dataset of Zelle and Mooney (1996) was originally coded in prolog, but has subsequently been expanded and converted into the SQL database query language by Popescu et al. (2003) and into first-order logic with lambda calculus by Zettlemoyer and Collins (2005), providing logical forms like λx.STATE(x) ∧ BORDERS(x, TEXAS). Another early dataset consists of instructions for RoboCup robot soccer teams (Kate et al., 2005). More recent work has focused on broader domains, such as the Freebase database (Bollacker et al., 2008), for which queries have been annotated by Krishnamurthy and Mitchell (2012) and Cai and Yates (2013). Other recent datasets include child-directed speech (Kwiatkowski et al., 2012) and elementary school science exams (Krishnamurthy, 2016).

12.4.1 Learning from derivations

Let w(i) indicate a sequence of text, and let y(i) indicate the desired logical form. For example:

w(i) = Alex eats shoots and leaves
y(i) = EATS(ALEX, SHOOTS) ∧ EATS(ALEX, LEAVES)

In the standard supervised learning paradigm that was introduced in § 2.3, we first define a feature function, f(w, y), and then learn weights on these features, so that y(i) = argmax_y θ · f(w(i), y). The weight vector θ is learned by comparing the features of the true label f(w(i), y(i)) against either the features of the predicted label f(w(i), ŷ) (perceptron,
support vector machine) or the expected feature vector E_{y|w}[f(w(i), y)] (logistic regression).

While this basic framework seems similar to discriminative syntactic parsing, there is a crucial difference. In (context-free) syntactic parsing, the annotation y(i) contains all of the syntactic productions; indeed, the task of identifying the correct set of productions is identical to the task of identifying the syntactic structure. In semantic parsing, this is not the case: the logical form EATS(ALEX, SHOOTS) ∧ EATS(ALEX, LEAVES) does not reveal the syntactic-semantic productions that were used to obtain it. Indeed, there may be spurious ambiguity, so that a single logical form can be reached by multiple derivations. (We previously encountered spurious ambiguity in transition-based dependency parsing, § 11.3.2.)

These ideas can be formalized by introducing an additional variable z, representing the derivation of the logical form y from the text w. Assume that the feature function decomposes across the productions in the derivation, f(w, z, y) = Σ_{t=1}^{T} f(w, z_t, y), where z_t indicates a single syntactic-semantic production. For example, we might have a feature for the production S → NP VP : NP.sem@VP.sem, as well as for terminal productions like NNP → Alex : ALEX. Under this decomposition, it is possible to compute scores for each semantically-annotated subtree in the analysis of w, so that bottom-up parsing algorithms like CKY (§ 10.1) can be applied to find the best-scoring semantic analysis.

    S : eats(alex, shoots) ∧ eats(alex, leaves_n)
      NP : λP.P(alex)                    (Alex)
      VP : λx.eats(x, shoots) ∧ eats(x, leaves_n)
        Vt : λP.λx.P(λy.eats(x, y))      (eats)
        NP : λP.P(shoots) ∧ P(leaves_n)
          NP : λP.P(shoots)              (shoots)
          CC : λP.λQ.λx.P(x) ∧ Q(x)      (and)
          NP : λP.P(leaves_n)            (leaves)

Figure 12.5: Derivation for gold semantic analysis of Alex eats shoots and leaves
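Because the feature function decomposes across productions, a perceptron update reduces to feature-vector subtraction between the gold and predicted derivations. A toy sketch with one feature per production (production names are simplified, not the book's exact feature set):

```python
from collections import Counter

def features(derivation):
    # one feature per production used in the derivation
    return Counter(derivation)

gold = ["NP -> NP CC NP", "NP -> leaves_n", "VP -> Vt NP"]
pred = ["VP -> VP CC VP", "VP -> Vi", "Vi -> leaves_v", "VP -> Vt NP"]

theta = Counter()
theta.update(features(gold))     # + f(w, z_gold, y_gold)
theta.subtract(features(pred))   # - f(w, z_pred, y_pred)
print(dict(theta))  # shared productions cancel; the rest get +1 or -1
```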
Figure 12.5 shows a derivation of the correct semantic analysis of the sentence Alex eats shoots and leaves, in a simplified grammar in which the plural noun phrases shoots and leaves are interpreted as logical constants SHOOTS and LEAVES_n. Figure 12.6 shows a derivation of an incorrect analysis. Assuming one feature per production, the perceptron update is shown in Table 12.3. From this update, the parser would learn to prefer the noun interpretation of leaves over the verb interpretation. It would also learn to prefer noun phrase coordination over verb phrase coordination.

While the update is explained in terms of the perceptron, it would be easy to replace the perceptron with a conditional random field. In this case, the online updates would be
12.4. LEARNING SEMANTIC PARSERS 299

Figure 12.6: Derivation for incorrect semantic analysis of Alex eats shoots and leaves. [Tree structure not reproduced; the annotated nodes are: S : eats(alex, shoots) ∧ leaves_v(alex); VP : λx.eats(x, shoots) ∧ leaves_v(x); VP : λx.leaves_v(x); Vi : λx.leaves_v(x) (leaves); CC : λP.λQ.λx.P(x) ∧ Q(x) (and); VP : λx.eats(x, shoots); NP : λP.P(shoots) (shoots); Vt : λP.λx.P(λy.eats(x, y)) (eats); NP : λP.P(alex) (Alex).]

Table 12.3: Perceptron update for analysis in Figure 12.5 (gold) and Figure 12.6 (predicted)

NP1 → NP2 CC NP3 : (CC.sem@(NP2.sem))@(NP3.sem)   +1
VP1 → VP2 CC VP3 : (CC.sem@(VP2.sem))@(VP3.sem)   −1
NP → leaves : LEAVES_n                            +1
VP → Vi : Vi.sem                                  −1
Vi → leaves : λx.LEAVES_v(x)                      −1

based on feature expectations, which can be computed using the inside-outside algorithm (§ 10.6).

12.4.2 Learning from logical forms

Complete derivations are expensive to annotate, and are rarely available.^11 One solution is to focus on learning from logical forms directly, while treating the derivations as latent variables (Zettlemoyer and Collins, 2005). In a conditional probabilistic model over logical forms y and derivations z, we have,

p(y, z | w) = exp(θ · f(w, z, y)) / ∑_{y′,z′} exp(θ · f(w, z′, y′)),  [12.20]

which is the standard log-linear model, applied to the logical form y and the derivation z. Since the derivation z unambiguously determines the logical form y, it may seem silly to model the joint probability over y and z. However, since z is unknown, it can be marginalized out,

p(y | w) = ∑_z p(y, z | w).  [12.21]

^11 An exception is the work of Ge and Mooney (2005), who annotate the meaning of each syntactic constituent for several hundred sentences.

Under contract with MIT Press, shared under CC-BY-NC-ND license.
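On a toy example, Equations 12.20 and 12.21 can be checked by exhaustive enumeration. This sketch assumes the scores θ · f(w, z, y) have already been computed for every (logical form, derivation) pair; all names are illustrative:

```python
import math

def p_joint(scores):
    """Eq. 12.20: normalize log-linear scores over all (y, z) pairs."""
    Z = sum(math.exp(s) for s in scores.values())
    return {yz: math.exp(s) / Z for yz, s in scores.items()}

def p_marginal(scores, y):
    """Eq. 12.21: marginalize out the derivation z."""
    return sum(p for (y2, z), p in p_joint(scores).items() if y2 == y)

# Toy example with spurious ambiguity: y1 is reachable by two derivations,
# so its marginal probability pools the mass of both.
scores = {("y1", "z1"): 1.0, ("y1", "z2"): 0.5, ("y2", "z3"): 1.2}
```

Here y1's best single derivation scores below y2's, but after marginalization y1 is the more probable logical form.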
The semantic parser can then select the logical form with the maximum log marginal probability,

log ∑_z p(y, z | w) = log [ ∑_z exp(θ · f(w, z, y)) / ∑_{y′,z′} exp(θ · f(w, z′, y′)) ]  [12.22]
                    ∝ log ∑_z exp(θ · f(w, z, y))  [12.23]
                    ≥ max_z θ · f(w, z, y).  [12.24]

It is impossible to push the log term inside the sum over z, so our usual linear scoring function does not apply. We can recover this scoring function only in approximation, by taking the max (rather than the sum) over derivations z, which provides a lower bound. Learning can be performed by maximizing the log marginal likelihood,

ℓ(θ) = ∑_{i=1}^N log p(y^{(i)} | w^{(i)}; θ)  [12.25]
     = ∑_{i=1}^N log ∑_z p(y^{(i)}, z | w^{(i)}; θ).  [12.26]

This log-likelihood is not convex in θ, unlike the log-likelihood of a fully-observed conditional random field. This means that learning can give different results depending on the initialization. The derivative of Equation 12.26 is,

∂ℓ_i/∂θ = ∑_z p(z | y, w; θ) f(w, z, y) − ∑_{y′,z′} p(y′, z′ | w; θ) f(w, z′, y′)  [12.27]
        = E_{z|y,w}[f(w, z, y)] − E_{y,z|w}[f(w, z, y)]  [12.28]

Both expectations can be computed via bottom-up algorithms like inside-outside. Alternatively, we can again maximize rather than marginalize over derivations for an approximate solution. In either case, the first term of the gradient requires us to identify derivations z that are compatible with the logical form y. This can be done in a bottom-up dynamic programming algorithm, by having each cell in the table t[i, j, X] include the set of all possible logical forms for X ⇝ w_{i+1:j}. The resulting table may therefore be much larger than in syntactic parsing. This can be controlled by using pruning to eliminate intermediate analyses that are incompatible with the final logical form y (Zettlemoyer and Collins, 2005), or by using beam search and restricting the size of each cell to some fixed constant (Liang et al., 2013).
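The gradient in Equation 12.28 is a difference of two feature expectations; on a small example both can be computed by direct enumeration (inside-outside plays this role for real grammars). The scores and features below are illustrative:

```python
import math
from collections import defaultdict

def expected_features(scores, features, gold_y=None):
    """Expected feature vector under p(y, z | w); if gold_y is given,
    condition on y = gold_y, giving the first term of Eq. 12.28."""
    items = [yz for yz in scores if gold_y is None or yz[0] == gold_y]
    Z = sum(math.exp(scores[yz]) for yz in items)
    ex = defaultdict(float)
    for yz in items:
        p = math.exp(scores[yz]) / Z
        for feat, val in features[yz].items():
            ex[feat] += p * val
    return dict(ex)

scores = {("y1", "z1"): 1.0, ("y1", "z2"): 0.0, ("y2", "z3"): 0.5}
features = {("y1", "z1"): {"f1": 1.0}, ("y1", "z2"): {"f2": 1.0},
            ("y2", "z3"): {"f3": 1.0}}
pos = expected_features(scores, features, gold_y="y1")  # E_{z|y,w}
neg = expected_features(scores, features)               # E_{y,z|w}
grad = {f: pos.get(f, 0.0) - neg.get(f, 0.0) for f in set(pos) | set(neg)}
```

The gradient raises the weights on features of derivations compatible with the gold form y1, and lowers the weight on f3, which appears only in the competing analysis.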
If we replace each expectation in Equation 12.28 with argmax and then apply stochastic gradient descent to learn the weights, we obtain the latent variable perceptron, a simple
Algorithm 16 Latent variable perceptron
1: procedure LATENTVARIABLEPERCEPTRON(w^(1:N), y^(1:N))
2:     θ ← 0
3:     repeat
4:         Select an instance i
5:         z^(i) ← argmax_z θ · f(w^(i), z, y^(i))
6:         ŷ, ẑ ← argmax_{y′,z′} θ · f(w^(i), z′, y′)
7:         θ ← θ + f(w^(i), z^(i), y^(i)) − f(w^(i), ẑ, ŷ)
8:     until tired
9: return θ

and general algorithm for learning with missing data. The algorithm is shown in its most basic form in Algorithm 16, but the usual tricks such as averaging and margin loss can be applied (Yu and Joachims, 2009). Aside from semantic parsing, the latent variable perceptron has been used in tasks such as machine translation (Liang et al., 2006) and named entity recognition (Sun et al., 2009). In latent conditional random fields, we use the full expectations rather than maximizing over the hidden variable. This model has also been employed in a range of problems beyond semantic parsing, including parse reranking (Koo and Collins, 2005) and gesture recognition (Quattoni et al., 2007).

12.4.3 Learning from denotations

Logical forms are easier to obtain than complete derivations, but the annotation of logical forms still requires considerable expertise. However, it is relatively easy to obtain denotations for many natural language sentences. For example, in the geography domain, the denotation of a question would be its answer (Clarke et al., 2010; Liang et al., 2013):

Text: What states border Georgia?
Logical form: λx.STATE(x) ∧ BORDER(x, GEORGIA)
Denotation: {Alabama, Florida, North Carolina, South Carolina, Tennessee}

Similarly, in a robotic control setting, the denotation of a command would be an action or sequence of actions (Artzi and Zettlemoyer, 2013). In both cases, the idea is to reward the semantic parser for choosing an analysis whose denotation is correct: the right answer to the question, or the right action.

Learning from logical forms was made possible by summing or maxing over derivations.
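Algorithm 16 translates almost line-for-line into Python. In this sketch, `derivations(w)` enumerates candidate (z, y) pairs and `features(w, z, y)` returns a sparse feature dict; both are illustrative stand-ins for a real grammar:

```python
def latent_variable_perceptron(data, derivations, features, epochs=5):
    theta = {}
    score = lambda f: sum(theta.get(k, 0.0) * v for k, v in f.items())
    for _ in range(epochs):
        for w, y_gold in data:
            # line 5: best derivation consistent with the gold logical form
            z_gold = max((z for z, y in derivations(w) if y == y_gold),
                         key=lambda z: score(features(w, z, y_gold)))
            # line 6: best derivation and logical form overall
            z_hat, y_hat = max(derivations(w),
                               key=lambda zy: score(features(w, *zy)))
            # line 7: additive update (zero when the prediction matches)
            if (z_hat, y_hat) != (z_gold, y_gold):
                for k, v in features(w, z_gold, y_gold).items():
                    theta[k] = theta.get(k, 0.0) + v
                for k, v in features(w, z_hat, y_hat).items():
                    theta[k] = theta.get(k, 0.0) - v
    return theta
```

On a toy version of the eats shoots and leaves example, the learned weights come to prefer the noun-coordination derivation over the verb-coordination one.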
This idea can be carried one step further, summing or maxing over all logical forms with the correct denotation. Let v_i(y) ∈ {0, 1} be a validation function, which assigns a
binary score indicating whether the denotation ⟦y⟧ for the text w^(i) is correct. We can then learn by maximizing a conditional-likelihood objective,

ℓ^(i)(θ) = log ∑_y v_i(y) × p(y | w; θ)  [12.29]
         = log ∑_y v_i(y) × ∑_z p(y, z | w; θ),  [12.30]

which sums over all derivations z of all valid logical forms, {y : v_i(y) = 1}. This corresponds to the log-probability that the semantic parser produces a logical form with a valid denotation. Differentiating with respect to θ, we obtain,

∂ℓ^(i)/∂θ = ∑_{y,z : v_i(y)=1} p(y, z | w) f(w, z, y) − ∑_{y′,z′} p(y′, z′ | w) f(w, z′, y′),  [12.31]

which is the usual difference in feature expectations. The positive term computes the feature expectations conditioned on the denotation being valid, while the second term computes the feature expectations according to the current model, without regard to the ground truth. Large-margin learning formulations are also possible for this problem. For example, Artzi and Zettlemoyer (2013) generate a set of valid and invalid derivations, and then impose a constraint that all valid derivations should score higher than all invalid derivations. This constraint drives a perceptron-like learning rule.

Additional resources

A key issue not considered here is how to handle semantic underspecification: cases in which there are multiple semantic interpretations for a single syntactic structure. Quantifier scope ambiguity is a classic example. Blackburn and Bos (2005) enumerate a number of approaches to this issue, and also provide links between natural language semantics and computational inference techniques. Much of the contemporary research on semantic parsing uses the framework of combinatory categorial grammar (CCG). Carpenter (1997) provides a comprehensive treatment of how CCG can support compositional semantic analysis. Another recent area of research is the semantics of multi-sentence texts.
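Equations 12.29 and 12.30 define the log-probability of producing any logical form whose denotation is correct; on a toy problem this can be computed by enumeration. The scores, logical forms, and denotations below are all illustrative:

```python
import math

def denotation_log_likelihood(scores, denotation, correct):
    """Eq. 12.29-12.30: log of the total probability mass on (y, z) pairs
    whose logical form y denotes the correct answer."""
    Z = sum(math.exp(s) for s in scores.values())
    valid = sum(math.exp(s) for (y, z), s in scores.items()
                if denotation[y] == correct)
    return math.log(valid / Z)

# Two distinct logical forms (y1, y2) happen to denote the right answer,
# so both contribute to the objective; y3 does not.
scores = {("y1", "z1"): 1.0, ("y2", "z2"): 1.0, ("y3", "z3"): 0.0}
denotation = {"y1": {"Alabama", "Florida"}, "y2": {"Alabama", "Florida"},
              "y3": {"Georgia"}}
ll = denotation_log_likelihood(scores, denotation, {"Alabama", "Florida"})
```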
This can be handled with models of dynamic semantics, such as dynamic predicate logic (Groenendijk and Stokhof, 1991).

Alternative readings on formal semantics include an “informal” reading from Levy and Manning (2009), and a more involved introduction from Briscoe (2011). To learn more about ongoing research on data-driven semantic parsing, readers may consult the survey
article by Liang and Potts (2015), tutorial slides and videos by Artzi and Zettlemoyer (2013),^12 and the source code by Yoav Artzi^13 and Percy Liang.^14

Exercises

1. The modus ponens inference rule states that if we know φ ⇒ ψ and φ, then ψ must be true. Justify this rule, using the definition of the ⇒ operator and some of the laws provided in § 12.2.1, plus one additional identity: ⊥ ∨ φ = φ.

2. Convert the following examples into first-order logic, using the relations CAN-SLEEP, MAKES-NOISE, and BROTHER.
• If Abigail makes noise, no one can sleep.
• If Abigail makes noise, someone cannot sleep.
• None of Abigail’s brothers can sleep.
• If one of Abigail’s brothers makes noise, Abigail cannot sleep.

3. Extend the grammar fragment G1 to include the ditransitive verb teaches and the proper noun Swahili. Show how to derive the interpretation for the sentence Alex teaches Brit Swahili, which should be TEACHES(ALEX, BRIT, SWAHILI). The grammar need not be in Chomsky Normal Form. For the ditransitive verb, use NP1 and NP2 to indicate the two direct objects.

4. Derive the semantic interpretation for the sentence Alex likes every dog, using grammar fragment G2.

5. Extend the grammar fragment G2 to handle adjectives, so that the meaning of an angry dog is λP.∃x.DOG(x) ∧ ANGRY(x) ∧ P(x). Specifically, you should supply the lexical entry for the adjective angry, and you should specify the syntactic-semantic productions NP → DET NOM, NOM → JJ NOM, and NOM → NN.

6. Extend your answer to the previous question to cover copula constructions with predicative adjectives, such as Alex is angry. The interpretation should be ANGRY(ALEX). You should add a verb phrase production VP → Vcop JJ, and a terminal production Vcop → is. Show why your grammar extensions result in the correct interpretation.

7. In Figure 12.5 and Figure 12.6, we treat the plurals shoots and leaves as entities.
Revise G2 so that the interpretation of Alex eats leaves is ∀x.(LEAF(x) ⇒ EATS(ALEX, x)), and show the resulting perceptron update.

^12 Videos are currently available at http://yoavartzi.com/tutorial/
^13 http://yoavartzi.com/spf
^14 https://github.com/percyliang/sempre
8. Statements like every student eats a pizza have two possible interpretations, depending on quantifier scope:

∀x.∃y.PIZZA(y) ∧ (STUDENT(x) ⇒ EATS(x, y))  [12.32]
∃y.∀x.PIZZA(y) ∧ (STUDENT(x) ⇒ EATS(x, y))  [12.33]

a) Explain why these interpretations really are different.
b) Which is generated by grammar G2? Note that you may have to manipulate the logical form to exactly align with the grammar.

9. *Modify G2 so that it produces the second interpretation in the previous problem. Hint: one possible solution involves changing the semantics of the sentence production and one other production.

10. In the GeoQuery domain, give a natural language query that has multiple plausible semantic interpretations with the same denotation. List both interpretations and the denotation. Hint: There are many ways to do this, but one approach involves using toponyms (place names) that could plausibly map to several different entities in the model.
Chapter 13
Predicate-argument semantics

This chapter considers more “lightweight” semantic representations, which discard some aspects of first-order logic, but focus on predicate-argument structures. Let’s begin by thinking about the semantics of events, with a simple example:

(13.1) Asha gives Boyang a book.

A first-order logical representation of this sentence is,

∃x.BOOK(x) ∧ GIVE(ASHA, BOYANG, x)  [13.1]

In this representation, we define variable x for the book, and we link the strings Asha and Boyang to entities ASHA and BOYANG. Because the action of giving involves a giver, a recipient, and a gift, the predicate GIVE must take three arguments. Now suppose we have additional information about the event:

(13.2) Yesterday, Asha reluctantly gave Boyang a book.

One possible solution is to extend the predicate GIVE to take additional arguments,

∃x.BOOK(x) ∧ GIVE(ASHA, BOYANG, x, YESTERDAY, RELUCTANTLY)  [13.2]

But this is clearly unsatisfactory: yesterday and reluctantly are optional arguments, and we would need a different version of the GIVE predicate for every possible combination of arguments. Event semantics solves this problem by reifying the event as an existentially quantified variable e,

∃e, x. GIVE-EVENT(e) ∧ GIVER(e, ASHA) ∧ GIFT(e, x) ∧ BOOK(x)
      ∧ RECIPIENT(e, BOYANG) ∧ TIME(e, YESTERDAY) ∧ MANNER(e, RELUCTANTLY)
In this way, each argument of the event — the giver, the recipient, the gift — can be represented with a relation of its own, linking the argument to the event e. The expression GIVER(e, ASHA) says that ASHA plays the role of GIVER in the event. This reformulation handles the problem of optional information such as the time or manner of the event, which are called adjuncts. Unlike arguments, adjuncts are not a mandatory part of the relation, but under this representation, they can be expressed with additional logical relations that are conjoined to the semantic interpretation of the sentence.^1

The event semantic representation can be applied to nested clauses, e.g.,

(13.3) Chris sees Asha pay Boyang.

This is done by using the event variable as an argument:

∃e_1, e_2. SEE-EVENT(e_1) ∧ SEER(e_1, CHRIS) ∧ SIGHT(e_1, e_2)
         ∧ PAY-EVENT(e_2) ∧ PAYER(e_2, ASHA) ∧ PAYEE(e_2, BOYANG)  [13.3]

As with first-order logic, the goal of event semantics is to provide a representation that generalizes over many surface forms. Consider the following paraphrases of (13.1):

(13.4) a. Asha gives a book to Boyang.
b. A book is given to Boyang by Asha.
c. A book is given by Asha to Boyang.
d. The gift of a book from Asha to Boyang ...

All have the same event semantic meaning as Equation 13.1, but the ways in which the meaning can be expressed are diverse. The final example does not even include a verb: events are often introduced by verbs, but as shown by (13.4d), the noun gift can introduce the same predicate, with the same accompanying arguments.

Semantic role labeling (SRL) is a relaxed form of semantic parsing, in which each semantic role is filled by a set of tokens from the text itself. This is sometimes called “shallow semantics” because, unlike model-theoretic semantic parsing, role fillers need not be symbolic expressions with denotations in some world model.
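The reified-event representation can be mimicked with a flat set of relation tuples, which makes the treatment of adjuncts concrete: adding TIME or MANNER just adds conjuncts, and never changes the arity of any predicate. A sketch (the encoding is illustrative):

```python
def make_event(event_type, **roles):
    """Encode ∃e. TYPE(e) ∧ ROLE1(e, filler1) ∧ ... as a set of tuples."""
    e = "e1"  # stands in for the existentially quantified event variable
    facts = {(event_type, e)}
    facts.update((role.upper(), e, filler) for role, filler in roles.items())
    return facts

core = make_event("GIVE-EVENT", giver="ASHA", gift="x", recipient="BOYANG")
full = make_event("GIVE-EVENT", giver="ASHA", gift="x", recipient="BOYANG",
                  time="YESTERDAY", manner="RELUCTANTLY")
# Dropping the adjuncts yields a subset of the conjuncts, so the
# adjunct-free reading is entailed by the full reading.
assert core <= full
```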
A semantic role labeling system is required to identify all predicates, and then specify the spans of text that fill each role. To give a sense of the task, here is a more complicated example:

(13.5) Boyang wants Asha to give him a linguistics book.

^1 This representation is often called Neo-Davidsonian event semantics. The use of existentially-quantified event variables was proposed by Davidson (1967) to handle the issue of optional adjuncts. In Neo-Davidsonian semantics, this treatment of adjuncts is extended to mandatory arguments as well (e.g., Parsons, 1990).
In this example, there are two predicates, expressed by the verbs want and give. Thus, a semantic role labeler might return the following output:

• (PREDICATE : wants, WANTER : Boyang, DESIRE : Asha to give him a linguistics book)
• (PREDICATE : give, GIVER : Asha, RECIPIENT : him, GIFT : a linguistics book)

Boyang and him may refer to the same person, but the semantic role labeling is not required to resolve this reference. Other predicate-argument representations, such as Abstract Meaning Representation (AMR), do require reference resolution. We will return to AMR in § 13.3, but first, let us further consider the definition of semantic roles.

13.1 Semantic roles

In event semantics, it is necessary to specify a number of additional logical relations to link arguments to events: GIVER, RECIPIENT, SEER, SIGHT, etc. Indeed, every predicate requires a set of logical relations to express its own arguments. In contrast, adjuncts such as TIME and MANNER are shared across many types of events. A natural question is whether it is possible to treat mandatory arguments more like adjuncts, by identifying a set of generic argument types that are shared across many event predicates. This can be further motivated by examples involving related verbs:

(13.6) a. Asha gave Boyang a book.
b. Asha loaned Boyang a book.
c. Asha taught Boyang a lesson.
d. Asha gave Boyang a lesson.

The respective roles of Asha, Boyang, and the book are nearly identical across the first two examples. The third example is slightly different, but the fourth example shows that the roles of GIVER and TEACHER can be viewed as related.
One way to think about the relationship between roles such as GIVER and TEACHER is by enumerating the set of properties that an entity typically possesses when it fulfills these roles: givers and teachers are usually animate (they are alive and sentient) and volitional (they choose to enter into the action).^2 In contrast, the thing that gets loaned or taught is usually not animate or volitional; furthermore, it is unchanged by the event. Building on these ideas, thematic roles generalize across predicates by leveraging the shared semantic properties of typical role fillers (Fillmore, 1968). For example, in examples (13.6a-13.6d), Asha plays a similar role in all four sentences, which we will call the

^2 There are always exceptions. For example, in the sentence The C programming language has taught me a lot about perseverance, the “teacher” is The C programming language, which is presumably not animate or volitional.
agent. This reflects several shared semantic properties: she is the one who is actively and intentionally performing the action, while Boyang is a more passive participant; the book and the lesson would play a different role, as non-animate participants in the event. Example annotations from three well known systems are shown in Figure 13.1. We will now discuss these systems in more detail.

Figure 13.1: Example semantic annotations according to VerbNet, PropBank, and FrameNet

Asha gave Boyang a book:
  VerbNet:  AGENT               RECIPIENT                   THEME
  PropBank: ARG0: giver         ARG2: entity given to       ARG1: thing given
  FrameNet: DONOR               RECIPIENT                   THEME

Asha taught Boyang algebra:
  VerbNet:  AGENT               RECIPIENT                   TOPIC
  PropBank: ARG0: teacher       ARG2: student               ARG1: subject
  FrameNet: TEACHER             STUDENT                     SUBJECT

13.1.1 VerbNet

VerbNet (Kipper-Schuler, 2005) is a lexicon of verbs, and it includes thirty “core” thematic roles played by arguments to these verbs. Here are some example roles, accompanied by their definitions from the VerbNet Guidelines.^3

• AGENT: “ACTOR in an event who initiates and carries out the event intentionally or consciously, and who exists independently of the event.”
• PATIENT: “UNDERGOER in an event that experiences a change of state, location or condition, that is causally involved or directly affected by other participants, and exists independently of the event.”
• RECIPIENT: “DESTINATION that is animate”
• THEME: “UNDERGOER that is central to an event or state that does not have control over the way the event occurs, is not structurally changed by the event, and/or is characterized as being in a certain position or condition throughout the state.”
• TOPIC: “THEME characterized by information content transferred to another participant.”

^3 http://verbs.colorado.edu/verb-index/VerbNet_Guidelines.pdf
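The role definitions above are explicitly layered: a TOPIC is defined as a kind of THEME, and the THEME and PATIENT definitions both build on UNDERGOER. A toy subsumption check over an illustrative fragment of such a hierarchy (not VerbNet's actual inventory):

```python
# Parent links for a small, illustrative fragment of a thematic role hierarchy.
PARENT = {"TOPIC": "THEME", "THEME": "UNDERGOER", "UNDERGOER": "PARTICIPANT",
          "AGENT": "ACTOR", "ACTOR": "PARTICIPANT"}

def is_a(role, ancestor):
    """True if role equals ancestor or is subsumed by it in the hierarchy."""
    while role is not None:
        if role == ancestor:
            return True
        role = PARENT.get(role)  # None at the top of the hierarchy
    return False
```

This kind of check lets a system that expects a THEME accept an argument annotated with the more specific TOPIC role.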
VerbNet roles are organized in a hierarchy, so that a TOPIC is a type of THEME, which in turn is a type of UNDERGOER, which is a type of PARTICIPANT, the top-level category. In addition, VerbNet organizes verb senses into a class hierarchy, in which verb senses that have similar meanings are grouped together. Recall from § 4.2 that multiple meanings of the same word are called senses, and that WordNet identifies senses for many English words. VerbNet builds on WordNet, so that verb classes are identified by the WordNet senses of the verbs that they contain. For example, the verb class give-13.1 includes the first WordNet sense of loan and the second WordNet sense of lend.

Each VerbNet class or subclass takes a set of thematic roles. For example, give-13.1 takes arguments with the thematic roles of AGENT, THEME, and RECIPIENT;^4 the predicate TEACH takes arguments with the thematic roles AGENT, TOPIC, RECIPIENT, and SOURCE.^5 So according to VerbNet, Asha and Boyang play the roles of AGENT and RECIPIENT in the sentences,

(13.7) a. Asha gave Boyang a book.
b. Asha taught Boyang algebra.

The book and algebra are both THEMES, but algebra is a subcategory of THEME — a TOPIC — because it consists of information content that is given to the receiver.

13.1.2 Proto-roles and PropBank

Detailed thematic role inventories of the sort used in VerbNet are not universally accepted. For example, Dowty (1991, p. 547) notes that “Linguists have often found it hard to agree on, and to motivate, the location of the boundary between role types.” He argues that a solid distinction can be identified between just two proto-roles:

Proto-Agent. Characterized by volitional involvement in the event or state; sentience and/or perception; causing an event or change of state in another participant; movement; exists independently of the event.

Proto-Patient.
Undergoes change of state; causally affected by another participant; stationary relative to the movement of another participant; does not exist independently of the event.^6

^4 https://verbs.colorado.edu/verb-index/vn/give-13.1.php
^5 https://verbs.colorado.edu/verb-index/vn/transfer_mesg-37.1.1.php
^6 Reisinger et al. (2015) ask crowd workers to annotate these properties directly, finding that annotators tend to agree on the properties of each argument. They also find that in English, arguments having more proto-agent properties tend to appear in subject position, while arguments with more proto-patient properties appear in object position.
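Dowty's proposal invites a simple operationalization: list each argument's properties and count how many fall on the proto-agent versus proto-patient side. The property names below paraphrase the definitions above and are illustrative:

```python
PROTO_AGENT = {"volitional", "sentient", "causes_change", "moves",
               "independent_existence"}
PROTO_PATIENT = {"changes_state", "causally_affected", "stationary",
                 "no_independent_existence"}

def proto_scores(properties):
    """Return (# proto-agent properties, # proto-patient properties)."""
    props = set(properties)
    return len(props & PROTO_AGENT), len(props & PROTO_PATIENT)

# Asha gives Boyang a book: Asha has mostly proto-agent properties,
# Boyang a mix of both, so Asha is the better candidate for the agent.
asha = proto_scores({"volitional", "sentient", "causes_change",
                     "independent_existence"})
boyang = proto_scores({"sentient", "independent_existence",
                       "causally_affected", "changes_state"})
```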
In the examples in Figure 13.1, Asha has most of the proto-agent properties: in giving the book to Boyang, she is acting volitionally (as opposed to Boyang got a book from Asha, in which it is not clear whether Asha gave up the book willingly); she is sentient; she causes a change of state in Boyang; she exists independently of the event. Boyang has some proto-agent properties: he is sentient and exists independently of the event. But he also has some proto-patient properties: he is the one who is causally affected and who undergoes change of state. The book that Asha gives Boyang has even fewer of the proto-agent properties: it is not volitional or sentient, and it has no causal role. But it also lacks many of the proto-patient properties: it does not undergo change of state, exists independently of the event, and is not stationary.

The Proposition Bank, or PropBank (Palmer et al., 2005), builds on this basic agent-patient distinction, as a middle ground between generic thematic roles and roles that are specific to each predicate. Each verb is linked to a list of numbered arguments, with ARG0 as the proto-agent and ARG1 as the proto-patient. Additional numbered arguments are verb-specific. For example, for the predicate TEACH,^7 the arguments are:

• ARG0: the teacher
• ARG1: the subject
• ARG2: the student(s)

Verbs may have any number of arguments: for example, WANT and GET have five, while EAT has only ARG0 and ARG1. In addition to the semantic arguments found in the frame files, roughly a dozen general-purpose adjuncts may be used in combination with any verb. These are shown in Table 13.1. PropBank-style semantic role labeling is annotated over the entire Penn Treebank. This annotation includes the sense of each verbal predicate, as well as the argument spans.

13.1.3 FrameNet

Semantic frames are descriptions of situations or events.
Frames may be evoked by one of their lexical units (often a verb, but not always), and they include some number of frame elements, which are like roles (Fillmore, 1976). For example, the act of teaching is a frame, and can be evoked by the verb taught; the associated frame elements include the teacher, the student(s), and the subject being taught. Frame semantics has played a significant role in the history of artificial intelligence, in the work of Minsky (1974) and Schank and Abelson (1977). In natural language processing, the theory of frame semantics has been implemented in FrameNet (Fillmore and Baker, 2009), which consists of a lexicon

^7 http://verbs.colorado.edu/propbank/framesets-english-aliases/teach.html
Table 13.1: PropBank adjuncts (Palmer et al., 2005), sorted by frequency in the corpus

TMP  time                  Boyang ate a bagel [AM-TMP yesterday].
LOC  location              Asha studies in [AM-LOC Stuttgart].
MOD  modal verb            Asha [AM-MOD will] study in Stuttgart.
ADV  general purpose       [AM-ADV Luckily], Asha knew algebra.
MNR  manner                Asha ate [AM-MNR aggressively].
DIS  discourse connective  [AM-DIS However], Asha prefers algebra.
PRP  purpose               Barry studied [AM-PRP to pass the bar].
DIR  direction             Workers dumped burlap sacks [AM-DIR into a bin].
NEG  negation              Asha does [AM-NEG not] speak Albanian.
EXT  extent                Prices increased [AM-EXT 4%].
CAU  cause                 Boyang returned the book [AM-CAU because it was overdue].

of roughly 1000 frames, and a corpus of more than 200,000 “exemplar sentences,” in which the frames and their elements are annotated.^8 Rather than seeking to link semantic roles such as TEACHER and GIVER into thematic roles such as AGENT, FrameNet aggressively groups verbs into frames, and links semantically-related roles across frames. For example, the following two sentences would be annotated identically in FrameNet:

(13.8) a. Asha taught Boyang algebra.
b. Boyang learned algebra from Asha.

This is because teach and learn are both lexical units in the EDUCATION TEACHING frame. Furthermore, roles can be shared even when the frames are distinct, as in the following two examples:

(13.9) a. Asha gave Boyang a book.
b. Boyang got a book from Asha.

The GIVING and GETTING frames both have RECIPIENT and THEME elements, so Boyang and the book would play the same role. Asha’s role is different: she is the DONOR in the GIVING frame, and the SOURCE in the GETTING frame. FrameNet makes extensive use of multiple inheritance to share information across frames and frame elements: for example, the COMMERCE SELL and LENDING frames inherit from the GIVING frame.
^8 Current details and data can be found at https://framenet.icsi.berkeley.edu/
13.2 Semantic role labeling

The task of semantic role labeling is to identify the parts of the sentence comprising the semantic roles. In English, this task is typically performed on the PropBank corpus, with the goal of producing outputs in the following form:

(13.10) [ARG0 Asha] [GIVE.01 gave] [ARG2 Boyang’s mom] [ARG1 a book] [AM-TMP yesterday].

Note that a single sentence may have multiple verbs, and therefore a given word may be part of multiple role-fillers:

(13.11) [ARG0 Asha] [WANT.01 wanted] [ARG1 Boyang to give her the book].
        Asha wanted [ARG0 Boyang] [GIVE.01 to give] [ARG2 her] [ARG1 the book].

13.2.1 Semantic role labeling as classification

PropBank is annotated on the Penn Treebank, and annotators used phrasal constituents (§ 9.2.2) to fill the roles. PropBank semantic role labeling can be viewed as the task of assigning to each phrase a label from the set R = {∅, PRED, ARG0, ARG1, ARG2, . . . , AM-LOC, AM-TMP, . . .}, with respect to each predicate. If we treat semantic role labeling as a classification problem, we obtain the following functional form:

ŷ^(i,j) = argmax_y ψ(w, y, i, j, ρ, τ),  [13.4]

where,

• (i, j) indicates the span of a phrasal constituent (w_{i+1}, w_{i+2}, . . . , w_j);^9
• w represents the sentence as a sequence of tokens;
• ρ is the index of the predicate verb in w;
• τ is the structure of the phrasal constituent parse of w.

Early work on semantic role labeling focused on discriminative feature-based models, where ψ(w, y, i, j, ρ, τ) = θ · f(w, y, i, j, ρ, τ). Table 13.2 shows the features used in a seminal paper on FrameNet semantic role labeling (Gildea and Jurafsky, 2002). By 2005 there

^9 PropBank roles can also be filled by split constituents, which are discontinuous spans of text. This situation arises most frequently in reported speech, e.g. [ARG1 By addressing these problems], Mr. Maxwell said, [ARG1 the new funds have become extremely attractive.] (example adapted from Palmer et al., 2005).
This issue is typically addressed by defining “continuation arguments”, e.g. C-ARG1, which refers to the continuation of ARG1 after the split.
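The classification view of Equation 13.4 scores each candidate span against each role independently. A minimal sketch with a linear scoring function over indicator features; the feature names and weights are invented for illustration:

```python
ROLES = ["NULL", "ARG0", "ARG1", "ARG2", "AM-TMP"]

def classify_span(theta, active_features):
    """argmax_y θ·f(w, y, i, j, ρ, τ) for one span, with indicator features
    keyed by (feature, role) pairs."""
    def score(role):
        return sum(theta.get((f, role), 0.0) for f in active_features)
    return max(ROLES, key=score)

# Illustrative weights: noun phrases before the predicate look like ARG0,
# noun phrases after it like ARG1, prepositional phrases like adjuncts.
theta = {("NP+before_pred", "ARG0"): 1.0,
         ("NP+after_pred", "ARG1"): 1.0,
         ("PP+after_pred", "AM-TMP"): 0.5}
```

A span with no informative features ties at zero and falls back to the first label, NULL (no role).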
Predicate lemma and POS tag: The lemma of the predicate verb and its part-of-speech tag.
Voice: Whether the predicate is in active or passive voice, as determined by a set of syntactic patterns for identifying passive voice constructions.
Phrase type: The constituent phrase type for the proposed argument in the parse tree, e.g. NP, PP.
Headword and POS tag: The head word of the proposed argument and its POS tag, identified using the Collins (1997) rules.
Position: Whether the proposed argument comes before or after the predicate in the sentence.
Syntactic path: The set of steps on the parse tree from the proposed argument to the predicate (described in detail in the text).
Subcategorization: The syntactic production from the first branching node above the predicate. For example, in Figure 13.2, the subcategorization feature around taught would be VP → VBD NP PP.

Table 13.2: Features used in semantic role labeling by Gildea and Jurafsky (2002).

were several systems for PropBank semantic role labeling, and their approaches and feature sets are summarized by Carreras and Màrquez (2005). Typical features include: the phrase type, head word, part-of-speech, boundaries, and neighbors of the proposed argument w_{i+1:j}; the word, lemma, part-of-speech, and voice of the verb w_ρ (active or passive), as well as features relating to its frameset; the distance and path between the verb and the proposed argument. In this way, semantic role labeling systems are high-level “consumers” in the NLP stack, using features produced from lower-level components such as part-of-speech taggers and parsers. More comprehensive feature sets are enumerated by Das et al. (2014) and Täckström et al. (2015).

A particularly powerful class of features relates to the syntactic path between the argument and the predicate.
These features capture the sequence of moves required to get from the argument to the verb by traversing the phrasal constituent parse of the sentence. The idea of these features is to capture syntactic regularities in how various arguments are realized. Syntactic path features are best illustrated by example, using the parse tree in Figure 13.2:

• The path from Asha to the verb taught is NNP↑NP↑S↓VP↓VBD. The first part of the path, NNP↑NP↑S, means that we must travel up the parse tree from the NNP tag (proper noun) to the S (sentence) constituent. The second part of the path, S↓VP↓VBD, means that we reach the verb by producing a VP (verb phrase) from
314 CHAPTER 13. PREDICATE-ARGUMENT SEMANTICS

Figure 13.2: Semantic role labeling on the phrase-structure parse tree for a sentence (Asha taught the class about algebra). The dashed line indicates the syntactic path from Asha to the predicate verb taught.

the S constituent, and then by producing a VBD (past tense verb). This feature is consistent with Asha being in subject position, since the path includes the sentence root S.

• The path from the class to taught is NP↑VP↓VBD. This is consistent with the class being in object position, since the path passes through the VP node that dominates the verb taught.

Because there are many possible path features, it can also be helpful to look at smaller parts: for example, the upward and downward parts can be treated as separate features; another feature might consider whether S appears anywhere in the path.

Rather than using the constituent parse, it is also possible to build features from the dependency path (see § 11.4) between the head word of each argument and the verb (Pradhan et al., 2005). Using the Universal Dependency part-of-speech tagset and dependency relations (Nivre et al., 2016), the dependency path from Asha to taught is PROPN ←NSUBJ VERB, because taught is the head of a relation of type NSUBJ with Asha. Similarly, the dependency path from class to taught is NOUN ←DOBJ VERB, because class heads the noun phrase that is a direct object of taught. A more interesting example is Asha wanted to teach the class, where the path from Asha to teach is PROPN ←NSUBJ VERB →XCOMP VERB. The right-facing arrow in the second relation indicates that wanted is the head of its XCOMP relation with teach.

Jacob Eisenstein. Draft of November 13, 2018.
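To make the path computation concrete, here is a minimal sketch (not from the text) that extracts the ↑/↓ path between a word and the predicate from a small hand-coded constituent tree for Figure 13.2. The tree encoding and function names are our own; note that this word-level version starts from the word's preterminal tag, so the path for class begins at its POS tag rather than at the NP constituent used in the text.

```python
def spine(tree, word, path=()):
    """Labels from the root down to the preterminal above `word`."""
    label, children = tree
    if children == [word]:  # preterminal: its only child is the word itself
        return path + (label,)
    for child in children:
        if isinstance(child, tuple):
            found = spine(child, word, path + (label,))
            if found:
                return found
    return None

def syntactic_path(tree, arg, pred):
    """Path feature like NNP↑NP↑S↓VP↓VBD, assuming arg and pred
    sit in different branches of the tree."""
    up, down = list(spine(tree, arg)), list(spine(tree, pred))
    i = 0
    while i < min(len(up), len(down)) and up[i] == down[i]:
        i += 1                       # skip the shared prefix of the two spines
    lca = up[i - 1]                  # lowest common ancestor, kept exactly once
    return "↑".join(reversed(up[i:])) + "↑" + lca + "↓" + "↓".join(down[i:])

# Hand-coded tree for Asha taught the class about algebra (Figure 13.2)
tree = ("S", [
    ("NP", [("NNP", ["Asha"])]),
    ("VP", [
        ("VBD", ["taught"]),
        ("NP", [("Det", ["the"]), ("Nn", ["class"])]),
        ("PP", [("In", ["about"]), ("NP", [("Nn", ["algebra"])])]),
    ]),
])
print(syntactic_path(tree, "Asha", "taught"))   # NNP↑NP↑S↓VP↓VBD
print(syntactic_path(tree, "class", "taught"))  # Nn↑NP↑VP↓VBD
```

The separate upward and downward parts mentioned above fall out directly from the two halves of the returned string.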
13.2.2 Semantic role labeling as constrained optimization

A potential problem with treating SRL as a classification problem is that there are a number of sentence-level constraints, which a classifier might violate.

• For a given verb, there can be only one argument of each type (ARG0, ARG1, etc.).

• Arguments cannot overlap. This problem arises when we are labeling the phrases in a constituent parse tree, as shown in Figure 13.2: if we label the PP about algebra as an argument or adjunct, then its children about and algebra must be labeled as ∅. The same constraint also applies to the syntactic ancestors of this phrase.

These constraints introduce dependencies across labeling decisions. In structure prediction problems such as sequence labeling and parsing, such dependencies are usually handled by defining a scoring function over the entire structure, y. Efficient inference requires that the global score decomposes into local parts: for example, in sequence labeling, the scoring function decomposes into scores of pairs of adjacent tags, permitting the application of the Viterbi algorithm for inference. But the constraints that arise in semantic role labeling are less amenable to local decomposition.10 We therefore consider constrained optimization as an alternative solution. Let the set C(τ) refer to all labelings that obey the constraints introduced by the parse τ. The semantic role labeling problem can be reformulated as a constrained optimization over y ∈ C(τ),

max_y Σ_{(i,j)∈τ} ψ(w, y_{i,j}, i, j, ρ, τ)   s.t.   y ∈ C(τ).   [13.5]

In this formulation, the objective (shown on the left) is a separable function of each individual labeling decision, but the constraint (shown on the right) applies to the overall labeling. The sum Σ_{(i,j)∈τ} indicates that we are summing over all constituent spans in the parse τ. The expression s.t. means that we maximize the objective subject to the constraint y ∈ C(τ).
A number of practical algorithms exist for restricted forms of constrained optimization. One such restricted form is integer linear programming, in which the objective and constraints are linear functions of integer variables. To formulate SRL as an integer linear program, we begin by rewriting the labels as a set of binary variables z = {z_{i,j,r}} (Punyakanok et al., 2008),

z_{i,j,r} = 1 if y_{i,j} = r, and 0 otherwise,   [13.6]

10Dynamic programming solutions have been proposed by Tromble and Eisner (2006) and Täckström et al. (2015), but they involve creating a trellis structure whose size is exponential in the number of labels.
where r ∈ R is a label in the set {ARG0, ARG1, . . . , AM-LOC, . . . , ∅}. Thus, the variables z are a binarized version of the semantic role labeling y. The objective can then be formulated as a linear function of z,

Σ_{(i,j)∈τ} ψ(w, y_{i,j}, i, j, ρ, τ) = Σ_{i,j,r} ψ(w, r, i, j, ρ, τ) × z_{i,j,r},   [13.7]

which is the sum of the scores of all relations, as indicated by z_{i,j,r}.

Constraints. Integer linear programming permits linear inequality constraints, of the general form Az ≤ b, where the parameters A and b define the constraints. To make this more concrete, let's start with the constraint that each non-null role type can occur only once in a sentence. This constraint can be written,

∀r ≠ ∅,  Σ_{(i,j)∈τ} z_{i,j,r} ≤ 1.   [13.8]

Recall that z_{i,j,r} = 1 iff the span (i, j) has label r; this constraint says that for each possible label r ≠ ∅, there can be at most one (i, j) such that z_{i,j,r} = 1. This constraint can be rewritten in the form Az ≤ b, as you will find if you complete the exercises at the end of the chapter.

Now consider the constraint that labels cannot overlap. Let's define the convenience function o((i, j), (i′, j′)) = 1 iff (i, j) overlaps (i′, j′), and zero otherwise. Thus, o will indicate if a constituent (i′, j′) is either an ancestor or descendant of (i, j). The constraint is that if two constituents overlap, only one can have a non-null label:

∀(i, j) ∈ τ,  Σ_{(i′,j′)∈τ} Σ_{r≠∅} o((i, j), (i′, j′)) × z_{i′,j′,r} ≤ 1,   [13.9]

where o((i, j), (i, j)) = 1.

In summary, the semantic role labeling problem can thus be rewritten as the following integer linear program,

max_{z ∈ {0,1}^{|τ|·|R|}}  Σ_{(i,j)∈τ} Σ_{r∈R} z_{i,j,r} ψ_{i,j,r}   [13.10]
s.t.  ∀r ≠ ∅,  Σ_{(i,j)∈τ} z_{i,j,r} ≤ 1,   [13.11]
      ∀(i, j) ∈ τ,  Σ_{(i′,j′)∈τ} Σ_{r≠∅} o((i, j), (i′, j′)) × z_{i′,j′,r} ≤ 1.   [13.12]
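On a toy instance, the effect of constraints [13.11] and [13.12] can be checked by exhaustive search rather than an ILP solver. The sketch below is our own illustration: the spans and scores are made up, and None plays the part of the null label ∅.

```python
from itertools import product

# Three candidate spans from a parse; span (1, 3) contains (1, 2), so they overlap.
spans = [(0, 1), (1, 3), (1, 2)]
roles = ["ARG0", "ARG1", None]
# Hypothetical local scores ψ(w, r, i, j, ρ, τ) for each (span, role) pair.
score = {
    ((0, 1), "ARG0"): 2.0, ((0, 1), "ARG1"): 0.5,
    ((1, 3), "ARG0"): 0.3, ((1, 3), "ARG1"): 1.0,
    ((1, 2), "ARG0"): 0.2, ((1, 2), "ARG1"): 1.5,
}

def overlaps(a, b):
    return a != b and not (a[1] <= b[0] or b[1] <= a[0])

def feasible(y):
    labels = [r for r in y.values() if r is not None]
    if len(labels) != len(set(labels)):       # each role at most once (13.11)
        return False
    return not any(overlaps(a, b) and y[a] and y[b]   # no overlapping labels (13.12)
                   for a in spans for b in spans)

def total(y):
    return sum(score.get((s, r), 0.0) for s, r in y.items() if r is not None)

best = max((dict(zip(spans, assign)) for assign in product(roles, repeat=len(spans))
            if feasible(dict(zip(spans, assign)))), key=total)
print(best)  # {(0, 1): 'ARG0', (1, 3): None, (1, 2): 'ARG1'}
```

The search correctly refuses to label both (1, 3) and (1, 2), even though labeling both would increase the unconstrained objective; real systems replace the enumeration with an ILP solver, since the number of labelings grows exponentially with the number of spans.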
Learning with constraints. Learning can be performed in the context of constrained optimization using the usual perceptron or large-margin classification updates. Because constrained inference is generally more time-consuming, a key question is whether it is necessary to apply the constraints during learning. Chang et al. (2008) find that better performance can be obtained by learning without constraints, and then applying constraints only when using the trained model to predict semantic roles for unseen data.

How important are the constraints? Das et al. (2014) find that an unconstrained, classification-based method performs nearly as well as constrained optimization for FrameNet parsing: while it commits many violations of the "no-overlap" constraint, the overall F1 score is less than one point worse than the score at the constrained optimum. Similar results were obtained for PropBank semantic role labeling by Punyakanok et al. (2008). He et al. (2017) find that constrained inference makes a bigger impact if the constraints are based on manually-labeled "gold" syntactic parses. This implies that errors from the syntactic parser may limit the effectiveness of the constraints. Punyakanok et al. (2008) hedge against parser error by including constituents from several different parsers; any constituent can be selected from any parse, and additional constraints ensure that overlapping constituents are not selected.

Implementation. Integer linear programming solvers such as glpk,11 cplex,12 and Gurobi13 allow inequality constraints to be expressed directly in the problem definition, rather than in the matrix form Az ≤ b. The time complexity of integer linear programming is theoretically exponential in the number of variables |z|, but in practice these off-the-shelf solvers obtain good solutions efficiently. Using a standard desktop computer, Das et al.
(2014) report that the cplex solver requires 43 seconds to perform inference on the FrameNet test set, which contains 4,458 predicates.

Recent work has shown that many constrained optimization problems in natural language processing can be solved in a highly parallelized fashion, using optimization techniques such as dual decomposition, which are capable of exploiting the underlying problem structure (Rush et al., 2010). Das et al. (2014) apply this technique to FrameNet semantic role labeling, obtaining an order-of-magnitude speedup over cplex.

13.2.3 Neural semantic role labeling

Neural network approaches to SRL have tended to treat it as a sequence labeling task, using a labeling scheme such as the BIO notation, which we previously saw in named entity recognition (§ 8.3). In this notation, the first token in a span of type ARG1 is labeled

11https://www.gnu.org/software/glpk/
12https://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/
13http://www.gurobi.com/
B-ARG1; all remaining tokens in the span are inside, and are therefore labeled I-ARG1. Tokens outside any argument are labeled O. For example:

(13.12) Asha/B-ARG0 taught/PRED Boyang/B-ARG2 's/I-ARG2 mom/I-ARG2 about/B-ARG1 algebra/I-ARG1

Recurrent neural networks (§ 7.6) are a natural approach to this tagging task. For example, Zhou and Xu (2015) apply a deep bidirectional multilayer LSTM (see § 7.6) to PropBank semantic role labeling. In this model, each bidirectional LSTM serves as input for another, higher-level bidirectional LSTM, allowing complex non-linear transformations of the original input embeddings, X = [x_1, x_2, . . . , x_M]. The hidden state of the final LSTM is Z^{(K)} = [z^{(K)}_1, z^{(K)}_2, . . . , z^{(K)}_M]. The "emission" score for each tag Y_m = y is equal to the inner product θ_y · z^{(K)}_m, and there is also a transition score for each pair of adjacent tags. The complete model can be written,

Z^{(1)} = BiLSTM(X)   [13.13]
Z^{(i)} = BiLSTM(Z^{(i−1)})   [13.14]
ŷ = argmax_y Σ_{m=1}^{M} Θ^{(y)} z^{(K)}_m + ψ_{y_{m−1},y_m}.   [13.15]

Note that the final step maximizes over the entire labeling y, and includes a score for each tag transition ψ_{y_{m−1},y_m}. This combination of LSTM and pairwise potentials on tags is an example of an LSTM-CRF. The maximization over y is performed by the Viterbi algorithm.

This model strongly outperformed alternative approaches at the time, including constrained decoding and convolutional neural networks.14 More recent work has combined recurrent neural network models with constrained decoding, using the A* search algorithm to search over labelings that are feasible with respect to the constraints (He et al., 2017). This yields small improvements over the method of Zhou and Xu (2015). He et al. (2017) obtain larger improvements by creating an ensemble of SRL systems, each trained on an 80% subsample of the corpus. The average prediction across this ensemble is more robust than any individual model.
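The maximization in Equation 13.15 decomposes over adjacent tag pairs, so Viterbi applies directly. Here is a minimal sketch of the decoder (ours, not from the text): emissions stands in for the scores θ_y · z^{(K)}_m, and both the tag set and all scores are made up for illustration.

```python
def viterbi(emissions, transitions, tags):
    """emissions: one dict tag->score per token; transitions: dict
    (prev_tag, tag) -> score, defaulting to 0.  Returns the best tag sequence."""
    scores = {t: emissions[0][t] for t in tags}   # best score of any prefix ending in t
    backptrs = []
    for em in emissions[1:]:
        prev = scores
        step, scores = {}, {}
        for t in tags:
            p_best = max(tags, key=lambda p: prev[p] + transitions.get((p, t), 0.0))
            scores[t] = prev[p_best] + transitions.get((p_best, t), 0.0) + em[t]
            step[t] = p_best
        backptrs.append(step)
    last = max(tags, key=lambda t: scores[t])     # best final tag
    path = [last]
    for step in reversed(backptrs):               # follow backpointers to the start
        path.append(step[path[-1]])
    return list(reversed(path))

tags = ["B-ARG0", "I-ARG0", "O"]
# A strong penalty on O -> I-ARG0 enforces well-formed BIO sequences softly.
transitions = {("B-ARG0", "I-ARG0"): 1.0, ("O", "I-ARG0"): -10.0}
emissions = [
    {"B-ARG0": 2.0, "I-ARG0": 0.0, "O": 1.0},
    {"B-ARG0": 0.0, "I-ARG0": 1.5, "O": 1.4},
    {"B-ARG0": 0.0, "I-ARG0": 0.0, "O": 2.0},
]
print(viterbi(emissions, transitions, tags))  # ['B-ARG0', 'I-ARG0', 'O']
```

Note that at the second token the locally best choice is nearly a tie; the transition score ψ_{B-ARG0, I-ARG0} is what tips the global decision toward I-ARG0.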
13.3 Abstract Meaning Representation

Semantic role labeling transforms the task of semantic parsing into a labeling task. Consider the sentence,

14The successful application of convolutional neural networks to semantic role labeling by Collobert and Weston (2008) was an influential early result in the current wave of neural networks in natural language processing.
(w / want-01
   :ARG0 (h / whale)
   :ARG1 (p / pursue-02
      :ARG0 (c / captain)
      :ARG1 h))

Figure 13.3: Two views of the AMR representation for the sentence The whale wants the captain to pursue him: the PENMAN notation (above), and an equivalent graph with nodes for the variables w, h, p, and c, and edges labeled Arg0 and Arg1.

(13.13) The whale wants the captain to pursue him.

The PropBank semantic role labeling analysis is:

• (PREDICATE: wants, ARG0: the whale, ARG1: the captain to pursue him)
• (PREDICATE: pursue, ARG0: the captain, ARG1: him)

The Abstract Meaning Representation (AMR) unifies this analysis into a graph structure, in which each node is a variable, and each edge indicates a relation (Banarescu et al., 2013). This can be written in two ways, as shown in Figure 13.3. The PENMAN notation (Matthiessen and Bateman, 1991) introduces a variable with each set of parentheses. Each variable is an instance of a concept, which is indicated with the slash notation: for example, w / want-01 indicates that the variable w is an instance of the concept want-01, which in turn refers to the PropBank frame for the first sense of the verb want; pursue-02 refers to the second sense of pursue. Relations are introduced with colons: for example, :ARG0 (c / captain) indicates a relation of type ARG0 with the newly-introduced variable c. Variables can be reused, so that when the variable h appears again as an argument to p, it is understood to refer to the same whale in both cases. This arrangement is indicated compactly in the graph view, with edges indicating relations.

One way in which AMR differs from PropBank-style semantic role labeling is that it reifies each entity as a variable: for example, the whale in (13.13) is reified in the variable h, which is reused as ARG0 in its relationship with w / want-01, and as ARG1 in its relationship with p / pursue-02.
Reifying entities as variables also makes it possible to represent the substructure of noun phrases more explicitly. For example, Asha borrowed the algebra book would be represented as:

(b / borrow-01
   :ARG0 (p / person
      :name (n / name :op1 "Asha"))
   :ARG1 (b2 / book
      :topic (a / algebra)))

This indicates that the variable p is a person, whose name is the variable n; that name has one token, the string Asha. Similarly, the variable b2 is a book, and the topic of b2 is a variable a whose type is algebra. The relations name and topic are examples of "non-core roles", which are similar to adjunct modifiers in PropBank. However, AMR's inventory is more extensive, including more than 70 non-core roles, such as negation, time, manner, frequency, and location. Lists and sequences — such as the list of tokens in a name — are described using the roles op1, op2, etc.

Another feature of AMR is that a semantic predicate can be introduced by any syntactic element, as in the following examples from Banarescu et al. (2013):

(13.14) a. The boy destroyed the room.
b. the destruction of the room by the boy ...
c. the boy's destruction of the room ...

All these examples have the same semantics in AMR,

(d / destroy-01
   :ARG0 (b / boy)
   :ARG1 (r / room))

The noun destruction is linked to the verb destroy, which is captured by the PropBank frame destroy-01. This can happen with adjectives as well: in the phrase the attractive spy, the adjective attractive is linked to the PropBank frame attract-01:

(s / spy
   :ARG0-of (a / attract-01))

In this example, ARG0-of is an inverse relation, indicating that s is the ARG0 of the predicate a. Inverse relations make it possible for all AMR parses to have a single root concept.

While AMR goes farther than semantic role labeling, it does not link semantically-related frames such as buy/sell (as FrameNet does). AMR also does not handle quantification (as first-order predicate calculus does), and it makes no attempt to handle noun number and verb tense (as PropBank does).
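PENMAN notation is simple enough to read with a short recursive-descent parser. The sketch below is our own toy reader, not any standard AMR toolkit; it converts the want-01 example into a nested structure, and shows how a reused variable like h surfaces as a bare token rather than a new parenthesized concept.

```python
import re

def tokenize(s):
    # Parens, quoted strings, and everything else as bare tokens.
    return re.findall(r'\(|\)|"[^"]*"|[^\s()]+', s)

def parse(tokens):
    """Parse one (var / concept :role value ...) expression into a dict."""
    assert tokens.pop(0) == "("
    var = tokens.pop(0)
    assert tokens.pop(0) == "/"
    node = {"var": var, "concept": tokens.pop(0), "roles": []}
    while tokens[0] != ")":
        role = tokens.pop(0)            # e.g. ":ARG0"
        if tokens[0] == "(":
            value = parse(tokens)       # a newly-introduced variable and concept
        else:
            value = tokens.pop(0)       # a reused variable or a constant
        node["roles"].append((role, value))
    tokens.pop(0)                       # consume ")"
    return node

amr = parse(tokenize(
    '(w / want-01 :ARG0 (h / whale) '
    ':ARG1 (p / pursue-02 :ARG0 (c / captain) :ARG1 h))'))
print(amr["concept"])                  # want-01
print(amr["roles"][1][1]["roles"][1])  # (':ARG1', 'h')  -- the reused variable
```

Resolving the bare token h back to the whale node is exactly the variable reuse that turns the tree-shaped notation into a graph.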
13.3.1 AMR Parsing

Abstract Meaning Representation is not a labeling of the original text — unlike PropBank semantic role labeling, and most of the other tagging and parsing tasks that we have encountered thus far. The AMR for a given sentence may include multiple concepts for single words in the sentence: as we have seen, the sentence Asha likes algebra contains both person and name concepts for the word Asha. Conversely, words in the sentence may not appear in the AMR: in Boyang made a tour of campus, the light verb make would not appear in the AMR, which would instead be rooted on the predicate tour. As a result, AMR is difficult to parse, and even evaluating AMR parsing involves considerable algorithmic complexity (Cai and Yates, 2013).

A further complexity is that AMR labeled datasets do not explicitly show the alignment between the AMR annotation and the words in the sentence. For example, the link between the word wants and the concept want-01 is not annotated. To acquire training data for learning-based parsers, it is therefore necessary to first perform an alignment between the training sentences and their AMR parses. Flanigan et al. (2014) introduce a rule-based aligner, which links text to concepts through a series of increasingly high-recall steps.

As with dependency parsing, AMR can be parsed by graph-based methods that explore the space of graph structures, or by incremental transition-based algorithms. One approach to graph-based AMR parsing is to first group adjacent tokens into local substructures, and then to search the space of graphs over these substructures (Flanigan et al., 2014). The identification of concept subgraphs can be formulated as a sequence labeling problem, and the subsequent graph search can be solved using integer linear programming (§ 13.2.2). Various transition-based parsing algorithms have been proposed. Wang et al.
(2015) construct an AMR graph by incrementally modifying the syntactic dependency graph. At each step, the parser performs an action: for example, adding an AMR relation label to the current dependency edge, swapping the direction of a syntactic dependency edge, or cutting an edge and reattaching the orphaned subtree to a new parent.

Additional resources

Practical semantic role labeling was first made possible by the PropBank annotations on the Penn Treebank (Palmer et al., 2005). Abend and Rappoport (2017) survey several semantic representation schemes, including semantic role labeling and AMR. Other linguistic features of AMR are summarized in the original paper (Banarescu et al., 2013) and the tutorial slides by Schneider et al. (2015). Recent shared tasks have undertaken semantic dependency parsing, in which the goal is to identify semantic relationships between pairs of words (Oepen et al., 2014); see Ivanova et al. (2012) for an overview of connections between syntactic and semantic dependencies.
Exercises

1. Write out an event semantic representation for the following sentences. You may make up your own predicates.

(13.15) Abigail shares with Max.
(13.16) Abigail reluctantly shares a toy with Max.
(13.17) Abigail hates to share with Max.

2. Find the PropBank framesets for share and hate at http://verbs.colorado.edu/propbank/framesets-english-aliases/, and rewrite your answers from the previous question, using the thematic roles ARG0, ARG1, and ARG2.

3. Compute the syntactic path features for Abigail and Max in each of the example sentences (13.15) and (13.17) in Question 1, with respect to the verb share. If you're not sure about the parse, you can try an online parser such as http://nlp.stanford.edu:8080/parser/.

4. Compute the dependency path features for Abigail and Max in each of the example sentences (13.15) and (13.17) in Question 1, with respect to the verb share. Again, if you're not sure about the parse, you can try an online parser such as http://nlp.stanford.edu:8080/parser/. As a hint, the dependency relation between share and Max is OBL according to the Universal Dependency treebank.

5. PropBank semantic role labeling includes reference arguments, such as,

(13.18) [AM-LOC The bed] on [R-AM-LOC which] I slept broke.15

The label R-AM-LOC indicates that the word which is a reference to The bed, which expresses the location of the event. Reference arguments must have referents: the tag R-AM-LOC can appear only when AM-LOC also appears in the sentence. Show how to express this as a linear constraint, specifically for the tag R-AM-LOC. Be sure to correctly handle the case in which neither AM-LOC nor R-AM-LOC appears in the sentence.

6. Explain how to express the constraints on semantic role labeling in Equation 13.8 and Equation 13.9 in the general form Az ≥ b.

7. Produce the AMR annotations for the following examples:

(13.19) a. The girl likes the boy.
15Example from 2013 NAACL tutorial slides by Shumin Wu Jacob Eisenstein. Draft of November 13, 2018.
b. The girl was liked by the boy.
c. Abigail likes Maxwell Aristotle.
d. The spy likes the attractive boy.
e. The girl doesn't like the boy.
f. The girl likes her dog.

For (13.19c), recall that multi-token names are created using op1, op2, etc. You will need to consult Banarescu et al. (2013) for (13.19e), and Schneider et al. (2015) for (13.19f). You may assume that her refers to the girl in this example.

8. In this problem, you will build a FrameNet sense classifier for the verb can, which can evoke two frames: POSSIBILITY (can you order a salad with french fries?) and CAPABILITY (can you eat a salad with chopsticks?). To build the dataset, access the FrameNet corpus in NLTK:

    import nltk
    nltk.download('framenet_v17')
    from nltk.corpus import framenet as fn

Next, find instances in which the lexical unit can.v (the verb form of can) evokes a frame. Do this by iterating over fn.docs(), and then over sentences and annotation sets:

    for doc in fn.docs():
        if 'sentence' in doc:
            for sent in doc['sentence']:
                for anno_set in sent['annotationSet']:
                    if 'luName' in anno_set and anno_set['luName'] == 'can.v':
                        pass  # your code here

Use the field frameName as a label, and build a set of features from the field text. Train a classifier to try to accurately predict the frameName, disregarding cases other than CAPABILITY and POSSIBILITY. Treat the first hundred instances as a training set, and the remaining instances as the test set. Can you do better than a classifier that simply selects the most common class?

9. *Download the PropBank sample data, using NLTK (http://www.nltk.org/howto/propbank.html).

a) Use a deep learning toolkit such as PyTorch to train a BiLSTM sequence labeling model (§ 7.6) to identify words or phrases that are predicates, e.g., we/O took/B-PRED a/I-PRED walk/I-PRED together/O. Your model should compute the tag score from the BiLSTM hidden state, ψ(y_m) = β_y · h_m.
b) Optionally, implement the Viterbi algorithm to improve the predictions of the model from part (a).
c) Try to identify ARG0 and ARG1 for each predicate. You should again use the BiLSTM and BIO notation, but you may want to include the BiLSTM hidden state at the location of the predicate in your prediction model, e.g., ψ(y_m) = β_y · [h_m; h_r̂], where r̂ is the predicted location of the (first word of the) predicate.

10. Using an off-the-shelf PropBank SRL system,16 build a simplified question answering system in the style of Shen and Lapata (2007). Specifically, your system should do the following:

• For each document in a collection, it should apply the semantic role labeler, and should store the output as a tuple.
• For a question, your system should again apply the semantic role labeler. If any of the roles are filled by a wh-pronoun, you should mark that role as the expected answer phrase (EAP).
• To answer the question, search for a stored tuple which matches the question as well as possible (same predicate, no incompatible semantic roles, and as many matching roles as possible). Align the EAP against its role filler in the stored tuple, and return this as the answer.

To evaluate your system, download a set of three news articles on the same topic, and write down five factoid questions that should be answerable from the articles. See if your system can answer these questions correctly. (If this problem is assigned to an entire class, you can build a large-scale test set and compare various approaches.)

16At the time of writing, the following systems are available: SENNA (http://ronan.collobert.com/senna/), Illinois Semantic Role Labeler (https://cogcomp.cs.illinois.edu/page/software_view/SRL), and mate-tools (https://code.google.com/archive/p/mate-tools/).
Chapter 14

Distributional and distributed semantics

A recurring theme in natural language processing is the complexity of the mapping from words to meaning. In chapter 4, we saw that a single word form, like bank, can have multiple meanings; conversely, a single meaning may be created by multiple surface forms, a lexical semantic relationship known as synonymy. Despite this complex mapping between words and meaning, natural language processing systems usually rely on words as the basic unit of analysis. This is especially true in semantics: the logical and frame semantic methods from the previous two chapters rely on hand-crafted lexicons that map from words to semantic predicates. But how can we analyze texts that contain words that we haven't seen before? This chapter describes methods that learn representations of word meaning by analyzing unlabeled data, vastly improving the generalizability of natural language processing systems. The theory that makes it possible to acquire meaningful representations from unlabeled data is the distributional hypothesis.

14.1 The distributional hypothesis

Here's a word you may not know: tezgüino (the example is from Lin, 1998). If you do not know the meaning of tezgüino, then you are in the same situation as a natural language processing system when it encounters a word that did not appear in its training data. Now suppose you see that tezgüino is used in the following contexts:

(14.1) A bottle of ___ is on the table.
(14.2) Everybody likes ___.
(14.3) Don't have ___ before you drive.
(14.4) We make ___ out of corn.
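The reasoning developed in the next few paragraphs can be carried out mechanically. The sketch below (ours, not from the text) encodes tezgüino and several comparison words as binary vectors over contexts (14.1)-(14.4), matching the rows of Table 14.1, and ranks the comparison words by cosine similarity:

```python
import math

# Binary context vectors: entry k is 1 iff the word fits context (14.k).
vectors = {
    "tezgüino":  [1, 1, 1, 1],
    "loud":      [0, 0, 0, 0],
    "motor oil": [1, 0, 0, 1],
    "tortillas": [0, 1, 0, 1],
    "choices":   [0, 1, 0, 0],
    "wine":      [1, 1, 1, 0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms if norms > 0 else 0.0

target = vectors["tezgüino"]
ranked = sorted((w for w in vectors if w != "tezgüino"),
                key=lambda w: cosine(vectors[w], target), reverse=True)
print(ranked)  # ['wine', 'motor oil', 'tortillas', 'choices', 'loud']
```

wine comes out most similar and loud least similar, which matches the conclusions drawn from these vectors in the discussion below.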
326 CHAPTER 14. DISTRIBUTIONAL AND DISTRIBUTED SEMANTICS

             (14.1)  (14.2)  (14.3)  (14.4)  ...
tezgüino       1       1       1       1
loud           0       0       0       0
motor oil      1       0       0       1
tortillas      0       1       0       1
choices        0       1       0       0
wine           1       1       1       0

Table 14.1: Distributional statistics for tezgüino and five related terms

Figure 14.1: Lexical semantic relationships have regular linear structures in two-dimensional projections of distributional statistics (Pennington et al., 2014). (One panel shows gendered noun pairs such as sister/brother, queen/king, and duchess/duke; the other shows adjective gradation such as slow/slower/slowest.)

What other words fit into these contexts? How about: loud, motor oil, tortillas, choices, wine? Each row of Table 14.1 is a vector that summarizes the contextual properties for each word, with a value of one for contexts in which the word can appear, and a value of zero for contexts in which it cannot. Based on these vectors, we can conclude: wine is very similar to tezgüino; motor oil and tortillas are fairly similar to tezgüino; loud is completely different.

These vectors, which we will call word representations, describe the distributional properties of each word. Does vector similarity imply semantic similarity? This is the distributional hypothesis, stated by Firth (1957) as: "You shall know a word by the company it keeps." The distributional hypothesis has stood the test of time: distributional statistics are a core part of language technology today, because they make it possible to leverage large amounts of unlabeled data to learn about rare words that do not appear in labeled training data.

Distributional statistics have a striking ability to capture lexical semantic relationships
such as analogies. Figure 14.1 shows two examples, based on two-dimensional projections of distributional word embeddings, discussed later in this chapter. In each case, word-pair relationships correspond to regular linear patterns in this two-dimensional space. No labeled data about the nature of these relationships was required to identify this underlying structure.

Distributional semantics are computed from context statistics. Distributed semantics are a related but distinct idea: that meaning can be represented by numerical vectors rather than symbolic structures. Distributed representations are often estimated from distributional statistics, as in latent semantic analysis and WORD2VEC, described later in this chapter. However, distributed representations can also be learned in a supervised fashion from labeled data, as in the neural classification models encountered in chapter 3.

14.2 Design decisions for word representations

There are many approaches for computing word representations, but most can be distinguished on three main dimensions: the nature of the representation, the source of contextual information, and the estimation procedure.

14.2.1 Representation

Today, the dominant word representations are k-dimensional vectors of real numbers, known as word embeddings. (The name is due to the fact that each discrete word is embedded in a continuous vector space.) This representation dates back at least to the late 1980s (Deerwester et al., 1990), and is used in popular techniques such as WORD2VEC (Mikolov et al., 2013).

Word embeddings are well suited for neural networks, where they can be plugged in as inputs. They can also be applied in linear classifiers and structure prediction models (Turian et al., 2010), although it can be difficult to learn linear models that employ real-valued features (Kummerfeld et al., 2015).
A popular alternative is bit-string representations, such as Brown clusters (§ 14.4), in which each word is represented by a variable-length sequence of zeros and ones (Brown et al., 1992).

Another representational question is whether to estimate one embedding per surface form (e.g., bank), or to estimate distinct embeddings for each word sense or synset. Intuitively, if word representations are to capture the meaning of individual words, then words with multiple meanings should have multiple embeddings. This can be achieved by integrating unsupervised clustering with word embedding estimation (Huang and Yates, 2012; Li and Jurafsky, 2015). However, Arora et al. (2018) argue that it is unnecessary to model distinct word senses explicitly, because the embeddings for each surface form are a linear combination of the embeddings of the underlying senses.
The moment one learns English, complications set in (Alfau, 1999)

Brown clusters:              {one}
WORD2VEC, h = 2:             {moment, one, English, complications}
Structured WORD2VEC, h = 2:  {(moment, −2), (one, −1), (English, +1), (complications, +2)}
Dependency contexts:         {(one, NSUBJ), (English, DOBJ), (moment, ACL−1)}

Table 14.2: Contexts for the word learns, according to various word representations. For dependency contexts, (one, NSUBJ) means that there is a relation of type NSUBJ (nominal subject) to the word one, and (moment, ACL−1) means that there is a relation of type ACL (adjectival clause) from the word moment.

14.2.2 Context

The distributional hypothesis says that word meaning is related to the "contexts" in which the word appears, but context can be defined in many ways. In the tezgüino example, contexts are entire sentences, but in practice there are far too many sentences. At the opposite extreme, the context could be defined as the immediately preceding word; this is the context considered in Brown clusters. WORD2VEC takes an intermediate approach, using local neighborhoods of words (e.g., h = 5) as contexts (Mikolov et al., 2013). Contexts can also be much larger: for example, in latent semantic analysis, each word's context vector includes an entry per document, with a value of one if the word appears in the document (Deerwester et al., 1990); in explicit semantic analysis, these documents are Wikipedia pages (Gabrilovich and Markovitch, 2007).

In structured WORD2VEC, context words are labeled by their position with respect to the target word w_m (e.g., two words before, one word after), which makes the resulting word representations more sensitive to syntactic differences (Ling et al., 2015). Another way to incorporate syntax is to perform parsing as a preprocessing step, and then form context vectors from the dependency edges (Levy and Goldberg, 2014) or predicate-argument relations (Lin, 1998).
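Window-based context extraction takes only a few lines. The sketch below is our own, with punctuation dropped from the example sentence for simplicity; the positional variant pairs each context word with its signed offset, as in structured WORD2VEC.

```python
def contexts(tokens, target, h, positional=False):
    """Window-h contexts of `target`; with positional=True, pair each
    context word with its signed offset from the target."""
    i = tokens.index(target)
    out = []
    for j in range(max(0, i - h), min(len(tokens), i + h + 1)):
        if j != i:
            out.append((tokens[j], j - i) if positional else tokens[j])
    return out

sent = "The moment one learns English complications set in".split()
print(contexts(sent, "learns", 2))
# ['moment', 'one', 'English', 'complications']
print(contexts(sent, "learns", 2, positional=True))
# [('moment', -2), ('one', -1), ('English', 1), ('complications', 2)]
```

The two outputs reproduce the WORD2VEC and structured WORD2VEC rows of Table 14.2; Brown cluster contexts correspond to the degenerate call with h = 1, keeping only the preceding word.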
The resulting context vectors for several of these methods are shown in Table 14.2. The choice of context has a profound effect on the resulting representations, which can be viewed in terms of word similarity. Applying latent semantic analysis (§ 14.3) to contexts of size h = 2 and h = 30 yields the following nearest-neighbors for the word dog:1

• (h = 2): cat, horse, fox, pet, rabbit, pig, animal, mongrel, sheep, pigeon

1The example is from lecture slides by Marco Baroni, Alessandro Lenci, and Stefan Evert, who applied latent semantic analysis to the British National Corpus. You can find an online demo here: http://clic.cimec.unitn.it/infomap-query/

Jacob Eisenstein. Draft of November 13, 2018.
14.3. LATENT SEMANTIC ANALYSIS 329 • (h = 30): kennel, puppy, pet, bitch, terrier, rottweiler, canine, cat, to bark, Alsatian Which word list is better? Each word in the h = 2 list is an animal, reflecting the fact that locally, the word dog tends to appear in the same contexts as other animal types (e.g., pet the dog, feed the dog). In the h = 30 list, nearly everything is dog-related, including specific breeds such as rottweiler and Alsatian. The list also includes words that are not animals (kennel), and in one case (to bark), is not a noun at all. The 2-word context window is more sensitive to syntax, while the 30-word window is more sensitive to topic. 14.2.3 Estimation Word embeddings are estimated by optimizing some objective: the likelihood of a set of unlabeled data (or a closely related quantity), or the reconstruction of a matrix of context counts, similar to Table 14.1. Maximum likelihood estimation Likelihood-based optimization is derived from the objective log p(w; U), where U ∈RK × V is matrix of word embeddings, and w = {wm}M m=1 is a corpus, represented as a list of M tokens. Recurrent neural network lan- guage models (§ 6.3) optimize this objective directly, backpropagating to the input word embeddings through the recurrent structure. However, state-of-the-art word embeddings employ huge corpora with hundreds of billions of tokens, and recurrent architectures are difficult to scale to such data. As a result, likelihood-based word embeddings are usually based on simplified likelihoods or heuristic approximations. Matrix factorization The matrix C = {count(i, j)} stores the co-occurrence counts of word i and context j. Word representations can be obtained by approximately factoring this matrix, so that count(i, j) is approximated by a function of a word embedding ui and a context embedding vj. 
These embeddings can be obtained by minimizing the norm of the reconstruction error,

    \min_{u, v} \| C - \tilde{C}(u, v) \|_F,      [14.1]

where \tilde{C}(u, v) is the approximate reconstruction resulting from the embeddings u and v, and \|X\|_F indicates the Frobenius norm, \sqrt{\sum_{i,j} x_{i,j}^2}. Rather than factoring the matrix of word-context counts directly, it is often helpful to transform these counts using information-theoretic metrics such as pointwise mutual information (PMI), described in the next section.

Under contract with MIT Press, shared under CC-BY-NC-ND license.

14.3 Latent semantic analysis

Latent semantic analysis (LSA) is one of the oldest approaches to distributed semantics (Deerwester et al., 1990). It induces continuous vector representations of words by
factoring a matrix of word and context counts, using truncated singular value decomposition (SVD),

    \min_{U \in R^{V \times K},\, S \in R^{K \times K},\, V \in R^{|C| \times K}} \| C - U S V^\top \|_F      [14.2]
    \text{s.t.}\quad U^\top U = I                                                                             [14.3]
                \quad V^\top V = I                                                                            [14.4]
                \quad \forall i \neq j,\ S_{i,j} = 0,                                                         [14.5]

where V is the size of the vocabulary, |C| is the number of contexts, and K is the size of the resulting embeddings, which are set equal to the rows of the matrix U. The matrix S is constrained to be diagonal (these diagonal elements are called the singular values), and the columns of the product S V^\top provide descriptions of the contexts. Each element c_{i,j} is then reconstructed as a bilinear product,

    c_{i,j} \approx \sum_{k=1}^{K} u_{i,k}\, s_k\, v_{j,k}.      [14.6]

The objective is to minimize the sum of squared approximation errors. The orthonormality constraints U^\top U = V^\top V = I ensure that all pairs of dimensions in U and V are uncorrelated, so that each dimension conveys unique information. Efficient implementations of truncated singular value decomposition are available in numerical computing packages such as SCIPY and MATLAB.2

Latent semantic analysis is most effective when the count matrix is transformed before the application of SVD. One such transformation is pointwise mutual information (PMI; Church and Hanks, 1990), which captures the degree of association between word i and context j,

    \text{PMI}(i, j) = \log \frac{p(i, j)}{p(i)\, p(j)} = \log \frac{p(i \mid j)\, p(j)}{p(i)\, p(j)} = \log \frac{p(i \mid j)}{p(i)}      [14.7]
    = \log \text{count}(i, j) - \log \sum_{i'=1}^{V} \text{count}(i', j)                                                                  [14.8]
    \quad - \log \sum_{j' \in C} \text{count}(i, j') + \log \sum_{i'=1}^{V} \sum_{j' \in C} \text{count}(i', j').                          [14.9]

The pointwise mutual information can be viewed as the logarithm of the ratio of the conditional probability of word i in context j to the marginal probability of word i in all

2An important implementation detail is to represent C as a sparse matrix, so that the storage cost is equal to the number of non-zero entries, rather than the size V × |C|.
contexts. When word i is statistically associated with context j, the ratio will be greater than one, so PMI(i, j) > 0. The PMI transformation focuses latent semantic analysis on reconstructing strong word-context associations, rather than on reconstructing large counts. The PMI is negative when a word and context occur together less often than if they were independent, but such negative correlations are unreliable because counts of rare events have high variance. Furthermore, the PMI is undefined when count(i, j) = 0. One solution to these problems is to use the Positive PMI (PPMI),

    \text{PPMI}(i, j) = \begin{cases} \text{PMI}(i, j), & p(i \mid j) > p(i) \\ 0, & \text{otherwise.} \end{cases}      [14.10]

Bullinaria and Levy (2007) compare a range of matrix transformations for latent semantic analysis, using a battery of tasks related to word meaning and word similarity (for more on evaluation, see § 14.6). They find that PPMI-based latent semantic analysis yields strong performance: for example, PPMI-based LSA vectors can be used to solve multiple-choice word similarity questions from the Test of English as a Foreign Language (TOEFL), obtaining 85% accuracy.

14.4 Brown clusters

Learning algorithms like perceptron and conditional random fields often perform better with discrete feature vectors. A simple way to obtain discrete representations from distributional statistics is by clustering (§ 5.1.1), so that words in the same cluster have similar distributional statistics. This can help in downstream tasks, by sharing features between all words in the same cluster. However, there is an obvious tradeoff: if the number of clusters is too small, the words in each cluster will not have much in common; if the number of clusters is too large, then the learner will not see enough examples from each cluster to generalize. A solution to this problem is hierarchical clustering: using the distributional statistics to induce a tree-structured representation. Fragments of Brown cluster trees are shown in Figure 14.2 and Table 14.3.

[Figure 14.2: Subtrees produced by bottom-up Brown clustering on news text (Miller et al., 2004). The subtrees group together words such as evaluation, assessment, analysis, understanding, opinion, conversation, discussion; reps, representatives, representative, rep; day, year, week, month, quarter, half; accounts, people, customers, individuals, employees, students.]

bitstring        ten most frequent words
011110100111     excited thankful grateful stoked pumped anxious hyped psyched exited geeked
01111010100      talking talkin complaining talkn bitching tlkn tlkin bragging raving +k
011110101010     thinking thinkin dreaming worrying thinkn speakin reminiscing dreamin daydreaming fantasizing
011110101011     saying sayin suggesting stating sayn jokin talmbout implying insisting 5'2
011110101100     wonder dunno wondered duno donno dno dono wonda wounder dunnoe
011110101101     wondering wonders debating deciding pondering unsure wonderin debatin woundering wondern
011110101110     sure suree suuure suure sure- surre sures shuree

Table 14.3: Fragment of a Brown clustering of Twitter data (Owoputi et al., 2013). Each row is a leaf in the tree, showing the ten most frequent words. This part of the tree emphasizes verbs of communicating and knowing, especially in the present participle. Each leaf node includes orthographic variants (thinking, thinkin, thinkn), semantically related terms (excited, thankful, grateful), and some outliers (5'2, +k). See http://www.cs.cmu.edu/~ark/TweetNLP/cluster_viewer.html for more.
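The bitstrings in the left column of Table 14.3 are typically consumed through their prefixes. The following sketch (function name, feature format, and prefix lengths chosen here for illustration) extracts prefix features of the granularities mentioned in the text:

```python
# Sketch: turning a Brown-cluster bitstring (as in Table 14.3) into
# prefix features of varying granularity, as used in tasks like NER.
def prefix_features(bitstring, lengths=(8, 12, 16, 20)):
    """One feature per prefix length that the bitstring is long enough for."""
    return ["brown-%d=%s" % (n, bitstring[:n])
            for n in lengths if len(bitstring) >= n]

# A cluster assignment from Table 14.3:
print(prefix_features("011110100111"))
# ['brown-8=01111010', 'brown-12=011110100111']
```

Words sharing a prefix (i.e., an ancestor in the tree) share the corresponding feature, so the learner can generalize across clusters at multiple levels of specificity.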
Each word's representation consists of a binary string describing a path through the tree: 0 for taking the left branch, and 1 for taking the right branch. In the subtree in the upper right of the figure, the representation of the word conversation is 10; the representation of the word assessment is 0001. Bitstring prefixes capture similarity at varying levels of specificity, and it is common to use the first eight, twelve, sixteen, and twenty bits as features in tasks such as named entity recognition (Miller et al., 2004) and dependency parsing (Koo et al., 2008).

Hierarchical trees can be induced from a likelihood-based objective, using a discrete
latent variable k_i ∈ {1, 2, ..., K} to represent the cluster of word i:

    \log p(w; k) \approx \sum_{m=1}^{M} \log p(w_m \mid w_{m-1}; k)                                           [14.11]
                 \triangleq \sum_{m=1}^{M} \log p(w_m \mid k_{w_m}) + \log p(k_{w_m} \mid k_{w_{m-1}}).       [14.12]

This is similar to a hidden Markov model, with the crucial difference that each word can be emitted from only a single cluster: \forall k \neq k_{w_m},\ p(w_m \mid k) = 0.

Using the objective in Equation 14.12, the Brown clustering tree can be constructed from the bottom up: begin with each word in its own cluster, and incrementally merge clusters until only a single cluster remains. At each step, we merge the pair of clusters such that the objective in Equation 14.12 is maximized. Although the objective seems to involve a sum over the entire corpus, the score for each merger can be computed from the cluster-to-cluster co-occurrence counts. These counts can be updated incrementally as the clustering proceeds. The optimal merge at each step can be shown to maximize the average mutual information,

    I(k) = \sum_{k_1=1}^{K} \sum_{k_2=1}^{K} p(k_1, k_2) \times \text{PMI}(k_1, k_2)                          [14.13]

    p(k_1, k_2) = \frac{\text{count}(k_1, k_2)}{\sum_{k_1'=1}^{K} \sum_{k_2'=1}^{K} \text{count}(k_1', k_2')},

where p(k_1, k_2) is the joint probability of a bigram involving a word in cluster k_1 followed by a word in k_2. This probability and the PMI are both computed from the co-occurrence counts between clusters. After each merger, the co-occurrence vectors for the merged clusters are simply added up, so that the next optimal merger can be found efficiently.

This bottom-up procedure requires iterating over the entire vocabulary, and evaluating K_t^2 possible mergers at each step, where K_t is the current number of clusters at step t of the algorithm. Furthermore, computing the score for each merger involves a sum over K_t^2 clusters. The maximum number of clusters is K_0 = V, which occurs when every word is in its own cluster at the beginning of the algorithm. The time complexity is thus O(V^5).
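The average mutual information of Equation 14.13 can be computed directly from the matrix of cluster-to-cluster bigram counts. A minimal sketch (the function name and the toy count matrix are illustrative assumptions):

```python
# Sketch of Equation 14.13: average mutual information computed from a
# matrix of cluster-to-cluster bigram counts.
import math

def average_mutual_information(counts):
    """counts[k1][k2] = number of bigrams: a word in cluster k1, then k2."""
    total = sum(sum(row) for row in counts)
    left = [sum(row) / total for row in counts]           # p(k1) marginal
    right = [sum(col) / total for col in zip(*counts)]    # p(k2) marginal
    ami = 0.0
    for k1, row in enumerate(counts):
        for k2, c in enumerate(row):
            if c > 0:
                p = c / total                             # joint p(k1, k2)
                ami += p * math.log(p / (left[k1] * right[k2]))
    return ami

# Two clusters that perfectly predict each other: I = log 2.
print(round(average_mutual_information([[10, 0], [0, 10]]), 3))  # 0.693
```

In the bottom-up algorithm, each candidate merger is scored by the change in this quantity, which can be computed locally from the rows and columns of the affected clusters.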
To avoid this complexity, practical implementations use a heuristic approximation called exchange clustering. The K most common words are placed in clusters of their own at the beginning of the process. We then consider the next most common word, and merge it with one of the existing clusters. This continues until the entire vocabulary has been incorporated, at which point the K clusters are merged down to a single cluster, forming a tree. The algorithm never considers more than K + 1 clusters at any step, and the complexity is O(VK + V log V), with the second term representing the cost of sorting
the words at the beginning of the algorithm. For more details on the algorithm, see Liang (2005).

[Figure 14.3: The CBOW and skipgram variants of WORD2VEC. The parameter U is the matrix of word embeddings, and each v_m is the context embedding for word w_m.]

14.5 Neural word embeddings

Neural word embeddings combine aspects of the previous two methods: like latent semantic analysis, they are a continuous vector representation; like Brown clusters, they are trained from a likelihood-based objective. Let the vector u_i represent the K-dimensional embedding for word i, and let v_j represent the K-dimensional embedding for context j. The inner product u_i · v_j represents the compatibility between word i and context j. By incorporating this inner product into an approximation to the log-likelihood of a corpus, it is possible to estimate both parameters by backpropagation. WORD2VEC (Mikolov et al., 2013) includes two such approximations: continuous bag-of-words (CBOW) and skipgrams.

14.5.1 Continuous bag-of-words (CBOW)

In recurrent neural network language models, each word w_m is conditioned on a recurrently-updated state vector, which is based on word representations going all the way back to the beginning of the text. The continuous bag-of-words (CBOW) model is a simplification: the local context is computed as an average of embeddings for words in the immediate neighborhood m − h, m − h + 1, ..., m + h − 1, m + h,

    v_m = \frac{1}{2h} \sum_{n=1}^{h} \left( v_{w_{m+n}} + v_{w_{m-n}} \right).      [14.14]

Thus, CBOW is a bag-of-words model, because the order of the context words does not matter; it is continuous, because rather than conditioning on the words themselves, we condition on a continuous vector constructed from the word embeddings. The parameter h determines the neighborhood size, which Mikolov et al. (2013) set to h = 4.
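Equation 14.14 can be sketched in a few lines. The toy embeddings below are illustrative assumptions; neighbors falling off the edge of the sequence are simply skipped, while the divisor stays 2h as in the equation:

```python
# Sketch of Equation 14.14: the CBOW context vector is the average of
# the 2h neighboring context embeddings.
def cbow_context(v, m, h):
    """v: list of per-token context embeddings (each a list of floats)."""
    neighbors = [v[m + d] for d in range(-h, h + 1)
                 if d != 0 and 0 <= m + d < len(v)]
    K = len(v[m])
    # Divide by 2h, following the equation; boundary tokens have fewer
    # neighbors, so their averages are correspondingly smaller.
    return [sum(vec[k] for vec in neighbors) / (2 * h) for k in range(K)]

v = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [1.0, 1.0], [0.0, 0.0]]
print(cbow_context(v, 2, 2))  # average of the four neighbors: [0.5, 0.5]
```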
The CBOW model optimizes an approximation to the corpus log-likelihood,

    \log p(w) \approx \sum_{m=1}^{M} \log p(w_m \mid w_{m-h}, w_{m-h+1}, \ldots, w_{m+h-1}, w_{m+h})      [14.15]
              = \sum_{m=1}^{M} \log \frac{\exp(u_{w_m} \cdot v_m)}{\sum_{j=1}^{V} \exp(u_j \cdot v_m)}     [14.16]
              = \sum_{m=1}^{M} u_{w_m} \cdot v_m - \log \sum_{j=1}^{V} \exp(u_j \cdot v_m).                [14.17]

14.5.2 Skipgrams

In the CBOW model, words are predicted from their context. In the skipgram model, the context is predicted from the word, yielding the objective:

    \log p(w) \approx \sum_{m=1}^{M} \sum_{n=1}^{h_m} \log p(w_{m-n} \mid w_m) + \log p(w_{m+n} \mid w_m)                                                                  [14.18]
              = \sum_{m=1}^{M} \sum_{n=1}^{h_m} \log \frac{\exp(u_{w_{m-n}} \cdot v_{w_m})}{\sum_{j=1}^{V} \exp(u_j \cdot v_{w_m})} + \log \frac{\exp(u_{w_{m+n}} \cdot v_{w_m})}{\sum_{j=1}^{V} \exp(u_j \cdot v_{w_m})}      [14.19]
              = \sum_{m=1}^{M} \sum_{n=1}^{h_m} u_{w_{m-n}} \cdot v_{w_m} + u_{w_{m+n}} \cdot v_{w_m} - 2 \log \sum_{j=1}^{V} \exp(u_j \cdot v_{w_m}).                      [14.20]

In the skipgram approximation, each word is generated multiple times; each time it is conditioned only on a single word. This makes it possible to avoid averaging the word vectors, as in the CBOW model. The local neighborhood size h_m is randomly sampled from a uniform categorical distribution over the range {1, 2, ..., h_max}; Mikolov et al. (2013) set h_max = 10. Because the neighborhood grows outward with h, this approach has the effect of weighting near neighbors more than distant ones. Skipgram performs better on most evaluations than CBOW (see § 14.6 for details of how to evaluate word representations), but CBOW is faster to train (Mikolov et al., 2013).

14.5.3 Computational complexity

The WORD2VEC models can be viewed as an efficient alternative to recurrent neural network language models, which involve a recurrent state update whose time complexity is quadratic in the size of the recurrent state vector. CBOW and skipgram avoid this computation, and incur only a linear time complexity in the size of the word and context representations.
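The normalized skipgram probability of Equation 14.19 can be sketched with toy embeddings (all values below are illustrative assumptions, not trained parameters):

```python
# Sketch of Equation 14.19: the skipgram log-probability of one context
# word c given the center word w,
#   log p(c | w) = u_c . v_w - log sum_j exp(u_j . v_w).
import math

def skipgram_logprob(U, v_w, c):
    """U: output embeddings u_j; v_w: center-word embedding; c: context id."""
    scores = [sum(u_k * v_k for u_k, v_k in zip(u, v_w)) for u in U]
    log_norm = math.log(sum(math.exp(s) for s in scores))
    return scores[c] - log_norm

U = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]   # toy output embeddings
v_w = [2.0, 0.0]                            # toy center-word embedding
# Word 0 aligns with v_w, word 2 anti-aligns:
print(skipgram_logprob(U, v_w, 0) > skipgram_logprob(U, v_w, 2))  # True
```

The probabilities exp(log p(c | w)) sum to one over the vocabulary, which is exactly the normalization whose cost motivates the next section.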
However, all three models compute a normalized probability over word tokens; a naïve implementation of this probability requires summing over the entire
vocabulary. The time complexity of this sum is O(V × K), which dominates all other computational costs. There are two solutions: hierarchical softmax, a tree-based computation that reduces the cost to a logarithm of the size of the vocabulary; and negative sampling, an approximation that eliminates the dependence on vocabulary size. Both methods are also applicable to RNN language models.

Hierarchical softmax

In Brown clustering, the vocabulary is organized into a binary tree. Mnih and Hinton (2008) show that the normalized probability over words in the vocabulary can be reparametrized as a probability over paths through such a tree. This hierarchical softmax probability is computed as a product of binary decisions over whether to move left or right through the tree, with each binary decision represented as a sigmoid function of the inner product between the context embedding v_c and an output embedding u_n associated with the node n,

    \Pr(\text{left at } n \mid c) = \sigma(u_n \cdot v_c)                               [14.21]
    \Pr(\text{right at } n \mid c) = 1 - \sigma(u_n \cdot v_c) = \sigma(-u_n \cdot v_c),  [14.22]

where σ refers to the sigmoid function, σ(x) = 1/(1 + exp(−x)). The range of the sigmoid is the interval (0, 1), and 1 − σ(x) = σ(−x).

[Figure 14.4: A fragment of a hierarchical softmax tree. The probability of each word is computed as a product of probabilities of local branching decisions in the tree: Ahab has probability σ(u_0 · v_c); whale has probability σ(−u_0 · v_c) × σ(u_2 · v_c); blubber has probability σ(−u_0 · v_c) × σ(−u_2 · v_c).]

As shown in Figure 14.4, the probability of generating each word is redefined as the product of the probabilities across its path. The sum of all such path probabilities is guaranteed to be one, for any context vector v_c ∈ R^K. In a balanced binary tree, the depth is logarithmic in the number of leaf nodes, and thus the number of multiplications is equal to O(log V). The number of non-leaf nodes is equal to O(2V − 1), so the number of parameters to be estimated increases by only a small multiple.
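The path-product computation of Figure 14.4 can be sketched as follows; the node embeddings and the path encoding are illustrative assumptions:

```python
# Sketch of the hierarchical softmax (Equations 14.21-14.22): a word's
# probability is the product of sigmoid branch probabilities on its path.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def path_probability(path, U, v_c):
    """path: list of (node index, +1 for left branch, -1 for right branch)."""
    p = 1.0
    for n, direction in path:
        score = sum(u_k * v_k for u_k, v_k in zip(U[n], v_c))
        p *= sigmoid(direction * score)  # sigma(u_n.v_c) left, sigma(-u_n.v_c) right
    return p

U = [[0.5, -0.5], [1.0, 0.0]]   # toy internal-node output embeddings
v_c = [1.0, 1.0]
# The two children of the root partition the probability mass:
p_left = path_probability([(0, +1)], U, v_c)
p_right = path_probability([(0, -1)], U, v_c)
print(abs(p_left + p_right - 1.0) < 1e-12)  # True
```

Because each leaf's probability is a product of complementary sigmoids along its path, the leaf probabilities sum to one by construction, with only O(log V) multiplications per word in a balanced tree.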
The tree can be constructed using an incremental clustering procedure similar to hierarchical Brown clusters (Mnih
and Hinton, 2008), or by using the Huffman (1952) encoding algorithm for lossless compression.

Negative sampling

Likelihood-based methods are computationally intensive because each probability must be normalized over the vocabulary. These probabilities are based on scores for each word in each context, and it is possible to design an alternative objective that is based on these scores more directly: we seek word embeddings that maximize the score for the word that was really observed in each context, while minimizing the scores for a set of randomly selected negative samples:

    \psi(i, j) = \log \sigma(u_i \cdot v_j) + \sum_{i' \in W_{neg}} \log(1 - \sigma(u_{i'} \cdot v_j)),      [14.23]

where ψ(i, j) is the score for word i in context j, and W_neg is the set of negative samples. The objective is to maximize the sum over the corpus, \sum_{m=1}^{M} \psi(w_m, c_m), where w_m is token m and c_m is the associated context.

The set of negative samples W_neg is obtained by sampling from a unigram language model. Mikolov et al. (2013) construct this unigram language model by exponentiating the empirical word probabilities, setting \hat{p}(i) \propto (\text{count}(i))^{3/4}. This has the effect of redistributing probability mass from common to rare words. The number of negative samples increases the time complexity of training by a constant factor. Mikolov et al. (2013) report that 5-20 negative samples works for small training sets, and that two to five samples suffice for larger corpora.

14.5.4 Word embeddings as matrix factorization

The negative sampling objective in Equation 14.23 can be justified as an efficient approximation to the log-likelihood, but it is also closely linked to the matrix factorization objective employed in latent semantic analysis.
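The negative-sampling score of Equation 14.23 involves no sum over the vocabulary, only over the sampled negatives. A minimal sketch with toy embeddings (all values are illustrative assumptions):

```python
# Sketch of Equation 14.23: the negative-sampling score psi(i, j) for an
# observed word-context pair plus a set of sampled negative words.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neg_sampling_score(u, v, i, j, negatives):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    score = math.log(sigmoid(dot(u[i], v[j])))          # observed pair
    for i_neg in negatives:                             # sampled negatives
        score += math.log(1.0 - sigmoid(dot(u[i_neg], v[j])))
    return score

u = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]]   # toy word embeddings
v = [[2.0, 0.0]]                            # one toy context embedding
# The truly observed word (0) scores higher than a mismatched word (1):
print(neg_sampling_score(u, v, 0, 0, [1]) > neg_sampling_score(u, v, 1, 0, [0]))
```

In training, the gradient of this score touches only the observed word, the context, and the handful of negatives, which is what eliminates the dependence on vocabulary size.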
For a matrix of word-context pairs in which all counts are non-zero, negative sampling is equivalent to factorization of the matrix M, where M_{ij} = PMI(i, j) − log k: each cell in the matrix is equal to the pointwise mutual information of the word and context, shifted by log k, with k equal to the number of negative samples (Levy and Goldberg, 2014). For word-context pairs that are not observed in the data, the pointwise mutual information is −∞, but this can be addressed by considering only PMI values that are greater than log k, resulting in a matrix of shifted positive pointwise mutual information,

    M_{ij} = \max(0, \text{PMI}(i, j) - \log k).      [14.24]

Word embeddings are obtained by factoring this matrix with truncated singular value decomposition.
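The shifted positive PMI transformation of Equation 14.24 can be computed directly from a matrix of counts. A sketch (function name and toy counts are illustrative assumptions):

```python
# Sketch of Equation 14.24: transforming co-occurrence counts into
# shifted positive PMI, with k the number of negative samples.
import math

def shifted_ppmi(counts, k):
    total = sum(sum(row) for row in counts)
    row_sums = [sum(row) for row in counts]
    col_sums = [sum(col) for col in zip(*counts)]
    M = []
    for i, row in enumerate(counts):
        M.append([])
        for j, c in enumerate(row):
            if c == 0:
                M[i].append(0.0)   # unobserved pairs: PMI is -inf, clipped to 0
            else:
                pmi = math.log(c * total / (row_sums[i] * col_sums[j]))
                M[i].append(max(0.0, pmi - math.log(k)))
    return M

M = shifted_ppmi([[8, 2], [2, 8]], k=1)
print(M[0][0] > 0.0 and M[0][1] == 0.0)  # associated vs. dissociated pair: True
```

The resulting sparse non-negative matrix can then be handed to a truncated SVD routine, exactly as in § 14.3.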
word 1         word 2     similarity
love           sex        6.77
stock          jaguar     0.92
money          cash       9.15
development    issue      3.97
lad            brother    4.46

Table 14.4: Subset of the WS-353 (Finkelstein et al., 2002) dataset of word similarity ratings (examples from Faruqui et al. (2016)).

GloVe ("global vectors") are a closely related approach (Pennington et al., 2014), in which the matrix to be factored is constructed from log co-occurrence counts, M_{ij} = log count(i, j). The word embeddings are estimated by minimizing the sum of squares,

    \min_{u, v, b, \tilde{b}} \sum_{i=1}^{V} \sum_{j \in C} f(M_{ij}) \left( \widehat{\log M_{ij}} - \log M_{ij} \right)^2
    \quad \text{s.t.} \quad \widehat{\log M_{ij}} = u_i \cdot v_j + b_i + \tilde{b}_j,      [14.25]

where b_i and \tilde{b}_j are offsets for word i and context j, which are estimated jointly with the embeddings u and v. The weighting function f(M_{ij}) is set to be zero at M_{ij} = 0, thus avoiding the problem of taking the logarithm of zero counts; it saturates at M_{ij} = m_{max}, thus avoiding the problem of overcounting common word-context pairs. This heuristic turns out to be critical to the method's performance.

The time complexity of sparse matrix reconstruction is determined by the number of non-zero word-context counts. Pennington et al. (2014) show that this number grows sublinearly with the size of the dataset: roughly O(N^{0.8}) for typical English corpora. In contrast, the time complexity of WORD2VEC is linear in the corpus size. Computing the co-occurrence counts also requires linear time in the size of the corpus, but this operation can easily be parallelized using MapReduce-style algorithms (Dean and Ghemawat, 2008).

14.6 Evaluating word embeddings

Distributed word representations can be evaluated in two main ways. Intrinsic evaluations test whether the representations cohere with our intuitions about word meaning. Extrinsic evaluations test whether they are useful for downstream tasks, such as sequence labeling.
14.6.1 Intrinsic evaluations

A basic question for word embeddings is whether the similarity of words i and j is reflected in the similarity of the vectors u_i and u_j. Cosine similarity is typically used to compare two word embeddings,

    \cos(u_i, u_j) = \frac{u_i \cdot u_j}{\|u_i\|_2 \times \|u_j\|_2}.      [14.26]

For any embedding method, we can evaluate whether the cosine similarity of word embeddings is correlated with human judgments of word similarity. The WS-353 dataset (Finkelstein et al., 2002) includes similarity scores for 353 word pairs (Table 14.4). To test the accuracy of embeddings for rare and morphologically complex words, Luong et al. (2013) introduce a dataset of "rare words." Outside of English, word similarity resources are limited, mainly consisting of translations of WS-353 and the related SimLex-999 dataset (Hill et al., 2015).

Word analogies (e.g., king:queen :: man:woman) have also been used to evaluate word embeddings (Mikolov et al., 2013). In this evaluation, the system is provided with the first three parts of the analogy (i_1 : j_1 :: i_2 : ?), and the final element is predicted by finding the word embedding most similar to u_{i_1} − u_{j_1} + u_{i_2}. Another evaluation tests whether word embeddings are related to broad lexical semantic categories called supersenses (Ciaramita and Johnson, 2003): verbs of motion, nouns that describe animals, nouns that describe body parts, and so on. These supersenses are annotated for English synsets in WordNet (Fellbaum, 2010). This evaluation is implemented in the QVEC metric, which tests whether the matrix of supersenses can be reconstructed from the matrix of word embeddings (Tsvetkov et al., 2015).

Levy et al. (2015) compared several dense word representations for English — including latent semantic analysis, WORD2VEC, and GloVe — using six word similarity metrics and two analogy tasks.
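The cosine similarity of Equation 14.26 and the vector-offset analogy completion can both be sketched with toy vectors. The embedding values below are contrived illustrations chosen so that the analogy resolves correctly, not trained vectors:

```python
# Sketch of Equation 14.26 (cosine similarity) and analogy completion:
# predict the fourth word as the nearest neighbor of u_i1 - u_j1 + u_i2.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    return dot / (norm(a) * norm(b))

def analogy(emb, i1, j1, i2, exclude):
    target = [x - y + z for x, y, z in zip(emb[i1], emb[j1], emb[i2])]
    candidates = [w for w in emb if w not in exclude]
    return max(candidates, key=lambda w: cosine(emb[w], target))

emb = {"king": [2.0, 2.0], "queen": [2.0, 1.0],
       "man": [1.0, 2.0], "woman": [1.0, 1.0], "dog": [5.0, 0.0]}
print(analogy(emb, "king", "queen", "man", {"king", "queen", "man"}))  # woman
```

Following standard practice, the three query words are excluded from the candidate set, since the target vector is usually closest to the query words themselves.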
None of the embeddings outperformed the others on every task, but skipgrams were the most broadly competitive. Hyperparameter tuning played a key role: any method will perform badly if the wrong hyperparameters are used. Relevant hyperparameters include the embedding size, as well as algorithm-specific details such as the neighborhood size and the number of negative samples.

14.6.2 Extrinsic evaluations

Word representations contribute to downstream tasks like sequence labeling and document classification by enabling generalization across words. The use of distributed representations as features is a form of semi-supervised learning, in which performance on a supervised learning problem is augmented by learning distributed representations from unlabeled data (Miller et al., 2004; Koo et al., 2008; Turian et al., 2010). These pre-trained word representations can be used as features in a linear prediction model, or as the input
layer in a neural network, such as a Bi-LSTM tagging model (§ 7.6). Word representations can be evaluated by the performance of the downstream systems that consume them: for example, GloVe embeddings are convincingly better than Latent Semantic Analysis as features in the downstream task of named entity recognition (Pennington et al., 2014). Unfortunately, extrinsic and intrinsic evaluations do not always point in the same direction, and the best word representations for one downstream task may perform poorly on another task (Schnabel et al., 2015).

When word representations are updated from labeled data in the downstream task, they are said to be fine-tuned. When labeled data is plentiful, pre-training may be unnecessary; when labeled data is scarce, fine-tuning may lead to overfitting. Various combinations of pre-training and fine-tuning can be employed. Pre-trained embeddings can be used as initialization before fine-tuning, and this can substantially improve performance (Lample et al., 2016). Alternatively, both fine-tuned and pre-trained embeddings can be used as inputs in a single model (Kim, 2014).

In semi-supervised scenarios, pretrained word embeddings can be replaced by "contextualized" word representations (Peters et al., 2018). These contextualized representations are set to the hidden states of a deep bi-directional LSTM, which is trained as a bi-directional language model, motivating the name ELMo (embeddings from language models). By running the language model, we obtain contextualized word representations, which can then be used as the base layer in a supervised neural network for any task. This approach yields significant gains over pretrained word embeddings on several tasks, presumably because the contextualized embeddings use unlabeled data to learn how to integrate linguistic context into the base layer of the supervised neural network.
14.6.3 Fairness and bias

Figure 14.1 shows how word embeddings can capture analogies such as man:woman :: king:queen. While king and queen are gender-specific by definition, other professions or titles are associated with genders and other groups merely by statistical tendency. This statistical tendency may be a fact about the world (e.g., professional baseball players are usually men), or a fact about the text corpus (e.g., there are professional basketball leagues for both women and men, but men's basketball is written about far more often).

There is now considerable evidence that word embeddings do indeed encode such biases. Bolukbasi et al. (2016) show that the words most aligned with the vector difference she − he are stereotypically female professions: homemaker, nurse, receptionist; in the other direction are maestro, skipper, protege. Caliskan et al. (2017) systematize this observation by showing that biases in word embeddings align with well-validated gender stereotypes. Garg et al. (2018) extend these results to ethnic stereotypes of Asian Americans, and provide a historical perspective on how stereotypes evolve over 100 years of text data.

Because word embeddings are the input layer for many other natural language processing systems, these findings highlight the risk that natural language processing will replicate and amplify biases in the world, as well as in text. If, for example, word embeddings encode the belief that women are as unlikely to be programmers as they are to be nephews, then software is unlikely to successfully parse, translate, index, and generate texts in which women do indeed program computers. For example, contemporary NLP systems often fail to properly resolve pronoun references in texts that cut against gender stereotypes (Rudinger et al., 2018; Zhao et al., 2018). (The task of pronoun resolution is described in depth in chapter 15.) Such biases can have profound consequences: for example, search engines are more likely to yield personalized advertisements for public arrest records when queried with names that are statistically associated with African Americans (Sweeney, 2013). There is now an active research literature on "debiasing" machine learning and natural language processing, as evidenced by the growth of annual meetings such as Fairness, Accountability, and Transparency in Machine Learning (FAT/ML). However, given that the ultimate source of these biases is the text itself, it may be too much to hope for a purely algorithmic solution. There is no substitute for critical thought about the inputs to natural language processing systems – and the uses of their outputs.

14.7 Distributed representations beyond distributional statistics

Distributional word representations can be estimated from huge unlabeled datasets, thereby covering many words that do not appear in labeled data: for example, GloVe embeddings are estimated from 800 billion tokens of web data,3 while the largest labeled datasets for NLP tasks are on the order of millions of tokens. Nonetheless, even a dataset of hundreds of billions of tokens will not cover every word that may be encountered in the future.
Furthermore, many words will appear only a few times, making their embeddings unreliable. Many languages exceed English in morphological complexity, and thus have lower token-to-type ratios. When this problem is coupled with small training corpora, it becomes especially important to leverage other sources of information beyond distributional statistics.

14.7.1 Word-internal structure

One solution is to incorporate word-internal structure into word embeddings. Purely distributional approaches consider words as atomic units, but in fact, many words have internal structure, so that their meaning can be composed from the representations of sub-word units. Consider the following terms, all of which are missing from Google's pre-trained WORD2VEC embeddings:4

3http://commoncrawl.org/
4https://code.google.com/archive/p/word2vec/, accessed September 20, 2017
CHAPTER 14. DISTRIBUTIONAL AND DISTRIBUTED SEMANTICS

Figure 14.5: Two architectures for building word embeddings from subword units. On the left, morpheme embeddings u^{(M)} are combined by addition with the non-compositional word embedding ũ (Botha and Blunsom, 2014). On the right, morpheme embeddings are combined in a recursive neural network (Luong et al., 2013).

millicuries This word has morphological structure (see § 9.1.2 for more on morphology): the prefix milli- indicates an amount, and the suffix -s indicates a plural. (A millicurie is a unit of radioactivity.)

caesium This word is a single morpheme, but the characters -ium are often associated with chemical elements. (Caesium is the British spelling of a chemical element, spelled cesium in American English.)

IAEA This term is an acronym, as suggested by the use of capitalization. The prefix I- frequently refers to international organizations, and the suffix -A often refers to agencies or associations. (IAEA is the International Atomic Energy Agency.)

Zhezhgan This term is in title case, suggesting the name of a person or place, and the character bigram zh indicates that it is likely a transliteration. (Zhezhgan is a mining facility in Kazakhstan.)

How can word-internal structure be incorporated into word representations? One approach is to construct word representations from embeddings of the characters or morphemes. For example, if word i has morphological segments M_i, then its embedding can be constructed by addition (Botha and Blunsom, 2014),

u_i = \tilde{u}_i + \sum_{j \in M_i} u^{(M)}_j, [14.27]

where u^{(M)}_j is a morpheme embedding and \tilde{u}_i is a non-compositional embedding of the whole word, which is an additional free parameter of the model (Figure 14.5, left side).
All embeddings are estimated from a log-bilinear language model (Mnih and Hinton, 2007), which is similar to the CBOW model (§ 14.5), but includes only contextual information from preceding words. The morphological segments are obtained using an unsupervised segmenter (Creutz and Lagus, 2007). For words that do not appear in the training

Jacob Eisenstein. Draft of November 13, 2018.
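The additive model of Equation 14.27 can be sketched in a few lines. The morpheme segmentation, embedding dimension, and vector values below are hypothetical (random rather than trained); the point is only the composition rule.

```python
import numpy as np

dim = 4
rng = np.random.default_rng(0)

# Hypothetical morpheme vocabulary with embeddings u^(M)
morphemes = ["milli+", "curie", "+s"]
u_morph = {m: rng.normal(size=dim) for m in morphemes}

# Non-compositional whole-word embedding u~ (a free parameter of the model;
# zero for words never seen in training)
u_word = {"millicuries": rng.normal(size=dim)}

def embed(word, segments):
    """Equation 14.27: u_i = u~_i + sum of the morpheme embeddings."""
    u_tilde = u_word.get(word, np.zeros(dim))
    return u_tilde + sum(u_morph[m] for m in segments)

u = embed("millicuries", ["milli+", "curie", "+s"])
print(u.shape)  # (4,)
```

For an out-of-vocabulary word, `u_word.get` falls back to a zero vector, so the embedding is constructed from the morphemes alone, as the text describes.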
data, the embedding can be constructed directly from the morphemes, assuming that each morpheme appears in some other word in the training data. The free parameter ũ adds flexibility: words with similar morphemes are encouraged to have similar embeddings, but this parameter makes it possible for them to be different.

Word-internal structure can be incorporated into word representations in various other ways. Here are some of the main parameters.

Subword units. Examples like IAEA and Zhezhgan are not based on morphological composition, and a morphological segmenter is unlikely to identify meaningful subword units for these terms. Rather than using morphemes for subword embeddings, one can use characters (Santos and Zadrozny, 2014; Ling et al., 2015; Kim et al., 2016), character n-grams (Wieting et al., 2016a; Bojanowski et al., 2017), and byte-pair encodings, a compression technique which captures frequent substrings (Gage, 1994; Sennrich et al., 2016).

Composition. Combining the subword embeddings by addition does not differentiate between orderings, nor does it identify any particular morpheme as the root. A range of more flexible compositional models have been considered, including recurrence (Ling et al., 2015), convolution (Santos and Zadrozny, 2014; Kim et al., 2016), and recursive neural networks (Luong et al., 2013), in which representations of progressively larger units are constructed over a morphological parse, e.g. ((milli+curie)+s), ((in+flam)+able), (in+(vis+ible)). A recursive embedding model is shown in the right panel of Figure 14.5.

Estimation. Estimating subword embeddings from a full dataset is computationally expensive. An alternative approach is to train a subword model to match pre-trained word embeddings (Cotterell et al., 2016; Pinter et al., 2017). To train such a model, it is only necessary to iterate over the vocabulary, and not the corpus.
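Character n-gram subword units are easy to extract. The sketch below follows the style of Bojanowski et al. (2017), with angle brackets marking word boundaries; the n-gram range is a tunable choice, and the values here are illustrative.

```python
def char_ngrams(word, n_min=3, n_max=5):
    """Extract character n-grams as subword units, with < and > marking
    word boundaries (in the style of Bojanowski et al., 2017)."""
    marked = "<" + word + ">"
    grams = set()
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.add(marked[i:i + n])
    return grams

print(sorted(char_ngrams("IAEA", 3, 3)))  # ['<IA', 'AEA', 'EA>', 'IAE']
```

A word's embedding can then be computed by summing the embeddings of its character n-grams, so that even an acronym like IAEA, which resists morphological segmentation, receives a representation.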
14.7.2 Lexical semantic resources

Resources such as WordNet provide another source of information about word meaning: if we know that caesium is a synonym of cesium, or that a millicurie is a type of measurement unit, then this should help to provide embeddings for the unknown words, and to smooth embeddings of rare words. One way to do this is to retrofit pre-trained word embeddings across a network of lexical semantic relationships (Faruqui et al., 2015) by minimizing the following objective,

\min_{U} \sum_{i=1}^{V} ||u_i - \hat{u}_i||^2 + \sum_{(i,j) \in L} \beta_{ij} ||u_i - u_j||^2, [14.28]
where û_i is the pretrained embedding of word i, and L = {(i, j)} is a lexicon of word relations. The hyperparameter β_ij controls the importance of adjacent words having similar embeddings; Faruqui et al. (2015) set it to the inverse of the degree of word i, β_ij = |{j : (i, j) ∈ L}|^{-1}. Retrofitting improves performance on a range of intrinsic evaluations, and gives small improvements on an extrinsic document classification task.

14.8 Distributed representations of multiword units

Can distributed representations extend to phrases, sentences, paragraphs, and beyond? Before exploring this possibility, recall the distinction between distributed and distributional representations. Neural embeddings such as WORD2VEC are both distributed (vector-based) and distributional (derived from counts of words in context). As we consider larger units of text, the counts decrease: in the limit, a multi-paragraph span of text would never appear twice, except by plagiarism. Thus, the meaning of a large span of text cannot be determined from distributional statistics alone; it must be computed compositionally from smaller spans. But these considerations are orthogonal to the question of whether distributed representations — dense numerical vectors — are sufficiently expressive to capture the meaning of phrases, sentences, and paragraphs.

14.8.1 Purely distributional methods

Some multiword phrases are non-compositional: the meaning of such phrases is not derived from the meaning of the individual words using typical compositional semantics. This includes proper nouns like San Francisco as well as idiomatic expressions like kick the bucket (Baldwin and Kim, 2010). For these cases, purely distributional approaches can work. A simple approach is to identify multiword units that appear together frequently, and then treat these units as words, learning embeddings using a technique such as WORD2VEC.
The problem of identifying multiword units is sometimes called collocation extraction. A good collocation has high pointwise mutual information (PMI; see § 14.3). For example, Naïve Bayes is a good collocation because p(w_t = Bayes | w_{t-1} = naïve) is much larger than p(w_t = Bayes). Collocations of more than two words can be identified by a greedy incremental search: for example, mutual information might first be extracted as a collocation and grouped into a single word type mutual information; then pointwise mutual information can be extracted later. After identifying such units, they can be treated as words when estimating skipgram embeddings. Mikolov et al. (2013) show that the resulting embeddings perform reasonably well on a task of solving phrasal analogies, e.g. New York : New York Times :: Baltimore : Baltimore Sun.
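PMI-based collocation scoring can be sketched directly from corpus counts. One practical wrinkle, reflected below, is that pairs seen only once can get spuriously high PMI, so a count threshold is usually applied; the toy corpus is hypothetical.

```python
import math
from collections import Counter

def collocations(tokens, min_count=2):
    """Score adjacent word pairs by pointwise mutual information,
    PMI(i, j) = log [ p(i, j) / (p(i) p(j)) ], estimated from counts.
    Rare pairs get spuriously high PMI, so a count threshold is applied."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n, nb = len(tokens), len(tokens) - 1
    return {
        (a, b): math.log((c / nb) / ((unigrams[a] / n) * (unigrams[b] / n)))
        for (a, b), c in bigrams.items() if c >= min_count
    }

toks = "naive bayes is a classifier and naive bayes was simple".split()
scores = collocations(toks)
print(max(scores, key=scores.get))  # ('naive', 'bayes')
```

In the greedy incremental scheme described above, the top-scoring pair would be merged into a single word type, and the scoring would then be repeated to find longer collocations.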
Figure 14.6: By interpolating between the distributed representations of two sentences (in bold), it is possible to generate grammatical sentences that combine aspects of both (Bowman et al., 2016):

this was the only way
it was the only way
it was her turn to blink
it was hard to tell
it was time to move on
he had to do it again
they all looked at each other
they all turned to look back
they both turned to face him
they both turned and walked away

14.8.2 Distributional-compositional hybrids

To move beyond short multiword phrases, composition is necessary. A simple but surprisingly powerful approach is to represent a sentence with the average of its word embeddings (Mitchell and Lapata, 2010). This can be considered a hybrid of the distributional and compositional approaches to semantics: the word embeddings are computed distributionally, and then the sentence representation is computed by composition.

The WORD2VEC approach can be stretched considerably further, embedding entire sentences using a model similar to skipgrams, in the "skip-thought" model of Kiros et al. (2015). Each sentence is encoded into a vector using a recurrent neural network: the encoding of sentence t is set to the RNN hidden state at its final token, h^{(t)}_{M_t}. This vector is then a parameter in a decoder model that is used to generate the previous and subsequent sentences: the decoder is another recurrent neural network, which takes the encoding of the neighboring sentence as an additional parameter in its recurrent update. (This encoder-decoder model is discussed at length in chapter 18.) The encoder and decoder are trained simultaneously from a likelihood-based objective, and the trained encoder can be used to compute a distributed representation of any sentence.
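The simple averaging approach of Mitchell and Lapata (2010) can be sketched in a few lines; the toy two-dimensional embeddings are hypothetical.

```python
import numpy as np

def sentence_embedding(sentence, embeddings):
    """Represent a sentence as the average of its word embeddings
    (Mitchell and Lapata, 2010). Out-of-vocabulary words are skipped."""
    vecs = [embeddings[w] for w in sentence.split() if w in embeddings]
    if not vecs:
        dim = len(next(iter(embeddings.values())))
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

emb = {
    "the": np.array([0.0, 1.0]),
    "cat": np.array([1.0, 0.0]),
    "sat": np.array([1.0, 1.0]),
}
print(sentence_embedding("the cat sat", emb))
```

Despite ignoring word order entirely, this bag-of-vectors representation is a strong baseline for sentence similarity and classification tasks.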
Skip-thought can also be viewed as a hybrid of distributional and compositional approaches: the vector representation of each sentence is computed compositionally from the representations of the individual words, but the training objective is distributional, based on sentence co-occurrence across a corpus.

Autoencoders are a variant of encoder-decoder models in which the decoder is trained to produce the same text that was originally encoded, using only the distributed encoding vector (Li et al., 2015). The encoding acts as a bottleneck, so that generalization is necessary if the model is to successfully fit the training data. In denoising autoencoders,
the input is a corrupted version of the original sentence, and the auto-encoder must reconstruct the uncorrupted original (Vincent et al., 2010; Hill et al., 2016). By interpolating between distributed representations of two sentences, αu_i + (1 - α)u_j, it is possible to generate sentences that combine aspects of the two inputs, as shown in Figure 14.6 (Bowman et al., 2016).

Autoencoders can also be applied to longer texts, such as paragraphs and documents. This enables applications such as question answering, which can be performed by matching the encoding of the question with encodings of candidate answers (Miao et al., 2016).

14.8.3 Supervised compositional methods

Given a supervision signal, such as a label describing the sentiment or meaning of a sentence, a wide range of compositional methods can be applied to compute a distributed representation that then predicts the label. The simplest is to average the embeddings of each word in the sentence, and pass this average through a feedforward neural network (Iyyer et al., 2015). Convolutional and recurrent neural networks go further, with the ability to effectively capture multiword phenomena such as negation (Kalchbrenner et al., 2014; Kim, 2014; Li et al., 2015; Tang et al., 2015). Another approach is to incorporate the syntactic structure of the sentence into a recursive neural network, in which the representation for each syntactic constituent is computed from the representations of its children (Socher et al., 2012). However, in many cases, recurrent neural networks perform as well or better than recursive networks (Li et al., 2015).

Whether convolutional, recurrent, or recursive, a key question is whether supervised sentence representations are task-specific, or whether a single supervised sentence representation model can yield useful performance on other tasks. Wieting et al.
(2016b) train a variety of sentence embedding models for the task of labeling pairs of sentences as paraphrases. They show that the resulting sentence embeddings give good performance for sentiment analysis. The Stanford Natural Language Inference corpus classifies sentence pairs as entailments (the truth of sentence i implies the truth of sentence j), contradictions (the truth of sentence i implies the falsity of sentence j), and neutral (i neither entails nor contradicts j). Sentence embeddings trained on this dataset transfer to a wide range of classification tasks (Conneau et al., 2017).

14.8.4 Hybrid distributed-symbolic representations

The power of distributed representations is in their generality: the distributed representation of a unit of text can serve as a summary of its meaning, and therefore as the input for downstream tasks such as classification, matching, and retrieval. For example, distributed sentence representations can be used to recognize the paraphrase relationship between closely related sentences like the following:
(14.5) a. Donald thanked Vlad profusely.
b. Donald conveyed to Vlad his profound appreciation.
c. Vlad was showered with gratitude by Donald.

Symbolic representations are relatively brittle to this sort of variation, but are better suited to describe individual entities, the things that they do, and the things that are done to them. In examples (14.5a)-(14.5c), we not only know that somebody thanked someone else, but we can make a range of inferences about what has happened between the entities named Donald and Vlad. Because distributed representations do not treat entities symbolically, they lack the ability to reason about the roles played by entities across a sentence or larger discourse.5

A hybrid between distributed and symbolic representations might give the best of both worlds: robustness to the many different ways of describing the same event, plus the expressiveness to support inferences about entities and the roles that they play. A "top-down" hybrid approach is to begin with logical semantics (of the sort described in the previous two chapters), but replace the predefined lexicon with a set of distributional word clusters (Poon and Domingos, 2009; Lewis and Steedman, 2013). A "bottom-up" approach is to add minimal symbolic structure to existing distributed representations, such as vector representations for each entity (Ji and Eisenstein, 2015; Wiseman et al., 2016). This has been shown to improve performance on two problems that we will encounter in the following chapters: classification of discourse relations between adjacent sentences (chapter 16; Ji and Eisenstein, 2015), and coreference resolution of entity mentions (chapter 15; Wiseman et al., 2016; Ji et al., 2017). Research on hybrid semantic representations is still in an early stage, and future representations may deviate more boldly from existing symbolic and distributional approaches.
Additional resources

Turney and Pantel (2010) survey a number of facets of vector word representations, focusing on matrix factorization methods. Schnabel et al. (2015) highlight problems with similarity-based evaluations of word embeddings, and present a novel evaluation that controls for word frequency. Baroni et al. (2014) address linguistic issues that arise in attempts to combine distributed and compositional representations.

In bilingual and multilingual distributed representations, embeddings are estimated for translation pairs or tuples, such as (dog, perro, chien). These embeddings can improve machine translation (Zou et al., 2013; Klementiev et al., 2012), transfer natural language

5 At a 2014 workshop on semantic parsing, this critique of distributed representations was expressed by Ray Mooney — a leading researcher in computational semantics — in a now well-known quote, "you can't cram the meaning of a whole sentence into a single vector!"
processing models across languages (Täckström et al., 2012), and make monolingual word embeddings more accurate (Faruqui and Dyer, 2014). A typical approach is to learn a projection that maximizes the correlation of the distributed representations of each element in a translation pair, which can be obtained from a bilingual dictionary. Distributed representations can also be linked to perceptual information, such as image features. Bruni et al. (2014) use textual descriptions of images to obtain visual contextual information for various words, which supplements traditional distributional context. Image features can also be inserted as contextual information in log bilinear language models (Kiros et al., 2014), making it possible to automatically generate text descriptions of images.

Exercises

1. Prove that the sum of probabilities of paths through a hierarchical softmax tree is equal to one.

2. In skipgram word embeddings, the negative sampling objective can be written as,

L = \sum_{i \in V} \sum_{j \in C} count(i, j) ψ(i, j), [14.29]

where ψ(i, j) is defined in Equation 14.23. Suppose we draw the negative samples from the empirical unigram distribution p̂(i) = p_unigram(i). First, compute the expectation of L with respect to the negative samples, using this probability. Next, take the derivative of this expectation with respect to the score of a single word-context pair σ(u_i · v_j), and solve for the pointwise mutual information PMI(i, j). You should be able to show that at the optimum, the PMI is a simple function of σ(u_i · v_j) and the number of negative samples. (This exercise is part of a proof that shows that skipgram with negative sampling is closely related to PMI-weighted matrix factorization.)

3. * In Brown clustering, prove that the cluster merge that maximizes the average mutual information (Equation 14.13) also maximizes the log-likelihood objective (Equation 14.12).

4.
A simple way to compute a distributed phrase representation is to add up the distributed representations of the words in the phrase. Consider a sentiment analysis model in which the predicted sentiment is ψ(w) = θ · (\sum_{m=1}^{M} x_m), where x_m is the vector representation of word m. Prove that in such a model, the following two
inequalities cannot both hold:

ψ(good) > ψ(not good) [14.30]
ψ(bad) < ψ(not bad). [14.31]

Then construct a similar example pair for the case in which phrase representations are the average of the word representations.

5. Now let's consider a slight modification to the prediction model in the previous problem:

ψ(w) = θ · ReLU(\sum_{m=1}^{M} x_m) [14.32]

Show that in this case, it is possible to achieve the inequalities above. Your solution should provide the weights θ and the embeddings x_good, x_bad, and x_not.

For the next two problems, download a set of pre-trained word embeddings, such as the WORD2VEC or polyglot embeddings.

6. Use cosine similarity to find the most similar words to: dog, whale, before, however, fabricate.

7. Use vector addition and subtraction to compute target vectors for the analogies below. After computing each target vector, find the top three candidates by cosine similarity.

• dog:puppy :: cat: ?
• speak:speaker :: sing:?
• France:French :: England:?
• France:wine :: England:?

The remaining problems will require you to build a classifier and test its properties. Pick a text classification dataset, such as the Cornell Movie Review data.6 Divide your data into training (60%), development (20%), and test sets (20%), if no such division already exists.

8. Train a convolutional neural network, with inputs set to pre-trained word embeddings from the previous two problems. Use an additional, fine-tuned embedding for out-of-vocabulary words. Train until performance on the development set does not improve. You can also use the development set to tune the model architecture, such as the convolution width and depth. Report F-MEASURE and accuracy, as well as training time.

6 http://www.cs.cornell.edu/people/pabo/movie-review-data/
9. Now modify your model from the previous problem to fine-tune the word embeddings. Report F-MEASURE, accuracy, and training time.

10. Try a simpler approach, in which word embeddings in the document are averaged, and then this average is passed through a feed-forward neural network. Again, use the development data to tune the model architecture. How close is the accuracy to the convolutional networks from the previous problems?
Chapter 15

Reference Resolution

References are one of the most noticeable forms of linguistic ambiguity, afflicting not just automated natural language processing systems, but also fluent human readers. Warnings to avoid "ambiguous pronouns" are ubiquitous in manuals and tutorials on writing style. But referential ambiguity is not limited to pronouns, as shown in the text in Figure 15.1. Each of the bracketed substrings refers to an entity that is introduced earlier in the passage. These references include the pronouns he and his, but also the shortened name Cook, and nominals such as the firm and the firm's biggest growth market.

Reference resolution subsumes several subtasks. This chapter will focus on coreference resolution, which is the task of grouping spans of text that refer to a single underlying entity, or, in some cases, a single event: for example, the spans Tim Cook, he, and Cook are all coreferent. These individual spans are called mentions, because they mention an entity; the entity is sometimes called the referent. Each mention has a set of antecedents, which are preceding mentions that are coreferent; for the first mention of an entity, the antecedent set is empty. The task of pronominal anaphora resolution requires identifying only the antecedents of pronouns. In entity linking, references are resolved not to other spans of text, but to entities in a knowledge base. This task is discussed in chapter 17.

Coreference resolution is a challenging problem for several reasons. Resolving different types of referring expressions requires different types of reasoning: the features and methods that are useful for resolving pronouns are different from those that are useful to resolve names and nominals.
Coreference resolution involves not only linguistic reasoning, but also world knowledge and pragmatics: you may not have known that China was Apple's biggest growth market, but it is likely that you effortlessly resolved this reference while reading the passage in Figure 15.1.1 A further challenge is that coreference

1 This interpretation is based in part on the assumption that a cooperative author would not use the expression the firm's biggest growth market to refer to an entity not yet mentioned in the article (Grice, 1975). Pragmatics is the discipline of linguistics concerned with the formalization of such assumptions (Huang, 2015).
(15.1) [[Apple Inc] Chief Executive Tim Cook] has jetted into [China] for talks with government officials as [he] seeks to clear up a pile of problems in [[the firm] 's biggest growth market] ... [Cook] is on [his] first trip to [the country] since taking over...

Figure 15.1: Running example (Yee and Jones, 2012). Coreferring entity mentions are in brackets.

resolution decisions are often entangled: each mention adds information about the entity, which affects other coreference decisions. This means that coreference resolution must be addressed as a structure prediction problem. But as we will see, there is no dynamic program that allows the space of coreference decisions to be searched efficiently.

15.1 Forms of referring expressions

There are three main forms of referring expressions — pronouns, names, and nominals.

15.1.1 Pronouns

Pronouns are a closed class of words that are used for references. A natural way to think about pronoun resolution is SMASH (Kehler, 2007):

• Search for candidate antecedents;
• Match against hard agreement constraints;
• And Select using Heuristics, which are "soft" constraints such as recency, syntactic prominence, and parallelism.

Search

In the search step, candidate antecedents are identified from the preceding text or speech.2 Any noun phrase can be a candidate antecedent, and pronoun resolution usually requires

2 Pronouns whose referents come later are known as cataphora, as in the opening line from a novel by Márquez (1970):

(15.1) Many years later, as [he] faced the firing squad, [Colonel Aureliano Buendía] was to remember that distant afternoon when [his] father took him to discover ice.
parsing the text to identify all such noun phrases.3 Filtering heuristics can help to prune the search space to noun phrases that are likely to be coreferent (Lee et al., 2013; Durrett and Klein, 2013). In nested noun phrases, mentions are generally considered to be the largest unit with a given head word (see § 10.5.2): thus, Apple Inc. Chief Executive Tim Cook would be included as a mention, but Tim Cook would not, since they share the same head word, Cook.

Matching constraints for pronouns

References and their antecedents must agree on semantic features such as number, person, gender, and animacy. Consider the pronoun he in this passage from the running example:

(15.2) Tim Cook has jetted in for talks with officials as [he] seeks to clear up a pile of problems...

The pronoun and possible antecedents have the following features:

• he: singular, masculine, animate, third person
• officials: plural, animate, third person
• talks: plural, inanimate, third person
• Tim Cook: singular, masculine, animate, third person

The SMASH method searches backwards from he, discarding officials and talks because they do not satisfy the agreement constraints.

Another source of constraints comes from syntax — specifically, from the phrase structure trees discussed in chapter 10. Consider a parse tree in which both x and y are phrasal constituents. The constituent x c-commands the constituent y iff the first branching node above x also dominates y. For example, in Figure 15.2a, Abigail c-commands her, because the first branching node above Abigail, S, also dominates her. Now, if x c-commands y, government and binding theory (Chomsky, 1982) states that y can refer to x only if it is a reflexive pronoun (e.g., herself). Furthermore, if y is a reflexive pronoun, then its antecedent must c-command it.
Thus, in Figure 15.2a, her cannot refer to Abigail; conversely, if we replace her with herself, then the reflexive pronoun must refer to Abigail, since this is the only candidate antecedent that c-commands it.

Now consider the example shown in Figure 15.2b. Here, Abigail does not c-command her, but Abigail's mom does. Thus, her can refer to Abigail — and we cannot use reflexive

3 In the OntoNotes coreference annotations, verbs can also be antecedents, if they are later referenced by nominals (Pradhan et al., 2011):

(15.1) Sales of passenger cars [grew] 22%. [The strong growth] followed year-to-year increases.
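The c-command test can be implemented directly over a phrase-structure tree. The sketch below uses a minimal tree class of its own (not from any parsing toolkit), built to mirror the structure of Figure 15.2a.

```python
class Node:
    """A phrase-structure tree node with parent pointers."""
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def dominates(x, y):
    """x dominates y iff y lies anywhere beneath x."""
    return any(c is y or dominates(c, y) for c in x.children)

def c_commands(x, y):
    """x c-commands y iff the first branching node above x dominates y."""
    a = x.parent
    while a is not None and len(a.children) < 2:
        a = a.parent
    return a is not None and dominates(a, y)

# Figure 15.2a: (S (NP Abigail) (VP speaks (PP with her)))
abigail = Node("NP-Abigail")
her = Node("NP-her")
pp = Node("PP", [Node("P-with"), her])
vp = Node("VP", [Node("V-speaks"), pp])
s = Node("S", [abigail, vp])
print(c_commands(abigail, her), c_commands(her, abigail))
```

As the binding constraints predict, Abigail c-commands her (the first branching node above Abigail is S, which dominates her), but her does not c-command Abigail.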
Figure 15.2: In (a), Abigail c-commands her; in (b), Abigail does not c-command her, but Abigail's mom does; in (c), the scope of Abigail is limited by the S non-terminal, so that she or her can bind to Abigail, but not both.

herself in this context, unless we are talking about Abigail's mom. However, her does not have to refer to Abigail.

Finally, Figure 15.2c shows how these constraints are limited. In this case, the pronoun she can refer to Abigail, because the S non-terminal puts Abigail outside the domain of she. Similarly, her can also refer to Abigail. But she and her cannot be coreferent, because she c-commands her.

Heuristics

After applying constraints, heuristics are applied to select among the remaining candidates. Recency is a particularly strong heuristic. All things equal, readers will prefer the more recent referent for a given pronoun, particularly when comparing referents that occur in different sentences. Jurafsky and Martin (2009) offer the following example:

(15.3) The doctor found an old map in the captain's chest. Jim found an even older map hidden on the shelf. [It] described an island.

Readers are expected to prefer the older map as the referent for the pronoun it.

However, subjects are often preferred over objects, and this can contradict the preference for recency when two candidate referents are in the same sentence. For example,

(15.4) Abigail loaned Lucia a book on Spanish. [She] is always trying to help people.

Here, we may prefer to link she to Abigail rather than Lucia, because of Abigail's position in the subject role of the preceding sentence. (Arguably, this preference would not be strong enough to select Abigail if the second sentence were She is visiting Valencia next month.)
A third heuristic is parallelism:

(15.5) Abigail loaned Lucia a book on Spanish. Özlem loaned [her] a book on Portuguese.
Figure 15.3: Left-to-right breadth-first tree traversal (Hobbs, 1978), indicating that the search for an antecedent for it (NP1) would proceed in the following order: 536; the castle in Camelot; the residence of the king; Camelot; the king. Hobbs (1978) proposes semantic constraints to eliminate 536 and the castle in Camelot as candidates, since they are unlikely to be the direct object of the verb move.

Here Lucia is preferred as the referent for her, contradicting the preference for the subject Abigail in the preceding example.

The recency and subject role heuristics can be unified by traversing the document in a syntax-driven fashion (Hobbs, 1978): each preceding sentence is traversed breadth-first, left-to-right (Figure 15.3). This heuristic successfully handles (15.4): Abigail is preferred as the referent for she because the subject NP is visited first. It also handles (15.3): the older map is preferred as the referent for it because the more recent sentence is visited first. (An alternative unification of recency and syntax is proposed by centering theory (Grosz et al., 1995), which is discussed in detail in chapter 16.)

In early work on reference resolution, the number of heuristics was small enough that a set of numerical weights could be set by hand (Lappin and Leass, 1994). More recent work uses machine learning to quantify the importance of each of these factors. However, pronoun resolution cannot be completely solved by constraints and heuristics alone. This is shown by the classic example pair (Winograd, 1972):

(15.6) The [city council] denied [the protesters] a permit because [they] advocated/feared violence.
Without reasoning about the motivations of the city council and protesters, it is unlikely that any system could correctly resolve both versions of this example. Under contract with MIT Press, shared under CC-BY-NC-ND license.
Non-referential pronouns

While pronouns are generally used for reference, they need not refer to entities. The following examples show how pronouns can refer to propositions, events, and speech acts.

(15.7) a. They told me that I was too ugly for show business, but I didn’t believe [it].
b. Elifsu saw Berthold get angry, and I saw [it] too.
c. Emmanuel said he worked in security. I suppose [that]’s one way to put it.

These forms of reference are generally not annotated in large-scale coreference resolution datasets such as OntoNotes (Pradhan et al., 2011). Pronouns may also have generic referents:

(15.8) a. A poor carpenter blames [her] tools.
b. On the moon, [you] have to carry [your] own oxygen.
c. Every farmer who owns a donkey beats [it]. (Geach, 1962)

In the OntoNotes dataset, coreference is not annotated for generic referents, even in cases like these examples, in which the same generic entity is mentioned multiple times. Some pronouns do not refer to anything at all:

(15.9) a. [It]’s raining. / [Il] pleut. (Fr)
b. [It]’s money that she’s really after.
c. [It] is too bad that we have to work so hard.

How can we automatically distinguish these usages of it from referential pronouns? Consider the difference between the following two examples (Bergsma et al., 2008):

(15.10) a. You can make [it] in advance.
b. You can make [it] in showbiz.

In the second example, the pronoun it is non-referential. One way to see this is by substituting another pronoun, like them, into these examples:

(15.11) a. You can make [them] in advance.
b. ? You can make [them] in showbiz.

The questionable grammaticality of the second example suggests that it is not referential. Bergsma et al. (2008) operationalize this idea by comparing distributional statistics for the n-grams around the word it, testing how often other pronouns or nouns appear in the same context.
In cases where nouns and other pronouns are infrequent, the it is unlikely to be referential.
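The distributional test just described can be sketched in a few lines. This is a simplified illustration of the idea, not the actual Bergsma et al. (2008) system: the counts below are toy numbers standing in for a real n-gram corpus, and the helper names are our own.

```python
# Sketch of the substitution idea: in a context where "it" appears, check how
# often other pronouns or nouns can fill the same slot. If "it" overwhelmingly
# dominates, the usage is likely non-referential.

def nonreferential_score(context, fillers, ngram_count):
    """context: (left, right) token tuples; ngram_count: n-gram tuple -> count."""
    left, right = context
    it_count = ngram_count(left + ("it",) + right)
    other = sum(ngram_count(left + (f,) + right) for f in fillers)
    total = it_count + other
    return it_count / total if total > 0 else 0.0

# toy counts standing in for a real n-gram corpus
COUNTS = {
    ("make", "it", "in", "showbiz"): 100,
    ("make", "them", "in", "showbiz"): 1,
    ("make", "it", "in", "advance"): 50,
    ("make", "them", "in", "advance"): 45,
}
lookup = lambda ng: COUNTS.get(ng, 0)

# "make __ in showbiz": almost always "it", so likely non-referential
print(nonreferential_score((("make",), ("in", "showbiz")), ["them"], lookup))
# "make __ in advance": "them" is also common, so "it" is likely referential
print(nonreferential_score((("make",), ("in", "advance")), ["them"], lookup))
```

A real implementation would query web-scale n-gram counts and substitute many fillers, but the contrast between the two contexts is the same.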
15.1.2 Proper Nouns

If a proper noun is used as a referring expression, it often corefers with another proper noun, so that the coreference problem is simply to determine whether the two names match. Subsequent proper noun references often use a shortened form, as in the running example (Figure 15.1):

(15.12) Apple Inc Chief Executive [Tim Cook] has jetted into China ... [Cook] is on his first business trip to the country ...

A typical solution for proper noun coreference is to match the syntactic head words of the reference with the referent. In § 10.5.2, we saw that the head word of a phrase can be identified by applying head percolation rules to the phrasal parse tree; alternatively, the head can be identified as the root of the dependency subtree covering the name. For sequences of proper nouns, the head word will be the final token.

There are a number of caveats to the practice of matching head words of proper nouns:

• In the European tradition, family names tend to be more specific than given names, and family names usually come last. However, other traditions have other practices: for example, in Chinese names, the family name typically comes first; in Japanese, honorifics come after the name, as in Nobu-San (Mr. Nobu).

• In organization names, the head word is often not the most informative, as in Georgia Tech and Virginia Tech. Similarly, Lebanon does not refer to the same entity as Southern Lebanon, necessitating special rules for the specific case of geographical modifiers (Lee et al., 2011).

• Proper nouns can be nested, as in [the CEO of [Microsoft]], resulting in head word match without coreference.

Despite these difficulties, proper nouns are the easiest category of references to resolve (Stoyanov et al., 2009). In machine learning systems, one solution is to include a range of matching features, including exact match, head match, and string inclusion.
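The head-matching heuristic, and its failure mode on organization names, can be sketched directly. This is a minimal illustration under the simplifying assumption stated in the text (the head of a proper noun sequence is its final token); the function names are our own.

```python
# Minimal sketch of head-word matching for proper noun coreference.

def head_word(name_tokens):
    """For a sequence of proper nouns, take the final token as the head."""
    return name_tokens[-1].lower()

def head_match(mention_a, mention_b):
    """True if the two name mentions share a head word."""
    return head_word(mention_a) == head_word(mention_b)

print(head_match(["Tim", "Cook"], ["Cook"]))                  # → True (correct)
print(head_match(["Georgia", "Tech"], ["Virginia", "Tech"]))  # → True (spurious)
```

The second call illustrates the caveat above: Georgia Tech and Virginia Tech share a head word but are distinct entities, which is why practical systems combine head match with exact match, string inclusion, and gazetteer features.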
In addition to matching features, competitive systems (e.g., Bengtson and Roth, 2008) include large lists, or gazetteers, of acronyms (e.g., the National Basketball Association/NBA), demonyms (e.g., the Israelis/Israel), and other aliases (e.g., the Georgia Institute of Technology/Georgia Tech).

15.1.3 Nominals

In coreference resolution, noun phrases that are neither pronouns nor proper nouns are referred to as nominals. In the running example (Figure 15.1), nominal references include: the firm (Apple Inc); the firm’s biggest growth market (China); and the country (China).
Nominals are especially difficult to resolve (Denis and Baldridge, 2007; Durrett and Klein, 2013), and the examples above suggest why this may be the case: world knowledge is required to identify Apple Inc as a firm, and China as a growth market. Other difficult examples include the use of colloquial expressions, such as coreference between Clinton campaign officials and the Clinton camp (Soon et al., 2001).

15.2 Algorithms for coreference resolution

The ground truth training data for coreference resolution is a set of mention sets, where all mentions within each set refer to a single entity.4 In the running example from Figure 15.1, the ground truth coreference annotation is:

c_1 = {Apple Inc (1:2), the firm (27:28)}   [15.1]
c_2 = {Apple Inc Chief Executive Tim Cook (1:6), he (17), Cook (33), his (36)}   [15.2]
c_3 = {China (10), the firm’s biggest growth market (27:32), the country (40:41)}   [15.3]

Each row specifies the token spans that mention an entity. (“Singleton” entities, which are mentioned only once (e.g., talks, government officials), are excluded from the annotations.) Equivalently, given a set of M mentions, {m_i}_{i=1}^M, each mention i can be assigned to a cluster z_i, where z_i = z_j if i and j are coreferent. The cluster assignments z are invariant under permutation. The unique clustering associated with the assignment z is written c(z).

Coreference resolution can thus be viewed as a structure prediction problem, involving two subtasks: identifying which spans of text mention entities, and then clustering those spans.

Mention identification The task of identifying mention spans for coreference resolution is often performed by applying a set of heuristics to the phrase structure parse of each sentence.
A typical approach is to start with all noun phrases and named entities, and then apply filtering rules to remove nested noun phrases with the same head (e.g., [Apple CEO [Tim Cook]]), numeric entities (e.g., [100 miles], [97%]), non-referential it, etc. (Lee et al., 2013; Durrett and Klein, 2013). In general, these deterministic approaches err in favor of recall, since the mention clustering component can choose to ignore false positive mentions, but cannot recover from false negatives. An alternative is to consider all spans (up to some finite length) as candidate mentions, performing mention identification and clustering jointly (Daumé III and Marcu, 2005; Lee et al., 2017).

4In many annotations, the term markable is used to refer to spans of text that can potentially mention an entity. The set of markables includes non-referential pronouns, which do not mention any entity. Part of the job of the coreference system is to avoid incorrectly linking these non-referential markables to any mention chains.

Jacob Eisenstein. Draft of November 13, 2018.
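Two of these filtering rules can be sketched concretely. This is a hedged simplification in the spirit of the deterministic filters cited above: the span representation and the rules are our own illustrations, not the actual rule set of any published system.

```python
# Sketch of rule-based mention filtering: drop numeric mentions and noun
# phrases nested inside a larger candidate with the same head word.
import re

def filter_mentions(candidates):
    """candidates: list of (start, end, tokens, head) tuples."""
    kept = []
    for (s, e, toks, head) in candidates:
        # drop purely numeric mentions, e.g. [100 miles], [97%]
        if re.fullmatch(r"[\d.,%]+", toks[0]):
            continue
        # drop a mention nested inside a larger one with the same head
        nested = any(s2 <= s and e <= e2 and (s2, e2) != (s, e) and h2 == head
                     for (s2, e2, _, h2) in candidates)
        if nested:
            continue
        kept.append((s, e, toks, head))
    return kept

cands = [(0, 4, ["Apple", "CEO", "Tim", "Cook"], "Cook"),
         (2, 4, ["Tim", "Cook"], "Cook"),          # nested, same head: dropped
         (5, 7, ["100", "miles"], "miles")]        # numeric: dropped
print(filter_mentions(cands))
```

Note how the filter errs toward recall: it removes only spans that are very unlikely to be useful, leaving the clustering component to ignore any remaining false positives.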
Mention clustering The subtask of mention clustering will be the focus of the remainder of this chapter. There are two main classes of models. In mention-based models, the scoring function for a coreference clustering decomposes over pairs of mentions. These pairwise decisions are then aggregated, using a clustering heuristic. Mention-based coreference clustering can be treated as a fairly direct application of supervised classification or ranking. However, the mention-pair locality assumption can result in incoherent clusters, like {Hillary Clinton ← Clinton ← Mr Clinton}, in which the pairwise links score well, but the overall result is unsatisfactory. Entity-based models address this issue by scoring entities holistically. This can make inference more difficult, since the number of possible entity groupings is exponential in the number of mentions.

15.2.1 Mention-pair models

In the mention-pair model, a binary label y_{i,j} ∈ {0, 1} is assigned to each pair of mentions (i, j), where i < j. If i and j corefer (z_i = z_j), then y_{i,j} = 1; otherwise, y_{i,j} = 0. The mention he in Figure 15.1 is preceded by five other mentions: (1) Apple Inc; (2) Apple Inc Chief Executive Tim Cook; (3) China; (4) talks; (5) government officials. The correct mention pair labeling is y_{2,6} = 1 and y_{i,6} = 0 for all other i. If a mention j introduces a new entity, such as mention 3 in the example, then y_{i,j} = 0 for all i. The same is true for “mentions” that do not refer to any entity, such as non-referential pronouns. If mention j refers to an entity that has been mentioned more than once, then y_{i,j} = 1 for all i < j that mention the referent.

By transforming coreference into a set of binary labeling problems, the mention-pair model makes it possible to apply an off-the-shelf binary classifier (Soon et al., 2001).
This classifier is applied to each mention j independently, searching backwards from j until finding an antecedent i which corefers with j with high confidence. After identifying a single antecedent, the remaining mention pair labels can be computed by transitivity: if y_{i,j} = 1 and y_{j,k} = 1, then y_{i,k} = 1.

Since the ground truth annotations give entity chains c but not individual mention-pair labels y, an additional heuristic must be employed to convert the labeled data into training examples for classification. A typical approach is to generate at most one positive labeled instance y_{a_j, j} = 1 for mention j, where a_j is the index of the most recent antecedent, a_j = max{i : i < j ∧ z_i = z_j}. Negative labeled instances are generated for all i ∈ {a_j + 1, . . . , j − 1}. In the running example, the most recent antecedent of the pronoun he is a_6 = 2, so the training data would be y_{2,6} = 1 and y_{3,6} = y_{4,6} = y_{5,6} = 0. The variable y_{1,6} is not part of the training data, because the first mention appears before the true antecedent a_6 = 2.
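The instance-generation heuristic just described can be sketched as follows. The cluster labels below encode the running example (with c_1, c_3 and singleton clusters for the non-coreferent mentions); the function name is our own.

```python
# Sketch of the Soon et al. (2001) heuristic: one positive instance pairing
# each anaphoric mention with its most recent antecedent, and negative
# instances for every mention in between.

def mention_pair_instances(z):
    """z: dict mapping mention index (1..M) -> cluster id."""
    instances = []  # (i, j, label) triples
    for j in sorted(z):
        antecedents = [i for i in z if i < j and z[i] == z[j]]
        if not antecedents:
            continue  # j starts a new entity: no training instances
        a_j = max(antecedents)  # most recent antecedent
        instances.append((a_j, j, 1))
        instances.extend((i, j, 0) for i in range(a_j + 1, j))
    return instances

# running example: he = mention 6, whose most recent antecedent is mention 2
z = {1: "c1", 2: "c2", 3: "c3", 4: "s4", 5: "s5", 6: "c2"}
print(mention_pair_instances(z))  # → [(2, 6, 1), (3, 6, 0), (4, 6, 0), (5, 6, 0)]
```

Note that no instance involving mention 1 is generated for the pronoun, matching the text: mentions before the true antecedent are simply not part of the training data.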
15.2.2 Mention-ranking models

In mention ranking (Denis and Baldridge, 2007), the classifier learns to identify a single antecedent a_i ∈ {ϵ, 1, 2, . . . , i−1} for each referring expression i,

â_i = argmax_{a ∈ {ϵ, 1, 2, ..., i−1}} ψ_M(a, i),   [15.4]

where ψ_M(a, i) is a score for the mention pair (a, i). If a = ϵ, then mention i does not refer to any previously-introduced entity — it is not anaphoric. Mention ranking is similar to the mention-pair model, but all candidates are considered simultaneously, and at most a single antecedent is selected. The mention-ranking model explicitly accounts for the possibility that mention i is not anaphoric, through the score ψ_M(ϵ, i). The determination of anaphoricity can be made by a special classifier in a preprocessing step, so that non-ϵ antecedents are identified only for spans that are determined to be anaphoric (Denis and Baldridge, 2008).

As a learning problem, ranking can be trained using the same objectives as in discriminative classification. For each mention i, we can define a gold antecedent a*_i, and an associated loss, such as the hinge loss, ℓ_i = (1 − ψ_M(a*_i, i) + ψ_M(â_i, i))_+, or the negative log-likelihood, ℓ_i = −log p(a*_i | i; θ). (For more on learning to rank, see § 17.1.1.) But as with the mention-pair model, there is a mismatch between the labeled data, which comes in the form of mention sets, and the desired supervision, which would indicate the specific antecedent of each mention. The antecedent variables {a_i}_{i=1}^M relate to the mention sets in a many-to-one mapping: each set of antecedents induces a single clustering, but a clustering can correspond to many different settings of antecedent variables. A heuristic solution is to set a*_i = max{j : j < i ∧ z_j = z_i}, the most recent mention in the same cluster as i.
But the most recent mention may not be the most informative: in the running example, the most recent antecedent of the mention Cook is the pronoun he, but a more useful antecedent is the earlier mention Apple Inc Chief Executive Tim Cook. Rather than selecting a specific antecedent to train on, the antecedent can be treated as a latent variable, in the manner of the latent variable perceptron from § 12.4.2 (Fernandes et al., 2014):

â = argmax_a Σ_{i=1}^M ψ_M(a_i, i)   [15.5]
a* = argmax_{a ∈ A(c)} Σ_{i=1}^M ψ_M(a_i, i)   [15.6]
θ ← θ + Σ_{i=1}^M (∂/∂θ) ψ_M(a*_i, i) − Σ_{i=1}^M (∂/∂θ) ψ_M(â_i, i)   [15.7]
where A(c) is the set of antecedent structures that is compatible with the ground truth coreference clustering c. Another alternative is to sum over the conditional probabilities of all antecedent structures that are compatible with the ground truth clustering (Durrett and Klein, 2013; Lee et al., 2017). For the set of mentions m, we compute the following probabilities:

p(c | m) = Σ_{a ∈ A(c)} p(a | m) = Σ_{a ∈ A(c)} Π_{i=1}^M p(a_i | i, m)   [15.8]
p(a_i | i, m) = exp(ψ_M(a_i, i)) / Σ_{a′ ∈ {ϵ, 1, 2, ..., i−1}} exp(ψ_M(a′, i)).   [15.9]

This objective rewards models that assign high scores to all valid antecedent structures. In the running example, this would correspond to summing the probabilities of the two valid antecedents for Cook, he and Apple Inc Chief Executive Tim Cook. In one of the exercises, you will compute the number of valid antecedent structures for a given clustering.

15.2.3 Transitive closure in mention-based models

A problem for mention-based models is that individual mention-level decisions may be incoherent. Consider the following mentions:

m_1 = Hillary Clinton   [15.10]
m_2 = Clinton   [15.11]
m_3 = Bill Clinton   [15.12]

A mention-pair system might predict ŷ_{1,2} = 1, ŷ_{2,3} = 1, ŷ_{1,3} = 0. Similarly, a mention-ranking system might choose â_2 = 1 and â_3 = 2. Logically, if mentions 1 and 3 are both coreferent with mention 2, then all three mentions must refer to the same entity. This constraint is known as transitive closure.

Transitive closure can be applied post hoc, revising the independent mention-pair or mention-ranking decisions. However, there are many possible ways to enforce transitive closure: in the example above, we could set ŷ_{1,3} = 1, or ŷ_{1,2} = 0, or ŷ_{2,3} = 0. For documents with many mentions, there may be many violations of transitive closure, and many possible fixes.
Transitive closure can be enforced by always adding edges, so that ŷ_{1,3} = 1 is preferred (e.g., Soon et al., 2001), but this can result in overclustering, with too many mentions grouped into too few entities.

Mention-pair coreference resolution can be viewed as a constrained optimization problem,

max_{y ∈ {0,1}^{M×M}} Σ_{j=1}^M Σ_{i=1}^j ψ_M(i, j) × y_{i,j}
s.t. y_{i,j} + y_{j,k} − 1 ≤ y_{i,k},  ∀ i < j < k,

with the constraint enforcing transitive closure. This constrained optimization problem is equivalent to graph partitioning with positive and negative edge weights: construct a graph where the nodes are mentions, and the edges are the pairwise scores ψ_M(i, j); the goal is to partition the graph so as to maximize the sum of the edge weights between all nodes within the same partition (McCallum and Wellner, 2004). This problem is NP-hard, motivating approximations such as correlation clustering (Bansal et al., 2004) and integer linear programming (Klenner, 2007; Finkel and Manning, 2008; also see § 13.2.2).

15.2.4 Entity-based models

A weakness of mention-based models is that they treat coreference resolution as a classification or ranking problem, when it is really a clustering problem: the goal is to group the mentions together into clusters that correspond to the underlying entities. Entity-based approaches attempt to identify these clusters directly. Such methods require a scoring function at the entity level, measuring whether each set of mentions is internally consistent. Coreference resolution can then be viewed as the following optimization,

max_z Σ_e ψ_E({i : z_i = e}),   [15.13]

where z_i indicates the entity referenced by mention i, and ψ_E({i : z_i = e}) is a scoring function applied to all mentions i that are assigned to entity e.

Entity-based coreference resolution is conceptually similar to the unsupervised clustering problems encountered in chapter 5: the goal is to obtain clusters of mentions that are internally coherent. The number of possible clusterings of n items is the Bell number, which is defined by the following recurrence (Bell, 1934; Luo et al., 2004),

B_n = Σ_{k=0}^{n−1} (n−1 choose k) B_k,   B_0 = B_1 = 1.   [15.14]

This recurrence is illustrated by the Bell tree, which is applied to a short coreference problem in Figure 15.4.
The Bell number B_n grows exponentially with n, making exhaustive search of the space of clusterings impossible. For this reason, entity-based coreference resolution typically involves incremental search, in which clustering decisions are based on local evidence, in the hope of approximately optimizing the full objective in Equation 15.13. This approach is sometimes called cluster ranking, in contrast to mention ranking.
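The recurrence in Equation 15.14 is easy to compute directly, and checking a few values makes the exponential growth concrete:

```python
# Bell numbers via the recurrence B_n = sum_{k=0}^{n-1} C(n-1, k) * B_k,
# with B_0 = B_1 = 1.
from math import comb

def bell(n):
    B = [1]  # B_0 = 1
    for m in range(1, n + 1):
        B.append(sum(comb(m - 1, k) * B[k] for k in range(m)))
    return B[n]

print([bell(n) for n in range(6)])  # → [1, 1, 2, 5, 15, 52]
```

B_3 = 5 matches the five leaves of the Bell tree in Figure 15.4, and by B_20 the count already exceeds 5 × 10^13, which is why exhaustive search over clusterings is hopeless.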
Figure 15.4: The Bell tree for the sentence Abigail hopes she speaks with her. The root {Abigail} expands to {Abigail, she} and {Abigail}, {she}, and then to the five possible clusterings of the three mentions: {Abigail, she, her}; {Abigail, she}, {her}; {Abigail}, {she, her}; {Abigail, her}, {she}; {Abigail}, {she}, {her}. Which paths are excluded by the syntactic constraints mentioned in § 15.1.1?

*Generative models of coreference Entity-based coreference can be approached through probabilistic generative models, in which the mentions in the document are conditioned on a set of latent entities (Haghighi and Klein, 2007, 2010). An advantage of these methods is that they can be learned from unlabeled data (e.g., Poon and Domingos, 2008); a disadvantage is that probabilistic inference is required not just for learning, but also for prediction. Furthermore, generative models require independence assumptions that are difficult to apply in coreference resolution, where the diverse and heterogeneous features do not admit an easy decomposition into mutually independent subsets.

Incremental cluster ranking The SMASH method (§ 15.1.1) can be extended to entity-based coreference resolution by building up coreference clusters while moving through the document (Cardie and Wagstaff, 1999). At each mention, the algorithm iterates backwards through possible antecedent clusters; but unlike SMASH, a cluster is selected only if all members of the cluster are compatible with the current mention. As mentions are added to a cluster, so are their features (e.g., gender, number, animacy). In this way, incoherent chains like {Hillary Clinton, Clinton, Bill Clinton} can be avoided. However, an incorrect assignment early in the document — a search error — might lead to a cascade of errors later on.

More sophisticated search strategies can help to ameliorate the risk of search errors. One approach is beam search (first discussed in § 11.3), in which a set of hypotheses is maintained throughout search.
Each hypothesis represents a path through the Bell tree (Figure 15.4). Hypotheses are “expanded” either by adding the next mention to an existing cluster, or by starting a new cluster. Each expansion receives a score, based on Equation 15.13, and the top K hypotheses are kept on the beam as the algorithm moves to the next step.
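The beam search over the Bell tree can be sketched as follows. This is a hedged illustration: the entity scoring function below is a toy stand-in for ψ_E in Equation 15.13 (it simply rewards clusters whose mentions share a token), not a trained model.

```python
# Beam search over partial clusterings (paths in the Bell tree).
import itertools

def beam_search(mentions, score_cluster, K=2):
    beam = [[]]  # each hypothesis is a list of clusters
    for m in mentions:
        expansions = []
        for clusters in beam:
            # expand by merging m into each existing cluster...
            for i in range(len(clusters)):
                new = [list(c) for c in clusters]
                new[i].append(m)
                expansions.append(new)
            # ...or by starting a new cluster
            expansions.append([list(c) for c in clusters] + [[m]])
        # keep the K highest-scoring partial clusterings
        expansions.sort(key=lambda cs: sum(score_cluster(c) for c in cs),
                        reverse=True)
        beam = expansions[:K]
    return beam

def score_cluster(c):
    """Toy stand-in for psi_E: count mention pairs sharing a token."""
    return sum(1 for a, b in itertools.combinations(c, 2)
               if set(a.split()) & set(b.split()))

print(beam_search(["Tim Cook", "Cook", "China"], score_cluster, K=2))
```

With K = 1 this reduces to the greedy incremental clustering described above; a larger beam hedges against early search errors at the cost of more computation.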
Incremental cluster ranking can be made more accurate by performing multiple passes over the document, applying rules (or “sieves”) with increasing recall and decreasing precision at each pass (Lee et al., 2013). In the early passes, coreference links are proposed only between mentions that are highly likely to corefer (e.g., exact string match for full names and nominals). Information can then be shared among these mentions, so that when more permissive matching rules are applied later, agreement is preserved across the entire cluster. For example, in the case of {Hillary Clinton, Clinton, she}, the name-matching sieve would link Clinton and Hillary Clinton, and the pronoun-matching sieve would then link she to the combined cluster. A deterministic multi-pass system won nearly every track of the 2011 CoNLL shared task on coreference resolution (Pradhan et al., 2011). Given the dominance of machine learning in virtually all other areas of natural language processing — and more than fifteen years of prior work on machine learning for coreference — this was a surprising result, even if learning-based methods have subsequently regained the upper hand (e.g., Lee et al., 2018, the state of the art at the time of this writing).

Incremental perceptron

Incremental coreference resolution can be learned with the incremental perceptron, as described in § 11.3.2. At mention i, each hypothesis on the beam corresponds to a clustering of mentions 1 . . . i−1, or equivalently, a path through the Bell tree up to position i−1. As soon as none of the hypotheses on the beam are compatible with the gold coreference clustering, a perceptron update is made (Daumé III and Marcu, 2005).
For concreteness, consider a linear cluster ranking model,

ψ_E({i : z_i = e}) = Σ_{i : z_i = e} θ · f(i, {j : j < i ∧ z_j = e}),   [15.15]

where the score for each cluster is computed as the sum of scores of all mentions that are linked into the cluster, and f(i, ∅) is a set of features for the non-anaphoric mention that initiates the cluster. Using Figure 15.4 as an example, suppose that the ground truth is,

c* = {Abigail, her}, {she},   [15.16]

but that with a beam of size one, the learner reaches the hypothesis,

ĉ = {Abigail, she}.   [15.17]

This hypothesis is incompatible with c*, so an update is needed:

θ ← θ + f(c*) − f(ĉ)   [15.18]
  = θ + (f(Abigail, ∅) + f(she, ∅)) − (f(Abigail, ∅) + f(she, {Abigail}))   [15.19]
  = θ + f(she, ∅) − f(she, {Abigail}).   [15.20]
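The update in Equations 15.18–15.20 can be sketched with sparse feature vectors. The feature function below is a toy stand-in (mention identity plus a new-cluster/link indicator), chosen only so that the cancellation in Equation 15.20 is visible.

```python
# Incremental perceptron update with sparse (Counter-based) features.
from collections import Counter

def f(mention, antecedent_cluster):
    """Toy features: mention identity, plus new-cluster vs. link indicators."""
    feats = Counter({f"mention={mention}": 1.0})
    if antecedent_cluster:
        feats[f"link={mention}->{'|'.join(sorted(antecedent_cluster))}"] = 1.0
    else:
        feats[f"new_cluster={mention}"] = 1.0
    return feats

def clustering_features(assignments):
    """assignments: list of (mention, antecedent_cluster) pairs."""
    total = Counter()
    for mention, cluster in assignments:
        total.update(f(mention, cluster))
    return total

theta = Counter()
gold = [("Abigail", set()), ("she", set())]       # c* prefix: {Abigail}, {she}
hyp = [("Abigail", set()), ("she", {"Abigail"})]  # hat-c: {Abigail, she}
update = clustering_features(gold)
update.subtract(clustering_features(hyp))         # f(c*) - f(hat-c)
theta.update(update)
# the shared term f(Abigail, ∅) cancels, as in Equation 15.20
print({k: v for k, v in theta.items() if v != 0})
```

The surviving weights reward starting a new cluster at she and penalize linking she to {Abigail}, exactly the two terms left in Equation 15.20.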
Reinforcement learning Reinforcement learning is a topic worthy of a textbook of its own (Sutton and Barto, 1998),5 so this section will provide only a very brief overview, in the context of coreference resolution. A stochastic policy assigns a probability to each possible action, conditional on the context. The goal is to learn a policy that achieves a high expected reward, or equivalently, a low expected cost. In incremental cluster ranking, a complete clustering of M mentions can be produced by a sequence of M actions, in which the action z_i either merges mention i with an existing cluster or begins a new cluster. We can therefore create a stochastic policy using the cluster scores (Clark and Manning, 2016),

Pr(z_i = e; θ) = exp ψ_E(i ∪ {j : z_j = e}; θ) / Σ_{e′} exp ψ_E(i ∪ {j : z_j = e′}; θ),   [15.21]

where ψ_E(i ∪ {j : z_j = e}; θ) is the score under parameters θ for assigning mention i to cluster e. This score can be an arbitrary function of the mention i, the cluster e and its (possibly empty) set of mentions; it can also include the history of actions taken thus far. If a policy assigns probability p(c; θ) to clustering c, then its expected loss is,

L(θ) = Σ_{c ∈ C(m)} p_θ(c) × ℓ(c),   [15.22]

where C(m) is the set of possible clusterings for mentions m. The loss ℓ(c) can be based on any arbitrary scoring function, including the complex evaluation metrics used in coreference resolution (see § 15.4).
This is an advantage of reinforcement learning, which can be trained directly on the evaluation metric — unlike traditional supervised learning, which requires a loss function that is differentiable and decomposable across individual decisions. Rather than summing over the exponentially many possible clusterings, we can approximate the expectation by sampling trajectories of actions, z = (z_1, z_2, . . . , z_M), from

5A draft of the second edition can be found here: http://incompleteideas.net/book/the-book-2nd.html. Reinforcement learning has been used in spoken dialogue systems (Walker, 2000) and text-based game playing (Branavan et al., 2009), and was applied to coreference resolution by Clark and Manning (2015).
the current policy. Each action z_i corresponds to a step in the Bell tree: adding mention m_i to an existing cluster, or forming a new cluster. Each trajectory z corresponds to a single clustering c, and so we can write the loss of an action sequence as ℓ(c(z)). The policy gradient algorithm computes the gradient of the expected loss as an expectation over trajectories (Sutton et al., 2000),

∂/∂θ L(θ) = E_{z ∼ Z(m)} [ ℓ(c(z)) Σ_{i=1}^M (∂/∂θ) log p(z_i | z_{1:i−1}, m) ]   [15.23]
≈ (1/K) Σ_{k=1}^K ℓ(c(z^{(k)})) Σ_{i=1}^M (∂/∂θ) log p(z_i^{(k)} | z_{1:i−1}^{(k)}, m),   [15.24]

where each action sequence z^{(k)} is sampled from the current policy. Unlike the incremental perceptron, an update is not made until the complete action sequence is available.

Learning to search

Policy gradient can suffer from high variance: while the average loss over K samples is asymptotically equal to the expected loss of a given policy, this estimate may not be accurate unless K is very large. This can make it difficult to allocate credit and blame to individual actions. In learning to search, this problem is addressed through the addition of an oracle policy, which is known to receive zero or small loss. The oracle policy can be used in two ways:

• The oracle can be used to generate partial hypotheses that are likely to score well, by generating i actions from the initial state. These partial hypotheses are then used as starting points for the learned policy. This is known as roll-in.

• The oracle can be used to compute the minimum possible loss from a given state, by generating M − i actions from the current state until completion. This is known as roll-out.

The oracle can be combined with the existing policy during both roll-in and roll-out, sampling actions from each policy (Daumé III et al., 2009). One approach is to gradually decrease the number of actions drawn from the oracle over the course of learning (Ross et al., 2011).
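The sampled estimate in Equation 15.24 can be sketched for a tiny tabular policy. This is a hedged illustration: the single-step episode, the loss function, and the softmax parameterization are all toy stand-ins, not any real coreference system; in practice each state would encode a node in the Bell tree and ℓ would be a coreference metric.

```python
# Sampled policy-gradient estimate: average loss(z) * grad log p(z).
import math
import random

def action_probs(theta, state, actions):
    """Softmax policy with one weight per (state, action) pair."""
    scores = [theta.get((state, a), 0.0) for a in actions]
    Z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / Z for s in scores]

def policy_gradient(theta, sample_episode, loss, K=100):
    """Monte Carlo estimate of the expected-loss gradient (Equation-style)."""
    grad = {}
    for _ in range(K):
        trajectory = sample_episode(theta)  # [(state, actions, chosen), ...]
        L = loss([chosen for _, _, chosen in trajectory])
        for state, actions, chosen in trajectory:
            probs = action_probs(theta, state, actions)
            for a, p in zip(actions, probs):
                # gradient of log softmax: indicator(a == chosen) - p
                g = (1.0 if a == chosen else 0.0) - p
                grad[(state, a)] = grad.get((state, a), 0.0) + L * g / K
    return grad

# toy demo: a single binary decision, where "merge" incurs no loss
def sample_episode(theta, state="s0", actions=("merge", "new")):
    probs = action_probs(theta, state, list(actions))
    chosen = random.choices(list(actions), probs)[0]
    return [(state, list(actions), chosen)]

random.seed(0)
g = policy_gradient({}, sample_episode, lambda z: 0.0 if z == ["merge"] else 1.0)
```

Since the estimated gradient points in the direction of increasing expected loss, a learner would step against it, lowering the weight on the lossy "new" action. The variance of this estimate across different samples is exactly the problem that the learning-to-search methods above are designed to reduce.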
In the context of entity-based coreference resolution, Clark and Manning (2016) use the learned policy for roll-in and the oracle policy for roll-out. Algorithm 17 shows how the gradients on the policy weights are computed in this case. In this application, the oracle is “noisy”, because it selects the action that minimizes only the local loss — the accuracy of the coreference clustering up to mention i — rather than identifying the action sequence that will lead to the best final coreference clustering on the entire document.
Algorithm 17 Learning to search for entity-based coreference resolution

1: procedure COMPUTE-GRADIENT(mentions m, loss function ℓ, parameters θ)
2:   L(θ) ← 0
3:   z ∼ p(z | m; θ)   ▷ Sample a trajectory from the current policy
4:   for i ∈ {1, 2, . . . , M} do
5:     for action z ∈ Z(z_{1:i−1}, m) do   ▷ All possible actions after history z_{1:i−1}
6:       h ← z_{1:i−1} ⊕ z   ▷ Concatenate history z_{1:i−1} with action z
7:       for j ∈ {i + 1, i + 2, . . . , M} do   ▷ Roll-out
8:         h_j ← argmin_h ℓ(h_{1:j−1} ⊕ h)   ▷ Oracle selects action with minimum loss
9:       L(θ) ← L(θ) + p(z | z_{1:i−1}, m; θ) × ℓ(h)   ▷ Update expected loss
10:   return (∂/∂θ) L(θ)

When learning from noisy oracles, it can be helpful to mix in actions from the current policy with the oracle during roll-out (Chang et al., 2015).

15.3 Representations for coreference resolution

Historically, coreference resolution has employed an array of hand-engineered features to capture the linguistic constraints and preferences described in § 15.1 (Soon et al., 2001). Later work has documented the utility of lexical and bilexical features on mention pairs (Björkelund and Nugues, 2011; Durrett and Klein, 2013). The most recent and successful methods replace many (but not all) of these features with distributed representations of mentions and entities (Wiseman et al., 2015; Clark and Manning, 2016; Lee et al., 2017).

15.3.1 Features

Coreference features generally rely on a preprocessing pipeline to provide part-of-speech tags and phrase structure parses. This pipeline makes it possible to design features that capture many of the phenomena from § 15.1, and is also necessary for typical approaches to mention identification. However, the pipeline may introduce errors that propagate to the downstream coreference clustering system.
Furthermore, the existence of such a pipeline presupposes resources such as treebanks, which do not exist for many languages.6

6The Universal Dependencies project has produced dependency treebanks for more than sixty languages. However, coreference features and mention detection are generally based on phrase structure trees, which exist for roughly two dozen languages. A list is available here: https://en.wikipedia.org/wiki/Treebank
Mention features

Features of individual mentions can help to predict anaphoricity. In systems where mention detection is performed jointly with coreference resolution, these features can also predict whether a span of text is likely to be a mention. For mention i, typical features include:

Mention type. Each span can be identified as a pronoun, name, or nominal, using the part-of-speech of the head word of the mention: both the Penn Treebank and Universal Dependencies tagsets (§ 8.1.1) include tags for pronouns and proper nouns, and all other heads can be marked as nominals (Haghighi and Klein, 2009).

Mention width. The number of tokens in a mention is a rough predictor of its anaphoricity, with longer mentions being less likely to refer back to previously-defined entities.

Lexical features. The first, last, and head words can help to predict anaphoricity; they are also useful in conjunction with features such as mention type and part-of-speech, providing a rough measure of agreement (Björkelund and Nugues, 2011). The number of lexical features can be very large, so it can be helpful to select only frequently-occurring features (Durrett and Klein, 2013).

Morphosyntactic features. These features include the part-of-speech, number, gender, and dependency ancestors.

The features for mention i and candidate antecedent a can be conjoined, producing joint features that can help to assess the compatibility of the two mentions. For example, Durrett and Klein (2013) conjoin each feature with the mention types of the anaphora and the antecedent. Coreference resolution corpora such as ACE and OntoNotes contain documents from various genres. By conjoining the genre with other features, it is possible to learn genre-specific feature weights.

Mention-pair features

For any pair of mentions i and j, typical features include:

Distance.
The number of intervening tokens, mentions, and sentences between i and j can all be used as distance features. These distances can be computed on the surface text, or on a transformed representation reflecting the breadth-first tree traversal (Figure 15.3). Rather than using the distances directly, they are typically binned, creating binary features.
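The binning of distance features can be sketched as follows; the bin edges here are hypothetical, since real systems tune them on development data:

```python
def binned_distance_features(d):
    """Map a raw distance (in tokens, mentions, or sentences) to binary
    bin-indicator features, rather than using the raw distance directly."""
    bins = [1, 2, 3, 4, 8, 16, 32, 64]  # hypothetical edges
    feats = {}
    for edge in bins:
        # Cumulative indicators: every bin the distance falls at or under fires.
        feats[f"dist<={edge}"] = 1 if d <= edge else 0
    return feats
```

Cumulative indicators (rather than disjoint bins) let a linear model learn monotone effects of distance with a few weights.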
|
nlp_Page_386_Chunk384
|
15.3. REPRESENTATIONS FOR COREFERENCE RESOLUTION 369

String match. A variety of string match features can be employed: exact match, suffix match, head match, and more complex matching rules that disregard irrelevant modifiers (Soon et al., 2001).

Compatibility. Building on the model, features can measure whether the anaphor and antecedent agree with respect to morphosyntactic attributes such as gender, number, and animacy.

Nesting. If one mention is nested inside another (e.g., [The President of [France]]), they generally cannot corefer.

Same speaker. For documents with quotations, such as news articles, personal pronouns can be resolved only by determining the speaker for each mention (Lee et al., 2013). Coreference is also more likely between mentions from the same speaker.

Gazetteers. These features indicate that the anaphor and candidate antecedent appear in a gazetteer of acronyms (e.g., USA/United States, GATech/Georgia Tech), demonyms (e.g., Israel/Israeli), or other aliases (e.g., Knickerbockers/New York Knicks).

Lexical semantics. These features use a lexical resource such as WORDNET to determine whether the head words of the mentions are related through synonymy, antonymy, or hypernymy (§ 4.2).

Dependency paths. The dependency path between the anaphor and candidate antecedent can help to determine whether the pair can corefer, under the government and binding constraints described in § 15.1.1.

Comprehensive lists of mention-pair features are offered by Bengtson and Roth (2008) and Rahman and Ng (2011). Neural network approaches use far fewer mention-pair features: for example, Lee et al. (2017) include only speaker, genre, distance, and mention width features.

Semantics In many cases, coreference seems to require knowledge and semantic inferences, as in the running example, where we link China with a country and a growth market. Some of this information can be gleaned from WORDNET, which defines a graph over synsets (see § 4.2).
For example, one of the synsets of China is an instance of an Asian nation#1, which in turn is a hyponym of country#2, a synset that includes country.7 Such paths can be used to measure the similarity between concepts (Pedersen et al., 2004), and this similarity can be incorporated into coreference resolution as a feature (Ponzetto and Strube, 2006). Similar ideas can be applied to knowledge graphs induced from Wikipedia (Ponzetto and Strube, 2007). But while such approaches improve

7teletype font is used to indicate wordnet synsets, and italics is used to indicate strings.
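A few of the mention-pair features above (string match, morphosyntactic compatibility, nesting) can be sketched as follows; the mention representation is a hypothetical dict, not the book's data structures:

```python
def mention_pair_features(i, j):
    """Compute a few typical mention-pair features for candidate antecedent i
    and anaphor j. Mentions are dicts with hypothetical keys: 'tokens',
    'head', 'gender', 'number', and 'span' (start, end) token offsets."""
    feats = {}
    # String match features
    feats["exact-match"] = i["tokens"] == j["tokens"]
    feats["head-match"] = i["head"] == j["head"]
    # Compatibility of morphosyntactic attributes
    feats["gender-agree"] = i["gender"] == j["gender"]
    feats["number-agree"] = i["number"] == j["number"]
    # Nesting: one span contained in the other generally rules out coreference
    (s1, t1), (s2, t2) = i["span"], j["span"]
    feats["nested"] = (s1 <= s2 and t2 <= t1) or (s2 <= s1 and t1 <= t2)
    return feats
```

In a linear model, each of these booleans becomes a weighted indicator feature, possibly conjoined with mention types or genre as described above.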
|
nlp_Page_387_Chunk385
|
relatively simple classification-based systems, they have proven less useful when added to the current generation of techniques.8 For example, Durrett and Klein (2013) employ a range of semantics-based features — WordNet synonymy and hypernymy relations on head words, named entity types (e.g., person, organization), and unsupervised clustering over nominal heads — but find that these features give minimal improvement over a baseline system using surface features.

Entity features

Many of the features for entity-mention coreference are generated by aggregating mention-pair features over all mentions in the candidate entity (Culotta et al., 2007; Rahman and Ng, 2011). Specifically, for each binary mention-pair feature f(i, j), we compute the following entity-mention features for mention i and entity e = {j : j < i ∧ z_j = e}:

• ALL-TRUE: Feature f(i, j) holds for all mentions j ∈ e.
• MOST-TRUE: Feature f(i, j) holds for at least half and fewer than all mentions j ∈ e.
• MOST-FALSE: Feature f(i, j) holds for at least one and fewer than half of all mentions j ∈ e.
• NONE: Feature f(i, j) does not hold for any mention j ∈ e.

For scalar mention-pair features (e.g., distance features), aggregation can be performed by computing the minimum, maximum, and median values across all mentions in the cluster. Additional entity-mention features include the number of mentions currently clustered in the entity, and ALL-X and MOST-X features for each mention type.

15.3.2 Distributed representations of mentions and entities

Recent work has emphasized distributed representations of both mentions and entities. One potential advantage is that pre-trained embeddings could help to capture the semantic compatibility underlying nominal coreference, helping with difficult cases like (Apple, the firm) and (China, the firm’s biggest growth market).
Furthermore, a distributed representation of entities can be trained to capture semantic features that are added by each mention.

Mention embeddings

Entity mentions can be embedded into a vector space, providing the base layer for neural networks that score coreference decisions (Wiseman et al., 2015).

8This point was made by Michael Strube at a 2015 workshop, noting that as the quality of the machine learning models in coreference has improved, the benefit of including semantics has become negligible.
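The ALL-TRUE / MOST-TRUE / MOST-FALSE / NONE aggregation scheme for binary mention-pair features, described in the entity features section above, can be sketched as:

```python
def aggregate_entity_features(f, i, entity):
    """Aggregate a binary mention-pair feature f(i, j) over all mentions j in
    a candidate entity cluster, producing the four quantified indicators."""
    vals = [f(i, j) for j in entity]
    n_true = sum(vals)
    n = len(vals)
    return {
        "ALL-TRUE": n_true == n,                 # holds for all mentions
        "MOST-TRUE": n / 2 <= n_true < n,        # at least half, fewer than all
        "MOST-FALSE": 1 <= n_true < n / 2,       # at least one, fewer than half
        "NONE": n_true == 0,                     # holds for no mention
    }
```

Exactly one of the four indicators fires for any nonempty cluster, so the scheme partitions the possible outcomes of the underlying pairwise feature.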
|
nlp_Page_388_Chunk386
|
Figure 15.5: A bidirectional recurrent model of mention embeddings. The mention is represented by its first word, its last word, and an estimate of its head word, which is computed from a weighted average (Lee et al., 2017). [Diagram omitted: vectors $u_{\text{first}}$, $u_{\text{head}}$, $u_{\text{last}}$ over the tokens "in the firm 's biggest growth market ."]

Constructing the mention embedding

Various approaches for embedding multiword units can be applied (see § 14.8). Figure 15.5 shows a recurrent neural network approach, which begins by running a bidirectional LSTM over the entire text, obtaining hidden states from the left-to-right and right-to-left passes, $h_m = [\overleftarrow{h}_m; \overrightarrow{h}_m]$. Each candidate mention span (s, t) is then represented by the vertical concatenation of four vectors:

$u^{(s,t)} = [u^{(s,t)}_{\text{first}}; u^{(s,t)}_{\text{last}}; u^{(s,t)}_{\text{head}}; \phi^{(s,t)}]$,   [15.25]

where $u^{(s,t)}_{\text{first}} = h_{s+1}$ is the embedding of the first word in the span, $u^{(s,t)}_{\text{last}} = h_t$ is the embedding of the last word, $u^{(s,t)}_{\text{head}}$ is the embedding of the “head” word, and $\phi^{(s,t)}$ is a vector of surface features, such as the length of the span (Lee et al., 2017).

Attention over head words

Rather than identifying the head word from the output of a parser, it can be computed from a neural attention mechanism:

$\tilde{\alpha}_m = \theta_\alpha \cdot h_m$   [15.26]
$a^{(s,t)} = \text{SoftMax}\left([\tilde{\alpha}_{s+1}, \tilde{\alpha}_{s+2}, \ldots, \tilde{\alpha}_t]\right)$   [15.27]
$u^{(s,t)}_{\text{head}} = \sum_{m=s+1}^{t} a^{(s,t)}_m h_m$.   [15.28]

Each token m gets a scalar score $\tilde{\alpha}_m = \theta_\alpha \cdot h_m$, which is the dot product of the LSTM hidden state $h_m$ and a vector of weights $\theta_\alpha$. The vector of scores for tokens in the span $m \in \{s+1, s+2, \ldots, t\}$ is then passed through a softmax layer, yielding a vector $a^{(s,t)}$ that allocates one unit of attention across the span. This eliminates the need for syntactic parsing to recover the head word; instead, the model learns to identify the most important words in each span. Attention mechanisms were introduced in neural machine translation (Bahdanau et al., 2014), and are described in more detail in § 18.3.1.
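The attention computation in Equations 15.26–15.28 can be sketched with numpy; the array shapes and span indexing convention here are assumptions:

```python
import numpy as np

def head_attention(H, theta_alpha, s, t):
    """Attention-weighted 'head' embedding for the span covering rows s..t-1
    of H: a scalar score per token, a softmax over the span, and a weighted
    sum of the hidden states. H has one LSTM state per token (row);
    theta_alpha is the attention weight vector."""
    scores = H[s:t] @ theta_alpha        # alpha_m = theta_alpha . h_m
    a = np.exp(scores - scores.max())    # numerically stable softmax
    a = a / a.sum()
    return a @ H[s:t]                    # u_head = sum_m a_m h_m
```

With zero attention weights the softmax is uniform and the head embedding reduces to the mean of the span's hidden states, which makes the mechanism easy to sanity-check.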
|
nlp_Page_389_Chunk387
|
Using mention embeddings

Given a set of mention embeddings, each mention i and candidate antecedent a is scored as,

$\psi(a, i) = \psi_S(a) + \psi_S(i) + \psi_M(a, i)$   [15.29]
$\psi_S(a) = \text{FeedForward}_S(u^{(a)})$   [15.30]
$\psi_S(i) = \text{FeedForward}_S(u^{(i)})$   [15.31]
$\psi_M(a, i) = \text{FeedForward}_M([u^{(a)}; u^{(i)}; u^{(a)} \odot u^{(i)}; f(a, i, w)])$,   [15.32]

where $u^{(a)}$ and $u^{(i)}$ are the embeddings for spans a and i respectively, as defined in Equation 15.25.

• The scores $\psi_S(a)$ quantify whether span a is likely to be a coreferring mention, independent of what it corefers with. This allows the model to learn to identify mentions directly, rather than identifying mentions in a preprocessing step.
• The score $\psi_M(a, i)$ computes the compatibility of spans a and i. Its base layer is a vector that includes the embeddings of spans a and i, their elementwise product $u^{(a)} \odot u^{(i)}$, and a vector of surface features f(a, i, w), including distance, speaker, and genre information.

Lee et al. (2017) provide an error analysis that shows how this method can correctly link a blaze and a fire, while incorrectly linking pilots and flight attendants. In each case, the coreference decision is based on similarities in the word embeddings.

Rather than embedding individual mentions, Clark and Manning (2016) embed mention pairs. At the base layer, their network takes embeddings of the words in and around each mention, as well as one-hot vectors representing a few surface features, such as the distance and string matching features. This base layer is then passed through a multilayer feedforward network with ReLU nonlinearities, resulting in a representation of the mention pair. The output of the mention pair encoder $u_{i,j}$ is used in the scoring function of a mention-ranking model, $\psi_M(i, j) = \theta \cdot u_{i,j}$. A similar approach is used to score cluster pairs, constructing a cluster-pair encoding by pooling over the mention-pair encodings for all pairs of mentions within the two clusters.
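The decomposition in Equation 15.29 can be sketched as follows; the feed-forward scorers are stand-ins (any callables returning a scalar), and only the input layout of Equation 15.32 is taken from the text:

```python
import numpy as np

def score_pair(u_a, u_i, f_ai, ffS, ffM):
    """Score candidate antecedent a for mention i:
    psi(a, i) = psiS(a) + psiS(i) + psiM(a, i).
    The pairwise input concatenates the two span embeddings, their
    elementwise product, and the surface feature vector f_ai."""
    pair_input = np.concatenate([u_a, u_i, u_a * u_i, f_ai])
    return ffS(u_a) + ffS(u_i) + ffM(pair_input)
```

In a real system `ffS` and `ffM` would be trained feed-forward networks; here, plugging in `np.sum` suffices to check the wiring.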
Entity embeddings

In entity-based coreference resolution, each entity should be represented by properties of its mentions. In a distributed setting, we maintain a set of vector entity embeddings, ve. Each candidate mention receives an embedding ui; Wiseman et al. (2016) compute this embedding by a single-layer neural network, applied to a vector of surface features. The decision of whether to merge mention i with entity e can then be driven by a feedforward
|
nlp_Page_390_Chunk388
|
network, $\psi_E(i, e) = \text{Feedforward}([v_e; u_i])$. If i is added to entity e, then its representation is updated recurrently, $v_e \leftarrow f(v_e, u_i)$, using a recurrent neural network such as a long short-term memory (LSTM; chapter 6). Alternatively, we can apply a pooling operation, such as max-pooling or average-pooling (chapter 3), setting $v_e \leftarrow \text{Pool}(v_e, u_i)$. In either case, the update to the representation of entity e can be thought of as adding new information about the entity from mention i.

15.4 Evaluating coreference resolution

The state of coreference evaluation is aggravatingly complex. Early attempts at simple evaluation metrics were found to be susceptible to trivial baselines, such as placing each mention in its own cluster, or grouping all mentions into a single cluster. Following Denis and Baldridge (2009), the CoNLL 2011 shared task on coreference (Pradhan et al., 2011) formalized the practice of averaging across three different metrics: MUC (Vilain et al., 1995), B-CUBED (Bagga and Baldwin, 1998a), and CEAF (Luo, 2005). Reference implementations of these metrics are available from Pradhan et al. (2014) at https://github.com/conll/reference-coreference-scorers.

Additional resources

Ng (2010) surveys coreference resolution through 2010. Early work focused exclusively on pronoun resolution, with rule-based (Lappin and Leass, 1994) and probabilistic methods (Ge et al., 1998). The full coreference resolution problem was popularized in a shared task associated with the sixth Message Understanding Conference, which included coreference annotations for training and test sets of thirty documents each (Grishman and Sundheim, 1996). An influential early paper was the decision tree approach of Soon et al. (2001), who introduced mention ranking. A comprehensive list of surface features for coreference resolution is offered by Bengtson and Roth (2008).
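Of the three metrics averaged in the CoNLL evaluation, B-CUBED is the simplest to state: it averages per-mention precision and recall. A minimal sketch, assuming each clustering is given as a dict from mention to cluster id:

```python
def b_cubed(response, key):
    """B-CUBED precision and recall (Bagga and Baldwin, 1998). For each
    mention, precision is the fraction of mentions sharing its response
    cluster that also share its key (gold) cluster; recall is the converse.
    Both are averaged over all mentions."""
    mentions = list(key)
    p_total = r_total = 0.0
    for m in mentions:
        r_cluster = {x for x in mentions if response[x] == response[m]}
        k_cluster = {x for x in mentions if key[x] == key[m]}
        overlap = len(r_cluster & k_cluster)
        p_total += overlap / len(r_cluster)
        r_total += overlap / len(k_cluster)
    n = len(mentions)
    return p_total / n, r_total / n
```

This quadratic formulation is fine for illustration; the reference scorer linked above should be used for reported results.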
Durrett and Klein (2013) improved on prior work by introducing a large lexicalized feature set; subsequent work has emphasized neural representations of entities and mentions (Wiseman et al., 2015).

Exercises

1. Select an article from today’s news, and annotate coreference for the first twenty noun phrases and possessive pronouns that appear in the article, including ones that are nested within larger noun phrases. Then specify the mention-pair training data that would result from the first five of these candidate entity mentions.

2. Using your annotations from the preceding problem, compute the following statistics:
|
nlp_Page_391_Chunk389
|
• The number of times new entities are introduced by each of the three types of referring expressions: pronouns, proper nouns, and nominals. Include “singleton” entities that are mentioned only once.
• For each type of referring expression, compute the fraction of mentions that are anaphoric.

3. Apply a simple heuristic to all pronouns in the article from the previous exercise: link each pronoun to the closest preceding noun phrase that agrees in gender, number, animacy, and person. Compute the following evaluation:

• True positive: a pronoun that is linked to a noun phrase with which it is coreferent, or is labeled as the first mention of an entity when in fact it does not corefer with any preceding mention. In this case, non-referential pronouns can be true positives if they are marked as having no antecedent.
• False positive: a pronoun that is linked to a noun phrase with which it is not coreferent. This includes mistakenly linking singleton or non-referential pronouns.
• False negative: a pronoun that has at least one antecedent, but is either labeled as not having an antecedent, or is linked to a mention with which it does not corefer.

Compute the F-MEASURE for your method, and for a trivial baseline in which every pronoun refers to the immediately preceding entity mention. Are there any additional heuristics that would have improved the performance of this method?

4. Durrett and Klein (2013) compute the probability of the gold coreference clustering by summing over all antecedent structures that are compatible with the clustering. For example, if there are three mentions of a single entity, m1, m2, m3, there are two possible antecedent structures: a2 = 1, a3 = 1 and a2 = 1, a3 = 2. Compute the number of antecedent structures for a single entity with K mentions.

5. Suppose that all mentions can be unambiguously divided into C classes, for example by gender and number.
Further suppose that mentions from different classes can never corefer. In a document with M mentions, give upper and lower bounds on the total number of possible coreference clusterings, in terms of the Bell numbers and the parameters M and C. Compute numerical upper and lower bounds for the case M = 4, C = 2.

6. Lee et al. (2017) propose a model that considers all contiguous spans in a document as possible mentions.

a) In a document of length M, how many mention pairs must be evaluated? (All answers can be given in asymptotic, big-O notation.)
|
nlp_Page_392_Chunk390
|
b) To make inference more efficient, Lee et al. (2017) restrict consideration to spans of maximum length L ≪ M. Under this restriction, how many mention pairs must be evaluated?

c) To further improve inference, one might evaluate coreference only between pairs of mentions whose endpoints are separated by a maximum of D tokens. Under this additional restriction, how many mention pairs must be evaluated?

7. In Spanish, the subject can be omitted when it is clear from context, e.g.,

(15.13) Las ballenas no son peces.  Son mamíferos.
        The whales   no  are fish.  Are mammals.
        ‘Whales are not fish. They are mammals.’

Resolution of such null subjects is facilitated by the Spanish system of verb morphology, which includes distinctive suffixes for most combinations of person and number. For example, the verb form son (‘are’) agrees with the third-person plural pronouns ellos (masculine) and ellas (feminine), as well as the second-person plural ustedes. Suppose that you are given the following components:

• A system that automatically identifies verbs with null subjects.
• A function c(j, p) ∈ {0, 1} that indicates whether pronoun p is compatible with null subject j, according to the verb morphology.
• A trained mention-pair model, which computes scores ψ(w_i, w_j, j − i) ∈ R for all pairs of mentions i and j, scoring the pair by the antecedent mention w_i, the anaphor w_j, and the distance j − i.

Describe an integer linear program that simultaneously performs two tasks: resolving coreference among all entity mentions, and identifying suitable pronouns for all null subjects. In the example above, your program should link the null subject with las ballenas (‘whales’), and identify ellas as the correct pronoun. For simplicity, you may assume that null subjects cannot be antecedents, and you need not worry about the transitivity constraint described in § 15.2.3.

8.
Use the policy gradient algorithm to compute the gradient for the following scenario, based on the Bell tree in Figure 15.4:

• The gold clustering c∗ is {Abigail, her}, {she}.
|
nlp_Page_393_Chunk391
|
• Drawing a single sequence of actions (K = 1) from the current policy, you obtain the following incremental clusterings:

c(a1) = {Abigail}
c(a1:2) = {Abigail, she}
c(a1:3) = {Abigail, she}, {her}.

• At each mention t, the space of actions At includes merging the mention with each existing cluster or with the empty cluster. The probability of merging mt with cluster c is proportional to the exponentiated score for the merged cluster,

$p(\text{Merge}(m_t, c)) \propto \exp \psi_E(m_t \cup c)$,   [15.33]

where $\psi_E(m_t \cup c)$ is defined in Equation 15.15.

Compute the gradient $\frac{\partial}{\partial \theta} L(\theta)$ in terms of the loss $\ell(c(a))$ and the features of each (potential) cluster. Explain the differences between the gradient-based update $\theta \leftarrow \theta - \frac{\partial}{\partial \theta} L(\theta)$ and the incremental perceptron update from this same example.

9. As discussed in § 15.1.1, some pronouns are not referential. In English, this occurs frequently with the word it. Download the text of Alice in Wonderland from NLTK, and examine the first ten appearances of it. For each occurrence:

• First, examine a five-token window around the word. In the first example, this window is, “, but it had no”. Is there another pronoun that could be substituted for it? Consider she, they, and them. In this case, both she and they yield grammatical substitutions. What about the other ten appearances of it?
• Now, view a fifteen-word window for each example. Based on this window, mark whether you think the word it is referential. How often does the substitution test predict whether it is referential?

10. Now try to automate the test, using the Google n-grams corpus (Brants and Franz, 2006). Specifically, find the count of each 5-gram containing it, and then compute the counts of 5-grams in which it is replaced with other third-person pronouns: he, she, they, her, him, them, herself, himself. There are various ways to get these counts. One approach is to download the raw data and search it; another is to construct web queries to https://books.
google.com/ngrams.
|
nlp_Page_394_Chunk392
|
Compare the ratio of the counts of the original 5-gram to the summed counts of the 5-grams created by substitution. Is this ratio a good predictor of whether it is referential?
|
nlp_Page_395_Chunk393
|
Chapter 16 Discourse

Applications of natural language processing often concern multi-sentence documents: from paragraph-long restaurant reviews, to 500-word newspaper articles, to 500-page novels. Yet most of the methods that we have discussed thus far are concerned with individual sentences. This chapter discusses theories and methods for handling multi-sentence linguistic phenomena, known collectively as discourse. There are diverse characterizations of discourse structure, and no single structure is ideal for every computational application. This chapter covers some of the most well studied discourse representations, while highlighting computational models for identifying and exploiting these structures.

16.1 Segments

A document or conversation can be viewed as a sequence of segments, each of which is cohesive in its content and/or function. In Wikipedia biographies, these segments often pertain to various aspects of the subject’s life: early years, major events, impact on others, and so on. This segmentation is organized around topics. Alternatively, scientific research articles are often organized by functional themes: the introduction, a survey of previous research, experimental setup, and results.

Written texts often mark segments with section headers and related formatting devices. However, such formatting may be too coarse-grained to support applications such as the retrieval of specific passages of text that are relevant to a query (Hearst, 1997). Unformatted speech transcripts, such as meetings and lectures, are also an application scenario for segmentation (Carletta, 2007; Glass et al., 2007; Janin et al., 2003).
|
nlp_Page_397_Chunk394
|
Figure 16.1: Smoothed cosine similarity among adjacent sentences in a news article. Local minima at m = 10 and m = 29 indicate likely segmentation points. [Plot omitted: sentence index vs. cosine similarity, showing the original signal and smoothing with L = 1 and L = 3.]

16.1.1 Topic segmentation

A cohesive topic segment forms a unified whole, using various linguistic devices: repeated references to an entity or event; the use of conjunctions to link related ideas; and the repetition of meaning through lexical choices (Halliday and Hasan, 1976). Each of these cohesive devices can be measured, and then used as features for topic segmentation. A classical example is the use of lexical cohesion in the TEXTTILING method for topic segmentation (Hearst, 1997). The basic idea is to compute the textual similarity between each pair of adjacent blocks of text (sentences or fixed-length units), using a formula such as the smoothed cosine similarity of their bag-of-words vectors,

$s_m = \frac{x_m \cdot x_{m+1}}{||x_m||_2 \times ||x_{m+1}||_2}$   [16.1]

$\bar{s}_m = \sum_{\ell=0}^{L} k_\ell (s_{m+\ell} + s_{m-\ell})$,   [16.2]

with $k_\ell$ representing the value of a smoothing kernel of size L, e.g. $k = [1, 0.5, 0.25]^\top$. Segmentation points are then identified at local minima in the smoothed similarities $\bar{s}$, since these points indicate changes in the overall distribution of words in the text. An example is shown in Figure 16.1.

Text segmentation can also be formulated as a probabilistic model, in which each segment has a unique language model that defines the probability over the text in the segment (Utiyama and Isahara, 2001; Eisenstein and Barzilay, 2008; Du et al., 2013).1 A good

1There is a rich literature on how latent variable models (such as latent Dirichlet allocation) can track
|
nlp_Page_398_Chunk395
|
segmentation achieves high likelihood by grouping segments with similar word distributions. This probabilistic approach can be extended to hierarchical topic segmentation, in which each topic segment is divided into subsegments (Eisenstein, 2009). All of these approaches are unsupervised. While labeled data can be obtained from well-formatted texts such as textbooks, such annotations may not generalize to speech transcripts in alternative domains. Supervised methods have been tried in cases where in-domain labeled data is available, substantially improving performance by learning weights on multiple types of features (Galley et al., 2003).

16.1.2 Functional segmentation

In some genres, there is a canonical set of communicative functions: for example, in scientific research articles, one such function is to communicate the general background for the article, another is to introduce a new contribution, or to describe the aim of the research (Teufel et al., 1999). A functional segmentation divides the document into contiguous segments, sometimes called rhetorical zones, in which each sentence has the same function. Teufel and Moens (2002) train a supervised classifier to identify the function of each sentence in a set of scientific research articles, using features that describe the sentence’s position in the text, its similarity to the rest of the article and title, the tense and voice of the main verb, and the functional role of the previous sentence. Functional segmentation can also be performed without supervision. Noting that some types of Wikipedia articles have very consistent functional segmentations (e.g., articles about cities or chemical elements), Chen et al. (2009) introduce an unsupervised model for functional segmentation, which learns both the language model associated with each function and the typical patterning of functional segments across the article.
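The TEXTTILING similarity computation from § 16.1.1 (Equations 16.1–16.2) can be sketched as follows; the boundary handling, and counting the center term once rather than twice, are implementation choices, not part of the original method:

```python
import numpy as np

def texttiling_scores(X, kernel=(1.0, 0.5, 0.25)):
    """Smoothed cosine similarities between adjacent bag-of-words vectors
    (rows of X). Low values in the returned array suggest segmentation
    points. kernel[l] is the smoothing weight k_l."""
    # Cosine similarity between each pair of adjacent blocks (Eq. 16.1)
    sims = []
    for m in range(len(X) - 1):
        num = X[m] @ X[m + 1]
        denom = np.linalg.norm(X[m]) * np.linalg.norm(X[m + 1])
        sims.append(num / denom if denom > 0 else 0.0)
    sims = np.array(sims)
    # Symmetric kernel smoothing (Eq. 16.2), clipped at the boundaries
    smoothed = np.zeros_like(sims)
    for m in range(len(sims)):
        for l, k in enumerate(kernel):
            if m + l < len(sims):
                smoothed[m] += k * sims[m + l]
            if l > 0 and m - l >= 0:
                smoothed[m] += k * sims[m - l]
    return smoothed
```

Segment boundaries would then be read off at local minima of the returned array, as in Figure 16.1.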
16.2 Entities and reference

Another dimension of discourse relates to which entities are mentioned throughout the text, and how. Consider the examples in Figure 16.2: Grosz et al. (1995) argue that the first discourse is more coherent. Do you agree? The examples differ in their choice of referring expressions for the protagonist John, and in the syntactic constructions in sentences (b) and (d). The examples demonstrate the need for theoretical models to explain how referring expressions are chosen, and where they are placed within sentences. Such models can then be used to help interpret the overall structure of the discourse, to measure discourse coherence, and to generate discourses in which referring expressions are used coherently.

topics across documents (Blei et al., 2003; Blei, 2012).
|
nlp_Page_399_Chunk396
|
(16.1) a. John went to his favorite music store to buy a piano.
       b. He had frequented the store for many years.
       c. He was excited that he could finally buy a piano.
       d. He arrived just as the store was closing for the day.

(16.2) a. John went to his favorite music store to buy a piano.
       b. It was a store John had frequented for many years.
       c. He was excited that he could finally buy a piano.
       d. It was closing just as John arrived.

Figure 16.2: Two tellings of the same story (Grosz et al., 1995). The discourse in (16.1) uses referring expressions coherently, while the one in (16.2) does not.

16.2.1 Centering theory

Centering theory presents a unified account of the relationship between discourse structure and entity reference (Grosz et al., 1995). According to the theory, every utterance in the discourse is characterized by a set of entities, known as centers.

• The forward-looking centers in utterance m are all the entities that are mentioned in the utterance, cf(wm) = {e1, e2, . . .}. The forward-looking centers are partially ordered by their syntactic prominence, favoring subjects over objects, and objects over other positions (Brennan et al., 1987). For example, in (16.1a) of Figure 16.2, the ordered list of forward-looking centers in the first utterance is John, the music store, and the piano.

• The backward-looking center cb(wm) is the highest-ranked element in the set of forward-looking centers from the previous utterance cf(wm−1) that is also mentioned in wm. In (16.1b), the backward-looking center is John.

Given these two definitions, centering theory makes the following predictions about the form and position of referring expressions:

1. If a pronoun appears in the utterance wm, then the backward-looking center cb(wm) must also be realized as a pronoun.
This rule argues against the use of it to refer to the piano store in Example (16.2d), since JOHN is the backward-looking center of (16.2d), and he is mentioned by name and not by a pronoun.

2. Sequences of utterances should retain the same backward-looking center if possible, and ideally, the backward-looking center should also be the top-ranked element in the list of forward-looking centers. This rule argues in favor of the preservation of JOHN as the backward-looking center throughout Example (16.1).
|
nlp_Page_400_Chunk397
|
                                       SKYLER  WALTER  DANGER  A GUY  THE DOOR
You don’t know who you’re talking to,    S       -       -       -       -
so let me clue you in.                   O       O       -       -       -
I am not in danger, Skyler.              X       S       X       -       -
I am the danger.                         -       S       O       -       -
A guy opens his door and gets shot,      -       -       -       S       O
and you think that of me?                S       X       -       -       -
No. I am the one who knocks!             -       S       -       -       -

Figure 16.3: The entity grid representation for a dialogue from the television show Breaking Bad.

Centering theory unifies aspects of syntax, discourse, and anaphora resolution. However, it can be difficult to clarify exactly how to rank the elements of each utterance, or even how to partition a text or dialog into utterances (Poesio et al., 2004).

16.2.2 The entity grid

One way to formalize the ideas of centering theory is to arrange the entities in a text or conversation in an entity grid. This is a data structure with one row per sentence, and one column per entity (Barzilay and Lapata, 2008). Each cell c(m, i) can take the following values:

c(m, i) = S, if entity i is in subject position in sentence m;
          O, if entity i is in object position in sentence m;
          X, if entity i appears in sentence m, in neither subject nor object position;
          −, if entity i does not appear in sentence m.   [16.3]

To populate the entity grid, syntactic parsing is applied to identify subject and object positions, and coreference resolution is applied to link multiple mentions of a single entity. An example is shown in Figure 16.3.

After the grid is constructed, the coherence of a document can be measured by the transitions between adjacent cells in each column. For example, the transition (S → S) keeps an entity in subject position across adjacent sentences; the transition (O → S) promotes an entity from object position to subject position; the transition (S → −) drops the subject of one sentence from the next sentence. The probabilities of each transition can be
|
nlp_Page_401_Chunk398
|
estimated from labeled data, and an entity grid can then be scored by the sum of the log-probabilities across all columns and all transitions, $\sum_{i=1}^{N_e} \sum_{m=1}^{M} \log p(c(m, i) \mid c(m-1, i))$. The resulting probability can be used as a proxy for the coherence of a text. This has been shown to be useful for a range of tasks: determining which of a pair of articles is more readable (Schwarm and Ostendorf, 2005), correctly ordering the sentences in a scrambled text (Lapata, 2003), and disentangling multiple conversational threads in an online multi-party chat (Elsner and Charniak, 2010).

16.2.3 *Formal semantics beyond the sentence level

An alternative view of the role of entities in discourse focuses on formal semantics, and the construction of meaning representations for multi-sentence units. Consider the following two sentences (from Bird et al., 2009):

(16.3) a. Angus owns a dog.
       b. It bit Irene.

We would like to recover the formal semantic representation,

∃x.DOG(x) ∧ OWN(ANGUS, x) ∧ BITE(x, IRENE).   [16.4]

However, the semantic representations of each individual sentence are,

∃x.DOG(x) ∧ OWN(ANGUS, x)   [16.5]
BITE(y, IRENE).   [16.6]

Unifying these two representations into the form of Equation 16.4 requires linking the unbound variable y from [16.6] with the quantified variable x in [16.5].2 Discourse understanding therefore requires the reader to update a set of assignments, from variables to entities. This update would (presumably) link the dog in the first sentence of [16.3] with the unbound variable y in the second sentence, thereby licensing the conjunction in [16.4].3 This basic idea is at the root of dynamic semantics (Groenendijk and Stokhof, 1991). Segmented discourse representation theory links dynamic semantics with a set of discourse relations, which explain how adjacent units of text are rhetorically or conceptually related (Lascarides and Asher, 2007). The next section explores the theory of discourse relations in more detail.
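The entity-grid scoring formula, $\sum_{i} \sum_{m} \log p(c(m, i) \mid c(m-1, i))$, can be sketched as follows; the transition probability table here is a toy stand-in for probabilities estimated from labeled data:

```python
import math

def grid_log_probability(grid, trans_prob):
    """Score an entity grid by summing transition log-probabilities over all
    columns. grid is a list of rows (one per sentence) over the symbols
    'S', 'O', 'X', '-'; trans_prob maps (previous, current) symbol pairs
    to probabilities."""
    total = 0.0
    n_entities = len(grid[0])
    for i in range(n_entities):            # one column per entity
        for m in range(1, len(grid)):      # transitions between adjacent rows
            total += math.log(trans_prob[(grid[m - 1][i], grid[m][i])])
    return total
```

In practice the transition table would be smoothed, since unseen transitions would otherwise receive zero probability and an infinite log-loss.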
2Groenendijk and Stokhof (1991) treats the y variable in Equation 16.6 as unbound. Even if it were bound locally with an existential quantifier (∃y.BITE(y, IRENE)), the variable would still need to be reconciled with the quantified variable in Equation 16.5.

3This linking task is similar to coreference resolution (see chapter 15), but here the connections are between semantic variables, rather than spans of text.

Jacob Eisenstein. Draft of November 13, 2018.
16.3 Relations

In dependency grammar, sentences are characterized by a graph (usually a tree) of syntactic relations between words, such as NSUBJ and DET. A similar idea can be applied at the document level, identifying relations between discourse units, such as clauses, sentences, or paragraphs. The task of discourse parsing involves identifying discourse units and the relations that hold between them. These relations can then be applied to tasks such as document classification and summarization, as discussed in § 16.3.4.

16.3.1 Shallow discourse relations

The existence of discourse relations is hinted at by discourse connectives, such as however, moreover, meanwhile, and if . . . then. These connectives explicitly specify the relationship between adjacent units of text: however signals a contrastive relationship, moreover signals that the subsequent text elaborates or strengthens the point that was made immediately beforehand, meanwhile indicates that two events are contemporaneous, and if . . . then sets up a conditional relationship. Discourse connectives can therefore be viewed as a starting point for the analysis of discourse relations.

In lexicalized tree-adjoining grammar for discourse (D-LTAG), each connective anchors a relationship between two units of text (Webber, 2004). This model provides the theoretical basis for the Penn Discourse Treebank (PDTB), the largest corpus of discourse relations in English (Prasad et al., 2008). It includes a hierarchical inventory of discourse relations (shown in Table 16.1), which is created by abstracting the meanings implied by the discourse connectives that appear in real texts (Knott, 1996).
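The idea of connectives as a starting point for discourse analysis can be sketched as a tiny lookup-based tagger. The connective-to-sense mapping below is illustrative only — the labels are loosely modeled on coarse relation classes, not the official PDTB sense inventory, and a real system would also need to decide whether each candidate token is actually functioning as a discourse connective.

```python
# Toy lexicon mapping discourse connectives to coarse relation senses.
# The sense labels are hypothetical stand-ins for a real inventory.
CONNECTIVE_SENSES = {
    "however": "COMPARISON.Contrast",
    "moreover": "EXPANSION.Conjunction",
    "meanwhile": "TEMPORAL.Synchrony",
    "because": "CONTINGENCY.Cause",
    "if": "CONTINGENCY.Condition",
}

def find_connectives(sentence):
    """Return (connective, sense) pairs for known connectives in a sentence."""
    tokens = [t.strip(".,;:!?") for t in sentence.lower().split()]
    return [(t, CONNECTIVE_SENSES[t]) for t in tokens if t in CONNECTIVE_SENSES]

print(find_connectives("However, the plan failed."))
```

This lookup is only a first pass: as noted below, many connectives have senses in which they do not signal a discourse relation at all, so disambiguation is required in practice.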
These relations are then annotated on the same corpus of news text used in the Penn Treebank (see § 9.2.2), adding the following information:

• Each connective is annotated for the discourse relation or relations that it expresses, if any — many discourse connectives have senses in which they do not signal a discourse relation (Pitler and Nenkova, 2009).

• For each discourse relation, the two arguments of the relation are specified as ARG1 and ARG2, where ARG2 is constrained to be adjacent to the connective. These arguments may be sentences, but they may also be smaller or larger units of text.

• Adjacent sentences are annotated for implicit discourse relations, which are not marked by any connective. When a connective could be inserted between a pair of sentences, the annotator supplies it, and also labels its sense (e.g., example 16.5). In some cases, there is no relationship at all between a pair of adjacent sentences; in other cases, the only relation is that the adjacent sentences mention one or more shared entities. These phenomena are annotated as NOREL and ENTREL (entity relation), respectively.

Under contract with MIT Press, shared under CC-BY-NC-ND license.
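The annotation fields described above can be captured in a simple record type. This is a minimal sketch of a PDTB-style relation, assuming the fields listed in the bullets; the class and field names are hypothetical, not an official PDTB data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DiscourseRelation:
    arg1: str                          # first argument span
    arg2: str                          # second argument span (adjacent to the connective)
    sense: Optional[str] = None        # relation sense; None for NOREL / ENTREL
    connective: Optional[str] = None   # surface connective; None if no relation
    explicit: bool = True              # False if the connective was supplied by the annotator

# An implicit relation: the annotator inserts a plausible connective
# ("because", a hypothetical example) and labels its sense.
rel = DiscourseRelation(
    arg1="The stock fell sharply.",
    arg2="Investors were spooked by the report.",
    sense="CONTINGENCY.Cause",
    connective="because",
    explicit=False,
)
print(rel.explicit, rel.sense)
```

Representing explicit and implicit relations with one type, distinguished by a flag, mirrors the annotation scheme: the same sense inventory applies to both, and only the provenance of the connective differs.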