182 CHAPTER 8. APPLICATIONS OF SEQUENCE LABELING

word           PTB tag   UD tag   UD attributes
The            DT        DET      DEFINITE=DEF PRONTYPE=ART
German         JJ        ADJ      DEGREE=POS
Expressionist  NN        NOUN     NUMBER=SING
movement       NN        NOUN     NUMBER=SING
was            VBD       AUX      MOOD=IND NUMBER=SING PERSON=3 TENSE=PAST VERBFORM=FIN
destroyed      VBN       VERB     TENSE=PAST VERBFORM=PART VOICE=PASS
as             IN        ADP
a              DT        DET      DEFINITE=IND PRONTYPE=ART
result         NN        NOUN     NUMBER=SING
.              .         PUNCT

Figure 8.1: UD and PTB part-of-speech tags, and UD morphosyntactic attributes. Example selected from the UD 1.4 English corpus.

8.2 Morphosyntactic Attributes

There is considerably more to say about a word than whether it is a noun or a verb: in English, verbs are distinguished by features such as tense and aspect, nouns by number, adjectives by degree, and so on. These features are language-specific: other languages distinguish other features, such as case (the role of the noun with respect to the action of the sentence, which is marked in languages such as Latin and German[5]) and evidentiality (the source of information for the speaker's statement, which is marked in languages such as Turkish). In the UD corpora, these attributes are annotated as feature-value pairs for each token.[6]

An example is shown in Figure 8.1. The determiner the is marked with two attributes: PRONTYPE=ART, which indicates that it is an article (as opposed to another type of determiner or pronominal modifier), and DEFINITE=DEF, which indicates that it is a definite article (referring to a specific, known entity). The verbs are each marked with several attributes. The auxiliary verb was is third-person, singular, past tense, finite (conjugated), and indicative (describing an event that has happened or is currently happening); the main verb destroyed is in participle form (so there is no additional person and number information), past tense, and passive voice. Some, but not all, of these distinctions are reflected in the PTB tags VBD (past-tense verb) and VBN (past participle).

While there are thousands of papers on part-of-speech tagging, there is comparatively little work on automatically labeling morphosyntactic attributes. Faruqui et al. (2016) train a support vector machine classification model, using a minimal feature set that includes the word itself, its prefixes and suffixes, and type-level information listing all possible morphosyntactic attributes for each word and its neighbors. Mueller et al. (2013) use a conditional random field (CRF), in which the tag space consists of all observed combinations of morphosyntactic attributes (e.g., the tag would be DEF+ART for the word the in Figure 8.1). This massive tag space is managed by decomposing the feature space over individual attributes, and pruning paths through the trellis. More recent work has employed bidirectional LSTM sequence models. For example, Pinter et al. (2017) train a bidirectional LSTM sequence model. The input layer and hidden vectors in the LSTM are shared across attributes, but each attribute has its own output layer, culminating in a softmax over all attribute values, e.g. y_t^NUMBER ∈ {SING, PLURAL, . . .}. They find that character-level information is crucial, especially when the amount of labeled data is limited. Evaluation is performed by first computing recall and precision for each attribute.

[5] Case is marked in English for some personal pronouns, e.g., She saw her, They saw them.

[6] The annotation and tagging of morphosyntactic attributes can be traced back to earlier work on Turkish (Oflazer and Kuruöz, 1994) and Czech (Hajič and Hladká, 1998). MULTEXT-East was an early multilingual corpus to include morphosyntactic attributes (Dimitrova et al., 1998).

Jacob Eisenstein. Draft of November 13, 2018.
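To make this evaluation concrete, the per-attribute computation might look like the following sketch, in which gold and predicted analyses are represented as dictionaries of attribute-value pairs. The representation and the function name are illustrative choices, not from the text.

```python
from collections import Counter

def attribute_prf(gold, pred):
    """Per-attribute precision, recall, and F-measure for morphosyntactic tagging.

    gold, pred: lists (one entry per token) of dicts mapping attribute -> value,
    e.g. {"NUMBER": "SING", "TENSE": "PAST"}.
    """
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        for attr, val in p.items():        # predicted pairs: correct or spurious
            if g.get(attr) == val:
                tp[attr] += 1
            else:
                fp[attr] += 1
        for attr, val in g.items():        # gold pairs missed by the prediction
            if p.get(attr) != val:
                fn[attr] += 1
    scores = {}
    for attr in set(tp) | set(fp) | set(fn):
        prec = tp[attr] / (tp[attr] + fp[attr]) if tp[attr] + fp[attr] else 0.0
        rec = tp[attr] / (tp[attr] + fn[attr]) if tp[attr] + fn[attr] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[attr] = (prec, rec, f1)
    return scores
```
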
These scores can then be averaged at either the type or token level to obtain micro- or macro-F-MEASURE. Pinter et al. (2017) evaluate on 23 languages in the UD treebank, reporting a median micro-F-MEASURE of 0.95. Performance is strongly correlated with the size of the labeled dataset for each language, with a few outliers: for example, Chinese is particularly difficult, because although the dataset is relatively large (10^5 tokens in the UD 1.4 corpus), only 6% of tokens have any attributes, offering few useful labeled instances.

8.3 Named Entity Recognition

A classical problem in information extraction is to recognize and extract mentions of named entities in text. In news documents, the core entity types are people, locations, and organizations; more recently, the task has been extended to include amounts of money, percentages, dates, and times. In item 8.20a (Figure 8.2), the named entities include: The U.S. Army, an organization; Atlanta, a location; and May 14, 1864, a date. Named entity recognition is also a key task in biomedical natural language processing, with entity types including proteins, DNA, RNA, and cell lines (e.g., Collier et al., 2000; Ohta et al., 2002). Figure 8.2 shows an example from the GENIA corpus of biomedical research abstracts.

Under contract with MIT Press, shared under CC-BY-NC-ND license.

(8.20) a. The/B-ORG U.S./I-ORG Army/I-ORG captured/O Atlanta/B-LOC on/O May/B-DATE 14/I-DATE ,/I-DATE 1864/I-DATE
       b. Number/O of/O glucocorticoid/B-PROTEIN receptors/I-PROTEIN in/O lymphocytes/B-CELLTYPE and/O ...

Figure 8.2: BIO notation for named entity recognition. Example (8.20b) is drawn from the GENIA corpus of biomedical documents (Ohta et al., 2002).

A standard approach to tagging named entity spans is to use discriminative sequence labeling methods such as conditional random fields. However, the named entity recognition (NER) task would seem to be fundamentally different from sequence labeling tasks like part-of-speech tagging: rather than tagging each token, the goal is to recover spans of tokens, such as The United States Army.

This is accomplished by the BIO notation, shown in Figure 8.2. Each token at the beginning of a name span is labeled with a B- prefix; each token within a name span is labeled with an I- prefix. These prefixes are followed by a tag for the entity type, e.g. B-LOC for the beginning of a location, and I-PROTEIN for the inside of a protein name. Tokens that are not parts of name spans are labeled as O. From this representation, the entity name spans can be recovered unambiguously. This tagging scheme is also advantageous for learning: tokens at the beginning of name spans may have different properties than tokens within the name, and the learner can exploit this. This insight can be taken even further, with special labels for the last tokens of a name span, and for unique tokens in name spans, such as Atlanta in the example in Figure 8.2. This is called BILOU notation, and it can yield improvements in supervised named entity recognition (Ratinov and Roth, 2009).

Feature-based sequence labeling Named entity recognition was one of the first applications of conditional random fields (McCallum and Li, 2003).
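The unambiguous recovery of spans from BIO tags can be sketched as follows. The function name is my own, as is the decision to treat an ill-formed I- tag (one that does not continue a span of the same type) as an implicit B-.

```python
def bio_to_spans(tags):
    """Recover (entity_type, start, end) spans from a BIO tag sequence.

    End offsets are exclusive, so a span covers tokens[start:end].
    """
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-") or (tag.startswith("I-") and etype != tag[2:]):
            # Start of a new entity: close any open span first.
            if etype is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag == "O":
            if etype is not None:
                spans.append((etype, start, i))
            start, etype = None, None
        # An I- tag matching the open span's type simply continues it.
    if etype is not None:
        spans.append((etype, start, len(tags)))
    return spans
```

Running this on the tags of example (8.20a) recovers the organization, location, and date spans.
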
The use of Viterbi decoding restricts the feature function f(w, y) to be a sum of local features, Σ_m f(w, y_m, y_{m-1}, m), so that each feature can consider only local adjacent tags. Typical features include tag transitions, word features for w_m and its neighbors, character-level features for prefixes and suffixes, and "word shape" features for capitalization and other orthographic properties. As an example, base features for the word Army in the example in (8.20a) include:

(CURR-WORD:Army, PREV-WORD:U.S., NEXT-WORD:captured, PREFIX-1:A-, PREFIX-2:Ar-, SUFFIX-1:-y, SUFFIX-2:-my, SHAPE:Xxxx)

Features can also be obtained from a gazetteer, which is a list of known entity names. For example, the U.S. Social Security Administration provides a list of tens of thousands of
given names — more than could be observed in any annotated corpus. Tokens or spans that match an entry in a gazetteer can receive special features; this provides a way to incorporate hand-crafted resources such as name lists in a learning-driven framework.

Neural sequence labeling for NER Current research has emphasized neural sequence labeling, using similar LSTM models to those employed in part-of-speech tagging (Hammerton, 2003; Huang et al., 2015; Lample et al., 2016). The bidirectional LSTM-CRF (Figure 7.4 in § 7.6) does particularly well on this task, due to its ability to model tag-to-tag dependencies. However, Strubell et al. (2017) show that convolutional neural networks can be equally accurate, with significant improvement in speed due to the efficiency of implementing ConvNets on graphics processing units (GPUs). The key innovation in this work was the use of dilated convolution, which is described in more detail in § 3.4.

8.4 Tokenization

A basic problem for text analysis, first discussed in § 4.3.1, is to break the text into a sequence of discrete tokens. For alphabetic languages such as English, deterministic scripts usually suffice to achieve accurate tokenization. However, in logographic writing systems such as Chinese script, words are typically composed of a small number of characters, without intervening whitespace. The tokenization must be determined by the reader, with the potential for occasional ambiguity, as shown in Figure 8.3.

(1) 日文       章魚     怎麼   說?
    Japanese   octopus  how    say
    'How to say octopus in Japanese?'

(2) 日      文章    魚     怎麼   說?
    Japan   essay   fish   how    say

Figure 8.3: An example of tokenization ambiguity in Chinese (Sproat et al., 1996)

One approach is to match character sequences against a known dictionary (e.g., Sproat et al., 1996), using additional statistical information about word frequency. However, no dictionary is completely comprehensive, and dictionary-based approaches can struggle with such out-of-vocabulary words.
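One simple instantiation of the dictionary-matching idea is greedy left-to-right maximum matching: at each position, take the longest dictionary entry that matches. The sketch below uses an invented toy dictionary and falls back to single characters for out-of-vocabulary material; it ignores the word-frequency statistics mentioned above.

```python
def max_match(text, dictionary, max_len=4):
    """Greedy left-to-right maximum matching against a word list.

    At each position, prefer the longest dictionary match (up to max_len
    characters); unmatched material becomes single-character tokens.
    """
    tokens, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            if length == 1 or text[i:i + length] in dictionary:
                tokens.append(text[i:i + length])
                i += length
                break
    return tokens
```

With a dictionary containing 日文 and 章魚, this recovers reading (1) of Figure 8.3; a dictionary that instead privileged 文章 would produce something like reading (2), illustrating why frequency information matters.
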
Chinese word segmentation has therefore been approached as a supervised sequence labeling problem. Xue et al. (2003) train a logistic regression classifier to make independent segmentation decisions while moving a sliding window across the document. A set of rules is then used to convert these individual classification decisions into an overall tokenization of the input. However, these individual decisions may be globally suboptimal, motivating a structure prediction approach. Peng et al. (2004) train a conditional random
field to predict labels of START or NONSTART on each character. More recent work has employed neural network architectures. For example, Chen et al. (2015) use an LSTM-CRF architecture, as described in § 7.6: they construct a trellis, in which each tag is scored according to the hidden state of an LSTM, and tag-tag transitions are scored according to learned transition weights. The best-scoring segmentation is then computed by the Viterbi algorithm.

8.5 Code switching

Multilingual speakers and writers do not restrict themselves to a single language. Code switching is the phenomenon of switching between languages in speech and text (Auer, 2013; Poplack, 1980). Written code switching has become more common in online social media, as in the following extract from the website of Canadian Prime Minister Justin Trudeau:[7]

(8.21) Although everything written on this site est [is] disponible [available] en [in] anglais [English] and in French, my personal videos seront [will be] bilingues [bilingual]

Accurately analyzing such texts requires first determining which languages are being used. Furthermore, quantitative analysis of code switching can provide insights on the languages themselves and their relative social positions.

Code switching can be viewed as a sequence labeling problem, where the goal is to label each token as a candidate switch point. In the example above, the words est, and, and seront would be labeled as switch points. Solorio and Liu (2008) detect English-Spanish switch points using a supervised classifier, with features that include the word, its part-of-speech in each language (according to a supervised part-of-speech tagger), and the probabilities of the word and part-of-speech in each language. Nguyen and Doğruöz (2013) apply a conditional random field to the problem of detecting code switching between Turkish and Dutch.
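Given token-level language labels, candidate switch points can be read off by comparing each token's language to its predecessor's. A minimal sketch, with an invented two-letter label inventory; producing the language labels themselves is of course the hard part, which the classifiers above address.

```python
def switch_points(lang_ids):
    """Label each token 1 if its language differs from the previous token's
    language (a candidate switch point), else 0.

    lang_ids: a list of per-token language labels, e.g. "en" or "fr".
    """
    if not lang_ids:
        return []
    labels = [0]  # the first token cannot be a switch point
    for prev, curr in zip(lang_ids, lang_ids[1:]):
        labels.append(1 if curr != prev else 0)
    return labels
```
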
Code switching is a special case of the more general problem of word-level language identification, which Barman et al. (2014) address in the context of trilingual code switching between Bengali, English, and Hindi. They further observe an even more challenging phenomenon: intra-word code switching, such as the use of English suffixes with Bengali roots. They therefore mark each token as either (1) belonging to one of the three languages; (2) a mix of multiple languages; (3) "universal" (e.g., symbols, numbers, emoticons); or (4) undefined.

[7] As quoted in http://blogues.lapresse.ca/lagace/2008/09/08/justin-trudeau-really-parfait-bilingue/, accessed August 21, 2017.
Speaker   Dialogue Act           Utterance
A         YES-NO-QUESTION        So do you go to college right now?
A         ABANDONED              Are yo-
B         YES-ANSWER             Yeah,
B         STATEMENT              It's my last year [laughter].
A         DECLARATIVE-QUESTION   You're a, so you're a senior now.
B         YES-ANSWER             Yeah,
B         STATEMENT              I'm working on my projects trying to graduate [laughter]
A         APPRECIATION           Oh, good for you.
B         BACKCHANNEL            Yeah.

Figure 8.4: An example of dialogue act labeling (Stolcke et al., 2000)

8.6 Dialogue acts

The sequence labeling problems that we have discussed so far have been over sequences of word tokens or characters (in the case of tokenization). However, sequence labeling can also be performed over higher-level units, such as utterances. Dialogue acts are labels over utterances in a dialogue, corresponding roughly to the speaker's intention — the utterance's illocutionary force (Austin, 1962). For example, an utterance may state a proposition (it is not down on any map), pose a question (shall we keep chasing this murderous fish?), or provide a response (aye aye!). Stolcke et al. (2000) describe how a set of 42 dialogue acts were annotated for the 1,155 conversations in the Switchboard corpus (Godfrey et al., 1992).[8] An example is shown in Figure 8.4.

The annotation is performed over UTTERANCES, with the possibility of multiple utterances per conversational turn (in cases such as interruptions, an utterance may split over multiple turns). Some utterances are clauses (e.g., So do you go to college right now?), while others are single words (e.g., yeah). Stolcke et al. (2000) report that hidden Markov models (HMMs) achieve 96% accuracy on supervised utterance segmentation. The labels themselves reflect the conversational goals of the speaker: the utterance yeah functions as an answer in response to the question you're a senior now, but in the final line of the excerpt, it is a backchannel (demonstrating comprehension).

For the task of dialogue act labeling, Stolcke et al.
(2000) apply a hidden Markov model. The probability p(w_m | y_m) must generate the entire sequence of words in the utterance, and it is modeled as a trigram language model (§ 6.1). Stolcke et al. (2000) also account for acoustic features, which capture the prosody of each utterance — for example, tonal and rhythmic properties of speech, which can be used to distinguish dialogue acts such

[8] Dialogue act modeling is not restricted to speech; it is relevant in any interactive conversation. For example, Jeong et al. (2009) annotate a more limited set of speech acts in a corpus of emails and online forums.
as questions and answers. These features are handled with an additional emission distribution, p(a_m | y_m), which is modeled with a probabilistic decision tree (Murphy, 2012). While acoustic features yield small improvements overall, they play an important role in distinguishing questions from statements, and agreements from backchannels.

Recurrent neural architectures for dialogue act labeling have been proposed by Kalchbrenner and Blunsom (2013) and Ji et al. (2016), with strong empirical results. Both models are recurrent at the utterance level, so that each complete utterance updates a hidden state. The recurrent-convolutional network of Kalchbrenner and Blunsom (2013) uses convolution to obtain a representation of each individual utterance, while Ji et al. (2016) use a second level of recurrence, over individual words. This enables their method to also function as a language model, giving probabilities over sequences of words in a document.

Exercises

1. Using the Universal Dependencies part-of-speech tags, annotate the following sentences. You may examine the UD tagging guidelines. Tokenization is shown with whitespace. Don't forget about punctuation.

   (8.22) a. I try all things , I achieve what I can .
          b. It was that accursed white whale that razed me .
          c. Better to sleep with a sober cannibal , than a drunk Christian .
          d. Be it what it will , I 'll go to it laughing .

2. Select three short sentences from a recent news article, and annotate them for UD part-of-speech tags. Ask a friend to annotate the same three sentences without looking at your annotations. Compute the rate of agreement, using the Kappa metric defined in § 4.5.2. Then work together to resolve any disagreements.

3. Choose one of the following morphosyntactic attributes: MOOD, TENSE, VOICE. Research the definition of this attribute on the universal dependencies website, http://universaldependencies.org/u/feat/index.html.
Returning to the examples in the first exercise, annotate all verbs for your chosen attribute. It may be helpful to consult examples from an English-language universal dependencies corpus, available at https://github.com/UniversalDependencies/UD_English-EWT/tree/master.

4. Download a dataset annotated for universal dependencies, such as the English Treebank at https://github.com/UniversalDependencies/UD_English-EWT/tree/master. This corpus is already segmented into training, development, and test data.
   a) First, train a logistic regression or SVM classifier using character suffixes: character n-grams up to length 4. Compute the recall, precision, and F-MEASURE on the development data.

   b) Next, augment your classifier using the same character suffixes of the preceding and succeeding tokens. Again, evaluate your classifier on heldout data.

   c) Optionally, train a Viterbi-based sequence labeling model, using a toolkit such as CRFSuite (http://www.chokkan.org/software/crfsuite/) or your own Viterbi implementation. This is more likely to be helpful for attributes in which agreement is required between adjacent words. For example, many Romance languages require gender and number agreement for determiners, nouns, and adjectives.

5. Provide BIO-style annotation of the named entities (person, place, organization, date, or product) in the following expressions:

   (8.23) a. The third mate was Flask, a native of Tisbury, in Martha's Vineyard.
          b. Its official Nintendo announced today that they Will release the Nintendo 3DS in north America march 27 (Ritter et al., 2011).
          c. Jessica Reif, a media analyst at Merrill Lynch & Co., said, "If they can get up and running with exclusive programming within six months, it doesn't set the venture back that far."[9]

6. Run the examples above through the online version of a named entity recognition tagger, such as the Allen NLP system here: http://demo.allennlp.org/named-entity-recognition. Do the predicted tags match your annotations?

7. Build a whitespace tokenizer for English:

   a) Using the NLTK library, download the complete text to the novel Alice in Wonderland (Carroll, 1865). Hold out the final 1000 words as a test set.

   b) Label each alphanumeric character as a segmentation point, y_m = 1 if m is the final character of a token. Label every other character as y_m = 0.
Then concatenate all the tokens in the training and test sets. Make sure that the number of labels {y_m} (m = 1, . . . , M) is identical to the number of characters {c_m} (m = 1, . . . , M) in your concatenated datasets.

   c) Train a logistic regression classifier to predict y_m, using the surrounding characters c_{m-5:m+5} as features. After training the classifier, run it on the test set, using the predicted segmentation points to re-tokenize the text.

[9] From the Message Understanding Conference (MUC-7) dataset (Chinchor and Robinson, 1997).
   d) Compute the per-character segmentation accuracy on the test set. You should be able to get at least 88% accuracy.

   e) Print out a sample of segmented text from the test set, e.g.,

      Thereareno mice in the air , I ' m afraid , but y oumight cat chabat , and that ' svery like a mouse , youknow . But docatseat bats , I wonder ?'

8. Perform the following extensions to your tokenizer in the previous problem.

   a) Train a conditional random field sequence labeler, by incorporating the tag bigrams (y_{m-1}, y_m) as additional features. You may use a structured prediction library such as CRFSuite, or you may want to implement Viterbi yourself. Compare the accuracy with your classification-based approach.

   b) Compute the token-level performance: treating the original tokenization as ground truth, compute the number of true positives (tokens that are in both the ground truth and predicted tokenization), false positives (tokens that are in the predicted tokenization but not the ground truth), and false negatives (tokens that are in the ground truth but not the predicted tokenization). Compute the F-measure. Hint: to match predicted and ground truth tokens, add "anchors" for the start character of each token. The number of true positives is then the size of the intersection of the sets of predicted and ground truth tokens.

   c) Apply the same methodology in a more practical setting: tokenization of Chinese, which is written without whitespace. You can find annotated datasets at http://alias-i.com/lingpipe/demos/tutorial/chineseTokens/read-me.html.
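The anchor-based matching suggested in the hint of exercise 8b can be sketched as follows; the function name is illustrative, and both tokenizations are assumed to cover the same underlying character sequence.

```python
def token_f1(gold_tokens, pred_tokens):
    """Token-level F-measure via start-character anchors.

    Each token is identified by its (start_offset, string) pair, so two tokens
    match only if they have the same text at the same position.
    """
    def anchored(tokens):
        spans, offset = set(), 0
        for tok in tokens:
            spans.add((offset, tok))
            offset += len(tok)
        return spans

    gold, pred = anchored(gold_tokens), anchored(pred_tokens)
    tp = len(gold & pred)          # true positives: tokens in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```

For example, comparing the gold tokens do cats eat bats against the mis-segmented do cat seat bats, only do and bats match, giving precision and recall of 0.5 each.
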
Chapter 9

Formal language theory

We have now seen methods for learning to label individual words, vectors of word counts, and sequences of words; we will soon proceed to more complex structural transformations. Most of these techniques could apply to counts or sequences from any discrete vocabulary; there is nothing fundamentally linguistic about, say, a hidden Markov model. This raises a basic question that this text has not yet considered: what is a language?

This chapter will take the perspective of formal language theory, in which a language is defined as a set of strings, each of which is a sequence of elements from a finite alphabet. For interesting languages, there are an infinite number of strings that are in the language, and an infinite number of strings that are not. For example:

• the set of all even-length sequences from the alphabet {a, b}, e.g., {ϵ, aa, ab, ba, bb, aaaa, aaab, . . .};

• the set of all sequences from the alphabet {a, b} that contain aaa as a substring, e.g., {aaa, aaaa, baaa, aaab, . . .};

• the set of all sequences of English words (drawn from a finite dictionary) that contain at least one verb (a finite subset of the dictionary);

• the PYTHON programming language.

Formal language theory defines classes of languages and their computational properties. Of particular interest is the computational complexity of solving the membership problem — determining whether a string is in a language. The chapter will focus on three classes of formal languages: regular, context-free, and "mildly" context-sensitive languages.

A key insight of 20th century linguistics is that formal language theory can be usefully applied to natural languages such as English, by designing formal languages that capture as many properties of the natural language as possible. For many such formalisms, a useful linguistic analysis comes as a byproduct of solving the membership problem. The
membership problem can be generalized to the problems of scoring strings for their acceptability (as in language modeling), and of transducing one string into another (as in translation).

9.1 Regular languages

If you have written a regular expression, then you have defined a regular language: a regular language is any language that can be defined by a regular expression. Formally, a regular expression can include the following elements:

• A literal character drawn from some finite alphabet Σ.

• The empty string ϵ.

• The concatenation of two regular expressions RS, where R and S are both regular expressions. The resulting expression accepts any string that can be decomposed x = yz, where y is accepted by R and z is accepted by S.

• The alternation R | S, where R and S are both regular expressions. The resulting expression accepts a string x if it is accepted by R or it is accepted by S.

• The Kleene star R*, which accepts any string x that can be decomposed into a sequence of strings which are all accepted by R.

• Parenthesization (R), which is used to limit the scope of the concatenation, alternation, and Kleene star operators.

Here are some example regular expressions:

• The set of all even-length strings on the alphabet {a, b}: ((aa)|(ab)|(ba)|(bb))*

• The set of all sequences of the alphabet {a, b} that contain aaa as a substring: (a|b)*aaa(a|b)*

• The set of all sequences of English words that contain at least one verb: W* V W*, where W is an alternation between all words in the dictionary, and V is an alternation between all verbs (V ⊆ W).

This list does not include a regular expression for the Python programming language, because this language is not regular — there is no regular expression that can capture its syntax. We will discuss why towards the end of this section.

Regular languages are closed under union, intersection, and concatenation.
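The first two example expressions can be checked directly with Python's re module; the patterns are anchored with ^ and $ so that the whole string, not just a substring, must be accepted.

```python
import re

# Anchored versions of the example regular expressions above.
even_ab = re.compile(r"^((aa)|(ab)|(ba)|(bb))*$")  # even-length strings over {a, b}
has_aaa = re.compile(r"^(a|b)*aaa(a|b)*$")         # strings containing aaa

assert even_ab.match("abba") is not None
assert even_ab.match("") is not None       # the empty string has even length
assert even_ab.match("aab") is None        # odd length: rejected
assert has_aaa.match("baaab") is not None
assert has_aaa.match("aabaa") is None      # no aaa substring
```
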
This means that if two languages L1 and L2 are regular, then so are the languages L1 ∪ L2, L1 ∩ L2, and the language of strings that can be decomposed as s = tu, with t ∈ L1 and u ∈ L2. Regular languages are also closed under negation: if L is regular, then so is the language L̄ = {s : s ∉ L}.
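Closure under union and concatenation is visible directly in regular expression syntax: if R and S define L1 and L2, then (R)|(S) defines L1 ∪ L2, and (R)(S) defines the concatenation language. A small demonstration, with two invented component languages:

```python
import re

# L1: even-length runs of a (including the empty string); L2: odd-length runs of b.
R, S = "(aa)*", "b(bb)*"
union = re.compile(f"^(({R})|({S}))$")    # L1 ∪ L2
concat = re.compile(f"^({R})({S})$")      # {tu : t in L1, u in L2}

assert union.match("aaaa") is not None    # in L1
assert union.match("bbb") is not None     # in L2
assert union.match("ab") is None          # in neither
assert concat.match("aab") is not None    # "aa" + "b"
assert concat.match("baa") is None        # cannot split as L1 then L2
```
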
Figure 9.1: State diagram for the finite state acceptor M1.

9.1.1 Finite state acceptors

A regular expression defines a regular language, but does not give an algorithm for determining whether a string is in the language that it defines. Finite state automata are theoretical models of computation on regular languages, which involve transitions between a finite number of states. The most basic type of finite state automaton is the finite state acceptor (FSA), which describes the computation involved in testing if a string is a member of a language. Formally, a finite state acceptor is a tuple M = (Q, Σ, q0, F, δ), consisting of:

• a finite alphabet Σ of input symbols;

• a finite set of states Q = {q0, q1, . . . , qn};

• a start state q0 ∈ Q;

• a set of final states F ⊆ Q;

• a transition function δ : Q × (Σ ∪ {ϵ}) → 2^Q. The transition function maps from a state and an input symbol (or the empty string ϵ) to a set of possible resulting states.

A path in M is a sequence of transitions, π = t1, t2, . . . , tN, where each ti traverses an arc in the transition function δ. The finite state acceptor M accepts a string ω if there is an accepting path, in which the initial transition t1 begins at the start state q0, the final transition tN terminates in a final state in F, and the entire input ω is consumed.

Example Consider the following FSA, M1.

Σ = {a, b}                                          [9.1]
Q = {q0, q1}                                        [9.2]
F = {q1}                                            [9.3]
δ = {(q0, a) → q0, (q0, b) → q1, (q1, b) → q1}.     [9.4]

This FSA defines a language over an alphabet of two symbols, a and b. The transition function δ is written as a set of arcs: (q0, a) → q0 says that if the machine is in state
q0 and reads symbol a, it stays in q0. Figure 9.1 provides a graphical representation of M1. Because each pair of initial state and symbol has at most one resulting state, M1 is deterministic: each string ω induces at most one accepting path. Note that there are no transitions for the symbol a in state q1; if a is encountered in q1, then the acceptor is stuck, and the input string is rejected.

What strings does M1 accept? The start state is q0, and we have to get to q1, since this is the only final state. Any number of a symbols can be consumed in q0, but a b symbol is required to transition to q1. Once there, any number of b symbols can be consumed, but an a symbol cannot. So the regular expression corresponding to the language defined by M1 is a*bb*.

Computational properties of finite state acceptors The key computational question for finite state acceptors is: how fast can we determine whether a string is accepted? For deterministic FSAs, this computation can be performed by Dijkstra's algorithm, with time complexity O(V log V + E), where V is the number of vertices in the FSA, and E is the number of edges (Cormen et al., 2009). Non-deterministic FSAs (NFSAs) can include multiple transitions from a given symbol and state. Any NFSA can be converted into a deterministic FSA, but the resulting automaton may have a number of states that is exponential in the size of the original NFSA (Mohri et al., 2002).

9.1.2 Morphology as a regular language

Many words have internal structure, such as prefixes and suffixes that shape their meaning. The study of word-internal structure is the domain of morphology, of which there are two main types:

• Derivational morphology describes the use of affixes to convert a word from one grammatical category to another (e.g., from the noun grace to the adjective graceful), or to change the meaning of the word (e.g., from grace to disgrace).
• Inflectional morphology describes the addition of details such as gender, number, person, and tense (e.g., the -ed suffix for past tense in English).

Morphology is a rich topic in linguistics, deserving of a course in its own right.[1] The focus here will be on the use of finite state automata for morphological analysis. The

[1] A good starting point would be a chapter from a linguistics textbook (e.g., Akmajian et al., 2010; Bender, 2013). A key simplification in this chapter is the focus on affixes as the sole method of derivation and inflection. English makes use of affixes, but also incorporates apophony, such as the inflection of foot to feet. Semitic languages like Arabic and Hebrew feature a template-based system of morphology, in which roots are triples of consonants (e.g., ktb), and words are created by adding vowels: kataba (Arabic: he wrote), kutub (books), maktab (desk). For more detail on morphology, see texts from Haspelmath and Sims (2013) and Lieber (2015).
current section deals with derivational morphology; inflectional morphology is discussed in § 9.1.4.

Suppose that we want to write a program that accepts only those words that are constructed in accordance with the rules of English derivational morphology:

(9.1) a. grace, graceful, gracefully, *gracelyful
      b. disgrace, *ungrace, disgraceful, disgracefully
      c. allure, *allureful, alluring, alluringly
      d. fairness, unfair, *disfair, fairly

(Recall that the asterisk indicates that a linguistic example is judged unacceptable by fluent speakers of a language.) These examples cover only a tiny corner of English derivational morphology, but a number of things stand out. The suffix -ful converts the nouns grace and disgrace into adjectives, and the suffix -ly converts adjectives into adverbs. These suffixes must be applied in the correct order, as shown by the unacceptability of *gracelyful. The -ful suffix works for only some words, as shown by the use of alluring as the adjectival form of allure. Other changes are made with prefixes, such as the derivation of disgrace from grace, which roughly corresponds to a negation; however, fair is negated with the un- prefix instead. Finally, while the first three examples suggest that the direction of derivation is noun → adjective → adverb, the example of fair suggests that the adjective can also be the base form, with the -ness suffix performing the conversion to a noun.

Can we build a computer program that accepts only well-formed English words, and rejects all others? This might at first seem trivial to solve with a brute-force attack: simply make a dictionary of all valid English words. But such an approach fails to account for morphological productivity — the applicability of existing morphological rules to new words and names, such as Trump to Trumpy and Trumpkin, and Clinton to Clintonian and Clintonite.
We need an approach that represents morphological rules explicitly, and for this we will try a finite state acceptor. The dictionary approach can be implemented as a finite state acceptor, with the vocabulary Σ equal to the vocabulary of English, and a transition from the start state to the accepting state for each word. But this would of course fail to generalize beyond the original vocabulary, and would not capture anything about the morphotactic rules that govern derivations from new words. The first step towards a more general approach is shown in Figure 9.2, which is the state diagram for a finite state acceptor in which the vocabulary consists of morphemes, which include stems (e.g., grace, allure) and affixes (e.g., dis-, -ing, -ly). This finite state acceptor consists of a set of paths leading away from the start state, with derivational affixes added along the path. Except for qneg, the states on these paths are all final, so the FSA will accept disgrace, disgraceful, and disgracefully, but not dis-.

Under contract with MIT Press, shared under CC-BY-NC-ND license.
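The morpheme-level acceptor just described can be sketched directly in code. The state names, transition table, and morpheme segmentation below are illustrative assumptions in the spirit of Figure 9.2, not an implementation from the text:

```python
# A finite state acceptor for a fragment of English derivational morphology,
# in the style of Figure 9.2. States and segmentation are invented for illustration.
TRANSITIONS = {
    ("q0", "grace"): "qN1", ("qN1", "-ful"): "qJ1", ("qJ1", "-ly"): "qA1",
    ("q0", "dis-"): "qneg", ("qneg", "grace"): "qN2",
    ("qN2", "-ful"): "qJ2", ("qJ2", "-ly"): "qA2",
    ("q0", "allure"): "qN3", ("qN3", "-ing"): "qJ3", ("qJ3", "-ly"): "qA3",
    ("q0", "fair"): "qJ4", ("qJ4", "-ness"): "qN4", ("qJ4", "-ly"): "qA4",
}
# Every state along a path is accepting except the start state and qneg,
# so dis- alone is rejected, while disgrace and disgraceful are accepted.
NON_FINAL = {"q0", "qneg"}

def accepts(morphemes):
    state = "q0"
    for m in morphemes:
        if (state, m) not in TRANSITIONS:
            return False  # no arc for this morpheme in this state
        state = TRANSITIONS[(state, m)]
    return state not in NON_FINAL

print(accepts(["dis-", "grace", "-ful", "-ly"]))  # True
print(accepts(["grace", "-ly", "-ful"]))          # False: wrong suffix order
print(accepts(["dis-"]))                          # False: qneg is not accepting
```

Extending the acceptor to a new stem amounts to adding a handful of transitions, which is the generality argument made for Figure 9.3.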
196 CHAPTER 9. FORMAL LANGUAGE THEORY

Figure 9.2: A finite state acceptor for a fragment of English derivational morphology. Each path represents possible derivations from a single root form.

This FSA can be minimized to the form shown in Figure 9.3, which makes the generality of the finite state approach more apparent. For example, the transition from q0 to qJ2 can be made to accept not only fair but any single-morpheme (monomorphemic) adjective that takes -ness and -ly as suffixes. In this way, the finite state acceptor can easily be extended: as new word stems are added to the vocabulary, their derived forms will be accepted automatically. Of course, this FSA would still need to be extended considerably to cover even this small fragment of English morphology. As shown by cases like music → musical, athlete → athletic, English includes several classes of nouns, each with its own rules for derivation.

The FSAs shown in Figures 9.2 and 9.3 accept allureing, not alluring. This reflects a distinction between morphology — the question of which morphemes to use, and in what order — and orthography — the question of how the morphemes are rendered in written language. Just as orthography requires dropping the e preceding the -ing suffix, phonology imposes a related set of constraints on how words are rendered in speech. As we will see soon, these issues can be handled by finite state transducers, which are finite state automata that take inputs and produce outputs.

9.1.3 Weighted finite state acceptors

According to the FSA treatment of morphology, every word is either in or out of the language, with no wiggle room. Perhaps you agree that musicky and fishful are not valid English words; but if forced to choose, you probably find a fishful stew or a musicky tribute preferable to behaving disgracelyful. Rather than asking whether a word is acceptable, we might like to ask how acceptable it is. Aronoff (1976, page 36) puts it another way:
Figure 9.3: Minimization of the finite state acceptor shown in Figure 9.2.

“Though many things are possible in morphology, some are more possible than others.”

But finite state acceptors give no way to express preferences among technically valid choices. Weighted finite state acceptors (WFSAs) are generalizations of FSAs, in which each accepting path is assigned a score, computed from the transitions, the initial state, and the final state. Formally, a weighted finite state acceptor M = (Q, Σ, λ, ρ, δ) consists of:

• a finite set of states Q = {q0, q1, . . . , qn};
• a finite alphabet Σ of input symbols;
• an initial weight function, λ : Q → R;
• a final weight function ρ : Q → R;
• a transition function δ : Q × Σ × Q → R.

WFSAs depart from the FSA formalism in three ways: every state can be an initial state, with score λ(q); every state can be an accepting state, with score ρ(q); transitions are possible between any pair of states on any input, with a score δ(qi, ω, qj). Nonetheless, FSAs can be viewed as a special case: for any FSA M we can build an equivalent WFSA by setting λ(q) = −∞ for all q ≠ q0, ρ(q) = −∞ for all q ∉ F, and δ(qi, ω, qj) = −∞ for all transitions (qi, ω) → qj that are not permitted by the transition function of M. The total score for any path π = t1, t2, . . . , tN is equal to the sum of these scores,

d(π) = λ(from-state(t1)) + ∑_{n=1}^{N} δ(tn) + ρ(to-state(tN)). [9.5]

A shortest-path algorithm is used to find the minimum-cost path through a WFSA for string ω, with time complexity O(E + V log V), where E is the number of edges and V is the number of vertices (Cormen et al., 2009).2

2Shortest-path algorithms find the path with the minimum cost. In many cases, the path weights are log probabilities, so we want the path with the maximum score, which can be accomplished by making each local score into a negative log-probability.
N-gram language models as WFSAs

In n-gram language models (see § 6.1), the probability of a sequence of tokens w1, w2, . . . , wM is modeled as,

p(w1, . . . , wM) ≈ ∏_{m=1}^{M} pn(wm | wm−1, . . . , wm−n+1). [9.6]

The log probability under an n-gram language model can be modeled in a WFSA. First consider a unigram language model. We need only a single state q0, with transition scores δ(q0, ω, q0) = log p1(ω). The initial and final scores can be set to zero. Then the path score for w1, w2, . . . , wM is equal to,

0 + ∑_{m=1}^{M} δ(q0, wm, q0) + 0 = ∑_{m=1}^{M} log p1(wm). [9.7]

For an n-gram language model with n > 1, we need probabilities that condition on the past history. For example, in a bigram language model, the transition weights must represent log p2(wm | wm−1). The transition scoring function must somehow “remember” the previous word or words. This can be done by adding more states: to model the bigram probability p2(wm | wm−1), we need a state for every possible wm−1 — a total of V states. The construction indexes each state qi by a context event wm−1 = i. The weights are then assigned as follows:

δ(qi, ω, qj) = { log Pr(wm = j | wm−1 = i), if ω = j;  −∞, if ω ≠ j }
λ(qi) = log Pr(w1 = i | w0 = □)
ρ(qi) = log Pr(wM+1 = ■ | wM = i).

The transition function is designed to ensure that the context is recorded accurately: we can move to state j on input ω only if ω = j; otherwise, transitioning to state j is forbidden by the weight of −∞. The initial weight function λ(qi) is the log probability of receiving i as the first token, and the final weight function ρ(qi) is the log probability of receiving an “end-of-string” token after observing wM = i.

*Semiring weighted finite state acceptors

The n-gram language model WFSA is deterministic: each input has exactly one accepting path, for which the WFSA computes a score.
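This deterministic path score can be checked with a small bigram WFSA in which each state remembers the previous word. The toy probabilities and the start/end symbol names below are invented for illustration:

```python
import math

# Toy bigram WFSA over the vocabulary {a, b}: one state per previous word.
# All probabilities here are invented for illustration.
START, END = "<s>", "</s>"
bigram = {
    (START, "a"): 0.7, (START, "b"): 0.3,
    ("a", "a"): 0.2, ("a", "b"): 0.5, ("a", END): 0.3,
    ("b", "a"): 0.4, ("b", "b"): 0.4, ("b", END): 0.2,
}

def path_score(words):
    """Total log-probability: lambda (initial) + sum of delta + rho (final)."""
    score = math.log(bigram[(START, words[0])])   # initial weight lambda
    for prev, nxt in zip(words, words[1:]):       # transition weights delta
        score += math.log(bigram[(prev, nxt)])
    score += math.log(bigram[(words[-1], END)])   # final weight rho
    return score

print(round(path_score(["a", "b", "b"]), 3))  # -3.576 = log(0.7 * 0.5 * 0.4 * 0.2)
```

Because each word determines its successor state uniquely, there is exactly one accepting path, and its score is the sequence's log-probability.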
In non-deterministic WFSAs, a given input
may have multiple accepting paths. In some applications, the score for the input is aggregated across all such paths. Such aggregate scores can be computed by generalizing WFSAs with semiring notation, first introduced in § 7.7.3. Let d(π) represent the total score for path π = t1, t2, . . . , tN, which is computed as,

d(π) = λ(from-state(t1)) ⊗ δ(t1) ⊗ δ(t2) ⊗ . . . ⊗ δ(tN) ⊗ ρ(to-state(tN)). [9.8]

This is a generalization of Equation 9.5 to semiring notation, using the semiring multiplication operator ⊗ in place of addition. Now let s(ω) represent the total score for all paths Π(ω) that consume input ω,

s(ω) = ⊕_{π ∈ Π(ω)} d(π). [9.9]

Here, semiring addition (⊕) is used to combine the scores of multiple paths. The generalization to semirings covers a number of useful special cases. In the log-probability semiring, multiplication is defined as log p(x) ⊗ log p(y) = log p(x) + log p(y), and addition is defined as log p(x) ⊕ log p(y) = log(p(x) + p(y)). Thus, s(ω) represents the log-probability of accepting input ω, marginalizing over all paths π ∈ Π(ω). In the boolean semiring, the ⊗ operator is logical conjunction, and the ⊕ operator is logical disjunction. This reduces to the special case of unweighted finite state acceptors, where the score s(ω) is a boolean indicating whether there exists any accepting path for ω. In the tropical semiring, the ⊕ operator is a maximum, so the resulting score is the score of the best-scoring path through the WFSA. The OPENFST toolkit uses semirings and polymorphism to implement general algorithms for weighted finite state automata (Allauzen et al., 2007).

*Interpolated n-gram language models

Recall from § 6.2.3 that an interpolated n-gram language model combines the probabilities from multiple n-gram models.
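The path aggregation in Equation 9.9 can be sketched by passing different ⊗ and ⊕ operators to one generic routine. The two paths and their arc weights below are invented for illustration:

```python
# Aggregate path scores s(w) = ⊕ over paths of ⊗ over arcs, for a toy
# non-deterministic acceptor with two accepting paths for one input.
paths = [
    [0.5, 0.2],   # path 1: arc weights (probabilities)
    [0.1, 0.6],   # path 2
]

def score(paths, times, plus, one, zero):
    total = zero                  # identity for semiring addition
    for path in paths:
        d = one                   # identity for semiring multiplication
        for w in path:
            d = times(d, w)
        total = plus(total, d)
    return total

# Probability semiring: ⊗ = ×, ⊕ = + gives the total probability over paths.
prob = score(paths, lambda x, y: x * y, lambda x, y: x + y, 1.0, 0.0)
# Max in place of ⊕ (tropical-style) gives the best single path's score.
best = score(paths, lambda x, y: x * y, max, 1.0, 0.0)
print(round(prob, 2))  # 0.16 = 0.5*0.2 + 0.1*0.6
print(round(best, 2))  # 0.1  = max(0.10, 0.06)
```

A non-deterministic WFSA where this aggregation matters is the interpolated bigram model discussed next.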
For example, an interpolated bigram language model computes the probability,

p̂(wm | wm−1) = λ1 p1(wm) + λ2 p2(wm | wm−1), [9.10]

with p̂ indicating the interpolated probability, p2 indicating the bigram probability, and p1 indicating the unigram probability. Setting λ2 = (1 − λ1) ensures that the probabilities sum to one. Interpolated bigram language models can be implemented using a non-deterministic WFSA (Knight and May, 2009). The basic idea is shown in Figure 9.4. In an interpolated bigram language model, there is one state for each element in the vocabulary — in this
case, the states qA and qB — which capture the contextual conditioning in the bigram probabilities. To model unigram probabilities, there is an additional state qU, which “forgets” the context. Transitions out of qU involve unigram probabilities, p1(a) and p1(b); transitions into qU emit the empty symbol ϵ, and have probability λ1, reflecting the interpolation weight for the unigram model. The interpolation weight for the bigram model is included in the weight of the transition qA → qB. The epsilon transitions into qU make this WFSA non-deterministic.

Figure 9.4: WFSA implementing an interpolated bigram/unigram language model, on the alphabet Σ = {a, b}. For simplicity, the WFSA is constrained to force the first token to be generated from the unigram model, and does not model the emission of the end-of-sequence token.

Consider the score for the sequence (a, b, b). The initial state is qU, so the symbol a is generated with score p1(a).3 Next, we can generate b from the bigram model by taking the transition qA → qB, with score λ2 p2(b | a). Alternatively, we can take a transition back to qU with score λ1, and then emit b from the unigram model with score p1(b). To generate the final b token, we face the same choice: emit it directly via the self-transition at qB, or transition to qU first. The total score for the sequence (a, b, b) is the semiring sum over all accepting paths,

s(a, b, b) = p1(a) ⊗ (λ2 p2(b | a) ⊕ λ1 ⊗ p1(b)) ⊗ (λ2 p2(b | b) ⊕ λ1 ⊗ p1(b)). [9.11]
In the probability semiring, the ⊗ operator is multiplication, so that the score for each path is the product of the transition weights, which are themselves probabilities. The ⊕ operator is addition, so that the total score is the sum of the scores (probabilities) for each path. This corresponds to the probability under the interpolated bigram language model.

9.1.4 Finite state transducers

Finite state acceptors can determine whether a string is in a regular language, and weighted finite state acceptors can compute a score for every string over a given alphabet. Finite state transducers (FSTs) extend the formalism further, by adding an output symbol to each transition. Formally, a finite state transducer is a tuple T = (Q, Σ, Ω, λ, ρ, δ), with Ω representing an output vocabulary and the transition function δ : Q × (Σ ∪ ϵ) × (Ω ∪ ϵ) × Q → R assigning a weight to each transition, defined by a state, an input symbol, an output symbol, and a successor state. The remaining elements (Q, Σ, λ, ρ) are identical to their definition in weighted finite state acceptors (§ 9.1.3). Thus, each path through the FST T transduces the input string into an output.

String edit distance

The edit distance between two strings s and t is a measure of how many operations are required to transform one string into another. There are several ways to compute edit distance, but one of the most popular is the Levenshtein edit distance, which counts the minimum number of insertions, deletions, and substitutions. This can be computed by a one-state weighted finite state transducer, in which the input and output alphabets are identical. For simplicity, consider the alphabet Σ = Ω = {a, b}. The edit distance can be computed by a one-state transducer with the following transitions,

δ(q, a, a, q) = δ(q, b, b, q) = 0 [9.12]
δ(q, a, b, q) = δ(q, b, a, q) = 1 [9.13]
δ(q, a, ϵ, q) = δ(q, b, ϵ, q) = 1 [9.14]
δ(q, ϵ, a, q) = δ(q, ϵ, b, q) = 1. [9.15]

The state diagram is shown in Figure 9.5.
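The minimum-cost path through this one-state transducer is exactly what the classic Levenshtein dynamic program computes; a minimal sketch:

```python
# Minimum-cost path through the one-state edit-distance transducer,
# computed by the standard Levenshtein dynamic program.
def levenshtein(s, t):
    # dist[i][j] = minimum cost of transducing s[:i] into t[:j]
    dist = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i in range(len(s) + 1):
        dist[i][0] = i                            # deletions: x/epsilon arcs
    for j in range(len(t) + 1):
        dist[0][j] = j                            # insertions: epsilon/y arcs
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1  # x/x costs 0, x/y costs 1
            dist[i][j] = min(dist[i - 1][j] + 1,
                             dist[i][j - 1] + 1,
                             dist[i - 1][j - 1] + sub)
    return dist[len(s)][len(t)]

print(levenshtein("dessert", "desert"))  # 1: the single-deletion path
```

Each cell of the table corresponds to the cheapest prefix-to-prefix transduction, so the bottom-right cell is the score of the best path through the transducer.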
For a given string pair, there are multiple paths through the transducer: the best-scoring path from dessert to desert involves a single deletion, for a total score of 1; the worst-scoring path involves seven deletions and six additions, for a score of 13.

The Porter stemmer

The Porter (1980) stemming algorithm is a “lexicon-free” algorithm for stripping suffixes from English words, using a sequence of character-level rules. Each rule can be described
Figure 9.5: State diagram for the Levenshtein edit distance finite state transducer. The label x/y : c indicates a cost of c for a transition with input x and output y.

by an unweighted finite state transducer. The first rule is:

-sses → -ss   e.g., dresses → dress [9.16]
-ies → -i     e.g., parties → parti [9.17]
-ss → -ss     e.g., dress → dress [9.18]
-s → ϵ        e.g., cats → cat [9.19]

The final two lines appear to conflict; they are meant to be interpreted as an instruction to remove a terminal -s unless it is part of an -ss ending. A state diagram to handle just these final two lines is shown in Figure 9.6. Make sure you understand how this finite state transducer handles cats, steps, bass, and basses.

Inflectional morphology

In inflectional morphology, word lemmas are modified to add grammatical information such as tense, number, and case. For example, many English nouns are pluralized by the suffix -s, and many verbs are converted to past tense by the suffix -ed. English’s inflectional morphology is considerably simpler than many of the world’s languages. For example, Romance languages (derived from Latin) feature complex systems of verb suffixes which must agree with the person and number of the verb, as shown in Table 9.1. The task of morphological analysis is to read a form like canto, and output an analysis like CANTAR+VERB+PRESIND+1P+SING, where +PRESIND describes the tense as present indicative, +1P indicates the first-person, and +SING indicates the singular number. The task of morphological generation is the reverse, going from CANTAR+VERB+PRESIND+1P+SING to canto. Finite state transducers are an attractive solution, because they can solve both problems with a single model (Beesley and Karttunen, 2003). As an example, Figure 9.7 shows a fragment of a finite state transducer for Spanish inflectional morphology. The
Figure 9.6: State diagram for final two lines of step 1a of the Porter stemming diagram. States q3 and q4 “remember” the observations a and b respectively; the ellipsis . . . represents additional states for each symbol in the input alphabet. The notation ¬s/¬s is not part of the FST formalism; it is a shorthand to indicate a set of self-transition arcs for every input/output symbol except s.

infinitive: cantar (to sing), comer (to eat), vivir (to live)
yo (1st singular): canto, como, vivo
tu (2nd singular): cantas, comes, vives
él, ella, usted (3rd singular): canta, come, vive
nosotros (1st plural): cantamos, comemos, vivimos
vosotros (2nd plural, informal): cantáis, coméis, vivís
ellos, ellas (3rd plural); ustedes (2nd plural): cantan, comen, viven

Table 9.1: Spanish verb inflections for the present indicative tense. Each row represents a person and number, and each column is a regular example from a class of verbs, as indicated by the ending of the infinitive form.

input vocabulary Σ corresponds to the set of letters used in Spanish spelling, and the output vocabulary Ω corresponds to these same letters, plus the vocabulary of morphological features (e.g., +SING, +VERB). In Figure 9.7, there are two paths that take canto as input, corresponding to the verb and noun meanings; the choice between these paths could be guided by a part-of-speech tagger. By inversion, the inputs and outputs for each transition are switched, resulting in a finite state generator, capable of producing the correct surface form for any morphological analysis.

Finite state morphological analyzers and other unweighted transducers can be designed by hand. The designer’s goal is to avoid overgeneration — accepting strings or making transductions that are not valid in the language — as well as undergeneration
Figure 9.7: Fragment of a finite state transducer for Spanish morphology. There are two accepting paths for the input canto: canto+NOUN+MASC+SING (masculine singular noun, meaning a song), and cantar+VERB+PRESIND+1P+SING (I sing). There is also an accepting path for canta, with output cantar+VERB+PRESIND+3P+SING (he/she sings).

— failing to accept strings or transductions that are valid. For example, a pluralization transducer that does not accept foot/feet would undergenerate. Suppose we “fix” the transducer to accept this example, but as a side effect, it now accepts boot/beet; the transducer would then be said to overgenerate. If a transducer accepts foot/foots but not foot/feet, then it simultaneously overgenerates and undergenerates.

Finite state composition

Designing finite state transducers to capture the full range of morphological phenomena in any real language is a huge task. Modularization is a classic computer science approach for this situation: decompose a large and unwieldy problem into a set of subproblems, each of which will hopefully have a concise solution. Finite state automata can be modularized through composition: feeding the output of one transducer T1 as the input to another transducer T2, written T2 ◦ T1. Formally, if there exists some y such that (x, y) ∈ T1 (meaning that T1 produces output y on input x), and (y, z) ∈ T2, then (x, z) ∈ (T2 ◦ T1). Because finite state transducers are closed under composition, there is guaranteed to be a single finite state transducer T3 = T2 ◦ T1, which can be constructed as a machine with one state for each pair of states in T1 and T2 (Mohri et al., 2002).

Example: Morphology and orthography

In English morphology, the suffix -ed is added to signal the past tense for many verbs: cook→cooked, want→wanted, etc.
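The relational definition of composition can be sketched directly for transducers given extensionally as finite sets of string pairs, in the spirit of the morphology/orthography example that follows; the specific pairs are assumptions:

```python
# Composition of transducers represented extensionally as finite relations:
# (x, z) is in T2 ∘ T1 iff some y has (x, y) in T1 and (y, z) in T2.
def compose(t2, t1):
    return {(x, z) for (x, y1) in t1 for (y2, z) in t2 if y1 == y2}

# Illustrative morphology and orthography relations (invented pairs).
t_m = {("bake+PAST", "bake+ed"), ("cook+PAST", "cook+ed")}   # morphology
t_o = {("bake+ed", "baked"), ("cook+ed", "cooked")}          # orthography

print(sorted(compose(t_o, t_m)))
# [('bake+PAST', 'baked'), ('cook+PAST', 'cooked')]
```

Real finite state toolkits compose the machines rather than the relations, pairing states of T1 and T2, but the input/output behavior is the same.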
However, English orthography dictates that this process cannot produce a spelling with consecutive e’s, so that bake→baked, not bakeed. A modular solution is to build separate transducers for morphology and orthography. The morphological transducer TM transduces from bake+PAST to bake+ed, with the + symbol indicating a segment boundary. The input alphabet of TM includes the lexicon of words and the set of morphological features; the output alphabet includes the characters a-z and the + boundary marker. Next, an orthographic transducer TO is responsible for the transductions cook+ed → cooked, and bake+ed → baked. The input alphabet of TO must be the same as the output alphabet for TM, and the output alphabet
is simply the characters a-z. The composed transducer (TO ◦ TM) then transduces from bake+PAST to the spelling baked. The design of TO is left as an exercise.

Example: Hidden Markov models

Hidden Markov models (chapter 7) can be viewed as weighted finite state transducers, and they can be constructed by transduction. Recall that a hidden Markov model defines a joint probability over words and tags, p(w, y), which can be computed as a path through a trellis structure. This trellis is itself a weighted finite state acceptor, with edges between all adjacent nodes qm−1,i → qm,j on input Ym = j. The edge weights are log-probabilities,

δ(qm−1,i, Ym = j, qm,j) = log p(wm, Ym = j | Ym−1 = i) [9.20]
                        = log p(wm | Ym = j) + log Pr(Ym = j | Ym−1 = i). [9.21]

Because there is only one possible transition for each tag Ym, this WFSA is deterministic. The score for any tag sequence y1, . . . , yM is the sum of these log-probabilities, corresponding to the total log probability log p(w, y). Furthermore, the trellis can be constructed by the composition of simpler FSTs.

• First, construct a “transition” transducer to represent a bigram probability model over tag sequences, TT. This transducer is almost identical to the n-gram language model acceptor in § 9.1.3: there is one state for each tag, and the edge weights are equal to the transition log-probabilities, δ(qi, j, j, qj) = log Pr(Ym = j | Ym−1 = i). Note that TT is a transducer, with identical input and output at each arc; this makes it possible to compose TT with other transducers.

• Next, construct an “emission” transducer to represent the probability of words given tags, TE. This transducer has only a single state, with arcs for each word/tag pair, δ(q0, i, j, q0) = log Pr(Wm = j | Ym = i). The input vocabulary is the set of all tags, and the output vocabulary is the set of all words.

• The composition TE ◦ TT is a finite state transducer with one state per tag, as shown in Figure 9.8.
Each state has V × K outgoing edges, representing transitions to each of the K other states, with outputs for each of the V words in the vocabulary. The weights for these edges are equal to,

δ(qi, Ym = j, wm, qj) = log p(wm, Ym = j | Ym−1 = i). [9.22]

• The trellis is a structure with M × K nodes, for each of the M words to be tagged and each of the K tags in the tagset. It can be built by composition of (TE ◦ TT) against an unweighted chain FSA MA(w) that is specially constructed to accept only a given input w1, w2, . . . , wM, shown in Figure 9.9. The trellis for input w is built from the composition MA(w) ◦ (TE ◦ TT). Composing with the unweighted MA(w) does not affect the edge weights from (TE ◦ TT), but it selects the subset of paths that generate the word sequence w.
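The score of a single tag path through the trellis can be checked numerically: each edge contributes log p(wm, Ym = j | Ym−1 = i). The tiny noun/verb model below is invented for illustration, and the end-of-sequence weight is ignored for simplicity:

```python
import math

# Toy HMM with tags {N, V}. All probabilities are invented for illustration.
trans = {("<s>", "N"): 0.8, ("<s>", "V"): 0.2,
         ("N", "N"): 0.3, ("N", "V"): 0.7,
         ("V", "N"): 0.6, ("V", "V"): 0.4}
emit = {("N", "they"): 0.5, ("V", "they"): 0.1,
        ("N", "can"): 0.2, ("V", "can"): 0.5,
        ("N", "fish"): 0.3, ("V", "fish"): 0.4}

def path_log_prob(words, tags):
    """Sum of edge weights log p(w_m | y_m) + log Pr(y_m | y_{m-1})."""
    score, prev = 0.0, "<s>"
    for w, y in zip(words, tags):
        score += math.log(trans[(prev, y)]) + math.log(emit[(y, w)])
        prev = y
    return score

print(round(path_log_prob(["they", "can", "fish"], ["N", "V", "V"]), 3))  # -3.799
```

Maximizing this score over all K^M tag paths (with the Viterbi algorithm) recovers the best path through the composed trellis.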
Figure 9.8: Finite state transducer for hidden Markov models, with a small tagset of nouns and verbs. For each pair of tags (including self-loops), there is an edge for every word in the vocabulary. For simplicity, input and output are only shown for the edges from the start state. Weights are also omitted from the diagram; for each edge from qi to qj, the weight is equal to log p(wm, Ym = j | Ym−1 = i), except for edges to the end state, which are equal to log Pr(Ym = ♦ | Ym−1 = i).

Figure 9.9: Chain finite state acceptor for the input They can fish.

9.1.5 *Learning weighted finite state automata

In generative models such as n-gram language models and hidden Markov models, the edge weights correspond to log probabilities, which can be obtained from relative frequency estimation. However, in other cases, we wish to learn the edge weights from input/output pairs. This is difficult in non-deterministic finite state automata, because we do not observe the specific arcs that are traversed in accepting the input, or in transducing from input to output. The path through the automaton is a latent variable.

Chapter 5 presented one method for learning with latent variables: expectation maximization (EM). This involves computing a distribution q(·) over the latent variable, and iterating between updates to this distribution and updates to the parameters — in this case, the arc weights. The forward-backward algorithm (§ 7.5.3) describes a dynamic program for computing a distribution over arcs in the trellis structure of a hidden Markov
model, but this is a special case of the more general problem for finite state automata. Eisner (2002) describes an expectation semiring, which enables the expected number of transitions across each arc to be computed through a semiring shortest-path algorithm. Alternative approaches for generative models include Markov Chain Monte Carlo (Chiang et al., 2010) and spectral learning (Balle et al., 2011).

Further afield, we can take a perceptron-style approach, with each arc corresponding to a feature. The classic perceptron update would update the weights by subtracting the difference between the feature vector corresponding to the predicted path and the feature vector corresponding to the correct path. Since the path is not observed, we resort to a latent variable perceptron. The model is described formally in § 12.4, but the basic idea is to compute an update from the difference between the features from the predicted path and the features for the best-scoring path that generates the correct output.

9.2 Context-free languages

Beyond the class of regular languages lie the context-free languages. An example of a language that is context-free but not finite state is the set of arithmetic expressions with balanced parentheses. Intuitively, to accept only strings in this language, an FSA would have to “count” the number of left parentheses, and make sure that they are balanced against the number of right parentheses. An arithmetic expression can be arbitrarily long, yet by definition an FSA has a finite number of states. Thus, for any FSA, there will be a string with too many parentheses to count. More formally, the pumping lemma is a proof technique for showing that languages are not regular. It is typically demonstrated for the simpler case a^n b^n, the language of strings containing a sequence of a’s, and then an equal-length sequence of b’s.4

There are at least two arguments for the relevance of non-regular formal languages to linguistics. First, there are natural language phenomena that are argued to be isomorphic to a^n b^n. For English, the classic example is center embedding, shown in Figure 9.10. The initial expression the dog specifies a single dog. Embedding this expression into the cat chased specifies a particular cat — the one chased by the dog. This cat can then be embedded again to specify a goat, in the less felicitous but arguably grammatical expression, the goat the cat the dog chased kissed, which refers to the goat who was kissed by the cat which was chased by the dog. Chomsky (1957) argues that to be grammatical, a center-embedded construction must be balanced: if it contains n noun phrases (e.g., the cat), they must be followed by exactly n − 1 verbs. An FSA that could recognize such expressions would also be capable of recognizing the language a^n b^n. Because we can prove that no FSA exists for a^n b^n, no FSA can exist for center-embedded constructions either.

4Details of the proof can be found in an introductory computer science theory textbook (e.g., Sipser, 2012).
the dog
the cat the dog chased
the goat the cat the dog chased kissed

Figure 9.10: Three levels of center embedding

English includes center embedding, and so the argument goes, English grammar as a whole cannot be regular.5

5The claim that arbitrarily deep center-embedded expressions are grammatical has drawn skepticism. Corpus evidence shows that embeddings of depth greater than two are exceedingly rare (Karlsson, 2007), and that embeddings of depth greater than three are completely unattested. If center-embedding is capped at some finite depth, then it is regular.

A more practical argument for moving beyond regular languages is modularity. Many linguistic phenomena — especially in syntax — involve constraints that apply at long distance. Consider the problem of determiner-noun number agreement in English: we can say the coffee and these coffees, but not *these coffee. By itself, this is easy enough to model in an FSA. However, fairly complex modifying expressions can be inserted between the determiner and the noun:

(9.2) a. the burnt coffee
      b. the badly-ground coffee
      c. the burnt and badly-ground Italian coffee
      d. these burnt and badly-ground Italian coffees
      e. * these burnt and badly-ground Italian coffee

Again, an FSA can be designed to accept modifying expressions such as burnt and badly-ground Italian. Let’s call this FSA FM. To reject the final example, a finite state acceptor must somehow “remember” that the determiner was plural when it reaches the noun coffee at the end of the expression. The only way to do this is to make two identical copies of FM: one for singular determiners, and one for plurals. While this is possible in the finite state framework, it is inconvenient — especially in languages where more than one attribute of the noun is marked by the determiner. Context-free languages facilitate modularity across such long-range dependencies.

9.2.1 Context-free grammars

Context-free languages are specified by context-free grammars (CFGs), which are tuples (N, Σ, R, S) consisting of:
S → S OP S | NUM
OP → + | − | × | ÷
NUM → NUM DIGIT | DIGIT
DIGIT → 0 | 1 | 2 | . . . | 9

Figure 9.11: A context-free grammar for arithmetic expressions

• a finite set of non-terminals N;
• a finite alphabet Σ of terminal symbols;
• a set of production rules R, each of the form A → β, where A ∈ N and β ∈ (Σ ∪ N)∗;
• a designated start symbol S.

In the production rule A → β, the left-hand side (LHS) A must be a non-terminal; the right-hand side (RHS) can be a sequence of terminals or non-terminals, {n, σ}∗, n ∈ N, σ ∈ Σ. A non-terminal can appear on the left-hand side of many production rules. A non-terminal can appear on both the left-hand side and the right-hand side; this is a recursive production, and is analogous to self-loops in finite state automata. The name “context-free” is based on the property that the production rule depends only on the LHS, and not on its ancestors or neighbors; this is analogous to the Markov property of finite state automata, in which the behavior at each step depends only on the current state, and not on the path by which that state was reached.

A derivation τ is a sequence of steps from the start symbol S to a surface string w ∈ Σ∗, which is the yield of the derivation. A string w is in a context-free language if there is some derivation from S yielding w. Parsing is the problem of finding a derivation for a string in a grammar. Algorithms for parsing are described in chapter 10.

Like regular expressions, context-free grammars define the language but not the computation necessary to recognize it. The context-free analogues to finite state acceptors are pushdown automata, a theoretical model of computation in which input symbols can be pushed onto a stack with potentially infinite depth. For more details, see Sipser (2012).

Example Figure 9.11 shows a context-free grammar for arithmetic expressions such as 1 + 2 ÷ 3 − 4.
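Membership in this particular language is easy to check without full parsing: every yield of S is an alternating sequence NUM (OP NUM)*, so a recognizer can simply scan for that pattern. A minimal sketch, using ASCII + - * / in place of the grammar's operator symbols:

```python
# Recognizer for the arithmetic language of Figure 9.11. Since
# S -> S OP S | NUM, the yields are exactly NUM (OP NUM)*, which a
# left-to-right scan can check. ASCII operators stand in for +, -, ×, ÷.
OPS = set("+-*/")

def in_language(s):
    tokens, n = list(s), len(s)
    i = 0
    def num():                    # NUM -> NUM DIGIT | DIGIT: one or more digits
        nonlocal i
        if i < n and tokens[i].isdigit():
            while i < n and tokens[i].isdigit():
                i += 1
            return True
        return False
    if not num():
        return False
    while i < n:                  # zero or more (OP NUM) continuations
        if tokens[i] not in OPS:
            return False
        i += 1
        if not num():
            return False
    return True

print(in_language("1+2/3-4"))   # True
print(in_language("12+345"))    # True: NUM covers multi-digit numbers
print(in_language("1++2"))      # False
```

Note that the recognizer says nothing about which derivation produced the string; that is the parsing problem, and for this grammar a single string can have many derivations.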
In this grammar, the terminal symbols include the digits {0, 1, 2, . . . , 9} and the operators {+, −, ×, ÷}. The rules include the | symbol, a notational convenience that makes it possible to specify multiple right-hand sides on a single line: the statement A → x | y Under contract with MIT Press, shared under CC-BY-NC-ND license.
210 CHAPTER 9. FORMAL LANGUAGE THEORY

defines two productions, A → x and A → y. This grammar is recursive: the non-terminals S and NUM can produce themselves.

[Figure 9.12: Some example derivations from the arithmetic grammar in Figure 9.11]

Derivations are typically shown as trees, with production rules applied from the top to the bottom. The tree on the left in Figure 9.12 describes the derivation of a single digit, through the sequence of productions S → NUM → DIGIT → 4 (these are all unary productions, because the right-hand side contains a single element). The other two trees in Figure 9.12 show alternative derivations of the string 1 + 2 − 3. The existence of multiple derivations for a string indicates that the grammar is ambiguous.

Context-free derivations can also be written out according to the pre-order tree traversal.6 For the two derivations of 1 + 2 − 3 in Figure 9.12, the notation is:

(S (S (S (Num (Digit 1))) (Op +) (S (Num (Digit 2)))) (Op −) (S (Num (Digit 3)))) [9.23]
(S (S (Num (Digit 1))) (Op +) (S (Num (Digit 2)) (Op −) (S (Num (Digit 3))))). [9.24]

Grammar equivalence and Chomsky Normal Form A single context-free language can be expressed by more than one context-free grammar. For example, the following two grammars both define the language aⁿbⁿ for n > 0.

S → aSb | ab
S → aSb | aabb | ab

Two grammars are weakly equivalent if they generate the same strings. Two grammars are strongly equivalent if they generate the same strings via the same derivations. The grammars above are only weakly equivalent.

6This is a depth-first left-to-right search that prints each node the first time it is encountered (Cormen et al., 2009, chapter 12).
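The mechanics of derivation can be illustrated with a short sketch that randomly expands the arithmetic grammar of Figure 9.11. The dictionary encoding and the function name are invented for illustration, and the ASCII operators `-`, `*`, `/` stand in for −, ×, ÷:

```python
import random

# The arithmetic grammar of Figure 9.11, encoded as a dict mapping each
# non-terminal to its list of right-hand sides (illustrative encoding).
GRAMMAR = {
    "S": [["S", "OP", "S"], ["NUM"]],
    "OP": [["+"], ["-"], ["*"], ["/"]],
    "NUM": [["NUM", "DIGIT"], ["DIGIT"]],
    "DIGIT": [[d] for d in "0123456789"],
}

def generate(symbol="S", max_depth=8):
    """Expand `symbol` by repeatedly applying production rules.
    Once max_depth is exhausted, prefer non-recursive productions so
    the derivation is guaranteed to terminate."""
    if symbol not in GRAMMAR:  # terminal symbol: emit it
        return symbol
    rules = GRAMMAR[symbol]
    if max_depth <= 0:
        # drop productions that mention `symbol` itself, if possible
        rules = [r for r in rules if symbol not in r] or rules
    rhs = random.choice(rules)
    return "".join(generate(s, max_depth - 1) for s in rhs)

print(generate())  # a random string in the language, e.g. "1+2-3"
```

Every yield is a well-formed infix expression: runs of digits separated by single operators, a direct consequence of the productions S → S OP S | NUM and NUM → NUM DIGIT | DIGIT.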
In Chomsky Normal Form (CNF), the right-hand side of every production includes either two non-terminals, or a single terminal symbol:

A → BC
A → a

All CFGs can be converted into a weakly equivalent grammar in CNF. To convert a grammar into CNF, we first address productions that have more than two non-terminals on the RHS by creating new "dummy" non-terminals. For example, if we have the production,

W → X Y Z, [9.25]

it is replaced with two productions,

W → X W\X [9.26]
W\X → Y Z. [9.27]

In these productions, W\X is a new dummy non-terminal. This transformation binarizes the grammar, which is critical for efficient bottom-up parsing, as we will see in chapter 10. Productions whose right-hand side contains a mix of terminal and non-terminal symbols can be replaced in a similar fashion. Unary non-terminal productions A → B are replaced as follows: for each production B → α in the grammar, add a new production A → α. For example, in the grammar described in Figure 9.11, we would replace NUM → DIGIT with NUM → 0 | 1 | . . . | 9. However, we keep the production NUM → NUM DIGIT, which is a valid binary production.

9.2.2 Natural language syntax as a context-free language

Context-free grammars can be used to represent syntax, which is the set of rules that determine whether an utterance is judged to be grammatical. If this representation were perfectly faithful, then a natural language such as English could be transformed into a formal language, consisting of exactly the (infinite) set of strings that would be judged to be grammatical by a fluent English speaker. We could then build parsing software that would automatically determine if a given utterance were grammatical.7 Contemporary theories generally do not consider natural languages to be context-free (see § 9.3), yet context-free grammars are widely used in natural language parsing.
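The binarization step of the CNF transformation (Eqs. 9.25–9.27) can be sketched in a few lines. The rule representation and the function name here are my own, not from the text:

```python
def binarize(rules):
    """Replace each production A -> B C D ... (RHS longer than 2) with a
    chain of binary productions, introducing dummy non-terminals named
    like 'A\\B', following the transformation in Eqs. 9.25-9.27."""
    out = []
    for lhs, rhs in rules:
        while len(rhs) > 2:
            dummy = lhs + "\\" + rhs[0]          # e.g. W\X
            out.append((lhs, [rhs[0], dummy]))   # W -> X W\X
            lhs, rhs = dummy, rhs[1:]            # continue with W\X -> Y Z ...
        out.append((lhs, rhs))
    return out

print(binarize([("W", ["X", "Y", "Z"])]))
# [('W', ['X', 'W\\X']), ('W\\X', ['Y', 'Z'])]
```

Already-binary and unary-terminal productions pass through unchanged, matching the observation that NUM → NUM DIGIT needs no rewriting.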
The reason is that context-free representations strike a good balance: they cover a broad range of syntactic phenomena, and they can be parsed efficiently. This section therefore describes how to handle a core fragment of English syntax in context-free form, following

7To move beyond this cursory treatment of syntax, consult the short introductory manuscript by Bender (2013), or the longer text by Akmajian et al. (2010).
212 CHAPTER 9. FORMAL LANGUAGE THEORY the conventions of the Penn Treebank (PTB; Marcus et al., 1993), a large-scale annotation of English language syntax. The generalization to “mildly” context-sensitive languages is discussed in § 9.3. The Penn Treebank annotation is a phrase-structure grammar of English. This means that sentences are broken down into constituents, which are contiguous sequences of words that function as coherent units for the purpose of linguistic analysis. Constituents generally have a few key properties: Movement. Constituents can often be moved around sentences as units. (9.3) a. Abigail gave (her brother) (a fish). b. Abigail gave (a fish) to (her brother). In contrast, gave her and brother a cannot easily be moved while preserving gram- maticality. Substitution. Constituents can be substituted by other phrases of the same type. (9.4) a. Max thanked (his older sister). b. Max thanked (her). In contrast, substitution is not possible for other contiguous units like Max thanked and thanked his. Coordination. Coordinators like and and or can conjoin constituents. (9.5) a. (Abigail) and (her younger brother) bought a fish. b. Abigail (bought a fish) and (gave it to Max). c. Abigail (bought) and (greedily ate) a fish. Units like brother bought and bought a cannot easily be coordinated. These examples argue for units such as her brother and bought a fish to be treated as con- stituents. Other sequences of words in these examples, such as Abigail gave and brother a fish, cannot be moved, substituted, and coordinated in these ways. In phrase-structure grammar, constituents are nested, so that the senator from New Jersey contains the con- stituent from New Jersey, which in turn contains New Jersey. The sentence itself is the max- imal constituent; each word is a minimal constituent, derived from a unary production from a part-of-speech tag. Between part-of-speech tags and sentences are phrases. 
In phrase-structure grammar, phrases have a type that is usually determined by their head word: for example, a noun phrase corresponds to a noun and the group of words that
9.2. CONTEXT-FREE LANGUAGES 213 modify it, such as her younger brother; a verb phrase includes the verb and its modifiers, such as bought a fish and greedily ate it. In context-free grammars, each phrase type is a non-terminal, and each constituent is the substring that the non-terminal yields. Grammar design involves choosing the right set of non-terminals. Fine-grained non-terminals make it possible to represent more fine- grained linguistic phenomena. For example, by distinguishing singular and plural noun phrases, it is possible to have a grammar of English that generates only sentences that obey subject-verb agreement. However, enforcing subject-verb agreement is considerably more complicated in languages like Spanish, where the verb must agree in both person and number with subject. In general, grammar designers must trade off between over- generation — a grammar that permits ungrammatical sentences — and undergeneration — a grammar that fails to generate grammatical sentences. Furthermore, if the grammar is to support manual annotation of syntactic structure, it must be simple enough to annotate efficiently. 9.2.3 A phrase-structure grammar for English To better understand how phrase-structure grammar works, let’s consider the specific case of the Penn Treebank grammar of English. The main phrase categories in the Penn Treebank (PTB) are based on the main part-of-speech classes: noun phrase (NP), verb phrase (VP), prepositional phrase (PP), adjectival phrase (ADJP), and adverbial phrase (ADVP). The top-level category is S, which conveniently stands in for both “sentence” and the “start” symbol. Complement clauses (e.g., I take the good old fashioned ground that the whale is a fish) are represented by the non-terminal SBAR. The terminal symbols in the grammar are individual words, which are generated from unary productions from part-of-speech tags (the PTB tagset is described in § 8.1). 
This section describes some of the most common productions from the major phrase-level categories, explaining how to generate individual tag sequences. The production rules are approached in a "theory-driven" manner: first the syntactic properties of each phrase type are described, and then some of the necessary production rules are listed. But it is important to keep in mind that the Penn Treebank was produced in a "data-driven" manner. After the set of non-terminals was specified, annotators were free to analyze each sentence in whatever way seemed most linguistically accurate, subject to some high-level guidelines. The grammar of the Penn Treebank is simply the set of productions that were required to analyze the several million words of the corpus. By design, the grammar overgenerates — it does not exclude ungrammatical sentences. Furthermore, while the productions shown here cover some of the most common cases, they are only a small fraction of the several thousand different types of productions in the Penn Treebank.
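The overgeneration trade-off can be made concrete with a toy fragment. The grammar and lexicon below are invented for illustration: because the NP and VP productions carry no number features, the fragment licenses the agreement violation *the dogs eats the kimchi*:

```python
from itertools import product

# A toy PTB-style fragment (illustrative; lexicon invented). Without
# number features, nothing blocks a plural subject with a singular verb.
rules = {
    "S":  [["NP", "VP"]],
    "NP": [["DT", "NN"], ["DT", "NNS"]],
    "VP": [["VBZ", "NP"]],
    "DT": [["the"]], "NN": [["kimchi"]], "NNS": [["dogs"]], "VBZ": [["eats"]],
}

def expand(symbol):
    """Return every string the symbol can yield (finite: no recursion here)."""
    if symbol not in rules:
        return [symbol]
    results = []
    for rhs in rules[symbol]:
        for parts in product(*(expand(s) for s in rhs)):
            results.append(" ".join(parts))
    return results

sentences = expand("S")
print("the dogs eats the kimchi" in sentences)  # True: overgeneration
```

Blocking this sentence would require splitting NP and VBZ into singular and plural variants, exactly the fine-grained non-terminal design discussed above.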
214 CHAPTER 9. FORMAL LANGUAGE THEORY Sentences The most common production rule for sentences is, S →NP VP [9.28] which accounts for simple sentences like Abigail ate the kimchi — as we will see, the direct object the kimchi is part of the verb phrase. But there are more complex forms of sentences as well: S →ADVP NP VP Unfortunately Abigail ate the kimchi. [9.29] S →S CC S Abigail ate the kimchi and Max had a burger. [9.30] S →VP Eat the kimchi. [9.31] where ADVP is an adverbial phrase (e.g., unfortunately, very unfortunately) and CC is a coordinating conjunction (e.g., and, but).8 Noun phrases Noun phrases refer to entities, real or imaginary, physical or abstract: Asha, the steamed dumpling, parts and labor, nobody, the whiteness of the whale, and the rise of revolutionary syn- dicalism in the early twentieth century. Noun phrase productions include “bare” nouns, which may optionally follow determiners, as well as pronouns: NP →NN | NNS | NNP | PRP [9.32] NP →DET NN | DET NNS | DET NNP [9.33] The tags NN, NNS, and NNP refer to singular, plural, and proper nouns; PRP refers to personal pronouns, and DET refers to determiners. The grammar also contains terminal productions from each of these tags, e.g., PRP →I | you | we | . . . . Noun phrases may be modified by adjectival phrases (ADJP; e.g., the small Russian dog) and numbers (CD; e.g., the five pastries), each of which may optionally follow a determiner: NP →ADJP NN | ADJP NNS | DET ADJP NN | DET ADJP NNS [9.34] NP →CD NNS | DET CD NNS | . . . [9.35] Some noun phrases include multiple nouns, such as the liberation movement and an antelope horn, necessitating additional productions: NP →NN NN | NN NNS | DET NN NN | . . . [9.36] 8Notice that the grammar does not include the recursive production S →ADVP S. It may be helpful to think about why this production would cause the grammar to overgenerate. Jacob Eisenstein. Draft of November 13, 2018.
9.2. CONTEXT-FREE LANGUAGES 215 These multiple noun constructions can be combined with adjectival phrases and cardinal numbers, leading to a large number of additional productions. Recursive noun phrase productions include coordination, prepositional phrase attach- ment, subordinate clauses, and verb phrase adjuncts: NP →NP CC NP e.g., the red and the black [9.37] NP →NP PP e.g., the President of the Georgia Institute of Technology [9.38] NP →NP SBAR e.g., a whale which he had wounded [9.39] NP →NP VP e.g., a whale taken near Shetland [9.40] These recursive productions are a major source of ambiguity, because the VP and PP non- terminals can also generate NP children. Thus, the the President of the Georgia Institute of Technology can be derived in two ways, as can a whale taken near Shetland in October. But aside from these few recursive productions, the noun phrase fragment of the Penn Treebank grammar is relatively flat, containing a large of number of productions that go from NP directly to a sequence of parts-of-speech. If noun phrases had more internal structure, the grammar would need fewer rules, which, as we will see, would make pars- ing faster and machine learning easier. Vadas and Curran (2011) propose to add additional structure in the form of a new non-terminal called a nominal modifier (NML), e.g., (9.6) a. (NP (NN crude) (NN oil) (NNS prices)) (PTB analysis) b. (NP (NML (NN crude) (NN oil)) (NNS prices)) (NML-style analysis). Another proposal is to treat the determiner as the head of a determiner phrase (DP; Abney, 1987). There are linguistic arguments for and against determiner phrases (e.g., Van Eynde, 2006). From the perspective of context-free grammar, DPs enable more struc- tured analyses of some constituents, e.g., (9.7) a. (NP (DT the) (JJ white) (NN whale)) (PTB analysis) b. (DP (DT the) (NP (JJ white) (NN whale))) (DP-style analysis). Verb phrases Verb phrases describe actions, events, and states of being. 
The PTB tagset distinguishes several classes of verb inflections: base form (VB; she likes to snack), present-tense third-person singular (VBZ; she snacks), present tense but not third-person singular (VBP; they snack), past tense (VBD; they snacked), present participle (VBG; they are snacking), and past participle (VBN; they had snacked).9 Each of these forms can constitute a verb phrase on its

9This tagset is specific to English: for example, VBP is a meaningful category only because English morphology distinguishes third-person singular from all person-number combinations.
216 CHAPTER 9. FORMAL LANGUAGE THEORY own: VP →VB | VBZ | VBD | VBN | VBG | VBP [9.41] More complex verb phrases can be formed by a number of recursive productions, including the use of coordination, modal verbs (MD; she should snack), and the infinitival to (TO): VP →MD VP She will snack [9.42] VP →VBD VP She had snacked [9.43] VP →VBZ VP She has been snacking [9.44] VP →VBN VP She has been snacking [9.45] VP →TO VP She wants to snack [9.46] VP →VP CC VP She buys and eats many snacks [9.47] Each of these productions uses recursion, with the VP non-terminal appearing in both the LHS and RHS. This enables the creation of complex verb phrases, such as She will have wanted to have been snacking. Transitive verbs take noun phrases as direct objects, and ditransitive verbs take two direct objects: VP →VBZ NP She teaches algebra [9.48] VP →VBG NP She has been teaching algebra [9.49] VP →VBD NP NP She taught her brother algebra [9.50] These productions are not recursive, so a unique production is required for each verb part-of-speech. They also do not distinguish transitive from intransitive verbs, so the resulting grammar overgenerates examples like *She sleeps sushi and *She learns Boyang algebra. Sentences can also be direct objects: VP →VBZ S Hunter wants to eat the kimchi [9.51] VP →VBZ SBAR Hunter knows that Tristan ate the kimchi [9.52] The first production overgenerates, licensing sentences like *Hunter sees Tristan eats the kimchi. This problem could be addressed by designing a more specific set of sentence non-terminals, indicating whether the main verb can be conjugated. Verbs can also be modified by prepositional phrases and adverbial phrases: VP →VBZ PP She studies at night [9.53] VP →VBZ ADVP She studies intensively [9.54] VP →ADVP VBG She is not studying [9.55] Jacob Eisenstein. Draft of November 13, 2018.
Again, because these productions are not recursive, the grammar must include productions for every verb part-of-speech.

A special set of verbs, known as copula, can take predicative adjectives as direct objects:

VP → VBZ ADJP She is hungry [9.56]
VP → VBP ADJP Success seems increasingly unlikely [9.57]

The PTB does not have a special non-terminal for copular verbs, so this production generates non-grammatical examples such as *She eats tall.

Particles (PRT as a phrase; RP as a part-of-speech) work to create phrasal verbs:

VP → VB PRT She told them to fuck off [9.58]
VP → VBD PRT NP They gave up their ill-gotten gains [9.59]

As the second production shows, particle productions are required for all configurations of verb parts-of-speech and direct objects.

Other constituents The remaining constituents require far fewer productions. Prepositional phrases almost always consist of a preposition and a noun phrase,

PP → IN NP the whiteness of the whale [9.60]
PP → TO NP What the white whale was to Ahab, has been hinted [9.61]

Similarly, complement clauses consist of a complementizer (usually a preposition, possibly null) and a sentence,

SBAR → IN S She said that it was spicy [9.62]
SBAR → S She said it was spicy [9.63]

Adverbial phrases are usually bare adverbs (ADVP → RB), with a few exceptions:

ADVP → RB RBR They went considerably further [9.64]
ADVP → ADVP PP They went considerably further than before [9.65]

The tag RBR is a comparative adverb.
Adjectival phrases extend beyond bare adjectives (ADJP → JJ) in a number of ways:

ADJP → RB JJ very hungry [9.66]
ADJP → RBR JJ more hungry [9.67]
ADJP → JJS JJ best possible [9.68]
ADJP → RB JJR even bigger [9.69]
ADJP → JJ CC JJ high and mighty [9.70]
ADJP → JJ JJ West German [9.71]
ADJP → RB VBN previously reported [9.72]

The tags JJR and JJS refer to comparative and superlative adjectives respectively. All of these phrase types can be coordinated:

PP → PP CC PP on time and under budget [9.73]
ADVP → ADVP CC ADVP now and two years ago [9.74]
ADJP → ADJP CC ADJP quaint and rather deceptive [9.75]
SBAR → SBAR CC SBAR whether they want control or whether they want exports [9.76]

9.2.4 Grammatical ambiguity

Context-free parsing is useful not only because it determines whether a sentence is grammatical, but mainly because the constituents and their relations can be applied to tasks such as information extraction (chapter 17) and sentence compression (Jing, 2000; Clarke and Lapata, 2008). However, the ambiguity of wide-coverage natural language grammars poses a serious problem for such potential applications. As an example, Figure 9.13 shows two possible analyses for the simple sentence We eat sushi with chopsticks, depending on whether the chopsticks modify eat or sushi. Realistic grammars can license thousands or even millions of parses for individual sentences. Weighted context-free grammars solve this problem by attaching weights to each production, and selecting the derivation with the highest score. This is the focus of chapter 10.

9.3 *Mildly context-sensitive languages

Beyond context-free languages lie context-sensitive languages, in which the expansion of a non-terminal depends on its neighbors. In the general class of context-sensitive languages, computation becomes much more challenging: the membership problem for context-sensitive languages is PSPACE-complete.
Since PSPACE contains the complexity class NP (problems that can be solved in polynomial time on a non-deterministic Turing
9.3. *MILDLY CONTEXT-SENSITIVE LANGUAGES 219

[Figure 9.13: Two derivations of the same sentence]

machine), PSPACE-complete problems cannot be solved efficiently if P ≠ NP. Thus, designing an efficient parsing algorithm for the full class of context-sensitive languages is probably hopeless.10

However, Joshi (1985) identifies a set of properties that define mildly context-sensitive languages, which are a strict subset of context-sensitive languages. Like context-free languages, mildly context-sensitive languages are parseable in polynomial time. However, the mildly context-sensitive languages include non-context-free languages, such as the “copy language” {ww | w ∈ Σ*} and the language aᵐbⁿcᵐdⁿ. Both are characterized by cross-serial dependencies, linking symbols at long distance across the string.11 For example, in the language aᵐbⁿcᵐdⁿ, each a symbol is linked to exactly one c symbol, regardless of the number of intervening b symbols.

9.3.1 Context-sensitive phenomena in natural language

Such phenomena are occasionally relevant to natural language. A classic example is found in Swiss-German (Shieber, 1985), in which sentences such as we let the children help Hans paint the house are realized by listing all nouns before all verbs, i.e., we the children Hans the house let help paint. Furthermore, each noun’s determiner is dictated by the noun’s case marking (the role it plays with respect to the verb). Using an argument that is analogous to the earlier discussion of center-embedding (§ 9.2), Shieber describes these case marking constraints as a set of cross-serial dependencies, homomorphic to aᵐbⁿcᵐdⁿ, and therefore not context-free.
10If PSPACE ≠ NP, then it contains problems that cannot be solved in polynomial time on a non-deterministic Turing machine; equivalently, solutions to these problems cannot even be checked in polynomial time (Arora and Barak, 2009).

11A further condition of the set of mildly-context-sensitive languages is constant growth: if the strings in the language are arranged by length, the gap in length between any pair of adjacent strings is bounded by some language-specific constant. This condition excludes languages such as {a^(2ⁿ) | n ≥ 0}.
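Although aᵐbⁿcᵐdⁿ is not context-free, its cross-serial matching constraint is easy to state procedurally: each block of a's must match the block of c's in length, and likewise for b's and d's. A short illustrative recognizer (names and encoding are my own):

```python
import re

def in_ambncmdn(s):
    """Recognize the non-context-free language a^m b^n c^m d^n (m, n >= 1):
    blocks in the order a, b, c, d, with |a-block| == |c-block| and
    |b-block| == |d-block|. Illustrative sketch."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", s)
    return bool(m) and len(m.group(1)) == len(m.group(3)) \
                   and len(m.group(2)) == len(m.group(4))

print(in_ambncmdn("aabbccdd"))  # True: 2 a's match 2 c's, 2 b's match 2 d's
print(in_ambncmdn("aabccdd"))   # False: 1 b but 2 d's
```

The two equality checks are exactly the long-distance links that a context-free grammar cannot enforce simultaneously.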
[Figure 9.14: A syntactic analysis in CCG involving forward and backward function application]

As with the move from regular to context-free languages, mildly context-sensitive languages can also be motivated by expedience. While finite sequences of cross-serial dependencies can in principle be handled in a context-free grammar, it is often more convenient to use a mildly context-sensitive formalism like tree-adjoining grammar (TAG) and combinatory categorial grammar (CCG). TAG-inspired parsers have been shown to be particularly effective in parsing the Penn Treebank (Collins, 1997; Carreras et al., 2008), and CCG plays a leading role in current research on semantic parsing (Zettlemoyer and Collins, 2005). These two formalisms are weakly equivalent: any language that can be specified in TAG can also be specified in CCG, and vice versa (Joshi et al., 1991). The remainder of the chapter gives a brief overview of CCG, but you are encouraged to consult Joshi and Schabes (1997) and Steedman and Baldridge (2011) for more detail on TAG and CCG respectively.

9.3.2 Combinatory categorial grammar

In combinatory categorial grammar, structural analyses are built up through a small set of generic combinatorial operations, which apply to immediately adjacent sub-structures. These operations act on the categories of the sub-structures, producing a new structure with a new category. The basic categories include S (sentence), NP (noun phrase), VP (verb phrase) and N (noun). The goal is to label the entire span of text as a sentence, S. Complex categories, or types, are constructed from the basic categories, parentheses, and forward and backward slashes: for example, S/NP is a complex type, indicating a sentence that is lacking a noun phrase to its right; S\NP is a sentence lacking a noun phrase to its left.
Complex types act as functions, and the most basic combinatory operations are function application to either the right or left neighbor. For example, the type of a verb phrase, such as eats, would be S\NP. Applying this function to a subject noun phrase to its left results in an analysis of Abigail eats as category S, indicating a successful parse. Transitive verbs must first be applied to the direct object, which in English appears to the right of the verb, before the subject, which appears on the left. They therefore have the more complex type (S\NP)/NP. Similarly, the application of a determiner to the noun at
[Figure 9.15: A syntactic analysis in CCG involving function composition (example modified from Steedman and Baldridge, 2011)]

its right results in a noun phrase, so determiners have the type NP/N. Figure 9.14 provides an example involving a transitive verb and a determiner. A key point from this example is that it can be trivially transformed into a phrase-structure tree, by treating each function application as a constituent phrase. Indeed, when CCG’s only combinatory operators are forward and backward function application, it is equivalent to context-free grammar. However, the location of the “effort” has changed. Rather than designing good productions, the grammar designer must focus on the lexicon — choosing the right categories for each word. This makes it possible to parse a wide range of sentences using only a few generic combinatory operators.

Things become more interesting with the introduction of two additional operators: composition and type-raising. Function composition enables the combination of complex types: X/Y ◦ Y/Z ⇒B X/Z (forward composition) and Y\Z ◦ X\Y ⇒B X\Z (backward composition).12 Composition makes it possible to “look inside” complex types, and combine two adjacent units if the “input” for one is the “output” for the other. Figure 9.15 shows how function composition can be used to handle modal verbs. While this sentence can be parsed using only function application, the composition-based analysis is preferable because the unit might learn functions just like a transitive verb, as in the example Abigail studies Swahili. This in turn makes it possible to analyze conjunctions such as Abigail studies and might learn Swahili, attaching the direct object Swahili to the entire conjoined verb phrase studies and might learn.
The Penn Treebank grammar fragment from § 9.2.3 would be unable to handle this case correctly: the direct object Swahili could attach only to the second verb learn.

Type raising converts an element of type X to a more complex type: X ⇒T T/(T\X) (forward type-raising to type T), and X ⇒T T\(T/X) (backward type-raising to type T). Type-raising makes it possible to reverse the relationship between a function and its argument — by transforming the argument into a function over functions over arguments! An example may help. Figure 9.16 shows how to analyze an object relative clause, a story that Abigail tells. The problem is that tells is a transitive verb, expecting a direct object to its right. As a result, Abigail tells is not a valid constituent. The issue is resolved by raising

12The subscript B follows notation from Curry and Feys (1958).
[Figure 9.16: A syntactic analysis in CCG involving an object relative clause]

Abigail from NP to the complex type S/(S\NP). This function can then be combined with the transitive verb tells by forward composition, resulting in the type (S/NP), which is a sentence lacking a direct object to its right.13 From here, we need only design the lexical entry for the complementizer that to expect a right neighbor of type (S/NP), and the remainder of the derivation can proceed by function application.

Composition and type-raising give CCG considerable power and flexibility, but at a price. The simple sentence Abigail tells Max can be parsed in two different ways: by function application (first forming the verb phrase tells Max), and by type-raising and composition (first forming the non-constituent Abigail tells). This derivational ambiguity does not affect the resulting linguistic analysis, so it is sometimes known as spurious ambiguity. Hockenmaier and Steedman (2007) present a translation algorithm for converting the Penn Treebank into CCG derivations, using composition and type-raising only when necessary.

Exercises

1. Sketch out the state diagram for finite-state acceptors for the following languages on the alphabet {a, b}.
a) Even-length strings. (Be sure to include 0 as an even number.)
b) Strings that contain aaa as a substring.
c) Strings containing an even number of a and an odd number of b symbols.
d) Strings in which the substring bbb must be terminal if it appears — the string need not contain bbb, but if it does, nothing can come after it.

2. Levenshtein edit distance is the number of insertions, substitutions, or deletions required to convert one string to another.

13The missing direct object would be analyzed as a trace in CFG-like approaches to syntax, including the Penn Treebank.
a) Define a finite-state acceptor that accepts all strings with edit distance 1 from the target string, target.
b) Now think about how to generalize your design to accept all strings with edit distance from the target string equal to d. If the target string has length ℓ, what is the minimal number of states required?

3. Construct an FSA in the style of Figure 9.3, which handles the following examples:
• nation/N, national/ADJ, nationalize/V, nationalizer/N
• America/N, American/ADJ, Americanize/V, Americanizer/N
Be sure that your FSA does not accept any further derivations, such as *nationalizeral and *Americanizern.

4. Show how to construct a trigram language model in a weighted finite-state acceptor. Make sure that you handle the edge cases at the beginning and end of the input.

5. Extend the FST in Figure 9.6 to handle the other two parts of rule 1a of the Porter stemmer: -sses → ss, and -ies → -i.

6. § 9.1.4 describes TO, a transducer that captures English orthography by transducing cook + ed → cooked and bake + ed → baked. Design an unweighted finite-state transducer that captures this property of English orthography. Next, augment the transducer to appropriately model the suffix -s when applied to words ending in s, e.g. kiss+s → kisses.

7. Add parenthesization to the grammar in Figure 9.11 so that it is no longer ambiguous.

8. Construct three examples — a noun phrase, a verb phrase, and a sentence — which can be derived from the Penn Treebank grammar fragment in § 9.2.3, yet are not grammatical. Avoid reusing examples from the text. Optionally, propose corrections to the grammar to avoid generating these cases.

9. Produce parses for the following sentences, using the Penn Treebank grammar fragment from § 9.2.3.
(9.8) This aggression will not stand.
(9.9) I can get you a toe.
(9.10) Sometimes you eat the bar and sometimes the bar eats you.
Then produce parses for three short sentences from a news article from this week.
10. * One advantage of CCG is its flexibility in handling coordination:

(9.11) a. Hunter and Tristan speak Hawaiian
b. Hunter speaks and Tristan understands Hawaiian

Define the lexical entry for and as

and := (X/X)\X, [9.77]

where X can refer to any type. Using this lexical entry, show how to parse the two examples above. In the second example, Hawaiian should be combined with the coordination Hunter speaks and Tristan understands, and not just with the verb understands.

Jacob Eisenstein. Draft of November 13, 2018.
Chapter 10

Context-free parsing

Parsing is the task of determining whether a string can be derived from a given context-free grammar, and if so, how. A parser's output is a tree, like the ones shown in Figure 9.13. Such trees can answer basic questions of who-did-what-to-whom, and have applications in downstream tasks like semantic analysis (chapters 12 and 13) and information extraction (chapter 17).

For a given input and grammar, how many parse trees are there? Consider a minimal context-free grammar with only one non-terminal, X, and the following productions:

X → X X
X → aardvark | abacus | . . . | zyther

The second line indicates unary productions from X to every terminal symbol in Σ. In this grammar, the number of possible derivations for a string w is equal to the number of binary bracketings, e.g.,

((((w1 w2) w3) w4) w5), (((w1 (w2 w3)) w4) w5), ((w1 (w2 (w3 w4))) w5), . . .

The number of such bracketings is a Catalan number, C_n = (2n)! / ((n+1)! n!), which grows exponentially in the length of the sentence.

As with sequence labeling, it is only possible to exhaustively search the space of parses by resorting to locality assumptions, which make it possible to search efficiently by reusing shared substructures with dynamic programming. This chapter focuses on a bottom-up dynamic programming algorithm, which enables exhaustive search of the space of possible parses, but imposes strict limitations on the form of the scoring function. These limitations can be relaxed by abandoning exhaustive search. Non-exact search methods will be briefly discussed at the end of this chapter, and one of them — transition-based parsing — will be the focus of chapter 11.
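The count of binary bracketings can be checked with a short dynamic program. This sketch (function names and structure are illustrative, not from the text) counts bracketings by splitting each span into a left and a right part, and compares against the closed-form Catalan number:

```python
from math import comb

def num_bracketings(n):
    """Count binary bracketings of a sequence of n tokens:
    a span of m tokens splits into a left part of k tokens
    and a right part of m - k tokens."""
    counts = [1, 1]  # zero or one token has a single trivial bracketing
    for m in range(2, n + 1):
        counts.append(sum(counts[k] * counts[m - k] for k in range(1, m)))
    return counts[n]

def catalan(n):
    """Closed form C_n = (2n)! / ((n+1)! n!)."""
    return comb(2 * n, n) // (n + 1)

# The number of bracketings of n tokens is the Catalan number C_{n-1}.
for n in range(1, 12):
    assert num_bracketings(n) == catalan(n - 1)

print(num_bracketings(5))  # 14 bracketings of w1 ... w5
```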
S → NP VP
NP → NP PP | we | sushi | chopsticks
PP → IN NP
IN → with
VP → V NP | VP PP
V → eat

Table 10.1: A toy example context-free grammar

10.1 Deterministic bottom-up parsing

The CKY algorithm1 is a bottom-up approach to parsing in a context-free grammar. It efficiently tests whether a string is in a language, without enumerating all possible parses. The algorithm first forms small constituents, and then tries to merge them into larger constituents.

To understand the algorithm, consider the input, We eat sushi with chopsticks. According to the toy grammar in Table 10.1, each terminal symbol can be generated by exactly one unary production, resulting in the sequence NP V NP IN NP. In real examples, there may be many unary productions for each individual token. In any case, the next step is to try to apply binary productions to merge adjacent symbols into larger constituents: for example, V NP can be merged into a verb phrase (VP), and IN NP can be merged into a prepositional phrase (PP). Bottom-up parsing searches for a series of mergers that ultimately results in the start symbol S covering the entire input.

The CKY algorithm systematizes this search by incrementally constructing a table t in which each cell t[i, j] contains the set of nonterminals that can derive the span wi+1:j. The algorithm fills in the upper right triangle of the table; it begins with the diagonal, which corresponds to substrings of length 1, and then computes derivations for progressively larger substrings, until reaching the upper right corner t[0, M], which corresponds to the entire input, w1:M. If the start symbol S is in t[0, M], then the string w is in the language defined by the grammar. This process is detailed in Algorithm 13, and the resulting data structure is shown in Figure 10.1. Informally, here's how it works:

• Begin by filling in the diagonal: the cells t[m−1, m] for all m ∈ {1, 2, . . . , M}.
These cells are filled with terminal productions that yield the individual tokens; for the word w3 = sushi, we fill in t[2, 3] = {NP}, and so on.

• Then fill in the next diagonal, in which each cell corresponds to a subsequence of length two: t[0, 2], t[1, 3], . . . , t[M−2, M]. These cells are filled in by looking for

1The name is for Cocke-Kasami-Younger, the inventors of the algorithm. It is a special case of chart parsing, because it stores reusable computations in a chart-like data structure.

Jacob Eisenstein. Draft of November 13, 2018.
binary productions capable of producing at least one entry in each of the cells corresponding to the left and right children. For example, VP can be placed in the cell t[1, 3] because the grammar includes the production VP → V NP, and because the chart contains V ∈ t[1, 2] and NP ∈ t[2, 3].

• At the next diagonal, the entries correspond to spans of length three. At this level, there is an additional decision at each cell: where to split the left and right children. The cell t[i, j] corresponds to the subsequence wi+1:j, and we must choose some split point i < k < j, so that the span wi+1:k is the left child, and the span wk+1:j is the right child. We consider all possible k, looking for productions that generate elements in t[i, k] and t[k, j]; the left-hand side of all such productions can be added to t[i, j]. When it is time to compute t[i, j], the cells t[i, k] and t[k, j] are guaranteed to be complete, since these cells correspond to shorter substrings of the input.

• The process continues until we reach t[0, M].

Figure 10.1 shows the chart that arises from parsing the sentence We eat sushi with chopsticks using the grammar defined above.

10.1.1 Recovering the parse tree

As with the Viterbi algorithm, it is possible to identify a successful parse by storing and traversing an additional table of back-pointers. If we add an entry X to cell t[i, j] by using the production X → Y Z and the split point k, then we store the back-pointer b[i, j, X] = (Y, Z, k). Once the table is complete, we can recover a parse by tracing these pointers, starting at b[0, M, S], and stopping when they ground out at terminal productions. For ambiguous sentences, there will be multiple paths to reach S ∈ t[0, M].
For example, in Figure 10.1, the goal state S ∈ t[0, M] is reached through the state VP ∈ t[1, 5], and there are two different ways to generate this constituent: one with (eat sushi) and (with chopsticks) as children, and another with (eat) and (sushi with chopsticks) as children. The presence of multiple paths indicates that the input can be generated by the grammar in more than one way. In Algorithm 13, one of these derivations is selected arbitrarily. As discussed in § 10.3, weighted context-free grammars compute a score for all permissible derivations, and a minor modification of CKY allows it to identify the single derivation with the maximum score.

10.1.2 Non-binary productions

As presented above, the CKY algorithm assumes that all productions with non-terminals on the right-hand side (RHS) are binary. In real grammars, such as the one considered in chapter 9, there are other types of productions: some have more than two elements on the right-hand side, and others produce a single non-terminal.
Algorithm 13 The CKY algorithm for parsing a sequence w ∈ Σ* in a context-free grammar G = (N, Σ, R, S), with non-terminals N, production rules R, and start symbol S. The grammar is assumed to be in Chomsky normal form (§ 9.2.1). The function PICKFROM(b[i, j, X]) selects an element of the set b[i, j, X] arbitrarily. All values of t and b are initialized to ∅.

1: procedure CKY(w, G = (N, Σ, R, S))
2:   for m ∈ {1 . . . M} do
3:     t[m−1, m] ← {X : (X → wm) ∈ R}
4:   for ℓ ∈ {2, 3, . . . , M} do    ▷ Iterate over constituent lengths
5:     for m ∈ {0, 1, . . . , M−ℓ} do    ▷ Iterate over left endpoints
6:       for k ∈ {m+1, m+2, . . . , m+ℓ−1} do    ▷ Iterate over split points
7:         for (X → Y Z) ∈ R do    ▷ Iterate over rules
8:           if Y ∈ t[m, k] ∧ Z ∈ t[k, m+ℓ] then
9:             t[m, m+ℓ] ← t[m, m+ℓ] ∪ X    ▷ Add non-terminal to table
10:            b[m, m+ℓ, X] ← b[m, m+ℓ, X] ∪ (Y, Z, k)    ▷ Add back-pointers
11:  if S ∈ t[0, M] then
12:    return TRACEBACK(S, 0, M, b)
13:  else
14:    return ∅
15: procedure TRACEBACK(X, i, j, b)
16:   if j = i+1 then
17:     return X
18:   else
19:     (Y, Z, k) ← PICKFROM(b[i, j, X])
20:     return X → (TRACEBACK(Y, i, k, b), TRACEBACK(Z, k, j, b))

• Productions with more than two elements on the right-hand side can be binarized by creating additional non-terminals, as described in § 9.2.1. For example, the production VP → V NP NP (for ditransitive verbs) can be converted to VP → VPditrans/NP NP, by adding the non-terminal VPditrans/NP and the production VPditrans/NP → V NP.

• What about unary productions like VP → V? While such productions are not a part of Chomsky Normal Form — and can therefore be eliminated in preprocessing the grammar — in practice, a more typical solution is to modify the CKY algorithm. The algorithm makes a second pass on each diagonal in the table, augmenting each cell t[i, j] with all possible unary productions capable of generating each item already in the cell: formally, t[i, j] is extended to its unary closure.
Suppose the example grammar in Table 10.1 were extended to include the production VP → V, enabling sentences with intransitive verb phrases, like we eat. Then the cell t[1, 2] — corresponding to the word eat — would first include the set {V}, and would be augmented to the set {V, VP} during this second pass.
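The chart-filling loop of Algorithm 13 can be sketched in a few lines of Python. This is an illustrative recognizer for the toy grammar of Table 10.1; the dictionary-based grammar encoding is my own, and back-pointers and the unary-closure pass are omitted for brevity:

```python
# Toy grammar of Table 10.1, in Chomsky normal form.
UNARY = {"we": {"NP"}, "sushi": {"NP"}, "chopsticks": {"NP"},
         "with": {"IN"}, "eat": {"V"}}
BINARY = {("NP", "VP"): {"S"}, ("NP", "PP"): {"NP"},
          ("IN", "NP"): {"PP"}, ("V", "NP"): {"VP"}, ("VP", "PP"): {"VP"}}

def cky_recognize(words):
    M = len(words)
    # t[i][j] holds the set of nonterminals deriving words[i:j]
    t = [[set() for _ in range(M + 1)] for _ in range(M + 1)]
    for m, w in enumerate(words):
        t[m][m + 1] = set(UNARY.get(w, ()))
    for length in range(2, M + 1):          # constituent lengths
        for i in range(0, M - length + 1):  # left endpoints
            j = i + length
            for k in range(i + 1, j):       # split points
                for Y in t[i][k]:
                    for Z in t[k][j]:
                        t[i][j] |= BINARY.get((Y, Z), set())
    return t

chart = cky_recognize("we eat sushi with chopsticks".split())
print("S" in chart[0][5])  # True: the sentence is in the language
```

The resulting chart matches Figure 10.1; for instance, VP is the only entry in the cell covering eat sushi with chopsticks.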
            We    eat   sushi   with   chopsticks
We          NP    ∅     S       ∅      S
eat               V     VP      ∅      VP
sushi                   NP      ∅      NP
with                            IN     PP
chopsticks                             NP

Figure 10.1: An example completed CKY chart. The solid and dashed lines show the back pointers resulting from the two different derivations of VP in position t[1, 5].

10.1.3 Complexity

For an input of length M and a grammar with R productions and N non-terminals, the space complexity of the CKY algorithm is O(M²N): the number of cells in the chart is O(M²), and each cell must hold O(N) elements. The time complexity is O(M³R): each cell is computed by searching over O(M) split points, with R possible productions for each split point. Both the time and space complexity are considerably worse than the Viterbi algorithm, which is linear in the length of the input.

10.2 Ambiguity

In natural language, there is rarely a single parse for a given sentence. The main culprit is ambiguity, which is endemic to natural language syntax. Here are a few broad categories:

• Attachment ambiguity: e.g., We eat sushi with chopsticks, I shot an elephant in my pajamas. In these examples, the prepositions (with, in) can attach to either the verb or the direct object.

• Modifier scope: e.g., southern food store, plastic cup holder. In these examples, the first word could be modifying the subsequent word, or the final noun.

• Particle versus preposition: e.g., The puppy tore up the staircase. Phrasal verbs like tore up often include particles which could also act as prepositions. This has structural implications: if up is a preposition, then up the staircase is a prepositional phrase; if up is a particle, then the staircase is the direct object of the verb.

• Complement structure: e.g., The students complained to the professor that they didn't understand. This is another form of attachment ambiguity, where the complement
that they didn't understand could attach to the main verb (complained), or to the indirect object (the professor).

• Coordination scope: e.g., "I see," said the blind man, as he picked up the hammer and saw. In this example, the lexical ambiguity of saw enables it to be coordinated either with the noun hammer or the verb picked up.

These forms of ambiguity can combine, so that seemingly simple headlines like Fed raises interest rates have dozens of possible analyses even in a minimal grammar. In a broad coverage grammar, typical sentences can have millions of parses. While careful grammar design can chip away at this ambiguity, a better strategy is to combine broad coverage parsers with data-driven strategies for identifying the correct analysis.

10.2.1 Parser evaluation

Before continuing to parsing algorithms that are able to handle ambiguity, let us stop to consider how to measure parsing performance. Suppose we have a set of reference parses — the ground truth — and a set of system parses that we would like to score. A simple solution would be per-sentence accuracy: the parser is scored by the proportion of sentences on which the system and reference parses exactly match.2 But as any student knows, it is always nice to get partial credit, which we can assign to analyses that correctly match parts of the reference parse. The PARSEval metrics (Grishman et al., 1992) score each system parse via:

Precision: the fraction of constituents in the system parse that match a constituent in the reference parse.

Recall: the fraction of constituents in the reference parse that match a constituent in the system parse.

In labeled precision and recall, the system must also match the phrase type for each constituent; in unlabeled precision and recall, it is only required to match the constituent structure. As described in chapter 4, the precision and recall can be combined into an F-MEASURE by their harmonic mean.
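These definitions can be made concrete with a short sketch. The encoding of constituents as (label, i, j) triples spanning wi+1:j is my own choice; the example sets below correspond to the two parses of Figure 10.2.

```python
def parseval(system, reference):
    """Labeled precision, recall, and F-measure over constituent sets.
    Constituents are (label, i, j) triples spanning w[i+1:j]."""
    tp = len(system & reference)       # constituents found in both parses
    precision = tp / len(system)
    recall = tp / len(reference)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Non-terminal constituents of the two parses in Figure 10.2
system = {("S", 0, 5), ("VP", 1, 5), ("NP", 2, 5), ("PP", 3, 5)}
reference = {("S", 0, 5), ("VP", 1, 5), ("VP", 1, 3), ("PP", 3, 5)}

p, r, f = parseval(system, reference)
print(p, r, f)  # 0.75 0.75 0.75
```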
Suppose that the left tree of Figure 10.2 is the system parse, and that the right tree is the reference parse. Then:

• S → w1:5 is a true positive, because it appears in both trees.

2Most parsing papers do not report results on this metric, but Suzuki et al. (2018) find that a strong parser recovers the exact parse in roughly 50% of all sentences. Performance on short sentences is generally much higher.
(a) system output: (S (NP We) (VP (V eat) (NP (NP sushi) (PP (IN with) (NP chopsticks)))))

(b) reference: (S (NP We) (VP (VP (V eat) (NP sushi)) (PP (IN with) (NP chopsticks))))

Figure 10.2: Two possible analyses from the grammar in Table 10.1

• VP → w2:5 is a true positive as well.
• NP → w3:5 is a false positive, because it appears only in the system output.
• PP → w4:5 is a true positive, because it appears in both trees.
• VP → w2:3 is a false negative, because it appears only in the reference.

The labeled and unlabeled precision of this parse is 3/4 = 0.75, and the recall is 3/4 = 0.75, for an F-measure of 0.75. For an example in which precision and recall are not equal, suppose the reference parse instead included the production VP → V NP PP. In this parse, the reference does not contain the constituent w2:3, so the recall would be 1.3

10.2.2 Local solutions

Some ambiguity can be resolved locally. Consider the following examples,

(10.1) a. We met the President on Monday.
b. We met the President of Mexico.

Each case ends with a prepositional phrase, which can be attached to the verb met or the noun phrase the President. If given a labeled corpus, we can compare the likelihood of observing the preposition alongside each candidate attachment point,

p(on | met) ≷ p(on | President) [10.1]
p(of | met) ≷ p(of | President). [10.2]

3While the grammar must be binarized before applying the CKY algorithm, evaluation is performed on the original parses. It is therefore necessary to "unbinarize" the output of a CKY-based parser, converting it back to the original grammar.
A comparison of these probabilities would successfully resolve this case (Hindle and Rooth, 1993). Other cases, such as the example we eat sushi with chopsticks, require considering the object of the preposition: consider the alternative we eat sushi with soy sauce. With sufficient labeled data, some instances of attachment ambiguity can be solved by supervised classification (Ratnaparkhi et al., 1994).

However, there are inherent limitations to local solutions. While toy examples may have just a few ambiguities to resolve, realistic sentences have thousands or millions of possible parses. Furthermore, attachment decisions are interdependent, as shown in the garden path example:

(10.2) Cats scratch people with claws with knives.

We may want to attach with claws to scratch, as would be correct in the shorter sentence cats scratch people with claws. But this leaves nowhere to attach with knives. The correct interpretation can be identified only by considering the attachment decisions jointly. The huge number of potential parses may seem to make exhaustive search impossible. But as with sequence labeling, locality assumptions make it possible to search this space efficiently.

10.3 Weighted Context-Free Grammars

Let us define a derivation τ as a set of anchored productions,

τ = {X → α, (i, j, k)}, [10.3]

with X corresponding to the left-hand side non-terminal and α corresponding to the right-hand side. For grammars in Chomsky normal form, α is either a pair of non-terminals or a terminal symbol. The indices i, j, k anchor the production in the input, with X deriving the span wi+1:j. For binary productions, wi+1:k indicates the span of the left child, and wk+1:j indicates the span of the right child; for unary productions, k is ignored. For an input w, the optimal parse is,

τ̂ = argmax_{τ∈T(w)} Ψ(τ), [10.4]

where T(w) is the set of derivations that yield the input w.
Define a scoring function Ψ that decomposes across anchored productions,

Ψ(τ) = Σ_{(X→α,(i,j,k))∈τ} ψ(X → α, (i, j, k)). [10.5]

This is a locality assumption, akin to the assumption in Viterbi sequence labeling. In this case, the assumption states that the overall score is a sum over scores of productions,
which are computed independently. In a weighted context-free grammar (WCFG), the score of each anchored production (X → α, (i, j, k)) is simply ψ(X → α), ignoring the anchor (i, j, k). In other parsing models, the anchors can be used to access features of the input, while still permitting efficient bottom-up parsing.

                   ψ(·)   exp ψ(·)
S → NP VP            0     1
NP → NP PP          −1     1/2
   → we             −2     1/4
   → sushi          −3     1/8
   → chopsticks     −3     1/8
PP → IN NP           0     1
IN → with            0     1
VP → V NP           −1     1/2
   → VP PP          −2     1/4
   → MD V           −2     1/4
V → eat              0     1

Table 10.2: An example weighted context-free grammar (WCFG). The weights are chosen so that exp ψ(·) sums to one over right-hand sides for each non-terminal; this is required by probabilistic context-free grammars, but not by WCFGs in general.

Example Consider the weighted grammar shown in Table 10.2, and the analysis in Figure 10.2b.

Ψ(τ) = ψ(S → NP VP) + ψ(VP → VP PP) + ψ(VP → V NP) + ψ(PP → IN NP) + ψ(NP → We) + ψ(V → eat) + ψ(NP → sushi) + ψ(IN → with) + ψ(NP → chopsticks) [10.6]
     = 0 − 2 − 1 + 0 − 2 + 0 − 3 + 0 − 3 = −11. [10.7]

In the alternative parse in Figure 10.2a, the production VP → VP PP (with score −2) is replaced with the production NP → NP PP (with score −1); all other productions are the same. As a result, the score for this parse is −10. This example hints at a problem with WCFG parsing on non-terminals such as NP, VP, and PP: a WCFG will always prefer either VP or NP attachment, regardless of what is being attached! Solutions to this issue are discussed in § 10.5.
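The scoring in this example can be reproduced mechanically. In the following sketch (the tuple encoding of productions is my own), each derivation is scored by summing production weights from Table 10.2, per the locality assumption of Equation 10.5:

```python
# Weights from Table 10.2, keyed by (lhs, rhs)
PSI = {("S", ("NP", "VP")): 0, ("NP", ("NP", "PP")): -1,
       ("NP", ("we",)): -2, ("NP", ("sushi",)): -3, ("NP", ("chopsticks",)): -3,
       ("PP", ("IN", "NP")): 0, ("IN", ("with",)): 0,
       ("VP", ("V", "NP")): -1, ("VP", ("VP", "PP")): -2, ("VP", ("MD", "V")): -2,
       ("V", ("eat",)): 0}

def score(derivation):
    """Sum the production scores, per the locality assumption (Eq. 10.5)."""
    return sum(PSI[prod] for prod in derivation)

shared = [("S", ("NP", "VP")), ("PP", ("IN", "NP")), ("NP", ("we",)),
          ("V", ("eat",)), ("NP", ("sushi",)), ("IN", ("with",)),
          ("NP", ("chopsticks",))]
vp_attach = shared + [("VP", ("VP", "PP")), ("VP", ("V", "NP"))]  # Fig. 10.2b
np_attach = shared + [("NP", ("NP", "PP")), ("VP", ("V", "NP"))]  # Fig. 10.2a

print(score(vp_attach), score(np_attach))  # -11 -10
```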
Algorithm 14 CKY algorithm for parsing a string w ∈ Σ* in a weighted context-free grammar (N, Σ, R, S), where N is the set of non-terminals and R is the set of weighted productions. The grammar is assumed to be in Chomsky normal form (§ 9.2.1). The function TRACEBACK is defined in Algorithm 13.

procedure WCKY(w, G = (N, Σ, R, S))
  for all i, j, X do    ▷ Initialization
    t[i, j, X] ← −∞
    b[i, j, X] ← ∅
  for m ∈ {1, 2, . . . , M} do
    for all X ∈ N do
      t[m−1, m, X] ← ψ(X → wm, (m−1, m, m))
  for ℓ ∈ {2, 3, . . . , M} do
    for m ∈ {0, 1, . . . , M−ℓ} do
      for all X ∈ N do
        t[m, m+ℓ, X] ← max_{k,Y,Z} ψ(X → Y Z, (m, m+ℓ, k)) + t[m, k, Y] + t[k, m+ℓ, Z]
        b[m, m+ℓ, X] ← argmax_{k,Y,Z} ψ(X → Y Z, (m, m+ℓ, k)) + t[m, k, Y] + t[k, m+ℓ, Z]
  return TRACEBACK(S, 0, M, b)

10.3.1 Parsing with weighted context-free grammars

The optimization problem in Equation 10.4 can be solved by modifying the CKY algorithm. In the deterministic CKY algorithm, each cell t[i, j] stored a set of non-terminals capable of deriving the span wi+1:j. We now augment the table so that the cell t[i, j, X] is the score of the best derivation of wi+1:j from non-terminal X. This score is computed recursively: for the anchored binary production (X → Y Z, (i, j, k)), we compute:

• the score of the anchored production, ψ(X → Y Z, (i, j, k));
• the score of the best derivation of the left child, t[i, k, Y];
• the score of the best derivation of the right child, t[k, j, Z].

These scores are combined by addition. As in the unweighted CKY algorithm, the table is constructed by considering spans of increasing length, so the scores for spans t[i, k, Y] and t[k, j, Z] are guaranteed to be available at the time we compute the score t[i, j, X]. The value t[0, M, S] is the score of the best derivation of w from the grammar. Algorithm 14 formalizes this procedure.
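The max-score recurrence of Algorithm 14 can be sketched in Python. This is an illustrative implementation on the WCFG of Table 10.2 (back-pointers omitted for brevity; the dictionary grammar encoding is my own):

```python
import math

# Weights from Table 10.2
UNARY = {("NP", "we"): -2, ("NP", "sushi"): -3, ("NP", "chopsticks"): -3,
         ("IN", "with"): 0, ("V", "eat"): 0}
BINARY = {("S", "NP", "VP"): 0, ("NP", "NP", "PP"): -1, ("PP", "IN", "NP"): 0,
          ("VP", "V", "NP"): -1, ("VP", "VP", "PP"): -2}

def weighted_cky(words):
    M = len(words)
    # t[i][j] maps each nonterminal to its best score for words[i:j];
    # absent entries correspond to a score of -inf
    t = [[{} for _ in range(M + 1)] for _ in range(M + 1)]
    for m, w in enumerate(words):
        for (X, token), psi in UNARY.items():
            if token == w:
                t[m][m + 1][X] = psi
    for length in range(2, M + 1):
        for i in range(0, M - length + 1):
            j = i + length
            for k in range(i + 1, j):       # max over split points and rules
                for (X, Y, Z), psi in BINARY.items():
                    if Y in t[i][k] and Z in t[k][j]:
                        s = psi + t[i][k][Y] + t[k][j][Z]
                        if s > t[i][j].get(X, -math.inf):
                            t[i][j][X] = s
    return t

t = weighted_cky("we eat sushi with chopsticks".split())
print(t[0][5]["S"])  # -10: score of the best derivation
```

The intermediate cells agree with the worked example in the text, e.g. the best score for VP over eat sushi with chopsticks is −8.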
As in unweighted CKY, the parse is recovered from the table of back pointers b, where each b[i, j, X] stores the argmax split point k and production X → Y Z in the derivation of wi+1:j from X. The top scoring parse can be obtained by tracing these pointers backwards from b[0, M, S], all the way to the terminal symbols. This is analogous to the computation
of the best sequence of labels in the Viterbi algorithm by tracing pointers backwards from the end of the trellis. Note that we need only store back-pointers for the best path to t[i, j, X]; this follows from the locality assumption that the global score for a parse is a combination of the local scores of each production in the parse.

Example Let's revisit the parsing table in Figure 10.1. In a weighted CFG, each cell would include a score for each non-terminal; non-terminals that cannot be generated are assumed to have a score of −∞. The first diagonal contains the scores of unary productions: t[0, 1, NP] = −2, t[1, 2, V] = 0, and so on. The next diagonal contains the scores for spans of length 2: t[1, 3, VP] = −1 + 0 − 3 = −4, t[3, 5, PP] = 0 + 0 − 3 = −3, and so on. Things get interesting when we reach the cell t[1, 5, VP], which contains the score for the derivation of the span w2:5 from the non-terminal VP. This score is computed as a max over two alternatives,

t[1, 5, VP] = max(ψ(VP → VP PP, (1, 5, 3)) + t[1, 3, VP] + t[3, 5, PP],
                  ψ(VP → V NP, (1, 5, 2)) + t[1, 2, V] + t[2, 5, NP]) [10.8]
            = max(−2 − 4 − 3, −1 + 0 − 7) = −8. [10.9]

Since the second case is the argmax, we set the back-pointer b[1, 5, VP] = (V, NP, 2), enabling the optimal derivation to be recovered.

10.3.2 Probabilistic context-free grammars

Probabilistic context-free grammars (PCFGs) are a special case of weighted context-free grammars that arises when the weights correspond to probabilities. Specifically, the weight ψ(X → α, (i, j, k)) = log p(α | X), where the probability of the right-hand side α is conditioned on the non-terminal X, and the anchor (i, j, k) is ignored. These probabilities must be normalized over all possible right-hand sides, so that Σ_α p(α | X) = 1, for all X.
For a given parse τ, the product of the probabilities of the productions is equal to p(τ), under the generative model τ ∼ DRAWSUBTREE(S), where the function DRAWSUBTREE is defined in Algorithm 15. The conditional probability of a parse given a string is,

p(τ | w) = p(τ) / Σ_{τ′∈T(w)} p(τ′) = exp Ψ(τ) / Σ_{τ′∈T(w)} exp Ψ(τ′), [10.10]

where Ψ(τ) = Σ_{(X→α,(i,j,k))∈τ} ψ(X → α, (i, j, k)). Because the probability is monotonic in the score Ψ(τ), the maximum likelihood parse can be identified by the CKY algorithm without modification. If a normalized probability p(τ | w) is required, the denominator of Equation 10.10 can be computed by the inside recurrence, described below.
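The inside computation can be sketched concretely for the running example. This illustrative implementation (data structures my own) works directly in probability space, reading production probabilities off the exp ψ(·) column of Table 10.2; on longer inputs one would work with log probabilities to avoid underflow.

```python
# Production probabilities from Table 10.2 (the exp psi column).
# VP -> MD V is omitted: it cannot fire in this sentence.
UNARY = {("NP", "we"): 0.25, ("NP", "sushi"): 0.125,
         ("NP", "chopsticks"): 0.125, ("IN", "with"): 1.0, ("V", "eat"): 1.0}
BINARY = {("S", "NP", "VP"): 1.0, ("NP", "NP", "PP"): 0.5,
          ("PP", "IN", "NP"): 1.0, ("VP", "V", "NP"): 0.5,
          ("VP", "VP", "PP"): 0.25}

def inside(words):
    """Inside algorithm: t[i][j][X] = p(X yields words[i:j]), summing
    over split points and productions instead of maximizing."""
    M = len(words)
    t = [[{} for _ in range(M + 1)] for _ in range(M + 1)]
    for m, w in enumerate(words):
        for (X, token), p in UNARY.items():
            if token == w:
                t[m][m + 1][X] = t[m][m + 1].get(X, 0.0) + p
    for length in range(2, M + 1):
        for i in range(0, M - length + 1):
            j = i + length
            for k in range(i + 1, j):
                for (X, Y, Z), p in BINARY.items():
                    if Y in t[i][k] and Z in t[k][j]:
                        t[i][j][X] = (t[i][j].get(X, 0.0)
                                      + p * t[i][k][Y] * t[k][j][Z])
    return t

t = inside("we eat sushi with chopsticks".split())
p_w = t[0][5]["S"]              # sums over both derivations
print(p_w, (2 ** -10) / p_w)    # p(w) and p(tau1 | w) = 2/3
```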
Algorithm 15 Generative model for derivations from probabilistic context-free grammars in Chomsky Normal Form (CNF).

procedure DRAWSUBTREE(X)
  sample (X → α) ∼ p(α | X)
  if α = (Y Z) then
    return DRAWSUBTREE(Y) ∪ DRAWSUBTREE(Z)
  else
    return (X → α)    ▷ In CNF, all unary productions yield terminal symbols

Example The WCFG in Table 10.2 is designed so that the weights are log-probabilities, satisfying the constraint Σ_α exp ψ(X → α) = 1. As noted earlier, there are two parses in T(we eat sushi with chopsticks), with scores Ψ(τ1) = log p(τ1) = −10 and Ψ(τ2) = log p(τ2) = −11. Therefore, the conditional probability p(τ1 | w) is equal to,

p(τ1 | w) = p(τ1) / (p(τ1) + p(τ2)) = exp Ψ(τ1) / (exp Ψ(τ1) + exp Ψ(τ2)) = 2^−10 / (2^−10 + 2^−11) = 2/3. [10.11]

The inside recurrence The denominator of Equation 10.10 can be viewed as a language model, summing over all valid derivations of the string w,

p(w) = Σ_{τ′ : yield(τ′)=w} p(τ′). [10.12]

Just as the CKY algorithm makes it possible to maximize over all such analyses, with a few modifications it can also compute their sum. Each cell t[i, j, X] must store the log probability of deriving wi+1:j from non-terminal X. To compute this, we replace the maximization over split points k and productions X → Y Z with a "log-sum-exp" operation, which exponentiates the log probabilities of the production and the children, sums them in probability space, and then converts back to the log domain:

t[i, j, X] = log Σ_{k,Y,Z} exp (ψ(X → Y Z) + t[i, k, Y] + t[k, j, Z]) [10.13]
           = log Σ_{k,Y,Z} exp (log p(Y Z | X) + log p(Y → wi+1:k) + log p(Z → wk+1:j)) [10.14]
           = log Σ_{k,Y,Z} p(Y Z | X) × p(Y → wi+1:k) × p(Z → wk+1:j) [10.15]
           = log Σ_{k,Y,Z} p(Y Z, wi+1:k, wk+1:j | X) [10.16]
           = log p(X ⇝ wi+1:j), [10.17]
10.3. WEIGHTED CONTEXT-FREE GRAMMARS 237 with X ⇝wi+1:j indicating the event that non-terminal X yields the span wi+1, wi+2, . . . , wj. The recursive computation of t[i, j, X] is called the inside recurrence, because it computes the probability of each subtree as a combination of the probabilities of the smaller subtrees that are inside of it. The name implies a corresponding outside recurrence, which com- putes the probability of a non-terminal X spanning wi+1:j, joint with the outside context (w1:i, wj+1:M). This recurrence is described in § 10.4.3. The inside and outside recurrences are analogous to the forward and backward recurrences in probabilistic sequence label- ing (see § 7.5.3). They can be used to compute the marginal probabilities of individual anchored productions, p(X →α, (i, j, k) | w), summing over all possible derivations of w. 10.3.3 *Semiring weighted context-free grammars The weighted and unweighted CKY algorithms can be unified with the inside recurrence using the same semiring notation described in § 7.7.3. The generalized recurrence is: t[i, j, X] = M k,Y,Z ψ(X →Y Z, (i, j, k)) ⊗t[i, k, Y ] ⊗t[k, j, Z]. [10.18] This recurrence subsumes all of the algorithms that have been discussed in this chapter to this point. Unweighted CKY. When ψ(X →α, (i, j, k)) is a Boolean truth value {⊤, ⊥}, ⊗is logical conjunction, and L is logical disjunction, then we derive CKY recurrence for un- weighted context-free grammars, discussed in § 10.1 and Algorithm 13. Weighted CKY. When ψ(X →α, (i, j, k)) is a scalar score, ⊗is addition, and L is maxi- mization, then we derive the CKY recurrence for weighted context-free grammars, discussed in § 10.3 and Algorithm 14. When ψ(X →α, (i, j, k)) = log p(α | X), this same setting derives the CKY recurrence for finding the maximum likelihood derivation in a probabilistic context-free grammar. Inside recurrence. 
When ψ(X → α, (i, j, k)) is a log probability, ⊗ is addition, and ⊕ is log-sum-exp, then we derive the inside recurrence for probabilistic context-free grammars, discussed in § 10.3.2. It is also possible to set ψ(X → α, (i, j, k)) directly equal to the probability p(α | X). In this case, ⊗ is multiplication, and ⊕ is addition. While this may seem more intuitive than working with log probabilities, there is the risk of underflow on long inputs.

Regardless of how the scores are combined, the key point is the locality assumption: the score for a derivation is the combination of the independent scores for each anchored
production, and these scores do not depend on any other part of the derivation. For example, if two non-terminals are siblings, the scores of productions from these non-terminals are computed independently. This locality assumption is analogous to the first-order Markov assumption in sequence labeling, where the score for transitions between tags depends only on the previous tag and current tag, and not on the history. As with sequence labeling, this assumption makes it possible to find the optimal parse efficiently; its linguistic limitations are discussed in § 10.5.

10.4 Learning weighted context-free grammars

Like sequence labeling, context-free parsing is a form of structure prediction. As a result, WCFGs can be learned using the same set of algorithms: generative probabilistic models, structured perceptron, maximum conditional likelihood, and maximum margin learning. In all cases, learning requires a treebank, which is a dataset of sentences labeled with context-free parses. Parsing research was catalyzed by the Penn Treebank (Marcus et al., 1993), the first large-scale dataset of this type (see § 9.2.2). Phrase structure treebanks exist for roughly two dozen other languages, with coverage mainly restricted to European and East Asian languages, plus Arabic and Urdu.

10.4.1 Probabilistic context-free grammars

Probabilistic context-free grammars are similar to hidden Markov models, in that they are generative models of text. In this case, the parameters of interest correspond to probabilities of productions, conditional on the left-hand side. As with hidden Markov models, these parameters can be estimated by relative frequency:

ψ(X → α) = log p(X → α) [10.19]
p̂(X → α) = count(X → α) / count(X). [10.20]

For example, the probability of the production NP → DET NN is the corpus count of this production, divided by the count of the non-terminal NP.
This estimator applies to terminal productions as well: the probability of NN → whale is the count of how often whale appears in the corpus as generated from an NN tag, divided by the total count of the NN tag. Even with the largest treebanks — currently on the order of one million tokens — it is difficult to accurately compute probabilities of even moderately rare events, such as NN → whale. Therefore, smoothing is critical for making PCFGs effective.
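Relative-frequency estimation follows directly from Equation 10.20, and can be sketched as follows. The tiny treebank below is hypothetical, and each tree is flattened to a list of its productions for simplicity:

```python
from collections import Counter

def estimate_pcfg(treebank):
    """Relative-frequency estimation (Eq. 10.20):
    p(X -> alpha) = count(X -> alpha) / count(X)."""
    prod_counts = Counter()
    lhs_counts = Counter()
    for productions in treebank:
        for lhs, rhs in productions:
            prod_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {(lhs, rhs): n / lhs_counts[lhs]
            for (lhs, rhs), n in prod_counts.items()}

# A hypothetical two-tree treebank, each tree given as its productions
treebank = [
    [("S", ("NP", "VP")), ("NP", ("we",)), ("VP", ("V", "NP")),
     ("V", ("eat",)), ("NP", ("sushi",))],
    [("S", ("NP", "VP")), ("NP", ("we",)), ("VP", ("V", "NP")),
     ("V", ("see",)), ("NP", ("whales",))],
]
probs = estimate_pcfg(treebank)
print(probs[("NP", ("we",))])  # 2 counts of NP -> we out of 4 NPs: 0.5
print(probs[("V", ("eat",))])  # 1 of 2 V productions: 0.5
```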
10.4.2 Feature-based parsing

The scores for each production can be computed as an inner product of weights and features,

ψ(X → α, (i, j, k)) = θ · f(X, α, (i, j, k), w), [10.21]

where the feature vector f is a function of the left-hand side X, the right-hand side α, the anchor indices (i, j, k), and the input w.

The basic feature f(X, α, (i, j, k), w) = {(X, α)} encodes only the identity of the production itself. This gives rise to a discriminatively-trained model with the same expressiveness as a PCFG. Features on anchored productions can include the words that border the span wi, wj+1, the word at the split point wk+1, the presence of a verb or noun in the left child span wi+1:k, and so on (Durrett and Klein, 2015). Scores on anchored productions can be incorporated into CKY parsing without any modification to the algorithm, because it is still possible to compute each element of the table t[i, j, X] recursively from its immediate children.

Other features can be obtained by grouping elements on either the left-hand or right-hand side: for example, it can be particularly beneficial to compute additional features by clustering terminal symbols, with features corresponding to groups of words with similar syntactic properties. The clustering can be obtained from unlabeled datasets that are much larger than any treebank, improving coverage. Such methods are described in chapter 14.

Feature-based parsing models can be estimated using the usual array of discriminative learning techniques. For example, a structured perceptron update can be computed as (Carreras et al., 2008),

f(τ, w^(i)) = Σ_{(X→α,(i,j,k))∈τ} f(X, α, (i, j, k), w^(i)) [10.22]
τ̂ = argmax_{τ∈T(w)} θ · f(τ, w^(i)) [10.23]
θ ← θ + f(τ^(i), w^(i)) − f(τ̂, w^(i)).
[10.24] A margin-based objective can be optimized by selecting ˆτ through cost-augmented decod- ing (§ 2.4.2), enforcing a margin of ∆(ˆτ, τ) between the hypothesis and the reference parse, where ∆is a non-negative cost function, such as the Hamming loss (Stern et al., 2017). It is also possible to train feature-based parsing models by conditional log-likelihood, as described in the next section. Under contract with MIT Press, shared under CC-BY-NC-ND license.
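To make the role of anchored production scores concrete, here is a sketch of Viterbi CKY over a binarized grammar. The `score` callback stands in for ψ(X → Y Z, (i, j, k)); the toy grammar and the constant scoring function are invented for illustration, and preterminal scores are fixed at zero for simplicity.

```python
def weighted_cky(words, binary_rules, unary_rules, score):
    """Viterbi CKY for a weighted CFG in Chomsky normal form.

    `unary_rules`: set of (X, terminal) preterminal rules.
    `binary_rules`: set of (X, Y, Z) rules.
    `score(X, Y, Z, i, j, k)`: score of the anchored production
    X -> Y Z over span (i, j) with split point k.
    Returns a table t[(i, j, X)] of best derivation scores.
    """
    M = len(words)
    t = {}
    for i in range(M):
        for X, w in unary_rules:
            if w == words[i]:
                t[(i, i + 1, X)] = 0.0  # preterminal score, fixed at 0 here
    for width in range(2, M + 1):
        for i in range(M - width + 1):
            j = i + width
            for X, Y, Z in binary_rules:
                for k in range(i + 1, j):
                    left, right = t.get((i, k, Y)), t.get((k, j, Z))
                    if left is None or right is None:
                        continue
                    s = left + right + score(X, Y, Z, i, j, k)
                    if s > t.get((i, j, X), float("-inf")):
                        t[(i, j, X)] = s
    return t

# Toy grammar for "we eat sushi"
unary = {("NP", "we"), ("V", "eat"), ("NP", "sushi")}
binary = {("S", "NP", "VP"), ("VP", "V", "NP")}
table = weighted_cky("we eat sushi".split(), binary, unary,
                     score=lambda X, Y, Z, i, j, k: 1.0)
# table[(0, 3, "S")] holds the best score for a full parse
```

Because the score of each anchored production depends only on (X, α, i, j, k), the table entry for a span can still be built from its immediate children, exactly as in unweighted CKY.
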
Figure 10.3: The two cases faced by the outside recurrence in the computation of β(i, j, X)

10.4.3 *Conditional random field parsing

The score of a derivation Ψ(τ) can be converted into a probability by normalizing over all possible derivations,

p(τ | w) = exp Ψ(τ) / Σ_{τ′ ∈ T(w)} exp Ψ(τ′).   [10.25]

Using this probability, a WCFG can be trained by maximizing the conditional log-likelihood of a labeled corpus.

Just as in logistic regression and the conditional random field over sequences, the gradient of the conditional log-likelihood is the difference between the observed and expected counts of each feature. The expectation E_{τ|w}[f(τ, w^{(i)}); θ] requires summing over all possible parses, and computing the marginal probabilities of anchored productions, p(X → α, (i, j, k) | w). In CRF sequence labeling, marginal probabilities over tag bigrams are computed by the two-pass forward-backward algorithm (§ 7.5.3). The analogue for context-free grammars is the inside-outside algorithm, in which marginal probabilities are computed from terms generated by an upward and downward pass over the parsing chart:

• The upward pass is performed by the inside recurrence, which is described in § 10.3.2. Each inside variable α(i, j, X) is the score of deriving w_{i+1:j} from the non-terminal X. In a PCFG, this corresponds to the log-probability log p(w_{i+1:j} | X). It is computed by the recurrence,

α(i, j, X) ≜ log Σ_{(X→Y Z)} Σ_{k=i+1}^{j−1} exp (ψ(X → Y Z, (i, j, k)) + α(i, k, Y) + α(k, j, Z)).   [10.26]

The initial condition of this recurrence is α(m−1, m, X) = ψ(X → w_m). The denominator Σ_{τ ∈ T(w)} exp Ψ(τ) is equal to exp α(0, M, S).

• The downward pass is performed by the outside recurrence, which recursively populates the same table structure, starting at the root of the tree. Each outside variable
β(i, j, X) is the score of having a phrase of type X covering the span w_{i+1:j}, joint with the exterior context w_{1:i} and w_{j+1:M}. In a PCFG, this corresponds to the log-probability log p((X, i+1, j), w_{1:i}, w_{j+1:M}). Each outside variable is computed by the recurrence,

exp β(i, j, X) ≜ Σ_{(Y→X Z)} Σ_{k=j+1}^{M} exp [ψ(Y → X Z, (i, k, j)) + α(j, k, Z) + β(i, k, Y)]   [10.27]
+ Σ_{(Y→Z X)} Σ_{k=0}^{i−1} exp [ψ(Y → Z X, (k, i, j)) + α(k, i, Z) + β(k, j, Y)].   [10.28]

The first line of Equation 10.28 is the score under the condition that X is a left child of its parent Y, which spans w_{i+1:k}, with k > j; the second line is the score under the condition that X is a right child of its parent Y, which spans w_{k+1:j}, with k < i. The two cases are shown in Figure 10.3. In each case, we sum over all possible productions with X on the right-hand side. The parent Y is bounded on one side by either i or j, depending on whether X is a left or right child of Y; we must sum over all possible values for the other boundary. The initial conditions for the outside recurrence are β(0, M, S) = 0 and β(0, M, X ≠ S) = −∞.

The marginal probability of a non-terminal X over span w_{i+1:j} is written p(X ⇝ w_{i+1:j} | w). This probability can be computed from the inside and outside scores,

p(X ⇝ w_{i+1:j} | w) = p(X ⇝ w_{i+1:j}, w) / p(w)   [10.29]
= p(w_{i+1:j} | X) × p(X, w_{1:i}, w_{j+1:M}) / p(w)   [10.30]
= exp (α(i, j, X) + β(i, j, X)) / exp α(0, M, S).   [10.31]

Marginal probabilities of individual productions can be computed similarly (see exercise 2). These marginal probabilities can be used for training a conditional random field parser, and also for the task of unsupervised grammar induction, in which a PCFG is estimated from a dataset of unlabeled text (Lari and Young, 1990; Pereira and Schabes, 1992).
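The inside recurrence of Equation 10.26 can be sketched in log space as follows. This is a simplified illustration: the rule scores here are assumed not to depend on the anchor, and the grammar, names, and test sentence are invented for the example.

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def inside(words, preterminals, binary_rules):
    """Inside recurrence (a log-space sketch of Eq. 10.26).

    `preterminals`: dict (X, word) -> log score psi(X -> word).
    `binary_rules`: dict (X, Y, Z) -> log score psi(X -> Y Z).
    Returns alpha with alpha[(i, j, X)] = log of the summed exp-scores
    of all derivations of words[i:j] from X.
    """
    M = len(words)
    alpha = {}
    for i in range(M):
        for (X, w), s in preterminals.items():
            if w == words[i]:
                alpha[(i, i + 1, X)] = s
    for width in range(2, M + 1):
        for i in range(M - width + 1):
            j = i + width
            for (X, Y, Z), s in binary_rules.items():
                # accumulate over rules sharing the same left-hand side
                terms = [alpha[(i, j, X)]] if (i, j, X) in alpha else []
                terms += [s + alpha[(i, k, Y)] + alpha[(k, j, Z)]
                          for k in range(i + 1, j)
                          if (i, k, Y) in alpha and (k, j, Z) in alpha]
                if terms:
                    alpha[(i, j, X)] = logsumexp(terms)
    return alpha

# PCFG with p(X -> X X) = 1/2 and p(X -> a) = 1/2: the string "a a a"
# has two binary-branching parses, each with probability (1/2)^5.
rules = {("X", "X", "X"): math.log(0.5)}
preterm = {("X", "a"): math.log(0.5)}
alpha = inside(["a", "a", "a"], preterm, rules)
```

In a PCFG, exp α(0, M, S) is exactly p(w), the total probability of the string: here both parses contribute, giving 2 × (1/2)^5 = 0.0625.
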
10.4.4 Neural context-free grammars

Neural networks can be applied to parsing by representing each span with a dense numerical vector (Socher et al., 2013; Durrett and Klein, 2015; Cross and Huang, 2016).4 For example, the anchor (i, j, k) and sentence w can be associated with a fixed-length column vector,

v_{(i,j,k)} = [u_{w_{i−1}}; u_{w_i}; u_{w_{j−1}}; u_{w_j}; u_{w_{k−1}}; u_{w_k}],   [10.32]

where u_{w_i} is a word embedding associated with the word w_i. The vector v_{(i,j,k)} can then be passed through a feedforward neural network, and used to compute the score of the anchored production. For example, this score can be computed as a bilinear product (Durrett and Klein, 2015),

ṽ_{(i,j,k)} = FeedForward(v_{(i,j,k)})   [10.33]
ψ(X → α, (i, j, k)) = ṽ_{(i,j,k)}^⊤ Θ f(X → α),   [10.34]

where f(X → α) is a vector of features of the production, and Θ is a parameter matrix. The matrix Θ and the parameters of the feedforward network can be learned by backpropagating from an objective such as the margin loss or the negative conditional log-likelihood.

10.5 Grammar refinement

The locality assumptions underlying CFG parsing depend on the granularity of the non-terminals. For the Penn Treebank non-terminals, there are several reasons to believe that these assumptions are too strong (Johnson, 1998):

• The context-free assumption is too strict: for example, the probability of the production NP → NP PP is much higher (in the PTB) if the parent of the noun phrase is a verb phrase (indicating that the NP is a direct object) than if the parent is a sentence (indicating that the NP is the subject of the sentence).

• The Penn Treebank non-terminals are too coarse: there are many kinds of noun phrases and verb phrases, and accurate parsing sometimes requires knowing the difference. As we have already seen, when faced with prepositional phrase attachment ambiguity, a weighted CFG will either always choose NP attachment (if ψ(NP → NP PP) > ψ(VP → VP PP)), or it will always choose VP attachment. To get more nuanced behavior, more fine-grained non-terminals are needed.

• More generally, accurate parsing requires some amount of semantics: understanding the meaning of the text to be parsed. Consider the example cats scratch people

4Earlier work on neural constituent parsing used transition-based parsing algorithms (§ 10.6.2) rather than CKY-style chart parsing (Henderson, 2004; Titov and Henderson, 2007).
Figure 10.4: The left parse is preferable because of the conjunction of phrases headed by France and Italy, but these parses cannot be distinguished by a WCFG.

with claws: knowledge about cats, claws, and scratching is necessary to correctly resolve the attachment ambiguity.

An extreme example is shown in Figure 10.4. The analysis on the left is preferred because of the conjunction of similar entities France and Italy. But given the non-terminals shown in the analyses, there is no way to differentiate these two parses, since they include exactly the same productions. What is needed seems to be more precise non-terminals. One possibility would be to rethink the linguistics behind the Penn Treebank, and ask the annotators to try again. But the original annotation effort took five years, and there is little appetite for another annotation effort of this scope. Researchers have therefore turned to automated techniques.

10.5.1 Parent annotations and other tree transformations

The key assumption underlying context-free parsing is that productions depend only on the identity of the non-terminal on the left-hand side, and not on its ancestors or neighbors. The validity of this assumption is an empirical question, and it depends on the non-terminals themselves: ideally, every noun phrase (and verb phrase, etc.) would be distributionally identical, so the assumption would hold. But in the Penn Treebank, the observed probability of productions often depends on the parent of the left-hand side. For example, noun phrases are more likely to be modified by prepositional phrases when they are in the object position (e.g., they amused the students from Georgia) than in the subject position (e.g., the students from Georgia amused them). This means that the NP → NP PP production is more likely if the entire constituent is the child of a VP than if it is the child of S.

Figure 10.5: Parent annotation in a CFG derivation.

The observed statistics are (Johnson, 1998):

Pr(NP → NP PP) = 11%   [10.35]
Pr(NP under S → NP PP) = 9%   [10.36]
Pr(NP under VP → NP PP) = 23%.   [10.37]

This phenomenon can be captured by parent annotation (Johnson, 1998), in which each non-terminal is augmented with the identity of its parent, as shown in Figure 10.5. This is sometimes called vertical Markovization, since a Markov dependency is introduced between each node and its parent (Klein and Manning, 2003). It is analogous to moving from a bigram to a trigram context in a hidden Markov model.

In principle, parent annotation squares the size of the set of non-terminals, which could make parsing considerably less efficient. But in practice, the increase in the number of non-terminals that actually appear in the data is relatively modest (Johnson, 1998). Parent annotation weakens the WCFG locality assumptions. This improves accuracy by enabling the parser to make more fine-grained distinctions, which better capture real linguistic phenomena. However, each production becomes rarer, and so careful smoothing or regularization is required to control the variance of the production scores.

10.5.2 Lexicalized context-free grammars

The examples in § 10.2.2 demonstrate the importance of individual words in resolving parsing ambiguity: the preposition on is more likely to attach to met, while the preposition of is more likely to attach to President. But of all word pairs, which are relevant to attachment decisions? Consider the following variants on the original examples:

(10.3) a. We met the President of Mexico.
b. We met the first female President of Mexico.
c. They had supposedly met the President on Monday.
The underlined words are the head words of their respective phrases: met heads the verb phrase, and President heads the direct object noun phrase. These heads provide useful semantic information.

Figure 10.6: Examples of lexicalization. (a) Lexicalization and attachment ambiguity; (b) lexicalization and coordination scope ambiguity.

But head words break the context-free assumption, which states that the score for a production depends only on the parent and its immediate children, and not on the substructure under each child. The incorporation of head words into context-free parsing is known as lexicalization, and is implemented in rules of the form,

NP(President) → NP(President) PP(of)   [10.38]
NP(President) → NP(President) PP(on).   [10.39]

Lexicalization was a major step towards accurate PCFG parsing in the 1990s and early 2000s. It requires solving three problems: identifying the heads of all constituents in a treebank; parsing efficiently while keeping track of the heads; and estimating the scores for lexicalized productions.
Non-terminal  Direction  Priority
S             right      VP SBAR ADJP UCP NP
VP            left       VBD VBN MD VBZ TO VB VP VBG VBP ADJP NP
NP            right      N* EX $ CD QP PRP ...
PP            left       IN TO FW

Table 10.3: A fragment of head percolation rules for English (Magerman, 1995; Collins, 1997)

Identifying head words The head of a constituent is the word that is the most useful for determining how that constituent is integrated into the rest of the sentence.5 The head word of a constituent is determined recursively: for any non-terminal production, the head of the left-hand side must be the head of one of the children. The head is typically selected according to a set of deterministic rules, sometimes called head percolation rules. In many cases, these rules are straightforward: the head of a noun phrase in a NP → DET NN production is the head of the noun; the head of a sentence in a S → NP VP production is the head of the verb phrase.

Table 10.3 shows a fragment of the head percolation rules used in many English parsing systems. The meaning of the first rule is that to find the head of an S constituent, first look for the rightmost VP child; if you don't find one, then look for the rightmost SBAR child, and so on down the list. Verb phrases are headed by the leftmost verb (the head of can plan on walking is can, since the modal verb can is tagged MD); noun phrases are headed by the rightmost noun-like non-terminal (so the head of the red cat is cat),6 and prepositional phrases are headed by the preposition (the head of at Georgia Tech is at). Some of these rules are somewhat arbitrary: there's no particular reason why the head of cats and dogs should be dogs. But the point here is just to get some lexical information that can support parsing, not to make deep claims about syntax. Figure 10.6 shows the application of these rules to two of the running examples.
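The head percolation procedure can be sketched as follows. The rules are a small subset of Table 10.3, with the `N*` wildcard expanded to a few explicit tags for illustration; the function and dictionary names are invented for this example.

```python
HEAD_RULES = {
    # non-terminal: (direction, priority list) -- a subset of Table 10.3
    "S":  ("right", ["VP", "SBAR", "ADJP", "UCP", "NP"]),
    "VP": ("left",  ["VBD", "VBN", "MD", "VBZ", "TO", "VB",
                     "VP", "VBG", "VBP", "ADJP", "NP"]),
    "NP": ("right", ["NN", "NNP", "NNS", "EX", "CD", "QP", "PRP"]),
    "PP": ("left",  ["IN", "TO", "FW"]),
}

def find_head_child(parent, children):
    """Pick the head child of a production via head percolation rules.

    `children` is the list of child labels, left to right. For each label
    in the priority list, scan the children from the stated direction;
    the first match wins. Falls back to the edge child if nothing matches.
    """
    direction, priority = HEAD_RULES[parent]
    order = children if direction == "left" else list(reversed(children))
    for label in priority:
        for child in order:
            if child == label:
                return child
    return order[0]  # fallback when no rule matches

# The head of S -> NP VP is the VP; the head of VP -> MD VP is the MD
# (matching the "can plan on walking" example above).
```
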
Parsing lexicalized context-free grammars

A naïve application of lexicalization would simply increase the set of non-terminals by taking the cross-product with the set of terminal symbols, so that the non-terminals now

5This is a pragmatic definition, befitting our goal of using head words to improve parsing; for a more formal definition, see (Bender, 2013, chapter 7).
6The noun phrase non-terminal is sometimes treated as a special case. Collins (1997) uses a heuristic that looks for the rightmost child which is a noun-like part-of-speech (e.g., NN, NNP), a possessive marker, or a superlative adjective (e.g., the greatest). If no such child is found, the heuristic then looks for the leftmost NP. If there is no child with tag NP, the heuristic then applies another priority list, this time from right to left.
include symbols like NP(President) and VP(meet). Under this approach, the CKY parsing algorithm could be applied directly to the lexicalized production rules. However, the complexity would be cubic in the size of the vocabulary of terminal symbols, which would clearly be intractable.

Another approach is to augment the CKY table with an additional index, keeping track of the head of each constituent. The cell t[i, j, h, X] stores the score of the best derivation in which non-terminal X spans w_{i+1:j} with head word h, where i < h ≤ j. To compute such a table recursively, we must consider the possibility that each phrase gets its head from either its left or right child. The scores of the best derivations in which the head comes from the left and right child are denoted tℓ and tr respectively, leading to the following recurrence:

tℓ[i, j, h, X] = max_{(X→Y Z)} max_{k ≥ h} max_{k < h′ ≤ j} t[i, k, h, Y] + t[k, j, h′, Z] + ψ(X(h) → Y(h) Z(h′))   [10.40]
tr[i, j, h, X] = max_{(X→Y Z)} max_{k < h} max_{i < h′ ≤ k} t[i, k, h′, Y] + t[k, j, h, Z] + ψ(X(h) → Y(h′) Z(h))   [10.41]
t[i, j, h, X] = max (tℓ[i, j, h, X], tr[i, j, h, X]).   [10.42]

To compute tℓ, we maximize over all split points k ≥ h, since the head word must be in the left child. We then maximize again over possible head words h′ for the right child. An analogous computation is performed for tr.

The size of the table is now O(M³N), where M is the length of the input and N is the number of non-terminals. Furthermore, each cell is computed by performing O(M²) operations, since we maximize over both the split point k and the head h′. The time complexity of the algorithm is therefore O(RM⁵N), where R is the number of rules in the grammar. Fortunately, more efficient solutions are possible.
In general, the complexity of parsing can be reduced to O(M⁴) in the length of the input; for a broad class of lexicalized CFGs, the complexity can be made cubic in the length of the input, just as in unlexicalized CFGs (Eisner, 2000).

Estimating lexicalized context-free grammars

The final problem for lexicalized parsing is how to estimate weights for lexicalized productions X(i) → Y(j) Z(k). These productions are said to be bilexical, because they involve scores over pairs of words: in the example meet the President of Mexico, we hope to choose the correct attachment point by modeling the bilexical affinities of (meet, of) and (President, of). The number of such word pairs is quadratic in the size of the vocabulary, making it difficult to estimate the weights of lexicalized production rules directly from data. This is especially true for probabilistic context-free grammars, in which the weights are obtained from smoothed relative frequency. In a treebank with a million tokens, a
vanishingly small fraction of the possible lexicalized productions will be observed more than once.7 The Charniak (1997) and Collins (1997) parsers therefore focus on approximating the probabilities of lexicalized productions, using various smoothing techniques and independence assumptions.

In discriminatively-trained weighted context-free grammars, the scores for each production can be computed from a set of features, which can be made progressively more fine-grained (Finkel et al., 2008). For example, the score of the lexicalized production NP(President) → NP(President) PP(of) can be computed from the following features:

f(NP(President) → NP(President) PP(of)) =
    {NP(*) → NP(*) PP(*),
     NP(President) → NP(President) PP(*),
     NP(*) → NP(*) PP(of),
     NP(President) → NP(President) PP(of)}

The first feature scores the unlexicalized production NP → NP PP; the next two features lexicalize only one element of the production, thereby scoring the appropriateness of NP attachment for the individual words President and of; the final feature scores the specific bilexical affinity of President and of. For bilexical pairs that are encountered frequently in the treebank, this bilexical feature can play an important role in parsing; for pairs that are absent or rare, regularization will drive its weight to zero, forcing the parser to rely on the more coarse-grained features.

In chapter 14, we will encounter techniques for clustering words based on their distributional properties: the contexts in which they appear. Such a clustering would group rare and common words, such as whale, shark, beluga, Leviathan. Word clusters can be used as features in discriminative lexicalized parsing, striking a middle ground between full lexicalization and non-terminals (Finkel et al., 2008).
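The backoff feature templates above can be sketched as a small function. The string format of the feature names is invented for illustration; only the four levels of granularity come from the text.

```python
def lexicalized_features(lhs, lhs_head, left, left_head, right, right_head):
    """Backoff feature templates for a lexicalized binary production.

    Returns the four features described in the text, from the fully
    unlexicalized production down to the bilexical one.
    """
    return [
        f"{lhs}(*) -> {left}(*) {right}(*)",
        f"{lhs}({lhs_head}) -> {left}({left_head}) {right}(*)",
        f"{lhs}(*) -> {left}(*) {right}({right_head})",
        f"{lhs}({lhs_head}) -> {left}({left_head}) {right}({right_head})",
    ]

feats = lexicalized_features("NP", "President", "NP", "President", "PP", "of")
# feats[0] is the unlexicalized backoff; feats[3] is the bilexical feature
```

During training, a regularizer can zero out the weight of the rare bilexical feature while the coarser backoff features retain non-zero weights.
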
In this way, labeled examples containing relatively common words like whale can help to improve parsing for rare words like beluga, as long as those two words are clustered together.

10.5.3 *Refinement grammars

Lexicalization improves on context-free parsing by adding detailed information in the form of lexical heads. However, estimating the scores of lexicalized productions is difficult. Klein and Manning (2003) argue that the right level of linguistic detail is somewhere between treebank categories and individual words. Some parts-of-speech and non-terminals are truly substitutable: for example, cat/N and dog/N. But others are not: for example, the preposition of exclusively attaches to nouns, while the preposition as is more

7The real situation is even more difficult, because non-binary context-free grammars can involve trilexical or higher-order dependencies, between the head of the constituent and multiple of its children (Carreras et al., 2008).
likely to modify verb phrases. Klein and Manning (2003) obtained a 2% improvement in F-measure on a parent-annotated PCFG parser by making a single change: splitting the preposition category into six subtypes. They propose a series of linguistically-motivated refinements to the Penn Treebank annotations, which in total yielded a 40% error reduction.

The non-terminal refinement process can be automated by treating the refined categories as latent variables. For example, we might split the noun phrase non-terminal into NP1, NP2, NP3, . . . , without defining in advance what each refined non-terminal corresponds to. This can be treated as partially supervised learning, similar to the multi-component document classification model described in § 5.2.3. A latent variable PCFG can be estimated by expectation-maximization (Matsuzaki et al., 2005):8

• In the E-step, estimate a marginal distribution q over the refinement type of each non-terminal in each derivation. These marginals are constrained by the original annotation: an NP can be reannotated as NP4, but not as VP3. Marginal probabilities over refined productions can be computed from the inside-outside algorithm, as described in § 10.4.3, where the E-step enforces the constraints imposed by the original annotations.

• In the M-step, recompute the parameters of the grammar, by summing over the probabilities of anchored productions that were computed in the E-step:

E[count(X → Y Z)] = Σ_{i=0}^{M} Σ_{j=i}^{M} Σ_{k=i}^{j} p(X → Y Z, (i, j, k) | w).   [10.43]

As usual, this process can be iterated to convergence. To determine the number of refinement types for each tag, Petrov et al. (2006) apply a split-merge heuristic; Liang et al. (2007) and Finkel et al. (2007) apply Bayesian nonparametrics (Cohen, 2016). Some examples of refined non-terminals are shown in Table 10.4.
The proper nouns differentiate months, first names, middle initials, last names, first names of places, and second names of places; each of these will tend to appear in different parts of grammatical productions. The personal pronouns differentiate grammatical role, with PRP-0 appearing in subject position at the beginning of the sentence (note the capitalization), PRP-1 appearing in subject position but not at the beginning of the sentence, and PRP-2 appearing in object position.

Proper nouns
NNP-14  Oct. Nov. Sept.
NNP-12  John Robert James
NNP-2   J. E. L.
NNP-1   Bush Noriega Peters
NNP-15  New San Wall
NNP-3   York Francisco Street

Personal pronouns
PRP-0   It He I
PRP-1   it he they
PRP-2   it them him

Table 10.4: Examples of automatically refined non-terminals and some of the words that they generate (Petrov et al., 2006).

8Spectral learning, described in § 5.5.2, has also been applied to refinement grammars (Cohen et al., 2014).

10.6 Beyond context-free parsing

In the context-free setting, the score for a parse is a combination of the scores of individual productions. As we have seen, these models can be improved by using finer-grained non-terminals, via parent-annotation, lexicalization, and automated refinement. However, the inherent limitations to the expressiveness of context-free parsing motivate the consideration of other search strategies. These strategies abandon the optimality guaranteed by bottom-up parsing, in exchange for the freedom to consider arbitrary properties of the proposed parses.

10.6.1 Reranking

A simple way to relax the restrictions of context-free parsing is to perform a two-stage process, in which a context-free parser generates a k-best list of candidates, and a reranker then selects the best parse from this list (Charniak and Johnson, 2005; Collins and Koo, 2005). The reranker can be trained from an objective that is similar to multi-class classification: the goal is to learn weights that assign a high score to the reference parse, or to the parse on the k-best list that has the lowest error. In either case, the reranker need only evaluate the K best parses, and so no context-free assumptions are necessary. This opens the door to more expressive scoring functions:

• It is possible to incorporate arbitrary non-local features, such as the structural parallelism and right-branching orientation of the parse (Charniak and Johnson, 2005).

• Reranking enables the use of recursive neural networks, in which each constituent span w_{i+1:j} receives a vector u_{i,j} which is computed from the vector representations of its children, using a composition function that is linked to the production
rule (Socher et al., 2013), e.g.,

u_{i,j} = f (Θ_{X→Y Z} [u_{i,k}; u_{k,j}]).   [10.44]

The overall score of the parse can then be computed from the final vector, Ψ(τ) = θ · u_{0,M}.

Reranking can yield substantial improvements in accuracy. The main limitation is that it can only find the best parse among the K-best offered by the generator, so it is inherently limited by the ability of the bottom-up parser to find high-quality candidates.

10.6.2 Transition-based parsing

Structure prediction can be viewed as a form of search. An alternative to bottom-up parsing is to read the input from left to right, gradually building up a parse structure through a series of transitions. Transition-based parsing is described in more detail in the next chapter, in the context of dependency parsing. However, it can also be applied to CFG parsing, as briefly described here.

For any context-free grammar, there is an equivalent pushdown automaton, a model of computation that accepts exactly those strings that can be derived from the grammar. This computational model consumes the input from left to right, while pushing and popping elements on a stack. This architecture provides a natural transition-based parsing framework for context-free grammars, known as shift-reduce parsing. In shift-reduce parsing, the parser can take the following actions:

• shift the next terminal symbol onto the stack;
• unary-reduce the top item on the stack, using a unary production rule in the grammar;
• binary-reduce the top two items on the stack, using a binary production rule in the grammar.

The set of available actions is constrained by the situation: the parser can only shift if there are remaining terminal symbols in the input, and it can only reduce if an applicable production rule exists in the grammar.
If the parser arrives at a state where the input has been completely consumed, and the stack contains only the element S, then the input is accepted. If the parser arrives at a non-accepting state where there are no possible actions, the input is rejected. A parse error occurs if there is some action sequence that would accept an input, but the parser does not find it.
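The shift, unary-reduce, and binary-reduce actions can be sketched as an exhaustive recognizer. Unlike a real transition-based parser, this sketch backtracks over all action sequences by depth-first search, so it never suffers the search errors discussed below; the grammar encoding and function names are invented for illustration.

```python
def shift_reduce_accepts(words, unary_rules, binary_rules, goal="S"):
    """Exhaustive shift-reduce recognizer for a small CFG.

    `unary_rules`: set of (X, child) pairs, covering preterminal rules
    like (NP, "we") as well as unary non-terminal rules.
    `binary_rules`: set of (X, Y, Z) pairs.
    """
    def search(stack, i, seen):
        state = (stack, i)
        if state in seen:
            return False  # already explored (or in progress)
        seen.add(state)
        if i == len(words) and stack == (goal,):
            return True  # accepting state: input consumed, stack is [S]
        ok = False
        if i < len(words):  # shift
            ok = ok or search(stack + (words[i],), i + 1, seen)
        if stack:  # unary-reduce the top item
            for X, child in unary_rules:
                if child == stack[-1]:
                    ok = ok or search(stack[:-1] + (X,), i, seen)
        if len(stack) >= 2:  # binary-reduce the top two items
            for X, Y, Z in binary_rules:
                if stack[-2] == Y and stack[-1] == Z:
                    ok = ok or search(stack[:-2] + (X,), i, seen)
        return ok

    return search((), 0, set())

# Toy grammar for "we eat sushi", matching the worked example below.
unary = {("NP", "we"), ("V", "eat"), ("NP", "sushi")}
binary = {("S", "NP", "VP"), ("VP", "V", "NP")}
```

A practical shift-reduce parser replaces the exhaustive search with a classifier that picks one action per state, which is what makes it linear time but also fallible.
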
Example Consider the input we eat sushi and the grammar in Table 10.1. The input can be parsed through the following sequence of actions:

1. Shift the first token we onto the stack.
2. Reduce the top item on the stack to NP, using the production NP → we.
3. Shift the next token eat onto the stack, and reduce it to V with the production V → eat.
4. Shift the final token sushi onto the stack, and reduce it to NP. The input has been completely consumed, and the stack contains [NP, V, NP].
5. Reduce the top two items using the production VP → V NP. The stack now contains [VP, NP].
6. Reduce the top two items using the production S → NP VP. The stack now contains [S]. Since the input is empty, this is an accepting state.

One thing to notice from this example is that the number of shift actions is equal to the length of the input. The number of reduce actions is equal to the number of non-terminals in the analysis, which grows linearly in the length of the input. Thus, the overall time complexity of shift-reduce parsing is linear in the length of the input (assuming that the complexity of each individual classification decision is constant in the length of the input). This is far better than the cubic time complexity required by CKY parsing.

Transition-based parsing as inference In general, it is not possible to guarantee that a transition-based parser will find the optimal parse, argmax_τ Ψ(τ; w), even under the usual CFG independence assumptions. We could assign a score to each anchored parsing action in each context, with ψ(a, c) indicating the score of performing action a in context c. One might imagine that transition-based parsing could efficiently find the derivation that maximizes the sum of such scores.
But this too would require backtracking and searching over an exponentially large number of possible action sequences: if a bad decision is made at the beginning of the derivation, then it may be impossible to recover the optimal action sequence without backtracking to that early mistake. This is known as a search error. Transition-based parsers can incorporate arbitrary features, without the restrictive independence assumptions required by chart parsing; search errors are the price that must be paid for this flexibility.

Learning transition-based parsing Transition-based parsing can be combined with machine learning by training a classifier to select the correct action in each situation. This classifier is free to choose any feature of the input, the state of the parser, and the parse history. However, there is no optimality guarantee: the parser may choose a suboptimal parse, due to a mistake at the beginning of the analysis. Nonetheless, some of the strongest
CFG parsers are based on the shift-reduce architecture, rather than CKY. A recent generation of models links shift-reduce parsing with recurrent neural networks, updating a hidden state vector while consuming the input (e.g., Cross and Huang, 2016; Dyer et al., 2016). Learning algorithms for transition-based parsing are discussed in more detail in § 11.3.

Exercises

1. Design a grammar that handles English subject-verb agreement. Specifically, your grammar should handle the examples below correctly:

(10.4) a. She sings.
b. We sing.
(10.5) a. *She sing.
b. *We sings.

2. Extend your grammar from the previous problem to include the auxiliary verb can, so that the following cases are handled:

(10.6) a. She can sing.
b. We can sing.
(10.7) a. *She can sings.
b. *We can sings.

3. French requires subjects and verbs to agree in person and number, and it requires determiners and nouns to agree in gender and number. Verbs and their objects need not agree. Assuming that French has two genders (feminine and masculine), three persons (first [me], second [you], third [her]), and two numbers (singular and plural), how many productions are required to extend the following simple grammar to handle agreement?

S → NP VP
VP → V | V NP | V NP NP
NP → DET NN

4. Consider the grammar:
254 CHAPTER 10. CONTEXT-FREE PARSING

S → NP VP
VP → V NP
NP → JJ NP
NP → fish (the animal)
V → fish (the action of fishing)
JJ → fish (a modifier, as in fish sauce or fish stew)

Apply the CKY algorithm and identify all possible parses for the sentence fish fish fish fish.

5. Choose one of the possible parses for the previous problem, and show how it can be derived by a series of shift-reduce actions.

6. To handle VP coordination, a grammar includes the production VP → VP CC VP. To handle adverbs, it also includes the production VP → VP ADV. Assume all verbs are generated from a sequence of unary productions, e.g., VP → V → eat.
a) Show how to binarize the production VP → VP CC VP.
b) Use your binarized grammar to parse the sentence They eat and drink together, treating together as an adverb.
c) Prove that a weighted CFG cannot distinguish the two possible derivations of this sentence. Your explanation should focus on the productions in the original, non-binary grammar.
d) Explain what condition must hold for a parent-annotated WCFG to prefer the derivation in which together modifies the coordination eat and drink.

7. Consider the following PCFG:
p(X → X X) = 1/2   [10.45]
p(X → Y) = 1/2   [10.46]
p(Y → σ) = 1/|Σ|, ∀σ ∈ Σ   [10.47]
a) Compute the probability p(τ̂) of the maximum probability parse for a string w ∈ Σ^M.
b) Compute the conditional probability p(τ̂ | w).

8. Context-free grammars can be used to parse the internal structure of words. Using the weighted CKY algorithm and the following weighted context-free grammar, identify the best parse for the sequence of morphological segments in+flame+able.
S → V (weight 0)
S → N (weight 0)
S → J (weight 0)
V → VPref N (weight -1)
J → N JSuff (weight 1)
J → V JSuff (weight 0)
J → NegPref J (weight 1)
VPref → in+ (weight 2)
NegPref → in+ (weight 1)
N → flame (weight 0)
JSuff → +able (weight 0)

9. Use the inside and outside scores to compute the marginal probability p(X_{i+1:j} → Y_{i+1:k} Z_{k+1:j} | w), indicating that Y spans w_{i+1:k}, Z spans w_{k+1:j}, and X is the parent of Y and Z, spanning w_{i+1:j}.

10. Suppose that the potentials Ψ(X → α) are log-probabilities, so that Σ_α exp Ψ(X → α) = 1 for all X. Verify that the semiring inside recurrence from Equation 10.26 generates the log-probability log p(w) = log Σ_{τ:yield(τ)=w} p(τ).
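For Exercise 4 above, the CKY application to the all-fish grammar can be sketched in a few lines. This is an illustrative implementation, not from the text: the chart here counts the number of derivations of each nonterminal over each span, rather than storing backpointers.

```python
# Hedged sketch: CKY derivation counting for the grammar of Exercise 4.
from collections import defaultdict

# Binary rules map (left child, right child) -> parent; all terminals are "fish".
binary = {("NP", "VP"): "S", ("V", "NP"): "VP", ("JJ", "NP"): "NP"}
lexical = {"fish": ["NP", "V", "JJ"]}

def cky_counts(words):
    n = len(words)
    # chart[i][j] maps a nonterminal to its number of derivations over words[i:j]
    chart = [[defaultdict(int) for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for nt in lexical[w]:
            chart[i][i + 1][nt] += 1
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):  # split point
                for (left, right), parent in binary.items():
                    if chart[i][k][left] and chart[k][j][right]:
                        chart[i][j][parent] += chart[i][k][left] * chart[k][j][right]
    return chart[0][n]

print(cky_counts(["fish"] * 4)["S"])  # number of parses rooted in S: 2
```

The two parses correspond to splitting the sentence as NP(fish) + VP(fish fish fish) or NP(fish fish) + VP(fish fish).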
Chapter 11

Dependency parsing

The previous chapter discussed algorithms for analyzing sentences in terms of nested constituents, such as noun phrases and verb phrases. However, many of the key sources of ambiguity in phrase-structure analysis relate to questions of attachment: where to attach a prepositional phrase or complement clause, how to scope a coordinating conjunction, and so on. These attachment decisions can be represented with a more lightweight structure: a directed graph over the words in the sentence, known as a dependency parse. Syntactic annotation has shifted its focus to such dependency structures: at the time of this writing, the Universal Dependencies project offers more than 100 dependency treebanks for more than 60 languages.1 This chapter will describe the linguistic ideas underlying dependency grammar, and then discuss exact and transition-based parsing algorithms. The chapter will also discuss recent research on learning to search in transition-based structure prediction.

11.1 Dependency grammar

While dependency grammar has a rich history of its own (Tesnière, 1966; Kübler et al., 2009), it can be motivated by extension from the lexicalized context-free grammars that we encountered in the previous chapter (§ 10.5.2). Recall that lexicalization augments each non-terminal with a head word. The head of a constituent is identified recursively, using a set of head rules, as shown in Table 10.3. An example of a lexicalized context-free parse is shown in Figure 11.1a. In this sentence, the head of the S constituent is the main verb, scratch; this non-terminal then produces the noun phrase the cats, whose head word is cats, and from which we finally derive the word the. Thus, the word scratch occupies the central position for the sentence, with the word cats playing a supporting role. In turn, cats

1 universaldependencies.org
258 CHAPTER 11. DEPENDENCY PARSING

[Figure 11.1: (a) a lexicalized constituency parse of The cats scratch people with claws; (b) the corresponding unlabeled dependency tree. Dependency grammar is closely linked to lexicalized context-free grammars: each lexical head has a dependency path to every other word in the constituent. (This example is based on the lexicalization rules from § 10.5.2, which make the preposition the head of a prepositional phrase. In the more contemporary Universal Dependencies annotations, the head of with claws would be claws, so there would be an edge scratch → claws.)]

occupies the central position for the noun phrase, with the word the playing a supporting role.

The relationships between words in a sentence can be formalized in a directed graph, based on the lexicalized phrase-structure parse: create an edge (i, j) iff word i is the head of a phrase whose child is a phrase headed by word j. Thus, in our example, we would have scratch → cats and cats → the. We would not have the edge scratch → the, because although S(scratch) dominates DET(the) in the phrase-structure parse tree, it is not its immediate parent. These edges describe syntactic dependencies, a bilexical relationship between a head and a dependent, which is at the heart of dependency grammar.

Continuing to build out this dependency graph, we will eventually reach every word in the sentence, as shown in Figure 11.1b. In this graph — and in all graphs constructed in this way — every word has exactly one incoming edge, except for the root word, which is indicated by a special incoming arrow from above. Furthermore, the graph is weakly connected: if the directed edges were replaced with undirected edges, there would be a path between all pairs of nodes.
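These two properties (a unique incoming edge per word, and connectivity) can be checked directly when a parse is stored as a head array. The encoding and function name below are our own illustrative choices, not from the text:

```python
# Minimal sketch: a dependency parse stored as a head array, with a check
# that it forms a spanning tree (one root attachment, every word reachable).
def is_spanning_tree(heads):
    """heads[i] is the head of word i+1 (1-based positions); 0 denotes ROOT."""
    n = len(heads)
    if sum(1 for h in heads if h == 0) != 1:
        return False  # exactly one word may attach directly to ROOT
    children = {i: [] for i in range(n + 1)}
    for mod, head in enumerate(heads, start=1):
        children[head].append(mod)
    # traverse from ROOT; a tree reaches every word exactly once
    seen, frontier = set(), [0]
    while frontier:
        node = frontier.pop()
        for c in children[node]:
            if c in seen:
                return False
            seen.add(c)
            frontier.append(c)
    return len(seen) == n

# "The cats scratch people with claws", using the head rules of Figure 11.1
heads = [2, 3, 0, 3, 3, 5]  # The→cats, cats→scratch, scratch→ROOT, ...
print(is_spanning_tree(heads))  # True
```

A head array with a cycle, e.g. two words pointing at each other, fails the reachability check.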
From these properties, it can be shown that there are no cycles in the graph (or else at least one node would have to have more than one incoming edge), and therefore, the graph is a tree. Because the graph includes all vertices, it is a spanning tree.

11.1.1 Heads and dependents

A dependency edge implies an asymmetric syntactic relationship between the head and the dependent word, which is sometimes called the modifier. For a pair like the cats or cats scratch, how
11.1. DEPENDENCY GRAMMAR 259

do we decide which is the head? Here are some possible criteria:

• The head sets the syntactic category of the construction: for example, nouns are the heads of noun phrases, and verbs are the heads of verb phrases.
• The modifier may be optional while the head is mandatory: for example, in the sentence cats scratch people with claws, the subtrees cats scratch and cats scratch people are grammatical sentences, but with claws is not.
• The head determines the morphological form of the modifier: for example, in languages that require gender agreement, the gender of the noun determines the gender of the adjectives and determiners.
• Edges should first connect content words, and then connect function words.

These guidelines are not universally accepted, and they sometimes conflict. The Universal Dependencies (UD) project has attempted to identify a set of principles that can be applied to dozens of different languages (Nivre et al., 2016).2 These guidelines are based on the universal part-of-speech tags from chapter 8. They differ somewhat from the head rules described in § 10.5.2: for example, on the principle that dependencies should relate content words, the prepositional phrase with claws would be headed by claws, resulting in an edge scratch → claws, and another edge claws → with.

One objection to dependency grammar is that not all syntactic relations are asymmetric. One such relation is coordination (Popel et al., 2013): in the sentence Abigail and Max like kimchi (Figure 11.2), which word is the head of the coordinated noun phrase Abigail and Max? Choosing either Abigail or Max seems arbitrary; fairness argues for making and the head, but this seems like the least important word in the noun phrase, and selecting it would violate the principle of linking content words first.
The Universal Dependencies annotation system arbitrarily chooses the left-most item as the head — in this case, Abigail — and includes edges from this head to both Max and the coordinating conjunction and. These edges are distinguished by the labels CONJ (for the thing being conjoined) and CC (for the coordinating conjunction). The labeling system is discussed next.

11.1.2 Labeled dependencies

Edges may be labeled to indicate the nature of the syntactic relation that holds between the two elements. For example, in Figure 11.2, the label NSUBJ on the edge from like to Abigail indicates that the subtree headed by Abigail is the noun subject of the verb like; similarly, the label OBJ on the edge from like to kimchi indicates that the subtree headed by

2 The latest and most specific guidelines are available at universaldependencies.org/guidelines.html
[Figure 11.2: Labeled dependency parse of Abigail and Max like kimchi but not jook. In the Universal Dependencies annotation system, the left-most item of a coordination is the head.]

[Figure 11.3: A labeled dependency parse of I know New York pizza and this is not it !!, from the English UD Treebank (reviews-361348-0006).]

kimchi is the object.3 The negation not is treated as an adverbial modifier (ADVMOD) on the noun jook.

A slightly more complex example is shown in Figure 11.3. The multiword expression New York pizza is treated as a “flat” unit of text, with the elements linked by the COMPOUND relation. The sentence includes two clauses that are conjoined in the same way that noun phrases are conjoined in Figure 11.2. The second clause contains a copula verb (see § 8.1.1). For such clauses, we treat the “object” of the verb as the root — in this case, it — and label the verb as a dependent, with the COP relation. This example also shows how punctuation is treated, with the label PUNCT.

11.1.3 Dependency subtrees and constituents

Dependency trees hide information that would be present in a CFG parse. Often what is hidden is in fact irrelevant: for example, Figure 11.4 shows three different ways of

3 Earlier work distinguished direct and indirect objects (De Marneffe and Manning, 2008), but this has been dropped in version 2.0 of the Universal Dependencies annotation system.
[Figure 11.4: Three CFG analyses of the verb phrase ate dinner on the table with a fork: (a) flat; (b) Chomsky adjunction; (c) two-level (PTB-style). All three correspond to the single dependency representation in (d).]

representing prepositional phrase adjuncts to the verb ate. Because there is apparently no meaningful difference between these analyses, the Penn Treebank decides by convention to use the two-level representation (see Johnson, 1998, for a discussion). As shown in Figure 11.4d, these three cases all look the same in a dependency parse. But dependency grammar imposes its own set of annotation decisions, such as the identification of the head of a coordination (§ 11.1.1); without lexicalization, context-free grammar does not require either element in a coordination to be privileged in this way. Dependency parses can be disappointingly flat: for example, in the sentence Yesterday, Abigail was reluctantly giving Max kimchi, the root giving is the head of every dependency! The constituent parse arguably offers a more useful structural analysis for such cases.

Projectivity

Thus far, we have defined dependency trees as spanning trees over a graph in which each word is a vertex. As we have seen, one way to construct such trees is by connecting the heads in a lexicalized constituent parse. However, there are spanning trees that cannot be constructed in this way. Syntactic constituents are contiguous spans. In a spanning tree constructed from a lexicalized constituent parse, the head h of any constituent that spans the nodes from i to j must have a path to every node in this span. This property is known as projectivity, and projective dependency parses are a restricted class of spanning trees.
Informally, projectivity means that “crossing edges” are prohibited. The formal definition follows:
            % non-projective edges   % non-projective sentences
Czech       1.86%                    22.42%
English     0.39%                    7.63%
German      2.33%                    28.19%

Table 11.1: Frequency of non-projective dependencies in three languages (Kuhlmann and Nivre, 2010)

[Figure 11.5: An example of a non-projective dependency parse: Lucia ate a pizza yesterday which was vegetarian. The “crossing edge” arises from the relative clause which was vegetarian and the oblique temporal modifier yesterday.]

Definition 2 (Projectivity). An edge from i to j is projective iff all k between i and j are descendants of i. A dependency parse is projective iff all its edges are projective.

Figure 11.5 gives an example of a non-projective dependency graph in English. This dependency graph does not correspond to any constituent parse. As shown in Table 11.1, non-projectivity is more common in languages such as Czech and German. Even though relatively few dependencies are non-projective in these languages, many sentences have at least one such dependency. As we will soon see, projectivity has important algorithmic consequences.

11.2 Graph-based dependency parsing

Let y = {(i −r→ j)} represent a dependency graph, in which each edge is a relation r from head word i ∈ {1, 2, . . . , M, ROOT} to modifier j ∈ {1, 2, . . . , M}. The special node ROOT indicates the root of the graph, and M is the length of the input |w|. Given a scoring function Ψ(y, w; θ), the optimal parse is

ŷ = argmax_{y ∈ Y(w)} Ψ(y, w; θ),   [11.1]

where Y(w) is the set of valid dependency parses on the input w. As usual, the number of possible labels |Y(w)| is exponential in the length of the input (Wu and Chao, 2004).
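Definition 2 can be turned into a direct check on a head-array encoding of the parse. The sketch below (encoding and names are our own) tests each edge by climbing head pointers from every word inside the edge's span:

```python
# Hedged sketch of a projectivity check, following Definition 2: an edge
# from i to j is projective iff all k strictly between i and j descend from i.
def is_projective(heads):
    """heads[m-1] is the head (1-based position) of word m; 0 denotes ROOT."""
    n = len(heads)
    def descends_from(k, i):
        # follow head pointers upward from word k until we reach i or ROOT
        while k != i:
            if k == 0:
                return False
            k = heads[k - 1]
        return True
    for mod in range(1, n + 1):
        head = heads[mod - 1]
        lo, hi = sorted((mod, head))
        if not all(descends_from(k, head) for k in range(lo + 1, hi)):
            return False
    return True

# Projective: "The cats scratch people with claws" (heads as in Figure 11.1)
print(is_projective([2, 3, 0, 3, 3, 5]))        # True
# Non-projective: Figure 11.5, where pizza → vegetarian crosses yesterday → ate
print(is_projective([2, 0, 4, 2, 2, 8, 8, 4]))  # False
```

The second example fails because yesterday (position 5) lies between pizza (4) and vegetarian (8) but is not a descendant of pizza.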
11.2. GRAPH-BASED DEPENDENCY PARSING 263

[Figure 11.6: Feature templates for higher-order dependency parsing: first order (head h, modifier m); second order (h, sibling s, m; grandparent g, h, m); third order (g, h, s, m; h, t, s, m).]

Algorithms that search over this space of possible graphs are known as graph-based dependency parsers. In sequence labeling and constituent parsing, it was possible to search efficiently over an exponential space by choosing a feature function that decomposes into a sum of local feature vectors. A similar approach is possible for dependency parsing, by requiring the scoring function to decompose across dependency arcs:

Ψ(y, w; θ) = Σ_{i −r→ j ∈ y} ψ(i −r→ j, w; θ).   [11.2]

Dependency parsers that operate under this assumption are known as arc-factored, since the score of a graph is the sum of the scores of all arcs.

Higher-order dependency parsing The arc-factored decomposition can be relaxed to allow higher-order dependencies. In second-order dependency parsing, the scoring function may include grandparents and siblings, as shown by the templates in Figure 11.6. The scoring function is,

Ψ(y, w; θ) = Σ_{i −r→ j ∈ y} ψ_parent(i −r→ j, w; θ) + Σ_{k −r′→ i ∈ y} ψ_grandparent(i −r→ j, k, r′, w; θ)
           + Σ_{i −r′→ s ∈ y, s ≠ j} ψ_sibling(i −r→ j, s, r′, w; θ).   [11.3]

The top line computes a scoring function that includes the grandparent k; the bottom line computes a scoring function for each sibling s. For projective dependency graphs, there are efficient algorithms for second-order and third-order dependency parsing (Eisner, 1996; McDonald and Pereira, 2006; Koo and Collins, 2010); for non-projective dependency graphs, second-order dependency parsing is NP-hard (McDonald and Pereira, 2006). The specific algorithms are discussed in the next section.
11.2.1 Graph-based parsing algorithms

The distinction between projective and non-projective dependency trees (§ 11.1.3) plays a key role in the choice of algorithms. Because projective dependency trees are closely related to (and can be derived from) lexicalized constituent trees, lexicalized parsing algorithms can be applied directly. For the more general problem of parsing to arbitrary spanning trees, a different class of algorithms is required. In both cases, arc-factored dependency parsing relies on precomputing the scores ψ(i −r→ j, w; θ) for each potential edge. There are O(M²R) such scores, where M is the length of the input and R is the number of dependency relation types, and this is a lower bound on the time and space complexity of any exact algorithm for arc-factored dependency parsing.

Projective dependency parsing

Any lexicalized constituency tree can be converted into a projective dependency tree by creating arcs between the heads of constituents and their parents, so any algorithm for lexicalized constituent parsing can be converted into an algorithm for projective dependency parsing, by converting arc scores into scores for lexicalized productions. As noted in § 10.5.2, there are cubic time algorithms for lexicalized constituent parsing, which are extensions of the CKY algorithm. Therefore, arc-factored projective dependency parsing can be performed in cubic time in the length of the input.

Second-order projective dependency parsing can also be performed in cubic time, with minimal modifications to the lexicalized parsing algorithm (Eisner, 1996). It is possible to go even further, to third-order dependency parsing, in which the scoring function may consider great-grandparents, grand-siblings, and “tri-siblings”, as shown in Figure 11.6. Third-order dependency parsing can be performed in O(M⁴) time, which can be made practical through the use of pruning to eliminate unlikely edges (Koo and Collins, 2010).
Non-projective dependency parsing

In non-projective dependency parsing, the goal is to identify the highest-scoring spanning tree over the words in the sentence. The arc-factored assumption ensures that the score for each spanning tree will be computed as a sum over scores for the edges, which are precomputed. Based on these scores, we build a weighted connected graph. Arc-factored non-projective dependency parsing is then equivalent to finding the spanning tree that achieves the maximum total score, Ψ(y, w) = Σ_{i −r→ j ∈ y} ψ(i −r→ j, w). The Chu-Liu-Edmonds algorithm (Chu and Liu, 1965; Edmonds, 1967) computes this maximum directed spanning tree efficiently. It does this by first identifying the best incoming edge i −r→ j for each vertex j. If the resulting graph does not contain cycles, it is the maximum spanning tree. If there is a cycle, it is collapsed into a super-vertex, whose incoming and outgoing edges are based on the edges to the vertices in the cycle. The algorithm is
then applied recursively to the resulting graph, and the process repeats until a graph without cycles is obtained. The time complexity of identifying the best incoming edge for each vertex is O(M²R), where M is the length of the input and R is the number of relations; in the worst case, the number of cycles is O(M). Therefore, the complexity of the Chu-Liu-Edmonds algorithm is O(M³R). This complexity can be reduced to O(M²N) by storing the edge scores in a Fibonacci heap (Gabow et al., 1986). For more detail on graph-based parsing algorithms, see Eisner (1997) and Kübler et al. (2009).

Higher-order non-projective dependency parsing Given the tractability of higher-order projective dependency parsing, you may be surprised to learn that non-projective second-order dependency parsing is NP-hard. This can be proved by reduction from the vertex cover problem (Neuhaus and Bröker, 1997). A heuristic solution is to do projective parsing first, and then post-process the projective dependency parse to add non-projective edges (Nivre and Nilsson, 2005). More recent work has applied techniques for approximate inference in graphical models, including belief propagation (Smith and Eisner, 2008), integer linear programming (Martins et al., 2009), variational inference (Martins et al., 2010), and Markov Chain Monte Carlo (Zhang et al., 2014).

11.2.2 Computing scores for dependency arcs

The arc-factored scoring function ψ(i −r→ j, w; θ) can be defined in several ways:

Linear:      ψ(i −r→ j, w; θ) = θ · f(i −r→ j, w)   [11.4]
Neural:      ψ(i −r→ j, w; θ) = FeedForward([u_{wi}; u_{wj}]; θ)   [11.5]
Generative:  ψ(i −r→ j, w; θ) = log p(wj, r | wi)   [11.6]

Linear feature-based arc scores Linear models for dependency parsing incorporate many of the same features used in sequence labeling and discriminative constituent parsing.
These include:

• the length and direction of the arc;
• the words wi and wj linked by the dependency relation;
• the prefixes, suffixes, and parts-of-speech of these words;
• the neighbors of the dependency arc, w_{i−1}, w_{i+1}, w_{j−1}, w_{j+1};
• the prefixes, suffixes, and parts-of-speech of these neighbor words.
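A feature function producing these templates can be sketched as follows; the feature-string encoding and the function name are illustrative choices, not from the text:

```python
# Illustrative sketch of arc-factored feature extraction for an unlabeled parser.
def arc_features(head, mod, words, tags):
    """head, mod are 1-based word positions; positions outside the sentence
    (including 0, which we use for ROOT) back off to a <NONE> placeholder."""
    w = lambda k: words[k - 1] if 1 <= k <= len(words) else "<NONE>"
    t = lambda k: tags[k - 1] if 1 <= k <= len(tags) else "<NONE>"
    return [
        "len=%d" % abs(head - mod),
        "dir=%s" % ("R" if head < mod else "L"),
        "hw=%s,mw=%s" % (w(head), w(mod)),   # bilexical feature
        "hw=%s,mt=%s" % (w(head), t(mod)),   # back off modifier to its POS
        "ht=%s,mw=%s" % (t(head), w(mod)),   # back off head to its POS
        "ht=%s,mt=%s" % (t(head), t(mod)),   # fully backed-off POS pair
        # neighbors of the arc endpoints
        "hprev=%s" % w(head - 1), "hnext=%s" % w(head + 1),
        "mprev=%s" % w(mod - 1), "mnext=%s" % w(mod + 1),
    ]

words = "we eat sushi with chopsticks".split()
tags = ["PRP", "VBP", "NN", "IN", "NNS"]
print(arc_features(3, 5, words, tags)[:4])
```

Each of these strings would be mapped to a weight by the linear model; conjoining each with the arc label r yields the labeled-parsing version.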
Each of these features can be conjoined with the dependency edge label r. Note that features in an arc-factored parser can refer to words other than wi and wj. The restriction is that the features consider only a single arc.

Bilexical features (e.g., sushi → chopsticks) are powerful but rare, so it is useful to augment them with coarse-grained alternatives, by “backing off” to the part-of-speech or affix. For example, the following features are created by backing off to part-of-speech tags in an unlabeled dependency parser:

f(3 → 5, we eat sushi with chopsticks) = ⟨sushi → chopsticks, sushi → NNS, NN → chopsticks, NN → NNS⟩.

Regularized discriminative learning algorithms can then trade off between features at varying levels of detail. McDonald et al. (2005) take this approach as far as tetralexical features (e.g., (wi, wi+1, wj−1, wj)). Such features help to avoid choosing arcs that are unlikely due to the intervening words: for example, there is unlikely to be an edge between two nouns if the intervening span contains a verb. A large list of first- and second-order features is provided by Bohnet (2010), who uses a hashing function to store these features efficiently.

Neural arc scores Given vector representations x_i for each word wi in the input, a set of arc scores can be computed from a feedforward neural network:

ψ(i −r→ j, w; θ) = FeedForward([x_i; x_j]; θ_r),   [11.7]

where unique weights θ_r are available for each arc type (Pei et al., 2015; Kiperwasser and Goldberg, 2016). Kiperwasser and Goldberg (2016) use a feedforward network with a single hidden layer,

z = g(Θ_r [x_i; x_j] + b_r^(z))   [11.8]
ψ(i −r→ j) = β_r z + b_r^(y),   [11.9]

where Θ_r is a matrix, β_r is a vector, each b_r is a scalar, and the function g is an elementwise tanh activation function. The vector x_i can be set equal to the word embedding, which may be pre-trained or learned by backpropagation (Pei et al., 2015).
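A minimal sketch of the single-hidden-layer scorer in Equations 11.8 and 11.9, in plain Python; the dimensions and random initialization are illustrative assumptions:

```python
import math
import random

random.seed(0)
d, h = 4, 8  # embedding and hidden sizes (illustrative choices)

# One set of parameters per arc label r, following Kiperwasser and Goldberg (2016).
Theta_r = [[random.gauss(0, 0.1) for _ in range(2 * d)] for _ in range(h)]
b_z = [random.gauss(0, 0.1) for _ in range(h)]
beta_r = [random.gauss(0, 0.1) for _ in range(h)]
b_y = 0.0

def arc_score(x_i, x_j):
    """psi(i -r-> j): single-hidden-layer feedforward arc score."""
    x = x_i + x_j  # concatenation [x_i; x_j]
    z = [math.tanh(sum(w * v for w, v in zip(row, x)) + b)  # Eq. 11.8
         for row, b in zip(Theta_r, b_z)]
    return sum(w * v for w, v in zip(beta_r, z)) + b_y      # Eq. 11.9

x_head = [random.gauss(0, 1) for _ in range(d)]
x_mod = [random.gauss(0, 1) for _ in range(d)]
print(arc_score(x_head, x_mod))  # a real-valued score for this candidate arc
```

In practice this computation would be vectorized, and the scores for all O(M²) candidate arcs computed in a batch; the loop form here is only meant to make Equations 11.8 and 11.9 concrete.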
Alternatively, contextual information can be incorporated by applying a bidirectional recurrent neural network across the input, as
described in § 7.6. The RNN hidden states at each word can be used as inputs to the arc scoring function (Kiperwasser and Goldberg, 2016).

Feature-based arc scores are computationally expensive, due to the costs of storing and searching a huge table of weights. Neural arc scores can be viewed as a compact solution to this problem. Rather than working in the space of tuples of lexical features, the hidden layers of a feedforward network can be viewed as implicitly computing feature combinations, with each layer of the network evaluating progressively more words. An early paper on neural dependency parsing showed substantial speed improvements at test time, while also providing higher accuracy than feature-based models (Chen and Manning, 2014).

Probabilistic arc scores If each arc score is equal to the log probability log p(wj, r | wi), then the sum of scores gives the log probability of the sentence and arc labels, by the chain rule. For example, consider the unlabeled parse of we eat sushi with rice,

y = {(ROOT, 2), (2, 1), (2, 3), (3, 5), (5, 4)}   [11.10]
log p(w | y) = Σ_{(i→j)∈y} log p(w_j | w_i)   [11.11]
             = log p(eat | ROOT) + log p(we | eat) + log p(sushi | eat) + log p(rice | sushi) + log p(with | rice).   [11.12]

Probabilistic generative models are used in combination with expectation-maximization (chapter 5) for unsupervised dependency parsing (Klein and Manning, 2004).

11.2.3 Learning

Having formulated graph-based dependency parsing as a structure prediction problem, we can apply similar learning algorithms to those used in sequence labeling. Given a loss function ℓ(θ; w^(i), y^(i)), we can compute gradient-based updates to the parameters.
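As a check on the chain-rule decomposition in Equations 11.10 through 11.12, the parse log-probability is just a sum over arc log-probabilities. The conditional probabilities below are invented purely for illustration:

```python
import math

# Toy conditional probabilities p(modifier | head), invented for illustration.
logp = {("ROOT", "eat"): math.log(0.1), ("eat", "we"): math.log(0.3),
        ("eat", "sushi"): math.log(0.2), ("sushi", "rice"): math.log(0.05),
        ("rice", "with"): math.log(0.4)}

words = {0: "ROOT", 1: "we", 2: "eat", 3: "sushi", 4: "with", 5: "rice"}
parse = [(0, 2), (2, 1), (2, 3), (3, 5), (5, 4)]  # the arcs of Eq. 11.10

# Eq. 11.11: the parse log-probability is the sum of the arc log-probabilities
score = sum(logp[(words[i], words[j])] for i, j in parse)
print(round(score, 3))
```

Exponentiating the sum recovers the product of the five conditional probabilities, which is the chain-rule probability of the sentence given the tree.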
For a model with feature-based arc scores and a perceptron loss, we obtain the usual structured perceptron update,

ŷ = argmax_{y′ ∈ Y(w)} θ · f(w, y′)   [11.13]
θ = θ + f(w, y) − f(w, ŷ).   [11.14]

In this case, the argmax requires a maximization over all dependency trees for the sentence, which can be computed using the algorithms described in § 11.2.1. We can apply all the usual tricks from § 2.3: weight averaging, a large margin objective, and regularization. McDonald et al. (2005) were the first to treat dependency parsing as a structure
prediction problem, using MIRA, an online margin-based learning algorithm. Neural arc scores can be learned in the same way, backpropagating from a margin loss to updates on the feedforward network that computes the score for each edge.

A conditional random field for arc-factored dependency parsing is built on the probability model,

p(y | w) = exp( Σ_{i −r→ j ∈ y} ψ(i −r→ j, w; θ) ) / Σ_{y′ ∈ Y(w)} exp( Σ_{i −r→ j ∈ y′} ψ(i −r→ j, w; θ) ).   [11.15]

Such a model is trained to minimize the negative log conditional likelihood. Just as in CRF sequence models (§ 7.5.3) and the logistic regression classifier (§ 2.5), the gradients involve marginal probabilities p(i −r→ j | w; θ), which in this case are probabilities over individual dependencies. In arc-factored models, these probabilities can be computed in polynomial time. For projective dependency trees, the marginal probabilities can be computed in cubic time, using a variant of the inside-outside algorithm (Lari and Young, 1990). For non-projective dependency parsing, marginals can also be computed in cubic time, using the matrix-tree theorem (Koo et al., 2007; McDonald et al., 2007; Smith and Smith, 2007). Details of these methods are described by Kübler et al. (2009).

11.3 Transition-based dependency parsing

Graph-based dependency parsing offers exact inference, meaning that it is possible to recover the best-scoring parse for any given model. But this comes at a price: the scoring function is required to decompose into local parts — in the case of non-projective parsing, these parts are restricted to individual arcs. These limitations are felt more keenly in dependency parsing than in sequence labeling, because second-order dependency features are critical to correctly identify some types of attachments.
For example, prepositional phrase attachment depends on the attachment point, the object of the preposition, and the preposition itself; arc-factored scores cannot account for all three of these features simultaneously. Graph-based dependency parsing may also be criticized on the basis of intuitions about human language processing: people read and listen to sentences sequentially, incrementally building mental models of the sentence structure and meaning before getting to the end (Jurafsky, 1996). This seems hard to reconcile with graph-based algorithms, which perform bottom-up operations on the entire sentence, requiring the parser to keep every word in memory. Finally, from a practical perspective, graph-based dependency parsing is relatively slow, running in cubic time in the length of the input.

Transition-based algorithms address all three of these objections. They work by moving through the sentence sequentially, while performing actions that incrementally update a stored representation of what has been read thus far. As with the shift-reduce
11.3. TRANSITION-BASED DEPENDENCY PARSING 269

parser from § 10.6.2, this representation consists of a stack, onto which parsing substructures can be pushed and popped. In shift-reduce, these substructures were constituents; in the transition systems that follow, they will be projective dependency trees over partial spans of the input.4 Parsing is complete when the input is consumed and there is only a single structure on the stack. The sequence of actions that led to the parse is known as the derivation. One problem with transition-based systems is that there may be multiple derivations for a single parse structure — a phenomenon known as spurious ambiguity.

11.3.1 Transition systems for dependency parsing

A transition system consists of a representation for describing configurations of the parser, and a set of transition actions, which manipulate the configuration. There are two main transition systems for dependency parsing: arc-standard, which is closely related to shift-reduce, and arc-eager, which adds an additional action that can simplify derivations (Abney and Johnson, 1991). In both cases, transitions are between configurations that are represented as triples, C = (σ, β, A), where σ is the stack, β is the input buffer, and A is the list of arcs that have been created (Nivre, 2008). In the initial configuration,

C_initial = ([ROOT], w, ∅),   [11.16]

indicating that the stack contains only the special node ROOT, the entire input is on the buffer, and the set of arcs is empty. An accepting configuration is,

C_accept = ([ROOT], ∅, A),   [11.17]

where the stack contains only ROOT, the buffer is empty, and the arcs A define a spanning tree over the input. The arc-standard and arc-eager systems define a set of transitions between configurations, which are capable of transforming an initial configuration into an accepting configuration.
In both of these systems, the number of actions required to parse an input grows linearly in the length of the input, making transition-based parsing considerably more efficient than graph-based methods.

Arc-standard

The arc-standard transition system is closely related to shift-reduce, and to the LR algorithm that is used to parse programming languages (Aho et al., 2006). It includes the following classes of actions:

• SHIFT: move the first item from the input buffer on to the top of the stack,
(σ, i|β, A) ⇒ (σ|i, β, A),   [11.18]

4 Transition systems also exist for non-projective dependency parsing (e.g., Nivre, 2008).
270 CHAPTER 11. DEPENDENCY PARSING

where we write i|β to indicate that i is the leftmost item in the input buffer, and σ|i to indicate the result of pushing i on to stack σ.

• ARC-LEFT: create a new left-facing arc of type r between the item on the top of the stack and the first item in the input buffer. The head of this arc is j, which remains at the front of the input buffer. The arc j −r→ i is added to A. Formally,

(σ|i, j|β, A) ⇒ (σ, j|β, A ⊕ j −r→ i),   [11.19]

where r is the label of the dependency arc, and ⊕ concatenates the new arc j −r→ i to the list A.

• ARC-RIGHT: create a new right-facing arc of type r between the item on the top of the stack and the first item in the input buffer. The head of this arc is i, which is "popped" from the stack and pushed to the front of the input buffer. The arc i −r→ j is added to A. Formally,

(σ|i, j|β, A) ⇒ (σ, i|β, A ⊕ i −r→ j),   [11.20]

where again r is the label of the dependency arc.

Each action has preconditions. The SHIFT action can be performed only when the buffer has at least one element. The ARC-LEFT action cannot be performed when the root node ROOT is on top of the stack, since this node must be the root of the entire tree. The ARC-LEFT and ARC-RIGHT actions remove the modifier words from the stack (in the case of ARC-LEFT) and from the buffer (in the case of ARC-RIGHT), so it is impossible for any word to have more than one parent. Furthermore, the end state can only be reached when every word is removed from the buffer and stack, so the set of arcs is guaranteed to constitute a spanning tree. An example arc-standard derivation is shown in Table 11.2.

Arc-eager dependency parsing

In the arc-standard transition system, a word is completely removed from the parse once it has been made the modifier in a dependency arc. At this time, any dependents of this word must have already been identified.
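As a concrete illustration, the three arc-standard actions and their preconditions can be sketched in a few lines of Python. The function names and the (head, modifier) arc representation are illustrative choices, not from the text; a final SHIFT is appended so that the last ARC-RIGHT (which, under Equation 11.20, pushes ROOT back onto the buffer) ends in the accepting configuration ([ROOT], ∅, A).

```python
# Sketch of the arc-standard transition system over configurations
# C = (stack, buffer, arcs). Arcs are stored as (head, modifier) pairs.

ROOT = "ROOT"

def shift(stack, buffer, arcs):
    assert buffer, "SHIFT requires a non-empty buffer"
    return stack + [buffer[0]], buffer[1:], arcs

def arc_left(stack, buffer, arcs):
    # The buffer-front word j becomes the head of the stack-top word i,
    # which is popped: (sigma|i, j|beta, A) => (sigma, j|beta, A + j->i).
    assert stack and stack[-1] != ROOT and buffer
    return stack[:-1], buffer, arcs + [(buffer[0], stack[-1])]

def arc_right(stack, buffer, arcs):
    # The stack-top word i becomes the head of the buffer-front word j;
    # i is popped from the stack and replaces j at the front of the buffer.
    assert stack and buffer
    return stack[:-1], [stack[-1]] + buffer[1:], arcs + [(stack[-1], buffer[0])]

# Replay the derivation of Table 11.2, plus a final SHIFT to return
# ROOT to the stack.
config = ([ROOT], "they like bagels with lox".split(), [])
for action in [shift, arc_left, shift, shift, shift,
               arc_left, arc_right, arc_right, arc_right, shift]:
    config = action(*config)
stack, buffer, arcs = config
```

Running this leaves the accepting configuration in `stack` and `buffer`, with the five unlabeled arcs of Table 11.2 accumulated in `arcs`.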
Right-branching structures are common in English (and many other languages), with words often modified by units such as prepositional phrases to their right. In the arc-standard system, this means that we must first shift all the units of the input onto the stack, and then work backwards, creating a series of arcs, as occurs in Table 11.2. Note that the decision to shift bagels onto the stack guarantees that the prepositional phrase with lox will attach to the noun phrase, and that this decision must be made before the prepositional phrase is itself parsed. This has been argued to be cognitively implausible (Abney and Johnson, 1991); from a computational perspective, it means that a parser may need to look several steps ahead to make the correct decision.

Jacob Eisenstein. Draft of November 13, 2018.
σ | β | action | arc added to A
1. [ROOT] | they like bagels with lox | SHIFT |
2. [ROOT, they] | like bagels with lox | ARC-LEFT | (they ← like)
3. [ROOT] | like bagels with lox | SHIFT |
4. [ROOT, like] | bagels with lox | SHIFT |
5. [ROOT, like, bagels] | with lox | SHIFT |
6. [ROOT, like, bagels, with] | lox | ARC-LEFT | (with ← lox)
7. [ROOT, like, bagels] | lox | ARC-RIGHT | (bagels → lox)
8. [ROOT, like] | bagels | ARC-RIGHT | (like → bagels)
9. [ROOT] | like | ARC-RIGHT | (ROOT → like)
10. [ROOT] | ∅ | DONE |

Table 11.2: Arc-standard derivation of the unlabeled dependency parse for the input they like bagels with lox.

Arc-eager dependency parsing changes the ARC-RIGHT action so that right dependents can be attached before all of their dependents have been found. Rather than removing the modifier from both the buffer and stack, the ARC-RIGHT action pushes the modifier on to the stack, on top of the head. Because the stack can now contain elements that already have parents in the partial dependency graph, two additional changes are necessary:

• A precondition is required to ensure that the ARC-LEFT action cannot be applied when the top element on the stack already has a parent in A.

• A new REDUCE action is introduced, which can remove elements from the stack if they already have a parent in A:

(σ|i, β, A) ⇒ (σ, β, A).   [11.21]

As a result of these changes, it is now possible to create the arc like → bagels before parsing the prepositional phrase with lox. Furthermore, this action does not imply a decision about whether the prepositional phrase will attach to the noun or verb. Noun attachment is chosen in the parse in Table 11.3, but verb attachment could be achieved by applying the REDUCE action at step 5 or 7.

Projectivity

The arc-standard and arc-eager transition systems are guaranteed to produce projective dependency trees, because all arcs are between the word at the top of the stack and the
left-most edge of the buffer (Nivre, 2008). Non-projective transition systems can be constructed by adding actions that create arcs with words that are second or third in the stack (Attardi, 2006), or by adopting an alternative configuration structure, which maintains a list of all words that do not yet have heads (Covington, 2001). In pseudo-projective dependency parsing, a projective dependency parse is generated first, and then a set of graph transformation techniques are applied, producing non-projective edges (Nivre and Nilsson, 2005).

σ | β | action | arc added to A
1. [ROOT] | they like bagels with lox | SHIFT |
2. [ROOT, they] | like bagels with lox | ARC-LEFT | (they ← like)
3. [ROOT] | like bagels with lox | ARC-RIGHT | (ROOT → like)
4. [ROOT, like] | bagels with lox | ARC-RIGHT | (like → bagels)
5. [ROOT, like, bagels] | with lox | SHIFT |
6. [ROOT, like, bagels, with] | lox | ARC-LEFT | (with ← lox)
7. [ROOT, like, bagels] | lox | ARC-RIGHT | (bagels → lox)
8. [ROOT, like, bagels, lox] | ∅ | REDUCE |
9. [ROOT, like, bagels] | ∅ | REDUCE |
10. [ROOT, like] | ∅ | REDUCE |
11. [ROOT] | ∅ | DONE |

Table 11.3: Arc-eager derivation of the unlabeled dependency parse for the input they like bagels with lox.

Beam search

In "greedy" transition-based parsing, the parser tries to make the best decision at each configuration. This can lead to search errors, when an early decision locks the parser into a poor derivation. For example, in Table 11.2, if ARC-RIGHT were chosen at step 4, then the parser would later be forced to attach the prepositional phrase with lox to the verb like. Note that the like → bagels arc is indeed part of the correct dependency parse, but the arc-standard transition system requires it to be created later in the derivation.

Beam search is a general technique for ameliorating search errors in incremental decoding.5 While searching, the algorithm maintains a set of partially-complete hypotheses, called a beam.
5Beam search is used throughout natural language processing, and beyond. In this text, it appears again in coreference resolution (§ 15.2.4) and machine translation (§ 18.4).

At step t of the derivation, there is a set of k hypotheses, each of which
includes a score s_t^(k) and a set of dependency arcs A_t^(k):

h_t^(k) = (s_t^(k), A_t^(k))   [11.22]

Each hypothesis is then "expanded" by considering the set of all valid actions from the current configuration c_t^(k), written A(c_t^(k)). This yields a large set of new hypotheses. For each action a ∈ A(c_t^(k)), we score the new hypothesis A_t^(k) ⊕ a. The top k hypotheses by this scoring metric are kept, and parsing proceeds to the next step (Zhang and Clark, 2008). Note that beam search requires a scoring function for action sequences, rather than individual actions. This issue will be revisited in the next section.

[Figure 11.7 diagram: a trellis of configurations from t = 1 to t = 5 for the input they can fish, connected by SHIFT, ARC-LEFT, and ARC-RIGHT transitions.]

Figure 11.7: Beam search for unlabeled dependency parsing, with beam size K = 2. The arc lists for each configuration are not shown, but can be computed from the transitions.

Figure 11.7 shows the application of beam search to dependency parsing, with a beam size of K = 2. For the first transition, the only valid action is SHIFT, so there is only one possible configuration at t = 2. From this configuration, there are three possible actions. The two best scoring actions are ARC-RIGHT and ARC-LEFT, and so the resulting hypotheses from these actions are on the beam at t = 3. From these configurations, there are three possible actions each, but the best two are expansions of the bottom hypothesis at t = 3. Parsing continues until t = 5, at which point both hypotheses reach an accepting state. The best-scoring hypothesis is then selected as the parse.

11.3.2 Scoring functions for transition-based parsers

Transition-based parsing requires selecting a series of actions.
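Before turning to scoring functions, the expand-and-prune loop just described can be sketched as follows. The callback-style API (functions for enumerating valid actions, applying them, and scoring them) and the parameter names are hypothetical, not from the text; a real parser would also carry the arc list of each hypothesis, as in Equation 11.22.

```python
import heapq

def beam_search(initial_config, valid_actions, apply_action,
                score_action, is_accepting, beam_size=2):
    """Keep the top-k hypotheses, scored by the sum of their per-action
    scores, until every hypothesis reaches an accepting configuration."""
    beam = [(0.0, initial_config)]
    while not all(is_accepting(c) for _, c in beam):
        expanded = []
        for score, config in beam:
            if is_accepting(config):
                expanded.append((score, config))   # keep finished hypotheses
                continue
            for a in valid_actions(config):
                expanded.append((score + score_action(a, config),
                                 apply_action(a, config)))
        # prune: retain only the k best-scoring hypotheses
        beam = heapq.nlargest(beam_size, expanded, key=lambda h: h[0])
    return max(beam, key=lambda h: h[0])   # best-scoring complete hypothesis
```

The loop keeps completed hypotheses on the beam until all hypotheses are complete, matching the behavior in Figure 11.7 where both hypotheses reach an accepting state before the best is selected.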
In greedy transition-based parsing, this can be done by training a classifier,

â = argmax_{a ∈ A(c)} Ψ(a, c, w; θ),   [11.23]

where A(c) is the set of admissible actions in the current configuration c, w is the input, and Ψ is a scoring function with parameters θ (Yamada and Matsumoto, 2003). A feature-based score can be computed, Ψ(a, c, w) = θ · f(a, c, w), using features that may consider any aspect of the current configuration and input sequence. Typical features for transition-based dependency parsing include: the word and part-of-speech of the top
element on the stack; the word and part-of-speech of the first, second, and third elements on the input buffer; pairs and triples of words and parts-of-speech from the top of the stack and the front of the buffer; the distance (in tokens) between the element on the top of the stack and the element in the front of the input buffer; the number of modifiers of each of these elements; and higher-order dependency features as described above in the section on graph-based dependency parsing (see, e.g., Zhang and Nivre, 2011).

Parse actions can also be scored by neural networks. For example, Chen and Manning (2014) build a feedforward network in which the input layer consists of the concatenation of embeddings of several words and tags:

• the top three words on the stack, and the first three words on the buffer;
• the first and second leftmost and rightmost children (dependents) of the top two words on the stack;
• the leftmost and rightmost grandchildren of the top two words on the stack;
• embeddings of the part-of-speech tags of these words.

Let us call this base layer x(c, w), defined as,

c = (σ, β, A)
x(c, w) = [v_{wσ1}, v_{tσ1}, v_{wσ2}, v_{tσ2}, v_{wσ3}, v_{tσ3}, v_{wβ1}, v_{tβ1}, v_{wβ2}, v_{tβ2}, ...],

where v_{wσ1} is the embedding of the first word on the stack, v_{tβ2} is the embedding of the part-of-speech tag of the second word on the buffer, and so on. Given this base encoding of the parser state, the score for the set of possible actions is computed through a feedforward network,

z = g(Θ^(x→z) x(c, w))   [11.24]
ψ(a, c, w; θ) = Θ_a^(z→y) z,   [11.25]

where the vector z plays the same role as the features f(a, c, w), but is a learned representation. Chen and Manning (2014) use a cubic elementwise activation function, g(x) = x³, so that the hidden layer models products across all triples of input features. The learning algorithm updates the embeddings as well as the parameters of the feedforward network.
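A minimal numerical sketch of this scoring architecture follows. The dimensions are placeholders and the parameters are random stand-ins for learned ones; the 18-input layout loosely follows the feature list above and is not the exact configuration used by Chen and Manning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder sizes: 18 embedded inputs of dimension 50, hidden size 200,
# and three unlabeled actions (SHIFT, ARC-LEFT, ARC-RIGHT).
n_inputs, d_embed, d_hidden, n_actions = 18, 50, 200, 3

theta_xz = rng.normal(scale=0.01, size=(d_hidden, n_inputs * d_embed))
theta_zy = rng.normal(scale=0.01, size=(n_actions, d_hidden))

def score_actions(x):
    """Compute psi(a, c, w) for every action a, following equations
    11.24-11.25: a cubic elementwise activation over a linear layer,
    followed by a linear output layer."""
    z = (theta_xz @ x) ** 3      # g(x) = x^3, applied elementwise
    return theta_zy @ z          # one score per action

x = rng.normal(size=n_inputs * d_embed)   # stand-in for the base layer x(c, w)
scores = score_actions(x)
best_action = int(np.argmax(scores))
```

In training, gradients would flow through both weight matrices and into the embeddings that make up x(c, w).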
11.3.3 Learning to parse

Transition-based dependency parsing suffers from a mismatch between the supervision, which comes in the form of dependency trees, and the classifier's prediction space, which is a set of parsing actions. One solution is to create new training data by converting parse trees into action sequences; another is to derive supervision directly from the parser's performance.
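The first of these solutions — converting gold trees into action sequences — can be sketched for the arc-standard system as follows. This is one possible static oracle (it attaches left dependents eagerly, and attaches a right dependent only once that word has collected all of its own dependents); the function name and head-array encoding are illustrative, and the sketch assumes a projective tree with a single child of ROOT.

```python
def arc_standard_oracle(heads):
    """Map a gold dependency tree to an arc-standard action sequence.

    heads[j] is the head of word j, with words numbered from 1 and
    0 standing for ROOT; heads[0] is an unused placeholder.
    """
    n = len(heads) - 1
    pending = [0] * (n + 1)            # unattached dependents of each word
    for j in range(1, n + 1):
        pending[heads[j]] += 1
    stack, buffer, actions = [0], list(range(1, n + 1)), []
    while buffer != [0]:
        i, j = stack[-1], buffer[0]
        if i != 0 and heads[i] == j and pending[i] == 0:
            actions.append("ARC-LEFT")     # arc j -> i; pop i
            pending[j] -= 1
            stack.pop()
        elif heads[j] == i and pending[j] == 0:
            actions.append("ARC-RIGHT")    # arc i -> j; i replaces j in buffer
            pending[i] -= 1
            stack.pop()
            buffer[0] = i
        else:
            actions.append("SHIFT")
            stack.append(buffer.pop(0))
    actions.append("SHIFT")                # return ROOT to the stack
    return actions

# Gold heads for "they like bagels with lox":
# they->like, like->ROOT, bagels->like, with->lox, lox->bagels
actions = arc_standard_oracle([0, 2, 0, 2, 5, 3])
```

On this example, the oracle reproduces the derivation of Table 11.2 (plus a final SHIFT that returns ROOT to the stack).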
Oracle-based training

A transition system can be viewed as a function from action sequences (derivations) to parse trees. The inverse of this function is a mapping from parse trees to derivations, which is called an oracle. For the arc-standard and arc-eager parsing system, an oracle can be computed in linear time in the length of the derivation (Kübler et al., 2009, page 32).

Both the arc-standard and arc-eager transition systems suffer from spurious ambiguity: there exist dependency parses for which multiple derivations are possible, such as 1 ← 2 → 3. The oracle must choose between these different derivations. For example, the algorithm described by Kübler et al. (2009) would first create the left arc (1 ← 2), and then create the right arc, (1 ← 2) → 3; another oracle might begin by shifting twice, resulting in the derivation 1 ← (2 → 3).

Given such an oracle, a dependency treebank can be converted into a set of oracle action sequences {A^(i)}_{i=1}^N. The parser can be trained by stepping through the oracle action sequences, and optimizing on a classification-based objective that rewards selecting the oracle action. For transition-based dependency parsing, maximum conditional likelihood is a typical choice (Chen and Manning, 2014; Dyer et al., 2015):

p(a | c, w) = exp Ψ(a, c, w; θ) / Σ_{a′ ∈ A(c)} exp Ψ(a′, c, w; θ)   [11.26]

θ̂ = argmax_θ Σ_{i=1}^N Σ_{t=1}^{|A^(i)|} log p(a_t^(i) | c_t^(i), w),   [11.27]

where |A^(i)| is the length of the action sequence A^(i).

Recall that beam search requires a scoring function for action sequences. Such a score can be obtained by adding the log-likelihoods (or hinge losses) across all actions in the sequence (Chen and Manning, 2014).

Global objectives

The objective in Equation 11.27 is locally-normalized: it is the product of normalized probabilities over individual actions.
A similar characterization could be made of non-probabilistic algorithms in which hinge-loss objectives are summed over individual actions. In either case, training on individual actions can be sub-optimal with respect to global performance, due to the label bias problem (Lafferty et al., 2001; Andor et al., 2016). As a stylized example, suppose that a given configuration appears 100 times in the training data, with action a1 as the oracle action in 51 cases, and a2 as the oracle action in the other 49 cases. However, in cases where a2 is correct, choosing a1 results in a cascade of subsequent errors, while in cases where a1 is correct, choosing a2 results in only a single
error. A classifier that is trained on a local objective function will learn to always choose a1, but choosing a2 would minimize the overall number of errors.

This observation motivates a global objective, such as the globally-normalized conditional likelihood,

p(A^(i) | w; θ) = exp Σ_{t=1}^{|A^(i)|} Ψ(a_t^(i), c_t^(i), w) / Σ_{A′ ∈ A(w)} exp Σ_{t=1}^{|A′|} Ψ(a′_t, c′_t, w),   [11.28]

where the denominator sums over the set of all possible action sequences, A(w).6 In the conditional random field model for sequence labeling (§ 7.5.3), it was possible to compute this sum explicitly, using dynamic programming. In transition-based parsing, this is not possible. However, the sum can be approximated using beam search,

Σ_{A′ ∈ A(w)} exp Σ_{t=1}^{|A′|} Ψ(a′_t, c′_t, w) ≈ Σ_{k=1}^K exp Σ_{t=1}^{|A^(k)|} Ψ(a_t^(k), c_t^(k), w),   [11.29]

where A^(k) is an action sequence on a beam of size K. This gives rise to the following loss function,

L(θ) = − Σ_{t=1}^{|A^(i)|} Ψ(a_t^(i), c_t^(i), w) + log Σ_{k=1}^K exp Σ_{t=1}^{|A^(k)|} Ψ(a_t^(k), c_t^(k), w).   [11.30]

The derivatives of this loss involve expectations with respect to a probability distribution over action sequences on the beam.

*Early update and the incremental perceptron

When learning in the context of beam search, the goal is to learn a decision function so that the gold dependency parse is always reachable from at least one of the partial derivations on the beam. (The combination of a search algorithm (such as beam search) and a scoring function for actions is known as a policy.) To achieve this, we can make an early update as soon as the oracle action sequence "falls off" the beam, even before a complete analysis is available (Collins and Roark, 2004; Daumé III and Marcu, 2005). The loss can be based on the best-scoring hypothesis on the beam, or the sum of all hypotheses (Huang et al., 2012). For example, consider the beam search in Figure 11.7.
In the correct parse, fish is the head of dependency arcs to both of the other two words.

6Andor et al. (2016) prove that the set of globally-normalized conditional distributions is a strict superset of the set of locally-normalized conditional distributions, and that globally-normalized conditional models are therefore strictly more expressive.

In the arc-standard system,
this can be achieved only by using SHIFT for the first two actions. At t = 3, the oracle action sequence has fallen off the beam. The parser should therefore stop, and update the parameters by the gradient ∂/∂θ L(A_{1:3}^(i), {A_{1:3}^(k)}; θ), where A_{1:3}^(i) is the first three actions of the oracle sequence, and {A_{1:3}^(k)} is the beam.

This integration of incremental search and learning was first developed in the incremental perceptron (Collins and Roark, 2004). This method updates the parameters with respect to a hinge loss, which compares the top-scoring hypothesis and the gold action sequence, up to the current point t. Several improvements to this basic protocol are possible:

• As noted earlier, the gold dependency parse can be derived by multiple action sequences. Rather than checking for the presence of a single oracle action sequence on the beam, we can check if the gold dependency parse is reachable from the current beam, using a dynamic oracle (Goldberg and Nivre, 2012).

• By maximizing the score of the gold action sequence, we are training a decision function to find the correct action given the gold context. But in reality, the parser will make errors, and the parser is not trained to find the best action given a context that may not itself be optimal. This issue is addressed by various generalizations of the incremental perceptron, known as learning to search (Daumé III et al., 2009). Some of these methods are discussed in chapter 15.

11.4 Applications

Dependency parsing is used in many real-world applications: any time you want to know about pairs of words which might not be adjacent, you can use dependency arcs instead of regular expression search patterns. For example, you may want to match strings like delicious pastries, delicious French pastries, and the pastries are delicious. It is possible to search the Google n-grams corpus by dependency edges, finding the trend in how often a dependency edge appears over time.
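As a sketch of the difference between surface patterns and dependency patterns, matching a head → modifier pair over parsed sentences might look like the following; the arc representation is a hypothetical stand-in for a dependency-parsed corpus.

```python
def count_edge(parses, head, dep):
    """Count parsed sentences containing the dependency edge head -> dep.

    Each parse is a list of (head_lemma, modifier_lemma) arcs, so the match
    is insensitive to word order and to intervening modifiers.
    """
    return sum(any(h == head and m == dep for h, m in parse)
               for parse in parses)

parses = [
    [("write", "code"), ("code", "good")],    # "write good code"
    [("write", "letter")],                    # a different object
    [("help", "write"), ("write", "code")],   # "help write the code"
]
matches = count_edge(parses, "write", "code")
```

Both "write good code" and "help write the code" match the edge write → code, even though neither contains the surface bigram write code.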
For example, we might be interested in knowing when people started talking about writing code, but we also want write some code, write good code, write all the code, etc. The result of a search on the dependency edge write → code is shown in Figure 11.8. This capability has been applied to research in digital humanities, such as the analysis of gender in Shakespeare (Muralidharan and Hearst, 2013).

A classic application of dependency parsing is relation extraction, which is described
in chapter 17.

Figure 11.8: Google n-grams results for the bigram write code and the dependency arc write => code (and their morphological variants)

The goal of relation extraction is to identify entity pairs, such as

(MELVILLE, MOBY-DICK)
(TOLSTOY, WAR AND PEACE)
(MARQUÉZ, 100 YEARS OF SOLITUDE)
(SHAKESPEARE, A MIDSUMMER NIGHT'S DREAM),

which stand in some relation to each other (in this case, the relation is authorship). Such entity pairs are often referenced via consistent chains of dependency relations. Therefore, dependency paths are often a useful feature in supervised systems which learn to detect new instances of a relation, based on labeled examples of other instances of the same relation type (Culotta and Sorensen, 2004; Fundel et al., 2007; Mintz et al., 2009).

Cui et al. (2005) show how dependency parsing can improve automated question answering. Suppose you receive the following query:

(11.1) What percentage of the nation's cheese does Wisconsin produce?

The corpus contains this sentence:

(11.2) In Wisconsin, where farmers produce 28% of the nation's cheese, ...

The location of Wisconsin in the surface form of this string makes it a poor match for the query. However, in the dependency graph, there is an edge from produce to Wisconsin in both the question and the potential answer, raising the likelihood that this span of text is relevant to the question.

A final example comes from sentiment analysis. As discussed in chapter 4, the polarity of a sentence can be reversed by negation, e.g.
(11.3) There is no reason at all to believe the polluters will suddenly become reasonable.

By tracking the sentiment polarity through the dependency parse, we can better identify the overall polarity of the sentence, determining when key sentiment words are reversed (Wilson et al., 2005; Nakagawa et al., 2010).

Additional resources

More details on dependency grammar and parsing algorithms can be found in the manuscript by Kübler et al. (2009). For a comprehensive but whimsical overview of graph-based dependency parsing algorithms, see Eisner (1997). Jurafsky and Martin (2019) describe an agenda-based version of beam search, in which the beam contains hypotheses of varying lengths. New hypotheses are added to the beam only if their score is better than the worst item currently on the beam. Another search algorithm for transition-based parsing is easy-first, which abandons the left-to-right traversal order, and adds the highest-scoring edges first, regardless of where they appear (Goldberg and Elhadad, 2010). Goldberg et al. (2013) note that although transition-based methods can be implemented in linear time in the length of the input, naïve implementations of beam search will require quadratic time, due to the cost of copying each hypothesis when it is expanded on the beam. This issue can be addressed by using a more efficient data structure for the stack.

Exercises

1. The dependency structure 1 ← 2 → 3, with 2 as the root, can be obtained from more than one set of actions in arc-standard parsing. List both sets of actions that can obtain this parse. Don't forget about the edge ROOT → 2.

2. This problem develops the relationship between dependency parsing and lexicalized context-free parsing. Suppose you have a set of unlabeled arc scores {ψ(i → j)}_{i,j=1}^M ∪ {ψ(ROOT → j)}_{j=1}^M.
a) Assuming each word type occurs no more than once in the input ((i ≠ j) ⇒ (w_i ≠ w_j)), how would you construct a weighted lexicalized context-free grammar so that the score of any projective dependency tree is equal to the score of some equivalent derivation in the lexicalized context-free grammar?
b) Verify that your method works for the example They fish.
c) Does your method require the restriction that each word type occur no more than once in the input? If so, why?
d) *If your method required that each word type occur only once in the input, show how to generalize it.
3. In arc-factored dependency parsing of an input of length M, the score of a parse is the sum of M scores, one for each arc. In second-order dependency parsing, the total score is the sum over many more terms. How many terms are in the score of the parse for Figure 11.2, using a second-order dependency parser with grandparent and sibling features? Assume that a child of ROOT has no grandparent score, and that a node with no siblings has no sibling scores.

4. a) In the worst case, how many terms can be involved in the score of an input of length M, assuming second-order dependency parsing? Describe the structure of the worst-case parse. As in the previous problem, assume that there is only one child of ROOT, and that it does not have any grandparent scores.
b) What about third-order dependency parsing?

5. Provide the UD-style unlabeled dependency parse for the sentence Xi-Lan eats shoots and leaves, assuming shoots is a noun and leaves is a verb. Provide arc-standard and arc-eager derivations for this dependency parse.

6. Compute an upper bound on the number of successful derivations in arc-standard shift-reduce parsing for unlabeled dependencies, as a function of the length of the input, M. Hint: a lower bound is the number of projective dependency trees, the Catalan number (1/(M+1)) · (2M choose M).
9. Count all pairs of words grouped by the CONJ relation. Select all pairs of words (i, j) for which i and j each participate in CONJ relations at least five times. Compute and sort by the pointwise mutual information, which is defined in § 14.3 as,

PMI(i, j) = log [ p(i, j) / (p(i) p(j)) ].   [11.31]

Here, p(i) is the fraction of CONJ relations containing word i (in either position), and p(i, j) is the fraction of such relations linking i and j (in any order).

10. In § 4.2, we encountered lexical semantic relationships such as synonymy (same meaning), antonymy (opposite meaning), and hypernymy (i is a special case of j). Another relevant relation is co-hypernymy, which means that i and j share a hypernym. Of the top 20 pairs identified by PMI in the previous problem, how many participate in synsets that are linked by one of these four relations? Use WORDNET to check for these relations, and count a pair of words if any of their synsets are linked.
Part III Meaning 283