| { |
| "paper_id": "P93-1032", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:52:20.601500Z" |
| }, |
| "title": "AUTOMATIC ACQUISITION OF A LARGE SUBCATEGORIZATION DICTIONARY FROM CORPORA", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University Dept. of Linguistics", |
| "location": { |
| "addrLine": "Bldg. 100 Stanford", |
| "postCode": "94305-2150", |
| "region": "CA", |
| "country": "USA" |
| } |
| }, |
| "email": "manning@csli.stanford.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents a new method for producing a dictionary of subcategorization frames from unlabelled text corpora. It is shown that statistical filtering of the results of a finite state parser running on the output of a stochastic tagger produces high quality results, despite the error rates of the tagger and the parser. Further, it is argued that this method can be used to learn all subcategorization frames, whereas previous methods are not extensible to a general solution to the problem.", |
| "pdf_parse": { |
| "paper_id": "P93-1032", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents a new method for producing a dictionary of subcategorization frames from unlabelled text corpora. It is shown that statistical filtering of the results of a finite state parser running on the output of a stochastic tagger produces high quality results, despite the error rates of the tagger and the parser. Further, it is argued that this method can be used to learn all subcategorization frames, whereas previous methods are not extensible to a general solution to the problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Rule-based parsers use subcategorization information to constrain the number of analyses that are generated. For example, from subcategorization alone, we can deduce that the PP in (1) must be an argument of the verb, not a noun phrase modifier:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "(1) John put [NP the cactus] [PP on the table].", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "Knowledge of subcategorization also aids text generation programs and people learning a foreign language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "A subcategorization frame is a statement of what types of syntactic arguments a verb (or adjective) takes, such as objects, infinitives, thatclauses, participial clauses, and subcategorized prepositional phrases. In general, verbs and adjectives each appear in only a small subset of all possible argument subcategorization frames.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "A major bottleneck in the production of high-coverage parsers is assembling lexical information, such as subcategorization information. In early and much continuing work in computational linguistics, this information has been coded laboriously by hand. More recently, on-line versions of dictionaries that provide subcategorization information have become available to researchers (Hornby 1989, Procter 1978, Sinclair 1987). But this is the same method of obtaining subcategorizations: painstaking work by hand. We have simply passed the need for tools that acquire lexical information from the computational linguist to the lexicographer. [Footnote: Thanks to Julian Kupiec for providing the tagger on which this work depends and for helpful discussions and comments along the way. I am also indebted for comments on an earlier draft to Marti Hearst (whose comments were the most useful!), Hinrich Schütze, Penni Sibun, Mary Dalrymple, and others at Xerox PARC, where this research was completed during a summer internship; Stanley Peters; and the two anonymous ACL reviewers.]", |
| "cite_spans": [ |
| { |
| "start": 809, |
| "end": 821, |
| "text": "(Hornby 1989", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 822, |
| "end": 836, |
| "text": ", Procter 1978", |
| "ref_id": null |
| }, |
| { |
| "start": 837, |
| "end": 852, |
| "text": ", Sinclair 1987", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus there is a need for a program that can acquire a subcategorization dictionary from on-line corpora of unrestricted text:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Dictionaries with subcategorization information are unavailable for most languages (only a few recent dictionaries, generally targeted at nonnative speakers, list subcategorization frames). 2. No dictionary lists verbs from specialized subfields (as in I telneted to Princeton), but these could be obtained automatically from texts such as computer manuals. 3. Hand-coded lists are expensive to make, and invariably incomplete. 4. A subcategorization dictionary obtained automatically from corpora can be updated quickly and easily as different usages develop. Dictionaries produced by hand always substantially lag real language use.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "The last two points do not argue against the use of existing dictionaries, but show that the incomplete information that they provide needs to be supplemented with further knowledge that is best collected automatically.[1] The desire to combine hand-coded and automatically learned knowledge suggests that we should aim for a high precision learner (even at some cost in coverage), and that is the approach adopted here. [Footnote 1: A point made by Church and Hanks (1989). Arbitrary gaps in listing can be smoothed with a program such as the work presented here. For example, among the 27 verbs that most commonly cooccurred with from, Church and Hanks found 7 for which]", |
| "cite_spans": [ |
| { |
| "start": 306, |
| "end": 329, |
| "text": "Church and Hanks (1989)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": null |
| }, |
| { |
| "text": "Both in traditional grammar and modern syntactic theory, a distinction is made between arguments and adjuncts. In sentence (2), John is an argument and in the bathroom is an adjunct:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND DIFFICULTIES", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) Mary berated John in the bathroom.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND DIFFICULTIES", |
| "sec_num": null |
| }, |
| { |
| "text": "Arguments fill semantic slots licensed by a particular verb, while adjuncts provide information about sentential slots (such as time or place) that can be filled for any verb (of the appropriate aspectual type).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND DIFFICULTIES", |
| "sec_num": null |
| }, |
| { |
| "text": "While much work has been done on the argument/adjunct distinction (see the survey of distinctions in Pollard and Sag (1987, pp. 134-139)), and much other work presupposes this distinction, in practice it gets murky (like many things in linguistics). I will adhere to a conventional notion of the distinction, but a tension arises in the work presented here when judgments of argument/adjunct status reflect something other than frequency of cooccurrence, since it is actually cooccurrence data that a simple learning program like mine uses. I will return to this issue later.", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 136, |
| "text": "Pollard and Sag (1987, pp. 134-139)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND DIFFICULTIES", |
| "sec_num": null |
| }, |
| { |
| "text": "Different classifications of subcategorization frames can be found in each of the dictionaries mentioned above, and in other places in the linguistics literature. I will assume without discussion a fairly standard categorization of subcategorization frames into 19 classes (some parameterized for a preposition), a selection of which are shown below. [Footnote 1 (cont.): (Sinclair 1987). The learner presented here finds a subcategorization involving from for all but one of these 7 verbs (the exception being ferry, which was fairly rare in the training corpus).]", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 366, |
| "text": "(Sinclair 1987)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND DIFFICULTIES", |
| "sec_num": null |
| }, |
| { |
| "text": "IV, TV, DTV, THAT, NPTHAT, INF, NPINF, ING, P(prep)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "DEFINITIONS AND DIFFICULTIES", |
| "sec_num": null |
| }, |
| { |
| "text": "While work has been done on various sorts of collocation information that can be obtained from text corpora, the only research that I am aware of that has dealt directly with the problem of the automatic acquisition of subcategorization frames is a series of papers by Brent (Brent and Berwick 1991, Brent 1991, Brent 1992). Brent and Berwick (1991) took the approach of trying to generate very high precision data.[2] The input was hand-tagged text from the Penn Treebank, and they used a very simple finite state parser which ignored nearly all the input, but tried to learn from the sentences which seemed least likely to contain false triggers, mainly sentences with pronouns and proper names.[3] This was a consistent strategy which produced promising initial results. However, using hand-tagged text is clearly not a solution to the knowledge acquisition problem (as hand-tagging text is more laborious than collecting subcategorization frames), and so, in more recent papers, Brent has attempted learning subcategorizations from untagged text. Brent (1991) used a procedure for identifying verbs that was still very accurate, but which resulted in extremely low yields (it garnered as little as 3% of the information gained by his subcategorization learner running on tagged text, which itself ignored a huge percentage of the information potentially available). More recently, Brent (1992) substituted a very simple heuristic method to detect verbs (anything that occurs both with and without the suffix -ing in the text is taken as a potential verb, and every potential verb token is taken as an actual verb unless it is preceded by a determiner or a preposition other than to).[4] This is a rather simplistic and inadequate approach to verb detection, with a very high error rate. In this work I will use a stochastic part-of-speech tagger to detect verbs (and the part-of-speech of other words), and will suggest that this gives much better results.[5]", |
| "cite_spans": [ |
| { |
| "start": 275, |
| "end": 298, |
| "text": "(Brent and Berwick 1991", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 299, |
| "end": 311, |
| "text": ", Brent 1991", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 312, |
| "end": 324, |
| "text": ", Brent 1992", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 328, |
| "end": 352, |
| "text": "Brent and Berwick (1991)", |
| "ref_id": null |
| }, |
| { |
| "start": 1052, |
| "end": 1064, |
| "text": "Brent (1991)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "Leaving this aside, moving to either this last approach of Brent's or using a stochastic tagger undermines the consistency of the initial approach. Since the system now makes integral use of a high-error-rate component,[6] it makes little sense [Footnote 2: That is, data with very few errors.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "[Footnote 3: A false trigger is a clause in the corpus that one wrongly takes as evidence that a verb can appear with a certain subcategorization frame.] [Footnote 4: Actually, learning occurs only from verbs in the base or -ing forms; others are ignored (Brent 1992, p. 8).]", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 242, |
| "text": "(Brent 1992,", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "[Footnote 5: See Brent (1992, p. 9) for arguments against using a stochastic tagger; they do not seem very persuasive (in brief, there is a chance of spurious correlations, and it is difficult to evaluate composite systems).]", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 23, |
| "text": "Brent (1992, p. 9)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "[Footnote 6: On the order of a 5% error rate on each token for] for other components to be exceedingly selective about which data they use in an attempt to avoid as many errors as possible. Rather, it would seem more desirable to extract as much information as possible out of the text (even if it is noisy), and then to use appropriate statistical techniques to handle the noise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "There is a more fundamental reason to think that this is the right approach. Brent and Berwick's original program learned just five subcategorization frames (TV, THAT, NPTHAT, INF and NPINF). While at the time they suggested that \"we foresee no impediment to detecting many more,\" this has apparently not proved to be the case (in Brent (1992) only six are learned: the above plus DTV). It seems that the reason for this is that their approach has depended upon finding cues that are very accurate predictors for a certain subcategorization (that is, there are very few false triggers), such as pronouns for NP objects and to plus a finite verb for infinitives. However, for many subcategorizations there just are no highly accurate cues.[7] For example, some verbs subcategorize for the preposition in, such as the ones shown in (3): (3) a. Two women are assisting the police in c. We were traveling along in a noisy helicopter.", |
| "cite_spans": [ |
| { |
| "start": 331, |
| "end": 343, |
| "text": "Brent (1992)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "There just is no high accuracy cue for verbs that subcategorize for in. Rather one must collect cooccurrence statistics, and use significance testing, a mutual information measure or some other form of statistic to try and judge whether a particular verb subcategorizes for in or just sometimes [Footnote 6 (cont.): the stochastic tagger (Kupiec 1992), and a presumably higher error rate on Brent's technique for detecting verbs.] [Footnote 7: This inextensibility is also discussed by Hearst (1992).]", |
| "cite_spans": [ |
| { |
| "start": 317, |
| "end": 330, |
| "text": "(Kupiec 1992)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "appears with a locative phrase.[9] Thus, the strategy I will use is to collect as much (fairly accurate) information as possible from the text corpus, and then use statistical filtering to weed out false cues. [Footnote 8: A sample of 100 uses of in from the New York Times suggests that about 70% of uses are in postverbal contexts, but, of these, only about 15% are subcategorized complements (the rest being fairly evenly split between NP modifiers and time or place adjunct PPs).]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "PREVIOUS WORK", |
| "sec_num": null |
| }, |
| { |
| "text": "One month (approximately 4 million words) of the New York Times newswire was tagged using a version of Julian Kupiec's stochastic part-of-speech tagger (Kupiec 1992).[10] Subcategorization learning was then performed by a program that processed the output of the tagger. The program had two parts: a finite state parser ran through the text, parsing auxiliary sequences, noting complements after verbs, and collecting histogram-type statistics for the appearance of verbs in various contexts. A second process of statistical filtering then took the raw histograms and decided the best guess for what subcategorization frames each observed verb actually had.", |
| "cite_spans": [ |
| { |
| "start": 152, |
| "end": 165, |
| "text": "(Kupiec 1992)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "METHOD", |
| "sec_num": null |
| }, |
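The two-stage pipeline just described (a finite state parser emitting per-verb observations, followed by statistical filtering over the resulting histograms) can be sketched roughly as follows. This is an illustrative reconstruction, not the author's code; the function name and frame labels are assumptions:

```python
from collections import Counter, defaultdict

def collect_histograms(parser_output):
    """Histogram-collection stage (sketch): the finite state parser emits
    (verb, observed-frame) pairs, and we tally how often each verb token
    appeared, and with which frames. The filtering stage later decides
    which observed frames are genuine subcategorizations.

    parser_output: iterable of (verb, frame) pairs, e.g. ('put', 'NP-PP').
    """
    hist = defaultdict(Counter)
    for verb, frame in parser_output:
        hist[verb][frame] += 1
    return hist

# Hypothetical observations from the parser.
observations = [('put', 'NP-PP'), ('put', 'NP-PP'), ('put', 'NP'),
                ('retire', 'P(in)')]
hist = collect_histograms(observations)
```

The histogram for each verb records both its total frequency and the distribution of contexts it appeared in, which is exactly what the binomial filter described below needs.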
| { |
| "text": "The finite state parser essentially works as follows: it scans through text until it hits a verb or auxiliary; it parses any auxiliaries, noting whether the verb is active or passive; and then it parses complements following the verb until something recognized as a terminator of subcategorized arguments is reached.[11] Whatever has been found is entered in the histogram. The parser includes a simple NP recognizer (parsing determiners, possessives, adjectives, numbers and compound nouns) and various other rules to recognize certain cases that appeared frequently (such as direct quotations in either a normal or inverted, quotation-first, order). The parser does not learn from participles since an NP after them may be the subject rather than the object (e.g., the yawning man).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The finite state parser", |
| "sec_num": null |
| }, |
| { |
| "text": "The parser has 14 states and around 100 transitions. It outputs a list of elements occurring after the verb, and this list together with the record of whether the verb is passive yields the overall context in which the verb appears. The parser skips to the start of the next sentence in a few cases where things get complicated (such as on encountering a [Footnote 9: One cannot just collect verbs that always appear with in because many verbs have multiple subcategorization frames. As well as (3b), chip can also just be an IV: John chipped his tooth.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The finite state parser", |
| "sec_num": null |
| }, |
| { |
| "text": "1\u00b0Note that the input is very noisy text, including sports results, bestseller lists and all the other vagaries of a newswire.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The finite state parser", |
| "sec_num": null |
| }, |
| { |
| "text": "conjunction, the scope of which is ambiguous, or a relative clause, since there will be a gap somewhere within it which would give a wrong observation). However, there are many other things that the parser does wrong or does not notice (such as reduced relatives). One could continue to refine the parser (up to the limits of what can be recognized by a finite state device), but the strategy has been to stick with something simple that works a reasonable percentage of the time and then to filter its results to determine what subcategorizations verbs actually have. [Footnote 11: As well as a period, things like subordinating conjunctions mark the end of subcategorized arguments. Additionally, clausal complements such as those introduced by that function both as an argument and as a marker that this is the final argument.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The finite state parser", |
| "sec_num": null |
| }, |
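A toy version of this complement scanner might look like the following. It is only a sketch of the idea (the real parser has 14 states and around 100 transitions, and additionally handles auxiliaries, passives, and quotations); the tag names and the function are invented for illustration:

```python
# Assumed, simplified tag inventory; the actual tagger's tagset differs.
NP_TAGS = {'DET', 'ADJ', 'NOUN', 'NUM'}
TERMINATORS = {'.', 'SCONJ'}  # period, subordinating conjunction, etc.

def scan_complements(tagged):
    """Scan a list of (word, tag) pairs; after each verb, record the
    elements (NPs and PPs) seen before a terminator of subcategorized
    arguments. Returns a list of (verb, frame) pairs."""
    frames, i = [], 0
    while i < len(tagged):
        word, tag = tagged[i]
        if tag != 'VERB':
            i += 1
            continue
        verb, elems = word, []
        i += 1
        while i < len(tagged) and tagged[i][1] not in TERMINATORS:
            w, t = tagged[i]
            if t == 'ADP':                      # preposition: start of a PP
                elems.append('P(' + w + ')')
                i += 1
                while i < len(tagged) and tagged[i][1] in NP_TAGS:
                    i += 1                      # consume the PP's NP
            elif t in NP_TAGS:                  # det/adj/noun run: an NP
                elems.append('NP')
                while i < len(tagged) and tagged[i][1] in NP_TAGS:
                    i += 1
            else:
                i += 1
        frames.append((verb, elems))
    return frames

sent = [('John', 'NOUN'), ('put', 'VERB'), ('the', 'DET'),
        ('cactus', 'NOUN'), ('on', 'ADP'), ('the', 'DET'),
        ('table', 'NOUN'), ('.', '.')]
# scan_complements(sent) → [('put', ['NP', 'P(on)'])]
```

Note that, like the real parser, this sketch records raw postverbal context without distinguishing arguments from adjuncts; that disambiguation is left to the filtering stage.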
| { |
| "text": "Note that the parser does not distinguish between arguments and adjuncts.[12] Thus the frame it reports will generally contain too many things. Indicative results of the parser can be observed in Fig. 1, where the first line under each line of text shows the frames that the parser found. Because of mistakes, skipping, and recording adjuncts, the finite state parser records nothing or the wrong thing in the majority of cases, but, nevertheless, enough good data are found that the final subcategorization dictionary describes the majority of the subcategorization frames in which the verbs are used in this sample.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 195, |
| "end": 201, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The finite state parser", |
| "sec_num": null |
| }, |
| { |
| "text": "Filtering assesses the frames that the parser found (called cues below). A cue may be a correct subcategorization for a verb, or it may contain spurious adjuncts, or it may simply be wrong due to a mistake of the tagger or the parser. The filtering process attempts to determine whether one can be highly confident that a cue which the parser noted is actually a subcategorization frame of the verb in question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": null |
| }, |
| { |
| "text": "The method used for filtering is that suggested by Brent (1992). Let B_s be an estimated upper bound on the probability that a token of a verb that doesn't take the subcategorization frame s will nevertheless appear with a cue for s. If a verb appears m times in the corpus, and n of those times it cooccurs with a cue for s, then the probability that all the cues are false cues is bounded by the binomial distribution:", |
| "cite_spans": [ |
| { |
| "start": 51, |
| "end": 63, |
| "text": "Brent (1992)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": null |
| }, |
| { |
| "text": "\\sum_{i=n}^{m} \\frac{m!}{i!(m-i)!} B_s^i (1-B_s)^{m-i}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus the null hypothesis that the verb does not have the subcategorization frame s can be rejected if the above sum is less than some confidence level C (C = 0.02 in the work reported here). Brent was able to use extremely low values for B_s (since his cues were sparse but unlikely to be false cues), and indeed found the best performance with values of the order of 2^-8. However, using my parser, false cues are common. For example, when the recorded subcategorization is __ NP PP(of), it is likely that the PP should actually be attached to the NP rather than the verb. Hence I have used high bounds on the probability of cues being false cues for certain triggers (the values used range from 0.25 (for TV-P(of)) to 0.02). At the moment, the false cue rates B_s in my system have been set empirically. Brent (1992) discusses a method of determining values for the false cue rates automatically, and this technique or some similar form of automatic optimization could profitably be incorporated into my system. [Footnote 12: Except for the fact that it will only count the first of multiple PPs as an argument.]", |
| "cite_spans": [ |
| { |
| "start": 894, |
| "end": 906, |
| "text": "Brent (1992)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": null |
| }, |
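The filtering test above is a one-sided binomial tail test, and it is short enough to state directly in code. A minimal sketch, using the paper's symbols (m verb tokens, n cues, false cue rate B_s, confidence level C = 0.02); the function names are assumptions:

```python
from math import comb

def p_all_false_cues(m, n, B):
    """Probability of observing at least n cues for frame s among m tokens
    of a verb if every cue were a false cue occurring independently with
    probability B: the upper tail of a Binomial(m, B) distribution."""
    return sum(comb(m, i) * B**i * (1 - B)**(m - i) for i in range(n, m + 1))

def has_frame(m, n, B, C=0.02):
    """Reject the null hypothesis 'the verb lacks frame s' when the tail
    probability falls below the confidence level C (0.02 in the paper)."""
    return p_all_false_cues(m, n, B) < C
```

For instance, 10 cues out of 50 tokens comfortably pass the test with a low false cue rate such as 0.05, but fail it with the high 0.25 rate used for noisy triggers, which is exactly why the paper assigns per-trigger bounds.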
| { |
| "text": "The program acquired a dictionary of 4900 subcategorizations for 3104 verbs (an average of 1.6 per verb). Post-editing would reduce this slightly (a few repeated typos made it in, such as acknowlege, a few oddities such as the spelling garontee as a 'Cajun' pronunciation of guarantee, and a few cases of mistakes by the tagger which, for example, led it to regard lowlife as a verb several times by mistake). Nevertheless, this size already compares favorably with the size of some production MT systems (for example, the English dictionary for Siemens' METAL system lists about 2500 verbs (Adriaens and de Braekeleer 1992)). In general, all the verbs for which subcategorization frames were determined are in Webster's (Gove 1977) (the only noticed exceptions being certain instances of prefixing, such as overcook and repurchase), but a larger number of the verbs do not appear in the only dictionaries that list subcategorization frames (as their coverage of words tends to be more limited). Examples are fax, lambaste, skedaddle, sensationalize, and solemnize. Some idea of the growth of the subcategorization dictionary can be had from Table 1. The two basic measures of results are the information retrieval notions of recall and precision: how many of the subcategorization frames of the verbs were learned, and what percentage of the things in the induced dictionary are correct? I have done some preliminary work to answer these questions.", |
| "cite_spans": [ |
| { |
| "start": 720, |
| "end": 730, |
| "text": "(Gove 1977", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1142, |
| "end": 1149, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "[Figure 1 sample: a passage of New York Times text annotated with the parser's extracted frames (e.g. P(with), P(to)) and OK markers; see the caption below.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 1. A randomly selected sample of text from the New York Times, with what the parser could extract from the text on the second line and whether the resultant dictionary has the correct subcategorization for this occurrence shown on the third line (OK indicates that it does, while * indicates that it doesn't).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 23, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "For recall, we might ask how many of the uses of verbs in a text are captured by our subcategorization dictionary. For two randomly selected pieces of text from other parts of the New York Times newswire, a portion of which is shown in Fig. 1, out of 200 verbs, the acquired subcategorization dictionary listed 163 of the subcategorization frames that appeared. So the token recall rate is approximately 82%. This compares with a baseline accuracy of 32% that would result from always guessing TV (transitive verb) and a performance figure of 62% that would result from a system that correctly classified all TV and THAT verbs (the two most common types), but which got everything else wrong.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 236, |
| "end": 242, |
| "text": "Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "We can get a pessimistic lower bound on precision and recall by testing the acquired dictionary against some published dictionary.[13] For this test, 40 verbs were selected (using a random number generator) from a list of 2000 common verbs.[14] Table 2 gives the subcategorizations listed in the OALD (recoded where necessary according to my classification of subcategorizations) and those in the subcategorization dictionary acquired by my program, in a compressed format. Next to each verb, listing just a subcategorization frame means that it appears in both the OALD and my subcategorization dictionary; a subcategorization frame preceded by a minus sign (-) means that the subcategorization frame only appears in the OALD; and a subcategorization frame preceded by a plus sign (+) indicates one listed only in my program's subcategorization dictionary (i.e., one that is probably wrong).[15] The numbers are the number of cues that the program saw for each subcat [Footnote 13: The resulting figures will be considerably lower than the true precision and recall because the dictionary lists subcategorization frames that do not appear in the training corpus and vice versa. However, this is still a useful exercise to undertake, as one can attain a high token success rate by just being able to accurately detect the most common subcategorization frames.]", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 614, |
| "end": 621, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "[Footnote 14: The number 2000 is arbitrary, but was chosen following the intuition that one wanted to test the program's performance on verbs of at least moderate frequency.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "[Footnote 15: The verb redesign does not appear in the OALD, so its subcategorization entry was determined by me, based on the entry in the OALD for design.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "egorization frame (that is in the resulting subcategorization dictionary). Table 3 then summarizes the results from the previous table. Lower bounds for the precision and recall of my induced subcategorization dictionary are approximately 90% and 43% respectively (looking at types).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 136, |
| "text": "Table 3 then summarizes the results from the previous table.", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
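The type-level precision and recall reported here compare (verb, frame) pairs in the induced dictionary against those in a gold dictionary such as the OALD. A small sketch of that computation, with invented toy entries (the verbs and frames below are illustrative, not data from the paper's test set):

```python
def type_precision_recall(induced, gold):
    """induced, gold: dicts mapping verb -> set of frame labels.
    Precision: fraction of induced (verb, frame) pairs that are in gold.
    Recall: fraction of gold (verb, frame) pairs that were induced."""
    induced_pairs = {(v, f) for v, fs in induced.items() for f in fs}
    gold_pairs = {(v, f) for v, fs in gold.items() for f in fs}
    tp = len(induced_pairs & gold_pairs)
    return tp / len(induced_pairs), tp / len(gold_pairs)

# Toy example: everything induced is correct (precision 1.0), but two
# gold frames were never learned (recall 3/5 = 0.6).
induced = {'retire': {'IV', 'P(from)'}, 'put': {'TV'}}
gold = {'retire': {'IV', 'P(from)', 'P(to)'}, 'put': {'TV', 'NP-PP'}}
p, r = type_precision_recall(induced, gold)
```

As the text notes, figures computed this way are pessimistic: the gold dictionary lists frames absent from the training corpus, which depresses measured recall without reflecting a learner error.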
| { |
| "text": "The aim in choosing error bounds for the filtering procedure was to get a highly accurate dictionary at the expense of recall, and the lower bound precision figure of 90% suggests that this goal was achieved. The lower bound for recall appears less satisfactory. There is room for further work here, but this does represent a pessimistic lower bound (recall the 82% token recall figure above). Many of the more obscure subcategorizations for less common verbs never appeared in the modest-sized learning corpus, so the model had no chance to master them.[16]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "Further, the learning corpus may reflect language use more accurately than the dictionary. The OALD lists retire to NP and retire from NP as subcategorized PP complements, but not retire in NP. However, in the training corpus, the collocation retire in is much more frequent than retire to (or retire from). In the absence of differential error bounds, the program is always going to take such more frequent collocations as subcategorized. Actually, in this case, this seems to be the right result. While in can also be used to introduce a locative or temporal adjunct:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "(5) John retired from the army in 1945.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": null |
| }, |
| { |
| "text": "If in is being used similarly to to, so that the two sentences in (6) are equivalent, it seems that in should be regarded as a subcategorized complement of retire (and so the dictionary is incomplete).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "[np]", |
| "sec_num": null |
| }, |
| { |
| "text": "As a final example of the results, let us discuss verbs that subcategorize for from (cf. fn. 1 and Church and Hanks 1989) . The acquired subcategorization dictionary lists a subcategorization involving from for 97 verbs. Of these, 1 is an outright mistake, and 1 is a verb that does not appear in the Cobuild dictionary (reshape). Of the rest, 64 are listed as occurring with from in Cobuild and 31 are not. While in some of these latter cases it could be argued that the occurrences of from are adjuncts rather than arguments, there are also 16For example, agree about did not appear in the learning corpus (and only once in total in another two months of the New York Times newswire that I examined). While disagree about is common, agree about seems largely disused: people like to agree with people but disagree about topics. Table 2. Subcategorizations for 40 randomly selected verbs in OALD and acquired subcategorization dictionary (see text for key).", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 121, |
| "text": "Church and Hanks 1989)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 830, |
| "end": 837, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "[np]", |
| "sec_num": null |
| }, |
| { |
| "text": "This paper presented one method of learning subcategorizations, but there are other approaches one might try. For disambiguating whether a PP is subcategorized by a verb in the V NP PP environment, Hindle and Rooth (1991) used a t-score to determine whether the PP has a stronger association with the verb or the preceding NP. This method could be usefully incorporated into my parser, but it remains a special-purpose technique for one particular case. Another research direction would be making the parser stochastic as well, rather than it being a categorical finite state device that runs on the output of a stochastic tagger.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 221, |
| "text": "Hindle and Rooth (1991)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FUTURE DIRECTIONS", |
| "sec_num": null |
| }, |
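The t-score comparison attributed to Hindle and Rooth (1991) above can be sketched as follows. This is an illustrative reconstruction of the general idea only; the function signature and the small-p variance approximation are my assumptions, not a transcription of their method:

```python
from math import sqrt

def t_score(f_verb_prep, f_verb, f_noun_prep, f_noun):
    """Compare how strongly a preposition is associated with the verb
    versus the preceding noun; a large positive score favors verb
    attachment, a large negative one favors noun attachment."""
    p_v = f_verb_prep / f_verb   # estimate of P(prep | verb)
    p_n = f_noun_prep / f_noun   # estimate of P(prep | noun)
    # Variance of each Bernoulli estimate, approximated as p/f for small p.
    return (p_v - p_n) / sqrt(p_v / f_verb + p_n / f_noun)
```

A conventional cutoff such as |t| > 2 would then decide the attachment; scores near zero leave the ambiguity unresolved.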
| { |
| "text": "There are also some linguistic issues that remain. The most troublesome case for any English subcategorization learner is dealing with prepositional complements. As well as the issues discussed above, another question is how to represent the subcategorization frames of verbs that take a range of prepositional complements (but not all).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FUTURE DIRECTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "For example, put can take virtually any locative or directional PP complement, while lean is more choosy (due to facts about the world):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FUTURE DIRECTIONS", |
| "sec_num": null |
| }, |
| { |
| "text": "17My system tries to learn many more subcategorization frames, most of which are more difficult to detect accurately than the ones considered in Brent's work, so overall figures are not comparable. The recall figures presented in Brent (1992) gave the rate of recall out of those verbs which generated at least one cue of a given subcategorization rather than out of all verbs that have that subcategorization (pp. 17-19) , and are thus higher than the true recall rates from the corpus (observe in Table 3 that no cues were generated for infrequent verbs or subcategorization patterns). In Brent's earlier work (Brent 1991) , the error rates reported were for learning from tagged text. No error rates for running the system on untagged text were given and no recall figures were given for either system. The applications of this system are fairly obvious. For a parsing system, the current subcategorization dictionary could probably be incorporated as is, since the utility of the increase in coverage would almost certainly outweigh problems arising from the incorrect subcategorization frames in the dictionary. A lexicographer would want to review the results by hand. Nevertheless, the program clearly finds gaps in printed dictionaries (even ones prepared from machine-readable corpora, like Cobuild), as the above example with forbid showed. A lexicographer using this program might prefer it adjusted for higher recall, even at the expense of lower precision. When a seemingly incorrect subcategorization frame is listed, the lexicographer could then ask for the cues that led to the postulation of this frame, and proceed to verify or dismiss the examples presented.", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 242, |
| "text": "Brent (1992)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 410, |
| "end": 421, |
| "text": "(pp. 17-19)", |
| "ref_id": null |
| }, |
| { |
| "start": 612, |
| "end": 624, |
| "text": "(Brent 1991)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 499, |
| "end": 506, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "FUTURE DIRECTIONS", |
| "sec_num": null |
| }, |
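The footnote's point about recall denominators can be made concrete with a small sketch (all sets and names here are hypothetical, for illustration only): recall measured over verbs that generated at least one cue is never lower than recall over all verbs that have the frame in the gold standard.

```python
def recall_rates(gold, cued, learned):
    """gold: verbs listed with a frame in the reference dictionary;
    cued: verbs for which the corpus produced at least one cue for it;
    learned: verbs for which the system actually acquired the frame."""
    found = gold & learned
    corpus_recall = len(found) / len(gold)             # out of all gold verbs
    cue_recall = len(found & cued) / len(gold & cued)  # Brent-style denominator
    return corpus_recall, cue_recall
```

With four gold verbs of which only two ever produced a cue, learning exactly those two gives 100% cue-based recall but only 50% corpus recall, which is the inflation the footnote describes.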
| { |
| "text": "A final question is the applicability of the methods presented here to other languages. Assuming the existence of a part-of-speech lexicon for another language, Kupiec's tagger can be trivially modified to tag other languages (Kupiec 1992) . The finite state parser described here depends heavily on the fairly fixed word order of English, and so precisely the same technique could only be employed with other fixed word order languages. However, while it is quite unclear how Brent's methods could be applied to a free word order language, with the method presented here, there is a clear path forward. Languages that have free word order employ either case markers or agreement affixes on the head to mark arguments. Since the tagger provides this kind of morphological knowledge, it would be straightforward to write a similar program that determines the arguments of a verb using any combination of word order, case marking and head agreement markers, as appropriate for the language at hand. Indeed, since case-marking is in some ways more reliable than word order, the results for other languages might even be better than those reported here.", |
| "cite_spans": [ |
| { |
| "start": 226, |
| "end": 239, |
| "text": "(Kupiec 1992)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "FUTURE DIRECTIONS", |
| "sec_num": null |
| }, |
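The strategy this paragraph proposes, reading a verb's arguments off case markers rather than position, can be illustrated with a toy function. Everything here is an assumption for illustration (the tag inventory, the function name, the flat frame label); it is not code from the paper:

```python
def arguments_by_case(tagged_clause):
    """tagged_clause: (token, morphological tag) pairs from a tagger.
    Collect the case-marked NPs wherever they occur in the clause,
    ignoring word order entirely, and return a sorted frame label."""
    cases = sorted(tag for _, tag in tagged_clause
                   if tag in {"NOM", "ACC", "DAT", "GEN"})
    return "-".join(cases)
```

Because the label depends only on which cases appear, scrambling the clause leaves the detected frame unchanged, which is the property that makes the approach viable for free word order languages.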
| { |
| "text": "After establishing that it is desirable to be able to automatically induce the subcategorization frames of verbs, this paper examined a new technique for doing this. The paper showed that the technique of trying to learn from easily analyzable pieces of data is not extensible to all subcategorization frames, and, at any rate, the sparseness of appropriate cues in unrestricted texts suggests that a better strategy is to try to extract as much (noisy) information as possible from as much of the data as possible, and then to use statistical techniques to filter the results. Initial experiments suggest that this technique works at least as well as previously tried techniques, and yields a method that can learn all the possible subcategorization frames of verbs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONCLUSION", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Converting Large On-line Valency Dictionaries for NLP Applications: From PROTON Descriptions to METAL Frames", |
| "authors": [ |
| { |
| "first": "Geert", |
| "middle": [], |
| "last": "Adriaens", |
| "suffix": "" |
| }, |
| { |
| "first": "Gert", |
| "middle": [], |
| "last": "De Braekeleer", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of COLING-92", |
| "volume": "", |
| "issue": "", |
| "pages": "1182--1186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adriaens, Geert, and Gert de Braekeleer. 1992. Converting Large On-line Valency Dictionaries for NLP Applications: From PROTON Descrip- tions to METAL Frames. In Proceedings of COLING-92, 1182-1186.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Automatic Acquisition of Subcategorization Frames from Untagged Text", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [ |
| "R" |
| ], |
| "last": "Brent", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "209--214", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brent, Michael R. 1991. Automatic Acquisi- tion of Subcategorization Frames from Untagged Text. In Proceedings of the 29th Annual Meeting of the ACL, 209-214.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Robust Acquisition of Subcategorizations from Unrestricted Text: Unsupervised Learning with Syntactic Knowledge", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [ |
| "R" |
| ], |
| "last": "Brent", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brent, Michael R. 1992. Robust Acquisition of Subcategorizations from Unrestricted Text: Un- supervised Learning with Syntactic Knowledge.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Automatic Acquisition of Subcategorization Frames from Free Text Corpora", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [ |
| "R" |
| ], |
| "last": "Brent", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Berwick", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 4th DARPA Speech and Natural Language Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "MS, Johns Hopkins University, Baltimore, MD. Brent, Michael R., and Robert Berwick. 1991. Automatic Acquisition of Subcategorization Frames from Free Text Corpora. In Proceedings of the 4th DARPA Speech and Natural Language Workshop. Arlington, VA: DARPA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Word Association Norms, Mutual Information, and Lexicography", |
| "authors": [ |
| { |
| "first": "Kenneth", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Hanks", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Proceedings of the 27th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "76--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Church, Kenneth, and Patrick Hanks. 1989. Word Association Norms, Mutual Information, and Lexicography. In Proceedings of the 27th An- nual Meeting of the ACL, 76-83.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Webster's seventh new collegiate dictionary", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [ |
| "B" |
| ], |
| "last": "Gove", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gove, Philip B. (ed.). 1977. Webster's seventh new collegiate dictionary. Springfield, MA: G. & C. Merriam.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Automatic Acquisition of Hyponyms from Large Text Corpora", |
| "authors": [ |
| { |
| "first": "Marti", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of COLING-92", |
| "volume": "", |
| "issue": "", |
| "pages": "539--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hearst, Marti. 1992. Automatic Acquisition of Hyponyms from Large Text Corpora. In Pro- ceedings of COLING-92, 539-545.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Structural Ambiguity and Lexical Relations", |
| "authors": [ |
| { |
| "first": "Donald", |
| "middle": [], |
| "last": "Hindle", |
| "suffix": "" |
| }, |
| { |
| "first": "Mats", |
| "middle": [], |
| "last": "Rooth", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th Annual Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "229--236", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hindle, Donald, and Mats Rooth. 1991. Struc- tural Ambiguity and Lexical Relations. In Pro- ceedings of the 29th Annual Meeting of the ACL, 229-236.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Oxford Advanced Learner's Dictionary of Current English", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "S" |
| ], |
| "last": "Hornby", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hornby, A. S. 1989. Oxford Advanced Learner's Dictionary of Current English. Oxford: Oxford University Press. 4th edition.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Robust Part-of-Speech Tagging Using a Hidden Markov Model", |
| "authors": [ |
| { |
| "first": "Julian", |
| "middle": [ |
| "M" |
| ], |
| "last": "Kupiec", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computer Speech and Language", |
| "volume": "6", |
| "issue": "", |
| "pages": "225--242", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kupiec, Julian M. 1992. Robust Part-of-Speech Tagging Using a Hidden Markov Model. Com- puter Speech and Language 6:225-242.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Information-Based Syntax and Semantics", |
| "authors": [ |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Pollard", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [ |
| "A" |
| ], |
| "last": "Sag", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pollard, Carl, and Ivan A. Sag. 1987. Information-Based Syntax and Semantics. Stanford, CA: CSLI.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "chipped in to buy her a new TV. c. His letter was couched in conciliatory terms. But the majority of occurrences of in after a verb are NP modifiers or non-subcategorized locative phrases, such as those in (4). s (4) a. He gauged support for a change in the party leadership. b. He built a ranch in a new suburb.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "(6) a. John retired to Malibu. b. John retired in Malibu.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "-P(t0):19, NPINF:ll, --TV-P(for), --DTV, +TV:7 attribute: WV-P(to):67, +P(to):12 become: IV:406, XCOMP:142, --PP(Of) bridge: WV:6, +P(between):3 burden: WV:6, TV-P(with):5 calculate: THAT:I 1, TV:4, --WH, --NPINF, --TV-P(Up), --TV-V(into) depict: WV-P(as):10, IV:9, --NPING dig: WV:12, P(out):8, P(up):7, --IV, --TV-P (in), --TV-P (0lit), --TV-P (over), --TV-P (up), --P(for) drill: Tv-P(in):I4, TV:14, --IV, --P(FOR) emanate: P(from ):2 employ: TV:31,--TV-P(on),--TV-P(in),--TV-P(as), --NPINF encourage: NPINF:IO8, TV:60, --TV-P(in) , TV-P(in):16, --IV, --P(), --TV-P(together), --TV-P(up), --TV-P(out), --TV-P(away) mean: THAT:280, TV:73, NPINF:57, INF:41, ING:35, --TV-PP (to), --POSSING, --TV-PP (as) --DTV, --TV-PP (for) occupy: TV:17, --TV-P(in), --TV-P(with) prod: TV:4, Tv-e(into):3, --IV, --P(AT), against), -P (with), --IV tour: TV:9, IV:6, --P(IN) troop:--IV, -P0, [TV: trooping the color] wallow: P(in):2,--IV,-P(about),-P(around) water:WV:13,--IV,--WV-P(down), -}-THAT:6", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "text": "(8) a. John leaned against the wall b. *John leaned under the table c. *John leaned up the chute The program doesn't yet have a good way of representing classes of prepositions.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF1": { |
| "text": "Growth of subcategorization dictionary", |
| "type_str": "table", |
| "content": "<table><tr><td>Words</td><td>Verbs in</td><td>Subcats</td><td>Subcats</td></tr><tr><td>Processed</td><td>subcat</td><td>learned</td><td>learned</td></tr><tr><td>(million)</td><td>dictionary</td><td/><td>per verb</td></tr><tr><td>1.2</td><td>1856</td><td>2661</td><td>1.43</td></tr><tr><td>2.9</td><td>2689</td><td>4129</td><td>1.53</td></tr><tr><td>4.1</td><td>3104</td><td>4900</td><td>1.58</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Comparison of results with OALD For example, Cobuild does not list that forbid takes from-marked participial complements, but this is very well attested in the New York Times newswire, as the examples in (7) show: (7) a. The Constitution appears to forbid the", |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td/><td/><td/><td>some unquestionable omissions from the diction-</td></tr><tr><td>Word agree: all: annoy:</td><td colspan=\"3\">Subcategorization frames Right Wrong Out of Incorrect 6 8 0 1 0 1</td><td/><td>ary. general, as a former president who came to power through a coup, from taking office. b. Parents and teachers are forbidden from taking a lead in the project, and ...</td></tr><tr><td>assign: attribute: become: bridge: burden: calculate:</td><td>2 1 2 1 2 2</td><td>1 1 1</td><td colspan=\"2\">4 TV 1 P(to) 3 1 TV-P(between) 2 5</td><td>Unfortunately, for several reasons the results presented here are not directly comparable with those of Brent's systems. 17 However, they seem to represent at least a comparable level of performance.</td></tr><tr><td>chart:</td><td>1</td><td>1</td><td>1 DTV</td><td/></tr><tr><td>chop:</td><td>1</td><td/><td>3</td><td/></tr><tr><td>depict:</td><td>2</td><td/><td>3</td><td/></tr><tr><td>dig:</td><td>3</td><td/><td>9</td><td/></tr><tr><td>drill:</td><td>2</td><td/><td>4</td><td/></tr><tr><td>emanate:</td><td>1</td><td/><td>1</td><td/></tr><tr><td>employ:</td><td>1</td><td/><td>5</td><td/></tr><tr><td>encourage:</td><td>2</td><td/><td>3</td><td/></tr><tr><td>exact:</td><td>0</td><td/><td>2</td><td/></tr><tr><td>exclaim:</td><td>1</td><td/><td>3</td><td/></tr><tr><td>exhaust:</td><td>1</td><td/><td>1</td><td/></tr><tr><td>exploit:</td><td>1</td><td/><td>1</td><td/></tr><tr><td>fascinate:</td><td>1</td><td/><td>1</td><td/></tr><tr><td>flavor:</td><td>1</td><td/><td>2</td><td/></tr><tr><td>heat:</td><td>2</td><td/><td>4</td><td/></tr><tr><td>leak:</td><td>1</td><td/><td>5</td><td/></tr><tr><td>lock:</td><td>2</td><td/><td>8</td><td/></tr><tr><td>mean:</td><td>5</td><td/><td>10</td><td/></tr><tr><td>occupy:</td><td>1</td><td/><td>3</td><td/></tr><tr><td>prod:</td><td>2</td><td/><td>5</td><td/></tr><tr><td>redesign:</td><td>1</td><td/><td>4</td><td/></tr><tr><td>reiterate:</td><td>1</td><td/><td>2</td><td/></tr><tr><td>remark:</td><td>1</td><td>1</td><td>4 IV</td><td/></tr><tr><td>retire:</td><td>2</td><td>1</td><td>5 P(in)</td><td/></tr><tr><td>shed:</td><td>1</td><td/><td>2</td><td/></tr><tr><td>sift:</td><td>1</td><td/><td>3</td><td/></tr><tr><td>strive:</td><td>2</td><td/><td>6</td><td/></tr><tr><td>tour:</td><td>2</td><td/><td>3</td><td/></tr><tr><td>troop:</td><td>0</td><td/><td>3</td><td/></tr><tr><td>wallow:</td><td>1</td><td/><td>4</td><td/></tr><tr><td>water:</td><td>1</td><td>1</td><td>3 THAT</td><td/></tr><tr><td/><td>60</td><td>7</td><td>139</td><td/></tr><tr><td colspan=\"4\">Precision (percent right of ones learned):</td><td>90%</td></tr><tr><td colspan=\"4\">Recall (percent of OALD ones learned):</td><td>43%</td></tr></table>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |