{
"paper_id": "J90-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:54:58.010986Z"
},
"title": "WORD ASSOCIATION NORMS, ] /IUTUAL INFORMATION, AND LEXICOGRAPHY",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "N.J. Patrick Hanks Collins Publishers Glasgow",
"location": {
"country": "Scotland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The term word association is used in a very particular sense in the psycholinguistic literature. (Generally speaking, subjects respond quicker than normal to the word nurse if it follows a highly associated word such as doctor.) We will extend the term to provide the basis for a statistical description of a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type (content word/content word) to lexico-syntactic co-occurrence constraints between verbs and prepositions (content word/function word). This paper will propose an objective measure based on the information theoretic notion of mutual information, for estimating word association norms from computer readable corpora. (The standard method of obtaining word association norms, testing a few thousand :mbjects on a few hundred words, is both costly and unreliable.) The proposed measure, the association ratio, estimates word association norms directly from computer readable corpora, making it possible to estimate norms for tens of thousands of words.",
"pdf_parse": {
"paper_id": "J90-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "The term word association is used in a very particular sense in the psycholinguistic literature. (Generally speaking, subjects respond quicker than normal to the word nurse if it follows a highly associated word such as doctor.) We will extend the term to provide the basis for a statistical description of a variety of interesting linguistic phenomena, ranging from semantic relations of the doctor/nurse type (content word/content word) to lexico-syntactic co-occurrence constraints between verbs and prepositions (content word/function word). This paper will propose an objective measure based on the information theoretic notion of mutual information, for estimating word association norms from computer readable corpora. (The standard method of obtaining word association norms, testing a few thousand :mbjects on a few hundred words, is both costly and unreliable.) The proposed measure, the association ratio, estimates word association norms directly from computer readable corpora, making it possible to estimate norms for tens of thousands of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "It is common practice in linguistics to classify words not only on the basis of their meanings but also on the basis of their co-occurrence with other words. Running through the whole Firthian tradition, for example, is the theme that \"You shall know a word by the company it keeps\" (Firth, 1957) .",
"cite_spans": [
{
"start": 283,
"end": 296,
"text": "(Firth, 1957)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MEANING AND ASSOCIATION",
"sec_num": "1"
},
{
"text": "On the one hand, bank co-occurs with words and expression such as money, notes, loan, account, investment, clerk, official, manager, robbery, vaults, working in a, its actions, First National, of England, and so forth. On the other hand, we find bank co-occurring with river, swim, boat, east (and of course West and South, which have acquired special meanings of their own), on top of the, and of the Rhine. (Hanks 1987, p. 127) The search for increasingly delicate word classes is not new. In lexicography, for example, it goes back at least to the \"verb patterns\" described in Hornby's Advanced Learner's Dictionary (first edition 1948) . What is new is that facilities for the computational storage and analysis of large bodies of natural language have developed significantly in recent years, so that it is now becoming possible to test and apply informal assertions of this kind in a more rigorous way, and to see what company our words do keep.",
"cite_spans": [
{
"start": 73,
"end": 211,
"text": "notes, loan, account, investment, clerk, official, manager, robbery, vaults, working in a, its actions, First National, of England, and so",
"ref_id": null
},
{
"start": 409,
"end": 424,
"text": "(Hanks 1987, p.",
"ref_id": "BIBREF4"
},
{
"start": 619,
"end": 639,
"text": "(first edition 1948)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MEANING AND ASSOCIATION",
"sec_num": "1"
},
{
"text": "The proposed statistical description has a large number of potentially important applications, including: (a) constraining the language model both for speech recognition and optical character recognition (OCR), (b) providing disambiguation cues for parsing highly ambiguous syntactic structures such as noun compounds, conjunctions, and prepositional phrases, (c) retrieving texts from large databases (e.g. newspapers, patents), (d) enhancing the productivity of computational linguists in compiling lexicons of lexico-synWctic facts, and (e) enhancing the productivity of lexicographers in identifying normal and conventional usage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRACTICAL APPLICATIONS",
"sec_num": "2"
},
{
"text": "Consider the optical character recognizer (OCR) application. Suppose that we have an OCR device as in Kahan et al. (1987) , and it has assigned about equal probability to having recognized farm and form, where the context is either: (1) federal credit or (2) some of.",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "Kahan et al. (1987)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PRACTICAL APPLICATIONS",
"sec_num": "2"
},
{
"text": "The proposed association measure can make use of the fact that farm is much more likely in the first context and form is much more likely in the second to resolve the ambiguity. Note that alternative disambiguation methods based on syntactic constraints such as part of speech are unlikely to help in this case since both form and farm are commonly used as nouns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "/farm",
"sec_num": null
},
{
"text": "Word association norms are well known to be an important factor in psycholinguistic research, especially in the area of lexical retrieval. Generally speaking, subjects respond quicker than normal to the word nurse if it follows a highly associated word such as doctor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PSYCHOLINGUISTICS",
"sec_num": null
},
{
"text": "Some results and implications are summarized from reaction-time experiments in which subjects either (a) classified successive strings of letters as words and nonwords, or (b) pronounced the strings. Both types of response to words (e.g. BUTTER) were consistently faster when preceded by associated words (e.g. BREAD) rather than unassociated words (e.g. NURSE) (Meyer et al. 1975, p. 98) Much of this psycholinguistic research is based on empirical estimates of word association norms as in Palermo and Jenkins (1964) , perhaps the most influential study of its kind, though extremely small and somewhat dated. This study measured 200 words by asking a few thousand subjects to write down a word after each of the 200 words to be measured. Results are reported in tabular form, indicating which words were written down, and by how many subjects, factored by grade level and sex. The word doctor, for example, is reported on pp. 98-100 to be most often associated with nurse, followed by sick, health, medicine, hospital, man, sickness, lawyer, and about 70 more words.",
"cite_spans": [
{
"start": 362,
"end": 388,
"text": "(Meyer et al. 1975, p. 98)",
"ref_id": null
},
{
"start": 492,
"end": 518,
"text": "Palermo and Jenkins (1964)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PSYCHOLINGUISTICS",
"sec_num": null
},
{
"text": "We propose an alternative measure, the association ratio, for measuring word association norms, based on the information theoretic concept of mutual information. 1 The proposed measure is more objective and less costly than the subjective method employed in Palermo and Jenkins (1964) . The association ratio can be scaled up to provide robust estimates of word association norms for a large portion of the language. Using the association ratio measure, the five most associated words are, in order: dentists, nurses, treating, treat, and hospitals.",
"cite_spans": [
{
"start": 258,
"end": 284,
"text": "Palermo and Jenkins (1964)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "What is \"mutual information?\" According to Fano (1961) , if two points (words), x and y, have probabilities P(x) and P(y), then their mutual information, I(x,y), is defined to be",
"cite_spans": [
{
"start": 43,
"end": 54,
"text": "Fano (1961)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "P(x, y) I(x, y) =-log2 P(x)P(y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "Informally, mutual information compares the probability of observing x and y together (the joint probability) with the probabilities of observing x and y independently (chance) . If there is a genuine association between x and y, then the joint probability P(x,y) will be much larger than chance P(x) P(y), and consequently I(x,y) >> 0. If there is no interesting relationship between x and y, then P(x,y) P(x) P(y), and thus, I(x,y) ~ O. If x and y are in complementary distribution, then P(x,y) will be much less than P(x) P(y), forcing I(x,y) << 0.",
"cite_spans": [
{
"start": 168,
"end": 176,
"text": "(chance)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "In our application, word probabilities P(x) and P(y) are estimated by counting the number of observations of x and y in a corpus, f (x) andf(y), and normalizing by N, the size of the corpus. (Our examples use a number of different corpora with different sizes: 15 million words for the 1987 AP corpus, 36 million words for the 1988 AP corpus, and 8.6 million tokens for the tagged corpus.) Joint probabilities, P(x,y), are estimated by counting the number of times that x is followed by y in a window of w words, fw (x,y), and normalizing by N. The window size parameter allows us to look at different scales. Smaller window sizes will identify fixed expressions (idioms such as bread and butter) and other relations that hold over short ranges; larger window sizes will highlight semantic concepts and other relationships that hold over larger scales. Table 1 may help show the contrast. 2 In fixed expressions, such as bread and butter and drink and drive, the words of interest are separated by a fixed number of words and there is very little variance. In the 1988 AP, it was found that the two words are always exactly two words apart whenever they are found near each other (within five words). That is, the mean separation is two, and the variance is zero. Compounds also have very fixed word order (little variance), but the average separation is closer to one word rather than two. In contrast, relations such as man/woman are less fixed, as indicated by a larger variance in their separation. (The nearly zero value for the mean separation for man/women indicates the words appear about equally often in either order.) Lexical relations come in several varieties. There are some like refraining from that are fairly fixed, others such as coming from that may be separated by an argument, and still others like keeping from that are almost certain to be separated by an argument. The ideal window size is different in each case. For the remainder of this paper, the window size, w, will be set to five words as a compromise; this setting is large enough to show some of the constraints between verbs and arguments, but not so large that it would wash out constraints that make use of strict adjacency)",
"cite_spans": [],
"ref_spans": [
{
"start": 853,
"end": 860,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
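{
"text": "[Editor's sketch] A minimal Python sketch of the association ratio estimate described above: unigram counts f(x), f(y) and window counts fw(x, y) are normalized by N, and the ratio is reported on a log2 scale. The tokenization, window convention, and names here are illustrative assumptions, not the authors' implementation.\n\nfrom collections import Counter\nimport math\n\ndef association_ratio(tokens, x, y, w=5):\n    N = len(tokens)                     # corpus size\n    f = Counter(tokens)                 # f(x), f(y): unigram counts\n    # fw(x, y): number of times x is followed by y within a window of w words\n    fxy = sum(1 for i, t in enumerate(tokens) if t == x\n              for u in tokens[i + 1:i + w] if u == y)\n    if fxy <= 5:\n        return None                     # unstable when counts are small (see below)\n    # I(x, y) = log2( P(x, y) / (P(x) P(y)) ), with probabilities estimated by f/N\n    return math.log2((fxy / N) / ((f[x] / N) * (f[y] / N)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},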
{
"text": "Since the association ratio becomes unstable when the counts are very small, we will not discuss word pairs with f(x,y) _< 5. An improvement would make use of t-scores, and throw out pairs that were not significant. Unfortunately, this requires an estimate of the variance off(x,y), which goes beyond the scope of this paper. For the remainder of this paper, we will adopt the simple but arbitrary threshold, and ignore pairs with small counts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "Technically, the association ratio is different from mutual information in two respects. First, joint probabilities are supposed to be symmetric: P(x,y) = P(y, x), and thus, mutual information is also symmetric: I(x,y) = I(y, x). However, the association ratio is not symmetric, sincef(x, y) encodes linear precedence. (Recall thatf(x, y) denotes the number of times that word x appears before y in the window of w words, not the number of times the two words appear in either order.) Although we could fix this problem by redefiningf(x, y) to be symmetric (by averaging the matrix with its transpose), we have decided not to do so, since order information appears to be very interesting. Notice the asymmetry in the pairs in Table 2 (computed from 44 million words of 1988 AP text), illustrating a wide variety of biases ranging from sexism to syntax.",
"cite_spans": [],
"ref_spans": [
{
"start": 726,
"end": 733,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "Second, one might expect f(x, y) <_ f(x) and f(x, y) <_ f(y), but the way we have been counting, this needn't be the case if x and y happen to appear several times in the window. For example, given the sentence, \"Library workers were prohibited from saving books from this heap of ruins,\" which appeared in an AP story on April 1, 1988 , f(prohibited) = 1 and f(prohibited, from) = 2. This problem can be fixed by dividingf(x, y) by w -1 (which has the consequence of subtracting log2 (w -1) = 2 from our association ratio scores). This adjustment has the addi- tional beneft of assuring that Z f(x,y) = ~ f(x) = Zf(y) = N. When I(x, y) is large, the association ratio produces very credible results not unlike those reported in Palermo and Jenkins (1964) , as illustrated in Table 3 . In contrast, when I(x, y) ---: 0, the pairs are less interesting. (As a very rough rule; of thumb, we have observed that pairs with I(x, y) > 3 tend to be interesting, and pairs with smaller I(x, y) are generally not. One can make this statement precise by calibrating the measure with subjective measures. Alternatively, one could make estimates of the variance and then make statements about confidence levels, e.g. with 95% confidence, P(x, y) > e(x) P(y).)",
"cite_spans": [
{
"start": 729,
"end": 755,
"text": "Palermo and Jenkins (1964)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 776,
"end": 783,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
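{
"text": "[Editor's sketch] The adjustment just described amounts to dividing fw(x, y) by w - 1 before taking logs, which subtracts the constant log2(w - 1) from every score; with the paper's setting w = 5 that constant is exactly 2 bits:\n\nimport math\n\nw = 5\n\ndef adjusted(I):\n    # dividing fw(x, y) by w - 1 == subtracting log2(w - 1) from I(x, y)\n    return I - math.log2(w - 1)\n\nprint(round(adjusted(0.96), 2))  # -1.04, the shift cited in the next paragraph",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},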
{
"text": "If I(x, y) << 0, we would predict that x and y are in complementary distribution. However, we are rarely able to observe I(x, y) << 0 because our corpora are too small (and our measurement techniques are too crude). Suppose, for example, that both x and y appear about 10 times per million words of text. Then, P(x) = P(y) = 10 -5 and chance is P(x) P(x) = 10 -I\u00b0. Thus, to say that I(x, y) is much less than 0, we need to say that P(x, y) is much less than 10 -t\u00b0, a statement that is hard to make with much confidence given the size of presently available corpora. In fact, we cannot (easily) observe a probability less than 1/N ~ 10 -7, and therefore it is hard to know if I(x, y) is much less than chance or not, unless chance is very large. (In fact, the pair a... doctors in Table 3 , appears significantly less often than chance. But to justify this statement, we need to compensate for the window size (which shifts the score downward by 2.0, e.g. from 0.96 down to -1.04), and we need to estimate the standard deviation, using a method such as Good (1953). 4",
"cite_spans": [],
"ref_spans": [
{
"start": 781,
"end": 788,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "AN INFORMATION THEORETIC MEASURE",
"sec_num": "4"
},
{
"text": "Although the psycholinguistic literature documents the significance of noun/noun word associations such as doctor/ nurse in considerable detail, relatively little is said about Table 3 . Some interesting Associations with \"Doctor\" in the 1987 AP Corpus (N = 15 million) . That is enough to show its main patterning and it suggests that in currently-held corpora there will be found sufficient evidence for the description of a substantial collection of phrases ... (Sinclair 1987c, pp. 151-152) . Using Sinclair's estimates P(set) ~ 250 x 10 -6, P(off) ~-556 x 10 -6, and P(set, off) ~ 70/(7.3 x 106), we would estimate the mutual information to be I(set; off) = log2P(set, off)/(P(set) P(off)) ~ 6.1. In the 1988 AP corpus (N = 44,344,077), we estimate P(set) ~ 13,046/N, P(off) ~ 20,693/N, and P(set, off) ~ 463/N. Given these estimates, we would compute the mutual information to be l(set; off) ~ 6.2.",
"cite_spans": [
{
"start": 465,
"end": 494,
"text": "(Sinclair 1987c, pp. 151-152)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 177,
"end": 184,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "LEXICO-SYNTACTIC REGULARITIES",
"sec_num": "5"
},
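{
"text": "[Editor's sketch] A quick check of the arithmetic above, using the 1988 AP counts quoted in the text (variable names are ours):\n\nimport math\n\nN = 44344077                 # 1988 AP corpus size\nf_set, f_off, f_set_off = 13046, 20693, 463\n\n# I(set; off) = log2( P(set, off) / (P(set) P(off)) )\nI = math.log2((f_set_off / N) / ((f_set / N) * (f_off / N)))\nprint(round(I, 1))           # 6.2, as reported above",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICO-SYNTACTIC REGULARITIES",
"sec_num": "5"
},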
{
"text": "In this example, at least, the values seem to be fairly comparable across corpora. In other examples, we will see some differences due to sampling. Sinclair's corpus is a fairly balanced sample of (mainly British) text; the AP corpus is an unbalanced sample of American journalese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICO-SYNTACTIC REGULARITIES",
"sec_num": "5"
},
{
"text": "This association between set and offis relatively strong; the joint probability is more than 26 = 64 times larger than chance. The other particles that Sinclair mentions have association ratios that can be seen in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 214,
"end": 221,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "LEXICO-SYNTACTIC REGULARITIES",
"sec_num": "5"
},
{
"text": "The first three, set up, set off, and set out, are clearly associated; the last three are not so clear. As Sinclair suggests, the approach is well suited for identifying the phrasal verbs, at least in certain cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LEXICO-SYNTACTIC REGULARITIES",
"sec_num": "5"
},
{
"text": "Phrasal verbs involving the preposition to raise an interesting problem because of the possible confusion with the infinitive marker to. We have found that if we first tag every word in the corpus with a part of speech using a method such as Church (1988) , and then measure associations between tagged words, we can identify interesting contrasts between verbs associated with a following preposition to~in and verbs associated with a following infinitive marker to~to. (Part of speech notation is borrowed from Francis and Kucera (1982) ; in = preposition; to = infinitive marker; vb = bare verb; vbg = verb + ing; vbd = verb + ed; vbz = verb + s; vbn = verb + en.) The association ratio identifies quite a number of verbs associated in an interesting way with to; restricting our attention to pairs with a score of 3.0 or more, there are 768 verbs associated with the preposition to~in and 551 verbs with the infinitive marker to/to. The ten verbs found to be most associated before to/in are:",
"cite_spans": [
{
"start": 242,
"end": 255,
"text": "Church (1988)",
"ref_id": "BIBREF0"
},
{
"start": 513,
"end": 538,
"text": "Francis and Kucera (1982)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PREPROCESSING WITH A PART OF SPEECH TAGGER",
"sec_num": "6"
},
{
"text": "\u2022 to~in: alluding/vbg, adhere/vb, amounted/vbn, relating/ vbg, amounting/vbg, revert/vb, reverted/vbn, resorting/ vbg, relegated/vbn \u2022 to~to: obligated/vbn, trying/vbg, compelled/vbn, enables/vbz, supposed/vbn, intends/vbz, vowing/vbg, tried/vbd, enabling/vbg, tends/vbz, tend/vb, intend/vb, tries/vbz Thus, we see there is considerable leverage to be gained by preprocessing the corpus and manipulating the inventory of tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PREPROCESSING WITH A PART OF SPEECH TAGGER",
"sec_num": "6"
},
{
"text": "Hindle (Church et al. 1989) has found it helpful to preprocess the input with the Fidditch parser (Hindle 1983a (Hindle , 1983b to identify associations between verbs and arguments, and postulate semantic classes for nouns on this basis. Hindle's method is able to find some very interesting associations, as Tables 5 and 6 demonstrate. After running his parser over the 1988 AP corpus (44 million words), Hindle found N = 4,112,943 subject/verb/ object (SVO) triples. The mutual information between a verb and its object was computed from these 4 million triples by counting how often the verb and its object were found in the same triple and dividing by chance. Thus, for example, disconnect/V and telephone/0 have a joint probability of 7/N. In this case, chance is 84/N x 481/N because there are 84 SVO triples with the verb disconnect, and 481 SVO triples with the object telephone. The mutual information is log z 7N/(84 \u00d7 481) = 9.48. Similarly, the mutual information for drink/Vbeer/O is 9.9 = log 2 29N/ (660 \u00d7 195). (drink/V and beer/O are found in 660 and ",
"cite_spans": [
{
"start": 7,
"end": 27,
"text": "(Church et al. 1989)",
"ref_id": "BIBREF0"
},
{
"start": 98,
"end": 111,
"text": "(Hindle 1983a",
"ref_id": "BIBREF5"
},
{
"start": 112,
"end": 127,
"text": "(Hindle , 1983b",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 309,
"end": 323,
"text": "Tables 5 and 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "PREPROCESSING WITH A PARSER",
"sec_num": "7"
},
{
"text": "drink/V martinis/O 12.6 3 drink/V cup_water/O 11.6 3 drink/V champagne/O 10.9 3 drink/V beverage/O 10.8 8 drink/V cup_coffee/O 10.6 2 drink/V cognac/ O 10.6 2 drink/V beer/O 9.9 29 drink/V eup/O 9.7 6 drink/V coffee/O 9.7 12 drink/V toast/O 9.6 4 drink/V alcohol/O 9.4 20 drink/V wine/ O 9.3 10 drink/V fluid/O 9.0 5 drink/V liquor/O 8.9 4 drink/V tea]O 8.9 5 drink/V milk/O 8.7 8 drink/V juice/O 8.3 4 drink/V water/O 7.2 43 drink/V quantity]O 7.1 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PREPROCESSING WITH A PARSER",
"sec_num": "7"
},
{
"text": "195 SVO triples, respectively; they are found together in 29 of these triples). This application of Hindle's parser illustrates a second example of preprocessing the input to highlight certain constraints of interest. For measuring syntactic constraints, it may be useful to include some part of speech information and to exclude much of the internal structure of noun phrases. For other purposes, it may be helpful to tag items and/or phrases with semantic labels such as *person*, *place*, *time*, *body part*, *bad*, and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PREPROCESSING WITH A PARSER",
"sec_num": "7"
},
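{
"text": "[Editor's sketch] The triple-based mutual information figures above can be reproduced directly from the quoted counts; this small Python check (names are ours, not Hindle's code) verifies both examples:\n\nimport math\n\nN = 4112943                                # number of SVO triples\n\ndef triple_mi(f_verb, f_obj, f_joint):\n    # I = log2( (f_joint/N) / ((f_verb/N) * (f_obj/N)) ) = log2( f_joint * N / (f_verb * f_obj) )\n    return math.log2(f_joint * N / (f_verb * f_obj))\n\nprint(round(triple_mi(84, 481, 7), 2))     # disconnect/V telephone/O: 9.48\nprint(round(triple_mi(660, 195, 29), 1))   # drink/V beer/O: 9.9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PREPROCESSING WITH A PARSER",
"sec_num": "7"
},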
{
"text": "Large machine-readable corpora are only just now becoming available to lexicographers. Up to now, lexicographers have been reliant either on citations collected by human readers, which introduced an element of selectivity and so inevitably distortion (rare words and uses were collected but common uses of common words were not), or on small corpora of only a million words or so, which are reliably informative for only the most common uses of the few most frequent words of English. (A million-word corpus such as the Brown Corpus is reliable, roughly, for only some uses of only some of the forms of around 4000 dictionary entries. But standard dictionaries typically contain twenty times this number of entries.) The computational tools available for studying machinereadable corpora are at present still rather primitive. These are concordancing programs (see Figure 1) , which are basically KWIC (key word in context; Aho et al. 1988) indexes with additional features such as the ability to extend the context, sort leftward as well as rightward, and so on. There is very little interactive software. In a typical situation in the lexicography of the 1980s, a lexicographer is giwen the concordances for a word, marks up the printout with colored pens to identify the salient senses, and then writes syntactic descriptions and definitions.",
"cite_spans": [
{
"start": 924,
"end": 940,
"text": "Aho et al. 1988)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 865,
"end": 874,
"text": "Figure 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "APPLICATIONS IN LEXICOGRAPHY",
"sec_num": "8"
},
{
"text": "Although this technology is a great improvement on using human readers to collect boxes of citation index cards (tlhe method Murray used in constructing The Oxford English Dictionary a century ago), it works well if there are no more than a few dozen concordance lines for a word, and only two or three main sense divisions. In analyzing a complex word such as take, save, or from, the lexicographer is trying to pick out significant patterns and subtle distinctions that are buried in literally thousands of concordance lines: pages and pages of computer printout. The unaided human mind simply cannot discover all the signifi-Is Su~Say, calling for ~x~ater economic reforms to mmi~:ion asseaed that \" the Postal Se~wice could Then. sl0e said, the family hopes to e out-of-work steelworker, \" because that doesn't ....",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "APPLICATIONS IN LEXICOGRAPHY",
"sec_num": "8"
},
{
"text": "save those who were p~=aengers. \" save. \" Figure 1 Short Sample of the Concordance to \"save\" from the AP 1987 Corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 42,
"end": 50,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "APPLICATIONS IN LEXICOGRAPHY",
"sec_num": "8"
},
{
"text": "cant patterns, let alone group them and rank them in order of importance. The AP 1987 concordance to save is many pages long; there are 666 lines for the base form alone, and many more for the inflected forms saved, saves, saving, and savings. In the discussion that follows, we shall, for the sake of simplicity, not analyze the inflected forms and we shall only look at the patterns to the right of save (see Table 7 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 411,
"end": 418,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "APPLICATIONS IN LEXICOGRAPHY",
"sec_num": "8"
},
{
"text": "It is hard to know what is important in such a concordance and what is not. For example, although it is easy to see from the concordance selection in Figure 1 that the word \"to\" often comes before \"save\" and the word \"the\" often comes after \"save,\" it is hard to say from examination of a concordance alone whether either or both of these co-occurrences have any significance.",
"cite_spans": [],
"ref_spans": [
{
"start": 150,
"end": 158,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "APPLICATIONS IN LEXICOGRAPHY",
"sec_num": "8"
},
{
"text": "Two examples will illustrate how the association ratio measure helps make the analysis both quicker and more accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "APPLICATIONS IN LEXICOGRAPHY",
"sec_num": "8"
},
{
"text": "The association ratios in Table 7 show that association norms apply to function words as well as content words. For example, one of the words significantly associated with save is from. Many dictionaries, for example Webster's Ninth New Collegiate Dictionary (Merriam Webster), make no explicit mention of from in the entry for save, although Table 7 . Words Often Co-Occurring to the Right of\"Save\" British learners' dictionaries do make specific mention of from in connection with save. These learners' dictionaries pay more attention to language structure and collocation than do American collegiate dictionaries, and lexicographers trained in the British tradition are often fairly skilled at spotting these generalizations. However, teasing out such facts and distinguishing true intuitions from false intuitions takes a lot of time and hard work, and there is a high probability of inconsistencies and omissions. Which other verbs typically associate with from, and where does save rank in such a list? The association ratio identified 1530 words that are associated with from; 911 of them were tagged as verbs. The first 100 verbs are:",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 33,
"text": "Table 7",
"ref_id": null
},
{
"start": 343,
"end": 350,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "EXAMPLE 1: \"SAVE ... FROM\"",
"sec_num": "8.1"
},
{
"text": "refrain /vb, gleaned/vbn, stems/vbz, stemmed/vbd, stemming/vbg, ranging/vbg, stemmed/vbn, ranged/ vbn, derived/vbn, ranged/vbd, extort/vb, graduated/ vbd, barred/vbn, benefiting/vbg, benefitted/vbn, benefited/vbn, excused/vbd, arising/vbg, range/vb, exempts/ vbz, suffers/vbz, exempting/vbg, benefited/vbd, prevented/vbd (7.0), seeping/vbg, barred/vbd, prevents/ vbz, suffering/vbg, excluded/vbn, marks/vbz, profiting/ vbg, recovering/vbg, discharged/vbn, rebounding/vbg, vary/vb, exempted/vbn, separate/vb, banished/vbn, withdrawing/vbg, ferry/vb, prevented/vbn, profit/vb, bar/vb, excused/vbn, bars/vbz, benefit/vb, emerges/ vbz, emerge/vb, varies/vbz, differ/vb, removed/vbn, exempt/vb, expelled/vbn, withdraw/vb, stem/vb, separated/vbn, judging/vbg, adapted/vbn, escaping/vbg, inherited/vbn, differed/vbd, emerged/vbd, withheld/vbd, leaked/vbn, strip/vb, resulting/vbg, discourage/vb, prevent/vb, withdrew/vbd, prohibits/vbz, borrowing/vbg, preventing/vbg, prohibit/vb, resulted/vbd (6.0), preclude/vb, divert/vb, distinguish/vb, pulled/vbn, fell/ vbn, varied/vbn, emerging/vbg, suffer/vb, prohibiting/ vbg, extract/vb, subtract/vb, recover/vb, paralyzed/ vbn, stole/vbd, departing/vbg, escaped/vbn, prohibited/ vbn, forbid/vb, evacuated/vbn, reap/vb, barring/vbg, removing/vbg, stolen/vbn, receives/vbz. Save...from is a good example for illustrating the advantages of the association ratio. Save is ranked 319th in this list, indicating that the association is modest, strong enough to be important (21 times more likely than chance), but not so strong that it would pop out at us in a concordance, or that it would be one of the first things to come to mind.",
"cite_spans": [
{
"start": 8,
"end": 12,
"text": "/vb,",
"ref_id": null
},
{
"start": 13,
"end": 25,
"text": "gleaned/vbn,",
"ref_id": null
},
{
"start": 26,
"end": 36,
"text": "stems/vbz,",
"ref_id": null
},
{
"start": 37,
"end": 49,
"text": "stemmed/vbd,",
"ref_id": null
},
{
"start": 50,
"end": 63,
"text": "stemming/vbg,",
"ref_id": null
},
{
"start": 64,
"end": 76,
"text": "ranging/vbg,",
"ref_id": null
},
{
"start": 77,
"end": 89,
"text": "stemmed/vbn,",
"ref_id": null
},
{
"start": 90,
"end": 102,
"text": "ranged/ vbn,",
"ref_id": null
},
{
"start": 103,
"end": 115,
"text": "derived/vbn,",
"ref_id": null
},
{
"start": 116,
"end": 127,
"text": "ranged/vbd,",
"ref_id": null
},
{
"start": 128,
"end": 138,
"text": "extort/vb,",
"ref_id": null
},
{
"start": 139,
"end": 154,
"text": "graduated/ vbd,",
"ref_id": null
},
{
"start": 155,
"end": 166,
"text": "barred/vbn,",
"ref_id": null
},
{
"start": 167,
"end": 182,
"text": "benefiting/vbg,",
"ref_id": null
},
{
"start": 183,
"end": 198,
"text": "benefitted/vbn,",
"ref_id": null
},
{
"start": 199,
"end": 213,
"text": "benefited/vbn,",
"ref_id": null
},
{
"start": 214,
"end": 226,
"text": "excused/vbd,",
"ref_id": null
},
{
"start": 227,
"end": 239,
"text": "arising/vbg,",
"ref_id": null
},
{
"start": 240,
"end": 249,
"text": "range/vb,",
"ref_id": null
},
{
"start": 250,
"end": 263,
"text": "exempts/ vbz,",
"ref_id": null
},
{
"start": 264,
"end": 276,
"text": "suffers/vbz,",
"ref_id": null
},
{
"start": 277,
"end": 291,
"text": "exempting/vbg,",
"ref_id": null
},
{
"start": 292,
"end": 306,
"text": "benefited/vbd,",
"ref_id": null
},
{
"start": 307,
"end": 327,
"text": "prevented/vbd (7.0),",
"ref_id": null
},
{
"start": 328,
"end": 340,
"text": "seeping/vbg,",
"ref_id": null
},
{
"start": 341,
"end": 352,
"text": "barred/vbd,",
"ref_id": null
},
{
"start": 353,
"end": 367,
"text": "prevents/ vbz,",
"ref_id": null
},
{
"start": 368,
"end": 382,
"text": "suffering/vbg,",
"ref_id": null
},
{
"start": 383,
"end": 396,
"text": "excluded/vbn,",
"ref_id": null
},
{
"start": 397,
"end": 407,
"text": "marks/vbz,",
"ref_id": null
},
{
"start": 408,
"end": 423,
"text": "profiting/ vbg,",
"ref_id": null
},
{
"start": 424,
"end": 439,
"text": "recovering/vbg,",
"ref_id": null
},
{
"start": 440,
"end": 455,
"text": "discharged/vbn,",
"ref_id": null
},
{
"start": 456,
"end": 471,
"text": "rebounding/vbg,",
"ref_id": null
},
{
"start": 472,
"end": 480,
"text": "vary/vb,",
"ref_id": null
},
{
"start": 481,
"end": 494,
"text": "exempted/vbn,",
"ref_id": null
},
{
"start": 495,
"end": 507,
"text": "separate/vb,",
"ref_id": null
},
{
"start": 508,
"end": 521,
"text": "banished/vbn,",
"ref_id": null
},
{
"start": 522,
"end": 538,
"text": "withdrawing/vbg,",
"ref_id": null
},
{
"start": 539,
"end": 548,
"text": "ferry/vb,",
"ref_id": null
},
{
"start": 549,
"end": 563,
"text": "prevented/vbn,",
"ref_id": null
},
{
"start": 564,
"end": 574,
"text": "profit/vb,",
"ref_id": null
},
{
"start": 575,
"end": 582,
"text": "bar/vb,",
"ref_id": null
},
{
"start": 583,
"end": 595,
"text": "excused/vbn,",
"ref_id": null
},
{
"start": 596,
"end": 605,
"text": "bars/vbz,",
"ref_id": null
},
{
"start": 606,
"end": 617,
"text": "benefit/vb,",
"ref_id": null
},
{
"start": 618,
"end": 631,
"text": "emerges/ vbz,",
"ref_id": null
},
{
"start": 632,
"end": 642,
"text": "emerge/vb,",
"ref_id": null
},
{
"start": 643,
"end": 654,
"text": "varies/vbz,",
"ref_id": null
},
{
"start": 655,
"end": 665,
"text": "differ/vb,",
"ref_id": null
},
{
"start": 666,
"end": 678,
"text": "removed/vbn,",
"ref_id": null
},
{
"start": 679,
"end": 689,
"text": "exempt/vb,",
"ref_id": null
},
{
"start": 690,
"end": 703,
"text": "expelled/vbn,",
"ref_id": null
},
{
"start": 704,
"end": 716,
"text": "withdraw/vb,",
"ref_id": null
},
{
"start": 717,
"end": 725,
"text": "stem/vb,",
"ref_id": null
},
{
"start": 726,
"end": 740,
"text": "separated/vbn,",
"ref_id": null
},
{
"start": 741,
"end": 753,
"text": "judging/vbg,",
"ref_id": null
},
{
"start": 754,
"end": 766,
"text": "adapted/vbn,",
"ref_id": null
},
{
"start": 767,
"end": 780,
"text": "escaping/vbg,",
"ref_id": null
},
{
"start": 781,
"end": 795,
"text": "inherited/vbn,",
"ref_id": null
},
{
"start": 796,
"end": 809,
"text": "differed/vbd,",
"ref_id": null
},
{
"start": 810,
"end": 822,
"text": "emerged/vbd,",
"ref_id": null
},
{
"start": 823,
"end": 836,
"text": "withheld/vbd,",
"ref_id": null
},
{
"start": 837,
"end": 848,
"text": "leaked/vbn,",
"ref_id": null
},
{
"start": 849,
"end": 858,
"text": "strip/vb,",
"ref_id": null
},
{
"start": 859,
"end": 873,
"text": "resulting/vbg,",
"ref_id": null
},
{
"start": 874,
"end": 888,
"text": "discourage/vb,",
"ref_id": null
},
{
"start": 889,
"end": 900,
"text": "prevent/vb,",
"ref_id": null
},
{
"start": 901,
"end": 914,
"text": "withdrew/vbd,",
"ref_id": null
},
{
"start": 915,
"end": 929,
"text": "prohibits/vbz,",
"ref_id": null
},
{
"start": 930,
"end": 944,
"text": "borrowing/vbg,",
"ref_id": null
},
{
"start": 945,
"end": 960,
"text": "preventing/vbg,",
"ref_id": null
},
{
"start": 961,
"end": 973,
"text": "prohibit/vb,",
"ref_id": null
},
{
"start": 974,
"end": 993,
"text": "resulted/vbd (6.0),",
"ref_id": null
},
{
"start": 994,
"end": 1006,
"text": "preclude/vb,",
"ref_id": null
},
{
"start": 1007,
"end": 1017,
"text": "divert/vb,",
"ref_id": null
},
{
"start": 1018,
"end": 1033,
"text": "distinguish/vb,",
"ref_id": null
},
{
"start": 1034,
"end": 1045,
"text": "pulled/vbn,",
"ref_id": null
},
{
"start": 1046,
"end": 1056,
"text": "fell/ vbn,",
"ref_id": null
},
{
"start": 1057,
"end": 1068,
"text": "varied/vbn,",
"ref_id": null
},
{
"start": 1069,
"end": 1082,
"text": "emerging/vbg,",
"ref_id": null
},
{
"start": 1083,
"end": 1093,
"text": "suffer/vb,",
"ref_id": null
},
{
"start": 1094,
"end": 1111,
"text": "prohibiting/ vbg,",
"ref_id": null
},
{
"start": 1112,
"end": 1123,
"text": "extract/vb,",
"ref_id": null
},
{
"start": 1124,
"end": 1136,
"text": "subtract/vb,",
"ref_id": null
},
{
"start": 1137,
"end": 1148,
"text": "recover/vb,",
"ref_id": null
},
{
"start": 1149,
"end": 1164,
"text": "paralyzed/ vbn,",
"ref_id": null
},
{
"start": 1165,
"end": 1175,
"text": "stole/vbd,",
"ref_id": null
},
{
"start": 1176,
"end": 1190,
"text": "departing/vbg,",
"ref_id": null
},
{
"start": 1191,
"end": 1203,
"text": "escaped/vbn,",
"ref_id": null
},
{
"start": 1204,
"end": 1220,
"text": "prohibited/ vbn,",
"ref_id": null
},
{
"start": 1221,
"end": 1231,
"text": "forbid/vb,",
"ref_id": null
},
{
"start": 1232,
"end": 1246,
"text": "evacuated/vbn,",
"ref_id": null
},
{
"start": 1247,
"end": 1255,
"text": "reap/vb,",
"ref_id": null
},
{
"start": 1256,
"end": 1268,
"text": "barring/vbg,",
"ref_id": null
},
{
"start": 1269,
"end": 1282,
"text": "removing/vbg,",
"ref_id": null
},
{
"start": 1283,
"end": 1294,
"text": "stolen/vbn,",
"ref_id": null
},
{
"start": 1295,
"end": 1320,
"text": "receives/vbz. Save...from",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "EXAMPLE 1: \"SAVE ... FROM\"",
"sec_num": "8.1"
},
{
"text": "If the dictionary is going to list save.., from, then, for consistency's sake, it ought to consider listing all of the more important associations as well. Of the 27 bare verbs (tagged 'vb') in the list above, all but seven are listed in Collins Cobuild English Language Dictionary as occurring with from. However, this dictionary does not note that vary, ferry, strip, divert, forbid, and reap occur with from. If the Cobuild lexicographers had had access to the proposed measure, they could possibly have obtained better coverage at less cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXAMPLE 1: \"SAVE ... FROM\"",
"sec_num": "8.1"
},
{
"text": "Having established the relative importance of save ... from, and having noted that the two words are rarely adjacent, we would now like to speed up the labor-intensive task of categorizing the concordance lines. Ideally, we would like to develop a set of semi-automatic tools that would help a lexicographer produce something like Figure 2 , which provides an annotated summary of the 65 concordance lines for save ... from. 5 The save ... from pattern occurs in about 10% of the 666 concordance lines for save.",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 340,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "EXAMPLE 2: IDENTIFYING SEMANTIC CLASSES",
"sec_num": "8.2"
},
{
"text": "Traditionally, semantic categories have been only vaguely recognized, and to date little effort has been devoted to a systematic classification of a large corpus. Lexicographers have tended to use concordances impressionistically; semantic theorists, AI-ers, and others have concentrated on a few interesting examples, e.g. bachelor, and have not given much thought to how the results might be scaled up.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "EXAMPLE 2: IDENTIFYING SEMANTIC CLASSES",
"sec_num": "8.2"
},
{
"text": "With this concern in mind, it seems reasonable to ask how well these 65 lines for save...from fit in with all other uses of save A laborious concordance analysis was undertaken to answer this question. When it was nearing completion, we noticed that the tags that we were inventing to capture the generalizations could in most cases have been suggested by looking at the lexical items listed in the association ratio table for save. For example, we had failed to notice the significance of time adverbials in our analysis of save, and no dictionary records this. Yet it should be If we had looked at the association ratio tables before labC.ing the 65 lines for save ... from, we might have noticed the very large value for save.., forests, suggesting that there may be an important pattern here. In fact, this pattern probably subsumes most of the occurrences of the \"save [ANIMAL]\" pattern noticed in Figure 2 . Thus, these tables do not provide semantic tags, but they provide a powerful set of suggestions to the lexicographer for what needs to be accounted for in choosing a set of semantic tags. It may be that everything said here about save and other words is true only of 1987 American journalese. Intuitively, however, many of the patterns discovered seem to be good candidates for conventions of general English. A future step would be to examine other more balanced corpora and test how well the patterns hold up.",
"cite_spans": [],
"ref_spans": [
{
"start": 903,
"end": 911,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "EXAMPLE 2: IDENTIFYING SEMANTIC CLASSES",
"sec_num": "8.2"
},
{
"text": "We began this paper with the psycholinguistic notion of word association norm, and extended that concept toward the information theoretic definition of mutual information. This provided a precise statistical calculation that could be applied to a very large corpus of text to produce a table of associations for tens of thousands of words. We were then able to show that the table encoded a number of very interesting patterns ranging from doctor.., nurse to save ....from. We finally concluded by showing how the patterns in the association ratio table might help a lexicographer organize a concordance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "9"
},
{
"text": "In point of fact, we actually developed these results in basically the reverse order. Concordance analysis is still extremely labor-intensive and prone to errors of omission. The ways that concordances are sorted don't adequately support current lexicographic practice. Despite the fact that a concordance is indexed by a single word, often lexicographers actually use a second word such as from or an equally common semantic concept such as a time adverbial to decide how to categorize concordance lines. In other words, they use two words to triangulate in on a word sense. This triangulation approach clusters concordance lines together into word senses based primarily on usage (distribu-tional evidence), as opposed to intuitive notions of meaning. Thus, the question of what is a word sense can be addressed with syntactic methods (symbol pushing), and need not address semantics (interpretation), even though the inventory of tags may appear to have semantic values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "9"
},
{
"text": "The triangulation approach requires \"art.\" How does the lexicographer decide which potential cut points are \"interesting\" and which are merely due to chance? The proposed association ratio score provides a practical and objective measure that is often a fairly good approximation to the \"art.\" Since the proposed measure is objective, it can be applied in a systematic way over a large body of material, steadily improving consistency and productivity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "9"
},
{
"text": "But on the other hand, the objective score can be misleading. The score takes only distributional evidence into account. For example, the measure favors set ... for over set ... down; it doesn't know that the former is less interesting because its semantics are compositional. In addition, the measure is extremely superficial; it cannot cluster words into appropriate syntactic classes without an explicit preprocess such as Church's parts program or Hindle's parser. Neither of these preprocesses, though, can help highlight the \"natural\" similarity between nouns such as picture and photograph. Although one might imagine a preprocess that would help in this particular case, there will probably always be a class of generalizations that are obvious to an intelligent lexicographer, but lie hopelessly beyond the objectivity of a computer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "9"
},
{
"text": "Despite these problems, the association ratio could be an important tool to aid the lexicographer, rather like an index to the concordances. It can help us decide what to look for; it provides a quick summary of what company our words do keep.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSIONS",
"sec_num": "9"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1988,
"venue": "Second Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. 1988 \"A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text,\" Second Conference on Applied Natural Lan- guage Processing, Austin, TX. Church, K.; Gale, W.; Hanks, P.; and Hindle, D. 1989 \"Parsing, Word Associations and Typical Predicate-Argument Relations,\" Interna- tional Workshop on Parsing Technologies, CMU.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Transmission of Information: A Statistical Theory of Communications",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fano",
"suffix": ""
}
],
"year": 1961,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fano, R. 1961 Transmission of Information: A Statistical Theory of Communications. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A Synopsis of Linguistic Theory 1930-1955",
"authors": [
{
"first": "J",
"middle": [],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "Studies in Linguistic Analysis",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Firth, J. 1957 \"A Synopsis of Linguistic Theory 1930-1955,\" in Studies in Linguistic Analysis, Philological Society, Oxford; reprinted in Palmer, F. (ed.) 1968 Selected Papers of J. R. Firth, Longman, Harlow.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The Population Frequencies of Species and the Estimation of Population Parameters",
"authors": [
{
"first": "W",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "H",
"middle": [
";"
],
"last": "Ku~era",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Good",
"suffix": ""
},
{
"first": "I",
"middle": [
"J"
],
"last": "",
"suffix": ""
}
],
"year": 1953,
"venue": "Houghton Mifflin Company",
"volume": "40",
"issue": "",
"pages": "237--264",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis, W. and Ku~era, H. 1982 Frequency Analysis of English Usage. Houghton Mifflin Company, Boston, MA. Good, I. J. 1953 The Population Frequencies of Species and the Estima- tion of Population Parameters. Biometrika, Vol. 40, 237-264.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Looking Up: An Account of the COBUILD Project in Lexical Computing",
"authors": [
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanks, P. 1987 \"Definitions and Explanations,\" in J. Sinclair (ed.), Looking Up: An Account of the COBUILD Project in Lexical Comput- ing. Collins, London and Glasgow.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deterministic Parsing of Syntactic Non-fluencies",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1983,
"venue": "Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hindle, D. 1983a \"Deterministic Parsing of Syntactic Non-fluencies.\" In Proceedings of the 23rd Annual Meeting of the Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "User Manual for Fidditch, a Deterministic Parser",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1983,
"venue": "Naval Research Laboratory Technical Memorandum #",
"volume": "",
"issue": "",
"pages": "7590--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hindle, D. 1983b \"User Manual for Fidditch, a Deterministic Parser.\" Naval Research Laboratory Technical Memorandum #7590-142.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The Advanced Learner's Dictionary",
"authors": [
{
"first": "A",
"middle": [],
"last": "Hornby",
"suffix": ""
}
],
"year": 1948,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hornby, A. 1948 The Advanced Learner's Dictionary, Oxford University Press, Oxford, U.K.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "On the Recognition of Printed Characters of any Font or Size",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kahan",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Pavlidis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Baird",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Transactions",
"volume": "",
"issue": "",
"pages": "274--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kahan, S.; Pavlidis, T.; and Baird, H. 1987 \"On the Recognition of Printed Characters of any Font or Size,\" IEEE Transactions PAMI, 274-287.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Loci of Contextual Effects on Visual Word-Recognition",
"authors": [
{
"first": "D",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schvaneveldt",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ruddy",
"suffix": ""
}
],
"year": 1975,
"venue": "Attention and Performance V",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meyer, D.; Schvaneveldt, R.; and Ruddy, M. 1975 \"Loci of Contextual Effects on Visual Word-Recognition,\" in P. Rabbitt and S. Dornic (eds.), Attention and Performance V, Academic Press, New York.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word AssociationNorms",
"authors": [
{
"first": "D",
"middle": [],
"last": "Palermo",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jenkins",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palermo, D. and Jenkins, J. 1964 \"Word AssociationNorms.\" University of Minnesota Press, Minneapolis, MN.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Looking Up: An Account of the COBUILD Project in Lexical Computing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Sinclair",
"suffix": ""
}
],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinclair, J. 1987b \"The Nature of the Evidence,\" in J. Sinclair (ed.), Looking Up: An Account of the COBUILD Project in Lexical Comput- ing. Collins, London and Glasgow.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Microcoding the Lexicon with Co-Occurrence Knowledge",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": null,
"venue": "Lexical Acquisition: Using On-Line Resources to Build a Lexicon",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F. In press. \"Microcoding the Lexicon with Co-Occurrence Knowledge,\" in Zernik (ed.), Lexical Acquisition: Using On-Line Re- sources to Build a Lexicon, MIT Press, Cambridge, MA. NOTES",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "This statistic has also been used by the IBM speech group (Jelinek 1982) for constructing language models for applications in speech recognition",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "This statistic has also been used by the IBM speech group (Jelinek 1982) for constructing language models for applications in speech recognition.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Smadja (in press) discusses the separation between collocates in a very similar way",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja (in press) discusses the separation between collocates in a very similar way.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "It might be interesting to consider alternatives (e.g. a triangular window or a decaying exponential) that would weight words less and less as they are separated by more and more words. Other windows are also possible. For example, Hindle (Church et al. 1989) has used a syntactic parser to",
"authors": [],
"year": null,
"venue": "This definition fw(x,y) uses a rectangular window",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "This definition fw(x,y) uses a rectangular window. It might be interesting to consider alternatives (e.g. a triangular window or a decaying exponential) that would weight words less and less as they are separated by more and more words. Other windows are also possible. For example, Hindle (Church et al. 1989) has used a syntactic parser to select words in certain constructions of interest.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Although the Good-Turing Method (Good 1953) is more than 35 years old, it is still heavily cited. For example, Katz (1987) uses the method in order to estimate trigram probabilities in the IBM speech recognizer. The Good-Turing Method is helpful for trigrams that have not been seen",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Although the Good-Turing Method (Good 1953) is more than 35 years old, it is still heavily cited. For example, Katz (1987) uses the method in order to estimate trigram probabilities in the IBM speech recognizer. The Good-Turing Method is helpful for trigrams that have not been seen very often in the training corpus.",
"links": null
},
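The Good-Turing adjustment mentioned in this note can be illustrated in a few lines. The sketch below is our own (good_turing_adjusted_counts and the toy bigram counts are hypothetical); it applies the classical formula r* = (r + 1) N_{r+1} / N_r, where N_r is the number of n-gram types observed exactly r times. Serious applications such as Katz (1987) smooth the N_r sequence before applying the formula:

```python
# Sketch of the basic Good-Turing adjusted count; hypothetical names.
from collections import Counter

def good_turing_adjusted_counts(ngram_counts):
    """Map each observed count r to r* = (r + 1) * N_{r+1} / N_r."""
    n_r = Counter(ngram_counts.values())   # N_r: number of types seen r times
    adjusted = {}
    for r in sorted(n_r):
        if n_r.get(r + 1, 0) > 0:
            adjusted[r] = (r + 1) * n_r[r + 1] / n_r[r]
        else:
            adjusted[r] = float(r)         # no N_{r+1} observed: keep raw count
    return adjusted

counts = {"save money": 3, "save time": 3, "save face": 1,
          "save from": 1, "save up": 2}
print(good_turing_adjusted_counts(counts))  # {1: 1.0, 2: 6.0, 3: 3.0}
```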
"BIBREF19": {
"ref_id": "b19",
"title": "Syntactic \"chunking\" shows that, in spite of its co-occurrence of from with save, this line does not belong here. An intriguing exercise, given the lookup table we are trying to construct, is how to guard against false inferences such as that since shoppers is tagged",
"authors": [],
"year": null,
"venue": "save shoppers anywhere from $50... raises interesting problems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The last unclassified line .... save shoppers anywhere from $50... raises interesting problems. Syntactic \"chunking\" shows that, in spite of its co-occurrence of from with save, this line does not belong here. An intriguing exercise, given the lookup table we are trying to construct, is how to guard against false inferences such as that since shoppers is tagged [PERSON], $50 to $500 must here count as either BAD or a LOCATION. Accidental coincidences of this kind do not have a significant effect on the measure, however, although they do serve as a reminder of the probabilistic nature of the findings.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The word time itself also occurs significantly in the table, but on closer examination it is clear that this use of time (e.g. to save time) counts as something like a commodity or resource, not as part of a time adjunct",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The word time itself also occurs significantly in the table, but on closer examination it is clear that this use of time (e.g. to save time) counts as something like a commodity or resource, not as part of a time adjunct. Such are the pitfalls of lexicography (obvious when they are pointed out).",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"5\">Mean and Variance of the Separation Between</td></tr><tr><td>X and Y</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"2\">Separation</td></tr><tr><td>Relation</td><td>Word x</td><td>Word y</td><td>Mean</td><td>Variance</td></tr><tr><td>Fixed</td><td>break</td><td>butter</td><td>2.00</td><td>0.00</td></tr><tr><td/><td>drink</td><td>drive</td><td>2.00</td><td>0.00</td></tr><tr><td>Compound</td><td>computer</td><td>scientist</td><td>1.12</td><td>O. I 0</td></tr><tr><td/><td>United</td><td>States</td><td>0.98</td><td>0.14</td></tr><tr><td>Semantic</td><td>man</td><td>woman</td><td>1.46</td><td>8.07</td></tr><tr><td/><td>man</td><td>women</td><td>-0.12</td><td>13.08</td></tr><tr><td>Lexical</td><td>refraining</td><td>from</td><td>1.11</td><td>0.20</td></tr><tr><td/><td>coming</td><td>from</td><td>0.83</td><td>2.89</td></tr><tr><td/><td>keeping</td><td>from</td><td>2.14</td><td>5.53</td></tr></table>"
},
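The separation statistics in the table above can be reproduced in outline by a sketch like the following (ours: separation_stats and the toy text are invented). For each pair of nearby occurrences of x and y it records the signed distance between them, then reports the mean and variance; a variance near zero signals a frozen combination such as bread and butter, while a large variance signals a looser semantic association:

```python
# Sketch of mean/variance of the separation between two words; hypothetical names.
from statistics import mean, pvariance

def separation_stats(tokens, x, y, w=3):
    """Mean and variance of signed distances from x to nearby occurrences of y."""
    xs = [i for i, t in enumerate(tokens) if t == x]
    ys = [j for j, t in enumerate(tokens) if t == y]
    seps = [j - i for i in xs for j in ys if 0 < abs(j - i) <= w]
    return mean(seps), pvariance(seps)

tokens = "bread and butter , toast , jam , bread and butter".split()
print(separation_stats(tokens, "bread", "butter"))  # mean 2, variance 0: a fixed phrase
```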
"TABREF1": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">Asymmetry in 1988 AP Corpus (N = 44 million)</td></tr><tr><td>x</td><td>y</td><td>f(x, y)</td><td>f(y, x)</td></tr><tr><td>doctors</td><td>nurses</td><td>99</td><td>10</td></tr><tr><td>man</td><td>woman</td><td>256</td><td>56</td></tr><tr><td>doctors</td><td>lawyers</td><td>29</td><td>19</td></tr><tr><td>bread</td><td>butter</td><td>15</td><td>1</td></tr><tr><td>save</td><td>life</td><td>129</td><td>11</td></tr><tr><td>save</td><td>money</td><td>187</td><td>11</td></tr><tr><td>save</td><td>from</td><td>176</td><td>18</td></tr><tr><td>supposed</td><td>to</td><td>1188</td><td>25</td></tr></table>"
},
"TABREF2": {
"num": null,
"html": null,
"text": "In addition to identifying semantic relations of the doctor/nurse variety, we believe the association ratio can also be used to search for interesting lexicosyntactic relationships between verbs and typical arguments/adjuncts. The proposed association ratio can be viewed as a formalization ofSinclair's argument: How common are the phrasal verbs with set? Set is particularly rich in making combinations with words like about, in, up, out, on, off, and these words are themselves very common. How likely is set offto occur? Both are frequent words [set occurs approximately 250 times in a million words and off occurs approximately 556 times in a million words... [T]he question we are asking can be roughly rephrased as follows: how likely is off to occur immediately after set?...",
"type_str": "table",
"content": "<table><tr><td>associations among verbs, function words, adjectives, and</td><td/><td/><td/><td/><td/></tr><tr><td>other non-nouns. This is 0.00025 x</td><td/><td/><td/><td/><td/></tr><tr><td>0.00055 [P(x) P(y)], which gives us the tiny figure of</td><td/><td/><td/><td/><td/></tr><tr><td>0.0000001375 ... The assumption behind this calcula-</td><td/><td/><td/><td/><td/></tr><tr><td>tion is that the words are distributed at random in a text</td><td/><td/><td/><td/><td/></tr><tr><td>[at chance, in our terminology]. It is obvious to a linguist</td><td/><td/><td/><td/><td/></tr><tr><td>that this is not so, and a rough measure of how much set</td><td/><td/><td/><td/><td/></tr><tr><td>and offattract each other is to compare the probability</td><td/><td/><td/><td/><td/></tr><tr><td>with what actually happens ... Set off occurs nearly</td><td/><td/><td/><td/><td/></tr><tr><td>70 times in the 7.3 million word corpus [P(x, y) =</td><td/><td/><td/><td/><td/></tr><tr><td>70/(7.3 x 106) &gt;&gt; P(x) P(y)]</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>y</td></tr><tr><td>11.3</td><td>12</td><td>111</td><td>honorary</td><td>621</td><td>doctor</td></tr><tr><td>11.3</td><td>8</td><td colspan=\"2\">1105 doctors</td><td>44</td><td>dentists</td></tr><tr><td>10.7</td><td>30</td><td colspan=\"2\">1105 doctors</td><td>241</td><td>nurses</td></tr><tr><td>9.4</td><td>8</td><td colspan=\"2\">1105 doctors</td><td>154</td><td>treating</td></tr><tr><td>9.0</td><td>6</td><td>275</td><td>examined</td><td>621</td><td>doctor</td></tr><tr><td>8.9</td><td>11</td><td colspan=\"2\">1105 doctors</td><td>317</td><td>treat</td></tr><tr><td>8.7</td><td>25</td><td>621</td><td>doctor</td><td colspan=\"2\">1407 bills</td></tr><tr><td>8.7</td><td>6</td><td>621</td><td>doctor</td><td>350</td><td>visits</td></tr><tr><td>8.6</td><td>19</td><td colspan=\"2\">1105 doctors</td><td>676</td><td>hospitals</td></tr><tr><td>8,4</td><td>6</td><td>241</td><td>nurses</td><td colspan=\"2\">1105 doctors</td></tr><tr><td colspan=\"5\">Some Uninteresting Associations with \"Doctor\"</td><td/></tr><tr><td>0.96</td><td>6</td><td>621</td><td>doctor</td><td colspan=\"2\">73785 with</td></tr><tr><td>0.95</td><td>41</td><td>284690</td><td>a</td><td colspan=\"2\">1105 doctors</td></tr><tr><td>0.93</td><td>12</td><td>84716</td><td>is</td><td colspan=\"2\">1105 doctors</td></tr></table>"
},
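Sinclair's arithmetic in the passage quoted in this table can be checked directly. The sketch below is our own recasting (function and variable names invented) of it as the paper's association ratio I(x; y) = log2(P(x, y) / (P(x) P(y))), using the frequencies quoted for set, off, and set off in the 7.3-million-word corpus:

```python
# Sketch checking Sinclair's set/off figures as an association ratio.
import math

def association_ratio(p_xy, p_x, p_y):
    """I(x; y) = log2( P(x, y) / (P(x) * P(y)) ), in bits."""
    return math.log2(p_xy / (p_x * p_y))

p_set = 250 / 1_000_000       # "approximately 250 times in a million words"
p_off = 556 / 1_000_000       # "approximately 556 times in a million words"
p_set_off = 70 / 7_300_000    # "nearly 70 times in the 7.3 million word corpus"

print(p_set * p_off)                                # ~1.39e-07, the "tiny figure"
print(association_ratio(p_set_off, p_set, p_off))   # ~6.1 bits
```

The result, about 6.1 bits, is of the same order as the I(set; off) = 6.2 reported for the 1988 AP corpus in the phrasal-verb table, even though the two corpora differ.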
"TABREF3": {
"num": null,
"html": null,
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">Some Phrasal Verbs in 1988 AP Corpus</td><td/></tr><tr><td colspan=\"2\">(N = 44 million)</td><td/><td/><td/><td/></tr><tr><td>x</td><td>y</td><td>f(x)</td><td>f(y)</td><td>f(x, y)</td><td>I(x; y)</td></tr><tr><td>set</td><td>up</td><td>13,046</td><td>64,601</td><td>2713</td><td>7.3</td></tr><tr><td>set</td><td>off</td><td>13,046</td><td>20,693</td><td>463</td><td>6.2</td></tr><tr><td>set</td><td>out</td><td>13,046</td><td>47,956</td><td>301</td><td>4.4</td></tr><tr><td>set</td><td>on</td><td>13,046</td><td>258,170</td><td>162</td><td>1.1</td></tr><tr><td>set</td><td>in</td><td>13,046</td><td>739,932</td><td>795</td><td>1.8</td></tr><tr><td>set</td><td>about</td><td>13,046</td><td>82,319</td><td>16</td><td>-0.6</td></tr></table>"
},
"TABREF4": {
"num": null,
"html": null,
"text": "What Can You Drink?",
"type_str": "table",
"content": "<table><tr><td>Verb</td><td>Object</td><td>Mutual Info</td><td>Joint Freq</td></tr></table>"
},
"TABREF5": {
"num": null,
"html": null,
"text": "What Can You Do to a Telephone?",
"type_str": "table",
"content": "<table><tr><td>Verb</td><td>Object</td><td>Mutual Info</td><td>Joint Freq</td></tr><tr><td>sit_by/V</td><td colspan=\"2\">telephone/O 11.78</td><td>7</td></tr><tr><td colspan=\"2\">disconnect/V telephone/O</td><td>9.48</td><td>7</td></tr><tr><td>answer/V</td><td>telephone/O</td><td>8.80</td><td>98</td></tr><tr><td>hang_up]V</td><td>telephone/O</td><td>7.87</td><td>3</td></tr><tr><td>tap/V</td><td>telephone/O</td><td>7.69</td><td>15</td></tr><tr><td>pick_up/V</td><td>telephone/O</td><td>5.63</td><td>11</td></tr><tr><td>return/V</td><td>telephone/O</td><td>5.01</td><td>19</td></tr><tr><td>be_by/V</td><td>telephone/O</td><td>4.93</td><td>2</td></tr><tr><td>spot/V</td><td>telephone/O</td><td>4.43</td><td>2</td></tr><tr><td>repeat/V</td><td>telephone/O</td><td>4.39</td><td>3</td></tr><tr><td>place/V</td><td>telephone/O</td><td>4.23</td><td>7</td></tr><tr><td>receive/V</td><td>telephone/O</td><td>4.22</td><td>28</td></tr><tr><td>install/V</td><td>telephone/O</td><td>4.20</td><td>2</td></tr><tr><td>be_on/V</td><td>telephone/O</td><td>4.05</td><td>15</td></tr><tr><td>come_to/V</td><td>telephone/O</td><td>3.63</td><td>6</td></tr><tr><td>use/V</td><td>telephone/O</td><td>3.59</td><td>29</td></tr><tr><td>operate/V</td><td>telephone/O</td><td>3.16</td><td>4</td></tr></table>"
},
"TABREF8": {
"num": null,
"html": null,
"text": "Figure 2Some AP 1987 Concordance Lines to \"save...from, \" Roughly Sorted into Categories. clear fi'om the association ratio table above that annually and month 6 are commonly found with save. More detailed inspection shows that the time adverbials correlate interestingly with just one group of save objects, namely those tagged[MONEY]. The AP wire is full of discussions of saving $1.2 billion per month; computational lexicography should measure and record such patterns if they are general, even when traditional dictionaries do not. A,; another example illustrating how the association ratio tables would have helped us analyze the save concordance lines, we found ourselves contemplating the semantic tag ENV(IRONMENT) to analyze lines such as:",
"type_str": "table",
"content": "<table><tr><td/><td>the trend to</td><td>save the forests[ENV]</td></tr><tr><td/><td>it's our turn to</td><td>save the lake[ENV],</td></tr><tr><td/><td>joined a fight to</td><td>save their forests[ENV],</td></tr><tr><td/><td>can we get busy to</td><td>save the planet[ENV] ?</td></tr><tr><td colspan=\"2\">save X from Y (65 concordance lines)</td></tr><tr><td colspan=\"2\">1 save PERSON from Y (23 concordance lines)</td></tr><tr><td colspan=\"2\">1.1 save PERSON from BAD (19 concordance lines)</td></tr><tr><td>( Robert DeNiro ) to</td><td>save Indian tribes(PERSON] from genocide[DESTRUCT[BAD]] at the hands of</td></tr><tr><td>\" We wanted to</td><td>save him(PERSON] ~orn undue ~ouble[BAD] and loss(BAD] of money , \"</td></tr><tr><td>Murphy was sacrificed to</td><td>save more powerful Democrats(PERSON] from harm(BAD] .</td></tr><tr><td>\" God sent this man to</td><td>save my five children(PERSON] from being burned to death(DESTRUCT(BAD]] and</td></tr><tr><td>Pope John Paul I] to \"</td><td>save us(PERSON] fl~m sin(BAD] . \"</td></tr><tr><td colspan=\"2\">1.2 save PERSON from (BAD) LOC(AT1ON) (4 concordance lines)</td></tr><tr><td>rescuers who helped</td><td>save the toddler(PERSON] from an abandoned weU[LOC] will be feted with a parade</td></tr><tr><td>while attempting to</td><td>save two drowning hoys[PERSON] from a turbulent(BAD] creeklLOC] in Otdo[LOC]</td></tr><tr><td colspan=\"2\">2. save INST(ITUTION) from (ECON) BAD (27 concordance lines)</td></tr><tr><td>member states to help</td><td>save the EEC[INSTI from possible bankaxlptcy[BCON][BAD] this year.</td></tr><tr><td>should be sought \" to</td><td>save the compeny[CORP[1NST]] from bankmptfy[BCON][BAD].</td></tr><tr><td>law was necessary to</td><td>save the counffy[NATIOlq[lNST]] flora disaster(BAD].</td></tr><tr><td>operation \" to</td><td>save the nation(NATION(INS'r]] from COmmUnL~n[BAD][POL1TICAL] .</td></tr><tr><td>were not needed to</td><td>save the system from benkauptcy[ECON][BAD].</td></tr><tr><td>his efforts to</td><td>save the wodd[INST] from the like~ of Lothax and the Spider Woman</td></tr><tr><td colspan=\"2\">3. save ANIMAL from DESTRUCT(ION) (5 concordance lines)</td></tr><tr><td>give them the money to</td><td>save the dogs(ANIMAL] from being destroyed(DESTRUCT] ,</td></tr><tr><td>program intended to</td><td>save the giant birds(ANIMAL] ~om extinction[DESTRUCTI,</td></tr><tr><td colspan=\"2\">UNCLASSIFIED (10 concordance lines)</td></tr><tr><td>walnut and ash tx~es to</td><td>save them from the axes and saws of a logging company.</td></tr><tr><td>after the a~aek to</td><td>save the ship from a temble[BAD] fire, Navy reports concluded Thursday.</td></tr><tr><td>cemficates that would</td><td>save shopper~[pERSON] anywhere f~m $50[MONEY] [NUMBER] to $500[MONEY] (/flu</td></tr></table>"
}
}
}
}