{
"paper_id": "C00-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:31:44.267333Z"
},
"title": "Empirical Estimates of Adaptation: The chance of Two Noriegas is closer to p/2 than p^2",
"authors": [
{
"first": "Kenneth",
"middle": [
"W"
],
"last": "Church",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Labs-Research",
"location": {
"addrLine": "180 Park Ave",
"settlement": "Florham Park",
"region": "NJ",
"country": "USA"
}
},
"email": "kwc@research.att.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Repetition is very common. Adaptive language models, which allow probabilities to change or adapt after seeing just a few words of a text, were introduced in speech recognition to account for text cohesion. Suppose a document mentions Noriega once. What is the chance that he will be mentioned again? If the first instance has probability p, then under standard (bag-of-words) independence assumptions, two instances ought to have probability p^2, but we find the probability is actually closer to p/2. The first mention of a word obviously depends on frequency, but surprisingly, the second does not. Adaptation depends more on lexical content than frequency; there is more adaptation for content words (proper nouns, technical terminology and good keywords for information retrieval), and less adaptation for function words, cliches and ordinary first names.",
"pdf_parse": {
"paper_id": "C00-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "Repetition is very common. Adaptive language models, which allow probabilities to change or adapt after seeing just a few words of a text, were introduced in speech recognition to account for text cohesion. Suppose a document mentions Noriega once. What is the chance that he will be mentioned again? If the first instance has probability p, then under standard (bag-of-words) independence assumptions, two instances ought to have probability p^2, but we find the probability is actually closer to p/2. The first mention of a word obviously depends on frequency, but surprisingly, the second does not. Adaptation depends more on lexical content than frequency; there is more adaptation for content words (proper nouns, technical terminology and good keywords for information retrieval), and less adaptation for function words, cliches and ordinary first names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Adaptive language models were introduced in the Speech Recognition literature to model repetition. Jelinek (1997, p. 254) describes cache-based models which combine two estimates of word (ngram) probabilities: Pr_L, a local estimate based on a relatively small cache of recently seen words, and Pr_G, a global estimate based on a large training corpus.",
"cite_spans": [
{
"start": 99,
"end": 121,
"text": "Jelinek (1997, p. 254)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Pr_A(w) = \u03bb Pr_L(w) + (1 \u2212 \u03bb) Pr_G(w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "2. Case-based:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "Pr_C(w) = \u03bb_1 Pr_L(w) if w \u2208 cache; \u03bb_2 Pr_G(w) otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
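The two cache combinations above can be sketched in a few lines of Python. This is an illustration only: the lambda weights and the probability tables below are hypothetical, not values from the paper.

```python
# A minimal sketch of the additive and case-based cache combinations.
def pr_additive(w, pr_local, pr_global, lam=0.3):
    # Pr_A(w) = lambda * Pr_L(w) + (1 - lambda) * Pr_G(w)
    return lam * pr_local(w) + (1 - lam) * pr_global(w)

def pr_case(w, cache, pr_local, pr_global, lam1=0.5, lam2=0.5):
    # Pr_C(w) = lambda_1 * Pr_L(w) if w is in the cache, else lambda_2 * Pr_G(w)
    return lam1 * pr_local(w) if w in cache else lam2 * pr_global(w)

local_probs = {"peace": 0.02}                      # small recent cache (invented)
global_probs = {"peace": 0.001, "piece": 0.0008}   # large training corpus (invented)
pr_l = lambda w: local_probs.get(w, 0.0)
pr_g = lambda w: global_probs.get(w, 0.0)
print(pr_additive("peace", pr_l, pr_g))
print(pr_case("piece", {"peace"}, pr_l, pr_g))
```

In the case-based variant, a recently seen word is scored entirely from the cache; everything else falls back to the global model.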
{
"text": "Intuitively, if a word has been mentioned recently, then (a) the probability of that word (and related words) should go way up, and (b) many other words should go down a little. We will refer to (a) as positive adaptation and (b) as negative adaptation. Our empirical experiments confirm the intuition that positive adaptation, Pr(+adapt), is typically much larger than negative adaptation, Pr(\u2212adapt). That is, Pr(+adapt) >> Pr(prior) > Pr(\u2212adapt). Two methods, Pr(+adapt_1) and Pr(+adapt_2), will be introduced for estimating positive adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "1. Pr(+adapt_1) = Pr(w \u2208 test | w \u2208 history) 2. Pr(+adapt_2) = Pr(k \u2265 2 | k \u2265 1) \u2248 df_2 / df_1 The two methods produce similar results, usually well within a factor of two of one another. The first method splits each document into two equal pieces, a history portion and a test portion. The adapted probabilities are modeled as the chance that a word will appear in the test portion, given that it appeared in the history. The second method, suggested by Church and Gale (1995), models adaptation as the chance of a second mention (the probability that a word will appear two or more times, given that it appeared one or more times). Pr(+adapt_2) is approximated by df_2 / df_1, where df_k is the number of documents that contain the word/ngram k or more times. (df_k is a generalization of document frequency, df, a standard term in Information Retrieval.)",
"cite_spans": [
{
"start": 455,
"end": 477,
"text": "Church and Gale (1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "Both methods are non-parametric (unlike cache models). Parametric assumptions, when appropriate, can be very powerful (better estimates from less training data), but errors resulting from inappropriate assumptions can outweigh the benefits. In this empirical investigation of the magnitude and shape of adaptation we decided to use conservative non-parametric methods to hedge against the risk of inappropriate parametric assumptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "The two plots (below) illustrate some of the reasons for being concerned about standard parametric assumptions. The first plot shows the number of times that the word ''said'' appears in each of the 500 documents in the Brown Corpus (Francis & Kucera, 1982) . Note that there are quite a few documents with more than 15 instances of ''said,'' especially in Press and Fiction. There are also quite a few documents with hardly any instances of ''said,'' especially in the Learned genre. We have found a similar pattern in other collections; ''said'' is more common in newswire (Associated Press and Wall Street Journal) than technical writing (Department of Energy abstracts).",
"cite_spans": [
{
"start": 233,
"end": 257,
"text": "(Francis & Kucera, 1982)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "The second plot (below) compares these Brown Corpus observations to a Poisson. The circles indicate the number of documents that have x instances of ''said.'' As mentioned above, Press and Fiction documents can mention ''said'' 15 times or more, while documents in the Learned genre might not mention the word at all. The line shows what would be expected under a Poisson. Clearly the line does not fit the circles very well. The probability of ''said'' depends on many factors (e.g., genre, topic, style, author) that make the distributions broader than chance (Poisson). We find especially broad distributions for words that adapt a lot. We will show that adaptation is huge. Pr(+adapt) is often several orders of magnitude larger than Pr(prior). In addition, we find that Pr(+adapt) has a very different shape from Pr(prior). By construction, Pr(prior) varies over many orders of magnitude depending on the frequency of the word. Interestingly, though, we find that Pr(+adapt) has almost no dependence on word frequency, although there is a strong lexical dependence. Some words adapt more than others. The result is quite robust. Words that adapt more in one corpus also tend to adapt more in another corpus of similar material. Both the magnitude and especially the shape (lack of dependence on frequency as well as dependence on content) are hard to capture in an additive-based cache model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "Later in the paper, we will study neighbors: words that do not appear in the history but do appear in documents near the history, using an information retrieval notion of near. We find that neighbors adapt more than non-neighbors, but not as much as the history. The shape is in between as well. Neighbors have a modest dependency on frequency, more than the history, but not as much as the prior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "Neighbors are an extension of Florian & Yarowsky (1999), who used topic clustering to build a language model for contexts such as: ''It is at least on the Serb side a real setback to the x.'' Their work was motivated by speech recognition applications where it would be desirable for the language model to favor x = ''peace'' over x = ''piece.'' Obviously, acoustic evidence is not very helpful in this case. Trigrams are also not very helpful because the strongest clues (e.g., ''Serb,'' ''side'' and ''setback'') are beyond the window of three words. Florian & Yarowsky cluster documents into about 10^2 topics, and compute a separate trigram language model for each topic. Neighbors are similar in spirit, but support more topics.",
"cite_spans": [
{
"start": 30,
"end": 55,
"text": "Florian & Yarowsky (1999)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additive:",
"sec_num": "1."
},
{
"text": "Method 1 splits each document into two equal pieces. The first half of each document is referred to as the history portion of the document and the second half of each document is referred to as the test portion of the document. The task is to predict the test portion of the document given the history. We start by computing a contingency table for each word, as illustrated below: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimates of Adaptation: Method 1",
"sec_num": "2."
},
{
"text": "Documents containing ''hostages'' in 1990 AP: in history and in test: a = 638; in history only: b = 505; in test only: c = 557; in neither: d = 76787. This table is used to estimate Pr(+adapt_1) = Pr(w \u2208 test | w \u2208 history) \u2248 a / (a + b)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimates of Adaptation: Method 1",
"sec_num": "2."
},
{
"text": "Pr(\u2212adapt_1) = Pr(w \u2208 test | \u00acw \u2208 history) \u2248 c / (c + d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimates of Adaptation: Method 1",
"sec_num": "2."
},
{
"text": "Adapted probabilities will be compared to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimates of Adaptation: Method 1",
"sec_num": "2."
},
{
"text": "Pr(prior) = Pr(w \u2208 test) \u2248 (a + c) / D, where D = a + b + c + d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimates of Adaptation: Method 1",
"sec_num": "2."
},
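Method 1 can be sketched directly from these definitions. This is a toy illustration, not the paper's implementation; the example documents and the resulting counts are invented.

```python
# Sketch of Method 1: split each document in half, build the per-word
# contingency table (a, b, c, d), then estimate the three probabilities.
def contingency(docs, w):
    a = b = c = d = 0
    for doc in docs:
        half = len(doc) // 2
        hist, test = set(doc[:half]), set(doc[half:])
        if w in hist and w in test:
            a += 1        # in history and in test
        elif w in hist:
            b += 1        # in history only
        elif w in test:
            c += 1        # in test only
        else:
            d += 1        # in neither
    return a, b, c, d

def adaptation(docs, w):
    a, b, c, d = contingency(docs, w)
    D = a + b + c + d
    pos = a / (a + b) if a + b else 0.0    # Pr(+adapt_1) ~ a / (a + b)
    neg = c / (c + d) if c + d else 0.0    # Pr(-adapt_1) ~ c / (c + d)
    prior = (a + c) / D                    # Pr(prior)    ~ (a + c) / D
    return pos, neg, prior

docs = [["hostages", "freed", "hostages", "today"],
        ["stocks", "fell", "sharply", "today"],
        ["hostages", "held", "talks", "stall"]]
print(adaptation(docs, "hostages"))
```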
{
"text": "Positive adaptation tends to be much larger than the prior, which is just a little larger than negative adaptation, as illustrated in the table below for the word ''hostages'' in four years of the Associated Press (AP) newswire. We find remarkably consistent results when we compare one year of the AP news to another (though topics do come and go over time). Generally, the differences of interest are huge (orders of magnitude) compared to the differences among various control conditions (at most factors of two or three). Note that values are more similar within columns than across columns. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimates of Adaptation: Method 1",
"sec_num": "2."
},
{
"text": "We find that some words adapt more than others, and that words that adapt more in one year of the AP also tend to adapt more in another year of the AP. In general, words that adapt a lot tend to have more content (e.g., good keywords for information retrieval (IR)) and words that adapt less have less content (e.g., function words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation is Lexical",
"sec_num": "3."
},
{
"text": "It is often assumed that word frequency is a good (inverse) correlate of content. In the psycholinguistic literature, the term ''high frequency'' is often used synonymously with ''function words,'' and ''low frequency'' with ''content words.'' In IR, inverse document frequency (IDF) is commonly used for weighting keywords. The table below is interesting because it questions this very basic assumption. We compare two words, ''Kennedy'' and ''except,'' that are about equally frequent (similar priors). Intuitively, ''Kennedy'' is a content word and ''except'' is not. This intuition is supported by the adaptation statistics: the adaptation ratio, Pr(+adapt)/Pr(prior), is much larger for ''Kennedy'' than for ''except.'' A similar pattern holds for negative adaptation, but in the reverse direction. That is, Pr(\u2212adapt)/Pr(prior) is much smaller for ''Kennedy'' than for ''except.'' In general, we expect more adaptation for better keywords (e.g., ''Kennedy'') and less adaptation for less good keywords (e.g., function words such as ''except''). This observation runs counter to the standard practice of weighting keywords solely on the basis of frequency, without considering adaptation. In a related paper, Umemura and Church (submitted), we describe a term weighting method that makes use of adaptation (sometimes referred to as burstiness). The table above compares surnames with first names. These surnames are excellent keywords unlike the first names, which are nearly as useless for IR as function words. The adaptation ratio, Pr(+adapt)/Pr(prior), is much larger for the surnames than for the first names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation is Lexical",
"sec_num": "3."
},
{
"text": "[Table: adaptation statistics for ''Kennedy'' vs. ''except'' and for surnames vs. first names]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation is Lexical",
"sec_num": "3."
},
{
"text": "What is the probability of seeing two Noriegas in a document? The chance of the first one is p \u2248 0.006. According to the table above, the chance of two is about 0.75p, closer to p/2 than p^2. Finding a rare word like Noriega in a document is like lightning. We might not expect lightning to strike twice, but it happens all the time, especially for good keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adaptation is Lexical",
"sec_num": "3."
},
{
"text": "Thus far, we have seen that adaptation can be large, but to demonstrate the shape property (lack of dependence on frequency), the counts in the contingency table need to be smoothed. The problem is that the estimates of a, b, c, d, and especially estimates of the ratios of these quantities, become unstable when the counts are small. The standard methods of smoothing in the speech recognition literature are Good-Turing (GT) and Held-Out (HO), described in sections 15.3 & 15.4 of Jelinek (1997). In both cases, we let r be an observed count of an object (e.g., the frequency of a word and/or ngram), and r* be our best estimate of r in another corpus of the same size (all other things being equal).",
"cite_spans": [
{
"start": 483,
"end": 497,
"text": "Jelinek (1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Smoothing (for low frequency words)",
"sec_num": "4."
},
{
"text": "HO splits the training corpus into two halves. The first half is used to count r for all objects of interest (e.g., the frequency of all words in the vocabulary). These counts are then used to group objects into bins. The r-th bin contains all (and only) the words with count r. For each bin, we compute N_r, the number of words in the r-th bin. The second half of the training corpus is then used to compute C_r, the aggregate frequency of all the words in the r-th bin. The final result is simply: r* = C_r / N_r. If the two halves of the training corpus, or the test corpus, have different sizes, then r* should be scaled appropriately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Held-Out (HO)",
"sec_num": "4.1"
},
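The HO procedure just described can be sketched in a few lines. A minimal illustration: the two "halves" here are tiny invented token streams, and no size rescaling is applied since they are equal.

```python
from collections import Counter

# Held-Out estimation: bin words by their count r in half 1, then measure
# the aggregate count of each bin in half 2; r* = C_r / N_r.
def held_out(half1_tokens, half2_tokens):
    f1, f2 = Counter(half1_tokens), Counter(half2_tokens)
    bins = {}                                # r -> words with count r in half 1
    for w, r in f1.items():
        bins.setdefault(r, []).append(w)
    r_star = {}
    for r, words in bins.items():
        N_r = len(words)                     # number of words in the r-th bin
        C_r = sum(f2[w] for w in words)      # their aggregate count in half 2
        r_star[r] = C_r / N_r
    return r_star

h1 = "a a a b b c d d d d".split()
h2 = "a a b b b c c d d".split()
print(held_out(h1, h2))
```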
{
"text": "We chose HO in this work because it makes few assumptions. There is no parametric model. All that is assumed is that the two halves of the training corpus are similar, and that both are similar to the testing corpus. Even this assumption is a matter of some concern, since major stories come and go over time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Standard Held-Out (HO)",
"sec_num": "4.1"
},
{
"text": "As above, the training corpus is split into two halves. We used two different years of AP news. The first half is used to count document frequency df. (Document frequency will be used instead of standard (term) frequency.) Words are binned by df and by their cell in the contingency table. The first half of the corpus is used to compute the number of words in each bin:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of HO to Contingency Tables",
"sec_num": "4.2"
},
{
"text": "N_{df,a}, N_{df,b}, N_{df,c} and N_{df,d}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of HO to Contingency Tables",
"sec_num": "4.2"
},
{
"text": "; the second half of the corpus is used to compute the aggregate document frequency for the words in each bin: C_{df,a}, C_{df,b}, C_{df,c} and C_{df,d}. The final result is simply:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of HO to Contingency Tables",
"sec_num": "4.2"
},
{
"text": "a*_{df} = C_{df,a} / N_{df,a}, b*_{df} = C_{df,b} / N_{df,b}, c*_{df} = C_{df,c} / N_{df,c} and d*_{df} = C_{df,d} / N_{df,d}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of HO to Contingency Tables",
"sec_num": "4.2"
},
{
"text": ". We compute the probabilities as before, but replace a, b, c and d with the smoothed estimates a*_{df}, b*_{df}, c*_{df} and d*_{df}. [Plot: smoothed probabilities against document frequency, with points labeled p (prior), h (history) and n (neighbors).] With these smoothed estimates, we are able to show that Pr(+adapt), labeled h in the plot above, is larger and less dependent on frequency than Pr(prior), labeled p. The plot shows a third group, labeled n for neighbors, which will be described later. Note that the ns fall between the ps and the hs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of HO to Contingency Tables",
"sec_num": "4.2"
},
{
"text": "Thus far, we have seen that adaptation can be huge: Pr(+adapt) >> Pr(prior), often by two or three orders of magnitude. Perhaps even more surprisingly, although the first mention depends strongly on frequency (df), the second does not. Some words adapt more (e.g., Noriega, Aristide, Escobar) and some words adapt less (e.g., John, George, Paul). The results are robust. Words that adapt more in one year of AP news tend to adapt more in another year, and vice versa.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Application of HO to Contingency Tables",
"sec_num": "4.2"
},
{
"text": "Pr(+adapt_2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
{
"text": "So far, we have limited our attention to the relatively simple case where the history and the test are the same size. In practice, this won't be the case. We were concerned that the observations above might be artifacts somehow caused by this limitation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
{
"text": "We experimented with two approaches for understanding the effect of this limitation and found that the size of the history doesn't change Pr(+adapt) very much. The first approach split the history and the test at various points ranging from 5% to 95%. Generally, Pr(+adapt_1) increases as the size of the test portion grows relative to the size of the history, but the effect is relatively small (more like a factor of two than an order of magnitude).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
{
"text": "We were even more convinced by the second approach, which uses Pr(+adapt_2), a completely different method for estimating adaptation, one that doesn't depend on the relative sizes of the history and the test. The two methods produce remarkably similar results, usually well within a factor of two of one another (even when adapted probabilities are orders of magnitude larger than the prior).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
{
"text": "Pr(+adapt_2) makes use of df_j(w), a generalization of document frequency: df_j(w) is the number of documents with j or more instances of w (df_1 is the standard notion of df).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
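The df_j generalization is simple to compute. A toy sketch (the documents below are invented, so the resulting ratio is only illustrative of the mechanics, not of the paper's AP estimates):

```python
from collections import Counter

# df_j(w): number of documents containing w at least j times.
# Pr(+adapt_2) is then approximated by df_2 / df_1.
def df_j(docs, w, j):
    return sum(1 for doc in docs if Counter(doc)[w] >= j)

docs = [["noriega", "fled", "noriega", "hid"],
        ["noriega", "spoke"],
        ["markets", "rose"]]
df1 = df_j(docs, "noriega", 1)
df2 = df_j(docs, "noriega", 2)
print(df2 / df1)   # chance of a second mention, given a first
```

The same function gives the chance of a third mention as df_j(docs, w, 3) / df_j(docs, w, 2).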
{
"text": "Pr(+adapt_2) = Pr(k \u2265 2 | k \u2265 1) = df_2 / df_1 Method 2 has some advantages and some disadvantages in comparison with method 1. On the positive side, method 2 can be generalized to compute the chance of a third instance: [Plot: estimates of first, second and third mentions (labeled 1, 2 and 3) against document frequency.] The plot (above) is similar to the plot in section 4.2 which showed that adapted probabilities (labeled h) are larger and less dependent on frequency than the prior (labeled p). So too, the plot (above) shows that the second and third mentions of a word (labeled 2 and 3, respectively) are larger and less dependent on frequency than the first mention (labeled 1). The plot in section 4.2 used method 1 whereas the plot (above) uses method 2. Both plots use HO smoothing, so there is only one point per bin (df value), rather than one per word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
{
"text": "Pr(k \u2265 3 | k \u2265 2) = df_3 / df_2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method 2:",
"sec_num": "5."
},
{
"text": "Florian and Yarowsky's example, ''It is at least on the Serb side a real setback to the x,'' provides a nice motivation for neighborhoods. Suppose the context (history) mentions a number of words related to a peace process, but doesn't mention the word ''peace.'' Intuitively, there should still be some adaptation. That is, the probability of ''peace'' should go up quite a bit (positive adaptation), and the probability of many other words such as ''piece'' should go down a little (negative adaptation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "We start by partitioning the vocabulary into three exhaustive and mutually exclusive sets: hist, near and other (abbreviations for history, neighborhood and otherwise, respectively). The first set, hist, contains the words that appear in the first half of the document, as before. Other is a catchall for the words that are in neither of the first two sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "The interesting set is near. It is generated by query expansion. The history is treated as a query in an information retrieval document-ranking engine. (We implemented our own ranking engine using simple IDF weighting.) The neighborhood is the set of words that appear in the k \u2248 10 or k \u2248 100 top documents returned by the retrieval engine. To ensure that the three sets partition the vocabulary, we exclude the history from the neighborhood: near = (words in the query expansion of hist) \u2212 hist",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
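The neighborhood construction can be sketched as follows. The source says only that the ranking engine uses simple IDF weighting, so the particular log form and the scoring by summed IDF of matched history words are assumptions; the documents are invented.

```python
import math
from collections import Counter

# Rank documents against the history by summed IDF of shared words, take
# the top-k documents as the query expansion, and exclude the history so
# that hist / near / other partition the vocabulary.
def neighborhood(hist, docs, k=2):
    N = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency
    idf = {w: math.log(N / df[w]) for w in df}   # assumed IDF form
    scored = sorted(docs, key=lambda d: -sum(idf[w] for w in hist & set(d)))
    near = set().union(*(set(d) for d in scored[:k])) - hist
    return near

docs = [["layoffs", "cuts", "laid-off", "notices"],
        ["layoffs", "laid-off", "plant"],
        ["weather", "sunny"]]
hist = {"layoffs", "cuts"}
print(sorted(neighborhood(hist, docs, k=2)))
```

Note how "laid-off" enters the neighborhood without appearing in the history, mirroring the example discussed later in the paper.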
{
"text": "The adaptation probabilities are estimated using a contingency table like before, but we now have a three-way partition (hist, near and other) of the vocabulary instead of the two-way partition, as illustrated below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "Documents containing ''peace'' in 1991 AP. Two-way partition (test vs. \u00actest): history: a = 2125, b = 2160; \u00achistory: c = 1963, d = 74573. Three-way partition (test vs. \u00actest): hist: a = 2125, b = 2160; near: e = 1479, f = 22516; other: g = 484, h = 52057",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "In estimating adaptation probabilities, we continue to use a and b for the history; the \u00achist cells are subdivided into e and f (near) and g and h (other). The table below shows that ''Kennedy'' adapts more than ''except'' and that ''peace'' adapts more than ''piece.'' That is, ''Kennedy'' has a larger spread than ''except'' between the history and the otherwise case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "prior / hist / near / other (src, word): 0.026 / 0.40 / 0.022 / 0.0050 (AP91, Kennedy); 0.020 / 0.32 / 0.025 / 0.0038 (AP93, Kennedy); 0.026 / 0.05 / 0.018 / 0.0122 (AP91, except); 0.019 / 0.05 / 0.014 / 0.0081 (AP93, except); 0.077 / 0.50 / 0.062 / 0.0092 (AP91, peace); 0.074 / 0.49 / 0.066 / 0.0069 (AP93, peace); 0.015 / 0.10 / 0.014 / 0.0066 (AP91, piece); 0.013 / 0.08 / 0.015 / 0.0046 (AP93, piece)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "When df is small (df < 100), HO smoothing is used to group words into bins by df. Adaptation probabilities are computed for each bin, rather than for each word. Since these probabilities are implicitly conditional on df, they have already been weighted by df in some sense, and therefore, it is unnecessary to introduce an additional explicit weighting scheme based on df or a simple transform thereof such as IDF.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "The experiments below split the neighborhood into four classes, ranging from better neighbors to worse neighbors, depending on expansion frequency, ef: ef(t) is a number between 1 and k, indicating how many of the k top-scoring documents contain t. (Better neighbors appear in more of the top-scoring documents, and worse neighbors appear in fewer.) All the neighborhood classes fall between hist and other, with better neighbors adapting more than worse neighbors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhoods (Near)",
"sec_num": "6."
},
{
"text": "Recall that the task is to predict the test portion (the second half) of a document given the history (the first half). The following table shows a selection of words (sorted by the third column) from the test portion of one of the test documents. The table is separated into thirds by horizontal lines. The words in the top third receive much higher scores by the proposed method (S) than by a baseline (B). These words are such good keywords that one can fairly confidently guess what the story is about. Most of these words receive a high score because they were mentioned in the history portion of the document, but ''laid-off'' receives a high score by the neighborhood mechanism. Although ''laid-off'' is not mentioned explicitly in the history, it is obviously closely related to a number of words that were, especially ''layoffs,'' but also ''notices'' and ''cuts.'' It is reassuring to see the neighborhood mechanism doing what it was designed to do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
{
"text": "The middle third shows words whose scores are about the same as the baseline. These words tend to be function words and other low-content words that give us little sense of what the document is about. The bottom third contains words whose scores are much lower than the baseline. These words tend to be high in content, but misleading. The word ''arms,'' for example, might suggest that the story is about a military conflict. The proposed score, S, shown in column 1, is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
{
"text": "Pr_S(w) = Pr(w | hist) if w \u2208 hist; Pr(w | near_4) if w \u2208 near_4; Pr(w | near_3) if w \u2208 near_3; Pr(w | near_2) if w \u2208 near_2; Pr(w | near_1) if w \u2208 near_1; Pr(w | other) otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
{
"text": "where near_1 through near_4 are four neighborhoods (k = 100). Words in near_4 are the best neighbors (ef \u2265 10) and words in near_1 are the worst neighbors (ef = 1). The baseline, B, shown in column 2, is: Pr_B(w) = df / D. Column 3 compares the first two columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
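The score Pr_S(w) is a backoff through the six-way partition. A minimal sketch; the probability values below are hypothetical stand-ins, not the estimates from the paper.

```python
# Back off through the partition: history first, then the four neighborhood
# classes (best neighbors, near_4, first), then the otherwise case.
def pr_s(w, hist, nears, probs):
    """nears = [near_4, near_3, near_2, near_1], best neighbors first."""
    if w in hist:
        return probs["hist"]
    for cls, words in zip(["near4", "near3", "near2", "near1"], nears):
        if w in words:
            return probs[cls]
    return probs["other"]

probs = {"hist": 0.4, "near4": 0.06, "near3": 0.03,
         "near2": 0.02, "near1": 0.01, "other": 0.005}
hist = {"peace"}
nears = [{"treaty"}, {"talks"}, {"accord"}, {"negotiations"}]
print(pr_s("peace", hist, nears, probs))
print(pr_s("talks", hist, nears, probs))
print(pr_s("piece", hist, nears, probs))
```

Because the sets partition the vocabulary, exactly one branch fires for every word.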
{
"text": "We applied this procedure to a year of the AP news and found a sizable gain in information on average: 0.75 bits per word type per document. In addition, there were many more big winners (20% of the documents gained 1 bit/type) than big losers (0% lost 1 bit/type). The largest winners include lists of major cities and their temperatures, lists of major currencies and their prices, and lists of commodities and their prices. Neighborhoods are quite successful in guessing the second half of such lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
{
"text": "On the other hand, there were a few big losers, e.g., articles that summarize the major stories of the day, week and year. The second half of a summary article is almost never about the same subject as the first half. There were also a few end-of-document delimiters that were garbled in transmission, causing two different documents to be treated as if they were one. These garbled documents tended to cause trouble for the proposed method; in such cases, the history comes from one document and the test comes from another.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
{
"text": "In general, the proposed adaptation method performed well when the history is helpful for predicting the test portion of the document, and it performed poorly when the history is misleading. This suggests that we ought to measure topic shifts using methods suggested by Hearst (1994) and Florian & Yarowsky (1999) . We should not use the history when we believe that there has been a major topic shift.",
"cite_spans": [
{
"start": 270,
"end": 283,
"text": "Hearst (1994)",
"ref_id": "BIBREF3"
},
{
"start": 288,
"end": 313,
"text": "Florian & Yarowsky (1999)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "7."
},
{
"text": "Adaptive language models were introduced to account for repetition. It is well known that the second instance of a word (or ngram) is much more likely than the first. But what we find surprising is just how large the effect is. The chance of two Noriegas is closer to p /2 than p 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
},
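The title's claim can be checked with a line of arithmetic; the value of p below is an arbitrary example, not an estimate from the paper:

```python
# Under bag-of-words independence, two mentions of a word with probability p
# would have probability p**2; the paper's empirical finding is that the
# chance of a second mention is closer to p/2.
p = 0.001                # example chance of a first mention
independent = p ** 2     # 1e-06 predicted under independence
observed = p / 2         # 5e-04, the p/2 behavior reported in the paper

# For p = 0.001 the independence assumption is off by a factor of ~500.
ratio = observed / independent
```

The smaller p is, the larger this gap becomes, which is why adaptation matters most for rare content words.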
{
"text": "In addition to the magnitude of adaptation, we were also surprised by the shape: while the first instance of a word depends very strongly on frequency, the second does not. Adaptation depends more on content than frequency; adaptation is stronger for content words such as proper nouns, technical terminology and good keywords for information retrieval, and weaker for function words, cliches and first names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
},
{
"text": "The shape and magnitude of adaptation have implications for psycholinguistics, information retrieval and language modeling. Psycholinguistics has tended to equate word frequency with content, but our results suggest that two words with similar frequency (e.g., ''Kennedy'' and ''except'') can be distinguished on the basis of their adaptation. Information retrieval has tended to use frequency in a similar way, weighting terms by IDF (inverse document frequency), with little attention paid to adaptation. We propose a term weighting method that makes use of adaptation (burstiness) and expansion frequency in a related paper (Umemura and Church, submitted).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
},
{
"text": "Two estimation methods were introduced to demonstrate the magnitude and shape of adaptation. Both methods produce similar results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
},
{
"text": "\u2022 Pr( + adapt 1 ) = Pr(test | hist)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
},
{
"text": "\u2022 Pr( + adapt 2 ) = Pr(k\u22652 | k\u22651)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
},
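Both estimators reduce to simple ratios of document counts. A sketch, using the ''hostages'' contingency counts quoted earlier (a = 638 documents with the word in both halves, b = 505 with it in the history only); the counts passed to adapt2 are made-up examples:

```python
# Method 1: Pr(+adapt1) = Pr(w in test | w in hist) = a / (a + b),
# where a = docs with w in both halves, b = docs with w in the history only.
def adapt1(a, b):
    return a / (a + b)

# Method 2: Pr(+adapt2) = Pr(k >= 2 | k >= 1): among documents containing
# the word at least once, the fraction containing it at least twice.
def adapt2(docs_with_2_or_more, docs_with_1_or_more):
    return docs_with_2_or_more / docs_with_1_or_more

# ''hostages'' counts from the paper's contingency table: a = 638, b = 505.
p_repeat = adapt1(638, 505)   # roughly 0.56, far above the word's prior
```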
{
"text": "Neighborhoods were then introduced for words such as ''laid-off'' that were not in the history but were close (''laid-off'' is related to ''layoff,'' which was in the history). Neighborhoods were defined in terms of query expansion. The history is treated as a query in an information retrieval document-ranking system. Words in the k topranking documents (but not in the history) are called neighbors. Neighbors adapt more than other terms, but not as much as words that actually appeared in the history. Better neighbors (larger ef) adapt more than worse neighbors (smaller ef).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8."
}
],
"back_matter": [
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Poisson Mixtures",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Natural Language Engineering",
"volume": "1",
"issue": "2",
"pages": "163--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. and Gale, W. (1995) ''Poisson Mixtures,'' Journal of Natural Language Engi- neering, 1:2, pp. 163-190.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "167--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian, R. and Yarowsky, D. (1999) ''Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation,'' ACL, pp. 167-174.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Frequency Analysis of English Usage",
"authors": [
{
"first": "W",
"middle": [],
"last": "Francis",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Kucera",
"suffix": ""
}
],
"year": 1982,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis, W., and Kucera, H. (1982) Frequency Analysis of English Usage, Houghton Mifflin Company, Boston, MA.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Context and Structure in Automated Full-Text Information Access",
"authors": [
{
"first": "M",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hearst, M. (1994) Context and Structure in Automated Full-Text Information Access, PhD Thesis, Berkeley, available via www.sims.ber- keley.edu/\u02dchearst.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Empirical Term Weighting: A Framework for Studying Limits",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Umemura",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
}
],
"year": 1997,
"venue": "Stop Lists, Burstiness and Query Expansion",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, F. (1997) Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA, USA. Umemura, K. and Church, K. (submitted) ''Empirical Term Weighting: A Framework for Studying Limits, Stop Lists, Burstiness and Query Expansion.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "b, c, d with a *, b *, c *, d *, respectively. History (h) >> Neighborhood (n) >> Prior (p)"
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "a, b, c and d as before, but four new variables are introduced: e, f, g and h, where c = e + g and d = f + h. Pr(w\u2208test) \u2248 (a + c)/D (prior); Pr(w\u2208test | w\u2208hist) \u2248 a/(a + b) (hist); Pr(w\u2208test | w\u2208near) \u2248 e/(e + f) (near); Pr(w\u2208test | w\u2208other) \u2248 g/(g + h) (other)."
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td>Pr( + adapt 1 ) = Pr(w\u2208test | w\u2208history) \u2248 a/(a + b)</td></tr><tr><td>Pr( \u2212 adapt 1 ) = Pr(w\u2208test | w\u2209history) \u2248 c/(c + d)</td></tr><tr><td>The counts show that there are (a) 638 documents with ''hostages'' in both the first half (history) and the second half (test), (b) 505 documents with ''hostages'' in just the first half, (c) 557 documents with ''hostages'' in just the second half, and (d) 76,787 documents with ''hostages'' in neither half. Positive and negative adaptation are defined in terms of a, b, c and d.</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table><tr><td>[Figure: ''Adaptation is huge (and hardly dependent on frequency)''; log-log plot of Probability (0.00001 to 1) against Document Frequency (df, 1 to 100,000).]</td></tr></table>",
"type_str": "table",
"text": "But unfortunately, we do not know how to use method 2 to estimate negative adaptation; we leave that as an open question."
}
}
}
}