| { |
| "paper_id": "H93-1021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:30:41.091693Z" |
| }, |
| "title": "ADAPTIVE LANGUAGE MODELING USING THE MAXIMUM ENTROPY PRINCIPLE", |
| "authors": [ |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Lau", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research Division Thomas J. Watson Research Center Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Ronald", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research Division Thomas J. Watson Research Center Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "IBM Research Division Thomas J. Watson Research Center Yorktown Heights", |
| "location": { |
| "postCode": "10598", |
| "region": "NY" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe our ongoing efforts at adaptive statistical language modeling. Central to our approach is the Maximum Entropy (ME) Principle, allowing us to combine evidence from multiple sources, such as long-distance triggers and conventional short-distance trigrams. Given consistent statistical evidence, a unique ME solution is guaranteed to exist, and an iterative algorithm exists which is guaranteed to converge to it. Among the advantages of this approach are its simplicity, its generality, and its incremental nature. Among its disadvantages are its computational requirements. We describe a succession of ME models, culminating in our current Maximum Likelihood / Maximum Entropy (ML/ME) model. Preliminary results with the latter show a 27% perplexity reduction as compared to a conventional trigram model.", |
| "pdf_parse": { |
| "paper_id": "H93-1021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We describe our ongoing efforts at adaptive statistical language modeling. Central to our approach is the Maximum Entropy (ME) Principle, allowing us to combine evidence from multiple sources, such as long-distance triggers and conventional short-distance trigrams. Given consistent statistical evidence, a unique ME solution is guaranteed to exist, and an iterative algorithm exists which is guaranteed to converge to it. Among the advantages of this approach are its simplicity, its generality, and its incremental nature. Among its disadvantages are its computational requirements. We describe a succession of ME models, culminating in our current Maximum Likelihood / Maximum Entropy (ML/ME) model. Preliminary results with the latter show a 27% perplexity reduction as compared to a conventional trigram model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Until recently, the most successful language model (given enough training data) was the trigram [1] , where the probability of a word is estimated based solely on the two words preceding it. The trigram model is simple yet powerful [2] . However, since it does not use anything but the very immediate history, it is incapable of adapting to the style or topic of the document, and is therefore considered a static model. In contrast, a dynamic or adaptive model is one that changes its estimates as a result of \"seeing\" some of the text. An adaptive model may, for example, rely on the history of the current document in estimating the probability of a word. Adaptive models are superior to static ones in that they are able to improve their performance after seeing some of the data. This is particularly useful in two situations. First, when a large heterogeneous language source is composed of smaller, more homogeneous segments, such as newspaper articles. An adaptive model trained on the heterogeneous source will be able to home in on the particular \"sublanguage\" used in each of the articles. Second, when a model trained on data from one domain is used in another domain. Again, an adaptive model will be able to adjust to the new language, thus improving its performance.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 99, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 232, |
| "end": 235, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STATE OF THE ART", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The most successful adaptive LM to date is described in [3] . A cache of the last few hundred words is maintained, and is used (*This work is now continued by Ron Rosenfeld at Carnegie Mellon University.)", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 59, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STATE OF THE ART", |
| "sec_num": "1." |
| }, |
| { |
| "text": "to derive a \"cache trigram\". The latter is then interpolated with the static trigram. This results in a 23% reduction in perplexity, and a 5%-24% reduction in the error rate of a speech recognizer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STATE OF THE ART", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In what follows, we describe our efforts at improving our adaptive statistical language models by capitalizing on the information present in the document history.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "STATE OF THE ART", |
| "sec_num": "1." |
| }, |
| { |
| "text": "To extract information from the document history, we propose the idea of a trigger pair as the basic information-bearing element. If a word sequence A is significantly correlated with another word sequence B, then (A \u2192 B) is considered a \"trigger pair\", with A being the trigger and B the triggered sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TRIGGER-BASED MODELING", |
| "sec_num": "2." |
| }, |
| { |
| "text": "When A occurs in the document, it triggers B, causing its probability estimate to change.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TRIGGER-BASED MODELING", |
| "sec_num": "2." |
| }, |
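The paper does not say here how trigger pairs are selected. As an illustrative sketch only (not the authors' procedure), candidate pairs can be ranked by document-level mutual information between "A occurs in a document" and "B occurs in the same document"; `trigger_candidates` and `min_count` are our own hypothetical names:

```python
import math
from collections import Counter
from itertools import combinations

def trigger_candidates(docs, min_count=1):
    """Rank candidate trigger pairs (A -> B) by document-level mutual
    information between the events 'A occurs in a document' and
    'B occurs in the same document'.  A simplified sketch: the paper
    measures correlation within a history window, not whole documents."""
    n_docs = len(docs)
    occ = Counter()   # number of documents containing each word
    co = Counter()    # number of documents containing both words of a pair
    for doc in docs:
        words = set(doc)
        for w in words:
            occ[w] += 1
        for a, b in combinations(sorted(words), 2):
            co[(a, b)] += 1
    scored = []
    for (a, b), c_ab in co.items():
        if c_ab < min_count:
            continue
        p_ab = c_ab / n_docs
        p_a = occ[a] / n_docs
        p_b = occ[b] / n_docs
        # pointwise contribution p(a,b) * log( p(a,b) / (p(a) p(b)) )
        scored.append((p_ab * math.log(p_ab / (p_a * p_b)), a, b))
    scored.sort(reverse=True)
    return scored
```

Pairs that co-occur more often than chance predicts rise to the top; a frequency cutoff like `min_count` keeps noise pairs out.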
| { |
| "text": "Before attempting to design a trigger-based model, one should study what long-distance factors have significant effects on word probabilities. Obviously, some information about P(B) can be gained simply by knowing that A had occurred. But exactly how much? And can we gain significantly more by considering how recently A occurred, or how many times?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TRIGGER-BASED MODELING", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We have studied these issues using the Wall Street Journal corpus of 38 million words. Some illustrations are given in figs. 1 and 2. As can be expected, different trigger pairs give different answers, and hence should be modeled differently. More detailed modeling should be used when the expected return is higher.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TRIGGER-BASED MODELING", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Once we have determined the phenomena to be modeled, one main issue still needs to be addressed. Given the part of the document processed so far (h), and a word w considered for the next position, there are many different estimates of P(w|h). These estimates are derived from the various triggers of w, from the static trigram model, and possibly from other sources. How do we combine them all to form one optimal estimate? We propose a solution to this problem in the next section. [Figure caption fragment: ... given that 'STOCK' occurred (did not occur) before in the document.]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "TRIGGER-BASED MODELING", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Using several different probability estimates to arrive at one combined estimate is a general problem that arises in many tasks. We use the maximum entropy (ME) principle ( [4, 5] ), which can be summarized as follows:", |
| "cite_spans": [ |
| { |
| "start": 173, |
| "end": 176, |
| "text": "[4,", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 177, |
| "end": 179, |
| "text": "5]", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAXIMUM ENTROPY SOLUTIONS", |
| "sec_num": "3." |
| }, |
| { |
| "text": "1. Reformulate the different estimates as constraints on the expectation of various functions, to be satisfied by the target (combined) estimate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAXIMUM ENTROPY SOLUTIONS", |
| "sec_num": "3." |
| }, |
| { |
| "text": "2. Among all probability distributions that satisfy these constraints, choose the one that has the highest entropy. In the next 3 sections, we describe a succession of models we developed, all based on the ME principle. We then expand on the last model, describe possible future extensions to it, and report current results. More details can be found in [6, 7] .", |
| "cite_spans": [ |
| { |
| "start": 352, |
| "end": 355, |
| "text": "[6,", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 356, |
| "end": 358, |
| "text": "7]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MAXIMUM ENTROPY SOLUTIONS", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Assume we have identified, for each word w in a vocabulary V, a set of n_w trigger words t_w^1, t_w^2, ..., t_w^{n_w}; we further assume that we have the relative frequency of observing a trigger word, t, occurring somewhere in the history, h, (in our case we have used a history length, K, of either 25, 50, 200, or 1000 words) with the word w occurring immediately after the history, in some training text; denote the observed relative frequency of a trigger t and a word w by", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL I: EARLY ATTEMPTS", |
| "sec_num": "4." |
| }, |
| { |
| "text": "d(t, w) = c(t \u2208 h and w immediately follows h) / N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL I: EARLY ATTEMPTS", |
| "sec_num": "4." |
| }, |
| { |
| "text": "where c(.) is the count in the training data. We use {t, w} to indicate the event that trigger t occurred in the history and word w occurs next; the term long-distance bigram has been used for this event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL I: EARLY ATTEMPTS", |
| "sec_num": "4." |
| }, |
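The relative frequency d(t, w) above can be collected in a single pass over the training text. A minimal sketch, assuming the corpus is a flat word list and `triggers` is a hypothetical mapping from each word w to its trigger set:

```python
from collections import Counter

def trigger_frequencies(words, triggers, K=200):
    """Empirical relative frequency d(t, w): the fraction of corpus
    positions at which trigger t appears in the K-word history and
    w is the next word.  `triggers` maps a word w to its trigger set."""
    N = len(words)
    counts = Counter()
    for i in range(1, N):
        w = words[i]
        history = set(words[max(0, i - K):i])
        for t in triggers.get(w, ()):
            if t in history:
                counts[(t, w)] += 1
    return {tw: c / N for tw, c in counts.items()}
```

In a real system the history would be reset at document boundaries; this sketch treats the corpus as one stream.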
| { |
| "text": "Assume we have a joint distribution p(h, w) of the history of K words and the next word w. We require this joint model to assign to the events {t, w} a probability that matches the observed relative frequencies. Assuming we have R such constraints, we find the model that has Maximum Entropy:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL I: EARLY ATTEMPTS", |
| "sec_num": "4." |
| }, |
| { |
| "text": "p*(h, w) = arg max_p \u2212 \u03a3_{h,w} p(h, w) lg p(h, w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL I: EARLY ATTEMPTS", |
| "sec_num": "4." |
| }, |
| { |
| "text": "subject to the R trigger constraints:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL I: EARLY ATTEMPTS", |
| "sec_num": "4." |
| }, |
| { |
| "text": "p(t, w) = \u03a3_{h: t \u2208 h} p(h, w) = d(t, w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "p(t, w) = E p(h, w) = d(t, w)", |
| "sec_num": null |
| }, |
| { |
| "text": "We also include the case in which none of the triggers of word w occur in the history (we denote this event by {t_0, w}). Using Lagrange multipliers, one can easily show that the Maximum Entropy model is given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "p(t, w) = E p(h, w) = d(t, w)", |
| "sec_num": null |
| }, |
| { |
| "text": "p(h, w) = \u220f_{t: t \u2208 h} \u03bc_{t,w}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "p(t, w) = E p(h, w) = d(t, w)", |
| "sec_num": null |
| }, |
| { |
| "text": "i.e., the joint probability is the product of l_h(w) factors, one factor for each trigger t_w of word w that occurs in the history h (or one factor if none of the triggers occur). The Maximum Entropy joint distribution over a space of size |V|^{K+1} is given by R parameters, one for each constraint. In our case, we used a maximum of 20 triggers per word for a 20k vocabulary, with an average of 10, resulting in about 200,000 constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "p(t, w) = E p(h, w) = d(t, w)", |
| "sec_num": null |
| }, |
| { |
| "text": "(We also imposed unigram constraints to match the unigram distribution of the vocabulary.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "p(t, w) = E p(h, w) = d(t, w)", |
| "sec_num": null |
| }, |
| { |
| "text": "One can use the \"Brown\" algorithm to determine the set of factors. At each iteration, one updates the factor of one constraint; as long as one cycles through all constraints repeatedly, the factors will converge to the optimal values. At the i-th iteration, assume we are updating the factor that corresponds to the {t, w} constraint. Then the update is given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "\u03bc_{t,w}^{new} = \u03bc_{t,w}^{old} \u00b7 d(t, w) / m(t, w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "where the model-predicted value m(t, w) is given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "m(t, w) = \u03a3_{h: t \u2208 h} p^{old}(h, w)    (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "where p^{old} uses the old factor values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
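The update above can be exercised end-to-end on a toy event space small enough to enumerate (the real space of size |V|^{K+1} cannot be). A sketch with binary indicator constraints, factors initialized to 1, and the multiplicative d/m rescaling applied one constraint at a time; all names here are ours:

```python
def iterative_scaling(vocab, features, targets, iters=200):
    """Tiny Maximum Entropy fit over an enumerable event space.
    Each feature i is an indicator f_i(x) with target expectation
    targets[i]; the model is p(x) proportional to the product of
    mu_i over the features that fire on x.  Each sweep rescales one
    factor at a time by target / model-predicted expectation."""
    mu = [1.0] * len(features)

    def probs():
        # unnormalised weight of each event, then normalise
        w = [1.0] * len(vocab)
        for k, x in enumerate(vocab):
            for i, f in enumerate(features):
                if f(x):
                    w[k] *= mu[i]
        z = sum(w)
        return [v / z for v in w]

    for _ in range(iters):
        for i, f in enumerate(features):
            p = probs()
            m = sum(p[k] for k, x in enumerate(vocab) if f(x))
            if m > 0:
                mu[i] *= targets[i] / m   # the d(t, w) / m(t, w) update
    return probs()
```

With a single constraint the factor converges geometrically to the value that makes the model expectation match the target.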
| { |
| "text": "Using the ME joint model, we define a conditional unigram model by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "p(w|h) = p*(h, w) / \u03a3_w p*(h, w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "This is a \"time-varying\" unigram model where the previous K words determine the relative probability that w would occur next. The perplexity of the resulting model was about 2000, much higher than the perplexity of a static unigram model. In particular, the model underestimated the probability of the frequent words. To ease that problem we disallowed any triggers for the most frequent L words. We experimented with L ranging from 100 to 500 words. The resulting model was better, though its perplexity was still about 1100, which is 43% higher than the static unigram perplexity of 772. One reason, we conjecture, is that the ME model gives a rather high probability to histories that are quite unlikely in reality, and the trigger constraints are matched using those unrealistic histories. We tried an ad hoc computation where the summation over the histories in Equation 1 was weighted by a crude estimate, w(h), of the probability of the history, i.e., we used", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "m(t, w) = \u03a3_{h: t \u2208 h} w(h) p^{old}(h, w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "The resulting model had a much lower perplexity of 559, about 27% lower than the static unigram model, on a test set of 1927 words. This ad hoc computation indicates that we need to model the histories more realistically. The model we propose in the next section is derived from the viewpoint that ME indicates that R factors define a conditional model that captures the \"long-distance\" bigram constraints, and that using this parametric form with Maximum Likelihood estimation may allow us to concentrate on typical histories that occur in the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How to determine the factors?", |
| "sec_num": "4.1." |
| }, |
| { |
| "text": "The ME viewpoint results in a conditional model that belongs to the exponential family, with K parameters when K constraints are contemplated. We can use Maximum Likelihood estimation to estimate the K factors of the model. The log likelihood of a training set is given by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "L = \u03a3_{t=0}^{N\u22121} lg p(w_{t+1} | h_t),  where  p(w | h) \u221d \u220f_{i \u2208 l_h(w)} \u03bc_i", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "where l_h(w) is the set of triggers for word w that occur in h.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The convexity of the log likelihood guarantees that any hill climbing method will converge to the global optimum. The gradient can be shown to be:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "\u2202L/\u2202\u03bc_{t,w} = (1/\u03bc_{t,w}) (d(t, w) \u2212 \u03a3_{h: t \u2208 h} p(w|h))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "One can use the gradient to iteratively re-estimate the factors by:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "\u03bc_{t,w}^{new} = \u03bc_{t,w}^{old} + \u03b7 (1/\u03bc_{t,w}) (d(t, w) \u2212 m'(t, w))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "where the model-predicted value m'(t, w) for a constraint is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "m'(t, w) = \u03a3_{h: t \u2208 h} p(w|h)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "The training data is used to estimate the gradient given the current estimate of the factors. The size of the gradient step can be optimized by a line search on a small amount of training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
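The gradient re-estimation described above can be sketched as one step of gradient ascent on the conditional log likelihood. For clarity this sketch reparameterises each factor as lam_i = log mu_i, which makes the update additive (our assumption, not the paper's exact recipe); `eta` stands in for the step size the paper chooses by line search:

```python
import math

def conditional_ml_step(lam, data, vocab, feats, eta=0.5):
    """One gradient-ascent step on L = sum_t log p(w_t | h_t), with
    p(w|h) proportional to exp(sum_i lam_i f_i(h, w)).  The gradient
    for each feature is (empirical count) - (model-expected count),
    matching d(t, w) - m'(t, w) summed over the training positions."""
    grad = [0.0] * len(lam)
    for h, w in data:
        scores = [math.exp(sum(l * f(h, v) for l, f in zip(lam, feats)))
                  for v in vocab]
        z = sum(scores)
        for i, f in enumerate(feats):
            grad[i] += f(h, w)                      # empirical term
            grad[i] -= sum(s / z * f(h, v)          # model expectation
                           for s, v in zip(scores, vocab))
    return [l + eta * g for l, g in zip(lam, grad)]
```

Because the objective is concave in the log-parameters, repeated steps with a suitable step size converge to the global optimum, as the text argues.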
| { |
| "text": "Given the \"time-varying\" unigram estimate, we use the methods of [8] to obtain a bigram LM whose unigram matches the time-varying unigram using a window of the most recent L words.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 68, |
| "text": "[8]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "MODEL II: ML OF CONDITIONAL ME", |
| "sec_num": "5." |
| }, |
| { |
| "text": "For estimating a probability function P(x), each constraint i is associated with a constraint function f_i(x) and a desired expectation c_i. The constraint is then written as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CURRENT MODEL: ML/ME", |
| "sec_num": "6." |
| }, |
| { |
| "text": "E_P[f_i] = \u03a3_x P(x) f_i(x) = c_i    (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CURRENT MODEL: ML/ME", |
| "sec_num": "6." |
| }, |
| { |
| "text": "Given consistent constraints, a unique ME solution is guaranteed to exist, and to be of the form:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CURRENT MODEL: ML/ME", |
| "sec_num": "6." |
| }, |
| { |
| "text": "P(x) = \u220f_i p_i^{f_i(x)}    (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CURRENT MODEL: ML/ME", |
| "sec_num": "6." |
| }, |
| { |
| "text": "where the p_i's are some unknown constants, to be found.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CURRENT MODEL: ML/ME", |
| "sec_num": "6." |
| }, |
| { |
| "text": "To search the exponential family defined by (3) for the p_i's that will make P(x) satisfy all the constraints, an iterative algorithm, \"Generalized Iterative Scaling\", exists, which is guaranteed to converge to the solution ( [9] ).", |
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 228, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Probability functions of the form (3) are called log-linear, and the family of functions defined by holding the f_i's fixed and varying the p_i's is called an exponential family.", |
| "sec_num": null |
| }, |
| { |
| "text": "To reformulate a trigger pair A \u2192 B as a constraint, define the constraint function f_{A\u2192B} as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulating Triggers as Constraints", |
| "sec_num": "6.1." |
| }, |
| { |
| "text": "f_{A\u2192B}(h, w) = 1 if A \u2208 h and w = B, and 0 otherwise    (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulating Triggers as Constraints", |
| "sec_num": "6.1." |
| }, |
| { |
| "text": "Set c_{A\u2192B} to \u1ebc[f_{A\u2192B}], the empirical expectation of f_{A\u2192B} (i.e., its expectation in the training data). Now impose on the desired probability estimate P(h, w) the constraint:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formulating Triggers as Constraints", |
| "sec_num": "6.1." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "Ep [fA--~t~] = E [f~--.B]", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Formulating Triggers as Constraints", |
| "sec_num": "6.1." |
| }, |
| { |
| "text": "Generalized Iterative Scaling can be used to find the ME estimate of a simple (non-conditional) probability distribution over some event space. But in our case, we need to estimate conditional probabilities of the form P(w|h). How should this be done more efficiently than in the previous models?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "An elegant solution was proposed by [10] . Let P(h, w) be the desired probability estimate, and let P\u0303(h, w) be the empirical distribution of the training data. Let f_i(h, w) be any constraint function, and let c_i be its desired expectation. Equation 5 can be rewritten as:", |
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 40, |
| "text": "[10]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "\u03a3_h P(h) \u03a3_w P(w|h) f_i(h, w) = c_i    (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "We now modify the constraint to be:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "\u03a3_h P\u0303(h) \u03a3_w P(w|h) f_i(h, w) = c_i    (7)", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 35, |
| "text": "(7)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "One possible interpretation of this modification is as follows. Instead of constraining the expectation of f_i(h, w) with regard to P(h, w), we constrain its expectation with regard to a different probability distribution, say Q(h, w), whose conditional Q(w|h) is the same as that of P, but whose marginal Q(h) is the same as that of P\u0303. To better understand the effect of this change, define H as the set of all possible histories h, and define H_{f_i} as the partition of H induced by f_i. Then the modification is equivalent to assuming that, for every constraint f_i, P(H_{f_i}) = P\u0303(H_{f_i}). Since typically H_{f_i} is a very small set, the assumption is reasonable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "The unique ME solution that satisfies equations like (7) or (6) can be shown to also be the Maximum Likelihood (ML) solution, namely that function which, among the exponential family defined by the constraints, has the maximum likelihood of generating the data. The identity of the ML and ME solutions, apart from being aesthetically pleasing, is extremely useful when estimating the conditional P(w|h). It means that hill climbing methods can be used in conjunction with Generalized Iterative Scaling to speed up the search. Since the likelihood objective function is convex, hill climbing will not get stuck in local minima.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Conditionals: The ML/ME Solution", |
| "sec_num": "6.2." |
| }, |
| { |
| "text": "We combine the trigger-based model with the currently best static model, the N-gram, by reformulating the latter to fit into the ML/ME paradigm. The usual unigram, bigram and trigram ML estimates are replaced by unigram, bigram and trigram constraints conveying the same information. Specifically, the constraint function for the unigram w1 is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
| { |
| "text": "f_{w1}(h, w) = 1 if w = w1, and 0 otherwise    (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
| { |
| "text": "and its associated constraint is: \u03a3_h P\u0303(h) \u03a3_w P(w|h) f_{w1}(h, w) = \u1ebc[f_{w1}(h, w)]    (9)", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 25, |
| "text": "(8)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 85, |
| "text": "wlh rw,(h, w)= fw, (h, w)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
| { |
| "text": "Similarly, the constraint function for the bigram w1, w2 is: f_{w1,w2}(h, w) = 1 if h ends in w1 and w = w2, and 0 otherwise    (10), and its associated constraint is", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 89, |
| "text": "(10)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
| { |
| "text": "\u03a3_h P\u0303(h) \u03a3_w P(w|h) f_{w1,w2}(h, w) = \u1ebc[f_{w1,w2}(h, w)]    (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
| { |
| "text": "and similarly for higher-order N-grams.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 8, |
| "text": "(11)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
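The constraint functions of this section are plain indicator functions and can be written directly. A minimal sketch (the factory helpers are our own naming), treating a history h as a tuple of words:

```python
def f_unigram(w1):
    """Unigram constraint function, as in (8): fires when the next word is w1."""
    return lambda h, w: 1.0 if w == w1 else 0.0

def f_bigram(w1, w2):
    """Bigram constraint function, as in (10): fires when h ends in w1 and w is w2."""
    return lambda h, w: 1.0 if h and h[-1] == w1 and w == w2 else 0.0

def f_trigger(a, b):
    """Trigger constraint function, as in (4): fires when a occurred in h and w is b."""
    return lambda h, w: 1.0 if a in h and w == b else 0.0
```

Note why bigram and trigger constraints dominate the cost: for a typical history, `f_bigram` and `f_trigger` can fire for many candidate next words w, so their model expectations require summing over much of the vocabulary.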
| { |
| "text": "The computational bottleneck of the Generalized Iterative Scaling algorithm is in constraints which, for typical histories h, are non-zero for a large number of w's. This means that bigram constraints are more expensive than trigram constraints. Implicit computation can be used for unigram constraints. Therefore, the time cost of bigram and trigger constraints dominates the total time cost of the algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating the trigram model", |
| "sec_num": "6.3." |
| }, |
| { |
| "text": "The ME principle and the Generalized Iterative Scaling algorithm have several important advantages:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ME: PROS AND CONS", |
| "sec_num": "7." |
| }, |
| { |
| "text": "1. The ME principle is simple and intuitively appealing. It imposes all of the constituent constraints, but assumes nothing else. For the special case of constraints derived from marginal probabilities, it is equivalent to assuming a lack of higher-order interactions [11] . 2. ME is extremely general. Any probability estimate of any subset of the event space can be used, including estimates that were not derived from the data or that are inconsistent with it. The distance dependence and count dependence illustrated in figs. 1 and 2 can be readily accommodated. Many other knowledge sources, including higher-order effects, can be incorporated. Note that constraints need not be independent of, nor uncorrelated with, each other.", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 269, |
| "text": "[11]", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ME: PROS AND CONS", |
| "sec_num": "7." |
| }, |
| { |
| "text": "3. The information captured by existing language models can be absorbed into the ML/ME model. We have shown how this is done for the conventional N-gram model. Later on we will show how it can be done for the cache model of [3] . 4. Generalized Iterative Scaling lends itself to incremental adaptation. New constraints can be added at any time. Old constraints can be maintained or else allowed to relax.", |
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 228, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 231, |
| "end": 232, |
| "text": "4", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ME: PROS AND CONS", |
| "sec_num": "7." |
| }, |
| { |
| "text": "A unique ME solution is guaranteed to exist for consistent constraints. The Generalized Iterative Scaling algorithm is guaranteed to converge to it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "This approach also has the following weaknesses:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "1. Generalized Iterative Scaling is computationally very expensive. When the complete system is trained on the entire 50 million words of Wall Street Journal data, it is expected to require many thousands of MIPS-hours to run to completion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "2. While the algorithm is guaranteed to converge, we do not have a theoretical bound on its convergence rate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "3. It is sometimes useful to impose constraints that are not satisfied by the training data. For example, we may choose to use Good-Turing discounting [12] , or else the constraints may be derived from other data, or be externally imposed. Under these circumstances, the constraints may no longer be consistent, and the theoretical results guaranteeing existence, uniqueness and convergence may not hold.", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 154, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
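A minimal sketch of Good-Turing discounting (Good, 1953) shows the kind of externally imposed adjustment meant here; once counts are discounted, the constraints generally no longer match the training data exactly, which is the inconsistency the text warns about. The word counts below are invented.

```python
from collections import Counter

counts = {"stock": 5, "shares": 3, "winter": 1,
          "summer": 1, "default": 1, "the": 2}
n = Counter(counts.values())          # n[r] = number of words seen r times

def good_turing(r):
    """Discounted count r* = (r + 1) * n[r+1] / n[r]; falls back to the raw
    count when the needed count-of-counts is unavailable."""
    if n[r] == 0 or n[r + 1] == 0:
        return float(r)
    return (r + 1) * n[r + 1] / n[r]

# Singletons (n[1] = 3, n[2] = 1) are discounted from 1 to 2 * 1 / 3.
```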
| { |
| "text": "It seems that the power of the cache model, described in section 1, comes from the \"bursty\" nature of language. Namely, infrequent words tend to occur in \"bursts\", and once a word has occurred in a document, its probability of recurrence is significantly elevated.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INCORPORATING THE CACHE MODEL", |
| "sec_num": "8." |
| }, |
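The recurrence effect can be sketched with a simple cache component, in the spirit of the dynamic model of [3]: a static unigram probability is interpolated with the word's frequency in the document so far. The static probabilities and the 0.2 cache weight below are invented values.

```python
# Interpolate a static unigram estimate with an in-document cache estimate.
def cache_prob(word, history, static_p, cache_weight=0.2):
    freq = history.count(word) / len(history) if history else 0.0
    return (1 - cache_weight) * static_p[word] + cache_weight * freq

static_p = {"default": 0.001, "the": 0.05}
history = ["the", "default", "was", "declared", "the", "default"]
before = static_p["default"]                     # unconditional probability
after = cache_prob("default", history, static_p)
# Having occurred twice in the document, "default" is now far more likely.
```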
| { |
| "text": "Of course, this phenomenon can be captured by a trigger pair of the form A -> A, which we call a \"self trigger\". We have done exactly that in [13]. We found that self triggers are responsible for a disproportionately large part of the reduction in perplexity. Furthermore, self triggers proved particularly robust: when tested in new domains, they maintained the correlations found in the training data better than the \"regular\" triggers did.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 144, |
| "text": "[13]", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INCORPORATING THE CACHE MODEL", |
| "sec_num": "8." |
| }, |
| { |
| "text": "Thus self triggers are particularly important, and should be modeled separately and in more detail. The trigger model we currently use does not distinguish between one or more occurrences of a given word in the history, whereas the cache model does. For self-triggers, the additional information can be significant (see fig. 3). We plan to model self triggers in more detail. We will consider explicit modeling of frequency of occurrence, distance from last occurrence, and other factors. All of these aspects can easily be formulated as constraints and incorporated into the ME formalism.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 320, |
| "end": 326, |
| "text": "fig. 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "INCORPORATING THE CACHE MODEL", |
| "sec_num": "8." |
| }, |
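The aspects just listed can be phrased as binary constraint functions over (history, word) pairs, ready for the ME formalism. The bucket choices below (exactly one prior occurrence, exactly two, a 50-word recency window) are hypothetical illustrations, not the paper's actual feature set.

```python
# Build self-trigger constraint functions for one word: separate features
# for the number of prior occurrences and for recency of last occurrence.
def self_trigger_features(word):
    def occurred_n_times(n):
        def f(history, w):
            return 1.0 if w == word and history.count(word) == n else 0.0
        return f
    def occurred_recently(window):
        def f(history, w):
            return 1.0 if w == word and word in history[-window:] else 0.0
        return f
    return [occurred_n_times(1), occurred_n_times(2), occurred_recently(50)]

feats = self_trigger_features("default")
history = ["the", "default", "rate"]
vals = [f(history, "default") for f in feats]   # [1.0, 0.0, 1.0]
```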
| { |
| "text": "The ML/ME model described above was trained on 5 million words of Wall Street Journal text, using DARPA's official \"20o\" vocabulary of some 20,000 words. A conventional trigram model was used as a baseline. The constraints used by the ML/ME model were: 18,400 unigram constraints, 240,000 bigram constraints, and 414,000 trigram constraints. One experiment was run with 36,000 trigger constraints (best 3 triggers for each word), and another with 65,000 trigger constraints (best 6 triggers per word). All models were trained on the same data, and evaluated on 325,000 words of independent data. The Maximum Entropy models were also interpolated with the conventional trigram, using yet unseen data for interpolation. Results are summarized in table 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": "9." |
| }, |
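The evaluation just described can be sketched as follows: test-set perplexity of a model, and linear interpolation of the ME model with the trigram using a weight that would be tuned on held-out data. All per-word probabilities and the 0.5 weight below are invented toy values.

```python
import math

def perplexity(probs):
    """Perplexity = 2 ** (average negative log2 probability per word)."""
    avg_log = sum(math.log2(p) for p in probs) / len(probs)
    return 2.0 ** -avg_log

def interpolate(p_me, p_tri, lam):
    return [lam * a + (1 - lam) * b for a, b in zip(p_me, p_tri)]

p_me = [0.010, 0.020, 0.001, 0.050]     # per-word probs from one model
p_tri = [0.008, 0.010, 0.004, 0.030]    # same words under another model
pp_me = perplexity(p_me)
pp_mix = perplexity(interpolate(p_me, p_tri, 0.5))
```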
| { |
| "text": "model | test-set perplexity | % improvement over baseline\ntrigram | 173 | --\nML/ME-top3 | 134 | 23%\n+trigram | 129 | 25%\nML/ME-top6 | 130 | 25%\n+trigram | 127 | 27%\nTable 1: Improvement of Maximum Likelihood / Maximum Entropy model over a conventional trigram model. Training is on 5 million words of WSJ text. Vocabulary is 20,000 words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 9, |
| "end": 168, |
| "text": "% improvement model Perplexity over baseline trigrarn 173 --ML/ME-top3 134 23% +trigram 129 25% MI_/ME-top6 130 25% 127 27% +trigram Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": "9." |
| }, |
| { |
| "text": "The trigger constraints used in this run were selected very crudely, and their number was not optimized. We believe much more improvement can be achieved. Special modeling of self triggers has not been implemented yet; we expect it, too, to yield further improvement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "RESULTS", |
| "sec_num": "9." |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We are grateful to Peter Brown, Stephen Della Pietra, Vincent Della Pietra and Bob Mercer for many suggestions and discussions. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ACKNOWLEDGEMENTS", |
| "sec_num": "10." |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Statistical Approach to Continuous Speech Recognition", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bahl", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1983, |
| "venue": "IEEE Trans. on PAMI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bahl, L., Jelinek, F., Mercer, R.L., \"A Statistical Approach to Continuous Speech Recognition,\" IEEE Trans. on PAMI, 1983.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Up From Trigrams!", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Eurospeech", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jelinek, F., \"Up From Trigrams!\" Eurospeech 1991.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A Dynamic Language Model for Speech Recognition", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Merialdo", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Strauss", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the Speech and Natural Language DARPA Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "293--295", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jelinek, F., Merialdo, B., Roukos, S., and Strauss, M., \"A Dynamic Language Model for Speech Recognition.\" Proceedings of the Speech and Natural Language DARPA Workshop, pp. 293-295, Feb. 1991.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Information Theory and Statistical Mechanics", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "T" |
| ], |
| "last": "Jaynes", |
| "suffix": "" |
| } |
| ], |
| "year": 1957, |
| "venue": "Phys. Rev", |
| "volume": "106", |
| "issue": "", |
| "pages": "620--630", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jaynes, E. T., \"Information Theory and Statistical Mechanics.\" Phys. Rev. 106, pp. 620-630, 1957.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Information Theory in Statistics", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kullback", |
| "suffix": "" |
| } |
| ], |
| "year": 1959, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kullback, S., Information Theory in Statistics. Wiley, New York, 1959.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Adaptive Statistical Language Modeling: a Maximum Entropy Approach", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rosenfeld, R., \"Adaptive Statistical Language Modeling: a Maximum Entropy Approach,\" Ph.D. Thesis Proposal, Carnegie Mellon University, September 1992.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Trigger-Based Language Models: a Maximum Entropy Approach", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Lau", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of ICASSP-93", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lau, R., Rosenfeld, R., Roukos, S., \"Trigger-Based Language Models: a Maximum Entropy Approach,\" Proceedings of ICASSP-93, April 1993.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Adaptive Language Modeling Using Minimum Discriminant Estimation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of lCASSP-92", |
| "volume": "", |
| "issue": "", |
| "pages": "1--633", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Della Pietra, S., Della Pietra, V., Mercer, R. L., Roukos, S., \"Adaptive Language Modeling Using Minimum Discriminant Estimation,\" Proceedings of ICASSP-92, pp. I-633-636, San Francisco, March 1992.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Generalized Iterative Scaling for Log-Linear Models", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "N" |
| ], |
| "last": "Darroch", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ratcliff", |
| "suffix": "" |
| } |
| ], |
| "year": 1972, |
| "venue": "The Annals of Mathematical Statistics", |
| "volume": "43", |
| "issue": "", |
| "pages": "1470--1480", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Darroch, J.N. and Ratcliff, D., \"Generalized Iterative Scaling for Log-Linear Models\", The Annals of Mathematical Statistics, Vol. 43, pp. 1470-1480, 1972.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Maximum Entropy Methods and Their Applications to Maximum Likelihood Parameter Estimation of Conditional Exponential Models", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nadas", |
| "suffix": "" |
| }, |
| { |
| "first": "Roukos", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "A forthcoming IBM technical report", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, P., Della Pietra, S., Della Pietra, V., Mercer, R., Nadas, A., and Roukos, S., \"Maximum Entropy Methods and Their Applications to Maximum Likelihood Parameter Estimation of Conditional Exponential Models,\" A forthcoming IBM technical report.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Maximum Entropy for Hypothesis Formulation, Especially for Multidimensional Contingency Tables", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [ |
| "J" |
| ], |
| "last": "Good", |
| "suffix": "" |
| } |
| ], |
| "year": 1963, |
| "venue": "Annals of Mathematical Statistics", |
| "volume": "34", |
| "issue": "", |
| "pages": "911--934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Good, I. J., \"Maximum Entropy for Hypothesis Formulation, Especially for Multidimensional Contingency Tables.\" Annals of Mathematical Statistics, Vol. 34, pp. 911-934, 1963.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The Population Frequencies of Species and the Estimation of Population Parameters", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [ |
| "J" |
| ], |
| "last": "Good", |
| "suffix": "" |
| } |
| ], |
| "year": 1953, |
| "venue": "Biometrika", |
| "volume": "40", |
| "issue": "3", |
| "pages": "237--264", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Good, I. J., \"The Population Frequencies of Species and the Estimation of Population Parameters.\" Biometrika, Vol. 40, no. 3, 4, pp. 237-264, 1953.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Improvements in Stochastic Language Modeling", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [ |
| "D" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the Speech and Natural Language DARPA Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rosenfeld, R., and Huang, X. D., \"Improvements in Stochastic Language Modeling.\" Proceedings of the Speech and Natural Language DARPA Workshop, Feb. 1992.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Probability of 'SHARES' as a function of the distance from the last occurrence of 'STOCK' in the same document. The middle horizontal line is the unconditional probability. The top (bottom) line is the probability of 'SHARES'", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Probability of 'WINTER' as a function of the number of times 'SUMMER' occurred before it in the same document. Horizontal lines are as in fig. 1.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "text": "Behavior of a self-trigger: Probability of 'DE-FAULT' as a function of the number of times it already occurred in the document. The horizontal line is the unconditional probability.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| } |
| } |
| } |
| } |