{
"paper_id": "J96-3007",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:16:41.544794Z"
},
"title": "A Chart Re-estimation Algorithm for a Probabilistic Recursive Transition Network",
"authors": [
{
"first": "Young",
"middle": [
"S"
],
"last": "Haw",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Key-Sun",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Suwon",
"location": {
"addrLine": "Suwon, Suwon, 440-600",
"postBox": "P.O. Box 77-78",
"country": "Korea"
}
},
"email": "kschoi@csking.kaist.ac.kr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A probabilistic recursive transition network (PRTN) is an elevated version of a recursive transition network, used to model and process context-free languages with stochastic parameters. We present a re-estimation algorithm for training the probabilistic parameters, and show how efficiently it can be implemented using charts. The complexity of the Outside algorithm we present is O(N^4 G^3), where N is the input size and G is the number of states. This complexity can be reduced significantly when redundant computations are avoided. Experiments on the Penn tree corpus show that re-estimation can be done more efficiently with charts.",
"pdf_parse": {
"paper_id": "J96-3007",
"_pdf_hash": "",
"abstract": [
{
"text": "A probabilistic recursive transition network (PRTN) is an elevated version of a recursive transition network, used to model and process context-free languages with stochastic parameters. We present a re-estimation algorithm for training the probabilistic parameters, and show how efficiently it can be implemented using charts. The complexity of the Outside algorithm we present is O(N^4 G^3), where N is the input size and G is the number of states. This complexity can be reduced significantly when redundant computations are avoided. Experiments on the Penn tree corpus show that re-estimation can be done more efficiently with charts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Though hidden Markov models have been successful in some applications such as corpus tagging, they are limited to the problems of regular languages. There have been attempts to associate probabilities with context-free grammar formalisms. Recently Briscoe and Carroll (1993) have reported work on generalized probabilistic LR parsing, and others have tried different formalisms such as LTAG (Schabes, Roth, and Osborne 1993) and Link grammar (Lafferty, Sleator, and Temperley 1992). Kupiec extended an SCFG that worked on CNF to a general CFG (Kupiec 1991). The re-estimation algorithm presented in this paper may be seen as another such version for a general CFG.",
"cite_spans": [
{
"start": 248,
"end": 274,
"text": "Briscoe and Carroll (1993)",
"ref_id": null
},
{
"start": 391,
"end": 424,
"text": "(Schabes, Roth, and Osborne 1993)",
"ref_id": null
},
{
"start": 442,
"end": 481,
"text": "(Lafferty, Sleator, and Temperley 1992)",
"ref_id": "BIBREF2"
},
{
"start": 543,
"end": 556,
"text": "(Kupiec 1991)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "One significant problem of most probabilistic approaches is the computational burden of estimating the parameters (Lari and Young 1990) . In this paper, we consider a probabilistic recursive transition network (PRTN) as an underlying grammar representation, and present an algorithm for training the probabilistic parameters, then suggest an improved version that works with reduced redundant computations. The key point is to save intermediate results and avoid the same computation later on. Moreover, the computation of Outside probabilities can be made only on the valid parse space once a chart is prepared.",
"cite_spans": [
{
"start": 114,
"end": 135,
"text": "(Lari and Young 1990)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A PRTN, denoted λ, is a 6-tuple: λ = (A, B, S, F, Γ, Δ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistic Recursive Transition Network",
"sec_num": "2."
},
{
"text": "[Figure 1 here: a small CFG with NP and AP rules and the corresponding PRTN, showing call states, terminal transitions with their probabilities, and pop (return) states; the network graphic is not recoverable as text.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistic Recursive Transition Network",
"sec_num": "2."
},
{
"text": "Illustration of PRTN. A parse is composed of dark-headed transitions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "A is a transition matrix containing transition probabilities, and B is a word matrix containing the probability distribution of the words observable at each terminal transition. Γ specifies the types of transitions, and Δ represents a stack. S and F denote the start and final states, respectively. Stack operations are associated with transitions; transitions are classified into three types according to the stack operation. The first type is the nonterminal transition, in which a state identification is pushed onto the stack. The second type is the pop transition, in which the transition is determined by the content of the stack. The third type comprises transitions not committed to any stack operation; these are the terminal and empty transitions. In general, a grammar expressed as a PRTN consists of layers. A layer is a fragment of the network that corresponds to a nonterminal. A table of the probability distribution of words is defined at each terminal transition. Pop transitions represent the return of a layer to one of its (possibly multiple) higher layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "In this paper, parses are assumed to be sequences of dark-headed transitions (see Figure 1 ). States at which pop transitions are defined are called pop states. Other notations are listed below. first(l) returns the first state of layer l. last(l) returns the last state of layer l. layer(s) returns the layer state s belongs to. bout(l) returns the states from which layer l branches out. bin(l) returns the states to which layer l returns. terminal(l) returns the set of terminal edges in layer l. nonterminal(l) returns the set of nonterminal edges in layer l. ij denotes the edge between states i and j.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "[i,j] denotes the network segment between states i and j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "Wa~b is the word sequence covering the a-th to the b-th word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 1",
"sec_num": null
},
{
"text": "The task of a re-estimation algorithm is to assign probabilities to the transitions and to the word symbols defined at each terminal transition. The Inside-Outside algorithm provides a formal basis for estimating the parameters of context-free languages so that the probabilities of the word sequences (sample sentences) are maximized. The re-estimation algorithm for PRTN uses a variation of the Inside-Outside algorithm customized for PRTN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-estimation Algorithm",
"sec_num": "3."
},
{
"text": "Let a word sequence of length N be denoted by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-estimation Algorithm",
"sec_num": "3."
},
{
"text": "W = W1 W2 ... WN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-estimation Algorithm",
"sec_num": "3."
},
{
"text": "Now define the Inside probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-estimation Algorithm",
"sec_num": "3."
},
{
"text": "The Inside probability of state i, denoted PI(i)s~t, is the probability that layer(i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "generates the string positioned from s to t, starting at state i, given a model λ.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "That is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "PI(i)s~t = P([i,e] → Ws~t | λ)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "where e = last(layer(i)). And by definition:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "PI(i)s~t = Σk aik b(ik, Ws) PI(k)s+1~t + Σj Σr=s..t aij auv PI(j)s~r PI(v)r+1~t   (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "where ik ∈ terminal(layer(i)), ij ∈ nonterminal(layer(i)), u = last(layer(j)), v ∈ bin(layer(j)), and layer(i) = layer(v).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "After the last word is generated, the last state of layer(i) should be reached. Figure 2 is a pictorial view of the Inside probability. A valid sequence can begin only at state S; thus, to be strict, PI(S) has an additional factor, P(S). When the immediate transition ij is of terminal type, the transition probability aij and the probability b(ij, Ws) of the s-th word at the transition are multiplied together with the Inside probability of the rest of the sequence, Ws+1~t.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 88,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "PI(i)t+1~t = 1 if i = last(layer(i)), 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
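As a rough illustration (not the paper's own code), the Inside recursion of equation 1 can be sketched on a toy PRTN. All names and probabilities below are invented for the example: layer S calls layer NP and stops; layer NP reads a determiner and then a noun.

```python
from functools import lru_cache

LAST_STATES = {"S1", "N2"}                      # last(layer) states
TERMINAL = {                                    # (i, k): (a_ik, word distribution b)
    ("N0", "N1"): (1.0, {"the": 1.0}),
    ("N1", "N2"): (1.0, {"dog": 0.7, "cat": 0.3}),
}
NONTERMINAL = {                                 # (i, j): (a_ij, pop prob a_uv, return state v)
    ("S0", "N0"): (1.0, 1.0, "S1"),
}

@lru_cache(maxsize=None)
def inside(i, s, t, words):
    """PI(i)s~t of equation 1: probability that layer(i) generates
    words[s..t] starting at state i, memoised on (i, s, t, words)."""
    if s > t:                                   # boundary case of the text
        return 1.0 if i in LAST_STATES else 0.0
    total = 0.0
    for (p, k), (a, b) in TERMINAL.items():
        if p == i:                              # terminal edge i -> k reads words[s]
            total += a * b.get(words[s], 0.0) * inside(k, s + 1, t, words)
    for (p, j), (a_call, a_pop, v) in NONTERMINAL.items():
        if p == i:                              # call layer(j), pop back to v
            for r in range(s, t + 1):
                total += a_call * a_pop * inside(j, s, r, words) * inside(v, r + 1, t, words)
    return total

print(inside("S0", 0, 1, ("the", "dog")))       # 0.7 for this toy model
```

The memoisation mirrors the chart idea of Section 4: each (i, s, t) triple is computed at most once.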
{
"text": "Now define the Outside probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 1",
"sec_num": null
},
{
"text": "The Outside probability, denoted PO(i,j)s~t, is the probability that the partial sequences W1~s-1 and Wt+1~N are generated, provided that the partial sequence Ws~t is generated by [i,j], given a model λ.",
"cite_spans": [
{
"start": 182,
"end": 187,
"text": "[i,j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 2",
"sec_num": null
},
{
"text": "PO(i,j)s~t = P([S,i] → W1~s-1, [j,F] → Wt+1~N | λ) = Σx Σa=1..s Σb=t..N axf aey PI(f,i)a~s-1 PI(j)t+1~b PO(x,y)a~b   (2), where x ∈ bout(layer(i)), y ∈ bin(layer(i)), f = first(layer(i)), e = last(layer(i)), layer(i) = layer(j), and layer(x) = layer(y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "The summation on x is defined only when a ≠ 1 or b ≠ N (i.e., there are words left to be generated). The nonterminal and its corresponding pop transition probabilities are taken to be 1 when a = 1 and b = N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "For a boundary case of the Outside probability where f is the first state of a layer in the above equation: Figure 3 shows the network configuration in computing the Outside probability. In equation 2, PI(f,i)a~s-1 is the probability that the sequence Wa~s-1 is generated by layer(i)",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 116,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "PO(i,j)1~N = 1 if f = S, 0 otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "to the left of state i, and PI(j)t+1~b is the probability that the sequence Wt+1~b is generated by layer(i) to the right of state j. The computation of PI(f,i)s~t, a slight variation of the Inside probability in which the PI(f)a~b's in equation 1 are replaced by PI(f,i)a~b's, is done as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "PI(f,i)s~t = PI(f)s~t if s ≤ t; 1 if s > t and f = i; 0 if s > t and f ≠ i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "It is basically the same as the Inside probability except that it carries an i that indicates a stop state. Now we can derive the re-estimation algorithm for A and B using the Inside and Outside probabilities. As the result of a constrained maximization of Baum's auxiliary function, we have the following form of re-estimation for each transition (Rabiner 1989).",
"cite_spans": [
{
"start": 379,
"end": 393,
"text": "(Rabiner 1989)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "āij = (expected number of transitions from state i to state j) / (expected number of transitions from state i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
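The count-ratio update above can be sketched directly. This is a minimal, illustrative version: the state names and the expected counts are made-up numbers standing in for expectations accumulated by the Inside-Outside passes.

```python
from collections import defaultdict

def reestimate(expected):
    """a-bar(ij) = E[# transitions i->j] / E[# transitions out of i]."""
    out_mass = defaultdict(float)
    for (i, j), c in expected.items():
        out_mass[i] += c                    # expected transitions leaving i
    return {(i, j): c / out_mass[i] for (i, j), c in expected.items()}

counts = {("q0", "q1"): 3.0, ("q0", "q2"): 1.0, ("q1", "q1"): 2.0}
print(reestimate(counts)[("q0", "q1")])     # 0.75
```

Each row of the result sums to 1, so the update always yields a proper transition distribution per state.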
{
"text": "The expectation of each transition type is computed as follows. For a terminal transition: where u ∈ bout(layer(i)), j ∈ bin(layer(i)), v = first(layer(i)), layer(u) = layer(j), layer(v) = layer(i), and uv is a nonterminal transition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "Et(ij) = Σs=1..N aij b(ij, Ws) PO(i,j)s~s / P(W | λ)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "Since transitions of terminal and nonterminal types can occur together at a state, terminal transitions are estimated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "āij = Et(ij) / (Σk Et(ik) + Σk Ent(ik))",
"eq_num": "(3)"
}
],
"section": "And by definition:",
"sec_num": null
},
{
"text": "For nonterminal transitions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "āij = Ent(ij) / (Σk Et(ik) + Σk Ent(ik))",
"eq_num": "(4)"
}
],
"section": "And by definition:",
"sec_num": null
},
{
"text": "And for pop transitions, notice that only pop transitions are possible at a pop state. The re-estimation continues until the probability of the word sequences reaches a certain stability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "And by definition:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "āij = Epop(ij) / Σk Epop(ik)",
"eq_num": "(5)"
}
],
"section": "And by definition:",
"sec_num": null
},
{
"text": "It can be shown that the complexity of the Inside algorithm is O(N^3 G^3) and that of the Outside algorithm is O(N^4 G^3), where N is the input size and G is the number of states. This complexity is too much for current workstations when either N or G grows beyond a few tens. A basic implementation of the algorithm is to use a chart and avoid doing the same computations more than once. For instance, the table for storing Inside computations takes O(N^2 G^2 C) storage, where C is the number of terminal and nonterminal categories. A chart item is a function of five parameters, and returns an Inside probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Chart Re-estimation Algorithm",
"sec_num": "4."
},
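Why a chart pays off can be seen on a recursion with the same split-point structure as the Inside computation: counting the binary bracketings of a span. The function names and the call-counting dictionary below are illustrative, not from the paper.

```python
from functools import lru_cache

calls = {"plain": 0, "chart": 0}

def splits_plain(s, t):
    """Number of binary bracketings of span [s, t], no chart."""
    calls["plain"] += 1
    if t - s <= 1:
        return 1
    return sum(splits_plain(s, r) * splits_plain(r, t) for r in range(s + 1, t))

@lru_cache(maxsize=None)          # the "chart": each span computed once
def splits_chart(s, t):
    calls["chart"] += 1
    if t - s <= 1:
        return 1
    return sum(splits_chart(s, r) * splits_chart(r, t) for r in range(s + 1, t))

assert splits_plain(0, 8) == splits_chart(0, 8) == 429   # Catalan(7)
print(calls)   # the chart version makes far fewer calls
```

The plain version revisits the same (s, t) spans exponentially often; the memoised version touches each span once, which is the saving the chart re-estimation algorithm exploits.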
{
"text": "A chart item is associated with categories, implying that the item is valid on the specified categories that begin the net fragment of the item. Suppose a net fragment [i,j] begins with NP and ADJP; then, given a sentence fragment Ws~t, ADJP may not participate in generating Ws~t, while NP may. The information about valid categories is useful when the chart is used in computing Outside probabilities.",
"cite_spans": [
{
"start": 167,
"end": 172,
"text": "[i,j]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
{
"text": "An Outside probability is the result of computing many Inside probabilities. Computing an Inside probability even in an application of moderate size can be impractical. A naive implementation of the Outside computation takes numerous Inside computations, so estimating even a single parameter will not be realistic on a serial workstation (Lari and Young 1990).",
"cite_spans": [
{
"start": 328,
"end": 349,
"text": "(Lari and Young 1990)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
{
"text": "The proposed estimation algorithm aims at reducing the redundant Inside computations involved in computing an Outside probability. The idea is to identify the Inside probabilities used in generating an input sentence and to compute an Outside probability mainly from those Insides. This is done first by computing the Inside probability of the input sentence, which can return a table of the Insides used in the computation. Note that the Insides at the deepest depth are produced first as the recursion unwinds; thus there can be many Insides that are not relevant to the given sentence. The Insides that participate in generating the input sentence can be identified by running the Inside algorithm one more time, top-down. Figure 4 illustrates the steps of the revised Outside computation.",
"cite_spans": [],
"ref_spans": [
{
"start": 718,
"end": 726,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
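The top-down selection pass can be sketched as a reachability walk over the bottom-up chart: starting from the goal item, keep only the items actually used, discarding Insides that never contribute to the input sentence. The item names and the `children` callback below are invented for illustration.

```python
def select_valid(chart, goal, children):
    """chart: set of items; children(item): sub-items used to build it.
    Returns the subset of chart items reachable from `goal`."""
    valid, stack = set(), [goal]
    while stack:
        item = stack.pop()
        if item in valid or item not in chart:
            continue
        valid.add(item)
        stack.extend(children(item))
    return valid

# Toy chart: "S" was built from "NP" and "VP"; "X" was produced
# bottom-up but is unreachable from the goal and gets dropped.
deps = {"S": ["NP", "VP"], "NP": [], "VP": [], "X": []}
print(select_valid(set(deps), "S", deps.get))
```

An iterative stack is used rather than recursion so deep charts cannot overflow the call stack.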
{
"text": "The identified Insides, however, do not cover all the Insides needed in computing an Outside probability. This is because the Inside algorithm works on a network from left to right and one transition at a time. Many Insides that are missed in the table are compositions of smaller Insides.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
{
"text": "Once charts of selected Insides are prepared, an Outside probability is computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
{
"text": "PO(i,j)s~t = Σ{x : I(x,y,a,b,c) > 0} Σ{(a,b) ∈ σ(f,e,s,t)} axf aey I(f,i,a,s-1,c) I(j,e,t+1,b,c) PO(x,y)a~b, where x ∈ bout(layer(i)), y ∈ bin(layer(i)), f = first(layer(i)), e = last(layer(i)), c ∈ {nonterminal}, layer(i) = layer(j), and layer(x) = layer(y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
{
"text": "The function σ(f,e,s,t) returns the set of (a,b) pairs for which there are Inside items I(f,e,a,b) defined in the chart such that a ≤ s and b ≥ t. In short, the items for state f indicate the possible combinations of sentence segments inclusive of the given fragment Ws~t, because the chart contains items for all the valid sentence segments that were generated through the layer [f,e]. When the current layer [f,e] is completed with the two Insides computed, the computation extends to the Outside.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
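A minimal, illustrative version of the σ lookup, assuming chart items are stored as (f, e, a, b) tuples (the item layout and state names are assumptions for the example):

```python
def sigma(items, f, e, s, t):
    """(a, b) spans from chart items I(f, e, a, b) with a <= s and
    b >= t, i.e. spans that include the given fragment W s~t."""
    return {(a, b) for (i, j, a, b) in items
            if i == f and j == e and a <= s and b >= t}

items = {("f0", "e0", 1, 6), ("f0", "e0", 3, 4), ("f1", "e0", 1, 6)}
print(sigma(items, "f0", "e0", 2, 4))   # {(1, 6)}: the only span containing W 2~4
```

The span (3, 4) is filtered out because it does not contain position 2, and (f1, ...) because it belongs to a different layer start.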
{
"text": "Useless advancement into higher layers that do not lead to the successful completion of a given sentence can be avoided by making sure that [x,y] generates Wa~b and that the category of the current layer c is defined, which can be checked by consulting the chart items for state x.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "I(i,j, s, t, c) = Pi(i,j)s~t.",
"sec_num": null
},
{
"text": "The goal of our experiments is to see how much saving the new estimation algorithm achieves in computational cost. Out of 14,132 Wall Street Journal trees of the Penn tree corpus, the 1,543 trees corresponding to sentences with 10 words or fewer were chosen, and the programs, written in the C language, were run on a Sparc10 workstation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "The basic implementation of an Inside-Outside algorithm assumes tables for Insides and Outsides so that identical Insides and Outsides need not be recomputed. A chart Outside or re-estimation algorithm assumes a refined table of Insides that contains only valid Insides used in generating the input sentence as discussed earlier, and Outside computation is done based on the refined table.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "The improvement from the chart re-estimation algorithm is measured in the number of actual Inside and Outside computations done to estimate the parameters. Figure 5 shows the average counts of Insides used in estimating 50 trees randomly selected from the 1,543 samples. Before the re-estimation algorithm is applied, an RTN that faithfully encodes the input trees without any overgeneration is constructed from the 50 trees. The gain in Insides from the chart re-estimation algorithm is very clear, and in the case of Outsides the gain is even more conspicuous (see Figure 6 ). The number of Insides counted in the chart version also includes the Insides computed in preparing the chart.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 164,
"text": "Figure 5",
"ref_id": null
},
{
"start": 563,
"end": 571,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "We have presented an efficient re-estimation algorithm for a PRTN that makes use of only valid Insides. The method requires the preparation of a chart by running the Inside computation twice over the whole sentence. The suggested method focuses mainly on reducing the computational overhead of the Outside computation, which is the major portion of a re-estimation. The computation of an Inside probability may be improved further using a technique similar to the one introduced in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "Gain in Outsides using chart re-estimation with the same 50 sentences as in Figure 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 84,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Generalized probabilistic LR parsing of natural language (Corpora) with unification-based grammars",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "25--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Briscoe, Ted, and John Carroll. 1993. Generalized probabilistic LR parsing of natural language (Corpora) with unification-based grammars. Computational Linguistics 19(1): 25-57.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A trellis-based algorithm for estimating the parameters of a hidden stochastic context-free grammar",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
}
],
"year": 1991,
"venue": "Proceedings of the Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "241--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kupiec, Julian. 1991. A trellis-based algorithm for estimating the parameters of a hidden stochastic context-free grammar. In Proceedings of the Speech and Natural Language Workshop, pages 241-246, DARPA, Pacific Grove.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Grammatical trigrams: A probabilistic model of link grammar",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Sleator",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Temperley",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty, John, Daniel Sleator, and Davy Temperley. 1992. Grammatical trigrams: A probabilistic model of link grammar.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "AAAI Fall Symposium Series: Probabilistic Approaches to Natural Language",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AAAI Fall Symposium Series: Probabilistic Approaches to Natural Language, pages 89-97, Cambridge.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The estimation of stochastic context-free grammars using the Inside-Outside algorithm",
"authors": [
{
"first": "K",
"middle": [],
"last": "Lari",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Young",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Speech and Language",
"volume": "4",
"issue": "",
"pages": "35--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lari, K. and S. J. Young. 1990. The estimation of stochastic context-free grammars using the Inside-Outside algorithm. Computer Speech and Language 4:35-56.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A tutorial on hidden Markov models and selected applications in speech recognition",
"authors": [
{
"first": "Lawrence",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner, Lawrence R. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. In",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "PO(i,j)s~t = P([S,i] → W1~s-1, [j,F] → Wt+1~N | λ) = Σx Σa=1..s Σb=t..N axf aey PI(f,i)a~s-1 PI(j)t+1~b PO(x,y)a~b"
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "u = last(layer(j)), v ∈ bin(layer(j)), layer(i) = layer(v), layer(j) = layer(u), and uv is a pop transition. For a pop transition: Epop(ij) = Σs=1..N Σt=s..N auv PI(v)s~t aij PO(u,j)s~t / P(W | λ)"
},
"FIGREF2": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "For a transition ij and a word symbol w: b̄(ij, w) = Σ{t : Wt = w} aij b(ij, Wt) PO(i,j)t~t / Σt=1..N aij b(ij, Wt) PO(i,j)t~t"
},
"FIGREF3": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Gain in Insides using chart re-estimation, showing 50 randomly-chosen sentences out of the 1,543 samples"
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>sentence W</td><td/></tr><tr><td/><td>T</td></tr><tr><td colspan=\"2\">Compute Inside probability 1 ofW [</td></tr><tr><td/><td>Inside Table</td></tr><tr><td>Select valid Insides by running Inside algorithm topdown</td><td>[~</td></tr><tr><td/><td>of</td></tr><tr><td/><td>valid Insides</td></tr><tr><td>Run Outside algorithm</td><td>] I</td></tr><tr><td>Figure 4</td><td/></tr></table>",
"text": "Outside computation with chart. Inside computation builds a table of computed Insides."
}
}
}
}