{
"paper_id": "N07-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:48:14.207599Z"
},
"title": "Worst-Case Synchronous Grammar Rules",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Rochester Rochester",
"location": {
"postCode": "14627",
"region": "NY"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We relate the problem of finding the best application of a Synchronous Context-Free Grammar (SCFG) rule during parsing to a Markov Random Field. This representation allows us to use the theory of expander graphs to show that the complexity of SCFG parsing of an input sentence of length N is \u2126(N^{cn}), for a grammar with maximum rule length n and some constant c. This improves on the previous best result of \u2126(N^{c\u221an}).",
"pdf_parse": {
"paper_id": "N07-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "We relate the problem of finding the best application of a Synchronous Context-Free Grammar (SCFG) rule during parsing to a Markov Random Field. This representation allows us to use the theory of expander graphs to show that the complexity of SCFG parsing of an input sentence of length N is \u2126(N^{cn}), for a grammar with maximum rule length n and some constant c. This improves on the previous best result of \u2126(N^{c\u221an}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent interest in syntax-based methods for statistical machine translation has led to work on parsing algorithms for synchronous context-free grammars (SCFGs). Generally, parsing complexity depends on the length of the longest rule in the grammar, but the exact nature of this relationship has only recently begun to be explored. It has been known since the early days of automata theory (Aho and Ullman, 1972) that the languages of string pairs generated by a synchronous grammar can be arranged in an infinite hierarchy, with each rule size \u2265 4 producing languages not possible with grammars restricted to smaller rules. For any grammar with maximum rule size n, a fairly straightforward dynamic programming strategy yields an O(N^{n+4}) algorithm for parsing sentences of length N. However, this is often not the best achievable complexity, and the exact bounds of the best possible algorithms are not known. Satta and Peserico (2005) showed that a permutation can be defined for any length n such that tabular parsing strategies must take at least \u2126(N^{c\u221an}) time; that is, the exponent of the algorithm is proportional to the square root of the rule length.",
"cite_spans": [
{
"start": 390,
"end": 412,
"text": "(Aho and Ullman, 1972)",
"ref_id": "BIBREF0"
},
{
"start": 914,
"end": 939,
"text": "Satta and Peserico (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we improve this result, showing that in the worst case the exponent grows linearly with the rule length. Using a probabilistic argument, we show that the number of easily parsable permutations grows slowly enough that most permutations must be difficult, where by difficult we mean that the exponent in the complexity is greater than a constant factor times the rule length. Thus, not only do there exist permutations with complexity higher than the square-root case of Satta and Peserico (2005), but in fact the probability that a randomly chosen permutation has higher complexity approaches one as the rule length grows. Our approach is to first relate the problem of finding an efficient parsing algorithm to finding the treewidth of a graph derived from the SCFG rule's permutation. We then show that these graphs are expander graphs, which in turn means that the treewidth grows linearly with the graph size.",
"cite_spans": [
{
"start": 490,
"end": 515,
"text": "Satta and Peserico (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We write SCFG rules as productions with one lefthand side nonterminal and two righthand side strings. Nonterminals in the two strings are linked with superscript indices; symbols with the same index must be further rewritten synchronously. For example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "X \u2192 A^{(1)} B^{(2)} C^{(3)} D^{(4)} , A^{(1)} B^{(2)} C^{(3)} D^{(4)}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "(1) is a rule with four children and no reordering, while",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X \u2192 A^{(1)} B^{(2)} C^{(3)} D^{(4)} , B^{(2)} D^{(4)} A^{(1)} C^{(3)}",
"eq_num": "(2)"
}
],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "Algorithm 1 BottomUpParser(grammar G, input strings e, f )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "for x_0, x_n such that 1 < x_0 < x_n < |e|, in increasing order of x_n \u2212 x_0 do\n  for y_0, y_n such that 1 < y_0 < y_n < |f|, in increasing order of y_n \u2212 y_0 do\n    for rules R of form X \u2192 X_1^{(1)} ... X_n^{(n)} , X_{\u03c0(1)}^{(\u03c0(1))} ... X_{\u03c0(n)}^{(\u03c0(n))} in G do\n      p = P(R) max_{x_1..x_{n\u22121}, y_1..y_{n\u22121}} \u220f_i \u03b4(X_i, x_{i\u22121}, x_i, y_{\u03c0(i)\u22121}, y_{\u03c0(i)})\n      \u03b4(X, x_0, x_n, y_0, y_n) = max{\u03b4(X, x_0, x_n, y_0, y_n), p}\n    end for\n  end for\nend for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "Rule (2) expresses a more complex reordering. In general, we can take indices in the first grammar dimension to be consecutive, and associate a permutation \u03c0 with the second dimension. If we use X_i for 0 \u2264 i \u2264 n as a set of variables over nonterminal symbols (for example, X_1 and X_2 may both stand for nonterminal A), we can write rules in the general form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "X_0 \u2192 X_1^{(1)} ... X_n^{(n)} , X_{\u03c0(1)}^{(\u03c0(1))} ... X_{\u03c0(n)}^{(\u03c0(n))}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "Grammar rules also contain terminal symbols, but as their position does not affect parsing complexity, we focus on nonterminals and their associated permutation \u03c0 in the remainder of the paper. In a probabilistic grammar, each rule R has an associated probability P (R). The synchronous parsing problem consists of finding the tree covering both strings having the maximum product of rule probabilities. 1 We assume synchronous parsing is done by storing a dynamic programming table of recognized nonterminals, as outlined in Algorithm 1. We refer to a dynamic programming item for a given nonterminal with specified boundaries in each language as a cell. The algorithm computes cells by maximizing over boundary variables x_i and y_i, which range over positions in the two input strings, and specify beginning and end points for the SCFG rule's child nonterminals.",
"cite_spans": [
{
"start": 404,
"end": 405,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "The maximization in the inner loop of Algorithm 1 is the most expensive part of the procedure, as it would take O(N^{2n\u22122}) with exhaustive search; making this step more efficient is our focus in this paper. The maximization can be done with further dynamic programming, storing partial results which contain some subset of an SCFG rule's righthand side nonterminals that have been recognized. A parsing strategy for a specific SCFG rule consists of an order in which these subsets should be combined, until all the rule's children have been recognized. The complexity of an individual parsing step depends on the number of free boundary variables, each of which can take O(N) values. It is often helpful to visualize parsing strategies on the permutation matrix corresponding to a rule's permutation \u03c0. Figure 1 shows the permutation matrix of rule (2) with a three-step parsing strategy. Each panel shows one combination step along with the projections of the partial results in each dimension; the endpoints of these projections correspond to free boundary variables. The second step has the highest number of distinct endpoints, five in the vertical dimension and three horizontally, meaning parsing can be done in time O(N^8).",
"cite_spans": [],
"ref_spans": [
{
"start": 804,
"end": 812,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
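The endpoint counting illustrated by Figure 1 can be sketched in a few lines of Python (our own illustration, not code from the paper; function names are ours): a combination step is charged one boundary variable per distinct span endpoint in either dimension, so a step with k free endpoints costs O(N^k).

```python
def spans(cells):
    # Maximal runs of consecutive child indices i..j project to the
    # boundary pair (i-1, j).
    cells = sorted(cells)
    out, lo, prev = [], cells[0], cells[0]
    for c in cells[1:]:
        if c != prev + 1:
            out.append((lo - 1, prev))
            lo = c
        prev = c
    out.append((lo - 1, prev))
    return out

def step_exponent(sub1, sub2, perm):
    # Count the distinct x- and y-boundary variables touched when combining
    # the partial results sub1 and sub2 (sets of 1-indexed children).
    endpoints = set()
    for sub in (sub1, sub2):
        for lo, hi in spans(sub):                         # x-projection
            endpoints.update({('x', lo), ('x', hi)})
        for lo, hi in spans({perm[i - 1] for i in sub}):  # y-projection
            endpoints.update({('y', lo), ('y', hi)})
    return len(endpoints)

perm = (2, 4, 1, 3)  # the permutation of rule (2)
```

For the strategy of Figure 1, the three steps give exponents 7, 8, and 7, consistent with the O(N^8) figure quoted above.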
{
"text": "As an example of the impact that the choice of parsing strategy can make, Figure 2 shows a permutation for which a clever ordering of partial results enables parsing in time O(N^{10}) in the length of the input strings. Permutations having this pattern of diagonal stripes can be parsed using this strategy in time O(N^{10}) regardless of the length n of the SCFG rule, whereas a na\u00efve strategy proceeding from left to right in either input string would take time O(N^{n+3}).",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synchronous Parsing Strategies",
"sec_num": "2"
},
{
"text": "In this section, we connect the maximization of probabilities for a cell to the Markov Random Field Figure 1 : The tree on the left defines a three-step parsing strategy for rule (2). In each step, the two subsets of nonterminals in the inner marked spans are combined into a new chart item with the outer spans. The intersection of the outer spans, shaded, has now been processed. Tic marks indicate distinct endpoints of the spans being combined, corresponding to the free boundary variables.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "[Figure 1 graphics: the recursive partition {A, B, C, D} \u2192 {A, B, C}, {D}; {A, B}, {C}; {A}, {B}; and three permutation-matrix panels for children A, B, C, D over boundaries x_0..x_4 and y_0..y_4]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "(MRF) representation, which will later allow us to use algorithms and complexity results based on the graphical structure of MRFs. A Markov Random Field is defined as a probability distribution 2 over a set of variables x that can be written as a product of factors f i that are functions of various subsets x i of x. The probability of an SCFG rule instance computed by Algorithm 1 can be written in this functional form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "\u03b4_R(x) = P(R) \u220f_i f_i(x_i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "x = {x_i, y_i} for 0 \u2264 i \u2264 n, and x_i = {x_{i\u22121}, x_i, y_{\u03c0(i)\u22121}, y_{\u03c0(i)}}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "and the MRF has one factor f i for each child nonterminal X i in the grammar rule R. The factor's value is the probability of the child nonterminal, which can be expressed as a function of its four boundaries:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "f_i(x_i) = \u03b4(X_i, x_{i\u22121}, x_i, y_{\u03c0(i)\u22121}, y_{\u03c0(i)})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "For reasons that are explained in the following section, we augment our Markov Random Fields with a dummy factor for the completed parent nonterminal's chart item. Thus there is one dummy factor d for each grammar rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "d(x_0, x_n, y_0, y_n) = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "expressed as a function of the four outer boundary variables of the completed rule, but with a constant Figure 2 : A parsing strategy maintaining two spans in each dimension is O(N^{10}) for any length permutation of this general form.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 112,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "value of 1 so as not to change the probabilities computed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "Thus an SCFG rule with n child nonterminals always results in a Markov Random Field with 2n + 2 variables and n + 1 factors, with each factor a function of exactly four variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
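This count is easy to verify mechanically for rule (2) (a sketch under our own naming; `mrf_scopes` is not from the paper):

```python
def mrf_scopes(perm):
    """Factor scopes for an SCFG rule with 1-indexed permutation perm.

    Child factor f_i depends on (x_{i-1}, x_i, y_{perm(i)-1}, y_{perm(i)});
    the dummy factor d depends on the four outer boundaries."""
    n = len(perm)
    scopes = [frozenset({('x', i - 1), ('x', i),
                         ('y', perm[i - 1] - 1), ('y', perm[i - 1])})
              for i in range(1, n + 1)]
    scopes.append(frozenset({('x', 0), ('x', n), ('y', 0), ('y', n)}))  # d
    return scopes

scopes = mrf_scopes((2, 4, 1, 3))   # rule (2)
variables = set().union(*scopes)    # all boundary variables
```

With n = 4 this yields n + 1 = 5 factors of four variables each over 2n + 2 = 10 variables, as stated.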
{
"text": "Markov Random Fields are often represented as graphs. A factor graph representation has a node for each variable and factor, with an edge connecting each factor to the variables it depends on. An example for rule (2) is shown in Figure 3 , with round nodes for variables, square nodes for factors, and a diamond for the special dummy factor.",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 3",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Markov Random Fields for Cells",
"sec_num": "2.1"
},
{
"text": "Efficient computation on Markov Random Fields is performed by first transforming the MRF into a junction tree (Jensen et al., 1990; Shafer and Shenoy, 1990) , and then applying the standard message-passing algorithm for graphical models over this tree structure. The complexity of the message passing algorithm depends on the structure of the junction tree, which in turn depends on the graph structure of the original MRF. A junction tree can be constructed from a Markov Random Field by the following three steps:",
"cite_spans": [
{
"start": 110,
"end": 131,
"text": "(Jensen et al., 1990;",
"ref_id": "BIBREF5"
},
{
"start": 132,
"end": 156,
"text": "Shafer and Shenoy, 1990)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "[Figure 3 graphics: factor graph with dummy factor d, factor nodes f_1..f_4, and variable nodes x_0..x_4 and y_0..y_4]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "\u2022 Connect all variable nodes that share a factor, and remove factor nodes. This results in the graphs shown in Figure 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "\u2022 Choose a triangulation of the resulting graph, by adding chords to any cycle of length greater than three.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "\u2022 Decompose the triangulated graph into a tree of cliques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
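These three steps can be prototyped with a greedy min-fill elimination (a standard heuristic; this sketch and its names are ours, and greedy elimination only upper-bounds the optimal cluster size):

```python
from itertools import combinations

def moral_graph(scopes):
    """Connect all variables sharing a factor (step 1)."""
    adj = {v: set() for s in scopes for v in s}
    for s in scopes:
        for u, v in combinations(s, 2):
            adj[u].add(v); adj[v].add(u)
    return adj

def min_fill_clusters(adj):
    """Greedy elimination: triangulate and collect cliques (steps 2-3)."""
    adj = {v: set(nb) for v, nb in adj.items()}
    clusters = []
    while adj:
        # eliminate the vertex whose neighbourhood needs fewest fill edges
        def fill(v):
            return sum(1 for u, w in combinations(adj[v], 2) if w not in adj[u])
        v = min(adj, key=fill)
        nbrs = adj[v]
        clusters.append({v} | nbrs)
        for u, w in combinations(nbrs, 2):    # add chords
            adj[u].add(w); adj[w].add(u)
        for u in nbrs:
            adj[u].discard(v)
        del adj[v]
    return clusters

def scfg_scopes(perm):
    n = len(perm)
    s = [{('x', i - 1), ('x', i), ('y', perm[i - 1] - 1), ('y', perm[i - 1])}
         for i in range(1, n + 1)]
    s.append({('x', 0), ('x', n), ('y', 0), ('y', n)})  # dummy factor
    return s

clusters = min_fill_clusters(moral_graph(scfg_scopes((2, 4, 1, 3))))
width_plus_one = max(len(c) for c in clusters)
```

Each elimination clique contains the scope of every factor whose first-eliminated variable it absorbs, which is what makes the resulting cliques usable as junction tree clusters.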
{
"text": "We call nodes in the resulting tree, corresponding to cliques in the triangulated graph, clusters. Each cluster has a potential function, which is a function of the variables in the cluster. For each factor in the original MRF, the junction tree will have at least one cluster containing all of the variables on which the factor is defined. Each factor is associated with one such cluster, and the cluster's potential function is set to be the product of its factors, for all combinations of variable values. Triangulation ensures that the resulting tree satisfies the junction tree property, which states that for any two clusters containing the same variable x, all nodes on the path connecting the clusters also contain x. A junction tree derived from the MRF of Figure 3 is shown in Figure 5 . The message-passing algorithm for graphical models can be applied to the junction tree. The algo-",
"cite_spans": [],
"ref_spans": [
{
"start": 766,
"end": 774,
"text": "Figure 3",
"ref_id": "FIGREF0"
},
{
"start": 787,
"end": 795,
"text": "Figure 5",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "[Figure 4 graphics: two graphs over variables x_0..x_4 and y_0..y_4, one per permutation]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "Figure 4: The graphs resulting from connecting all interacting variables for the identity permutation (1, 2, 3, 4) (top) and the (2, 4, 1, 3) permutation of rule (2) (bottom).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "rithm works from the leaves of the tree inward, alternately multiplying in potential functions and maximizing over variables that are no longer needed, effectively distributing the max and product operators so as to minimize the interaction between variables. The complexity of the message passing is O(nN^k), where the junction tree contains O(n) clusters, k is the maximum cluster size, and each variable in a cluster can take N values. However, the standard algorithm assumes that the factor functions are predefined as part of the input. In our case, the factor functions themselves depend on message-passing calculations from other grammar rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "f_i(x_i) = \u03b4(X_i, x_{i\u22121}, x_i, y_{\u03c0(i)\u22121}, y_{\u03c0(i)}) = max_{R\u2032: X_i \u2192 \u03b1, \u03b2} P(R\u2032) max_{x\u2032: x\u2032_0 = x_{i\u22121}, x\u2032_{n\u2032} = x_i, y\u2032_0 = y_{\u03c0(i)\u22121}, y\u2032_{n\u2032} = y_{\u03c0(i)}} \u03b4_{R\u2032}(x\u2032) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "We must modify the standard algorithm in order to interleave computation among the junction trees corresponding to the various rules in the grammar, using the bottom-up ordering of computation from Algorithm 1. Where, in the standard algorithm, each message contains a complete table for all assignments to its variables, we break these into a separate message for each individual assignment of variables. The overall complexity is unchanged, because each assignment to all variables in each cluster is still considered only once.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "The dummy factor d ensures that every junction tree we derive from an SCFG rule has a cluster containing all four outer boundary variables, allowing efficient lookup of the inner maximization in (3). Because the outer boundary variables need not appear throughout the junction tree, this technique allows reuse of some partial results across different outer boundaries. As an example, consider message passing on the junction tree shown in Figure 5 , which corresponds to the parsing strategy of Figure 1. Only the final step involves all four boundaries of the complete cell, but the most complex step is the second, with a total of eight boundaries. This efficient reuse would not be achieved by applying the junction tree technique directly to the maximization operator in Algorithm 1, because we would be fixing the outer boundaries and computing the junction tree only over the inner boundaries.",
"cite_spans": [],
"ref_spans": [
{
"start": 443,
"end": 451,
"text": "Figure 5",
"ref_id": "FIGREF1"
},
{
"start": 499,
"end": 505,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "[Figure 5 graphics: junction tree with clusters {x_0, x_3, x_4, y_0, y_2, y_3, y_4}, {x_0, x_2, x_3, y_0, y_1, y_2, y_3, y_4}, and {x_0, x_1, x_2, y_1, y_2, y_3, y_4}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Junction Trees",
"sec_num": "2.2"
},
{
"text": "The complexity of the message passing algorithm over an MRF's junction tree is determined by the treewidth of the MRF. In this section we show that, because parsing strategies are in direct correspondence with valid junction trees, we can use treewidth to analyze the complexity of a grammar rule. We define a tabular parsing strategy as any dynamic programming algorithm that stores partial results corresponding to subsets of a rule's child nonterminals. Such a strategy can be represented as a recursive partition of child nonterminals, as shown in Figure 1(left) . We show below that a recursive partition of children having maximum complexity k at any step can be converted into a junction tree having k as the maximum cluster size. This implies that finding the optimal junction tree will give a parsing strategy at least as good as the strategy of the optimal recursive partition.",
"cite_spans": [],
"ref_spans": [
{
"start": 552,
"end": 566,
"text": "Figure 1(left)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "A recursive partition of child nonterminals can be converted into a junction tree as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "\u2022 For each leaf of the recursive partition, which represents a single child nonterminal i, create a leaf in the junction tree with the cluster",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "(x_{i\u22121}, x_i, y_{\u03c0(i)\u22121}, y_{\u03c0(i)}) and the potential function f_i(x_{i\u22121}, x_i, y_{\u03c0(i)\u22121}, y_{\u03c0(i)}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "\u2022 For each internal node in the recursive partition, create a corresponding node in the junction tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "\u2022 Add each variable x_i to all nodes in the junction tree on the path from the node for child nonterminal i \u2212 1 to the node for child nonterminal i. Similarly, add each variable y_{\u03c0(i)} to all nodes in the junction tree on the path from the node for child nonterminal \u03c0(i) \u2212 1 to the node for child nonterminal \u03c0(i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "Because each variable appears as an argument of only two factors, the junction tree nodes in which it is present form a linear path from one leaf of the tree to another. Since each variable is associated only with nodes on one path through the tree, the resulting tree will satisfy the junction tree property. The tree structure of the original recursive partition implies that the variable rises from two leaf nodes to the lowest common ancestor of both leaves, and is not contained in any higher nodes. Thus each node in the junction tree contains variables corresponding to the set of endpoints of the spans defined by the two subsets corresponding to its two children. The number of variables at each node in the junction tree is identical to the number of free endpoints at the corresponding combination in the recursive partition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "Because each recursive partition corresponds to a junction tree with the same complexity, finding the best recursive partition reduces to finding the junction tree with the best complexity, i.e., the smallest maximum cluster size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "Finding the junction tree with the smallest cluster size is equivalent to finding the input graph's treewidth, the smallest k such that the graph can be embedded in a k-tree. In general, this problem was shown to be NP-complete by Arnborg et al. (1987) . However, because the treewidth of a given rule lower bounds the complexity of its tabular parsing strategies, parsing complexity for general rules can be bounded with treewidth results for worst-case rules, without explicitly identifying the worst-case permutations.",
"cite_spans": [
{
"start": 231,
"end": 252,
"text": "Arnborg et al. (1987)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth and Tabular Parsing",
"sec_num": "3"
},
{
"text": "In this section, we show that the treewidth of the graphs corresponding to worst-case permutations grows linearly with the permutation's length. Our strategy is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "1. Define a 3-regular graph for an input permutation consisting of a subset of edges from the original graph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "2. Show that the edge-expansion of the 3-regular graph grows linearly for randomly chosen permutations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "3. Use edge-expansion to bound the spectral gap.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "4. Use spectral gap to bound treewidth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
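The construction in step 1 is straightforward to realize concretely (a sketch with our own naming; the paper defines H abstractly):

```python
import random

def random_H(n, seed=0):
    """Two disjoint n-cycles (the x- and y-chains) plus a random perfect
    matching between them, modelling the edges (x_i, y_pi(i))."""
    rng = random.Random(seed)
    pi = list(range(n))
    rng.shuffle(pi)                    # a random permutation
    edges = set()
    for i in range(n):
        edges.add(frozenset({('x', i), ('x', (i + 1) % n)}))  # cycle G1
        edges.add(frozenset({('y', i), ('y', (i + 1) % n)}))  # cycle G2
        edges.add(frozenset({('x', i), ('y', pi[i])}))        # matching M
    return edges

H = random_H(16)
```

Every vertex receives two cycle edges and one matching edge, so H is 3-regular on 2n vertices with 3n edges, as required by the lemmas below.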
{
"text": "For the first step, we define H = (V, E) as a random 3-regular graph on 2n vertices obtained as follows. Let G 1 = (V 1 , E 1 ) and G 2 = (V 2 , E 2 ) be cycles, each on a separate set of n vertices. These two cycles correspond to the edges (x i , x i+1 ) and (y i , y i+1 ) in the graphs of the type shown in Figure 4 . Let M be a random perfect matching between V 1 and V 2 . The matching represents the edges (x i , y \u03c0(i) ) produced from the input permutation \u03c0. Let H be the union of G 1 , G 2 , and M . While H contains only some of the edges in the graphs defined in the previous section, removing edges cannot increase the treewidth.",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 318,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "For the second step of the proof, we use a probabilistic argument detailed in the next subsection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "For the third step, we will use the following connection between the edge-expansion and the eigenvalue gap (Alon and Milman, 1985; Tanner, 1984) .",
"cite_spans": [
{
"start": 107,
"end": 130,
"text": "(Alon and Milman, 1985;",
"ref_id": "BIBREF1"
},
{
"start": 131,
"end": 144,
"text": "Tanner, 1984)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "Lemma 4.1 Let G be a k-regular graph. Let \u03bb_2 be the second largest eigenvalue of G. Let h(G) be the edge-expansion of G. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "k \u2212 \u03bb_2 \u2265 h(G)\u00b2/(2k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "Finally, for the fourth step, we use a relation between the eigenvalue gap and treewidth for regular graphs shown by Chandran and Subramanian (2003) . Lemma 4.2 Let G be a k-regular graph. Let n be the number of vertices of G. Let \u03bb_2 be the second largest eigenvalue of G. Then",
"cite_spans": [
{
"start": 117,
"end": 148,
"text": "Chandran and Subramanian (2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "tw(G) \u2265 (n/(4k)) (k \u2212 \u03bb_2) \u2212 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "Note that in our setting k = 3. In order to use Lemma 4.2 we will need to give a lower bound on the eigenvalue gap k \u2212 \u03bb_2 of G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Treewidth Grows Linearly",
"sec_num": "4"
},
{
"text": "The edge-expansion of a set of vertices T is the number of edges connecting vertices in T to the rest of the graph, divided by the number of vertices in T ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "|E(T, V \u2212 T)| / |T|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "where we assume that |T | \u2264 |V |/2. The edge expansion of a graph is the minimum edge expansion of any subset of vertices:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "h(G) = min_{T \u2286 V} |E(T, V \u2212 T)| / min{|T|, |V \u2212 T|}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
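For small graphs this definition can be evaluated directly by brute force (exponential in |V|, so illustration only; names are ours):

```python
from fractions import Fraction
from itertools import combinations

def edge_expansion(vertices, edges):
    """h(G) = min over subsets T of |E(T, V-T)| / min(|T|, |V-T|).

    By symmetry of T and V-T it suffices to scan |T| <= |V|/2, where the
    denominator is simply |T|."""
    V = list(vertices)
    best = None
    for size in range(1, len(V) // 2 + 1):
        for T in combinations(V, size):
            T = set(T)
            cut = sum(1 for u, v in edges if (u in T) != (v in T))
            ratio = Fraction(cut, size)
            if best is None or ratio < best:
                best = ratio
    return best

# a 6-cycle: the worst subset is a path of 3 vertices, cut by 2 edges
cycle6 = [(i, (i + 1) % 6) for i in range(6)]
```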
{
"text": "Intuitively, if all subsets of vertices are highly connected to the remainder of the graph, there is no way to decompose the graph into minimally interacting subgraphs, and thus no way to decompose the dynamic programming problem of parsing into smaller pieces. Let C(n, k) denote the standard binomial coefficient, and for \u03b1 \u2208 R, let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "C(n, \u2264\u03b1) = \u2211_{k=0}^{\u230a\u03b1\u230b} C(n, k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "We will use the following standard inequality valid for 0 \u2264 \u03b1 \u2264 n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C(n, \u2264\u03b1) \u2264 (ne/\u03b1)^\u03b1",
"eq_num": "(4)"
}
],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "Lemma 4.3 With probability at least 0.98 the graph H has edge-expansion at least 1/50.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Edge Expansion",
"sec_num": "4.1"
},
{
"text": "Let \u03b5 = 1/50. Assume that T \u2286 V is a set with a small edge-expansion, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "|E(T, V \u2212 T )| \u2264 \u03b5|T |,",
"eq_num": "(5)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "and |T | \u2264 |V |/2 = n. Let T i = T \u2229 V i and let t i = |T i |, for i = 1, 2. We will w.l.o.g. assume t 1 \u2264 t 2 . We will denote as \u2113 i the number of spans of consecutive vertices from E i contained in T . Thus",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "2\u2113 i = |E(T i , V i \u2212 T i )|, for i = 1, 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "The spans counted by \u2113 1 and \u2113 2 correspond to continuous spans counted in computing the complexity of a chart parsing operation. However, unlike in the diagrams in the earlier part of this paper, in our graph theoretic argument there is no requirement that T select only corresponding pairs of vertices from V 1 and V 2 . There are at least 2(\u2113 1 +\u2113 2 )+t 2 \u2212t 1 edges between T and V \u2212 T . This is because there are 2\u2113 i edges within V i at the left and right boundaries of the \u2113 i spans, and at least t 2 \u2212 t 1 edges connecting the extra vertices from T 2 that have no matching vertex in T 1 . Thus from assumption (5) we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
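The edge count above can be checked by simulation. The sketch below assumes the construction of H from earlier in the paper: two n-cycles on V_1 and V_2 whose vertices are matched according to a permutation; the function names and the set encoding are my own:

```python
import random

def cut_size(n, perm, T1, T2):
    """|E(T, V - T)| in H, assumed here to be two n-cycles V1, V2 plus
    the matching edges {(i, perm[i])}; T = T1 (in V1) union T2 (in V2)."""
    cut = 0
    for i in range(n):
        j = (i + 1) % n  # cycle edges within V1 and within V2
        cut += ((i in T1) != (j in T1)) + ((i in T2) != (j in T2))
    for i in range(n):  # matching edges across the two cycles
        cut += (i in T1) != (perm[i] in T2)
    return cut

def spans(n, T):
    """Number of maximal cyclic runs of consecutive vertices inside T."""
    if not T or len(T) == n:
        return 0
    return sum(1 for i in T if (i - 1) % n not in T)

random.seed(0)
n = 12
perm = random.sample(range(n), n)
for _ in range(500):
    T1 = set(random.sample(range(n), random.randrange(n + 1)))
    T2 = set(random.sample(range(n), random.randrange(n + 1)))
    t1, t2 = sorted((len(T1), len(T2)))
    # claimed lower bound: |E(T, V - T)| >= 2(l1 + l2) + t2 - t1
    assert cut_size(n, perm, T1, T2) >= 2 * (spans(n, T1) + spans(n, T2)) + t2 - t1
print("lower bound holds on all sampled subsets")
```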
{
"text": "t 2 \u2212 t 1 \u2264 \u03b5(t 1 + t 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "which in turn implies",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t 1 \u2264 t 2 \u2264 1 + \u03b5 1 \u2212 \u03b5 t 1 .",
"eq_num": "(6)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "Similarly, using (6), we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2113 1 + \u2113 2 \u2264 \u03b5 2 (t 1 + t 2 ) \u2264 \u03b5 1 \u2212 \u03b5 t 1 .",
"eq_num": "(7)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "That is, for T to have small edge expansion, the vertices in T 1 and T 2 must be collected into a small number of spans \u2113 1 and \u2113 2 . This limit on the number of spans allows us to limit the number of ways of choosing T 1 and T 2 . Suppose that t 1 is given. Any pair T 1 , T 2 is determined by the edges in E(T 1 , V 1 \u2212 T 1 ), and E(T 2 , V 2 \u2212 T 2 ), and two bits (corresponding to the possible \"swaps\" of T i with V i \u2212 T i ). Note that we can choose at most 2\u2113 1 + 2\u2113 2 \u2264 t 1 \u2022 2\u03b5/(1 \u2212 \u03b5) edges in total. Thus the number of choices of T 1 and T 2 is bounded above by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "4 \u2022 2n \u2264 2\u03b5 1\u2212\u03b5 t 1 .",
"eq_num": "(8)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "For a given choice of T 1 and T 2 , for T to have small edge expansion, there must also not be too many edges that connect T 1 to vertices in V 2 \u2212 T 2 . Let k be the number of edges between T 1 and T 2 . There are at least t 1 + t 2 \u2212 2k edges between T and V \u2212 T and from assumption (5) we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t 1 + t 2 \u2212 2k \u2264 \u03b5(t 1 + t 2 ) Thus k \u2265 (1 \u2212 \u03b5) t 1 + t 2 2 \u2265 (1 \u2212 \u03b5)t 1 .",
"eq_num": "(9)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "The probability that there are \u2265 (1 \u2212 \u03b5)t 1 edges between T 1 and T 2 is bounded by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "t 1 \u2264 \u03b5t 1 t 2 n (1\u2212\u03b5)t 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "where the first term selects vertices in T 1 connected to T 2 , and the second term upper bounds the probability that the selected vertices are indeed connected to T 2 . Using 6, we obtain a bound in terms of t 1 alone:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "t 1 \u2264 \u03b5t 1 1 + \u03b5 1 \u2212 \u03b5 \u2022 t 1 n (1\u2212\u03b5)t 1 ,",
"eq_num": "(10)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "Combining the number of ways of choosing T 1 and T 2 (8) with the bound on the probability that the edges M from the input permutation connect almost all the vertices in T 1 to vertices from T 2 (10), and using the union bound over values of t 1 , we obtain that the probability p that there exists T \u2286 V with edge-expansion less than \u03b5 is bounded by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "2 \u230an/2\u230b t 1 =0 4\u2022 2n \u2264 2\u03b5 1\u2212\u03b5 t 1 t 1 \u2264 \u03b5t 1 1 + \u03b5 1 \u2212 \u03b5 \u2022 t 1 n (1\u2212\u03b5)t 1",
"eq_num": "("
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "11) where the factor of 2 is due to the assumption t 1 \u2264 t 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "The graph H is connected and hence T has at least one out-going edge. Therefore if t 1 + t 2 \u2264 1/\u03b5, the edge-expansion of T is at least \u03b5. Thus a set with edge-expansion less than \u03b5 must have t 1 + t 2 \u2265 1/\u03b5, which, by (6), implies t 1 \u2265 (1 \u2212 \u03b5)/(2\u03b5). Thus the sum in (11) can be taken for t from \u2308(1 \u2212 \u03b5)/(2\u03b5)\u2309 to \u230an/2\u230b. Using (4) we obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u2264 8 \u230an/2\u230b t 1 =\u2308 1\u2212\u03b5 2\u03b5 \u2309 \uf8ee \uf8f0 2ne 2\u03b5 1\u2212\u03b5 t 1 2\u03b5 1\u2212\u03b5 t 1 t 1 e \u03b5t 1 \u03b5t 1 1 + \u03b5 1 \u2212 \u03b5 \u2022 t 1 n (1\u2212\u03b5)t 1 = 8 \u230an/2\u230b t 1 =\u2308 1\u2212\u03b5 2\u03b5 \u2309 e(1 \u2212 \u03b5) \u03b5 2\u03b5 1\u2212\u03b5 e \u03b5 \u03b5 1 + \u03b5 1 \u2212 \u03b5 1\u2212\u03b5 t 1 n 1\u2212\u03b5\u2212 2\u03b5 1\u2212\u03b5 t 1 .",
"eq_num": "(12)"
}
],
"section": "Proof :",
"sec_num": null
},
{
"text": "We will use t 1 /n \u2264 1/2 and plug \u03b5 = 1/50 into (12). We obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "p \\le 8 \\sum_{t_1=25}^{\\infty} 0.74^{t_1} \\le 0.02.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
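The constants above are quick to verify numerically: plugging \u03b5 = 1/50 into the bracketed base of (12) (with t_1/n \u2264 1/2) gives a value below 0.74, the starting index is \u2308(1 \u2212 \u03b5)/(2\u03b5)\u2309 = 25, and the geometric tail sums to well under 0.02. A quick check (a sketch, not part of the paper):

```python
from math import e, ceil

eps = 1 / 50
# per-term base from (12), using t1/n <= 1/2
base = ((e * (1 - eps) / eps) ** (2 * eps / (1 - eps))
        * (e / eps) ** eps
        * ((1 + eps) / (1 - eps)) ** (1 - eps)
        * 0.5 ** (1 - eps - 2 * eps / (1 - eps)))
start = ceil((1 - eps) / (2 * eps))
# closed form of the geometric tail 8 * sum_{t1 >= start} 0.74^t1
tail = 8 * 0.74 ** start / (1 - 0.74)
print(base, start, tail)
assert base <= 0.74 and start == 25 and tail <= 0.02
```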
{
"text": "While this constant bound on p is sufficient for our main complexity result, it can further be shown that p approaches zero as n increases, from the fact that the geometric sum in (12) converges, and each term for fixed t 1 goes to zero as n grows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "This completes the second step of the proof as outlined at the beginning of this section. The constant bound on the edge expansion implies a constant bound on the eigenvalue gap (Lemma 4.1), which in turn implies an \u2126(n) bound on treewidth (Lemma 4.2), yielding:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "Theorem 4.4 Tabular parsing strategies for Synchronous Context-Free Grammars containing rules with all permutations of length n require time \u2126(N cn ) in the input string length N for some constant c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
{
"text": "We have shown our result without explicitly constructing a difficult permutation, but we close with one example. The zero-based permutations of length p, where p is prime, \u03c0(i) = i \u22121 mod p for 0 < i < p, and \u03c0(0) = 0, provide a known family of expander graphs (see Hoory et al. (2006) ).",
"cite_spans": [
{
"start": 266,
"end": 285,
"text": "Hoory et al. (2006)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof :",
"sec_num": null
},
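This family is simple to construct programmatically; a brief sketch (Python's three-argument `pow` computes modular inverses as of Python 3.8; the function name is mine):

```python
def inverse_permutation(p):
    """pi(0) = 0 and pi(i) = i^{-1} mod p for 0 < i < p, for prime p."""
    return [0] + [pow(i, -1, p) for i in range(1, p)]

pi = inverse_permutation(7)
print(pi)  # [0, 1, 4, 5, 2, 3, 6]
assert sorted(pi) == list(range(7))           # a permutation of 0..6
assert all(pi[pi[i]] == i for i in range(7))  # and an involution
```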
{
"text": "We have shown in the exponent in the complexity of polynomial-time parsing algorithms for synchronous context-free grammars grows linearly with the length of the grammar rules. While it is very expensive computationally to test whether a specified permutation has a parsing algorithm of a certain complexity, it turns out that randomly chosen permutations are difficult with high probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We describe our methods in terms of the Viterbi algorithm (using the max-product semiring), but they also apply to nonprobabilistic parsing (boolean semiring), language modeling (sum-product semiring), and Expectation Maximization (with inside and outside passes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our case unnormalized.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Acknowledgments This work was supported by NSF grants IIS-0546554, IIS-0428020, and IIS-0325646.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Theory of Parsing, Translation, and Compiling",
"authors": [
{
"first": "Albert",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "Jeffery",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Albert V. Aho and Jeffery D. Ullman. 1972. The The- ory of Parsing, Translation, and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, NJ.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "\u03bb 1 , isoperimetric inequalities for graphs and superconcentrators",
"authors": [
{
"first": "N",
"middle": [],
"last": "Alon",
"suffix": ""
},
{
"first": "V",
"middle": [
"D"
],
"last": "Milman",
"suffix": ""
}
],
"year": 1985,
"venue": "J. of Combinatorial Theory, Ser. B",
"volume": "38",
"issue": "",
"pages": "73--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Alon and V.D. Milman. 1985. \u03bb 1 , isoperimetric inequalities for graphs and superconcentrators. J. of Combinatorial Theory, Ser. B, 38:73-88.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Complexity of finding embeddings in a k-tree",
"authors": [
{
"first": "Stefen",
"middle": [],
"last": "Arnborg",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"G"
],
"last": "Corneil",
"suffix": ""
},
{
"first": "Andrzej",
"middle": [],
"last": "Proskurowski",
"suffix": ""
}
],
"year": 1987,
"venue": "SIAM Journal of Algebraic and Discrete Methods",
"volume": "8",
"issue": "",
"pages": "277--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefen Arnborg, Derek G. Corneil, and Andrzej Proskurowski. 1987. Complexity of finding embed- dings in a k-tree. SIAM Journal of Algebraic and Dis- crete Methods, 8:277-284, April.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A spectral lower bound for the treewidth of a graph and its consequences",
"authors": [
{
"first": "L",
"middle": [
"S"
],
"last": "Chandran",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Subramanian",
"suffix": ""
}
],
"year": 2003,
"venue": "Information Processing Letters",
"volume": "87",
"issue": "",
"pages": "195--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L.S. Chandran and C.R. Subramanian. 2003. A spectral lower bound for the treewidth of a graph and its conse- quences. Information Processing Letters, 87:195-200.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Expander graphs and their applications",
"authors": [
{
"first": "Shlomo",
"middle": [],
"last": "Hoory",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Linial",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Wigderson",
"suffix": ""
}
],
"year": 2006,
"venue": "Bull. Amer. Math. Soc",
"volume": "43",
"issue": "",
"pages": "439--561",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shlomo Hoory, Nathan Linial, and Avi Wigderson. 2006. Expander graphs and their applications. Bull. Amer. Math. Soc., 43:439-561.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bayesian updating in causal probabilistic networks by local computations",
"authors": [
{
"first": "V",
"middle": [],
"last": "Finn",
"suffix": ""
},
{
"first": "Steffen",
"middle": [
"L"
],
"last": "Jensen",
"suffix": ""
},
{
"first": "Kristian",
"middle": [
"G"
],
"last": "Lauritzen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Olesen",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Statistics Quarterly",
"volume": "4",
"issue": "",
"pages": "269--282",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finn V. Jensen, Steffen L. Lauritzen, and Kristian G. Ole- sen. 1990. Bayesian updating in causal probabilis- tic networks by local computations. Computational Statistics Quarterly, 4:269-282.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Some computational complexity results for synchronous contextfree grammars",
"authors": [
{
"first": "Giorgio",
"middle": [],
"last": "Satta",
"suffix": ""
},
{
"first": "Enoch",
"middle": [],
"last": "Peserico",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "803--810",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Giorgio Satta and Enoch Peserico. 2005. Some com- putational complexity results for synchronous context- free grammars. In Proceedings of HLT/EMNLP, pages 803-810, Vancouver, Canada, October.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Probability propagation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Shafer",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Shenoy",
"suffix": ""
}
],
"year": 1990,
"venue": "Annals of Mathematics and Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "327--353",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Shafer and P. Shenoy. 1990. Probability propaga- tion. Annals of Mathematics and Artificial Intelli- gence, 2:327-353.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Explicit construction of concentrators from generalized n-gons",
"authors": [
{
"first": "R",
"middle": [
"M"
],
"last": "Tanner",
"suffix": ""
}
],
"year": 1984,
"venue": "J. Algebraic Discrete Methods",
"volume": "5",
"issue": "",
"pages": "287--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.M. Tanner. 1984. Explicit construction of concentra- tors from generalized n-gons. J. Algebraic Discrete Methods, 5:287-294.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Markov Random Field for rule (2)."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Junction tree for rule (2)."
}
}
}
}