{
"paper_id": "E09-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:48:52.272153Z"
},
"title": "Cube Summing, Approximate Inference with Non-Local Features, and Dynamic Programming without Semirings",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "kgimpel@cs.cmu.edu"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Language Technologies Institute Carnegie Mellon University Pittsburgh",
"location": {
"postCode": "15213",
"region": "PA",
"country": "USA"
}
},
"email": "nasmith@cs.cmu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We introduce cube summing, a technique that permits dynamic programming algorithms for summing over structures (like the forward and inside algorithms) to be extended with non-local features that violate the classical structural independence assumptions. It is inspired by cube pruning (Chiang, 2007; Huang and Chiang, 2007) in its computation of non-local features dynamically using scored k-best lists, but also maintains additional residual quantities used in calculating approximate marginals. When restricted to local features, cube summing reduces to a novel semiring (k-best+residual) that generalizes many of the semirings of Goodman (1999). When non-local features are included, cube summing does not reduce to any semiring, but is compatible with generic techniques for solving dynamic programming equations.",
"pdf_parse": {
"paper_id": "E09-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "We introduce cube summing, a technique that permits dynamic programming algorithms for summing over structures (like the forward and inside algorithms) to be extended with non-local features that violate the classical structural independence assumptions. It is inspired by cube pruning (Chiang, 2007; Huang and Chiang, 2007) in its computation of non-local features dynamically using scored k-best lists, but also maintains additional residual quantities used in calculating approximate marginals. When restricted to local features, cube summing reduces to a novel semiring (k-best+residual) that generalizes many of the semirings of Goodman (1999). When non-local features are included, cube summing does not reduce to any semiring, but is compatible with generic techniques for solving dynamic programming equations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Probabilistic NLP researchers frequently make independence assumptions to keep inference algorithms tractable. Doing so limits the features that are available to our models, requiring features to be structurally local. Yet many problems in NLP-machine translation, parsing, named-entity recognition, and others-have benefited from the addition of non-local features that break classical independence assumptions. Doing so has required algorithms for approximate inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently cube pruning (Chiang, 2007; Huang and Chiang, 2007) was proposed as a way to leverage existing dynamic programming algorithms that find optimal-scoring derivations or structures when only local features are involved. Cube pruning permits approximate decoding with non-local features, but leaves open the question of how the feature weights or probabilities are learned. Meanwhile, some learning algorithms, like maximum likelihood for conditional log-linear models (Lafferty et al., 2001) , unsupervised models (Pereira and Schabes, 1992) , and models with hidden variables (Koo and Collins, 2005; Wang et al., 2007; , require summing over the scores of many structures to calculate marginals.",
"cite_spans": [
{
"start": 22,
"end": 36,
"text": "(Chiang, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 37,
"end": 60,
"text": "Huang and Chiang, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 474,
"end": 497,
"text": "(Lafferty et al., 2001)",
"ref_id": "BIBREF19"
},
{
"start": 520,
"end": 547,
"text": "(Pereira and Schabes, 1992)",
"ref_id": "BIBREF25"
},
{
"start": 583,
"end": 606,
"text": "(Koo and Collins, 2005;",
"ref_id": "BIBREF17"
},
{
"start": 607,
"end": 625,
"text": "Wang et al., 2007;",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first review the semiring-weighted logic programming view of dynamic programming algorithms (Shieber et al., 1995) and identify an intuitive property of a program called proof locality that follows from feature locality in the underlying probability model ( \u00a72). We then provide an analysis of cube pruning as an approximation to the intractable problem of exact optimization over structures with non-local features and show how the use of non-local features with k-best lists breaks certain semiring properties ( \u00a73). The primary contribution of this paper is a novel techniquecube summing-for approximate summing over discrete structures with non-local features, which we relate to cube pruning ( \u00a74). We discuss implementation ( \u00a75) and show that cube summing becomes exact and expressible as a semiring when restricted to local features; this semiring generalizes many commonly-used semirings in dynamic programming ( \u00a76).",
"cite_spans": [
{
"start": 95,
"end": 117,
"text": "(Shieber et al., 1995)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we discuss dynamic programming algorithms as semiring-weighted logic programs. We then review the definition of semirings and important examples. We discuss the relationship between locally-factored structure scores and proofs in logic programs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "2"
},
{
"text": "Many algorithms in NLP involve dynamic programming (e.g., the Viterbi, forward-backward, probabilistic Earley's, and minimum edit distance algorithms). Dynamic programming (DP) involves solving certain kinds of recursive equations with shared substructure and a topological ordering of the variables. Shieber et al. (1995) showed a connection between DP (specifically, as used in parsing) and logic programming, and Goodman (1999) augmented such logic programs with semiring weights, giving an algebraic explanation for the intuitive connections among classes of algorithms with the same logical structure. For example, in Goodman's framework, the forward algorithm and the Viterbi algorithm are comprised of the same logic program with different semirings. Goodman defined other semirings, including ones we will use here. This formal framework was the basis for the Dyna programming language, which permits a declarative specification of the logic program and compiles it into an efficient, agendabased, bottom-up procedure (Eisner et al., 2005) .",
"cite_spans": [
{
"start": 301,
"end": 322,
"text": "Shieber et al. (1995)",
"ref_id": "BIBREF27"
},
{
"start": 416,
"end": 430,
"text": "Goodman (1999)",
"ref_id": "BIBREF10"
},
{
"start": 1026,
"end": 1047,
"text": "(Eisner et al., 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Programming",
"sec_num": "2.1"
},
{
"text": "For our purposes, a DP consists of a set of recursive equations over a set of indexed variables. For example, the probabilistic CKY algorithm (run on sentence w 1 w 2 ...w n ) is written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Programming",
"sec_num": "2.1"
},
{
"text": "C X,i\u22121,i = p X\u2192w i (1) C X,i,k = max Y,Z\u2208N;j\u2208{i+1,...,k\u22121} p X\u2192Y Z \u00d7 C Y,i,j \u00d7 C Z,j,k goal = C S,0,n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Programming",
"sec_num": "2.1"
},
{
"text": "where N is the nonterminal set and S \u2208 N is the start symbol. Each C X,i,j variable corresponds to the chart value (probability of the most likely subtree) of an X-constituent spanning the substring w i+1 ...w j . goal is a special variable of greatest interest, though solving for goal correctly may (in general, but not in this example) require solving for all the other values. We will use the term \"index\" to refer to the subscript values on variables (X, i, j on C X,i,j ). Where convenient, we will make use of Shieber et al.'s logic programming view of dynamic programming. In this view, each variable (e.g., C X,i,j in Eq. 1) corresponds to the value of a \"theorem,\" the constants in the equations (e.g., p X\u2192Y Z in Eq. 1) correspond to the values of \"axioms,\" and the DP defines quantities corresponding to weighted \"proofs\" of the goal theorem (e.g., finding the maximum-valued proof, or aggregating proof values). The value of a proof is a combination of the values of the axioms it starts with.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Programming",
"sec_num": "2.1"
},
{
"text": "Semirings define these values and define two operators over them, called \"aggregation\" (max in Eq. 1) and \"combination\" (\u00d7 in Eq. 1). Goodman and Eisner et al. assumed that the values of the variables are in a semiring, and that the equations are defined solely in terms of the two semiring operations. We will often refer to the \"probability\" of a proof, by which we mean a nonnegative R-valued score defined by the semantics of the dynamic program variables; it may not be a normalized probability.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "Goodman and Eisner et al.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic Programming",
"sec_num": "2.1"
},
{
"text": "A semiring is a tuple A, \u2295, \u2297, 0, 1 , in which A is a set, \u2295 : A \u00d7 A \u2192 A is the aggregation operation, \u2297 : A \u00d7 A \u2192 A is the combination operation, 0 is the additive identity element (\u2200a \u2208 A, a \u2295 0 = a), and 1 is the multiplicative identity element (\u2200a \u2208 A, a \u2297 1 = a). A semiring requires \u2295 to be associative and commutative, and \u2297 to be associative and to distribute over \u2295. Finally, we require a \u2297 0 = 0 \u2297 a = 0 for all a \u2208 A. 1 Examples include the inside semiring, R \u22650 , +, \u00d7, 0, 1 , and the Viterbi semiring, R \u22650 , max, \u00d7, 0, 1 . The former sums the probabilities of all proofs of each theorem. The latter (used in Eq. 1) calculates the probability of the most probable proof of each theorem. Two more examples follow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": "Viterbi proof semiring. We typically need to recover the steps in the most probable proof in addition to its probability. This is often done using backpointers, but can also be accomplished by representing the most probable proof for each theorem in its entirety as part of the semiring value (Goodman, 1999) . For generality, we define a proof as a string that is constructed from strings associated with axioms, but the particular form of a proof is problem-dependent. The \"Viterbi proof\" semiring includes the probability of the most probable proof and the proof itself. Letting L \u2286 \u03a3 * be the proof language on some symbol set \u03a3, this semiring is defined on the set R \u22650 \u00d7 L with 0 element 0, and 1 element 1, . For two values u 1 , U 1 and u 2 , U 2 , the aggregation operator returns max Table 1 : Commonly used semirings. An element in the Viterbi proof semiring is denoted u1, U1 , where u1 is the probability of proof U1. The max-k function returns a sorted list of the top-k proofs from a set. The function performs a cross-product on two k-best proof lists (Eq. 2).",
"cite_spans": [
{
"start": 293,
"end": 308,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 794,
"end": 801,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": "(u 1 , u 2 ), U argmax i\u2208{1,2} u i . Semiring A Aggregation (\u2295) Combination (\u2297) 0 1 inside R \u22650 u1 + u2 u1u2 0 1 Viterbi R \u22650 max(u1, u2) u1u2 0 1 Viterbi proof R \u22650 \u00d7 L max(u1, u2), Uargmax i\u2208{1,2} u i u1u2, U1.U2 0, 1, k-best proof (R \u22650 \u00d7 L) \u2264k max-k(u1 \u222a u2) max-k(u1 u2) \u2205 { 1, }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": "The combination operator returns u 1 u 2 , U 1 .U 2 , where U 1 .U 2 denotes the string concatenation of U 1 and U 2 . 2 k-best proof semiring. The \"k-best proof\" semiring computes the values and proof strings of the k most-probable proofs for each theorem. The set is (R \u22650 \u00d7 L) \u2264k , i.e., sequences (up to length k) of sorted probability/proof pairs. The aggregation operator \u2295 uses max-k, which chooses the k highest-scoring proofs from its argument (a set of scored proofs) and sorts them in decreasing order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": "To define the combination operator \u2297, we require a cross-product that pairs probabilities and proofs from two k-best lists. We call this , defined on two semiring values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": "u = u 1 , U 1 , ..., u k , U k and v = v 1 , V 1 , ..., v k , V k by: u v = { u i v j , U i .V j | i, j \u2208 {1, ..., k}} (2) Then, u \u2297 v = max-k(u v)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": ". This is similar to the k-best semiring defined by Goodman (1999) . These semirings are summarized in Table 1 .",
"cite_spans": [
{
"start": 52,
"end": 66,
"text": "Goodman (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 103,
"end": 110,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semirings",
"sec_num": "2.2"
},
{
"text": "Let X be the space of inputs to our logic program, i.e., x \u2208 X is a set of axioms. Let L denote the proof language and let Y \u2286 L denote the set of proof strings that constitute full proofs, i.e., proofs of the special goal theorem. We assume an exponential probabilistic model such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y | x) \u221d M m=1 \u03bb hm(x,y) m",
"eq_num": "(3)"
}
],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "where each \u03bb m \u2265 0 is a parameter of the model and each h m is a feature function. There is a bijection between Y and the space of discrete structures that our model predicts. Given such a model, DP is helpful for solving two kinds of inference problems. The first problem, decoding, is to find the highest scoring proof 2 We assume for simplicity that the best proof will never be a tie among more than one proof. Goodman (1999) handles this situation more carefully, though our version is more likely to be used in practice for both the Viterbi proof and k-best proof semirings.\u0177 \u2208 Y for a given input x \u2208 X:",
"cite_spans": [
{
"start": 321,
"end": 322,
"text": "2",
"ref_id": null
},
{
"start": 415,
"end": 429,
"text": "Goodman (1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "y(x) = argmax y\u2208Y M m=1 \u03bb m hm(x,y) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "The second is the summing problem, which marginalizes the proof probabilities (without normalization):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "s(x) = y\u2208Y M m=1 \u03bb m hm(x,y) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "As defined, the feature functions h m can depend on arbitrary parts of the input axiom set x and the entire output proof y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features and Inference",
"sec_num": "2.3"
},
{
"text": "An important characteristic of problems suited for DP is that the global calculation (i.e., the value of goal ) depend only on local factored parts. In DP equations, this means that each equation connects a relatively small number of indexed variables related through a relatively small number of indices. In the logic programming formulation, it means that each step of the proof depends only on the theorems being used at that step, not the full proofs of those theorems. We call this property proof locality. In the statistical modeling view of Eq. 3, classical DP requires that the probability model make strong Markovian conditional independence assumptions (e.g., in HMMs, S t\u22121 \u22a5 S t+1 | S t ); in exponential families over discrete structures, this corresponds to feature locality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "For a particular proof y of goal consisting of t intermediate theorems, we define a set of proof strings i \u2208 L for i \u2208 {1, ..., t}, where i corresponds to the proof of the ith theorem. 3 We can break the computation of feature function h m into a summation over terms corresponding to each i :",
"cite_spans": [
{
"start": 185,
"end": 186,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h m (x, y) = t i=1 f m (x, i )",
"eq_num": "(6)"
}
],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "This is simply a way of noting that feature functions \"fire\" incrementally at specific points in the proof, normally at the first opportunity. Any feature function can be expressed this way. For local features, we can go farther; we define a function top( ) that returns the proof string corresponding to the antecedents and consequent of the last inference step in . Local features have the property:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "h loc m (x, y) = t i=1 f m (x, top( i ))",
"eq_num": "(7)"
}
],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "Local features only have access to the most recent deductive proof step (though they may \"fire\" repeatedly in the proof), while non-local features have access to the entire proof up to a given theorem. For both kinds of features, the \"f \" terms are used within the DP formulation. When taking an inference step to prove theorem i, the value",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "M m=1 \u03bb fm(x, i ) m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "is combined into the calculation of that theorem's value, along with the values of the antecedents. Note that typically only a small number of f m are nonzero for theorem i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "When non-local h m /f m that depend on arbitrary parts of the proof are involved, the decoding and summing inference problems are NP-hard (they instantiate probabilistic inference in a fully connected graphical model). Sometimes, it is possible to achieve proof locality by adding more indices to the DP variables (for example, consider modifying the bigram HMM Viterbi algorithm for trigram HMMs). This increases the number of variables and hence computational cost. In general, it leads to exponential-time inference in the worst case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "There have been many algorithms proposed for approximately solving instances of these decoding and summing problems with non-local features. Some stem from work on graphical models, including loopy belief propagation (Sutton and McCallum, 2004; Smith and Eisner, 2008) , Gibbs sampling (Finkel et al., 2005) , sequential Monte Carlo methods such as particle filtering (Levy et al., 2008) , and variational inference (Jordan et al., 1999; MacKay, 1997; Kurihara and Sato, 2006) . Also relevant are stacked learning (Cohen and Carvalho, 2005) , interpretable as approximation of non-local feature values (Martins et al., 2008) , and M-estimation , which allows training without inference. Several other approaches used frequently in NLP are approximate methods for decoding only. These include beam search (Lowerre, 1976) , cube pruning, which we discuss in \u00a73, integer linear programming (Roth and Yih, 2004) , in which arbitrary features can act as constraints on y, and approximate solutions like McDonald and Pereira (2006) , in which an exact solution to a related decoding problem is found and then modified to fit the problem of interest.",
"cite_spans": [
{
"start": 217,
"end": 244,
"text": "(Sutton and McCallum, 2004;",
"ref_id": "BIBREF30"
},
{
"start": 245,
"end": 268,
"text": "Smith and Eisner, 2008)",
"ref_id": "BIBREF28"
},
{
"start": 286,
"end": 307,
"text": "(Finkel et al., 2005)",
"ref_id": "BIBREF8"
},
{
"start": 368,
"end": 387,
"text": "(Levy et al., 2008)",
"ref_id": "BIBREF20"
},
{
"start": 416,
"end": 437,
"text": "(Jordan et al., 1999;",
"ref_id": "BIBREF15"
},
{
"start": 438,
"end": 451,
"text": "MacKay, 1997;",
"ref_id": "BIBREF22"
},
{
"start": 452,
"end": 476,
"text": "Kurihara and Sato, 2006)",
"ref_id": "BIBREF18"
},
{
"start": 514,
"end": 540,
"text": "(Cohen and Carvalho, 2005)",
"ref_id": "BIBREF4"
},
{
"start": 602,
"end": 624,
"text": "(Martins et al., 2008)",
"ref_id": "BIBREF23"
},
{
"start": 804,
"end": 819,
"text": "(Lowerre, 1976)",
"ref_id": "BIBREF21"
},
{
"start": 887,
"end": 907,
"text": "(Roth and Yih, 2004)",
"ref_id": "BIBREF26"
},
{
"start": 998,
"end": 1025,
"text": "McDonald and Pereira (2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof and Feature Locality",
"sec_num": "2.4"
},
{
"text": "Cube pruning (Chiang, 2007; Huang and Chiang, 2007) is an approximate technique for decoding (Eq. 4); it is used widely in machine translation. Given proof locality, it is essentially an efficient implementation of the k-best proof semiring. Cube pruning goes farther in that it permits nonlocal features to weigh in on the proof probabilities, at the expense of making the k-best operation approximate. We describe the two approximations cube pruning makes, then propose cube decoding, which removes the second approximation. Cube decoding cannot be represented as a semiring; we propose a more general algebraic structure that accommodates it.",
"cite_spans": [
{
"start": 13,
"end": 27,
"text": "(Chiang, 2007;",
"ref_id": "BIBREF3"
},
{
"start": 28,
"end": 51,
"text": "Huang and Chiang, 2007)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approximate Decoding",
"sec_num": "3"
},
{
"text": "Cube pruning is an approximate solution to the decoding problem (Eq. 4) in two ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximations in Cube Pruning",
"sec_num": "3.1"
},
{
"text": "Approximation 1: k < \u221e. Cube pruning uses a finite k for the k-best lists stored in each value. If k = \u221e, the algorithm performs exact decoding with non-local features (at obviously formidable expense in combinatorial problems).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximations in Cube Pruning",
"sec_num": "3.1"
},
{
"text": "Approximation 2: lazy computation. Cube pruning exploits the fact that k < \u221e to use lazy computation. When combining the k-best proof lists of d theorems' values, cube pruning does not enumerate all k d proofs, apply non-local features to all of them, and then return the top k. Instead, cube pruning uses a more efficient but approximate solution that only calculates the non-local factors on O(k) proofs to obtain the approximate top k. This trick is only approximate if non-local features are involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximations in Cube Pruning",
"sec_num": "3.1"
},
{
"text": "Approximation 2 makes it impossible to formulate cube pruning using separate aggregation and combination operations, as the use of lazy computation causes these two operations to effectively be performed simultaneously. To more directly relate our summing algorithm ( \u00a74) to cube pruning, we suggest a modified version of cube pruning that does not use lazy computation. We call this algorithm cube decoding. This algorithm can be written down in terms of separate aggregation and combination operations, though we will show it is not a semiring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approximations in Cube Pruning",
"sec_num": "3.1"
},
{
"text": "We formally describe cube decoding, show that it does not instantiate a semiring, then describe a more general algebraic structure that it does instantiate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "Consider the set G of non-local feature functions that map X \u00d7 L \u2192 R \u22650 . 4 Our definitions in \u00a72.2 for the k-best proof semiring can be expanded to accommodate these functions within the semiring value. Recall that values in the k-best proof semiring fall in A k = (R \u22650 \u00d7L) \u2264k . For cube decoding, we use a different set A cd defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "A cd = (R \u22650 \u00d7 L) \u2264k A k \u00d7G \u00d7 {0, 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "where the binary variable indicates whether the value contains a k-best list (0, which we call an \"ordinary\" value) or a non-local feature function in G (1, which we call a \"function\" value). We denote a value u \u2208 A cd by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "u = u 1 , U 1 , u 2 , U 2 , ..., u k , U k \u016b , g u , u s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "where each u i \u2208 R \u22650 is a probability and each U i \u2208 L is a proof string.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "We use \u2295 k and \u2297 k to denote the k-best proof semiring's operators, defined in \u00a72.2. We let g 0 be such that g 0 ( ) is undefined for all \u2208 L. For two values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "u = \u016b, g u , u s , v = v, g v , v s \u2208 A cd , cube decoding's aggregation operator is: u \u2295 cd v = \u016b \u2295 kv , g 0 , 0 if \u00acu s \u2227 \u00acv s (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "Under standard models, only ordinary values will be operands of \u2295 cd , so \u2295 cd is undefined when u s \u2228 v s . We define the combination operator \u2297 cd :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "u \u2297 cd v = (9) \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 \u016b \u2297 kv , g 0 , 0 if \u00acu s \u2227 \u00acv s , max-k(exec(g v ,\u016b)), g 0 , 0 if \u00acu s \u2227 v s , max-k(exec(g u ,v)), g 0 , 0 if u s \u2227 \u00acv s , , \u03bbz.(g u (z) \u00d7 g v (z)), 1 if u s \u2227 v s .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "where exec(g,\u016b) executes the function g upon each proof in the proof list\u016b, modifies the scores 4 In our setting, gm(x, ) will most commonly be defined as \u03bb",
"cite_spans": [
{
"start": 96,
"end": 97,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "fm(x, ) m",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "in the notation of \u00a72.3. But functions in G could also be used to implement, e.g., hard constraints or other nonlocal score factors. in place by multiplying in the function result, and returns the modified proof list:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "g = \u03bb .g(x, ) exec(g,\u016b) = u 1 g (U 1 ), U 1 , u 2 g (U 2 ), U 2 , ..., u k g (U k ), U k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "Here, max-k is simply used to re-sort the k-best proof list following function evaluation. The semiring properties fail to hold when introducing non-local features in this way. In particular, \u2297 cd is not associative when 1 < k < \u221e. For example, consider the probabilistic CKY algorithm as above, but using the cube decoding semiring with the non-local feature functions collectively known as \"NGramTree\" features (Huang, 2008) that score the string of terminals and nonterminals along the path from word j to word j + 1 when two constituents C Y,i,j and C Z,j,k are combined. The semiring value associated with such a feature is u = , NGramTree \u03c0 (), 1 (for a specific path \u03c0), and we rewrite Eq. 1 as follows (where ranges for summation are omitted for space):",
"cite_spans": [
{
"start": 413,
"end": 426,
"text": "(Huang, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "C X,i,k = cd p X\u2192Y Z \u2297 cd C Y,i,j \u2297 cd C Z,j,k \u2297 cd u",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "The combination operator is not associative since the following will give different answers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "5 (p X\u2192Y Z \u2297 cd C Y,i,j ) \u2297 cd (C Z,j,k \u2297 cd u) (10) ((p X\u2192Y Z \u2297 cd C Y,i,j ) \u2297 cd C Z,j,k ) \u2297 cd u (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
{
"text": "In Eq. 10, the non-local feature function is executed on the k-best proof list for Z, while in Eq. 11, NGramTree \u03c0 is called on the k-best proof list for the X constructed from Y and Z. Furthermore, neither of the above gives the desired result, since we actually wish to expand the full set of k 2 proofs of X and then apply NGramTree \u03c0 to each of them (or a higher-dimensional \"cube\" if more operands are present) before selecting the k-best. The binary operations above retain only the top k proofs of X in Eq. 11 before applying NGramTree \u03c0 to each of them. We actually would like to redefine combination so that it can operate on arbitrarily-sized sets of values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
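The non-associativity above can be made concrete with a small sketch (illustrative Python, not from the paper; scores are unnormalized toy weights, and `g` stands in for an NGramTree-style non-local feature function):

```python
from itertools import product

K = 2  # k-best list size

def max_k(items):
    # keep the K highest-scoring (score, proof) pairs
    return sorted(items, reverse=True)[:K]

def combine(x, y):
    # binary cube-decoding-style combination on k-best lists;
    # a callable operand plays the role of a non-local feature function
    if callable(y):
        return max_k([(s * y(p), p) for s, p in x])
    return max_k([(sx * sy, px + py) for (sx, px), (sy, py) in product(x, y)])

Y = [(6.0, 'Y1'), (4.0, 'Y2')]
Z = [(7.0, 'Z1'), (3.0, 'Z2')]
g = lambda p: 4.0 if 'Z2' in p else 1.0  # toy non-local feature

left = combine(combine(Y, Z), g)   # prune Y x Z to k=2, then rescore
right = combine(Y, combine(Z, g))  # rescore Z's proofs, then combine
print(left)   # [(42.0, 'Y1Z1'), (28.0, 'Y2Z1')]
print(right)  # [(72.0, 'Y1Z2'), (48.0, 'Y2Z2')]
```

The two groupings keep different proofs, which is exactly the failure of associativity described in the text.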
{
"text": "We can understand cube decoding through an algebraic structure with two operations \u2295 and \u2297, where \u2297 need not be associative and need not distribute over \u2295, and furthermore where \u2295 and \u2297 are defined on arbitrarily many operands. We will refer here to such a structure as a generalized semiring. 6 To define \u2297 cd on a set of operands with N ordinary operands and N function operands, we first compute the full O(k N ) cross-product of the ordinary operands, then apply each of the N functions from the remaining operands in turn upon the full N -dimensional \"cube,\" finally calling max-k on the result.",
"cite_spans": [
{
"start": 294,
"end": 295,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Decoding",
"sec_num": "3.2"
},
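A sketch of this n-ary combination (illustrative Python under the assumptions above; the names `combine_cd` and `boost` are ours, not the paper's):

```python
from itertools import product

def max_k(items, k):
    return sorted(items, reverse=True)[:k]

def combine_cd(ordinary, functions, k):
    # ordinary: N k-best lists of (score, proof) pairs;
    # functions: N' non-local feature functions applied to whole proofs
    cube = []
    for combo in product(*ordinary):       # full O(k^N) cross-product
        score, proof = 1.0, ()
        for s, p in combo:
            score *= s
            proof += p
        cube.append((score, proof))
    for g in functions:                    # apply each function operand in turn
        cube = [(s * g(p), p) for s, p in cube]
    return max_k(cube, k)                  # finally, max-k on the scored cube

a = [(6.0, ('Y1',)), (4.0, ('Y2',))]
b = [(7.0, ('Z1',)), (3.0, ('Z2',))]
boost = lambda p: 2.0 if 'Y1' in p else 1.0  # hypothetical non-local feature
print(combine_cd([a, b], [boost], k=2))  # [(84.0, ('Y1', 'Z1')), (36.0, ('Y1', 'Z2'))]
```

Because the functions see the full cube before pruning, this avoids the order-dependence of the binary operator.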
{
"text": "We present an approximate solution to the summing problem when non-local features are involved, which we call cube summing. It is an extension of cube decoding, and so we will describe it as a generalized semiring. The key addition is to maintain in each value, in addition to the k-best list of proofs from A k , a scalar corresponding to the residual probability (possibly unnormalized) of all proofs not among the k-best. 7 The k-best proofs are still used for dynamically computing non-local features but the aggregation and combination operations are redefined to update the residual as appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "We define the set A cs for cube summing as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "A cs = R \u22650 \u00d7 (R \u22650 \u00d7 L) \u2264k \u00d7 G \u00d7 {0, 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "A value u \u2208 A cs is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "u = u 0 , u 1 , U 1 , u 2 , U 2 , ..., u k , U k \u016b , g u , u s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "For a proof list\u016b, we use \u016b to denote the sum of all proof scores, i:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "u i ,U i \u2208\u016b u i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "The aggregation operator over operands",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "{u i } N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "i=1 , all such that u is = 0, 8 is defined by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "N i=1 u i = (12) N i=1 u i0 + Res N i=1\u016b i , max-k N i=1\u016b i , g 0 , 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
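A sketch of this aggregation in Python (our illustration, not the paper's code; values are simplified to (residual, k-best list) pairs, dropping the function component g and the flag):

```python
K = 2  # k-best list bound

def norm(proofs):
    # ||ū||: total score mass of a proof list
    return sum(s for s, _ in proofs)

def aggregate(values):
    # Eq. 12 on (residual, k-best list) pairs: pool the lists,
    # keep the K best, and fold everything else into the residual
    pooled = sorted((p for _, lst in values for p in lst), reverse=True)
    kept, rest = pooled[:K], pooled[K:]
    return sum(r for r, _ in values) + norm(rest), kept

u = (0.1, [(5.0, 'p1'), (2.0, 'p2')])
v = (0.0, [(4.0, 'q1'), (1.0, 'q2')])
print(aggregate([u, v]))  # (3.1, [(5.0, 'p1'), (4.0, 'q1')])
```

Note that the total mass (residual plus list) is preserved by aggregation, which is what makes the approximate sum exact for local features.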
{
"text": "6 Algebraic structures are typically defined with binary operators only, so we were unable to find a suitable term for this structure in the literature. 7 Blunsom and Osborne (2008) described a related approach to approximate summing using the chart computed during cube pruning, but did not keep track of the residual terms as we do here. 8 We assume that operands ui to \u2295cs will never be such that uis = 1 (non-local feature functions). This is reasonable in the widely used log-linear model setting we have adopted, where weights \u03bbm are factors in a proof's product score.",
"cite_spans": [
{
"start": 153,
"end": 154,
"text": "7",
"ref_id": null
},
{
"start": 340,
"end": 341,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "where Res returns the \"residual\" set of scored proofs not in the k-best among its arguments, possibly the empty set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "For a set of N +N operands",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "{v i } N i=1 \u222a{w j } N j=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "such that v is = 1 (non-local feature functions) and w js = 1 (ordinary values), the combination operator \u2297 is shown in Eq. 13 Fig. 1 . Note that the case where N = 0 is not needed in this application; an ordinary value will always be included in combination.",
"cite_spans": [],
"ref_spans": [
{
"start": 127,
"end": 133,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "In the special case of two ordinary operands (where u s = v s = 0), Eq. 13 reduces to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "u \u2297 v = (14) u 0 v 0 + u 0 v + v 0 \u016b + Res(\u016b v) , max-k(\u016b v), g 0 , 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
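The binary case can be sketched directly (illustrative Python with our simplified (residual, k-best list) values); it also checks the claim, made below for the goal value, that with only local features the residual plus the k-best mass is an exact sum:

```python
from itertools import product

K = 2

def norm(proofs):
    return sum(s for s, _ in proofs)

def combine(u, v):
    # Eq. 14 on two ordinary (residual, k-best list) values
    u0, ub = u
    v0, vb = v
    cross = sorted(((su * sv, pu + pv)
                    for (su, pu), (sv, pv) in product(ub, vb)), reverse=True)
    kept, rest = cross[:K], cross[K:]
    return u0 * v0 + u0 * norm(vb) + v0 * norm(ub) + norm(rest), kept

u = (0.5, [(3.0, 'a'), (2.0, 'b')])
v = (0.25, [(4.0, 'c'), (1.0, 'd')])
r, kept = combine(u, v)
# with only local features the approximation is exact:
# r + ||kept|| = (u0 + ||ū||)(v0 + ||v̄||)
print(r + norm(kept), (0.5 + 5.0) * (0.25 + 5.0))  # 28.875 28.875
```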
{
"text": "We define 0 as 0, , g 0 , 0 ; an appropriate definition for the combination identity element is less straightforward and of little practical importance; we leave it to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "If we use this generalized semiring to solve a DP and achieve goal value of u, the approximate sum of all proof probabilities is given by u 0 + \u016b . If all features are local, the approach is exact. With non-local features, the k-best list may not contain the k-best proofs, and the residual score, while including all possible proofs, may not include all of the non-local features in all of those proofs' probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cube Summing",
"sec_num": "4"
},
{
"text": "We have so far viewed dynamic programming algorithms in terms of their declarative specifications as semiring-weighted logic programs. Solvers have been proposed by Goodman (1999) , by Klein and Manning (2001) using a hypergraph representation, and by Eisner et al. (2005) . Because Goodman's and Eisner et al.'s algorithms assume semirings, adapting them for cube summing is non-trivial. 9 To generalize Goodman's algorithm, we suggest using the directed-graph data structure known variously as an arithmetic circuit or computation graph. 10 Arithmetic circuits have recently drawn interest in the graphical model community as a tool for performing probabilistic inference (Darwiche, 2003) . In the directed graph, there are vertices corresponding to axioms (these are sinks in the graph), \u2295 vertices corresponding to theorems, and \u2297 vertices corresponding to summands in the dynamic programming equations. Directed edges point from each node to the nodes it depends on; \u2295 vertices depend on \u2297 vertices, which depend on \u2295 and axiom vertices. Arithmetic circuits are amenable to automatic differentiation in the reverse mode (Griewank and Corliss, 1991) , commonly used in backpropagation algorithms. Importantly, this permits us to calculate the exact gradient of the approximate summation with respect to axiom values, following Eisner et al. (2005) . This is desirable when carrying out the optimization problems involved in parameter estimation. Another differentiation technique, implemented within the semiring, is given by Eisner (2002) .",
"cite_spans": [
{
"start": 165,
"end": 179,
"text": "Goodman (1999)",
"ref_id": "BIBREF10"
},
{
"start": 185,
"end": 209,
"text": "Klein and Manning (2001)",
"ref_id": "BIBREF16"
},
{
"start": 252,
"end": 272,
"text": "Eisner et al. (2005)",
"ref_id": "BIBREF6"
},
{
"start": 389,
"end": 390,
"text": "9",
"ref_id": null
},
{
"start": 674,
"end": 690,
"text": "(Darwiche, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 1125,
"end": 1153,
"text": "(Griewank and Corliss, 1991)",
"ref_id": "BIBREF11"
},
{
"start": 1331,
"end": 1351,
"text": "Eisner et al. (2005)",
"ref_id": "BIBREF6"
},
{
"start": 1530,
"end": 1543,
"text": "Eisner (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5"
},
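The reverse-mode idea on an arithmetic circuit can be illustrated with a minimal, hypothetical example (not tied to any grammar): a goal value a*b + a*c, with \u2297 vertices for the two products and one \u2295 vertex, and a reverse sweep that routes adjoints back to the axioms:

```python
# Tiny arithmetic circuit for a sum-product DP value, with reverse-mode
# differentiation (backpropagation). Hypothetical two-rule example:
# goal = a*b + a*c, axioms a, b, c.
def forward(a, b, c):
    t1 = a * b          # first product vertex
    t2 = a * c          # second product vertex
    goal = t1 + t2      # sum vertex
    return goal

def backward(a, b, c):
    # reverse sweep: the goal's adjoint is 1; a sum vertex copies adjoints
    # to its children, a product vertex multiplies by the other operand
    d_t1 = d_t2 = 1.0
    d_a = d_t1 * b + d_t2 * c   # a feeds both products
    d_b = d_t1 * a
    d_c = d_t2 * a
    return d_a, d_b, d_c

print(forward(2.0, 3.0, 4.0))    # 14.0
print(backward(2.0, 3.0, 4.0))   # (7.0, 2.0, 2.0)
```

One forward and one backward pass yield the gradient with respect to every axiom at once, which is the property exploited for parameter estimation.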
{
"text": "N i=1 v i \u2297 N j=1 w j = \uf8eb \uf8ed B\u2208P(S) b\u2208B w b0 c\u2208S\\B w c \uf8f6 \uf8f8 (13) + Res(exec(g v 1 , . . . exec(g v N ,w 1 \u2022 \u2022 \u2022 w N ) . . .)) , max-k(exec(g v 1 , . . . exec(g v N ,w 1 \u2022 \u2022 \u2022 w N ) . . .)), g 0 , 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5"
},
{
"text": "Cube pruning is based on the k-best algorithms of Huang and Chiang (2005) , which save time over generic semiring implementations through lazy computation in both the aggregation and combination operations. Their techniques are not as clearly applicable here, because our goal is to sum over all proofs instead of only finding a small subset of them. If computing non-local features is a computational bottleneck, they can be computed only for the O(k) proofs considered when choosing the best k as in cube pruning. Then, the computational requirements for approximate summing are nearly equivalent to cube pruning, but the approximation is less accurate.",
"cite_spans": [
{
"start": 50,
"end": 73,
"text": "Huang and Chiang (2005)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "5"
},
{
"text": "We now consider interesting special cases and variations of cube summing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semirings Old and New",
"sec_num": "6"
},
{
"text": "When restricted to local features, cube pruning and cube summing can be seen as proper semir-k-best proof (Goodman, 1999) k-best + residual Viterbi proof (Goodman, 1999) all proof (Goodman, 1999) Viterbi (Viterbi, 1967) ignore proof inside (Baum et al., 1970) i g n o r e r e s i d u a l ings. Cube pruning reduces to an implementation of the k-best semiring (Goodman, 1998) , and cube summing reduces to a novel semiring we call the k-best+residual semiring. Binary instantiations of \u2297 and \u2295 can be iteratively reapplied to give the equivalent formulations in Eqs. 12 and 13. We define 0 as 0, and 1 as 1, 1, . The \u2295 operator is easily shown to be commutative. That \u2295 is associative follows from associativity of max-k, shown by Goodman (1998) . Showing that \u2297 is associative and that \u2297 distributes over \u2295 are less straightforward; proof sketches are provided in Appendix A. The k-best+residual semiring generalizes many semirings previously introduced in the literature; see Fig. 2 .",
"cite_spans": [
{
"start": 106,
"end": 121,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 154,
"end": 169,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 180,
"end": 195,
"text": "(Goodman, 1999)",
"ref_id": "BIBREF10"
},
{
"start": 204,
"end": 219,
"text": "(Viterbi, 1967)",
"ref_id": "BIBREF31"
},
{
"start": 240,
"end": 259,
"text": "(Baum et al., 1970)",
"ref_id": "BIBREF0"
},
{
"start": 359,
"end": 374,
"text": "(Goodman, 1998)",
"ref_id": "BIBREF9"
},
{
"start": 730,
"end": 744,
"text": "Goodman (1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 977,
"end": 983,
"text": "Fig. 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "The k-best+residual Semiring",
"sec_num": "6.1"
},
{
"text": "Once we relax requirements about associativity and distributivity and permit aggregation and combination operators to operate on sets, several extensions to cube summing become possible. First, when computing approximate summations with non-local features, we may not always be interested in the best proofs for each item. Since the purpose of summing is often to calculate statistics under a model distribution, we may wish instead to sample from that distribution. We can replace the max-k function with a sample-k function that samples k proofs from the scored list in its argument, possibly using the scores or possibly uniformly at random. This breaks associativity of \u2295. We conjecture that this approach can be used to simulate particle filtering for structured models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variations",
"sec_num": "6.2"
},
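One way to realize the sample-k variant just described (a sketch, not the paper's implementation; here sampling proportionally to scores, without replacement):

```python
import random

def sample_k(proofs, k, seed=0):
    # sample-k in place of max-k: draw k proofs without replacement,
    # with probability proportional to score (one possible variant)
    rng = random.Random(seed)
    pool = list(proofs)
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(s for s, _ in pool)
        x = rng.uniform(0.0, total)
        acc = 0.0
        for i, (s, _) in enumerate(pool):
            acc += s
            if x <= acc:
                chosen.append(pool.pop(i))
                break
    return chosen

print(sample_k([(5.0, 'p'), (2.0, 'q'), (1.0, 'r')], k=2))
```

Substituting this for max-k keeps the residual bookkeeping unchanged but, as noted, breaks associativity of \u2295.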
{
"text": "Another variation is to vary k for different theorems. This might be used to simulate beam search, or to reserve computation for theorems closer to goal , which have more proofs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variations",
"sec_num": "6.2"
},
{
"text": "This paper has drawn a connection between cube pruning, a popular technique for approximately solving decoding problems, and the semiringweighted logic programming view of dynamic programming. We have introduced a generalization called cube summing, to be used for solving summing problems, and have argued that cube pruning and cube summing are both semirings that can be used generically, as long as the underlying probability models only include local features. With non-local features, cube pruning and cube summing can be used for approximate decoding and summing, respectively, and although they no longer correspond to semirings, generic algorithms can still be used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In showing that k-best+residual is a semiring, we will restrict our attention to the computation of the residuals. The computation over proof lists is identical to that performed in the k-best proof semiring, which was shown to be a semiring by Goodman (1998) . We sketch the proofs that \u2297 is associative and that \u2297 distributes over \u2295; associativity of \u2295 is straightforward.",
"cite_spans": [
{
"start": 245,
"end": 259,
"text": "Goodman (1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "For a proof list\u0101, \u0101 denotes the sum of proof scores, P i: a i ,A i \u2208\u0101 ai. Note that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Res(\u0101) + max-k(\u0101) = \u0101 (15) \u201a \u201a\u0101 b \u201a \u201a = \u0101 \u201a \u201ab \u201a \u201a",
"eq_num": "(16)"
}
],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "Associativity. Given three semiring values u, v, and w, we need to show that (u\u2297v)\u2297w = u\u2297(v\u2297w). After expanding the expressions for the residuals using Eq. 14, there are 10 terms on each side, five of which are identical and cancel out immediately. Three more cancel using Eq. 15, leaving:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "LHS = Res(\u016b v) w + Res(max-k(\u016b v) w) RHS = \u016b Res(v w) + Res(\u016b max-k(v w))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "If LHS = RHS, associativity holds. Using Eq. 15 again, we can rewrite the second term in LHS to obtain LHS = Res(\u016b v) w + max-k(\u016b v) w \u2212 max-k(max-k(\u016b v) w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "Using Eq. 16 and pulling out the common term w , we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "LHS =( Res(\u016b v) + max-k(\u016b v) ) w \u2212 max-k(max-k(\u016b v) w) = (\u016b v) w \u2212 max-k(max-k(\u016b v) w) = (\u016b v) w \u2212 max-k((\u016b v) w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "The resulting expression is intuitive: the residual of (u\u2297v)\u2297 w is the difference between the sum of all proof scores and the sum of the k-best. RHS can be transformed into this same expression with a similar line of reasoning (and using associativity of ). Therefore, LHS = RHS and \u2297 is associative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "Distributivity. To prove that \u2297 distributes over \u2295, we must show left-distributivity, i.e., that u\u2297(v\u2295w) = (u\u2297v)\u2295(u\u2297 w), and right-distributivity. We show left-distributivity here. As above, we expand the expressions, finding 8 terms on the LHS and 9 on the RHS. Six on each side cancel, leaving:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "LHS = Res(v \u222aw) \u016b + Res(\u016b max-k(v \u222aw)) RHS = Res(\u016b v) + Res(\u016b w) + Res(max-k(\u016b v) \u222a max-k(\u016b w))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "We can rewrite LHS as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "LHS = Res(v \u222aw) \u016b + \u016b max-k(v \u222aw) \u2212 max-k(\u016b max-k(v \u222aw)) = \u016b ( Res(v \u222aw) + max-k(v \u222aw) ) \u2212 max-k(\u016b max-k(v \u222aw)) = \u016b v \u222aw \u2212 max-k(\u016b (v \u222aw)) = \u016b v \u222aw \u2212 max-k((\u016b v) \u222a (\u016b w))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "where the last line follows because distributes over \u222a (Goodman, 1998) . We now work with the RHS:",
"cite_spans": [
{
"start": 55,
"end": 70,
"text": "(Goodman, 1998)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "RHS = Res(\u016b v) + Res(\u016b w) + Res(max-k(\u016b v) \u222a max-k(\u016b w)) = Res(\u016b v) + Res(\u016b w) + max-k(\u016b v) \u222a max-k(\u016b w) \u2212 max-k(max-k(\u016b v) \u222a max-k(\u016b w))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
{
"text": "Since max-k(\u016b v) and max-k(\u016b w) are disjoint (we assume no duplicates; i.e., two different theorems cannot have exactly the same proof), the third term becomes max-k(\u016b v) + max-k(\u016b w) and we have = \u016b v + \u016b w \u2212 max-k(max-k(\u016b v) \u222a max-k(\u016b w)) = \u016b v + \u016b w \u2212 max-k((\u016b v) \u222a (\u016b w)) = \u016b v \u222aw \u2212 max-k((\u016b v) \u222a (\u016b w)) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A k-best+residual is a Semiring",
"sec_num": null
},
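The associativity and distributivity claims above can also be spot-checked numerically (not a proof; our simplified (residual, k-best list) encoding of k-best+residual values, with distinct proofs throughout, as the appendix assumes):

```python
from itertools import product

K = 2

def norm(lst):
    return sum(s for s, _ in lst)

def trim(residual, items):
    # keep the K best, fold the rest into the residual
    items = sorted(items, reverse=True)
    return residual + norm(items[K:]), items[:K]

def plus(u, v):   # aggregation: union of proof lists, residuals add
    return trim(u[0] + v[0], u[1] + v[1])

def times(u, v):  # combination, Eq. 14
    cross = [(a * b, p + q) for (a, p), (b, q) in product(u[1], v[1])]
    return trim(u[0] * v[0] + u[0] * norm(v[1]) + v[0] * norm(u[1]), cross)

u = (1.0, [(5.0, 'a'), (2.0, 'b')])
v = (0.5, [(3.0, 'c'), (1.0, 'd')])
w = (0.25, [(4.0, 'e'), (2.0, 'f')])

print(times(times(u, v), w) == times(u, times(v, w)))          # True
print(times(u, plus(v, w)) == plus(times(u, v), times(u, w)))  # True
```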
{
"text": "When cycles are permitted, i.e., where the value of one variable depends on itself, infinite sums can be involved. We must ensure that these infinite sums are well defined under the semiring. So-called complete semirings satisfy additional conditions to handle infinite sums, but for simplicity we will restrict our attention to DPs that do not involve cycles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The theorem indexing scheme might be based on a topological ordering given by the proof structure, but is not important for our purposes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Distributivity of combination over aggregation fails for related reasons. We omit a full discussion due to space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The bottom-up agenda algorithm inEisner et al. (2005) might possibly be generalized so that associativity, distributivity, and binary operators are not required (John Blatz, p.c.).10 This data structure is not specific to any particular set of operations. We have also used it successfully with the inside semiring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank three anonymous EACL reviewers, John Blatz, Pedro Domingos, Jason Eisner, Joshua Goodman, and members of the ARK group for helpful comments and feedback that improved this paper. This research was supported by NSF IIS-0836431 and an IBM faculty award.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Baum",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Petrie",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Soules",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 1970,
"venue": "Annals of Mathematical Statistics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. E. Baum, T. Petrie, G. Soules, and N. Weiss. 1970. A maximization technique occurring in the statis- tical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, 41(1).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Probabilistic inference for machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Blunsom and M. Osborne. 2008. Probabilistic infer- ence for machine translation. In Proc. of EMNLP.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A discriminative latent variable model for statistical machine translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Blunsom",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Blunsom, T. Cohn, and M. Osborne. 2008. A dis- criminative latent variable model for statistical ma- chine translation. In Proc. of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Stacked sequential learning",
"authors": [
{
"first": "W",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Carvalho",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. W. Cohen and V. Carvalho. 2005. Stacked sequen- tial learning. In Proc. of IJCAI.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A differential approach to inference in Bayesian networks",
"authors": [
{
"first": "A",
"middle": [],
"last": "Darwiche",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of the ACM",
"volume": "50",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Darwiche. 2003. A differential approach to infer- ence in Bayesian networks. Journal of the ACM, 50(3).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Compiling Comp Ling: Practical weighted dynamic programming and the Dyna language",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Goldlust",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of HLT-EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner, E. Goldlust, and N. A. Smith. 2005. Com- piling Comp Ling: Practical weighted dynamic pro- gramming and the Dyna language. In Proc. of HLT- EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Parameter estimation for probabilistic finite-state transducers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Eisner. 2002. Parameter estimation for probabilistic finite-state transducers. In Proc. of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Finkel, T. Grenager, and C. D. Manning. 2005. Incorporating non-local information into informa- tion extraction systems by gibbs sampling. In Proc. of ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Parsing inside-out",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goodman. 1998. Parsing inside-out. Ph.D. thesis, Harvard University.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semiring parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "573--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Goodman. 1999. Semiring parsing. Computational Linguistics, 25(4):573-605.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic Differentiation of Algorithms",
"authors": [
{
"first": "A",
"middle": [],
"last": "Griewank",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Corliss",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Griewank and G. Corliss. 1991. Automatic Differ- entiation of Algorithms. SIAM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Better k-best parsing",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang and D. Chiang. 2005. Better k-best parsing. In Proc. of IWPT.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Forest rescoring: Faster decoding with integrated language models",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang and D. Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Forest reranking: Discriminative parsing with non-local features",
"authors": [
{
"first": "L",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proc. of ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An introduction to variational methods for graphical models",
"authors": [
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Saul",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. 1999. An introduction to variational methods for graphical models. Machine Learning, 37(2).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Parsing and hypergraphs",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of IWPT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. Manning. 2001. Parsing and hyper- graphs. In Proc. of IWPT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Hidden-variable models for discriminative reranking",
"authors": [
{
"first": "T",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Koo and M. Collins. 2005. Hidden-variable models for discriminative reranking. In Proc. of EMNLP.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Variational Bayesian grammar induction for natural language",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kurihara",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sato",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of ICGI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Kurihara and T. Sato. 2006. Variational Bayesian grammar induction for natural language. In Proc. of ICGI.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Con- ditional random fields: Probabilistic models for seg- menting and labeling sequence data. In Proc. of ICML.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Modeling the effects of memory on human online sentence processing with particle filters",
"authors": [
{
"first": "R",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Reali",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2008,
"venue": "Advances in NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Levy, F. Reali, and T. Griffiths. 2008. Modeling the effects of memory on human online sentence pro- cessing with particle filters. In Advances in NIPS.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The Harpy Speech Recognition System",
"authors": [
{
"first": "B",
"middle": [
"T"
],
"last": "Lowerre",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. T. Lowerre. 1976. The Harpy Speech Recognition System. Ph.D. thesis, Carnegie Mellon University.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ensemble learning for hidden Markov models",
"authors": [
{
"first": "D",
"middle": [
"J C"
],
"last": "MacKay",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. J. C. MacKay. 1997. Ensemble learning for hidden Markov models. Technical report, Cavendish Laboratory, Cambridge.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Stacking dependency parsers",
"authors": [
{
"first": "A",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. F. T. Martins, D. Das, N. A. Smith, and E. P. Xing. 2008. Stacking dependency parsers. In Proc. of EMNLP.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Online learning of approximate dependency parsing algorithms",
"authors": [
{
"first": "R",
"middle": [],
"last": "McDonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proc. of EACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Inside-outside reestimation from partially bracketed corpora",
"authors": [
{
"first": "F",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. C. N. Pereira and Y. Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proc. of ACL, pages 128-135.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A linear programming formulation for global inference in natural language tasks",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proc. of CoNLL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Principles and implementation of deductive parsing",
"authors": [
{
"first": "S",
"middle": [],
"last": "Shieber",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Logic Programming",
"volume": "24",
"issue": "1-2",
"pages": "3--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Shieber, Y. Schabes, and F. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24(1-2):3-36.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dependency parsing by belief propagation",
"authors": [
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. A. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proc. of EMNLP.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Computationally efficient M-estimation of log-linear structure models",
"authors": [
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "D",
"middle": [
"L"
],
"last": "Vail",
"suffix": ""
},
{
"first": "J",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. A. Smith, D. L. Vail, and J. D. Lafferty. 2007. Computationally efficient M-estimation of log-linear structure models. In Proc. of ACL.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Collective segmentation and labeling of distant entities in information extraction",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Sutton and A. McCallum. 2004. Collective segmentation and labeling of distant entities in information extraction. In Proc. of ICML Workshop on Statistical Relational Learning and Its Connections to Other Fields.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Error bounds for convolutional codes and an asymptotically optimal decoding algorithm",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Transactions on Information Theory",
"volume": "13",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. J. Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimal decoding algorithm. IEEE Transactions on Information Theory, 13(2).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "What is the Jeopardy model? A quasi-synchronous grammar for QA",
"authors": [
{
"first": "M",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "N",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitamura",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Wang, N. A. Smith, and T. Mitamura. 2007. What is the Jeopardy model? A quasi-synchronous grammar for QA. In Proc. of EMNLP-CoNLL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Combination operation for cube summing, where S = {1, 2, . . . , N } and P(S) is the power set of S excluding \u2205."
},
"FIGREF1": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Semirings generalized by k-best+residual."
}
}
}
}