{
"paper_id": "Q14-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:51.512155Z"
},
"title": "2-Slave Dual Decomposition for Generalized Higher Order CRFs",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas",
"location": {}
},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Dallas",
"location": {}
},
"email": "yangl@hlt.utdallas.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We show that the decoding problem in generalized Higher Order Conditional Random Fields (CRFs) can be decomposed into two parts: one is a tree labeling problem that can be solved in linear time using dynamic programming; the other is a supermodular quadratic pseudo-Boolean maximization problem, which can be solved in cubic time using a minimum cut algorithm. We use dual decomposition to force their agreement. Experimental results on Twitter named entity recognition and sentence dependency tagging tasks show that our method outperforms spanning tree based dual decomposition.",
"pdf_parse": {
"paper_id": "Q14-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We show that the decoding problem in generalized Higher Order Conditional Random Fields (CRFs) can be decomposed into two parts: one is a tree labeling problem that can be solved in linear time using dynamic programming; the other is a supermodular quadratic pseudo-Boolean maximization problem, which can be solved in cubic time using a minimum cut algorithm. We use dual decomposition to force their agreement. Experimental results on Twitter named entity recognition and sentence dependency tagging tasks show that our method outperforms spanning tree based dual decomposition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Conditional Random Fields (Lafferty et al., 2001) (CRFs) are popular models for many NLP tasks. In particular, linear chain CRFs exploit local structure information for sequence labeling tasks, such as part-of-speech (POS) tagging, named entity recognition (NER), and shallow parsing. Recent studies have shown that the predictive power of CRFs can be strengthened by breaking the locality assumption. They either add long distance dependencies and patterns to linear chains for improved sequence labeling (Galley, 2006; Finkel et al., 2005; Kazama and Torisawa, 2007), or directly use the 4-connected neighborhood lattice (Ding et al., 2008). The resulting non-local models generally suffer from exponential time complexity of inference, except in some special cases (Sarawagi and Cohen, 2004; Takhanov and Kolmogorov, 2013; Kolmogorov and Zabih, 2004).",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF18"
},
{
"start": 511,
"end": 525,
"text": "(Galley, 2006;",
"ref_id": "BIBREF9"
},
{
"start": 526,
"end": 546,
"text": "Finkel et al., 2005;",
"ref_id": "BIBREF7"
},
{
"start": 547,
"end": 573,
"text": "Kazama and Torisawa, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 629,
"end": 648,
"text": "(Ding et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 771,
"end": 797,
"text": "(Sarawagi and Cohen, 2004;",
"ref_id": "BIBREF31"
},
{
"start": 798,
"end": 828,
"text": "Takhanov and Kolmogorov, 2013;",
"ref_id": "BIBREF33"
},
{
"start": 829,
"end": 856,
"text": "Kolmogorov and Zabih, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Approximate decoding algorithms have been proposed in the past decade, such as reranking (Collins, 2002b), loopy belief propagation (Sutton and Mccallum, 2006), and tree-reweighted belief propagation (Kolmogorov, 2006). In this paper, we focus on dual decomposition (DD), which has attracted much attention recently due to its simplicity and effectiveness (Rush and Collins, 2012). In short, it decomposes the decoding problem into several sub-problems. For each sub-problem, an efficient decoding algorithm is deployed as a slave solver. Finally, a simple method forces agreement among the different slaves. A popular choice is the sub-gradient algorithm. Martins et al. (2011b) showed that the success of the sub-gradient algorithm is strongly tied to the ability of finding a good decomposition, i.e., one involving few overlapping slaves. However, for generalized higher order graphical models, a lightweight decomposition is not at hand and many overlapping slaves may be involved. Martins et al. (2011b) showed that the sub-gradient algorithm exhibits extremely slow convergence in such cases, and they proposed the alternating directions method (DD-ADMM) to tackle this problem.",
"cite_spans": [
{
"start": 89,
"end": 105,
"text": "(Collins, 2002b)",
"ref_id": "BIBREF5"
},
{
"start": 133,
"end": 160,
"text": "(Sutton and Mccallum, 2006)",
"ref_id": "BIBREF32"
},
{
"start": 198,
"end": 216,
"text": "(Kolmogorov, 2006)",
"ref_id": "BIBREF15"
},
{
"start": 355,
"end": 379,
"text": "(Rush and Collins, 2012)",
"ref_id": "BIBREF28"
},
{
"start": 651,
"end": 673,
"text": "Martins et al. (2011b)",
"ref_id": "BIBREF20"
},
{
"start": 981,
"end": 1003,
"text": "Martins et al. (2011b)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a 2-slave dual decomposition approach for efficient decoding in higher order CRFs. One slave is a tree labeling model that can be solved in linear time using dynamic programming. The other is a supermodular quadratic pseudo-Boolean maximization problem, which can be solved in cubic time via minimum cut. Experimental results on Twitter NER and sentence dependency tagging tasks demonstrate the effectiveness of our technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given an undirected graph G = (V, E) with N vertices, let x = x 1 , x 2 , . . . , x N denote the observations of the vertices. Each vertex v must be assigned one state (or label) s \u2208 S from the state set S. The assignment of the graph can be represented by a binary matrix Y N \u00d7|S| , where |S| is the cardinality of S, and the element Y v,s indicates whether vertex v is assigned state s. In the rest of the paper, we write Y v[s] instead, and use v[s] to denote vertex v with state s. The constraint",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 s Y v[s] = 1",
"eq_num": "(1)"
}
],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "is required so that each vertex has exactly one state.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "In this paper, we use Y to denote the space of state assignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "The decoding problem is to search for the optimal assignment that maximizes the scoring function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "Y * = arg max Y \u2208Y(x) \u03d5(x, Y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "where \u03d5(x, Y ) is a given scoring function. As x is constant in this maximization problem, we omit x for simplicity in the remainder of the paper. The decoding problem becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max Y \u2208Y \u03d5(Y ).",
"eq_num": "(2)"
}
],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "The scoring function \u03d5(Y ) is usually decomposed into small parts",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "\u03d5(Y ) = \u2211 c \u2211 s\u2208c[\u2022] \u03d5 c[s] \u220f v[s]\u2208c[s] Y v[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "where c is a subset of vertices, called a factor, and c[s] denotes factor c with state assignment s; for example, uv[st] is a pattern of edge (u, v) as shown in Figure 1 . Note that our definition extends the work of Takhanov and Kolmogorov, where patterns are restricted to the state sequences of consecutive vertices (Takhanov and Kolmogorov, 2013). We use",
"cite_spans": [
{
"start": 250,
"end": 281,
"text": "(Takhanov and Kolmogorov, 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "Y c[s] = \u220f v[s]\u2208c[s] Y v[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "to denote whether pattern c[s] appears in the assignment. Then the scoring function becomes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03d5(Y ) = \u2211 c \u2211 s\u2208c[\u2022] \u03d5 c[s] Y c[s] .",
"eq_num": "(3)"
}
],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "Many existing CRFs can be represented using Eq (3). For example, the popular linear chain CRFs consider two types of patterns: vertices and edges connecting adjacent vertices, resulting in the following scoring function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "\u03d5(Y ) = \u2211 v \u2211 s \u03d5 v[s] Y v[s] + \u2211 v \u2211 st\u2208S 2 \u03d5 v(v+1)[st] Y v(v+1)[st]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "The optimal Y can be found in linear time using the Viterbi algorithm. Another example is the skip-chain CRFs, which consider the interactions between similar vertices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
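[Editor's illustration, not part of the parsed paper.] The linear-time Viterbi decoding mentioned above can be sketched as follows; `node` and `trans` are hypothetical score tables standing in for \u03d5 v[s] and \u03d5 v(v+1)[st] (here shared across positions):

```python
def viterbi(node, trans):
    """Exact decoding for a linear-chain scoring function.

    node:  list of dicts, node[v][s] = score of state s at position v
    trans: dict, trans[(s, t)] = score of the transition s -> t
    """
    states = list(node[0])
    dp = [dict(node[0])]          # dp[v][s] = best score of a prefix ending in s
    back = []                     # back-pointers for path recovery
    for v in range(1, len(node)):
        cur, bp = {}, {}
        for t in states:
            best_s = max(states, key=lambda s: dp[-1][s] + trans[(s, t)])
            cur[t] = dp[-1][best_s] + trans[(best_s, t)] + node[v][t]
            bp[t] = best_s
        dp.append(cur)
        back.append(bp)
    last = max(states, key=lambda s: dp[-1][s])
    path = [last]
    for bp in reversed(back):
        path.append(bp[path[-1]])
    path.reverse()
    return path, dp[-1][last]
```

The recursion visits each position once, so the runtime is O(N |S|^2), linear in the sequence length.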
{
"text": "\u03d5(Y ) = \u2211 v \u2211 s \u03d5 v[s] Y v[s] + \u2211 v \u2211 st\u2208S 2 \u03d5 v(v+1)[st] Y v(v+1)[st] + \u2211 u,v are similar \u2211 s\u2208S \u03d5 uv[ss] Y uv[ss] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "With positive \u03d5 uv[ss] , the model encourages similar vertices u and v to have identical state s, and thus it yields a more consistent labeling result compared with linear chain CRFs. Empirically, the use of complex patterns achieves better performance but suffers from high computational complexity of inference, which is generally NP-hard. Hence an efficient approximate inference algorithm is required to balance the trade-off.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background 2.1 Generalized Higher Order CRFs",
"sec_num": "2"
},
{
"text": "Dual decomposition is a popular approach due to its simplicity and effectiveness, and has been successfully applied to many tasks such as machine translation, cross sentential POS tagging, joint POS tagging and parsing. Briefly, dual decomposition attempts to solve problems of the following form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "max Y M \u2211 i=1 \u03d5 i (Y )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "The objective function is the sum of several small components that are tractable in isolation but whose combination is not. These components are called slaves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "Rather than solving the problem directly, dual decomposition considers the equivalent problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "max Y,Z 1 ...Z M M \u2211 i=1 \u03d5 i (Z i ) s.t. Z i = Y \u2200i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "Using Lagrangian relaxation to eliminate the constraint, we get",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03bb max Y,Z 1 ...Z M M \u2211 i=1 \u03d5 i (Z i ) + \u2211 i \u03bb T i (Y \u2212 Z i )",
"eq_num": "(4)"
}
],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "which provides an upper bound on the original problem. \u03bb is the Lagrange multiplier, which is typically optimized via sub-gradient algorithms. Martins et al. (2011b) showed that the success of sub-gradient algorithms is strongly tied to the ability of finding a good decomposition, i.e., one involving few slaves. Finding a concise decomposition is usually task dependent. For example, Koo et al. (2010) introduced dual decomposition for parsing with non-projective head automata. They used only two slaves: one is the arc-factored model, and the other is a set of head automata, which model adjacent siblings and can be solved in linear time using dynamic programming.",
"cite_spans": [
{
"start": 144,
"end": 166,
"text": "Martins et al. (2011b)",
"ref_id": "BIBREF20"
},
{
"start": 387,
"end": 404,
"text": "Koo et al. (2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
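[Editor's illustration, not part of the parsed paper.] Eq (4) with two slaves can be made concrete with a toy sub-gradient loop; the separable score vectors `a` and `b` are hypothetical stand-ins for real slave decoders:

```python
# Toy sub-gradient dual decomposition with two slaves. The slaves here are
# trivially separable linear scores a.y and b.z over y, z in {0,1}^n, so each
# slave is solved exactly per coordinate; lambda is updated until agreement.
def solve_slave(scores, lam, sign):
    # argmax over y in {0,1}^n of (scores + sign * lam) . y
    return [1 if s + sign * l > 0 else 0 for s, l in zip(scores, lam)]

def dual_decompose(a, b, iters=50):
    lam = [0.0] * len(a)
    y = z = None
    for t in range(iters):
        y = solve_slave(a, lam, +1)   # slave 1: max a.y + lam.y
        z = solve_slave(b, lam, -1)   # slave 2: max b.z - lam.z
        if y == z:                    # agreement certifies the dual optimum
            break
        step = 1.0 / (t + 1)          # diminishing step size
        lam = [l - step * (yi - zi) for l, yi, zi in zip(lam, y, z)]
    return y, z
```

With `a = [3.0, -2.0, 0.5]` and `b = [-1.0, 1.0, 0.2]`, the slaves agree after three iterations on the joint maximizer of (a + b) . y.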
{
"text": "Dual decomposition is especially efficient for joint learning tasks because a concise decomposition can be derived naturally, with each slave solving one subtask. For example, prior work used two slaves for the integrated phrase-structure parsing and trigram POS tagging task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "However, for generalized higher order CRFs, a lightweight decomposition may not be at hand. Martins et al. (2011a) showed that sub-gradient algorithms exhibit extremely slow convergence when handling many slaves. For fast convergence, they employed alternating directions dual decomposition (AD3), which relaxes the agreement constraint via augmented Lagrangian relaxation: an additional quadratic penalty term is added to the Lagrangian (Eq (4)). Similarly, Jojic et al. (2010) added a strongly concave term to the Lagrangian to make it differentiable, resulting in fast convergence.",
"cite_spans": [
{
"start": 92,
"end": 114,
"text": "Martins et al. (2011a)",
"ref_id": "BIBREF19"
},
{
"start": 476,
"end": 495,
"text": "Jojic et al. (2010)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "The work most closely related to ours is that of Komodakis (2011), who used dual decomposition for decoding general higher order CRFs. Komodakis achieved great empirical success even with the naive decomposition where each slave processes a single higher order factor. This result demonstrates the effectiveness of the dual decomposition framework. Our work improves on Komodakis's approach by using a concise decomposition with only two slaves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Decomposition",
"sec_num": "2.2"
},
{
"text": "One slave in our approach is a graph representable pseudo-Boolean maximization problem, which can be reduced to a supermodular quadratic pseudo-Boolean maximization problem and solved efficiently using an algorithm for finding a minimal cut.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "A pseudo-Boolean function (PBF) (Boros and Hammer, 2002 ) is a multilinear function of binary variables, that is",
"cite_spans": [
{
"start": 32,
"end": 55,
"text": "(Boros and Hammer, 2002",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "f (x) = \u2211 i a i x i + \u2211 i<j a ij x i x j + \u2211 i<j<k a ijk x i x j x k + . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "x i \u2208 {0, 1}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "Maximizing a PBF is NP-hard in general; the maximum cut problem is one example (Boros and Hammer, 1991). A pseudo-Boolean function is said to be supermodular iff",
"cite_spans": [
{
"start": 69,
"end": 93,
"text": "(Boros and Hammer, 1991)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "f (x) + f (y) \u2264 f (x \u2227 y) + f (x \u2228 y)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "where x \u2227 y and x \u2228 y are the element-wise AND and OR of the two vectors, respectively. This is an important concept, because a supermodular pseudo-Boolean function (SPBF) can be maximized in O(n^6) time (Orlin, 2009). A necessary and sufficient condition for a function to be an SPBF is that all of its second order derivatives are nonnegative (Nemhauser et al., 1978), i.e., for all i < j,",
"cite_spans": [
{
"start": 218,
"end": 231,
"text": "(Orlin, 2009)",
"ref_id": "BIBREF22"
},
{
"start": 354,
"end": 378,
"text": "(Nemhauser et al., 1978)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "\u2202\u00b2f / \u2202x i \u2202x j \u2265 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "For example, a quadratic PBF is supermodular if the coefficients of all its quadratic terms are non-negative. Though the general supermodular maximization algorithm can be used for any SPBF, the special features of some specific problems allow more efficient algorithms to be used. For example, it is well known that the supermodular quadratic pseudo-Boolean maximization problem can be solved in cubic time using min-cut (Billionnet and Minoux, 1985; Kolmogorov and Zabih, 2004).",
"cite_spans": [
{
"start": 417,
"end": 446,
"text": "(Billionnet and Minoux, 1985;",
"ref_id": "BIBREF1"
},
{
"start": 447,
"end": 474,
"text": "Kolmogorov and Zabih, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
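[Editor's illustration, not part of the parsed paper.] The min-cut reduction can be sketched end to end. This is my own construction following the standard graph-cut recipe: to maximize a supermodular quadratic PBF, minimize its negation by building an s-t network (x_i = 1 iff node i falls on the sink side of the minimum cut) and running Edmonds-Karp max-flow. All function names are hypothetical:

```python
from collections import defaultdict, deque

def min_cut(cap, s, t):
    """Edmonds-Karp max-flow on a capacity dict {(u, v): c}; returns
    (cut value, set of nodes on the source side of a minimum s-t cut)."""
    res = defaultdict(float)          # residual capacities
    adj = defaultdict(set)
    for (u, v), c in cap.items():
        res[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:  # BFS for an augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(res[e] for e in path)  # bottleneck capacity
        for (u, v) in path:
            res[(u, v)] -= f
            res[(v, u)] += f
    side = {s}                         # residual-reachable = source side
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in side and res[(u, v)] > 0:
                side.add(v)
                q.append(v)
    value = sum(c for (u, v), c in cap.items() if u in side and v not in side)
    return value, side

def maximize_supermodular_qpbf(a_lin, a_quad):
    """Maximize sum_i a_lin[i] x_i + sum_{(i,j)} a_quad[i,j] x_i x_j over
    x in {0,1}^n, with all a_quad >= 0, via a minimum s-t cut."""
    theta = [-a for a in a_lin]       # minimize E(x) = -f(x)
    cap = defaultdict(float)
    for (i, j), a in a_quad.items():
        # -a*x_i*x_j = -a*x_j + a*(1 - x_i)*x_j : arc i -> j pays the cross term
        theta[j] += -a
        cap[(i, j)] += a
    const = 0.0
    for i, th in enumerate(theta):
        if th >= 0:
            cap[('s', i)] += th       # pay th when x_i = 1 (sink side)
        else:
            const += th               # th*x_i = th + |th|*(1 - x_i)
            cap[(i, 't')] += -th      # pay |th| when x_i = 0 (source side)
    value, side = min_cut(cap, 's', 't')
    x = [0 if i in side else 1 for i in range(len(a_lin))]
    return -(const + value), x
```

Every cut of this network equals a constant plus E(x), so the minimum cut recovers the exact maximizer; a brute-force check over small instances confirms the construction.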
{
"text": "In fact, a subset of SPBFs can be maximized using a min-cut algorithm. A pseudo-Boolean function f (x) is called graph representable or graph expressible if there exists a graph G = (V, E) with terminals s and t and a subset of vertices",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "V 0 = V \u2212 {s, t} = {v 1 , . . . , v n , u 1 , . . . , u m } such that, for any configuration x 1 , . . . ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "x n , the value of the function f (x) is equal to a constant plus the cost of the minimum s-t cut among all cuts, in which v i is connected with s if x i = 0 and connected with t if x i = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "Our definition extends the work of Kolmogorov and Zabih (2004) that focused on quadratic and cubic functions. Vertices u 1 , . . . u m correspond to the extra binary variables that are introduced to reduce the graph representable PBFs to equivalent quadratic forms. For example, the positive-negative PBFs where all terms of degree 2 or more have positive coefficients are graph representable, and each non-linear term requires one extra binary variable to obtain the equivalent quadratic form (Rhys, 1970) .",
"cite_spans": [
{
"start": 35,
"end": 62,
"text": "Kolmogorov and Zabih (2004)",
"ref_id": "BIBREF14"
},
{
"start": 494,
"end": 506,
"text": "(Rhys, 1970)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Graph Representable Pseudo-Boolean Optimization",
"sec_num": "2.3"
},
{
"text": "We decompose the decoding problem, i.e., the maximization of Eq (3), into two parts: a tree labeling problem and a PBF maximization problem. We show that the PBF can be made graph representable by reparameterizing the scoring function in Eq (3). We then reduce these pseudo-Boolean functions to quadratic forms based on the recent work of \u017divn\u00fd and Jeavons (2010), and finally solve the slave problem via graph cuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tree-Cut Decomposition for Generalized Higher Order CRFs",
"sec_num": "3"
},
{
"text": "We first describe our idea for a simple case, the fully connected pairwise CRFs (Kr\u00e4henb\u00fchl and Koltun, 2011) , which are generalizations of linearchain CRFs and skip-chain CRFs. Formally, the decoding problem in fully connected pairwise CRFs can be formulated as follows:",
"cite_spans": [
{
"start": 80,
"end": 109,
"text": "(Kr\u00e4henb\u00fchl and Koltun, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "max Y \u2211 v \u2211 s \u03d5 v[s] Y v[s] + \u2211 u,v \u2211 st\u2208S 2 \u03d5 uv[st] Y uv[st] s.t. Y \u2208 Y",
"eq_num": "(5)"
}
],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "Note that for any edge (u, v), adding a constant \u03c8 uv to all of its related patterns will not change the optimal solution of the problem. In other words, the optimal Y for the following problem does not depend on \u03c8 uv :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "max Y \u2211 v \u2211 s \u03d5 v[s] Y v[s] + \u2211 u,v \u2211 st\u2208S 2 (\u03c8 uv + \u03d5 uv[st] )Y uv[st] s.t. Y \u2208 Y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "The reparameterization preserves the optimal solution of the problem and, as we will show later, plays an important role in making the problem graph representable. By introducing a new variable Z = Y for the quadratic terms and relaxing the constraint Z = Y using Lagrangian relaxation, we get the relaxed problem",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "min \u03bb max Y,Z \u2211 v \u2211 s \u03d5 v[s] Y v[s] + \u2211 u,v \u2211 st\u2208S 2 (\u03c8 uv + \u03d5 uv[st] )Z uv[st] + \u2211 v \u2211 s \u03bb v[s] ( Z v[s] \u2212 Y v[s] ) s.t. Y \u2208 Y Z v[s] \u2208 {0, 1}, \u2200v, s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "We split the inner max over Y, Z into two subproblems, and a minimizing \u03bb is found using the sub-gradient descent algorithm, which repeatedly finds a maximizing assignment for each subproblem individually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "Let f \u03bb (Y ) = \u2211 v,s \u03d5 v[s] Y v[s] \u2212 \u2211 v,s \u03bb v[s] Y v[s] g \u03bb (Z) = \u2211 u,v \u2211 st\u2208S 2 (\u03c8 uv + \u03d5 uv[st] )Z uv[st] + \u2211 v,s \u03bb v[s] Z v[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "The two subproblems are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "max Y f \u03bb (Y ) s.t. Y \u2208 Y and max Z g \u03bb (Z) Z v[s] \u2208 {0, 1}, \u2200v, s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "The first subproblem can be solved in linear time since all vertices are independent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "The second problem is a binary quadratic programming problem. As discussed in Section 2.3, g \u03bb (Z) can be maximized using min-cut if the coefficients of the quadratic terms are non-negative, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "\u03c8 uv + \u03d5 uv[st] \u2265 0, \u2200u, v, s, t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "Hence, we can set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "\u03c8 uv = \u2212 min st\u2208uv[\u2022] {\u03d5 uv[st] }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "to guarantee the non-negativity. This supermodular binary quadratic programming problem can be solved via the push-relabel algorithm (Goldberg, 2008) in O((|S|N)^3) time.",
"cite_spans": [
{
"start": 133,
"end": 149,
"text": "(Goldberg, 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
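[Editor's illustration, not part of the parsed paper.] A tiny numeric check of this reparameterization, with hypothetical weights: shifting all patterns of an edge by \u03c8 uv = \u2212min st \u03d5 uv[st] makes every quadratic coefficient non-negative while leaving the argmax unchanged, because every valid assignment activates exactly one pattern per edge:

```python
import itertools

# Hypothetical node/edge scores for a single edge (u, v) with states {A, B}.
phi_node = {'u': {'A': 0.2, 'B': 0.1}, 'v': {'A': 0.0, 'B': 0.4}}
phi_edge = {('A', 'A'): -2.0, ('A', 'B'): 1.0, ('B', 'A'): 0.5, ('B', 'B'): -0.3}

def score(su, sv, edge):
    return phi_node['u'][su] + phi_node['v'][sv] + edge[(su, sv)]

psi = -min(phi_edge.values())                 # psi_uv = 2.0
shifted = {k: w + psi for k, w in phi_edge.items()}
assert all(w >= 0 for w in shifted.values())  # coefficients now non-negative

pairs = list(itertools.product('AB', repeat=2))
best_orig = max(pairs, key=lambda p: score(p[0], p[1], phi_edge))
best_shift = max(pairs, key=lambda p: score(p[0], p[1], shifted))
assert best_orig == best_shift                # argmax unchanged by the shift
```

The shift adds the same constant \u03c8 uv to the score of every joint assignment, so the ranking of assignments, and hence the decoder output, is untouched.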
{
"text": "Though Z may not satisfy the constraint Z \u2208 Y after sub-gradient descent based optimization, Y always satisfies Y \u2208 Y; hence we can use Y as the final solution if Z and Y disagree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully Connected Pairwise CRFs",
"sec_num": "3.1"
},
{
"text": "Now we consider the general case, maximizing Eq (3). As in the pairwise case, we use two slaves. One is a set of independent vertices, and the other is a pseudo-Boolean optimization problem. That is, we redefine g \u03bb (Z) as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "g \u03bb (Z) = \u2211 c \u2211 s\u2208c[\u2022] ( \u03c8 c + \u03d5 c[s] ) Z c[s] + \u2211 v,s \u03bb v[s] Z v[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "A sufficient condition for g \u03bb (Z) to be graph representable is that coefficients of all non-linear terms are non-negative (Freedman and Drineas, 2005) . Hence, we can set",
"cite_spans": [
{
"start": 123,
"end": 151,
"text": "(Freedman and Drineas, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "\u03c8 c = \u2212 min s\u2208c[\u2022] {\u03d5 c[s] }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "to guarantee the non-negativity. In real applications, higher order patterns are sparse, i.e., |{s \u2208 c[\u2022] : \u03d5 c[s] \u2260 0}| \u226a |S|^{|c|} (Qian et al., 2009; Ye et al., 2009). Hence we could skip the patterns with zero weights (\u03d5 c[s] = 0) when calculating",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "(Qian et al., 2009;",
"ref_id": "BIBREF24"
},
{
"start": 122,
"end": 137,
"text": "Ye et al., 2009",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "\u2211 s\u2208c[\u2022] \u03d5 c[s] Z c[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "for fast inference. However, the reparameterization described above may introduce many non-zero terms which destroy the sparsity. For example, in the NER task, a binary feature is defined as true if a word subsequence matches a location name in a gazetteer. Suppose c =Little York village is such a word subsequence, then among |S| 3 possible assignments of c, only the one that labels c =Little York village as a location name has non-zero weight. However, the reparameterization may add \u03c8 c to the other |S| 3 \u2212 1 assignments, yielding many new patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "Therefore, we use another reparameterization strategy that exploits the sparsity for efficient decomposition. We only reparameterize the weights of edges, i.e., quadratic terms. Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "g \u03bb (Z) = \u2211 c |c|=2 \u2211 s\u2208c[\u2022] ( \u03c8 c + \u03d5 c[s] ) Z c[s] + \u2211 c |c|\u22653 \u2211 s\u2208c[\u2022] \u03d5 c[s] Z c[s] + \u2211 v,s \u03bb v[s] Z v[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "The optimal solution is unchanged for any \u03c8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "In Appendix A, we show that by setting a sufficiently large \u03c8, g \u03bb (Z) is graph representable. This reparameterization method requires at most N^2 |S|^2 new patterns \u03c8 c,|c|=2 to make g \u03bb (Z) graph representable. It preserves the sparsity of higher order patterns, and is hence more efficient than the naive approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Higher Order CRFs",
"sec_num": "3.2"
},
{
"text": "In some cases, the graph is built by adding sparse global patterns to local models such as trees, resulting in nearly tree-structured CRFs. For example, Sutton and Mccallum (2006) used skip-chain CRFs for NER, where skip-edges connecting identical words were added to linear chain CRFs. Since the skip-edges are sparse, the resulting graphical models are nearly linear chains. To handle the edges in local models efficiently, we reformulate the decomposition. Let T be a spanning tree of the graph. If edge (u, v) \u2208 T, we put its related patterns into the first slave; otherwise we put them into the second slave.",
"cite_spans": [
{
"start": 150,
"end": 176,
"text": "Sutton and Mccallum (2006)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "For clarity, we formulate the tree-cut decomposition for generalized higher order CRFs. The first slave involves the patterns covered by the spanning tree T , and its scoring function is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "f \u03bb (Y ) = \u2211 v \u2211 s \u03d5 v[s] Y v[s] \u2212 \u2211 v,s \u03bb v[s] Y v[s] + \u2211 c\u2208T |c|=2 \u2211 s\u2208c[\u2022] \uf8eb \uf8ec \uf8ec \uf8ed \u03c8 c + \u03d5 c[s] + \u2211 c \u2032 [s \u2032 ]\u2287c[s] |c \u2032 |\u22653,\u03d5 c \u2032 [s \u2032 ] <0 \u03d5 c \u2032 [s \u2032 ] \uf8f6 \uf8f7 \uf8f7 \uf8f8 Y c[s] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "The second slave involves the rest patterns. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "Z v[s] .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "The scoring function of the second slave is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "h(Z, u) = h 1 (Z) + h 2 (Z, u) + h 3 (Z, u) + h 4 (Z, u)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "where h 1 involves the edges that are not in T , h 2 involves positive terms of degree 3 or more. h 3 involves negative cubic terms, h 4 involves negative terms of degree 4 or more.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "h 1 (Z) = \u2211 v,s \u03bb v[s] Z v[s] + \u2211 c\u0338 \u2208T |c|=2 \u2211 s\u2208c[\u2022] \uf8eb \uf8ec \uf8ec \uf8ed \u03c8 c + \u03d5 c[s] + \u2211 c \u2032 [s \u2032 ]\u2287c[s] |c \u2032 |\u22653,\u03d5 c \u2032 [s \u2032 ] <0 \u03d5 c \u2032 [s \u2032 ] \uf8f6 \uf8f7 \uf8f7 \uf8f8 Z c[s] h 2 (Z, u) = \u2211 c |c|\u22653 \u2211 s\u2208c[\u2022] \u03d5 c[s] \u22650 \u03d5 c[s] ( Z c[s] \u2212 |c| + 1 ) u c[s] h 3 (Z, u) = \u2211 c |c|=3 \u2211 s\u2208c[\u2022] \u03d5 c[s] <0 \u03d5 c[s] u c[s] ( Z c[s] \u2212 1 ) h 4 (Z, u) = \u2211 c |c|\u22654 \u2211 s\u2208c[\u2022] \u03d5 c[s] <0 |\u03d5 c[s] | ( u 0 c[s] (2Z c[s] \u2212 3) + |c|\u22124 \u2211 j=1 u j c[s] (Z c[s] \u2212 j \u2212 2) \uf8f6 \uf8f8 . Term Number of Variables h 1 (Z) N 2 |S| 2 h 2 (Z, u) \u2211 c |c|\u22653 \u2211 s\u2208c[\u2022] \u03d5 c[s] \u22650 (1 + |c|) h 3 (Z, u) \u2211 c |c|=3 \u2211 s\u2208c[\u2022] \u03d5 c[s] <0 (1 + |c|) h 4 (Z, u) \u2211 c |c|\u22654 \u2211 s\u2208c[\u2022] \u03d5 c[s] <0 (2|c| \u2212 3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "The relaxed problem for generalized higher order CRFs, i.e., Problem (2) is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min \u03bb max Y,Z,u f \u03bb (Y ) + h(Z, u) s.t. Y \u2208 Y",
"eq_num": "(6)"
}
],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
{
"text": "Z, u are binary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree-Cut Decomposition",
"sec_num": "3.3"
},
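Problem (6) is solved by subgradient updates on λ that push the two slaves toward agreement. As an illustration only (a toy with made-up unary scores, not the paper's actual slaves), the mechanic looks like this: each slave maximizes its reparameterized score independently, and λ moves against the disagreement z − y.

```python
# Toy 2-slave dual decomposition (illustrative only; all scores are made up).
# Slave 1 maximizes a[i]*y[i] - lam[i]*y[i]; slave 2 maximizes b[i]*z[i] + lam[i]*z[i].
a = [2.0, -2.0]        # slave-1 unary scores (hypothetical)
b = [-0.5, 1.0]        # slave-2 unary scores (hypothetical)
lam = [0.0, 0.0]       # dual variables
eta = 0.6              # fixed step size, chosen for this toy

for t in range(100):
    y = [1 if a[i] - lam[i] > 0 else 0 for i in range(2)]   # slave 1 argmax
    z = [1 if b[i] + lam[i] > 0 else 0 for i in range(2)]   # slave 2 argmax
    if y == z:          # agreement is a certificate of optimality
        break
    # the subgradient of the dual with respect to lambda is (z - y)
    lam = [lam[i] - eta * (z[i] - y[i]) for i in range(2)]

print(y, z)  # both agree on the joint argmax of a[i] + b[i]
```

In real use the two argmaxes are the tree labeling (dynamic programming) and the minimum-cut problem; the update rule is the same.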
{
"text": "In this section, we theoretically analyze the time complexity for each iteration in dual decomposition. Running time for max Y \u2208Y f \u03bb (Y ) is linear in the size of the graph, i.e., N \u00d7 |S| 2 . Running time for max Z,u h(Z, u) is cubic in the number of variables, which is the sum of variables in function h 1 to h 4 . h 1 (Z) has at most N 2 |S| 2 variables; each pattern in h 2 (Z, u) requires one extra variable, hence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity Analysis",
"sec_num": "3.4"
},
{
"text": "h 2 (Z, u) has \u2211 c |c|\u22653 \u2211 s\u2208c[\u2022] \u03d5 c[s] \u22650",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity Analysis",
"sec_num": "3.4"
},
{
"text": "(1 + |c|) variables. Similarly, we could count the number of variables in h 3 and h 4 , as shown in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 1",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Complexity Analysis",
"sec_num": "3.4"
},
{
"text": "In summary, each pattern in h(Z, u) requires at most 2|c| \u2212 2 variables, so h(Z, u) has no more than \u2211 c \u2211 s\u2208c [\u2022] (2|c| \u2212 2) variables. Finally, the time complexity for each iteration in dual decomposition is",
"cite_spans": [
{
"start": 111,
"end": 114,
"text": "[\u2022]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity Analysis",
"sec_num": "3.4"
},
{
"text": "O \uf8eb \uf8ec \uf8edN |S| 2 + \uf8eb \uf8ed \u2211 c \u2211 s\u2208c[\u2022] (2|c| \u2212 2) \uf8f6 \uf8f8 3 \uf8f6 \uf8f7 \uf8f8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity Analysis",
"sec_num": "3.4"
},
{
"text": "which is cubic in the total length of patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Complexity Analysis",
"sec_num": "3.4"
},
{
"text": "Our first experiment is named entity recognition in tweets. Recently, information extraction on Twitter or Facebook data is attracting much attention (Ritter et al., 2011) . Different from traditional information extraction for news articles, messages posted on these social media websites are short and noisy, making the task more challenging. In this paper, we use generalized higher order CRFs for Twitter NER with discriminative training, and compare our 2-slave dual decomposition approach with spanning tree based dual decomposition approach and other decoding algorithms.",
"cite_spans": [
{
"start": 150,
"end": 171,
"text": "(Ritter et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1.1"
},
{
"text": "So far as we know, there are two publicly available data sets for Twitter NER. One is the Ritter's (Ritter et al., 2011) , the other is from MSM2013 Concept Extraction Challenge (Basave et al., 2013) 1 . Note that in Ritter's work (Ritter et al., 2011) , all of the data are used for evaluating named entity type classification, and not used during training. However, our approach requires discriminative training, which makes our method not comparable with their results. Therefore we choose the MSM2013 dataset in our experiment and compare our system with the MSM2013 official runs.",
"cite_spans": [
{
"start": 99,
"end": 120,
"text": "(Ritter et al., 2011)",
"ref_id": "BIBREF27"
},
{
"start": 231,
"end": 252,
"text": "(Ritter et al., 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1.1"
},
{
"text": "The MSM2013 corpus has 4 types of named entities, person (PER), location (LOC), organization (ORG), and miscellaneous (MISC). The name entities are about film/movie, entertainment award event, political event, programming language, sporting event and TV show. The data is separated into a training set containing 2815 tweets, and a test set containing 1526 tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Sets",
"sec_num": "4.1.1"
},
{
"text": "We cast the NER task as a structured classification problem, and adopt BIESO labeling, where for each multi-word entity of class C, the first word is labeled as B-C, the words in the entity are labeled as I-C, and the last word is labeled as E-C, a single word entity of class C is labeled as S-C, and other words are labeled as O.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1.2"
},
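The BIESO encoding above is mechanical; as a minimal sketch (the function name and the (start, end, class) span format are our own, and spans are assumed non-overlapping):

```python
def bieso_tags(n_tokens, entities):
    """Encode entity spans as BIESO tags.

    entities: list of (start, end_inclusive, cls) spans, assumed non-overlapping.
    """
    tags = ["O"] * n_tokens
    for start, end, cls in entities:
        if start == end:
            tags[start] = "S-" + cls          # single-word entity
        else:
            tags[start] = "B-" + cls          # first word
            for i in range(start + 1, end):
                tags[i] = "I-" + cls          # interior words
            tags[end] = "E-" + cls            # last word
    return tags

print(bieso_tags(6, [(1, 2, "PER"), (4, 4, "LOC")]))
# ['O', 'B-PER', 'E-PER', 'O', 'S-LOC', 'O']
```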
{
"text": "Our baseline NER is a linear chain CRF. As the MSM2013 competition allows to use extra resources, we use several additional datasets to generate rich features. Specifically, we trained two POS taggers and two NER taggers using extra datasets. All the 4 taggers are trained using linear chain CRFs with perceptron training. One POS tagger is trained on Brown and Wall Street Journal corpora in Penn Tree Bank 3, and the other is trained on ARK Twitter NLP corpus (Gimpel et al., 2011) with slight modification. One of the NER taggers is trained on CoNLL 2003 English dataset 2 , and the other is trained on Ritter's dataset.",
"cite_spans": [
{
"start": 462,
"end": 483,
"text": "(Gimpel et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1.2"
},
{
"text": "We used dictionaries in Ark Twitter NLP toolkit 3 , Ritter's Twitter NLP toolkit 4 and Moby Words project 5 to generate dictionary features. We also collected film names and TV shows from IMDB website and musician groups from wikipedia. These dictionaries are used to detect candidate named entities in the training and testing datasets using string matching. Those matched words are assigned with BIESO style labels which are used as features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1.2"
},
{
"text": "We also used the unsupervised word cluster features provided by Ark Twitter NLP toolkit, which has significantly improved the Twitter POS tagging accuracy (Owoputi et al., 2013) . Similar with previous work, we used prefixes of the cluster bit strings with lengths \u2208 {2, 4, . . . , 16} as features.",
"cite_spans": [
{
"start": 155,
"end": 177,
"text": "(Owoputi et al., 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Features",
"sec_num": "4.1.2"
},
{
"text": "Previous studies showed that the document level consistency features (same phrases in a document tend to have the same entity class) are effective for NER (Kazama and Torisawa, 2007; Finkel et al., 2005) . However, unlike news articles, tweets are not organized in documents. To use these document level consistency features, we grouped the tweets in MSM2013 dataset using single linkage clustering algorithm where similarity between two tweets is the number of their overlapped words. If the similarity is greater than 4, then we put the two tweets into one group. Unlike standard document clustering, we did not normalize the length of tweets since all the tweets are limited to 140 characters. Then we extracted the group level features as follows. For any two identical phrases x i . . . x i+k , x j . . . x j+k in a group, a binary feature is true if they have the same label subsequences. The pattern set of this feature is c = {i, . . . , i + k, j, . . . , j + k} and c[\u2022] = {s|s i = s j , . . . , s i+k = s j+k }.",
"cite_spans": [
{
"start": 155,
"end": 182,
"text": "(Kazama and Torisawa, 2007;",
"ref_id": "BIBREF13"
},
{
"start": 183,
"end": 203,
"text": "Finkel et al., 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Global Features",
"sec_num": "4.1.3"
},
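The grouping step above can be sketched with a union-find pass over tweet pairs (our own minimal version; we count distinct shared word types, and the threshold of 4 follows the text):

```python
def group_tweets(tweets, threshold=4):
    """Single-linkage grouping: link two tweets if they share more than
    `threshold` words, then return the connected components."""
    parent = list(range(len(tweets)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    bags = [set(t.split()) for t in tweets]  # distinct word types per tweet
    for i in range(len(tweets)):
        for j in range(i + 1, len(tweets)):
            if len(bags[i] & bags[j]) > threshold:
                parent[find(i)] = find(j)    # union the two components

    groups = {}
    for i in range(len(tweets)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

groups = group_tweets(["a b c d e f", "a b c d e z", "x y"])
print(sorted(sorted(g) for g in groups))  # [[0, 1], [2]]
```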
{
"text": "We use two evaluation metrics. One is the micro averaged F score, which is used in CoNLL2003 shared task. The other is macro averaged F score, which is used in MSM2013 official evaluation (Basave et al., 2013) .",
"cite_spans": [
{
"start": 188,
"end": 209,
"text": "(Basave et al., 2013)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
{
"text": "We compare our approach with two baselines, integer linear programming (ILP) 6 and a naive dual decomposition method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
{
"text": "In naive dual decomposition, we use three types of slaves: a linear chain captures unigrams and bigrams, and the spanning trees cover the skip edges linking identical words. Identical multi-word phrases yield larger factors with more than 4 vertices. They could not be handled efficiently by belief propagation for spanning trees. Therefore, we create multiple slaves, each of which covers a pair of identical multi-word phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
{
"text": "To reduce the number of slaves, we use a greedy algorithm to choose the spanning trees. Each time we select the spanning tree that covers the most uncovered edges. This can be done by performing the maximum spanning tree algorithm on the graph where each uncovered edge has unit weight. Let x * denote the most frequent word in a tweet cluster, and F * is its frequency, then at least (F * \u22121)/2 spanning trees are required to cover the complete subgraph spanned by x * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
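The greedy selection can be sketched as repeated maximum spanning trees with unit weights on uncovered edges (a simplified illustration on a small complete graph; the Kruskal-style helper and function names are our own):

```python
def max_spanning_tree(n, edges, weight):
    """Kruskal-style maximum spanning tree via union-find."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for u, v in sorted(edges, key=weight, reverse=True):  # heaviest first
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

def greedy_tree_cover(n, edges):
    """Repeatedly pick the tree covering the most uncovered edges."""
    covered, trees = set(), []
    while len(covered) < len(edges):
        t = max_spanning_tree(n, edges, lambda e: 0 if e in covered else 1)
        new = [e for e in t if e not in covered]
        if not new:   # nothing new covered; stop (cannot happen on connected graphs)
            break
        covered.update(new)
        trees.append(t)
    return trees

# complete graph on 4 identical words (K4 has 6 edges)
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
trees = greedy_tree_cover(4, edges)
print(f"{len(trees)} spanning trees cover all {len(edges)} edges")
```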
{
"text": "For both dual decomposition systems, averaged perceptron (Collins, 2002a) with 10 iterations is used for parameter estimation. We follow the work of to choose the step size in the sub-gradient algorithm. Table 2 shows the comparison results, including two F scores and total running time (seconds) for training and testing. Performances of the top 4 official runs are also listed. Different from our approach, the top performing systems mainly benefit from rich open resources, such as DBpedia 6 we use Gurobi as the ILP solver, http://www.gurobi.com/ et al., 2013) . We can see that general CRFs with global features are competitive with these top systems. Our 2-slave DD outperforms naive DD and achieves competitive performance with exact inference based on ILP, while is much faster than ILP.",
"cite_spans": [
{
"start": 57,
"end": 73,
"text": "(Collins, 2002a)",
"ref_id": "BIBREF4"
},
{
"start": 494,
"end": 495,
"text": "6",
"ref_id": null
},
{
"start": 552,
"end": 565,
"text": "et al., 2013)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
{
"text": "To compare the convergence speed and optimality of 2-slave DD and naive DD algorithms, we use the model trained by ILP, and record the F micro scores, averaged dual objectives per instance (the lower the tighter), decoding time, and fraction of optimality certificates across iterations of the two DD algorithms on test data. Figure 2 shows the performances of the two algorithms relative to decoding time. Our method requires 0.0064 seconds for each iteration on average, about four times slower than the naive DD. However, our approach achieves a tighter upper bound and larger fraction of optimality certificates.",
"cite_spans": [],
"ref_spans": [
{
"start": 326,
"end": 334,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1.4"
},
{
"text": "Our second experiment is sentence dependency tagging in Question Answering forums task studied in Qu and Liu's work (Qu and Liu, 2012) . The goal is to extract the dependency relationships between sentences for automatic question answering. For example, from the posts below, we would need to know that sentence S4 is a comment about sentence S1 and S2, not an answer to S3. Order-3 factors (e.g., red and blue) connects the 3 vertices in adjacent edge pairs.",
"cite_spans": [
{
"start": 116,
"end": 134,
"text": "(Qu and Liu, 2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Dependency Tagging",
"sec_num": "4.2"
},
{
"text": "A: [S1]I'm having trouble installing my DVB Card.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Dependency Tagging",
"sec_num": "4.2"
},
{
"text": "[S2]dmesg prints: . . .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Dependency Tagging",
"sec_num": "4.2"
},
{
"text": "[S3]What could I do to resolve this problem? B: [S4] I'm having similar problems with Ubuntu For a pair of sentences, the depending sentence is called the source sentence, and the depended sentence the target sentence. One source sentence can potentially depend on many different target sentences, and one target sentence can also correspond to multiple sources. Qu and Liu (2012) casted the task as a binary classification problem, i.e., whether or not there exists a dependency relation between a pair of sentences. Formally, in this task, Y is a N 2 \u00d7 2 matrix, where N is the number of sentences, Y i * N +j[1] = 1 if the i th sentence depends on the j th sentence, otherwise, Y i * N +j[0] = 1. We use the corpus in Qu and Liu's work (Qu and Liu, 2012) , where dependencies between 3, 483 sentences in 200 threads were System F Sec. 2D CRFs (naive) 0.564 9.2 2D CRFs (2-slave) 0.565 16.4 3-wise CRFs (naive) 0.572 18.7 3-wise CRFs (2-slave) 0.584 17.33 (Qu and Liu, 2012) 0.561 N/A annotated. Following their settings we randomly split annotated threads into three disjoint sets, and run a three-fold cross validation. F score is used as the evaluation metric. Qu and Liu (2012) used the pairwise CRF with a 4-connected neighborhood system (2D CRF) as their graphical model, where each vertex in the graph represents a sentence pair, and each edge connects adjacent source sentences or target sentences. The key observation is that given a source/target sentence, there is strong dependency between adjacent target/source sentences. In this paper, we extend their work by connecting the 3 vertices in adjacent edge pairs, resulting in 3-wise CRFs, as shown in Figure 3 . We use the same vertex features and edge features as in Qu and Liu's work. For a 3-tuple of vertices, we use the following features: combination of the sentence types within the tuple, whether the related sentences are in one post or belong to the same author. 
Again, we use perceptron to train the model, and the max iteration number for dual decomposition is 200. The spanning tree in our decomposition is the concatenation of all the rows in the graph. Figure 4 : QA sentence dependency tagging using 3-wise CRFs: The F scores, dual objectives, and fraction of optimality certificates relative to decoding time. Table 3 shows the experimental results. For 2D CRFs, the edges can be covered by 2 spanning trees (one covers all vertical edges and the other covers all horizontal edges), hence the naive dual decomposition has only two slaves. Compared with naive DD, our 2-slave DD achieved competitive performance while two times slower. This is because naive DD adopts dynamic programming that runs in linear time. However, for 3-wise CRFs, the naive dual decomposition requires many small slaves to cover the order-3 factors. Therefore our 2-slave method is more effective. The fraction of optimality certificates and dual objectives of 3-wise CRFs relative to decoding time during testing are shown in Figure 4 . For each iteration, our method requires 0.0049 seconds and the naive DD requires 0.00054 seconds, about 10 times faster than ours, but our method converges to a lower lower bound.",
"cite_spans": [
{
"start": 363,
"end": 380,
"text": "Qu and Liu (2012)",
"ref_id": "BIBREF25"
},
{
"start": 739,
"end": 757,
"text": "(Qu and Liu, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 958,
"end": 976,
"text": "(Qu and Liu, 2012)",
"ref_id": "BIBREF25"
},
{
"start": 1166,
"end": 1183,
"text": "Qu and Liu (2012)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 1665,
"end": 1673,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2132,
"end": 2140,
"text": "Figure 4",
"ref_id": null
},
{
"start": 2291,
"end": 2298,
"text": "Table 3",
"ref_id": "TABREF6"
},
{
"start": 2983,
"end": 2991,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Sentence Dependency Tagging",
"sec_num": "4.2"
},
{
"text": "We proposed a new decomposition approach for generalized higher order CRFs using only two slaves. Both permit polynomial decoding time. We evaluated our method on two different tasks: Twitter named entity recognition and forum sentence dependency detection. Experimental results show that though the compact decomposition requires more running time for each iteration, it achieves consistently tighter bounds and outperforms the naive dual decomposition. The two experiments demonstrate that our method works for general graphs, even if the graph can not be decomposed into a few spanning trees (for example, if the graph has large complete subgraphs or large factors).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Our code is available at https://github.com/qxred/higher-order-crf",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We show that by setting a sufficiently large \u03c8, g \u03bb (Z) in Section 3.2 is graph representable. Let g \u03bb (Z) = g 1 (Z) + g 2 (Z) + g 3 (Z) + g 4 (Z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "g 1 (Z) = \u2211 c |c|=2 \u2211 s\u2208c[\u2022] ( \u03c8 c + \u03d5 c[s] ) Z c[s] + \u2211 v,s \u03bb v[s] Z v[s] g 2 (Z) = \u2211 c |c|\u22653 \u2211 s\u2208c[\u2022] \u03d5 c[s] \u22650 \u03d5 c[s] Z c[s] g 3 (Z) = \u2211 c |c|=3 \u2211 s\u2208c[\u2022] \u03d5 c[s] <0 \u03d5 c[s] Z c[s] g 4 (Z) = \u2211 c |c|\u22654 \u2211 s\u2208c[\u2022] \u03d5 c[s] <0 \u03d5 c[s] Z c[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "For g 2 (Z), since coefficients of all terms are nonnegative, we can use the fact",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u220f i a i \u2208{0,1} a i = max b\u2208{0,1} ( \u2211 i a i \u2212 |a| + 1 ) b",
"eq_num": "(7)"
}
],
"section": "Appendix A",
"sec_num": null
},
{
"text": "to reduce g 2 (Z) into an equivalent quadratic form (Freedman and Drineas, 2005) . That is, max Z g 2 (Z) is equivalent to ",
"cite_spans": [
{
"start": 52,
"end": 80,
"text": "(Freedman and Drineas, 2005)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "Z v[s] \u2212 |c| + 1 \uf8f6 \uf8f8 u c[s]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
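Eq. (7) replaces a product of binary variables by a max over one auxiliary binary b. A brute-force sanity check over all short binary vectors (our own verification, not from the paper):

```python
from itertools import product

def prod_term(a):
    # left-hand side of Eq. (7): the product of the binary a_i
    out = 1
    for ai in a:
        out *= ai
    return out

def relaxed(a):
    # right-hand side of Eq. (7) with the max over b taken in closed form:
    # max_b (sum_i a_i - |a| + 1) b  =  max(0, sum(a) - len(a) + 1)
    return max(0, sum(a) - len(a) + 1)

for n in range(1, 8):
    for a in product([0, 1], repeat=n):
        assert prod_term(a) == relaxed(a)
print("Eq. (7) holds for all binary vectors up to length 7")
```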
{
"text": "which is graph representable because coefficients of all the quadratic terms are non-negative. Coefficients of terms in g 3 (Z) and g 4 (Z) are negative, therefore g 3 (Z) and g 4 (Z) are not supermodular. To make them graph representable, we use the following fact Proposition 1 (\u017divn\u00fd and Jeavons, 2010) The pseudo-Boolean function p(x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "= \u2211 1\u2264i<j\u2264K x i x j \u2212 \u220f K i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "x i is graph representable and can be reduced to the quadratic forms: if K = 3, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x) = max y\u2208{0,1} (x 1 + x 2 + x 3 \u2212 1)y",
"eq_num": "(8)"
}
],
"section": "Appendix A",
"sec_num": null
},
{
"text": "otherwise K > 3,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(x) = max y 0 \u2208{0,1} y 0 (2 K \u2211 i=1 x i \u2212 3) + max binary y K\u22124 \u2211 j=1 y j ( K \u2211 i=1 x i \u2212 j \u2212 2)",
"eq_num": "(9)"
}
],
"section": "Appendix A",
"sec_num": null
},
{
"text": "According to Eq (8), for each cubic term in g 3 (Z), we have ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "\u03d5 c[s] \u220f v[s]\u2208c[s] Z v[s] = \u03d5 c[s] \uf8eb \uf8ed \u2211 u[s],v[t]\u2208c[s] Z u[s] Z v[t] \u2212 \u220f v[s]\u2208c[s] Z v[s] \uf8f6 \uf8f8 \u2212 \u03d5 c[s] \u2211 u[s],v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "Z v[s] \u2212 1 \uf8f6 \uf8f8 \u2212 \u03d5 c[s] \u2211 u[s],v[t]\u2208c[s] Z u[s] Z v[t]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
{
"text": "The first part on the right hand side is graph representable since all quadratic terms are nonnegative. The second part is a quadratic function of Z, and it can be merged into g 1 (Z). With sufficiently large \u03c8 c in g 1 (Z), we could guarantee the non-negativity of all quadratic terms. Similarly, we could apply Eq (9) to reduce g 4 (Z) to graph representable quadratic forms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A",
"sec_num": null
},
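Proposition 1 can likewise be checked by brute force over all binary vectors; the helper below encodes Eqs. (8) and (9), with each max taken in closed form (our own verification script, not from the paper):

```python
from itertools import combinations, product

def p(x):
    # p(x) = sum_{i<j} x_i x_j - prod_i x_i
    pairs = sum(a * b for a, b in combinations(x, 2))
    prod = 1
    for a in x:
        prod *= a
    return pairs - prod

def quadratic_form(x):
    # Graph-representable reduction from Proposition 1.
    K, S = len(x), sum(x)
    if K == 3:
        return max(0, S - 1)          # Eq. (8): max_y (x1 + x2 + x3 - 1) y
    # Eq. (9): one auxiliary y_0 plus K-4 auxiliaries y_1 .. y_{K-4}
    val = max(0, 2 * S - 3)
    for j in range(1, K - 3):
        val += max(0, S - j - 2)
    return val

for K in range(3, 8):
    for x in product([0, 1], repeat=K):
        assert p(x) == quadratic_form(x), (K, x)
print("Proposition 1 verified for K = 3..7")
```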
{
"text": "Transactions of the Association for Computational Linguistics, 2 (2014) 339-350. Action Editor: Kristina Toutanova.Submitted 11/2013; Revised 6/2014; Published 10/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://oak.dcs.shef.ac.uk/msm2013/challenge.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.cnts.ua.ac.be/conll2003/ 3 https://code.google.com/p/ark-tweet-nlp/ 4 http://github.com/aritter/Twitter nlp 5 http://icon.shef.ac.uk/Moby/mwords.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank three anonymous reviewers for their valuable comments. This work is partly supported by DARPA under Contract No. FA8750-13-2-0041. Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Making sense of microposts (msm2013) concept extraction challenge (challenge report)",
"authors": [
{
"first": "Amparo",
"middle": [
"Elizabeth"
],
"last": "",
"suffix": ""
},
{
"first": "Cano",
"middle": [],
"last": "Basave",
"suffix": ""
},
{
"first": "Andrea",
"middle": [],
"last": "Varga",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Rowe",
"suffix": ""
},
{
"first": "Milan",
"middle": [],
"last": "Stankovic",
"suffix": ""
},
{
"first": "Aba-Sah",
"middle": [],
"last": "Dadzie",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Concept Extraction Challenge at the Workshop on 'Making Sense of Microposts",
"volume": "",
"issue": "",
"pages": "1--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amparo Elizabeth Cano Basave, Andrea Varga, Matthew Rowe, Milan Stankovic, and Aba-Sah Dadzie. 2013. Making sense of microposts (msm2013) concept extraction challenge (challenge report). In Proceedings of the Concept Extraction Challenge at the Workshop on 'Making Sense of Microposts', pages 1-15.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Maximizing a supermodular pseudoboolean function: A polynomial algorithm for supermodular cubic functions",
"authors": [
{
"first": "A",
"middle": [],
"last": "Billionnet",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Minoux",
"suffix": ""
}
],
"year": 1985,
"venue": "Discrete Applied Mathematics",
"volume": "12",
"issue": "1",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Billionnet and M. Minoux. 1985. Maximizing a supermodular pseudoboolean function: A polynomial algorithm for supermodular cubic functions. Discrete Applied Mathematics, 12(1):1 -11.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The max-cut problem and quadratic 0-1 optimization; polyhedral aspects, relaxations and bounds",
"authors": [
{
"first": "Endre",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Peterl",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hammer",
"suffix": ""
}
],
"year": 1991,
"venue": "Annals of Operations Research",
"volume": "33",
"issue": "3",
"pages": "151--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endre Boros and PeterL. Hammer. 1991. The max-cut problem and quadratic 0-1 optimization; polyhedral aspects, relaxations and bounds. Annals of Operations Research, 33(3):151-180.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Pseudoboolean optimization",
"authors": [
{
"first": "Endre",
"middle": [],
"last": "Boros",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"L"
],
"last": "Hammer",
"suffix": ""
}
],
"year": 2002,
"venue": "Discrete Applied Mathematics",
"volume": "123",
"issue": "1C3",
"pages": "155--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Endre Boros and Peter L. Hammer. 2002. Pseudo- boolean optimization. Discrete Applied Mathematics, 123(1C3):155 -225.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002a. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1-8.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ranking algorithms for named entity extraction: Boosting and the votedperceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "489--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002b. Ranking algorithms for named entity extraction: Boosting and the votedperceptron. In Proceedings of ACL, pages 489-496, July.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using conditional random fields to extract contexts and answers of questions from online forums",
"authors": [
{
"first": "Shilin",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Gao",
"middle": [],
"last": "Cong",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "710--718",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shilin Ding, Gao Cong, Chin-Yew Lin, and Xiaoyan Zhu. 2008. Using conditional random fields to extract contexts and answers of questions from online forums. In Proceedings of ACL-08: HLT, pages 710-718, June.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of ACL, pages 363-370, June.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Energy minimization via graph cuts: Settling what is possible",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Freedman",
"suffix": ""
},
{
"first": "Petros",
"middle": [],
"last": "Drineas",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CVPR",
"volume": "",
"issue": "",
"pages": "939--946",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Freedman and Petros Drineas. 2005. Energy minimization via graph cuts: Settling what is possible. In Proceedings of CVPR, pages 939-946. IEEE Computer Society.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A skip-chain conditional random field for ranking meeting utterances by importance",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "364--372",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. In Proceedings of EMNLP, pages 364-372.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Part-of-speech tagging for twitter: annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL-HLT, HLT '11",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: annotation, features, and experiments. In Proceedings of ACL-HLT, HLT '11, pages 42-47.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The partial augment-relabel algorithm for the maximum flow problem",
"authors": [
{
"first": "Andrew",
"middle": [
"V"
],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2008,
"venue": "Lecture Notes in Computer Science",
"volume": "5193",
"issue": "",
"pages": "466--477",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew V. Goldberg. 2008. The partial augment-relabel algorithm for the maximum flow problem. In Dan Halperin and Kurt Mehlhorn, editors, Algorithms - ESA 2008, volume 5193 of Lecture Notes in Computer Science, pages 466-477. Springer Berlin Heidelberg.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Accelerated dual decomposition for map inference",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Jojic",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "503--510",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Jojic, Stephen Gould, and Daphne Koller. 2010. Accelerated dual decomposition for map inference. In Proceedings of ICML, pages 503-510. Omnipress.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A new perceptron algorithm for sequence labeling with nonlocal features",
"authors": [
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "315--324",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun'ichi Kazama and Kentaro Torisawa. 2007. A new perceptron algorithm for sequence labeling with non-local features. In Proceedings of EMNLP-CoNLL, pages 315-324, June.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "What energy functions can be minimized via graph cuts?",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Kolmogorov",
"suffix": ""
},
{
"first": "Ramin",
"middle": [],
"last": "Zabih",
"suffix": ""
}
],
"year": 2004,
"venue": "IEEE Trans. Pattern Anal. Mach. Intell",
"volume": "26",
"issue": "2",
"pages": "147--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Kolmogorov and Ramin Zabih. 2004. What energy functions can be minimized via graph cuts? IEEE Trans. Pattern Anal. Mach. Intell., 26(2):147- 159.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient training for pairwise or higher order CRFs via dual decomposition",
"authors": [
{
"first": "Nikos",
"middle": [],
"last": "Komodakis",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CVPR",
"volume": "",
"issue": "",
"pages": "1841--1848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Kolmogorov. 2006. Convergent tree- reweighted message passing for energy minimization. IEEE Trans. Pattern Anal. Mach. Intell., 28(10):1568- 1583, October. Nikos Komodakis. 2011. Efficient training for pairwise or higher order CRFs via dual decomposition. In Proceedings of CVPR, pages 1841-1848. IEEE Computer Society.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Dual decomposition for parsing with non-projective head automata",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1288--1298",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Alexander M. Rush, Michael Collins, Tommi Jaakkola, and David Sontag. 2010. Dual decomposition for parsing with non-projective head automata. In Proceedings of EMNLP, pages 1288- 1298, Cambridge, MA, October.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficient inference in fully connected crfs with gaussian edge potentials",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Kr\u00e4henb\u00fchl",
"suffix": ""
},
{
"first": "Vladlen",
"middle": [],
"last": "Koltun",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Kr\u00e4henb\u00fchl and Vladlen Koltun. 2011. Efficient inference in fully connected crfs with gaussian edge potentials. In Proceedings of NIPS, pages 109-117.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282-289.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "An augmented lagrangian approach to constrained map inference",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Figueiredo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Aguiar",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "169--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Mario Figueiredo, Pedro Aguiar, Noah Smith, and Eric Xing. 2011a. An augmented lagrangian approach to constrained map inference. In Proceedings of ICML, pages 169-176. ACM, June.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dual decomposition with many overlapping components",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Figueiredo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Aguiar",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "238--249",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Noah Smith, Mario Figueiredo, and Pedro Aguiar. 2011b. Dual decomposition with many overlapping components. In Proceedings of the EMNLP, pages 238-249, July.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An analysis of approximations for maximizing submodular set functions-I",
"authors": [
{
"first": "G",
"middle": [
"L"
],
"last": "Nemhauser",
"suffix": ""
},
{
"first": "L",
"middle": [
"A"
],
"last": "Wolsey",
"suffix": ""
},
{
"first": "M",
"middle": [
"L"
],
"last": "Fisher",
"suffix": ""
}
],
"year": 1978,
"venue": "Mathematical Programming",
"volume": "14",
"issue": "1",
"pages": "265--294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher. 1978. An analysis of approximations for maximizing submodular set functions-I. Mathematical Programming, 14(1):265-294.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A faster strongly polynomial time algorithm for submodular function minimization",
"authors": [
{
"first": "James",
"middle": [
"B"
],
"last": "Orlin",
"suffix": ""
}
],
"year": 2009,
"venue": "Math. Program",
"volume": "118",
"issue": "2",
"pages": "237--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James B. Orlin. 2009. A faster strongly polynomial time algorithm for submodular function minimization. Math. Program., 118(2):237-251.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Improved part-of-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "380--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL-HLT, pages 380-390, June.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sparse higher order conditional random fields for improved sequence labeling",
"authors": [
{
"first": "Xian",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Xiaoqian",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Lide",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ICML",
"volume": "382",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xian Qian, Xiaoqian Jiang, Qi Zhang, Xuanjing Huang, and Lide Wu. 2009. Sparse higher order conditional random fields for improved sequence labeling. In Proceedings of ICML, volume 382, page 107. ACM.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Sentence dependency tagging in online question answering forums",
"authors": [
{
"first": "Zhonghua",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "554--562",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhonghua Qu and Yang Liu. 2012. Sentence dependency tagging in online question answering forums. In Proceedings of ACL, pages 554-562, July.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A selection problem of shared fixed costs and network flows",
"authors": [
{
"first": "M",
"middle": [
"W"
],
"last": "Rhys",
"suffix": ""
}
],
"year": 1970,
"venue": "Management Science",
"volume": "17",
"issue": "3",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. W. Rhys. 1970. A selection problem of shared fixed costs and network flows. Management Science, 17(3):200-207.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Named entity recognition in tweets: An experimental study",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1524--1534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of EMNLP, pages 1524-1534, July.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush and Michael Collins. 2012. A tutorial on dual decomposition and lagrangian relaxation for inference in natural language processing.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "On dual decomposition and linear programming relaxations for natural language processing",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of EMNLP 2010",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In Proceedings of EMNLP 2010, pages 1-11, Cambridge, MA, October.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Semi-Markov conditional random fields for information extraction",
"authors": [
{
"first": "Sunita",
"middle": [],
"last": "Sarawagi",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sunita Sarawagi and William W. Cohen. 2004. Semi-Markov conditional random fields for information extraction. In Proceedings of NIPS.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Introduction to Conditional Random Fields for Relational Learning",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "McCallum",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Sutton and Andrew McCallum. 2006. Introduction to Conditional Random Fields for Relational Learning. MIT Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Inference algorithms for pattern-based CRFs on sequence data",
"authors": [
{
"first": "Rustem",
"middle": [],
"last": "Takhanov",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Kolmogorov",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "145--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rustem Takhanov and Vladimir Kolmogorov. 2013. Inference algorithms for pattern-based CRFs on sequence data. In Proceedings of ICML, pages 145- 153.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Classes of submodular constraints expressible by graph cuts",
"authors": [
{
"first": "Stanislav",
"middle": [],
"last": "\u017divn\u00fd",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"G"
],
"last": "Jeavons",
"suffix": ""
}
],
"year": 2010,
"venue": "Constraints",
"volume": "15",
"issue": "3",
"pages": "430--452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislav \u017divn\u00fd and Peter G. Jeavons. 2010. Classes of submodular constraints expressible by graph cuts. Constraints, 15(3):430-452, July.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Conditional random fields with high-order features for sequence labeling",
"authors": [
{
"first": "Nan",
"middle": [],
"last": "Ye",
"suffix": ""
},
{
"first": "Wee",
"middle": [
"Sun"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Hai",
"middle": [
"Leong"
],
"last": "Chieu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "2196--2204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nan Ye, Wee Sun Lee, Hai Leong Chieu, and Dan Wu. 2009. Conditional random fields with high-order features for sequence labeling. In Proceedings of NIPS, pages 2196-2204.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "Twitter NER: The F micro scores, dual objectives, and fraction of optimality certificates relative to decoding time. Figure 3: 3-wise CRF for QA sentence dependency tagging.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td>vertex</td></tr><tr><td/><td/><td>s</td></tr><tr><td>state</td><td/></tr><tr><td/><td/><td>t</td></tr><tr><td/><td>u</td><td>v</td></tr><tr><td colspan=\"3\">its members v[s] are selected. For simplicity, in this paper we use</td></tr><tr><td>Y c[s] =</td><td>\u220f</td></tr><tr><td/><td>v[s]\u2208c[s]</td></tr></table>",
"text": "Figure 1: Pattern c[s] = uv[st] is shown in bold."
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Z c[s] =</td><td>\u2211</td></tr><tr><td/><td>v[s]\u2208c[s]</td></tr></table>",
"text": "Let u denote the vector of all the introduced extra binary variables. For each pattern c[s], denote"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Number of variables in each part of h(Z, u)"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>System</td><td>F macro</td><td>F micro</td><td>Sec.</td></tr><tr><td>Linear chain CRFs</td><td>0.657</td><td>0.815</td><td>98</td></tr><tr><td>General CRFs (2-slave DD)</td><td>0.680</td><td>0.827</td><td>214</td></tr><tr><td>General CRFs (naive DD)</td><td>0.672</td><td>0.824</td><td>490</td></tr><tr><td>General CRFs (ILP)</td><td>0.680</td><td>0.828</td><td>8640</td></tr><tr><td>Official 1st</td><td>0.670</td><td>N/A</td><td>N/A</td></tr><tr><td>Official 2nd</td><td>0.662</td><td>N/A</td><td>N/A</td></tr><tr><td>Official 3rd</td><td>0.658</td><td>N/A</td><td>N/A</td></tr><tr><td>Official 4th</td><td>0.610</td><td>N/A</td><td>N/A</td></tr></table>",
"text": "System F macro F micro Sec."
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Comparison results on MSM2013 Twitter NER task."
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Comparison results on QA sentence dependency tagging task."
}
}
}
}