{
"paper_id": "W09-0204",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:43:42.371519Z"
},
"title": "A Study of Convolution Tree Kernel with Local Alignment",
"authors": [
{
"first": "Lidan",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": "lzhang@cs.hku.hk"
},
{
"first": "Kwok-Ping",
"middle": [],
"last": "Chan",
"suffix": "",
"affiliation": {},
"email": "kpchan@cs.hku.hk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper discusses a new convolution tree kernel by introducing local alignments. The main idea of the new kernel is to allow some syntactic alternations during each match between subtrees. In this paper, we give an algorithm to calculate the composite kernel. The experiment results show promising improvements on two tasks: semantic role labeling and question classification.",
"pdf_parse": {
"paper_id": "W09-0204",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper discusses a new convolution tree kernel by introducing local alignments. The main idea of the new kernel is to allow some syntactic alternations during each match between subtrees. In this paper, we give an algorithm to calculate the composite kernel. The experiment results show promising improvements on two tasks: semantic role labeling and question classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recently kernel-based methods have become a state-of-art technique and been widely used in natural language processing applications. In this method, a key problem is how to design a proper kernel function in terms of different data representations. So far, there are two kinds of data representations. One is to encode an object with a flat vector whose element correspond to an extracted feature from the object. However the feature vector is sensitive to the structural variations. The extraction schema is heavily dependent on different problems. On the other hand, kernel function can be directly calculated on the object. The advantages are that the original topological information is to a large extent preserved and the introduction of additional noise may be avoided. Thus structure-based kernels can well model syntactic parse tree in a variety of applications, such as relation extraction (Zelenko et al., 2003) , named entity recognition (Culotta and Sorensen, 2004) , semantic role labeling (Moschitti et al., 2008) and so on.",
"cite_spans": [
{
"start": 899,
"end": 921,
"text": "(Zelenko et al., 2003)",
"ref_id": "BIBREF20"
},
{
"start": 949,
"end": 977,
"text": "(Culotta and Sorensen, 2004)",
"ref_id": "BIBREF6"
},
{
"start": 1003,
"end": 1027,
"text": "(Moschitti et al., 2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To compute the structural kernel function, Haussler (1999) introduced a general type of kernel function, called\" Convolution kernel\". Based on this work, Collins and Duffy (2002) proposed a tree kernel calculation by counting the common subtrees. In other words, two trees are considered if and only if these two trees are exactly same. In real sentences, some structural alternations within a given phrase are permitted without changing its usage. Therefore, Moschitti (2004) proposed partial trees to partially match between subtrees. Kashima and Koyanagi (2002) generalize the tree kernel to labeled order tree kernel with more flexible match. And from the idea of introducing linguistical knowledge, Zhang et al. (2007) proposed a grammar-driven tree kernel, in which two subtrees are same if and only if the corresponding two productions are in the same manually defined set. In addition, the problem of hard matching can be alleviated by processing or mapping the trees. For example, Tai mapping (Kuboyama et al., 2006) generalized the kernel from counting subtrees to counting the function of mapping. Moreover multi-source knowledge can benefit kernel calculation, such as using dependency information to dynamically determine the tree span (Qian et al., 2008) .",
"cite_spans": [
{
"start": 43,
"end": 58,
"text": "Haussler (1999)",
"ref_id": "BIBREF7"
},
{
"start": 154,
"end": 178,
"text": "Collins and Duffy (2002)",
"ref_id": "BIBREF4"
},
{
"start": 460,
"end": 476,
"text": "Moschitti (2004)",
"ref_id": "BIBREF13"
},
{
"start": 537,
"end": 564,
"text": "Kashima and Koyanagi (2002)",
"ref_id": null
},
{
"start": 704,
"end": 723,
"text": "Zhang et al. (2007)",
"ref_id": "BIBREF21"
},
{
"start": 1002,
"end": 1025,
"text": "(Kuboyama et al., 2006)",
"ref_id": "BIBREF8"
},
{
"start": 1249,
"end": 1268,
"text": "(Qian et al., 2008)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a tree kernel calculation algorithm by allowing variations in productions. The variation is measured with local alignment score between two derivative POS sequences. To reduce the computation complexity, we use the dynamic programming algorithm to compute the score of any alignment. And the top n alignments are considered in the kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another problem in Collins and Duffy's tree kernel is context-free. It does not consider any semantic information located at the leaf nodes of the parsing trees. To lexicalized tree kernel, Bloehdorn et al. (2007) considered the associated term similarity by virtue of WordNet. Shen et al. (2003) constructed a separate lexical feature containing words on a given path and merged into the kernel in linear combination.",
"cite_spans": [
{
"start": 190,
"end": 213,
"text": "Bloehdorn et al. (2007)",
"ref_id": null
},
{
"start": 269,
"end": 296,
"text": "WordNet. Shen et al. (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows. In section 2, we describe the commonly used tree kernel. In section 3, we propose our method to make use of the local alignment information in kernel calculation. Section 4 presents the results of our experiments for two different applications ( Semantic Role Labeling and Question Classification). Finally section 5 provides our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main idea of tree kernel is to count the number of common subtrees between two trees T 1 and T 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "In convolutional tree kernel (Collins and Duffy, 2002) , a tree(T ) is represented as a vector",
"cite_spans": [
{
"start": 29,
"end": 54,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "h(T ) = (h 1 (T ), ..., h i (T ), ..., h n (T )), where h i (T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "is the number of occurrences of the i th tree fragment in the tree T . Since the number of subtrees is exponential with the parse tree size, it is infeasible to directly count the common subtrees. To reduce the computation complexity, a recursive kernel calculation algorithm was presented. Given two trees T 1 and T 2 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "K(T 1 , T 2 ) = < h(T 1 ), h(T 2 ) > (1) = i h i (T 1 )h i (T 2 ) = i ( n 1 \u2208N T 1 I i (n 1 ) n 2 \u2208N T 2 I i (n 2 )) = n 1 \u2208N T 1 n 2 \u2208N T 2 (n 1 , n 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "where, N T 1 and N T 2 are the sets of all nodes in trees T 1 and T 2 , respectively. I i (n) is the indicator function to be 1 if i-th subtree is rooted at node n and 0 otherwise. And (n 1 , n 2 ) is the number of common subtrees rooted at n 1 and n 2 . It can be computed efficiently according to the following rules:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "(1) If the productions at n 1 and n 2 are different, (n 1 , n 2 ) = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "(2) If the productions at n 1 and n 2 are same, and n 1 and n 2 are pre-terminals, then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "(n 1 , n 2 ) = \u03bb (3) Else, (n 1 , n 2 ) = \u03bb nc(n 1 ) j (1 + (ch(n 1 , j), ch(n 2 , j)))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "where nc(n 1 ) is the number of children of n 1 in the tree. Note that n 1 = n 2 because the productions at n 1 and n 2 are same. ch(n 1 , j) represents the j th child of node n 1 . And 0 < \u03bb \u2264 1 is the parameter to downweight the contribution of larger tree fragments to the kernel. It corresponds to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "K(T 1 , T 2 ) = i \u03bb size i h i (T 1 )h i (T 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": ", where size i is the number of rules in the i'th fragment. The time complexity of computing this",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "kernel is O(|N T 1 | \u2022 |N T 2 |).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "3 Tree Kernel with Local Alignment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Tree Kernel",
"sec_num": "2"
},
{
"text": "As we referred, one of problems in the basic tree kernel is its hard match between two rules. In other words, at each tree level, the two subtrees are required to be perfectly equal. However, in real sentences, some modifiers can be added into a phrase without changing the phrase's function. For example, two sentences are given in Figure 1 . Considering \"A1\" role, the similarities between two subtrees(in circle) are 0 in (Collins and Duffy, 2002) , because the productions \"NP\u2192DT ADJP NN\" and \"NP\u2192DT NN\" are not identical. From linguistical point of view, the adjective phrase is optional in real sentences, which does not change the corresponding semantic role. Thus the modifier components(like \"ADJP\" in the above example) should be neglected in similarity comparisons.",
"cite_spans": [
{
"start": 425,
"end": 450,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 333,
"end": 341,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "To make the hard match flexible, we can align two string sequences derived from the same node. Considering the above example, Figure 1: Syntactic parse tree with \"A1\" semantic role an alignment might be \"DT ADJP NN\" vs \"DT -NN\", by inserting a symbol(-). The symbol(-) corresponds to a \"NULL\" subtree in the parser tree. And the \"NULL\" subtree can be regarded as a null character in the sentence, see Figure 1 (c). Convolution kernels, studied in (Haussler, 1999) gave the framework to construct a complex kernel from its simple elements. Suppose x \u2208 X can be decomposed into",
"cite_spans": [
{
"start": 447,
"end": 463,
"text": "(Haussler, 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 401,
"end": 409,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "x 1 , ..., x m \u2261 x. Let R be a relation over X 1 \u00d7 ... \u00d7 X m \u00d7 X such that R( x) is true iff x 1 , ..., x m are parts of x. R \u22121 (x) = { x|R( x, x)},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "which returns all components. For example, x is any string, then x can be its characters. The convolution kernel K is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "K(x, y) = x\u2208R \u22121 (x), y\u2208R \u22121 (y) m d=1 K d (x d , y d ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "Considering our problem, for example, a derived string sequence x by the rule \"n 1 \u2192 x\". R(x i , x) is true iff x i appears in the right hand of x. Given two POS sequences x and y derived from two nodes n 1 and n 2 , respectively, A(x, y) denotes all the possible alignments of the sequence. The general form of the kernel with local alignment is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "K (n 1 , n 2 ) = (i,j)\u2208A(x,y) K(n i 1 , n j 2 ) (3) (n 1 , n 2 ) = \u03bb (i,j)\u2208A(x,y) AS (i,j) nc(n 1 ,i) d=1 (1 + (ch(n 1 , i, d), ch(n 2 , j, d))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "where, (i, j) denotes the i th and j th variation for x and y, AS (i,j) is the score for alignment i and j. And ch(n 1 , i, d) selects the d th subtree for the i th aligned schema of node n 1 . It is easily to prove the above kernel is positive semi-definite, since the kernel K(n i 1 , n j 2 ) is positive semi-definite. The native computation is impractical because the number of all possible alignments(|A(x, y)|) is exponential with respect to |x| and |y|. In the next section, we will discuss how to calculate AS (i,j) for each alignment.",
"cite_spans": [
{
"start": 66,
"end": 71,
"text": "(i,j)",
"ref_id": null
},
{
"start": 518,
"end": 523,
"text": "(i,j)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Framework",
"sec_num": "3.1"
},
{
"text": "The local alignment(LA) kernel was usually used in bioinformatics, to compare the similarity between two protein sequences(x and y) by exploring their alignments (Saigo et al., 2004) .",
"cite_spans": [
{
"start": 162,
"end": 182,
"text": "(Saigo et al., 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Kernel",
"sec_num": "3.2"
},
{
"text": "K LA (x, y) = \u03c0\u2208A(x,y) exp \u03b2s(x,y,\u03c0) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Kernel",
"sec_num": "3.2"
},
{
"text": "where \u03b2 \u2265 0 is a parameter, A(x, y) denotes all possible local alignments between x and y, and s(x, y, \u03c0) is the local alignment score for a given alignment schema \u03c0, which is equal to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Kernel",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "s(x, y, \u03c0) = |\u03c0| i=1 S(x \u03c0 i 1 , y \u03c0 i 2 )\u2212 |\u03c0|\u22121 j=1 [g(\u03c0 i+1 1 \u2212 \u03c0 i 1 ) + g(\u03c0 i+1 2 \u2212 \u03c0 i 2 )]",
"eq_num": "(5)"
}
],
"section": "Local Alignment Kernel",
"sec_num": "3.2"
},
{
"text": "In equation 5, S is a substitution matrix, and g is a gap penalty function. The alignment score is the sum of the substitution score between the correspondence at the aligned position, minus the sum of the gap penalty for the case that '-' symbol is inserted. In natural language processing, the substitution matrix can be selected as identity matrix and no penalty is accounted. Obviously, the direct computation of the original K LA is not practical. Saigo (2004) presented a dynamic programming algorithm with time complexity O(|x|\u2022|y|). In this paper, this dynamic algorithm is used to compute the kernel matrix, whose element(i, j) is used as AS (i,j) measurement in equation 3.",
"cite_spans": [
{
"start": 453,
"end": 465,
"text": "Saigo (2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Kernel",
"sec_num": "3.2"
},
{
"text": "Now we embed the above local alignment score into the general tree kernel computation. Equation 3can be re-written into following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "(n 1 , n 2 ) = \u03bb \u03c0\u2208A(x,y) (exp \u03b2s(x,y,\u03c0) \u00d7 nc(n 1 ,i) k=1 (1 + (ch(n 1 , i, k), ch(n 2 , j, k)))) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "To further reduce the computation complexity, a threshold (\u03be) is used to filter out alignments with low scores. This can help to avoid over-generated subtrees and only select the significant alignments. In other words, by using the threshold (\u03be), we can select the salient subtree variations for kernels. The final kernel calculation is shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "(n 1 , n 2 ) = \u03bb \u03c0 \u2208 A(x, y) s(x, y, \u03c0) > \u03be (\u03b5 \u03b2s(x,y,\u03c0) \u00d7 nc(n 1 ,i) k=1 (1 + (ch(n 1 , i, k), ch(n 2 , j, k)))) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "After filtering, the kernel is still positive semi-definite. This can be easily proved using the theorem in (Shin and Kuboyama, 2008) , since this subset selection is transitive. More specifically, if s(x, y, \u03c0) > \u03be s (y, z, \u03c0 ",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Shin and Kuboyama, 2008)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": ") > \u03be, then s(x, z, \u03c0 + \u03c0 ) > \u03be.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "The algorithm to compute the local alignment tree kernel is given in algorithm 1. For any two nodes pair(x i and y j ), the local alignment score M (x i , y j ) is assigned. In the kernel matrix calculation, the worst case occurs when the tree is balanced and most of the alignments are selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "Algorithm 1 algorithm for local alignment tree kernel Require: 2 nodes n 1 ,n 2 in parse trees;The productions are n 1 \u2192 x 1 , ..., x m and n 2 \u2192 y 1 , ..., y n return (n 1 , n 2 ) if n 1 and n 2 are not same then (n 1 , n 2 ) = 0 else if both n 1 and n 2 are pre-terminals then (n 1 , n 2 ) = 1 else calculate kernel matrix by equation 4for each possible alignment do calculate (n 1 , n 2 ) by equation 7end for end if end if 4 Experiments",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Alignment Tree Kernel",
"sec_num": "3.3"
},
{
"text": "We use the CoNLL-2005 SRL shared task data (Carreras and Marquez, 2005) Considering the two steps in semantic role labeling, i.e. semantic role identification and recognition. We assume identification has been done correctly, and only consider the semantic role classification. In our experiment, we focus on the semantic classes include 6 core (A0-A5), 12 adjunct(AM-) and 8 reference(R-) arguments.",
"cite_spans": [
{
"start": 43,
"end": 71,
"text": "(Carreras and Marquez, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1.1"
},
{
"text": "In our implementation, SVM-Light-TK 1 (Moschitti, 2004) is modified. For SVM multi-classifier, the ONE-vs-ALL (OVA) strategy is selected. In all, we prepare the data for each semantic role (r) as following:",
"cite_spans": [
{
"start": 38,
"end": 55,
"text": "(Moschitti, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1.1"
},
{
"text": "(1) Given a sentence and its correct full syntactic parse tree;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1.1"
},
{
"text": "(2) Let P be the predicate. Its potential arguments A are extracted according to (Xue and Palmer, 2004) (3) For each pair < p, a >\u2208 P \u00d7 A: if a covers exactly the words of semantic role of p, put minimal subtree < p, a > into positive example set (T + r ); else put it in the negative examples (T \u2212 r )",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Xue and Palmer, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1.1"
},
{
"text": "In our experiments, we set \u03b2 = 0.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Setup",
"sec_num": "4.1.1"
},
{
"text": "The classification performance is evaluated with respect to accuracy, precision(p), recall(r) and F 1 = 2pr/(p + r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "Accuracy(%) (Collins and Duffy, 2002) 84.35 (Moschitti, 2004) 86.72 (Zhang et al., 2007) 87.96 Our Kernel 88.48 Table 2 compares the performance of our method and other three famous kernels on WSJ test data. We implemented these three methods with the same settings described in the papers. It shows that our kernel achieves the best performance with 88.48% accuracy. The advantages of our approach are: 1). the alignments allow soft syntactic structure match; 2). threshold can avoid overgeneration and selected salient alignments. Another problem in the tree kernel (Collins and Duffy, 2002) is the lack of semantic information, since the match stops at the preterminals. All the lexical information is encoded at the leaf nodes of parsing trees. However, the semantic knowledge is important in some text applications, like Question Classification. To introduce semantic similarities between words into our kernel, we use the framework in Bloehdorn et al. (2007) and rewrite the rule (2) in the iterative tree kernel calculation(in section 2).",
"cite_spans": [
{
"start": 12,
"end": 37,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 44,
"end": 61,
"text": "(Moschitti, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 68,
"end": 88,
"text": "(Zhang et al., 2007)",
"ref_id": "BIBREF21"
},
{
"start": 568,
"end": 593,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 941,
"end": 964,
"text": "Bloehdorn et al. (2007)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": ") = 2dep(lso(c 1 ,c 2 )) d(c 1 ,lso(c 1 ,c 2 ))+d(c 2 ,lso(c 1 ,c 2 ))+2dep(lso(c 1 ,c 2 )) Resnik sim RES (c 1 , c 2 ) = \u2212 log P (lso(c 1 , c 2 )) Lin sim LIN (c 1 , c 2 ) = 2 log P (lso(c 1 ,c 2 )) log P (c 1 )+log P (c 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "(2) If the productions at n 1 and n 2 are same, and n 1 and n 2 are pre-terminals, then (n 1 , n 2 ) = \u03bb\u03b1k",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "w (w 1 , w 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "where w 1 and w 2 are two words derived from pre-terminals n 1 and n 2 , respectively, and the parameter \u03b1 is to control the contribution of the leaves. Note that each preterminal has one child or equally covers one word. So k w (w 1 , w 2 ) actually calculate the similarity between two words w 1 and w 2 . In general, there are two ways to measure the semantic similarities. One is to derive from semantic networks such as Word-Net (Mavroeidis et al., 2005; Bloehdorn et al., 2006) . The other way is to use statistical methods of distributional or co-occurrence (\u00d3 S\u00e9aghdha and Copestake, 2008) behavior of the words.",
"cite_spans": [
{
"start": 434,
"end": 459,
"text": "(Mavroeidis et al., 2005;",
"ref_id": "BIBREF11"
},
{
"start": 460,
"end": 483,
"text": "Bloehdorn et al., 2006)",
"ref_id": "BIBREF0"
},
{
"start": 568,
"end": 597,
"text": "S\u00e9aghdha and Copestake, 2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "WordNet 2 can be regarded as direct graphs semantically linking concepts by means of relations. Table 4 gives some similarity measures between two arbitrary concepts c 1 and c 2 . For our application, the word-toword similarity can be obtained by maximizing the corresponding concept-based similarity scores. In our implementation, we use WordNet::Similarity package 3 (Patwardhan et al., 2003) and the noun hierarchy of WordNet.",
"cite_spans": [
{
"start": 369,
"end": 394,
"text": "(Patwardhan et al., 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 96,
"end": 103,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "In Table 4 , dep is the length of path from a node to its global root, lso(c 1 , c 2 ) represents the lowest super-ordinate of c 1 and c 2 . The detail definitions can be found in (Budanitsky and Hirst, 2006) .",
"cite_spans": [
{
"start": 180,
"end": 208,
"text": "(Budanitsky and Hirst, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "As an alternative, Latent Semantic Analysis(LSA) is a technique. It calculates the words similarities by means of occurrence of terms in documents. Given a term-bydocument matrix X, its singular value decomposition is: X = U \u03a3V T , where \u03a3 is a diagonal matrix with singular values in decreasing arrangement. The column of U are singular vectors corresponding to the individual singular value. Then the latent semantic similarity kernel of terms t i and t j is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "sim LSA =< U i k (U j k ) T >",
"eq_num": "(8)"
}
],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "where U k = I k U is to project U onto its first k dimensions. I k is the identity matrix whose first k diagonal elements are 1 and all the other elements are 0. And U i k is the i-th row of the matrix U k . From equation 8, the LSAbased similarity between two terms is the inner product of the two projected vectors. The details of LSA can be found in (Cristianini et al., 2002; Choi et al., 2001 ).",
"cite_spans": [
{
"start": 353,
"end": 379,
"text": "(Cristianini et al., 2002;",
"ref_id": "BIBREF5"
},
{
"start": 380,
"end": 397,
"text": "Choi et al., 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.1.2"
},
{
"text": "In this set of experiment, we evaluate different types of kernels for Question Classification(QC) task. Table 5 : Classification accuracy of different kernels on different data sets this paper we use the same dataset as introduced in (Li and Roth, 2002) . The dataset is divided 4 into 5500 questions for training and 500 questions from TREC 20 for testing. The total training samples are randomly divided into 5 subsets with sizes 1,000, 2,000, 3,000, 4,000 and 5,500 respectively. All the questions are labeled into 6 coarse grained categories and 50 fine grained categories: Abbreviations (abbreviation and expansion), Entity (animal, body, color, creation, currency, medical, event, food, instrument, language, letter, plant, product, religion, sport, substance, symbol, technique, term, vehicle, word) , Description (definition, description, manner, reason) , Human (description, group, individual, title), Location (city, country, mountain, state) and Numeric (code, count, date, distance, money, order, percent, period, speed, temperature, size, weight).",
"cite_spans": [
{
"start": 234,
"end": 253,
"text": "(Li and Roth, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2.2"
},
{
"text": "In this paper, we compare the linear kernel based on bag-of-word (BOW), the original tree kernel (TK), the local alignment tree kernel (section 3, LATK) and its correspondences with LSA similarity and a set of semanticenriched LATK with different similarity metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2.2"
},
{
"text": "To obtain the parse tree, we use Charniak parser 5 for every question. Like the previous experiment, SVM-Light-TK software and the OVA strategy are implemented. In all experiments, we use the default parameter in SVM(e.g. margin parameter) and set \u03b1 = 1. In LSA model, we set k = 50. Finally, we use multi-classification accuracy to evaluate the performance. Table 5 gives the results of the experiments. We can see that the local alignment tree kernel increase the multi-classification accuracy of the basic tree kernel by about 0.4%. The introduction of semantic information further improves accuracy. Among WordNet-based metrics, \"Wu and Palmer\" metric achieves the best result, i.e. 92.5%. As a whole, the WordNet-based similarities perform better than LSA-based measurement.",
"cite_spans": [],
"ref_spans": [
{
"start": 359,
"end": 366,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "4.2.2"
},
{
"text": "In this paper, we propose a tree kernel calculation by allowing local alignments. More flexible productions are considered in line with modifiers in real sentences. Considering text related applications, words similarities have been merged into the presented tree kernel. These similarities can be derived from different WordNet-based metrics or document statistics. Finally experiments are carried on two different applications (Semantic Role Labeling and Question Classification).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "For further work, we plan to study exploiting semantic knowledge in the kernel. A promising direction is to study the different effects of these semantic similarities. We are interested in some distributional similarities (Lee, 1999) given certain context. Also the effectivenss of the semantic-enriched tree kernel in SRL is another problem.",
"cite_spans": [
{
"start": 222,
"end": 233,
"text": "(Lee, 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://wordnet.princeton.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://search.cpan.org/dist/WordNet-Similarity",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://l2r.cs.uiuc.edu/~cogcomp/Data/QA/QC/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "ftp://ftp.cs.brown.edu/pub/nlparser/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semantic kernels for text classification based on topological measures of feature similarity",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Bloehdorn",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Cammisa",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "ICDM '06: Proceedings of the Sixth International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "808--812",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Bloehdorn, Roberto Basili, Marco Cammisa, and Alessandro Moschitti. 2006. Semantic kernels for text classification based on topological measures of feature similarity. In ICDM '06: Proceedings of the Sixth International Conference on Data Mining, pages 808-812, Washington, DC, USA. IEEE Com- puter Society.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluating wordnet-based measures of lexical semantic relatedness",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Budanitsky",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "1",
"pages": "13--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Budanitsky and Graeme Hirst. 2006. Eval- uating wordnet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1):13- 47.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to the conll-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "X",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Marquez",
"suffix": ""
}
],
"year": 2005,
"venue": "CoNLL '05: Proceedings of the 9th Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Carreras and L. Marquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In CoNLL '05: Proceedings of the 9th Conference on Computational Natural Language Learning.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Latent semantic analysis for text segmentation",
"authors": [
{
"first": "Freddy",
"middle": [
"Y",
"Y"
],
"last": "Choi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Wiemer-Hastings",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freddy Y. Y. Choi, Peter Wiemer-Hastings, and Johanna Moore. 2001. Latent semantic analysis for text segmentation. In Proceedings of EMNLP, pages 109-117.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Nigel Duffy. 2002. New rank- ing algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In ACL, pages 263-270.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent semantic kernels",
"authors": [
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Huma",
"middle": [],
"last": "Lodhi",
"suffix": ""
}
],
"year": 2002,
"venue": "J. Intell. Inf. Syst",
"volume": "18",
"issue": "2-3",
"pages": "127--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nello Cristianini, John Shawe-Taylor, and Huma Lodhi. 2002. Latent semantic kernels. J. Intell. Inf. Syst., 18(2-3):127-152.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Dependency tree kernels for relation extraction",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL '04: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In ACL '04: Proceedings of the 42nd Annual Meeting on Asso- ciation for Computational Linguistics, pages 423- 429, Morristown, NJ, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolution kernels on discrete structures",
"authors": [
{
"first": "David",
"middle": [],
"last": "Haussler",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Haussler. 1999. Convolution kernels on discrete structures. Technical report.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Flexible tree kernels based on counting the number of tree mappings",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Kuboyama",
"suffix": ""
},
{
"first": "Kilho",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Hisashi",
"middle": [],
"last": "Kashima",
"suffix": ""
}
],
"year": 2006,
"venue": "ECML/PKDD Workshop on Mining and Learning with Graphs",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Kuboyama, Kilho Shin, and Hisashi Kashima. 2006. Flexible tree kernels based on counting the number of tree mappings. In ECML/PKDD Work- shop on Mining and Learning with Graphs.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Measures of distributional similarity",
"authors": [
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1999,
"venue": "37th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lillian Lee. 1999. Measures of distributional similar- ity. In 37th Annual Meeting of the Association for Computational Linguistics, pages 25-32.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning question classifiers",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Li and Dan Roth. 2002. Learning question clas- sifiers. In Proceedings of the 19th international conference on Computational linguistics, pages 1- 7, Morristown, NJ, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Word sense disambiguation for exploiting hierarchical thesauri in text classification",
"authors": [
{
"first": "Dimitrios",
"middle": [],
"last": "Mavroeidis",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Tsatsaronis",
"suffix": ""
},
{
"first": "Michalis",
"middle": [],
"last": "Vazirgiannis",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Theobald",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2005,
"venue": "Knowledge discovery in databases: PKDD 2005 : 9th European Conference on Principles and Practice of Knowledge Discovery in Databases",
"volume": "3721",
"issue": "",
"pages": "181--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitrios Mavroeidis, George Tsatsaronis, Michalis Vazirgiannis, Martin Theobald, and Gerhard Weikum. 2005. Word sense disambiguation for exploiting hierarchical thesauri in text classification. In Al\u00edpio Jorge, Lu\u00eds Torgo, Pavel Brazdil, Rui Camacho, and Gama Joao, editors, Knowledge discovery in databases: PKDD 2005 : 9th Eu- ropean Conference on Principles and Practice of Knowledge Discovery in Databases, volume 3721 of Lecture Notes in Computer Science, pages 181-192, Porto, Portugal. Springer.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tree kernels for semantic role labeling",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2008,
"venue": "Comput. Linguist",
"volume": "34",
"issue": "2",
"pages": "193--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti, Daniele Pighin, and Roberto Basili. 2008. Tree kernels for semantic role label- ing. Comput. Linguist., 34(2):193-224.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A study on convolution kernels for shallow semantic parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL '04: Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "335--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti. 2004. A study on convolution kernels for shallow semantic parsing. In ACL '04: Proceedings of the 42nd Annual Meeting on Asso- ciation for Computational Linguistics, pages 335- 342, Morristown, NJ, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Semantic classification with distributional kernels",
"authors": [
{
"first": "Diarmuid",
"middle": [],
"last": "\u00d3 S\u00e9aghdha",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "649--656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diarmuid \u00d3 S\u00e9aghdha and Ann Copestake. 2008. Se- mantic classification with distributional kernels. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 649-656, Manchester, UK, August. Coling 2008 Or- ganizing Committee.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using measures of semantic relatedness for word sense disambiguation",
"authors": [
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics (CICLING-03)",
"volume": "",
"issue": "",
"pages": "241--257",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siddharth Patwardhan, Satanjeev Banerjee, and Ted Pedersen. 2003. Using measures of semantic re- latedness for word sense disambiguation. In In Pro- ceedings of the Fourth International Conference on Intelligent Text Processing and Computational Lin- guistics (CICLING-03), pages 241-257.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Exploiting constituent dependencies for tree kernel-based semantic relation extraction",
"authors": [
{
"first": "Longhua",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Fang",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Qiaoming",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Peide",
"middle": [],
"last": "Qian",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "697--704",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 697-704, Manchester, UK, August. Coling 2008 Organizing Committee.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Protein homology detection using string alignment kernels",
"authors": [
{
"first": "Hiroto",
"middle": [],
"last": "Saigo",
"suffix": ""
},
{
"first": "Jean-Philippe",
"middle": [],
"last": "Vert",
"suffix": ""
},
{
"first": "Nobuhisa",
"middle": [],
"last": "Ueda",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Akutsu",
"suffix": ""
}
],
"year": 2004,
"venue": "Bioinformatics",
"volume": "20",
"issue": "11",
"pages": "1682--1689",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hiroto Saigo, Jean-Philippe Vert, Nobuhisa Ueda, and Tatsuya Akutsu. 2004. Protein homology detec- tion using string alignment kernels. Bioinformatics, 20(11):1682-1689.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A generalization of haussler's convolution kernel: mapping kernel",
"authors": [
{
"first": "Kilho",
"middle": [],
"last": "Shin",
"suffix": ""
},
{
"first": "Tetsuji",
"middle": [],
"last": "Kuboyama",
"suffix": ""
}
],
"year": 2008,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "944--951",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilho Shin and Tetsuji Kuboyama. 2008. A gener- alization of haussler's convolution kernel: mapping kernel. In ICML, pages 944-951.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Calibrating features for semantic role labeling",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue and Martha Palmer. 2004. Calibrat- ing features for semantic role labeling. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 88-94, Barcelona, Spain, July. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "Chinatsu",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation ex- traction. J. Mach. Learn. Res., 3:1083-1106.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A grammar-driven convolution tree kernel for semantic role classification",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Chew",
"middle": [
"Lim"
],
"last": "Tan",
"suffix": ""
},
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "200--207",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Zhang, Wanxiang Che, Aiti Aw, Chew Lim Tan, Guodong Zhou, Ting Liu, and Sheng Li. 2007. A grammar-driven convolution tree kernel for se- mantic role classification. In Proceedings of the 45th Annual Meeting of the Association of Compu- tational Linguistics, pages 200-207, Prague, Czech Republic, June. Association for Computational Lin- guistics.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>: Performance comparison between</td></tr><tr><td>different kernel performance on WSJ data</td></tr><tr><td>1 http://dit.unitn.it/ moschitt/Tree-Kernel.htm</td></tr></table>"
},
"TABREF3": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>: top: overall performance result on</td></tr><tr><td>data sets ; bottom: detail result on WSJ</td></tr><tr><td>data</td></tr></table>"
},
"TABREF4": {
"text": "gives our performance on data sets and the detail result on WSJ test data.Similarity DefinitionWu and Palmer sim W U P (c 1 , c 2",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
},
"TABREF5": {
"text": "",
"html": null,
"type_str": "table",
"num": null,
"content": "<table><tr><td>: popular semantic similarity measurements</td></tr><tr><td>4.2 Question Classification</td></tr><tr><td>4.2.1 Semantic-enriched Tree Kernel</td></tr></table>"
}
}
}
}