{
"paper_id": "P04-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:45:00.830539Z"
},
"title": "A Study on Convolution Kernels for Shallow Semantic Parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Texas at Dallas Human Language Technology Research Institute Richardson",
"location": {
"postCode": "75083-0688",
"region": "TX",
"country": "USA"
}
},
"email": "alessandro.moschitti@utdallas.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we have designed and experimented novel convolution kernels for automatic classification of predicate arguments. Their main property is the ability to process structured representations. Support Vector Machines (SVMs), using a combination of such kernels and the flat feature kernel, classify Prop-Bank predicate arguments with accuracy higher than the current argument classification stateof-the-art. Additionally, experiments on FrameNet data have shown that SVMs are appealing for the classification of semantic roles even if the proposed kernels do not produce any improvement.",
"pdf_parse": {
"paper_id": "P04-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we have designed and experimented novel convolution kernels for automatic classification of predicate arguments. Their main property is the ability to process structured representations. Support Vector Machines (SVMs), using a combination of such kernels and the flat feature kernel, classify Prop-Bank predicate arguments with accuracy higher than the current argument classification stateof-the-art. Additionally, experiments on FrameNet data have shown that SVMs are appealing for the classification of semantic roles even if the proposed kernels do not produce any improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Several linguistic theories, e.g. (Jackendoff, 1990) claim that semantic information in natural language texts is connected to syntactic structures. Hence, to deal with natural language semantics, the learning algorithm should be able to represent and process structured data. The classical solution adopted for such tasks is to convert syntax structures into flat feature representations which are suitable for a given learning model. The main drawback is that structures may not be properly represented by flat features.",
"cite_spans": [
{
"start": 34,
"end": 52,
"text": "(Jackendoff, 1990)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, these problems affect the processing of predicate argument structures annotated in PropBank (Kingsbury and Palmer, 2002) or FrameNet (Fillmore, 1982) . Figure 1 shows an example of a predicate annotation in PropBank for the sentence: \"Paul gives a lecture in Rome\". A predicate may be a verb or a noun or an adjective and most of the time Arg 0 is the logical subject, Arg 1 is the logical object and ArgM may indicate locations, as in our example.",
"cite_spans": [
{
"start": 107,
"end": 135,
"text": "(Kingsbury and Palmer, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 148,
"end": 164,
"text": "(Fillmore, 1982)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 167,
"end": 176,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "FrameNet also describes predicate/argument structures but for this purpose it uses richer semantic structures called frames. These latter are schematic representations of situations involving various participants, properties and roles in which a word may be typically used. Frame elements or semantic roles are arguments of predicates called target words. In FrameNet, the argument names are local to a particular frame. Several machine learning approaches for argument identification and classification have been developed (Gildea and Jurasfky, 2002; Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003) . Their common characteristic is the adoption of feature spaces that model predicate-argument structures in a flat representation. On the contrary, convolution kernels aim to capture structural information in term of sub-structures, providing a viable alternative to flat features.",
"cite_spans": [
{
"start": 524,
"end": 551,
"text": "(Gildea and Jurasfky, 2002;",
"ref_id": "BIBREF4"
},
{
"start": 552,
"end": 576,
"text": "Gildea and Palmer, 2002;",
"ref_id": "BIBREF5"
},
{
"start": 577,
"end": 599,
"text": "Surdeanu et al., 2003;",
"ref_id": "BIBREF12"
},
{
"start": 600,
"end": 622,
"text": "Hacioglu et al., 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we select portions of syntactic trees, which include predicate/argument salient sub-structures, to define convolution kernels for the task of predicate argument classification. In particular, our kernels aim to (a) represent the relation between predicate and one of its arguments and (b) to capture the overall argument structure of the target predicate. Additionally, we define novel kernels as combinations of the above two with the polynomial kernel of standard flat features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments on Support Vector Machines using the above kernels show an improvement of the state-of-the-art for PropBank argument classification. On the contrary, FrameNet semantic parsing seems to not take advantage of the structural information provided by our kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organized as follows: Section 2 defines the Predicate Argument Extraction problem and the standard solution to solve it. In Section 3 we present our kernels whereas in Section 4 we show comparative results among SVMs using standard features and the proposed kernels. Finally, Section 5 summarizes the conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Given a sentence in natural language and the target predicates, all arguments have to be recognized. This problem can be divided into two subtasks: (a) the detection of the argument boundaries, i.e. all its compounding words and (b) the classification of the argument type, e.g. Arg0 or ArgM in PropBank or Agent and Goal in FrameNet. The standard approach to learn both detection and classification of predicate arguments is summarized by the following steps: 1. Given a sentence from the training-set generate a full syntactic parse-tree; 2. let P and A be the set of predicates and the set of parse-tree nodes (i.e. the potential arguments), respectively; 3. for each pair <p, a> \u2208 P \u00d7 A:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "\u2022 extract the feature representation set, F p,a ;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "\u2022 if the subtree rooted in a covers exactly the words of one argument of p, put F p,a in T + (positive examples), otherwise put it in T \u2212 (negative examples). For example, in Figure 1 , for each combination of the predicate give with the nodes N, S, VP, V, NP, PP, D or IN the instances F \"give\",a are generated. In case the node a exactly covers Paul, a lecture or in Rome, it will be a positive instance otherwise it will be a negative one, e.g. F \"give\",\"IN \" .",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "To learn the argument classifiers the T + set can be re-organized as positive T + arg i and negative T \u2212 arg i examples for each argument i. In this way, an individual ONE-vs-ALL classifier for each argument i can be trained. We adopted this solution as it is simple and effective (Hacioglu et al., 2003) . In the classification phase, given a sentence of the test-set, all its F p,a are generated and classified by each individ-ual classifier. As a final decision, we select the argument associated with the maximum value among the scores provided by the SVMs, i.e. argmax i\u2208S C i , where S is the target set of arguments.",
"cite_spans": [
{
"start": 281,
"end": 304,
"text": "(Hacioglu et al., 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
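The ONE-vs-ALL decision rule just described can be sketched in a few lines. The function name `one_vs_all_decide` and the numeric scores below are our illustrative stand-ins, not code or values from the paper; real decision values would come from the per-argument SVMs.

```python
# Minimal sketch of the ONE-vs-ALL decision rule: each argument type i
# has its own classifier C_i, and the final label is argmax_i C_i(x).

def one_vs_all_decide(scores):
    """Pick the argument label whose classifier gives the highest score.

    scores: dict mapping argument label -> SVM decision value (stubbed here).
    """
    return max(scores, key=scores.get)

# Hypothetical decision values for one candidate node F_{p,a}:
scores = {"Arg0": -0.8, "Arg1": 1.3, "ArgM": 0.2}
label = one_vs_all_decide(scores)  # the label with the maximum score
```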
{
"text": "-Phrase Type: This feature indicates the syntactic type of the phrase labeled as a predicate argument, e.g. NP for Arg1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "-Parse Tree Path: This feature contains the path in the parse tree between the predicate and the argument phrase, expressed as a sequence of nonterminal labels linked by direction (up or down) symbols, e.g. V \u2191 VP \u2193 NP for Arg1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "-Position: Indicates if the constituent, i.e. the potential argument, appears before or after the predicate in the sentence, e.g. after for Arg1 and before for Arg0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "-Voice: This feature distinguishes between active or passive voice for the predicate phrase, e.g. active for every argument.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "-Head Word : This feature contains the headword of the evaluated phrase. Case and morphological information are preserved, e.g. lecture for Arg1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "-Governing Category indicates if an NP is dominated by a sentence phrase or by a verb phrase, e.g. the NP associated with Arg1 is dominated by a VP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
{
"text": "-Predicate Word : This feature consists of two components: (1) the word itself, e.g. gives for all arguments; and (2) the lemma which represents the verb normalized to lower case and infinitive form, e.g. give for all arguments. Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 237,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Predicate Argument Extraction: a standard approach",
"sec_num": "2"
},
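The feature list above can be made concrete with a small sketch. The toy `Node` class and `path_feature` helper are our own illustrative constructions (not the authors' code), computing the Parse Tree Path V↑VP↓NP for the running example "Paul gives a lecture in Rome" (the PP subtree is elided for brevity).

```python
# Hand-built stand-in for a parser's output, used to illustrate the
# Parse Tree Path feature described above.

class Node:
    def __init__(self, label, children=()):
        self.label, self.children, self.parent = label, list(children), None
        for c in self.children:
            c.parent = self

def path_feature(pred, arg):
    """Labels from the predicate up to the lowest common ancestor (joined
    by '↑'), then down to the argument node (joined by '↓')."""
    ancestors = []
    n = pred
    while n is not None:            # collect the predicate's ancestors
        ancestors.append(n)
        n = n.parent
    down = []
    n = arg
    while n not in ancestors:       # climb from the argument to the LCA
        down.append(n)
        n = n.parent
    up = []
    m = pred
    while m is not n:               # climb from the predicate to the LCA
        up.append(m.label)
        m = m.parent
    up.append(n.label)
    return "↑".join(up) + "↓" + "↓".join(x.label for x in reversed(down))

# toy parse of "Paul gives a lecture in Rome" (PP subtree elided):
v = Node("V", [Node("gives")])
np_arg1 = Node("NP", [Node("D", [Node("a")]), Node("N", [Node("lecture")])])
vp = Node("VP", [v, np_arg1])
s = Node("S", [Node("NP", [Node("N", [Node("Paul")])]), vp])
```

For the Arg1 node this yields the path V↑VP↓NP, matching the example in the feature description.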
{
"text": "The discovery of relevant features is, as usual, a complex task, nevertheless, there is a common consensus on the basic features that should be adopted. These standard features, firstly proposed in (Gildea and Jurasfky, 2002) , refer to a flat information derived from parse trees, i.e. Phrase Type, Predicate Word, Head Word, Governing Category, Position and Voice. Table 1 presents the standard features and exemplifies how they are extracted from the parse tree in Figure 1 .",
"cite_spans": [
{
"start": 198,
"end": 225,
"text": "(Gildea and Jurasfky, 2002)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 367,
"end": 374,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 468,
"end": 476,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Standard feature space",
"sec_num": "2.1"
},
{
"text": "For example, the Parse Tree Path feature represents the path in the parse-tree between a predicate node and one of its argument nodes. It is expressed as a sequence of nonterminal labels linked by direction symbols (up or down), e.g. in Figure 1 , V\u2191VP\u2193NP is the path between the predicate to give and the argument 1, a lecture. Two pairs <p 1 , a 1 > and <p 2 , a 2 > have two different Path features even if the paths differ only for a node in the parse-tree. This pre- vents the learning algorithm to generalize well on unseen data. In order to address this problem, the next section describes a novel kernel space for predicate argument classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Standard feature space",
"sec_num": "2.1"
},
{
"text": "Given a vector space in n and a set of positive and negative points, SVMs classify vectors according to a separating hyperplane, H( x) = w \u00d7 x + b = 0, where w \u2208 n and b \u2208 are learned by applying the Structural Risk Minimization principle (Vapnik, 1995) .",
"cite_spans": [
{
"start": 239,
"end": 253,
"text": "(Vapnik, 1995)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
{
"text": "To apply the SVM algorithm to Predicate Argument Classification, we need a function \u03c6 : F \u2192 n to map our features space F = {f 1 , .., f |F | } and our predicate/argument pair representation, F p,a = F z , into n , such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
{
"text": "F z \u2192 \u03c6(F z ) = (\u03c6 1 (F z ), .., \u03c6 n (F z ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
{
"text": "From the kernel theory we have that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
{
"text": "H( x) = i=1..l \u03b1 i x i \u2022 x + b = i=1..l \u03b1 i x i \u2022 x + b = = i=1..l \u03b1 i \u03c6(F i ) \u2022 \u03c6(F z ) + b.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
{
"text": "where F_i \u2200i \u2208 {1, .., l} are the training instances and the product K(F_i, F_z) = <\u03c6(F_i) \u00b7 \u03c6(F_z)> is the kernel function associated with the mapping \u03c6. The simplest mapping that we can apply is \u03c6(F_z) = z = (z_1, ..., z_n), where z_i = 1 if f_i \u2208 F_z and z_i = 0 otherwise, i.e. the characteristic vector of the set F_z with respect to F. If we choose the scalar product as kernel function, we obtain the linear kernel K_L(F_x, F_z) = x \u00b7 z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
{
"text": "Another function, which is the current state-of-the-art for predicate argument classification, is the polynomial kernel K_p(F_x, F_z) = (c + x \u00b7 z)^d, where c is a constant and d is the degree of the polynomial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Support Vector Machine approach",
"sec_num": "2.2"
},
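A minimal sketch of the flat-feature kernels just defined, assuming a toy feature inventory: `phi` builds the characteristic vector over F, and `k_linear`/`k_poly` implement K_L and K_p. The feature names and the constants c and d below are illustrative choices, not values from the paper.

```python
# Characteristic-vector mapping and the linear / polynomial kernels
# over flat feature sets, as defined in the text.

def phi(F_z, F):
    """Characteristic vector of the feature set F_z w.r.t. the space F."""
    return [1 if f in F_z else 0 for f in F]

def k_linear(Fx, Fz, F):
    """K_L(F_x, F_z) = x . z, i.e. the number of shared features."""
    return sum(a * b for a, b in zip(phi(Fx, F), phi(Fz, F)))

def k_poly(Fx, Fz, F, c=1, d=2):
    """K_p(F_x, F_z) = (c + x . z)^d."""
    return (c + k_linear(Fx, Fz, F)) ** d

# toy feature space and two hypothetical predicate/argument instances:
F = ["pt=NP", "pos=after", "voice=active", "hw=lecture"]
Fx = {"pt=NP", "pos=after", "hw=lecture"}
Fz = {"pt=NP", "pos=after", "voice=active"}
```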
{
"text": "We propose two different convolution kernels associated with two different predicate argu-ment sub-structures: the first includes the target predicate with one of its arguments. We will show that it contains almost all the standard feature information. The second relates to the sub-categorization frame of verbs. In this case, the kernel function aims to cluster together verbal predicates which have the same syntactic realizations. This provides the classification algorithm with important clues about the possible set of arguments suited for the target syntactic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Kernels for Semantic Parsing",
"sec_num": "3"
},
{
"text": "3.1 Predicate/Argument Feature (PAF) We consider the predicate argument structures annotated in PropBank or FrameNet as our semantic space. The smallest sub-structure which includes one predicate with only one of its arguments defines our structural feature. For example, Figure 2 illustrates the parse-tree of the sentence \"Paul delivers a talk in formal style\". The circled substructures in (a), (b) and (c) are our semantic objects associated with the three arguments of the verb to deliver, i.e. <deliver, Arg0 >, <deliver, Arg1 > and <deliver, ArgM >. Note that each predicate/argument pair is associated with only one structure, i.e. F p,a contain only one of the circled sub-trees. Other important properties are the followings:",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 280,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Convolution Kernels for Semantic Parsing",
"sec_num": "3"
},
{
"text": "(1) The overall semantic feature space F contains sub-structures composed of syntactic information embodied by parse-tree dependencies and semantic information under the form of predicate/argument annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Kernels for Semantic Parsing",
"sec_num": "3"
},
{
"text": "(2) This solution is efficient as we have to classify as many nodes as the number of predicate arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Kernels for Semantic Parsing",
"sec_num": "3"
},
{
"text": "(3) A constituent cannot be part of two different arguments of the target predicate, i.e. there is no overlapping between the words of two arguments. Thus, two semantic structures F p 1 ,a 1 and F p 2 ,a 2 1 , associated with two different ar- guments, cannot be included one in the other. This property is important because a convolution kernel would not be effective to distinguish between an object and its sub-parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolution Kernels for Semantic Parsing",
"sec_num": "3"
},
{
"text": "The above object space aims to capture all the information between a predicate and one of its arguments. Its main drawback is that important structural information related to interargument dependencies is neglected. In order to solve this problem we define the Sub-Categorization Feature (SCF). This is the subparse tree which includes the sub-categorization frame of the target verbal predicate. For example, Figure 3 shows the parse tree of the sentence \"He flushed the pan and buckled his belt\". The solid line describes the SCF of the predicate flush, i.e. F f lush whereas the dashed line tailors the SCF of the predicate buckle, i.e. F buckle . Note that SCFs are features for predicates, (i.e. they describe predicates) whereas PAF characterizes predicate/argument pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 410,
"end": 418,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Sub-Categorization Feature (SCF)",
"sec_num": "3.2"
},
{
"text": "Once semantic representations are defined, we need to design a kernel function to estimate the similarity between our objects. As suggested in Section 2 we can map them into vectors in n and evaluate implicitly the scalar product among them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sub-Categorization Feature (SCF)",
"sec_num": "3.2"
},
{
"text": "Kernel (PAK) Given the semantic objects defined in the previous section, we design a convolution kernel in a way similar to the parse-tree kernel proposed in (Collins and Duffy, 2002) . We divide our mapping \u03c6 in two steps: (1) from the semantic structure space F (i.e. PAF or SCF objects) to the set of all their possible sub-structures element in Fp,a with an abuse of notation we use it to indicate the objects themselves. ",
"cite_spans": [
{
"start": 158,
"end": 183,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "F = {f 1 , .., f |F | } and (2) from F to |F | .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "An example of features in F is given in Figure 4 where the whole set of fragments, F deliver,Arg1 , of the argument structure F deliver,Arg1 , is shown (see also Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 40,
"end": 48,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 162,
"end": 170,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "It is worth noting that the allowed sub-trees contain the entire (not partial) production rules. is not considered part of the semantic structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "Thus, in step 1, an argument structure F p,a is mapped in a fragment set F p,a . In step 2, this latter is mapped into x = (x 1 , .., x |F | ) \u2208 |F | , where x i is equal to the number of times that f i occurs in F p,a 2 . In order to evaluate K(\u03c6(F x ), \u03c6(F z )) without evaluating the feature vector x and z we define the indicator function I i (n) = 1 if the substructure i is rooted at node n and 0 otherwise. It follows that \u03c6 i (F x ) = n\u2208Nx I i (n), where N x is the set of the F x 's nodes. Therefore, the kernel can be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "K(\u03c6(F x ), \u03c6(F z )) = |F | i=1 ( nx\u2208Nx I i (n x ))( nz\u2208Nz I i (n z )) = nx\u2208Nx nz\u2208Nz i I i (n x )I i (n z )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "where N x and N z are the nodes in F x and F z , respectively. In (Collins and Duffy, 2002) , it has been shown that",
"cite_spans": [
{
"start": 66,
"end": 91,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "i I i (n x )I i (n z ) = \u2206(n x , n z ) can be computed in O(|N x | \u00d7 |N z |)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "by the following recursive relation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "(1) if the productions at n x and n z are different then \u2206(n x , n z ) = 0;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "(2) if the productions at n x and n z are the same, and n x and n z are pre-terminals then \u2206(n x , n z ) = 1;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "(3) if the productions at n x and n z are the same, and n x and n z are not pre-terminals then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "\u2206(n x , n z ) = nc(nx) j=1 (1 + \u2206(ch(n x , j), ch(n z , j))),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "where nc(n x ) is the number of the children of n x and ch(n, i) is the i-th child of the node n. Note that as the productions are the same ch(n x , i) = ch(n z , i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "This kind of kernel has the drawback of assigning more weight to larger structures while the argument type does not strictly depend on the size of the argument (Moschitti and Bejan, 2004) . To overcome this problem we can scale the relative importance of the tree fragments using a parameter \u03bb for the cases (2) and (3), i.e. \u2206(n x , n z ) = \u03bb and \u2206(n x , n z ) = \u03bb",
"cite_spans": [
{
"start": 160,
"end": 187,
"text": "(Moschitti and Bejan, 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "nc(nx) j=1 (1 + \u2206(ch(n x , j), ch(n z , j))) respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
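The Δ recursion and the λ scaling above can be sketched compactly. The tuple-based tree encoding and the function names below are our own choices (not the paper's implementation): a tree is a `(label, children)` tuple, and the production at a node is its label plus the tuple of its children's labels.

```python
# Compact sketch of the Delta recursion (cases 1-3) with lambda scaling.

def production(node):
    """The production rule expanded at a node: label -> child labels."""
    label, children = node
    return (label, tuple(c[0] for c in children))

def delta(nx, nz, lam=1.0):
    if production(nx) != production(nz):
        return 0.0                        # case (1): different productions
    if not any(c[1] for c in nx[1]):      # case (2): pre-terminal node
        return lam
    prod = lam                            # case (3): product over children
    for a, b in zip(nx[1], nz[1]):
        prod *= 1.0 + delta(a, b, lam)
    return prod

def pak(tx, tz, lam=1.0):
    """Sum Delta over all pairs of internal nodes, i.e. the
    (lambda-weighted) count of sub-structures the two trees share."""
    def internal(t):
        if t[1]:
            yield t
            for c in t[1]:
                yield from internal(c)
    return sum(delta(a, b, lam) for a in internal(tx) for b in internal(tz))

# Toy stand-ins for the two PAF structures of Figure 2, which share
# only the [V delivers] fragment:
f_arg0 = ("S", (("NP", (("N", (("Paul", ()),)),)),
                ("VP", (("V", (("delivers", ()),)),))))
f_arg1 = ("VP", (("V", (("delivers", ()),)),
                 ("NP", (("D", (("a", ()),)), ("N", (("talk", ()),))))))
```

With λ = 1 the kernel value for these two toy structures is 1, mirroring the similarity-of-1 example discussed in Section 3.4.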
{
"text": "It is worth noting that even if the above equations define a kernel function similar to the one proposed in (Collins and Duffy, 2002) , the substructures on which it operates are different from the parse-tree kernel. For example, Figure 4 shows that structures such as [ ]] are valid features, but these fragments (and many others) are not generated by a complete production, i.e. VP \u2192 V NP PP. As a consequence they would not be included in the parse-tree kernel of the sentence.",
"cite_spans": [
{
"start": 108,
"end": 133,
"text": "(Collins and Duffy, 2002)",
"ref_id": "BIBREF0"
},
{
"start": 270,
"end": 271,
"text": "[",
"ref_id": null
}
],
"ref_spans": [
{
"start": 230,
"end": 239,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Predicate/Argument structure",
"sec_num": "3.3"
},
{
"text": "Features In this section we compare standard features with the kernel based representation in order to derive useful indications for their use:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "First, PAK estimates a similarity between two argument structures (i.e., PAF or SCF) by counting the number of sub-structures that are in common. As an example, the similarity between the two structures in Figure 2 , F \"delivers\",Arg0 and F \"delivers\",Arg1 , is equal to 1 since they have in common only the [V delivers] substructure. Such low value depends on the fact that different arguments tend to appear in different structures.",
"cite_spans": [],
"ref_spans": [
{
"start": 206,
"end": 215,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "On the contrary, if two structures differ only for a few nodes (especially terminals or near terminal nodes) the similarity remains quite high. For example, if we change the tense of the verb to deliver (Figure 2) , where the NP is unchanged. Thus, the similarity with the previous structure will be quite high as:",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 213,
"text": "(Figure 2)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "(1) the NP with all sub-parts will be matched and (2) the small difference will not highly affect the kernel norm and consequently the final score. The above property also holds for the SCF structures. For example, in Figure 3 , K P AK (\u03c6(F f lush ), \u03c6(F buckle )) is quite high as the two verbs have the same syntactic realization of their arguments. In general, flat features do not possess this conservative property. For example, the Parse Tree Path is very sensible to small changes of parse-trees, e.g. two predicates, expressed in different tenses, generate two different Path features.",
"cite_spans": [],
"ref_spans": [
{
"start": 218,
"end": 227,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "Second, some information contained in the standard features is embedded in PAF: Phrase Type, Predicate Word and Head Word explicitly appear as structure fragments. For example, in The same is not true for SCF since it does not contain information about a specific argument. SCF, in fact, aims to characterize the predicate with respect to the overall argument structures rather than a specific pair <p, a>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "Third, Governing Category, Position and Voice features are not explicitly contained in both PAF and SCF. Nevertheless, SCF may allow the learning algorithm to detect the active/passive form of verbs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "Finally, from the above observations follows that the PAF representation may be used with PAK to classify arguments. On the contrary, SCF lacks important information, thus, alone it may be used only to classify verbs in syntactic categories. This suggests that SCF should be used in conjunction with standard features to boost their classification performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Standard",
"sec_num": "3.4"
},
{
"text": "The aim of our experiments are twofold: On the one hand, we study if the PAF representation produces an accuracy higher than standard features. On the other hand, we study if SCF can be used to classify verbs according to their syntactic realization. Both the above aims can be carried out by combining PAF and SCF with the standard features. For this purpose we adopted two ways to combine kernels 3 : (1) K = K 1 \u2022 K 2 and (2) K = \u03b3K 1 + K 2 . The resulting set of kernels used in the experiments is the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
{
"text": "\u2022 K p d is the polynomial kernel with degree d over the standard features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
{
"text": "\u2022 K P AF is obtained by using PAK function over the PAF structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
{
"text": "\u2022 K_PAF+P = \u03b3 K_PAF/|K_PAF| + K_pd/|K_pd|, i.e. the sum of the normalized PAF-based kernel and the normalized polynomial kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
{
"text": "\u2022 K_PAF\u00b7P = K_PAF \u00b7 K_pd / (|K_PAF| \u00b7 |K_pd|), i.e. the normalized product of the PAF-based kernel and the polynomial kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
{
"text": "\u2022 K_SCF+P = \u03b3 K_SCF/|K_SCF| + K_pd/|K_pd|, i.e. the sum of the normalized SCF-based kernel and the normalized polynomial kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
{
"text": "\u2022 K_{SCF\u2022P} = (K_SCF \u2022 K^p_d)/(|K_SCF| \u2022 |K^p_d|), i.e. the normalized product of the SCF-based kernel and the polynomial kernel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Experiments",
"sec_num": "4"
},
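The normalized combinations listed above can be sketched as follows (a NumPy illustration over toy Gram matrices of our own; the `normalize` helper implements K(x, z) / sqrt(K(x, x) * K(z, z)), which is what the K/|K| notation denotes):

```python
import numpy as np

def normalize(K):
    """Kernel normalization: K'(x, z) = K(x, z) / sqrt(K(x, x) * K(z, z))."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def kernel_sum(K_struct, K_poly, gamma=1.0):
    """Summation variant, e.g. gamma * K_SCF/|K_SCF| + K^p_d/|K^p_d|."""
    return gamma * normalize(K_struct) + normalize(K_poly)

def kernel_prod(K_struct, K_poly):
    """Product variant, e.g. (K_SCF * K^p_d) / (|K_SCF| * |K^p_d|)."""
    return normalize(K_struct) * normalize(K_poly)

# Toy Gram matrices standing in for the structural and polynomial kernels.
K_scf = np.array([[4.0, 2.0], [2.0, 4.0]])
K_p = np.array([[9.0, 3.0], [3.0, 9.0]])
K_combined = kernel_prod(K_scf, K_p)
# After normalization every self-similarity is 1, so the diagonal is all 1s.
```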
{
"text": "The above kernels were experimented over two corpora: PropBank (www.cis.upenn.edu/\u223cace) along with Penn TreeBank 5 2 (Marcus et al., 1993) and FrameNet.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora set-up",
"sec_num": "4.1"
},
{
"text": "PropBank contains about 53,700 sentences and a fixed split between training and testing which has been used in other researches e.g., (Gildea and Palmer, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003) . In this split, Sections from 02 to 21 are used for training, section 23 for testing and sections 1 and 22 as developing set. We considered all PropBank arguments 6 from Arg0 to Arg9, ArgA and ArgM for a total of 122,774 and 7,359 arguments in training and testing respectively. It is worth noting that in the experiments we used the gold standard parsing from Penn TreeBank, thus our kernel structures are derived with high precision.",
"cite_spans": [
{
"start": 134,
"end": 159,
"text": "(Gildea and Palmer, 2002;",
"ref_id": "BIBREF5"
},
{
"start": 160,
"end": 182,
"text": "Surdeanu et al., 2003;",
"ref_id": "BIBREF12"
},
{
"start": 183,
"end": 205,
"text": "Hacioglu et al., 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora set-up",
"sec_num": "4.1"
},
{
"text": "For the FrameNet corpus (www.icsi.berkeley.edu/\u223cframenet) we extracted all 24,558 sentences from the 40 frames of the Senseval 3 task (www.senseval.org) for the Automatic Labeling of Semantic Roles. We considered 18 of the most frequent roles and mapped together those having the same name. Only verbs were selected as predicates in our evaluations. Moreover, as no fixed split between training and testing exists, we randomly selected 30% of the sentences for testing and 70% for training. Additionally, 30% of the training data was used as a validation set. The sentences were processed with Collins' parser (Collins, 1997) to generate parse-trees automatically.",
"cite_spans": [
{
"start": 610,
"end": 625,
"text": "(Collins, 1997)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora set-up",
"sec_num": "4.1"
},
{
"text": "The classifier evaluations were carried out using the SVM-light software (Joachims, 1999) available at svmlight.joachims.org with the default polynomial kernel for standard feature evaluations. To process PAF and SCF, we implemented our own kernels and we used them inside SVM-light. The classification performances were evaluated using the f 1 measure 7 for single arguments and the accuracy for the final multi-class classifier. This latter choice allows us to compare the results with previous literature works, e.g. (Gildea and Jurasfky, 2002; Surdeanu et al., 2003; Hacioglu et al., 2003) .",
"cite_spans": [
{
"start": 73,
"end": 89,
"text": "(Joachims, 1999)",
"ref_id": "BIBREF7"
},
{
"start": 520,
"end": 547,
"text": "(Gildea and Jurasfky, 2002;",
"ref_id": "BIBREF4"
},
{
"start": 548,
"end": 570,
"text": "Surdeanu et al., 2003;",
"ref_id": "BIBREF12"
},
{
"start": 571,
"end": 593,
"text": "Hacioglu et al., 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification set-up",
"sec_num": "4.2"
},
{
"text": "For the evaluation of SVMs, we used the default regularization parameter (e.g., C = 1 for normalized kernels) and tried a few cost-factor values (i.e., j \u2208 {0.1, 1, 2, 3, 4, 5}) to adjust the trade-off between Precision and Recall. We chose the parameters by evaluating SVMs with the K^p_3 kernel on the validation set. Both the \u03bb (see Section 3.3) and \u03b3 parameters were tuned in a similar way, by maximizing the performance of SVMs using K_PAF and \u03b3 K_SCF/|K_SCF| + K^p_d/|K^p_d| respectively. These parameters were then adopted for all the other kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Classification set-up",
"sec_num": "4.2"
},
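The validation-driven choice of the gamma parameter can be sketched generically (a hypothetical helper of ours; `evaluate` stands in for the actual train-and-score loop on the validation set, which the paper runs with SVM-light):

```python
def select_gamma(candidates, evaluate):
    """Pick the gamma that maximizes validation performance.

    `evaluate(gamma)` is a stand-in for training an SVM with
    K = gamma * K1/|K1| + K2/|K2| and returning validation accuracy.
    """
    return max(candidates, key=evaluate)

# Toy stand-in scores in place of real train/validate runs:
scores = {0.1: 0.81, 0.3: 0.84, 1.0: 0.86, 3.0: 0.83}
best_gamma = select_gamma(scores, scores.get)
```

The same loop serves for lambda; the selected values are then frozen and reused for all the other kernel combinations, as the paper does.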
{
"text": "To study the impact of our structural kernels we firstly derived the maximal accuracy reachable with standard features along with polynomial kernels. The multi-class accuracies, for Prop-Bank and FrameNet using K p d with d = 1, .., 5, are shown in Figure 5 . We note that (a) the highest performance is reached for d = 3, (b) for PropBank our maximal accuracy (90.5%) is substantially equal to the SVM performance (88%) obtained in (Hacioglu et al., 2003) with degree 2 and (c) the accuracy on FrameNet (85.2%) is higher than the best result obtained in literature, i.e. 82.0% in (Gildea and Palmer, 2002) . This different outcome is due to a different task (we classify different roles) and a different classification algorithm. Moreover, we did not use the Frame information which is very important 8 . It is worth noting that the difference between linear and polynomial kernel is about 3-4 percent points for both PropBank and FrameNet. This remarkable difference can be easily explained by considering the meaning of standard features. For example, let us restrict the classification function C Arg0 to the two features Voice and Position. Without loss of generality we can assume: (a) Voice=1 if active and 0 if passive, and (b) Position=1 when the argument is after the predicate and 0 otherwise. To simplify the example, we also assume that if an argument precedes the target predicate it is a subject, otherwise it is an object 9 . It follows that a constituent is Arg0, i.e. C Arg0 = 1, if only one feature at a time is 1, otherwise it is not an Arg0, i.e. C Arg0 = 0. In other words, C Arg0 = Position XOR Voice, which is the classical example of a non-linear separable function that becomes separable in a superlinear space (Cristianini and Shawe-Taylor, 2000).",
"cite_spans": [
{
"start": 433,
"end": 456,
"text": "(Hacioglu et al., 2003)",
"ref_id": "BIBREF11"
},
{
"start": 581,
"end": 606,
"text": "(Gildea and Palmer, 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 249,
"end": 257,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Kernel evaluations",
"sec_num": "4.3"
},
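The XOR argument can be made concrete with a short sketch (our illustration, with a hand-chosen rather than learned separator, not the paper's experiment): no line separates the four (Voice, Position) points, but the interaction term added by a degree-2 polynomial feature map makes C_Arg0 linearly separable.

```python
# Illustrative reconstruction of the Voice/Position XOR example.
# Labels follow C_Arg0 = Position XOR Voice.
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [x1 ^ x2 for x1, x2 in points]

def phi(x1, x2):
    """Degree-2 polynomial feature map: bias, linear, and interaction terms."""
    return (1.0, x1, x2, x1 * x2)

# In the expanded space a single linear separator handles XOR:
# w . phi(x) = x1 + x2 - 2*x1*x2, thresholded at 0.5.
w = (0.0, 1.0, 1.0, -2.0)
preds = [int(sum(wi * fi for wi, fi in zip(w, phi(*p))) > 0.5) for p in points]
```

This is exactly the extra expressiveness a polynomial kernel of degree >= 2 gives the SVM over a linear classifier on the same flat features.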
{
"text": "After it was established that the best kernel for standard features is K p 3 , we carried out all the other experiments using it in the kernel combinations. Table 2 and 3 show the single class (f 1 measure) as well as multi-class classifier (accuracy) performance for PropBank and FrameNet respectively. Each column of the two tables refers to a different kernel defined in the 8 Preliminary experiments indicate that SVMs can reach 90% by using the frame feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 170,
"text": "Table 2 and 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Kernel evaluations",
"sec_num": "4.3"
},
{
"text": "9 Indeed, this is true in most part of the cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel evaluations",
"sec_num": "4.3"
},
{
"text": "previous section. The overall meaning is discussed in the following points: First, PAF alone has good performance, since in PropBank evaluation it outperforms the linear kernel (K p 1 ), 88.7% vs. 86.7% whereas in FrameNet, it shows a similar performance 79.5% vs. 82.1% (compare tables with Figure 5 ). This suggests that PAF generates the same information as the standard features in a linear space. However, when a degree greater than 1 is used for standard features, PAF is outperformed 10 . Second, SCF improves the polynomial kernel (d = 3), i.e. the current state-of-the-art, of about 3 percent points on PropBank (column SCF\u2022P). This suggests that (a) PAK can measure the similarity between two SCF structures and (b) the sub-categorization information provides effective clues about the expected argument type. The interesting consequence is that SCF together with PAK seems suitable to automatically cluster different verbs that have the same syntactic realization. We note also that to fully exploit the SCF information it is necessary to use a kernel product (K 1 \u2022 K 2 ) combination rather than the sum (K 1 + K 2 ), e.g. column SCF+P.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 300,
"text": "Figure 5",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Kernel evaluations",
"sec_num": "4.3"
},
{
"text": "Finally, the FrameNet results are completely different. No kernel combinations with both PAF and SCF produce an improvement. On the contrary, the performance decreases, suggesting that the classifier is confused by this syntactic information. The main reason for the different outcomes is that PropBank arguments are different from semantic roles as they are an intermediate level between syntax and semantic, i.e. they are nearer to grammatical functions. In fact, in PropBank arguments are annotated consistently with syntactic alternations (see the Annotation guidelines for Prop-Bank at www.cis.upenn.edu/\u223cace). On the contrary FrameNet roles represent the final semantic product and they are assigned according to semantic considerations rather than syntactic aspects. For example, Cause and Agent semantic roles have identical syntactic realizations. This prevents SCF to distinguish between them. Another minor reason may be the use of automatic parse-trees to extract PAF and SCF, even if preliminary experiments on automatic semantic shallow parsing of PropBank have shown no important differences versus semantic parsing which adopts Gold Standard parse-trees.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel evaluations",
"sec_num": "4.3"
},
{
"text": "In this paper, we have experimented with SVMs using the two novel convolution kernels PAF and SCF which are designed for the semantic structures derived from PropBank and FrameNet corpora. Moreover, we have combined them with the polynomial kernel of standard features. The results have shown that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "First, SVMs using the above kernels are appealing for semantically parsing both corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Second, PAF and SCF can be used to improve automatic classification of PropBank arguments as they provide clues about the predicate argument structure of the target verb. For example, SCF improves (a) the classification state-of-theart (i.e. the polynomial kernel) of about 3 percent points and (b) the best literature result of about 5 percent points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Third, additional work is needed to design kernels suitable to learn the deep semantic contained in FrameNet as it seems not sensible to both PAF and SCF information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Finally, an analysis of SVMs using polynomial kernels over standard features has explained why they largely outperform linear classifiers based-on standard features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "In the future we plan to design other structures and combine them with SCF, PAF and standard features. In this vision the learning will be carried out on a set of structural features instead of a set of flat features. Other studies may relate to the use of SCF to generate verb clusters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "A fragment can appear several times in a parse-tree, thus each fragment occurrence is considered as a different element in F p,a .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "It can be proven that the resulting kernels still satisfy Mercer's conditions(Cristianini and Shawe-Taylor, 2000).4 To normalize a kernel K( x, z) we can divide it byK( x, x) \u2022 K( z, z).5 We point out that we removed from Penn TreeBank the function tags like SBJ and TMP as parsers usually are not able to provide this information.6 We noted that only Arg0 to Arg4 and ArgM contain enough training/testing data to affect the overall performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "f1 assigns equal importance to Precision P and Recall R, i.e. f1 = 2P \u2022R P +R .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
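The footnote's definition in code form (a trivial sketch for reference; the zero-denominator guard is our addition):

```python
def f1(p, r):
    """f1 = 2*P*R / (P + R), the harmonic mean of Precision and Recall."""
    return 2.0 * p * r / (p + r) if (p + r) > 0 else 0.0

balanced = f1(0.5, 0.5)   # equal P and R give f1 = 0.5
skewed = f1(0.9, 0.1)     # the harmonic mean penalizes imbalance
```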
{
"text": "Unfortunately the use of a polynomial kernel on top the tree fragments to generate the XOR functions seems not successful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been sponsored by the ARDA AQUAINT program. In addition, I would like to thank Professor Sanda Harabagiu for her advice, Adrian Cosmin Bejan for implementing the feature extractor and Paul Mor\u0203rescu for processing the FrameNet data. Many thanks to the anonymous reviewers for their invaluable suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In proceeding of ACL-02.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Three generative, lexicalized models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "proceedings of the ACL-97",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalized models for statistical parsing. In proceedings of the ACL-97, pages 16-23, Somerset, New Jersey.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An introduction to Support Vector Machines",
"authors": [
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nello Cristianini and John Shawe-Taylor. 2000. An introduction to Support Vector Machines. Cam- bridge University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Frame semantics",
"authors": [
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1982,
"venue": "Linguistics in the Morning Calm",
"volume": "",
"issue": "",
"pages": "111--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles J. Fillmore. 1982. Frame semantics. In Lin- guistics in the Morning Calm, pages 111-137.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurasfky",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Daniel Jurasfky. 2002. Auto- matic labeling of semantic roles. Computational Linguistic.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The necessity of parsing for predicate argument recognition",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "proceedings of ACL-02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Gildea and Martha Palmer. 2002. The neces- sity of parsing for predicate argument recognition. In proceedings of ACL-02, Philadelphia, PA.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantic Structures, Current Studies in Linguistics series. Cambridge, Massachusetts",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Jackendoff. 1990. Semantic Structures, Current Studies in Linguistics series. Cambridge, Mas- sachusetts: The MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "T",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods -Support Vector Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Joachims. 1999. Making large-scale SVM learn- ing practical. In Advances in Kernel Methods - Support Vector Learning.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "From treebank to propbank",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "proceedings of LREC-02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul Kingsbury and Martha Palmer. 2002. From treebank to propbank. In proceedings of LREC- 02, Las Palmas, Spain.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Building a large annotated corpus of english: The penn treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large anno- tated corpus of english: The penn treebank. Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A semantic kernel for predicate argument classification",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti And Cosmin Adrian Bejan",
"suffix": ""
}
],
"year": 2004,
"venue": "proceedings of CoNLL-04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alessandro Moschitti and Cosmin Adrian Bejan. 2004. A semantic kernel for predicate argu- ment classification. In proceedings of CoNLL-04, Boston, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Shallow Semantic Parsing Using Support Vector Machines",
"authors": [
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kadri Hacioglu, Sameer Pradhan, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2003. Shallow Semantic Parsing Using Support Vector Machines. TR-CSLR-2003-03, University of Col- orado.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using predicate-argument structures for information extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Sanda",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aarseth",
"suffix": ""
}
],
"year": 2003,
"venue": "proceedings of ACL-03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mihai Surdeanu, Sanda M. Harabagiu, John Williams, and John Aarseth. 2003. Using predicate-argument structures for information ex- traction. In proceedings of ACL-03, Sapporo, Japan.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Nature of Statistical Learning Theory",
"authors": [
{
"first": "V",
"middle": [],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "A predicate argument structure in a parse-tree representation.",
"uris": null
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"text": "Structured features for Arg0, Arg1 and ArgM.",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Fp,a was defined as the set of features of the object <p, a>. Since in our representations we have only one S Sub-Categorization Features for two predicate argument structures.",
"uris": null
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"text": "All 17 valid fragments of the semantic structure associated with Arg 1 of Figure 2.",
"uris": null
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"text": "For instance, the sub-tree [NP [D a]] is excluded from the set of the Figure 4 since only a part of the production NP \u2192 D N is used in its generation. However, this constraint does not apply to the production VP \u2192 V NP PP along with the fragment [VP [V NP]] as the subtree [VP [PP [...]]]",
"uris": null
},
"FIGREF6": {
"num": null,
"type_str": "figure",
"text": "are shown fragments like [NP [DT] [N]] or [NP [DT a] [N talk]] which explicitly encode the Phrase Type feature NP for the Arg 1 in Figure 2.b. The Predicate Word is represented by the fragment [V delivers] and the Head Word is encoded in [N talk].",
"uris": null
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"text": "Multi-classifier accuracy according to different degrees of the polynomial kernel.",
"uris": null
},
"TABREF0": {
"html": null,
"text": "Standard features extracted from the parse-tree in",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF2": {
"html": null,
"text": "in delivered, the [VP [V delivers] [NP]] subtree will be transformed in [VP [VBD delivered] [NP]]",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Evaluation of Kernels on PropBank.",
"num": null,
"content": "<table><tr><td>Roles</td><td>P 3</td><td>PAF</td><td>PAF+P</td><td>PAF\u2022P</td><td>SCF+P</td><td>SCF\u2022P</td></tr><tr><td>agent</td><td colspan=\"2\">92.0 88.5</td><td>91.7</td><td>91.3</td><td>93.1</td><td>93.9</td></tr><tr><td>cause</td><td colspan=\"2\">59.7 16.1</td><td>41.6</td><td>27.7</td><td>42.6</td><td>57.3</td></tr><tr><td>degree</td><td colspan=\"2\">74.9 68.6</td><td>71.4</td><td>57.8</td><td>68.5</td><td>60.9</td></tr><tr><td colspan=\"3\">depict. 52.6 29.7</td><td>51.0</td><td>28.6</td><td>46.8</td><td>37.6</td></tr><tr><td>durat.</td><td colspan=\"2\">45.8 52.1</td><td>40.9</td><td>29.0</td><td>31.8</td><td>41.8</td></tr><tr><td>goal</td><td colspan=\"2\">85.9 78.6</td><td>85.3</td><td>82.8</td><td>84.0</td><td>85.3</td></tr><tr><td>instr.</td><td colspan=\"2\">67.9 46.8</td><td>62.8</td><td>55.8</td><td>59.6</td><td>64.1</td></tr><tr><td>mann.</td><td colspan=\"2\">81.0 81.9</td><td>81.2</td><td>78.6</td><td>77.8</td><td>77.8</td></tr><tr><td>Acc.</td><td colspan=\"2\">85.2 79.5</td><td>84.6</td><td>81.6</td><td>83.8</td><td>84.2</td></tr><tr><td>18 roles</td><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"text": "Evaluation of Kernels on FrameNet semantic roles.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}