{
"paper_id": "J08-2003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:19:24.124149Z"
},
"title": "Tree Kernels for Semantic Role Labeling",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": "",
"affiliation": {},
"email": "moschitti@dit.unitn.it"
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": "",
"affiliation": {"laboratory": "", "institution": "Fondazione Bruno Kessler", "location": {}},
"email": "pighin@itc.it"
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The availability of large scale data sets of manually annotated predicate-argument structures has recently favored the use of machine learning approaches to the design of automated semantic role labeling (SRL) systems. The main research in this area relates to the design choices for feature representation and for effective decompositions of the task in different learning models. Regarding the former choice, structural properties of full syntactic parses are largely employed as they represent ways to encode different principles suggested by the linking theory between syntax and semantics. The latter choice relates to several learning schemes over global views of the parses. For example, re-ranking stages operating over alternative predicate-argument sequences of the same sentence have shown to be very effective. In this article, we propose several kernel functions to model parse tree properties in kernel-based machines, for example, perceptrons or support vector machines. In particular, we define different kinds of tree kernels as general approaches to feature engineering in SRL. Moreover, we extensively experiment with such kernels to investigate their contribution to individual stages of an SRL architecture both in isolation and in combination with other traditional manually coded features. The results for boundary recognition, classification, and re-ranking stages provide systematic evidence about the significant impact of tree kernels on the overall accuracy, especially when the amount of training data is small. As a conclusive result, tree kernels allow for a general and easily portable feature engineering method which is applicable to a large family of natural language processing tasks.",
"pdf_parse": {
"paper_id": "J08-2003",
"_pdf_hash": "",
"abstract": [
{
"text": "The availability of large scale data sets of manually annotated predicate-argument structures has recently favored the use of machine learning approaches to the design of automated semantic role labeling (SRL) systems. The main research in this area relates to the design choices for feature representation and for effective decompositions of the task in different learning models. Regarding the former choice, structural properties of full syntactic parses are largely employed as they represent ways to encode different principles suggested by the linking theory between syntax and semantics. The latter choice relates to several learning schemes over global views of the parses. For example, re-ranking stages operating over alternative predicate-argument sequences of the same sentence have shown to be very effective. In this article, we propose several kernel functions to model parse tree properties in kernel-based machines, for example, perceptrons or support vector machines. In particular, we define different kinds of tree kernels as general approaches to feature engineering in SRL. Moreover, we extensively experiment with such kernels to investigate their contribution to individual stages of an SRL architecture both in isolation and in combination with other traditional manually coded features. The results for boundary recognition, classification, and re-ranking stages provide systematic evidence about the significant impact of tree kernels on the overall accuracy, especially when the amount of training data is small. As a conclusive result, tree kernels allow for a general and easily portable feature engineering method which is applicable to a large family of natural language processing tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Much attention has recently been devoted to the design of systems for the automatic labeling of semantic roles (SRL) as defined in two important projects: FrameNet (Baker, Fillmore, and Lowe 1998) , based on frame semantics, and PropBank (Palmer, Gildea, and Kingsbury 2005) , inspired by Levin's verb classes. To annotate natural language sentences, such systems generally require (1) the detection of the target word embodying the predicate and (2) the detection and classification of the word sequences constituting the predicate's arguments.",
"cite_spans": [
{
"start": 164,
"end": 196,
"text": "(Baker, Fillmore, and Lowe 1998)",
"ref_id": "BIBREF0"
},
{
"start": 238,
"end": 274,
"text": "(Palmer, Gildea, and Kingsbury 2005)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Previous work has shown that these steps can be carried out by applying machine learning techniques (Carreras and M\u00e0rquez 2004, 2005; Litkowski 2004), for which the most important features encoding predicate-argument relations are derived from (shallow or deep) syntactic information. The outcome of this research brings wide empirical evidence in favor of the linking theories between semantics and syntax, for example, Jackendoff (1990). Nevertheless, as no such theory provides a sound and complete treatment, the choice and design of syntactic features to represent semantic structures requires remarkable research effort and intuition.",
"cite_spans": [
{
"start": 100,
"end": 119,
"text": "M\u00e0rquez 2004, 2005;",
"ref_id": null
},
{
"start": 120,
"end": 135,
"text": "Litkowski 2004)",
"ref_id": "BIBREF17"
},
{
"start": 408,
"end": 425,
"text": "Jackendoff (1990)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "For example, earlier studies on feature design for semantic role labeling were carried out by Gildea and Jurafsky (2002) and Thompson, Levy, and Manning (2003) . Since then, researchers have proposed several syntactic feature sets, where the more recent sets slightly enhanced the older ones.",
"cite_spans": [
{
"start": 94,
"end": 120,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF9"
},
{
"start": 125,
"end": 159,
"text": "Thompson, Levy, and Manning (2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A careful analysis of such features reveals that most of them are syntactic tree fragments of training sentences, thus a viable way to alleviate the feature design complexity is the adoption of syntactic tree kernels (Collins and Duffy 2002) . For example, in Moschitti (2004) , the predicate-argument relation is represented by means of the minimal subtree that includes both of them. The similarity between two instances is evaluated by a tree kernel function in terms of common substructures. Such an approach is in line with current research on kernels for natural language learning, for example, syntactic parsing re-ranking (Collins and Duffy 2002) , relation extraction (Zelenko, Aone, and Richardella 2003) , and named entity recognition (Cumby and Roth 2003; Culotta and Sorensen 2004) .",
"cite_spans": [
{
"start": 217,
"end": 241,
"text": "(Collins and Duffy 2002)",
"ref_id": "BIBREF4"
},
{
"start": 260,
"end": 276,
"text": "Moschitti (2004)",
"ref_id": "BIBREF19"
},
{
"start": 630,
"end": 654,
"text": "(Collins and Duffy 2002)",
"ref_id": "BIBREF4"
},
{
"start": 677,
"end": 714,
"text": "(Zelenko, Aone, and Richardella 2003)",
"ref_id": "BIBREF37"
},
{
"start": 746,
"end": 767,
"text": "(Cumby and Roth 2003;",
"ref_id": "BIBREF6"
},
{
"start": 768,
"end": 794,
"text": "Culotta and Sorensen 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Furthermore, recent work (Haghighi, Toutanova, and Manning 2005; Punyakanok et al. 2005) has shown that, to achieve high labeling accuracy, joint inference should be applied on the whole predicate-argument structure. For this purpose, we need to extract features from the sentence syntactic parse tree that encodes the relationships governing complex semantic structures. This task is rather difficult because we do not exactly know which syntactic clues effectively capture the relation between the predicate and its arguments. For example, to detect the interesting context, the modeling of syntax-/semantics-based features should take into account linguistic aspects like ancestor nodes or semantic dependencies (Toutanova, Markova, and Manning 2004) . In this scenario, the automatic feature generation/selection carried out by tree kernels can provide useful insights into the underlying linguistic phenomena. Other advantages coming from the use of tree kernels are the following.",
"cite_spans": [
{
"start": 25,
"end": 64,
"text": "(Haghighi, Toutanova, and Manning 2005;",
"ref_id": "BIBREF10"
},
{
"start": 65,
"end": 88,
"text": "Punyakanok et al. 2005)",
"ref_id": null
},
{
"start": 715,
"end": 753,
"text": "(Toutanova, Markova, and Manning 2004)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "First, we can implement them very quickly as the feature extractor module only requires the writing of a general procedure for subtree extraction. In contrast, traditional SRL systems use more than thirty features (e. g., Pradhan, Hacioglu, Krugler et al. 2005) , each of which requires the writing of a dedicated procedure.",
"cite_spans": [
{
"start": 222,
"end": 261,
"text": "Pradhan, Hacioglu, Krugler et al. 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Second, their combination with traditional attribute-value models produces more accurate systems, even when using the same machine learning algorithm in the combination, because the feature spaces are very different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Third, we can carry out feature engineering using kernel combinations and marking strategies (Moschitti et al. 2005a; Moschitti, Pighin, and Basili 2006) . This allows us to boost the SRL accuracy in a relatively simple way.",
"cite_spans": [
{
"start": 93,
"end": 117,
"text": "(Moschitti et al. 2005a;",
"ref_id": "BIBREF21"
},
{
"start": 118,
"end": 153,
"text": "Moschitti, Pighin, and Basili 2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Next, tree kernels generate large tree fragment sets which constitute back-off models for important syntactic features. Using them, the learning algorithm generalizes better and produces a more accurate classifier, especially when the amount of training data is scarce.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Finally, once the learning algorithm using tree kernels has converged, we can identify the most important structured features of the generated model. One approach for such a reverse engineering process relies on the computation of the explicit feature space, at least for the highest-weighted features (Kudo and Matsumoto 2003) . Once the most relevant fragments are available, they can be used to design novel effective attribute-value features (which in turn can be used to design more efficient classifiers, e. g., with linear kernels) and inspire new linguistic theories.",
"cite_spans": [
{
"start": 302,
"end": 327,
"text": "(Kudo and Matsumoto 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "These points suggest that tree kernels should always be applied, at least for an initial study of the problem. Unfortunately, they suffer from two main limitations: (a) poor impact on boundary detection as, in this task, correct and incorrect arguments may share a large portion of the encoding trees (Moschitti 2004) ; and (b) more expensive running time and limited contribution to the overall accuracy if compared with manually derived features (Cumby and Roth 2003) . Point (a) has been addressed by Moschitti, Pighin, and Basili (2006) by showing that a strategy of marking relevant parse-tree nodes makes correct and incorrect subtrees for boundary detection quite different. Point (b) can be tackled by studying approaches to kernel engineering that allow for the design of efficient and effective kernels.",
"cite_spans": [
{
"start": 301,
"end": 317,
"text": "(Moschitti 2004)",
"ref_id": "BIBREF19"
},
{
"start": 448,
"end": 469,
"text": "(Cumby and Roth 2003)",
"ref_id": "BIBREF6"
},
{
"start": 504,
"end": 540,
"text": "Moschitti, Pighin, and Basili (2006)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this article, we provide a comprehensive study of the use of tree kernels for semantic role labeling. For this purpose, we define tree kernels based on the composition of two different feature functions: canonical mappings, which map sentence-parse trees in tree structures encoding semantic information, and feature extraction functions, which encode these trees in the actual feature space. The latter functions explode the canonical trees into all their substructures and, in the literature, are usually referred to as tree kernels. For instance, in Collins and Duffy (2002) , Vishwanathan and Smola (2002) , and Moschitti (2006a) different tree kernels extract different types of tree fragments.",
"cite_spans": [
{
"start": 556,
"end": 580,
"text": "Collins and Duffy (2002)",
"ref_id": "BIBREF4"
},
{
"start": 583,
"end": 612,
"text": "Vishwanathan and Smola (2002)",
"ref_id": "BIBREF35"
},
{
"start": 619,
"end": 636,
"text": "Moschitti (2006a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Given the heuristic nature of canonical mappings, we studied their properties by experimenting with them within support vector machines and with the data set provided by CoNLL shared tasks (Carreras and M\u00e0rquez 2005) . The results show that carefully engineered tree kernels always boost the accuracy of the basic systems. Most importantly, in complex tasks such as the re-ranking of semantic role annotations, they provide an easy way to engineer new features which enhance the state-of-the-art in SRL.",
"cite_spans": [
{
"start": 189,
"end": 216,
"text": "(Carreras and M\u00e0rquez 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In the remainder of this article, Section 2 presents traditional architectures for SRL and the typical features proposed in literature. Tree kernels are formally introduced in Section 3, and Section 4 describes our modular architecture employing support vector machines along with manually designed features, tree kernels (feature extraction functions), and their combinations. Section 5 presents our structured features (canonical mappings) inducing different kernels that we used for different SRL subtasks. The extensive experimental results obtained on the boundary recognition, role classification, and re-ranking stages are presented in Section 6. Finally, Section 7 summarizes the conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The recognition of semantic structures within a sentence relies on lexical and syntactic information provided by early stages of an NLP process, such as lexical analysis, part-ofspeech tagging, and syntactic parsing. The complexity of the SRL task mostly depends on two aspects: (a) the information is generally noisy, that is, in a real-world scenario the accuracy and reliability of NLP subsystems are generally not very high; and (b) the lack of a sound and complete linguistic or cognitive theory about the links between syntax and semantics does not allow an informed, deductive approach to the problem. Nevertheless, the large amount of available lexical and syntactic information favors the application of inductive approaches to the SRL task, which indeed is generally treated as a combination of statistical classification problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Shallow Semantic Parsing",
"sec_num": "2."
},
{
"text": "The next sections define the SRL task more precisely and summarize the most relevant work carried out to address these two problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Shallow Semantic Parsing",
"sec_num": "2."
},
{
"text": "The most well-known shallow semantic theories are studied in two different projects: PropBank (Palmer, Gildea, and Kingsbury 2005) and FrameNet (Baker, Fillmore, and Lowe 1998). The former is based on a linguistic model inspired by Levin's verb classes (Levin 1993), focusing on the argument structure of verbs and on the alternation patterns that describe movements of verbal arguments within a predicate structure. The latter refers to the application of frame semantics (Fillmore 1968) in the annotation of predicate-argument structures based on frame elements (semantic roles). These theories have been investigated in two CoNLL shared tasks (Carreras and M\u00e0rquez 2004, 2005) and a Senseval-3 evaluation (Litkowski 2004), respectively.",
"cite_spans": [
{
"start": 94,
"end": 130,
"text": "(Palmer, Gildea, and Kingsbury 2005)",
"ref_id": "BIBREF24"
},
{
"start": 144,
"end": 176,
"text": "(Baker, Fillmore, and Lowe 1998)",
"ref_id": "BIBREF0"
},
{
"start": 254,
"end": 266,
"text": "(Levin 1993)",
"ref_id": "BIBREF15"
},
{
"start": 475,
"end": 490,
"text": "(Fillmore 1968)",
"ref_id": "BIBREF7"
},
{
"start": 648,
"end": 667,
"text": "M\u00e0rquez 2004, 2005)",
"ref_id": null
},
{
"start": 696,
"end": 712,
"text": "(Litkowski 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
{
"text": "Given a sentence and a predicate word, an SRL system outputs an annotation of the sentence in which the sequences of words that make up the arguments of the predicate are properly labeled, for example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
{
"text": "[ Arg0 He] got [ Arg1 his money] [ C-V back]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
{
"text": "in response to the input He got his money back. This processing requires that: (1) the predicates within the sentence are identified and (2) the word sequences that span the boundaries of each predicate argument are delimited and assigned the proper role label. The first sub-task can be performed either using statistical methods or hand-crafted lexical and syntactic rules. In the case of verbal predicates, it is quite easy to write simple rules matching regular expressions built on POS tags. The second task is more complex and is generally viewed as a combination of statistical classification problems: The learning algorithms are trained to recognize the extension of predicate arguments and the semantic role they play.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "2.1"
},
{
"text": "An SRL model and the resulting architecture are largely influenced by the kind of data available for the task. As an example, a model relying on a shallow syntactic parser would assign roles to chunks, whereas with a full syntactic parse of the sentence it would be straightforward to establish a correspondence between nodes of the parse tree and semantic roles. We focused on the latter as it has been shown to be more accurate by the CoNLL 2005 shared task results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "According to the deep syntactic formulation, the classifying instances are pairs of parse-tree nodes which dominate the exact span of the predicate and the target argument. Such pairs are usually represented in terms of attribute-value vectors, where the attributes describe properties of predicates, arguments, and the way they are related. There is large agreement on an effective set of linguistic features (Gildea and Jurafsky 2002; Pradhan, Hacioglu, Krugler, et al. 2005 ) that have been employed in the vast majority of SRL systems. The most relevant features are summarized in Table 1 .",
"cite_spans": [
{
"start": 410,
"end": 436,
"text": "(Gildea and Jurafsky 2002;",
"ref_id": "BIBREF9"
},
{
"start": 437,
"end": 476,
"text": "Pradhan, Hacioglu, Krugler, et al. 2005",
"ref_id": null
}
],
"ref_spans": [
{
"start": 585,
"end": 592,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "Once the representation for the predicate-argument pairs is available, a multi-classifier is used to recognize the correct node pairs, namely, nodes associated with correct arguments (given a predicate), and assign them a label (which is the label of the argument). This can be achieved by training a multi-classifier on n + 1 classes, where the first n classes correspond to the different roles and the (n + 1)-th is a NARG (non-argument) class to which non-argument nodes are assigned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "A more efficient solution consists in dividing the labeling process into two steps: boundary detection and argument classification. A Boundary Classifier (BC) is a binary classifier that recognizes the tree nodes that exactly cover a predicate argument, that is, that dominate all and only the words that belong to target arguments. Then, such nodes are classified by a Role Multi-classifier (RM) that assigns to each example the most appropriate label. This two-step approach (Gildea and Jurafsky 2002) has the advantage of only applying BC on all parse-tree nodes. RM can ignore non-boundary nodes, resulting in a much faster classification. Other approaches have extended this solution and suggested other multi-stage classification models (e. g., Moschitti et al. 2005b in which a four-step hierarchical SRL architecture is described).",
"cite_spans": [
{
"start": 477,
"end": 503,
"text": "(Gildea and Jurafsky 2002)",
"ref_id": "BIBREF9"
},
{
"start": 751,
"end": 773,
"text": "Moschitti et al. 2005b",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "After node labeling has been carried out, it is possible that the output of the argument classifier does not result in a consistent annotation, as the labeling scheme may not be compatible with the underlying linguistic model. As an example, PropBank-style annotations do not allow arguments to be nested. This happens when two or more The simplest solution relies on the application of heuristics that take into account the whole predicate-argument structure to remove the incorrect labels (e. g., Moschitti et al. 2005a; Tjong Kim Sang et al. 2005) . A much more complex solution consists in the application of some joint inference model to the whole predicate-argument structure, as in Pradhan et al. (2004) . As an example, Haghighi, Toutanova, and Manning (2005) associate a posterior probability with each argument node role assignment, estimate the likelihood of the alternative labeling schemes, and employ a re-ranking mechanism to select the best annotation.",
"cite_spans": [
{
"start": 499,
"end": 522,
"text": "Moschitti et al. 2005a;",
"ref_id": "BIBREF21"
},
{
"start": 523,
"end": 550,
"text": "Tjong Kim Sang et al. 2005)",
"ref_id": "BIBREF31"
},
{
"start": 689,
"end": 710,
"text": "Pradhan et al. (2004)",
"ref_id": "BIBREF28"
},
{
"start": 728,
"end": 767,
"text": "Haghighi, Toutanova, and Manning (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "Additionally, the most accurate systems participating in CoNLL 2005 shared task (Pradhan, Hacioglu, Ward et al. 2005; Punyakanok et al. 2005) use different syntactic views of the same input sentence. This allows the SRL system to recover from syntactic parser errors; for example, a prepositional phrase specifying the direct object of the predicate would be attached to the verb instead of the argument. This kind of error prevents some arguments of the proposition from being recognized, as: (1) there may not be a node of the parse tree dominating (all and only) the words of the correct sequence; (2) a badly attached tree node may invalidate other argument nodes, generating unexpected overlapping situations.",
"cite_spans": [
{
"start": 80,
"end": 117,
"text": "(Pradhan, Hacioglu, Ward et al. 2005;",
"ref_id": "BIBREF28"
},
{
"start": 118,
"end": 141,
"text": "Punyakanok et al. 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "The manual design of features which capture important properties of complete predicate-argument structures (also coming from different syntactic views) is quite complex. Tree kernels are a valid alternative to manual design as the next section points out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models for Semantic Role Labeling",
"sec_num": "2.2"
},
{
"text": "Tree kernels have been applied to reduce the feature design effort in the context of several natural language tasks, for example, syntactic parsing re-ranking (Collins and Duffy 2002) , relation extraction (Zelenko, Aone, and Richardella 2003) , named entity recognition (Cumby and Roth 2003; Culotta and Sorensen 2004) , and semantic role labeling (Moschitti 2004 ).",
"cite_spans": [
{
"start": 159,
"end": 183,
"text": "(Collins and Duffy 2002)",
"ref_id": "BIBREF4"
},
{
"start": 206,
"end": 243,
"text": "(Zelenko, Aone, and Richardella 2003)",
"ref_id": "BIBREF37"
},
{
"start": 271,
"end": 292,
"text": "(Cumby and Roth 2003;",
"ref_id": "BIBREF6"
},
{
"start": 293,
"end": 319,
"text": "Culotta and Sorensen 2004)",
"ref_id": "BIBREF5"
},
{
"start": 349,
"end": 364,
"text": "(Moschitti 2004",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernels",
"sec_num": "3."
},
{
"text": "On the one hand, these studies show that the kernel ability to generate large feature sets is useful to quickly model new and not well understood linguistic phenomena in learning machines. On the other hand, they show that sometimes it is possible to manually design features for linear kernels that produce higher accuracy and faster computation time. One of the most important causes of such mixed behavior is the inappropriate choice of kernel functions. For example, in Moschitti, Pighin, and Basili (2006) and Moschitti (2006a) , several kernels have been designed and shown to produce different impacts on the training algorithms.",
"cite_spans": [
{
"start": 474,
"end": 510,
"text": "Moschitti, Pighin, and Basili (2006)",
"ref_id": "BIBREF23"
},
{
"start": 515,
"end": 532,
"text": "Moschitti (2006a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernels",
"sec_num": "3."
},
{
"text": "In the next sections, we briefly introduce the kernel trick and describe the subtree (ST) kernel devised in Vishwanathan and Smola (2002) , the subset tree (SST) kernel defined in Collins and Duffy (2002) , and the partial tree (PT) kernel proposed in Moschitti (2006a) .",
"cite_spans": [
{
"start": 108,
"end": 137,
"text": "Vishwanathan and Smola (2002)",
"ref_id": "BIBREF35"
},
{
"start": 180,
"end": 204,
"text": "Collins and Duffy (2002)",
"ref_id": "BIBREF4"
},
{
"start": 252,
"end": 269,
"text": "Moschitti (2006a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernels",
"sec_num": "3."
},
{
"text": "The main concept underlying machine learning for classification tasks is the automatic learning of classification functions based on examples labeled with the class information. Such examples can be described by means of feature vectors in an n-dimensional space over the real numbers, namely, \u211d^n. The learning algorithm uses space metrics over vectors, for example, the scalar product, to learn an abstract representation of all instances belonging to the target class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "For example, support vector machines (SVMs) are linear classifiers which learn a hyperplane f(x) = w \u00d7 x + b = 0, separating positive from negative examples. x is the feature vector representation of a classifying object o, whereas w \u2208 \u211d^n and b \u2208 \u211d are parameters learned from the data by applying the Structural Risk Minimization principle (Vapnik 1998). The object o is mapped to x via a feature function \u03c6 : O \u2192 \u211d^n, O being the set of the objects that we want to classify. o is categorized in the target class only if f(x) \u2265 0.",
"cite_spans": [
{
"start": 340,
"end": 353,
"text": "(Vapnik 1998)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "The kernel trick allows us to rewrite the decision hyperplane as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "f(x) = (\u2211 i=1..l y i \u03b1 i x i ) \u2022 x + b = \u2211 i=1..l y i \u03b1 i x i \u2022 x + b = \u2211 i=1..l y i \u03b1 i \u03c6(o i ) \u2022 \u03c6(o) + b = 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "where y i is equal to 1 for positive examples and \u22121 for negative examples, \u03b1 i \u2208 \u211d with \u03b1 i \u2265 0, o i \u2200i \u2208 {1, ..., l} are the training instances and the product K(o i , o) = \u03c6(o i ) \u2022 \u03c6(o) is the kernel function associated with the mapping \u03c6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "Note that we do not need to apply the mapping \u03c6; we can use K(o i , o) directly. This allows us, under Mercer's conditions (Shawe-Taylor and Cristianini 2004), to define abstract kernel functions which generate implicit feature spaces. A traditional example is given by the polynomial kernel:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "K p (o 1 , o 2 ) = (c + x 1 \u2022 x 2 )^d, where c is a constant and d is the degree of the polynomial. This kernel generates the space of all conjunctions of feature groups up to d elements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
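The polynomial kernel above can be sketched in a few lines of Python (an illustrative stand-alone implementation, not the system's code; the vectors and parameter values are made up):

```python
# Sketch of the polynomial kernel K_p(o1, o2) = (c + x1 . x2)^d described
# above, over plain Python lists as feature vectors (an assumed encoding).

def dot(x1, x2):
    """Inner product of two equal-length feature vectors."""
    return sum(a * b for a, b in zip(x1, x2))

def poly_kernel(x1, x2, c=1.0, d=2):
    """(c + x1 . x2)^d: implicitly spans all conjunctions of up to d features."""
    return (c + dot(x1, x2)) ** d

# With d = 2 the kernel implicitly counts all pairwise feature conjunctions
# without ever materializing the expanded feature space.
print(poly_kernel([1.0, 0.0, 2.0], [0.5, 1.0, 1.0], c=1.0, d=2))  # (1 + 2.5)^2 = 12.25
```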
{
"text": "Additionally, we can carry out two interesting operations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "r kernel combinations, for example, K 1 + K 2 or K 1 \u00d7 K 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "r feature mapping compositions, for example,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "K(o 1 , o 2 ) = \u03c6(o 1 ) \u2022 \u03c6(o 2 ) = \u03c6 B (\u03c6 A (o 1 )) \u2022 \u03c6 B (\u03c6 A (o 2 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
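Both operations are easy to state concretely; the following is a minimal sketch (the kernels and the mapping phi_A are illustrative stand-ins, not the paper's actual functions):

```python
# Closure of kernels under sum, and composition with a preliminary mapping
# phi_A, as in K(o1, o2) = phi_B(phi_A(o1)) . phi_B(phi_A(o2)).

def k_linear(x1, x2):
    """Plain inner product, used here as a toy base kernel."""
    return sum(a * b for a, b in zip(x1, x2))

def combine_sum(k1, k2):
    """K1 + K2 is again a valid kernel (Mercer's conditions are preserved)."""
    return lambda a, b: k1(a, b) + k2(a, b)

def compose(k, phi_a):
    """Apply the mapping first, then the kernel on the mapped objects."""
    return lambda o1, o2: k(phi_a(o1), phi_a(o2))

k_sum = combine_sum(k_linear, k_linear)
print(k_sum([1, 2], [3, 4]))  # (3 + 8) doubled = 22

# e.g. an (assumed) canonical mapping that drops the last feature
k_canon = compose(k_linear, lambda x: x[:-1])
print(k_canon([1, 2, 9], [3, 4, 9]))  # 1*3 + 2*4 = 11
```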
{
"text": "Kernel combinations are very useful for integrating the knowledge provided by the manually defined features with the knowledge automatically obtained with structural kernels; feature mapping compositions are useful methods to describe diverse kernel classes (see Section 5). In this perspective, we propose to split the mapping \u03c6 by defining our tree kernel as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "r Canonical Mapping, \u03c6 M (), in which a linguistic object (e. g., a syntactic parse tree) is transformed into a more meaningful structure (e. g., the subtree corresponding to a verb subcategorization frame).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "r Feature Extraction, \u03c6 S (), which maps the canonical structure in all its fragments according to different fragment spaces S (e. g., ST, SST, and PT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "For example, given the kernel",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "K ST = \u03c6 ST (o 1 ) \u2022 \u03c6 ST (o 2 ), we can apply a canonical mapping \u03c6 M (), obtaining K M ST = \u03c6 ST (\u03c6 M (o 1 )) \u2022 \u03c6 ST (\u03c6 M (o 2 )) = \u03c6 ST \u2022 \u03c6 M (o 1 ) \u2022 \u03c6 ST \u2022 \u03c6 M (o 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": ", which is a noticeably different kernel, which is induced by the mapping",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "\u03c6 ST \u2022 \u03c6 M .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "In the remainder of this section we start the description of our engineered kernels by defining three different feature extraction mappings based on three different kernel spaces (i. e., ST, SST, and PT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Kernel Trick",
"sec_num": "3.1"
},
{
"text": "The kernels that we consider represent trees in terms of their substructures (fragments). The kernel function detects if a tree subpart (common to both trees) belongs to the feature space that we intend to generate. For this purpose, the desired fragments need to be described. We consider three main characterizations: the subtrees (STs) (Vishwanathan and Smola 2002) , the subset trees (SSTs) or all subtrees (Collins and Duffy 2002) , and the partial trees (PTs) (Moschitti 2006a ).",
"cite_spans": [
{
"start": 339,
"end": 368,
"text": "(Vishwanathan and Smola 2002)",
"ref_id": "BIBREF35"
},
{
"start": 411,
"end": 435,
"text": "(Collins and Duffy 2002)",
"ref_id": "BIBREF4"
},
{
"start": 466,
"end": 482,
"text": "(Moschitti 2006a",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernel Spaces",
"sec_num": "3.2"
},
{
"text": "As we consider syntactic parse trees, each node with its children is associated with a grammar production rule, where the symbol on the left-hand side corresponds to the parent and the symbols on the right-hand side are associated with the children. The terminal symbols of the grammar are always associated with tree leaves.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tree Kernel Spaces",
"sec_num": "3.2"
},
{
"text": "A subtree (ST) is defined as a tree rooted in any non-terminal node along with all its descendants. For example, Figure 1a shows the parse tree of the sentence Mary brought a cat together with its six STs. A subset tree (SST) is a more general structure because its leaves can be non-terminal symbols. For example, Figure 1 (b) shows ten SSTs (out of 17) of the subtree in Figure 1a rooted in VP. SSTs satisfy the constraint that grammatical rules cannot be broken. For example, [VP [V NP]] is an SST which has two non-terminal symbols, V and NP, as leaves. On the contrary, ] ] is not an SST as it violates the production VP\u2192V NP. If we relax the constraint over the SSTs, we obtain a more general form of substructures that we call partial trees (PTs). These can be generated by the application of partial production rules of the grammar; consequently Figure 1c shows that the number of PTs derived from the same tree as before is still higher (i. e., 30 PTs). These numbers provide an intuitive quantification of the different degrees of information encoded by each representation. ",
"cite_spans": [
{
"start": 575,
"end": 576,
"text": "]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 113,
"end": 122,
"text": "Figure 1a",
"ref_id": "FIGREF1"
},
{
"start": 315,
"end": 323,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 373,
"end": 382,
"text": "Figure 1a",
"ref_id": "FIGREF1"
},
{
"start": 854,
"end": 863,
"text": "Figure 1c",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Tree Kernel Spaces",
"sec_num": "3.2"
},
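The ST space can be illustrated with a toy enumerator; trees as nested tuples is an assumed encoding for illustration, not the kernels' actual representation:

```python
# Enumerate the subtrees (STs) of a toy parse: a tree is a nested tuple
# (label, child, ...) and leaves are plain strings (terminal symbols).

def subtrees(t):
    """Every non-terminal node, taken with all its descendants, is one ST."""
    if isinstance(t, str):      # a leaf: terminal symbols do not root STs
        return []
    sts = [t]
    for child in t[1:]:
        sts.extend(subtrees(child))
    return sts

toy = ("VP", ("V", "brought"), ("NP", ("D", "a"), ("N", "cat")))
for st in subtrees(toy):
    print(st[0])  # root label of each ST: VP, V, NP, D, N
```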
{
"text": "The main idea underlying tree kernels is to compute the number of common substructures between two trees T 1 and T 2 without explicitly considering the whole fragment space. In the following, we report on the Subset Tree (SST) kernel proposed in Collins and Duffy (2002) . The algorithms to efficiently compute it along with the ST and PT kernels can be found in Moschitti (2006a) .",
"cite_spans": [
{
"start": 246,
"end": 270,
"text": "Collins and Duffy (2002)",
"ref_id": "BIBREF4"
},
{
"start": 363,
"end": 380,
"text": "Moschitti (2006a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "Given two trees T 1 and T 2 , let { f 1 , f 2 , ..} = F be the set of substructures (fragments) and I i (n) be equal to 1 if f i is rooted at node n, 0 otherwise. Collins and Duffy's kernel is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "K(T 1 , T 2 ) = n 1 \u2208N T 1 n 2 \u2208N T 2 \u2206(n 1 , n 2 )",
"eq_num": "( 1 )"
}
],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "where N T 1 and N T 2 are the sets of nodes in T 1 and T 2 , respectively, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "\u2206(n 1 , n 2 ) = |F| i=1 I i (n 1 )I i (n 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": ". The latter is equal to the number of common fragments rooted in nodes n 1 and n 2 . \u2206 can be computed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "If the productions (i.e. the nodes with their direct children) at n 1 and n 2 are different, then \u2206(n 1 , n 2 ) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction Functions",
"sec_num": "3.3"
},
{
"text": "If the productions at n 1 and n 2 are the same, and n 1 and n 2 only have leaf children (i.e., they are pre-terminal symbols), then \u2206(n 1 , n 2 ) = 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "If the productions at n 1 and n 2 are the same, and n 1 and n 2 are not pre-terminals, then \u2206(n 1 , n 2 ) = nc(n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "1 ) j=1 (1 + \u2206(c j n 1 , c j n 2 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": ", where nc(n 1 ) is the number of children of n 1 and c j n is the j-th child of n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "Such tree kernels can be normalized and a \u03bb factor can be added to reduce the weight of large structures (refer to Collins and Duffy [2002] for a complete description).",
"cite_spans": [
{
"start": 115,
"end": 139,
"text": "Collins and Duffy [2002]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "3.",
"sec_num": null
},
{
"text": "Although the literature on SRL is extensive, there is almost no study of the use of tree kernels for its solution. Consequently, the reported research is mainly based on diverse natural language learning problems tackled by means of tree kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "In Collins and Duffy (2002) , the SST kernel was experimented with using the voted perceptron for the parse tree re-ranking task. A combination with the original PCFG model improved the syntactic parsing. Another interesting kernel for re-ranking was defined in Toutanova, Markova, and Manning (2004) . This represents parse trees as lists of paths (leaf projection paths) from leaves to the top level of the tree. It is worth noting that the PT kernel includes tree fragments identical to such paths.",
"cite_spans": [
{
"start": 3,
"end": 27,
"text": "Collins and Duffy (2002)",
"ref_id": "BIBREF4"
},
{
"start": 262,
"end": 300,
"text": "Toutanova, Markova, and Manning (2004)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "In Kazama and Torisawa (2005) , an interesting algorithm that speeds up the average running time is presented. This algorithm looks for node pairs in which the rooted subtrees share many substructures (malicious nodes) and applies a transformation to the trees rooted in such nodes to make the kernel computation faster. The results show a several-hundred-fold speed increase with respect to the basic implementation.",
"cite_spans": [
{
"start": 3,
"end": 29,
"text": "Kazama and Torisawa (2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "In Zelenko, Aone, and Richardella (2003) , two kernels over syntactic shallow parser structures were devised for the extraction of linguistic relations, for example, person-affiliation. To measure the similarity between two nodes, the contiguous string kernel and the sparse string kernel were used. In Culotta and Sorensen (2004) such kernels were slightly generalized by providing a matching function for the node pairs. The time complexity for their computation limited the experiments to a data set of just 200 news items.",
"cite_spans": [
{
"start": 3,
"end": 40,
"text": "Zelenko, Aone, and Richardella (2003)",
"ref_id": "BIBREF37"
},
{
"start": 303,
"end": 330,
"text": "Culotta and Sorensen (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "In Shen, Sarkar, and Joshi (2003) , a tree kernel based on lexicalized tree adjoining grammar (LTAG) for the parse re-ranking task was proposed. The subtrees induced by this kernel are built using the set of elementary trees as defined by LTAG.",
"cite_spans": [
{
"start": 3,
"end": 33,
"text": "Shen, Sarkar, and Joshi (2003)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "In Cumby and Roth (2003) , a feature description language was used to extract structured features from the syntactic shallow parse trees associated with named entities. Their experiments on named entity categorization showed that when the description language selects an adequate set of tree fragments the voted perceptron algorithm increases its classification accuracy. The explanation was that the complete tree fragment set contains many irrelevant features and may cause overfitting.",
"cite_spans": [
{
"start": 3,
"end": 24,
"text": "Cumby and Roth (2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "In Zhang, Zhang, and Su (2006) , convolution tree kernels for relation extraction were applied in a way similar to the one proposed in Moschitti (2004) . The combination of standard features along with several tree subparts, tailored according to their importance for the task, produced again an improvement on the state of the art.",
"cite_spans": [
{
"start": 3,
"end": 30,
"text": "Zhang, Zhang, and Su (2006)",
"ref_id": "BIBREF38"
},
{
"start": 135,
"end": 151,
"text": "Moschitti (2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "Such previous work, as well as that described previously, show that tree kernels can efficiently represent syntactic objects, for example, constituent parse trees, in huge feature spaces. The next section describes our SRL system adopting tree kernels within SVMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "3.4"
},
{
"text": "A meaningful study of tree kernels for SRL cannot be carried out without a comparison with a state-of-the-art architecture: Kernel models that improve average performing systems are just a technical exercise whose findings would have a reduced value. A state-of-the-art architecture, instead, can be used as a basic system upon which tree kernels should improve. Because kernel functions in general introduce a sensible slowdown with respect to the linear approach, we also have to consider efficiency issues. These aims drove us in choosing the following components for our SRL system: r SVMs as our learning algorithm; these provide both a state-of-the-art learning model (in terms of accuracy) and the possibility of using kernel functions r a two-stage role labeling module to improve learning and classification efficiency; this comprises: -a feature extractor that can represent candidate arguments using both linear and structured features a boundary classifier (BC) -a role multi-classifier (RM), which is obtained by applying the OVA (One vs. All) approach r a conflict resolution module, that is, a software component that resolves inconsistencies in the annotations using either a rule-based approach or a tree kernel classifier; the latter allows experimentation with the classification of complete predicate-argument annotations in correct and incorrect structures r a joint inference re-ranking module, which employs a combination of standard features and tree kernels to rank alternative candidate labeling schemes for a proposition; this module, as shown in Gildea and Jurafsky (2002) , Pradhan et al. (2004) , , is mandatory in order to achieve state-of-the-art accuracy",
"cite_spans": [
{
"start": 1574,
"end": 1600,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF9"
},
{
"start": 1603,
"end": 1624,
"text": "Pradhan et al. (2004)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A State-of-the-Art Architecture for Semantic Role Labeling",
"sec_num": "4."
},
{
"text": "We point out that we did not use any heuristic to filter out the nodes which are likely to be incorrect boundaries, for example, as done in Xue and Palmer (2004) . On the one hand, this makes the learning and classification phases more complex because they involve more instances. On the other hand, our results are not biased by the quality of the heuristics, leading to more meaningful findings.",
"cite_spans": [
{
"start": 140,
"end": 161,
"text": "Xue and Palmer (2004)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A State-of-the-Art Architecture for Semantic Role Labeling",
"sec_num": "4."
},
{
"text": "In the remainder of this section, we describe the main functional modules of our architecture for SRL and introduce some basic concepts about the use of structured features for SRL. Specific feature engineering for the above SRL subtasks is described and discussed in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A State-of-the-Art Architecture for Semantic Role Labeling",
"sec_num": "4."
},
{
"text": "Given a sentence in natural language, our SRL system identifies all the verb predicates and their respective arguments. We divide this step into three subtasks: (a) predicate detection, which can be carried out by simple heuristics based on part-of-speech information, (b) the detection of predicate-argument boundaries (i. e., the span of their words in the sentence), and (c) the classification of the argument type (e. g., Arg0 or ArgM in PropBank).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Basic Two-Stage Role Labeling System",
"sec_num": "4.1"
},
{
"text": "The standard approach to learning both the detection and the classification of predicate arguments is summarized by the following steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Basic Two-Stage Role Labeling System",
"sec_num": "4.1"
},
{
"text": "Given a sentence from the training set, generate a full syntactic parse tree;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "2. let P and A be the set of predicates and the set of parse-tree nodes (i. e., the potential arguments), respectively;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "3. for each pair p, a \u2208 P \u00d7 A: For instance, given the example in Figure 2 (a), we would consider all the pairs p, a where p is the node associated with the predicate took and a is any other tree node not overlapping with p. If the node a exactly covers the word sequences John or the book, then \u03c6(p, a) is added to the set E + , otherwise it is added to E \u2212 , as in the case of the node (NN book).",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "The E + and E \u2212 sets are used to train the boundary classifier. To train the role multiclassifier, the elements of E + can be reorganized as positive E + arg i and negative E \u2212 arg i examples for each role type i. In this way, a binary OVA classifier for each argument i can be trained. We adopted this solution following Pradhan, Hacioglu, Krugler et al. (2005) because it is simple and effective. In the classification phase, given an unseen sentence, all the pairs p, a are generated and classified by each individual role classifier C i . The argument label associated with the maximum among the scores provided by C i is eventually selected. The feature extraction function \u03c6 can be implemented according to different linguistic theories and intuitions. From a technical point of view, we can use \u03c6 to map p, a in feature vectors or in structures to be used in a tree kernel function. The next section describes our choices in more detail.",
"cite_spans": [
{
"start": 322,
"end": 362,
"text": "Pradhan, Hacioglu, Krugler et al. (2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
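The OVA decision step described above reduces to an argmax over the role classifiers' scores; a minimal sketch (the scores are made-up stand-ins for trained SVM outputs):

```python
# OVA role selection: each role classifier C_i scores the candidate
# argument node, and the label with the maximum score is selected.

def ova_label(scores):
    """scores: dict mapping role label -> raw classifier score."""
    return max(scores, key=scores.get)

print(ova_label({"Arg0": 0.8, "Arg1": -0.2, "ArgM": 0.1}))  # Arg0
```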
{
"text": "Our feature extractor module and our learning algorithms are designed to cope with both linear and structured features, used for the different stages of the SRL process. The standard features that we adopted are shown in r the Syntactic Frame defined in Xue and Palmer (2004) .",
"cite_spans": [
{
"start": 254,
"end": 275,
"text": "Xue and Palmer (2004)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
{
"text": "We indicate with structured features the basic syntactic structures extracted from the sentence-parse tree or their canonical transformation (see Section 3.1). In particular, we focus on the minimal spanning tree that includes the predicate along with all of its arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
{
"text": "More formally, given a parse tree t, a node set spanning tree (NST) over a set of nodes N t = {n 1 , . . . , n k } is a partial tree of t that (1) is rooted at the deepest level and (2) contains all and only the nodes n i \u2208 N t , along with their ancestors and descendants. An NST can be built as follows. For any choice of N t , we call r the lowest common ancestor of n 1 , . . . , n k . Then, from the set of all the descendants of r, we remove all the nodes n j that: (1) do not belong to N t and (2) are neither ancestors nor descendants of any node belonging to N t .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
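The NST construction above can be sketched on a toy parse encoded as parent/children maps (an illustrative encoding, not the system's own data structures): the NST over a node set N_t is rooted at the lowest common ancestor r and keeps only the nodes in N_t, their ancestors below r, and their descendants.

```python
# Node set spanning tree (NST): given target nodes N_t, find their lowest
# common ancestor r, then keep targets, their ancestors under r, and
# their descendants; everything else under r is pruned.

def ancestors(node, parent):
    out = []
    while node in parent:
        node = parent[node]
        out.append(node)
    return out

def descendants(node, children):
    out = set(children.get(node, ()))
    for c in children.get(node, ()):
        out |= descendants(c, children)
    return out

def nst_nodes(targets, parent, children):
    chains = [set(ancestors(t, parent)) | {t} for t in targets]
    common = set.intersection(*chains)
    r = max(common, key=lambda n: len(ancestors(n, parent)))  # deepest = LCA
    keep = {r}
    for t in targets:
        keep |= {t} | descendants(t, children)
        keep |= set(ancestors(t, parent)) & descendants(r, children)
    return keep

# Toy parse: S -> NP1 VP, VP -> V NP2, NP2 -> D N
parent = {"NP1": "S", "VP": "S", "V": "VP", "NP2": "VP", "D": "NP2", "N": "NP2"}
children = {"S": ("NP1", "VP"), "VP": ("V", "NP2"), "NP2": ("D", "N")}
print(sorted(nst_nodes({"V", "NP2"}, parent, children)))  # ['D', 'N', 'NP2', 'V', 'VP']
```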
{
"text": "Because predicate arguments are associated with tree nodes, we can define the predicate argument spanning tree (AST n ) of a predicate argument node set A p = {a 1 , . . . , a n } as the NST over these nodes and the predicate node, that is, the node exactly covering the predicate p. 2 An AST n corresponds to the minimal parse subtree whose leaves are all and only the word sequences belonging to the arguments and the predicate. For example, Figure 3a shows the parse tree of the sentence: John took the book and read its title. took {ARG 0 ,ARG 1 } and read {ARG 0 ,ARG 1 } are two AST n structures associated with the two predicates took and read, respectively, and are shown in Figure 3b and 3c.",
"cite_spans": [],
"ref_spans": [
{
"start": 444,
"end": 453,
"text": "Figure 3a",
"ref_id": "FIGREF4"
},
{
"start": 683,
"end": 692,
"text": "Figure 3b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
{
"text": "For each predicate, only one NST is a valid AST n . Careful manipulations of an AST n can be employed for those tasks that require a representation of the whole predicateargument structure, for example, overlap resolution or proposition re-ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
{
"text": "It is worth noting that the predicate-argument feature, or PAF in Moschitti (2004) , is a canonical transformation of the AST n in the subtree including the predicate p and only one of its arguments. For the sake of uniform notation, PAF will be referred to as AST 1 (argument spanning tree), the subscript 1 stressing the fact that the structure only encompasses one of the predicate arguments. An example AST 1 is shown in Figure 3d . Manipulations of an AST 1 structure can lead to interesting tree kernels for local learning tasks, such as boundary detection and argument classification.",
"cite_spans": [
{
"start": 66,
"end": 82,
"text": "Moschitti (2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 425,
"end": 434,
"text": "Figure 3d",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
{
"text": "Regardless of the adopted feature space, our multiclassification approach suffers from the problem of selecting both boundaries and argument roles independently of the whole structures. Thus, it is possible that (a) two labeled nodes refer to the same arguments (node overlaps) and (b) invalid role sequences are generated (e. g., Arg0, Arg0, Arg0, . . . ). Next, we describe our approach to solving such problems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear and Structured Representation",
"sec_num": "4.2"
},
{
"text": "We call a conflict, or ambiguity, or overlap resolution a stage of the SRL process which resolves annotation conflicts that invalidate the underlying linguistic model. This happens, for example, when both a node and one of its descendants are classified as positive boundaries, namely, they received a role label. We say that such nodes are overlapping as their leaf (i. e., word) sequences overlap. Because this situation is not allowed by the PropBank annotation definition, we need a method to select the most appropriate word sequence. Our system architecture can employ one of three different disambiguation strategies: r a basic solution which, given two overlapping nodes, randomly selects one to be removed; r the following heuristics:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "The node causing the major number of overlaps is removed, for example, a node which dominates two nodes labeled as arguments 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "Core arguments (i. e., arguments associated with the subcategorization frame of the target verb) are always preferred over adjuncts (i. e., arguments that are not specific to verbs or verb senses) 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "In case the two previous rules do not eliminate all conflicts, the nodes located deeper in the tree are discarded; and r a tree kernel-based overlap resolution strategy consisting of an SVM trained to recognize non-clashing configurations that often correspond to correct propositions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
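The clash underlying all three strategies is span overlap between candidate argument nodes; a minimal sketch of the check (the spans are illustrative word indices, not the system's representation):

```python
# Two candidate argument nodes conflict when their word spans (start, end)
# intersect, i.e., one node dominates or crosses the other.

def overlaps(span_a, span_b):
    (s1, e1), (s2, e2) = span_a, span_b
    return s1 <= e2 and s2 <= e1

# e.g. a node and one of its descendants labeled as arguments
cands = {"NP_high": (0, 3), "NP_low": (2, 3), "PP": (4, 6)}
clashes = [(a, b) for a in cands for b in cands
           if a < b and overlaps(cands[a], cands[b])]
print(clashes)  # [('NP_high', 'NP_low')]
```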
{
"text": "The latter approach consists of: (1) a software module that generates all the possible nonoverlapping configurations of nodes. These are built using the output of the local node classifiers by generating all the permutations of argument nodes of a predicate and removing the configurations that contain at least one overlap;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "(2) an SVM trained on such non-overlapping configurations, where the positive examples are correct predicateargument structures (although eventually not complete) and negative ones are not. At testing time, we classify all the alternative non-clashing configurations. In case more than one structure is selected as correct, we choose the one associated with the highest SVM score. These disambiguation modules can be invoked after either the BC or the RM classification. The different information available after each phase can be used to design different kinds of features. For example, the knowledge of the candidate role of an argument node can be a key issue in the design of effective conflict resolution methodologies, for example, by eliminating ArgX, ArgX, ArgX, . . . sequences. These different approaches are discussed in Section 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "The next section describes a more advanced approach that can eliminate overlaps and choose the most correct annotation for a proposition among a set of alternative labeling schemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution",
"sec_num": "4.3"
},
{
"text": "The heuristics considered in the previous sections only act when a conflict is detected. In a real situation, many incorrect annotations are generated with no overlaps. To deal with such cases, we need a re-ranking module based on a joint BC and RM model as suggested in Haghighi, Toutanova, and Manning (2005) . Such a model is based on 1an algorithm to evaluate the most likely labeling schemes for a given predicate, and (2) a re-ranker that sorts the labeling schemes according to their correctness.",
"cite_spans": [
{
"start": 271,
"end": 310,
"text": "Haghighi, Toutanova, and Manning (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Model for Re-Ranking",
"sec_num": "4.4"
},
{
"text": "Step 1 uses the probabilities associated with each possible annotation of parse tree nodes, hence requiring a probabilistic output from BC and RM. As the SVM learning algorithm produces metric values, we applied Platt's algorithm (Platt 1999) to convert them into probabilities, as already proposed in Pradhan, Ward et al. (2005) . These posterior probabilities are then combined to generate the n labelings that maximize a likelihood measure.",
"cite_spans": [
{
"start": 230,
"end": 242,
"text": "(Platt 1999)",
"ref_id": "BIBREF25"
},
{
"start": 302,
"end": 329,
"text": "Pradhan, Ward et al. (2005)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Model for Re-Ranking",
"sec_num": "4.4"
},
{
"text": "Step 2 requires the training of an automatic re-ranker. This can be designed using a binary classifier that, given two annotations, decides which one is more accurate. We modeled such a classifier by means of three different kernels based on standard features, structured features, and their combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Joint Model for Re-Ranking",
"sec_num": "4.4"
},
{
"text": "Annotations. First, we converted the output of each nodeclassifier into a posterior probability conditioned by its output scores (Platt 1999) . This method uses a parametric model to fit onto a sigmoid distribution the posterior probability P(y = 1, f ), where f is the output of the classifier and the parameters are dynamically adapted to give the best probability output. 3 Second, we selected the n most likely sequences of node labelings. Given a predicate, the likelihood of a labeling scheme (or state) s for the K candidate argument nodes is given by:",
"cite_spans": [
{
"start": 129,
"end": 141,
"text": "(Platt 1999)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of the N-best",
"sec_num": "4.4.1"
},
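The Platt (1999) conversion mentioned above maps a raw SVM margin f into a posterior via a fitted sigmoid; a minimal sketch, where the parameters A and B are illustrative values rather than fitted ones:

```python
# Platt scaling: P(y = 1 | f) = 1 / (1 + exp(A*f + B)), with A and B
# normally fitted on held-out data; here they are made-up constants.
import math

def platt_probability(f, A=-1.5, B=0.0):
    return 1.0 / (1.0 + math.exp(A * f + B))

# A large positive margin maps close to 1, a large negative one close to 0.
print(round(platt_probability(2.0), 3))   # 0.953
print(round(platt_probability(-2.0), 3))  # 0.047
```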
{
"text": "p(s) = K i=1 p i (l), p i (l) = p i (l i )p i (ARG) if l i = NARG (1 \u2212 p i (ARG)) 2 otherwise (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of the N-best",
"sec_num": "4.4.1"
},
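The state likelihood of Equation (2) is a simple product over candidate nodes; a sketch with made-up stand-in probabilities (the labels, p_role, and p_arg values are illustrative, not classifier outputs):

```python
# Likelihood of a labeling scheme: a role label l contributes
# p_i(l) * p_i(ARG); the NARG label contributes (1 - p_i(ARG))^2.

def state_likelihood(labels, p_role, p_arg):
    like = 1.0
    for i, l in enumerate(labels):
        if l == "NARG":
            like *= (1.0 - p_arg[i]) ** 2
        else:
            like *= p_role[i][l] * p_arg[i]
    return like

p_arg = [0.9, 0.2]                       # P(node i is an argument)
p_role = [{"Arg0": 0.7}, {"Arg1": 0.5}]  # P(label | node i)
# 0.7 * 0.9 for the Arg0 node, (1 - 0.2)^2 for the NARG node
print(round(state_likelihood(["Arg0", "NARG"], p_role, p_arg), 4))
```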
{
"text": "where p i (l) is the probability of node i being assigned the label l, and p i (l) is the same probability weighted by the probability p i (ARG) of the node being an argument. If l = NARG (not an argument) then both terms evaluate to (1 \u2212 p i (ARG)) and the likelihood of the NARG label assignment is given by (1 \u2212 p i (ARG)) 2 . To select the n states associated with the highest probability, we cannot evaluate the likelihood of all possible states because they are exponential in number. In order to reduce the search space we (a) limit the number of possible labelings of each node to m and (b) avoid traversing all the states by applying a Viterbi algorithm to search for the most likely labeling schemes. From each state we generate the states in which a candidate argument is assigned different labels. This operation is bound to output at most n states which are generated by traversing a maximum of n \u00d7 m states. Therefore, in the worst case scenario the number of traversed states is V = n \u00d7 m \u00d7 k, k being the number of candidate argument nodes in the tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of the N-best",
"sec_num": "4.4.1"
},
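The n-best search described above can be sketched as a simple beam search over node labelings scored with the likelihood of Equation (2). The per-node label probabilities and argument probabilities are assumed to be given as plain dictionaries and floats; this is a hypothetical illustration, not the authors' implementation.

```python
import heapq

def n_best_labelings(label_probs, arg_probs, n=3, m=2):
    # label_probs: one dict {label: p_i(l)} per candidate node;
    # arg_probs: one p_i(ARG) per candidate node.
    # Returns the n most likely (likelihood, label sequence) pairs under
    # p(s) = prod_i p^_i(l_i): an argument label scores p_i(l) * p_i(ARG),
    # while NARG scores (1 - p_i(ARG))^2.
    beam = [(1.0, [])]  # (likelihood, partial labeling)
    for probs, p_arg in zip(label_probs, arg_probs):
        # (a) limit each node to its m most likely role labels
        top = heapq.nlargest(m, probs.items(), key=lambda kv: kv[1])
        new_beam = []
        for lik, labels in beam:
            for label, p in top:
                new_beam.append((lik * p * p_arg, labels + [label]))
            new_beam.append((lik * (1.0 - p_arg) ** 2, labels + ["NARG"]))
        # (b) keep only the n best partial states (the beam bound)
        beam = heapq.nlargest(n, new_beam, key=lambda s: s[0])
    return beam
```

Each node expands at most m + 1 successors per surviving state, so the traversal stays within the n × m × K bound discussed in the text.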
{
"text": "During the search we also enforce overlap resolution policies. Indeed, for any given state in which a node n j is assigned a label l = NARG, we generate all; and only the states in which all the nodes that are dominated by n j are assigned the NARG label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of the N-best",
"sec_num": "4.4.1"
},
{
"text": "Re-Ranker. The Viterbi algorithm generates the n most likely annotations for the proposition associated with a predicate p. These can be used to build annotation pairs, s i , s j , which, in turn, are used to train a binary classifier that decides if s i is more accurate that s j . Each candidate proposition s i can be described by a structured feature t i and a vector of standard features v i . As a whole, an example e i is described by the tuple",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling an Automatic",
"sec_num": "4.4.2"
},
{
"text": "t 1 i , t 2 i , v 1 i , v 2 i ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling an Automatic",
"sec_num": "4.4.2"
},
{
"text": "where t 1 i and v 1 i refer to the first candidate annotation, whereas t 2 i and v 2 i refer to the second one. Given such data, we can define the following re-ranking kernels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling an Automatic",
"sec_num": "4.4.2"
},
{
"text": "K tr (e 1 , e 2 ) = K t (t 1 1 , t 1 2 ) + K t (t 2 1 , t 2 2 ) \u2212 K t (t 1 1 , t 2 2 ) \u2212 K t (t 2 1 , t 1 2 ) K pr (e 1 , e 2 ) = K p (v 1 1 , v 1 2 ) + K p (v 2 1 , v 2 2 ) \u2212 K p (v 1 1 , v 2 2 ) \u2212 K p (v 2 1 , v 1 2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling an Automatic",
"sec_num": "4.4.2"
},
{
"text": "where K t is one of the tree kernel functions defined in Section 3 and K p is a polynomial kernel applied to the feature vectors. The final kernel that we use is the following combination:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling an Automatic",
"sec_num": "4.4.2"
},
{
"text": "K(e 1 , e 2 ) = K tr (e 1 , e 2 ) |K tr (e 1 , e 2 )| + K pr (e 1 , e 2 ) |K pr (e 1 , e 2 )| Previous sections have shown how our SRL architecture exploits tree kernel functions to a large extent. In the next section, we describe in more detail our structured features and the engineering methods applied for the different subtasks of the SRL process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling an Automatic",
"sec_num": "4.4.2"
},
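The preference kernel over annotation pairs can be computed directly from four evaluations of any base kernel. The sketch below assumes a generic callable base kernel and applies equally to the tree part (K_tr) and the vector part (K_pr); it is an illustration of the formula, not the authors' code.

```python
def preference_kernel(kernel, e1, e2):
    # e1 = (first candidate, second candidate) of example 1;
    # e2 = (first candidate, second candidate) of example 2.
    # Implements K_tr(e1, e2) = K(t1^1, t2^1) + K(t1^2, t2^2)
    #                         - K(t1^1, t2^2) - K(t1^2, t2^1),
    # i.e., agreement on same-rank candidates minus cross-rank agreement.
    a1, b1 = e1
    a2, b2 = e2
    return (kernel(a1, a2) + kernel(b1, b2)
            - kernel(a1, b2) - kernel(b1, a2))
```

A toy scalar kernel is enough to check the sign structure of the formula; in the article the base kernels are the tree kernel K_t and the polynomial kernel K_p.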
{
"text": "Structured features are an effective alternative to standard features in many aspects. An important advantage is that the target feature space can be completely changed even by small modifications of the applied kernel function. This can be exploited to identify features relevant to learning problems lacking a clear and sound linguistic or cognitive justification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Feature Engineering",
"sec_num": "5."
},
{
"text": "As shown in Section 3.1, a kernel function is a scalar product \u03c6(o 1 ) \u2022 \u03c6(o 2 ), where \u03c6 is a mapping in an Euclidean space, and o 1 and o 2 are the target data, for example, parse trees. To make the engineering process easier, we decompose \u03c6 into a canonical mapping, \u03c6 M , and a feature extraction function, \u03c6 S , over the set of incoming parse trees. \u03c6 M transforms a tree into a canonical structure equivalent to an entire class of input parses and \u03c6 S shatters an input tree into its subparts (e. g., subtrees, subset trees, or partial trees as described in Section 3). A large number of different feature spaces can thus be explored by suitable combinations \u03c6 = \u03c6 S \u2022 \u03c6 M of mappings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Feature Engineering",
"sec_num": "5."
},
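The decomposition phi = phi_S composed with phi_M can be made concrete with a toy encoding. In the sketch below, trees are nested tuples, phi_M marks a target node label with "-B", and phi_S enumerates only complete subtrees (the ST space); the encoding and the marking scheme are illustrative assumptions, not the paper's data structures, and the SST and PT kernels would enumerate richer fragment sets.

```python
def phi_M(tree, target):
    # Canonical mapping phi_M: mark every node whose label equals the
    # target label. Trees are nested tuples: (label, child1, child2, ...).
    label, *children = tree
    marked = label + "-B" if label == target else label
    return (marked,) + tuple(phi_M(c, target) for c in children)

def phi_S(tree):
    # Feature extraction phi_S: shatter the tree into all of its
    # complete subtrees (one rooted at each node).
    label, *children = tree
    out = [tree]
    for c in children:
        out.extend(phi_S(c))
    return out

def kernel(t1, t2, target="NP"):
    # K(o1, o2) = phi(o1) . phi(o2) with phi = phi_S o phi_M:
    # here the scalar product simply counts shared (marked) subtrees.
    f1 = phi_S(phi_M(t1, target))
    f2 = phi_S(phi_M(t2, target))
    return sum(f2.count(f) for f in f1)
```

Swapping either component (a different marking policy, or a richer fragment extractor) changes the induced feature space without touching the rest of the pipeline, which is exactly the engineering flexibility the decomposition is meant to provide.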
{
"text": "We study different canonical mappings to capture syntactic/semantic aspects useful for SRL. In particular, we define structured features for the different phases of the SRL process, namely, boundary detection, argument classification, conflict resolution, and proposition re-ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Feature Engineering",
"sec_num": "5."
},
{
"text": "The AST 1 or PAF structures, already mentioned in Section 4.2, have shown to be very effective for argument classification but not for boundary detection. The reason is that two nodes that encode correct and incorrect boundaries may generate very similar AST 1 s and, consequently, have many fragments in common. To solve this problem, we specify the node that exactly covers the target argument node by simply marking it (or marking all its descendants) with the label B, denoting the boundary property.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "For example, Figure 4 shows the parse tree of the sentence Paul delivers a talk in formal style, highlighting the predicate with its two arguments, that is, Arg0 and Arg1. Figure 5 shows the AST 1 , AST m 1 , and AST cm 1 , that is, the basic structure, the structure with the marked argument node, and the completely marked structure, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 4",
"ref_id": "FIGREF5"
},
{
"start": 172,
"end": 180,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "To understand the usefulness of node-marking strategies, we can examine Figure 6 . This reports the case in which a correct and an incorrect argument node are chosen by also showing the corresponding AST 1 and AST m 1 representations ((a) and (b)). Figure 6c shows that the number of common fragments of two AST 1 structures is 14. This is much larger than the number of common AST m 1 fragments, that is, only 3 substructures (Figure 6d) .",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 80,
"text": "Figure 6",
"ref_id": "FIGREF7"
},
{
"start": 249,
"end": 258,
"text": "Figure 6c",
"ref_id": "FIGREF7"
},
{
"start": 427,
"end": 438,
"text": "(Figure 6d)",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "Additionally, because the type of a target argument strongly depends on the type and number of the other predicate arguments 4 (Punyakanok et al. 2005; Haghighi, and Manning 2005), we should extract features from the whole predicate argument structure. In contrast, AST 1 s completely neglect the information (i. e., the tree portions) related to non-target arguments.",
"cite_spans": [
{
"start": 125,
"end": 151,
"text": "4 (Punyakanok et al. 2005;",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "One way to use this further information with tree kernels is to use the minimum subtree that spans all the predicate-argument structures, that is, the AST n defined in Section 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "However, AST n s pose two problems. First, we cannot use them for the boundary detection task since we do not know the predicate-argument structure yet. We can derive the AST n (its approximation) from the nodes selected by a boundary classifier, that is, the nodes that correspond to potential arguments. Such approximated AST n s can be easily used in the argument classification stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "Second, an AST n is the same for all the arguments in a proposition, thus we need a way to differentiate it for each target argument. Again, we can mark the target argument node as shown in the previous section. We refer to this subtree as a marked target AST n (AST mt n ). However, for large arguments (i. e., spread over a large part of the sentence tree) the substructures' likelihood of being part of different arguments is quite high.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "To address this problem, we can mark all the nodes that descend from the target argument node. We refer to this structure as a completely marked target AST n (AST cmt n ). AST cmt n s may be seen as AST 1 s enriched with new information coming from the other arguments (i. e., the non-marked subtrees). Note that if we only consider the AST 1 subtree from a AST cmt n , we obtain AST cm 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Boundary Detection and Argument Classification",
"sec_num": "5.1"
},
{
"text": "This section describes structured features employed by the tree kernel-based conflict resolution module of the SRL architecture described in Section 4.3. This subtask is performed by means of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Features for Conflict Resolution",
"sec_num": "5.2"
},
{
"text": "A first annotation of potential arguments using a high recall boundary classifier and, eventually, the role information provided by a role multiclassifier (RM).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "2. An AST n classification step aiming at selecting, among the substructures that do not contain overlaps, those that are more likely to encode the correct argument set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "The set of argument nodes recognized by BC can be associated with a subtree of the corresponding sentence parse, which can be classified using tree kernel functions. These should evaluate whether a subtree encodes a correct predicate-argument structure or not. As it encodes features from the whole predicate-argument structure, the AST n that we introduced in Section 4.2 is a structure that can be employed for this task. Let A p be the set of potential argument nodes for the predicate p output by BC; the classifier examples are built as follows: (1) we look for node pairs n 1 , n 2 \u2208 A p \u00d7 A p where n 1 is the ancestor of n 2 or vice versa; (2) we create two node sets A 1 = A \u2212 {n 1 } and A 2 = A \u2212 {n 2 } and classify the two NSTs associated with A 1 and A 2 with the tree kernel classifier to select the most correct set of argument boundaries. This procedure can be generalized to a set of overlapping nodes O with more than two elements, as we simply need to generate all and only the permutations of A's nodes that do not contain overlapping pairs. Figure 7 shows a working example of such a multi-stage classifier. In (Figure 7a ), the BC labels as potential arguments four nodes (circled), three of which are overlapping (in bold circles). The overlap resolution algorithm proposes two solutions (Figure 7b ) of which only one is correct. In fact, according to the second solution, the prepositional phrase of the book would incorrectly be attached to the verbal predicate, that is, in contrast with the parse tree. The AST n classifier, applied to the two NSTs, should detect this inconsistency and provide the correct output. Figure 7 also highlights a critical problem the AST n classifier has to deal with: as the two NSTs are perfectly identical, it is not possible to distinguish between them using only their fragments.",
"cite_spans": [],
"ref_spans": [
{
"start": 1062,
"end": 1070,
"text": "Figure 7",
"ref_id": "FIGREF8"
},
{
"start": 1132,
"end": 1142,
"text": "(Figure 7a",
"ref_id": "FIGREF8"
},
{
"start": 1311,
"end": 1321,
"text": "(Figure 7b",
"ref_id": "FIGREF8"
},
{
"start": 1643,
"end": 1651,
"text": "Figure 7",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
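The generation of conflict-free boundary alternatives can be sketched on spans. The code below assumes candidate argument nodes are given as (start, end) word-span intervals, with domination approximated by span containment; it is a hypothetical helper for illustrating the enumeration step, not the authors' implementation.

```python
from itertools import combinations

def overlaps(a, b):
    # Two spans conflict when one contains (or equals) the other:
    # a node and its descendant cannot both be arguments.
    return (a[0] <= b[0] and b[1] <= a[1]) or (b[0] <= a[0] and a[1] <= b[1])

def conflict_free_alternatives(nodes):
    # Enumerate the maximal subsets of candidate spans that contain no
    # overlapping pair; each subset corresponds to one NST that a tree
    # kernel classifier could then score.
    results = []
    # examine subsets from largest to smallest, keeping only maximal ones
    for size in range(len(nodes), 0, -1):
        for subset in combinations(nodes, size):
            if any(overlaps(a, b) for a, b in combinations(subset, 2)):
                continue
            if any(set(subset) < set(r) for r in results):
                continue  # strictly contained in an already kept subset
            results.append(subset)
    return results
```

For a node spanning words 0-5 whose candidates also include its sub-spans 0-2 and 3-5, this yields exactly the two competing readings discussed around Figure 7: keep the large node alone, or keep the two smaller ones.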
{
"text": "In order to engineer novel features, we simply add the boundary information provided by BC to the NSTs. We mark with a progressive number the phrase type corresponding to an argument node, starting from the leftmost argument. We call the resulting structure an ordinal predicate-argument spanning tree (AST ord n ). For example, in the first NST of Figure 7c , we mark as NP-0 and NP-1 the first and second argument nodes, whereas in the second NST, we have a hypothesis of three arguments on three nodes that we transform as NP-0, NP-1, and PP-2.",
"cite_spans": [],
"ref_spans": [
{
"start": 349,
"end": 358,
"text": "Figure 7c",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "This simple modification enables the tree kernel to generate features useful for distinguishing between two identical parse trees associated with different argument struc- We also experimented with another structure, the marked predicate-argument spanning tree (AST m n ), in which each argument node is marked with a role label assigned by a role multi-classifier (RM). Of course, this model requires a RM to classify all the nodes recognized by BC first. An example AST m n is shown in Figure 7d .",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 497,
"text": "Figure 7d",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "1.",
"sec_num": null
},
{
"text": "In Section 4.4, we presented our re-ranking mechanism, which is inspired by the joint inference model described in Haghighi, Toutanova, and Manning (2005) . Designing structured features for the re-ranking classifier is complex in many aspects. Unlike the other structures that we have discussed so far, the defined mappings should:",
"cite_spans": [
{
"start": 115,
"end": 154,
"text": "Haghighi, Toutanova, and Manning (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Proposition Re-Ranking",
"sec_num": "5.3"
},
{
"text": "(1) preserve as much information as possible about the whole predicate-argument structure;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Proposition Re-Ranking",
"sec_num": "5.3"
},
{
"text": "(2) focus the learning algorithm on the whole structure; and (3) be able to identify those small differences that distinguish more or less accurate labeling schemes. Among the possible solutions that we have explored, three are especially interesting in terms of accuracy improvement or linguistic properties, and are described hereinafter. The AST cm n (completely marked AST n , see Figure 8a ) is an AST n in which each argument node label is enriched with the role assigned to the node by RM. The labels of the descendants of each argument node are modified accordingly, down to pre-terminal nodes. The AST cmt n is a variant of AST cm n in which only the target is marked. Marking a node descendant is meant to force substructures matching only among homogeneous argument types. This representation should provide rich syntactic and lexical information about the parse tree encoding the predicate-argument structure.",
"cite_spans": [],
"ref_spans": [
{
"start": 385,
"end": 394,
"text": "Figure 8a",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Structures for Proposition Re-Ranking",
"sec_num": "5.3"
},
{
"text": "The PAS (predicate-argument structure, see Figure 8b ) is a completely different structure that preserves the parse subtrees associated with each argument node while discarding the intra-argument syntactic parse information. Indeed, the syntactic links between the argument nodes are represented as a dummy 1-level tree, which appears in any PAS and therefore does not influence the evaluation of similarity between pairs of structures. This structure accommodates the predicate and all the arguments of an annotation in a sequence of seven slots. 5 To each slot is attached an argument label to which in turn is attached the subtree rooted in the argument node. The predicate is represented by means of a pre-terminal node labeled rel to which the lemmatization of the predicate word is attached as a leaf node. In general, a proposition consists of m arguments, with m \u2264 6, where m varies according to the predicate and the context. To guarantee that predicate structures with a different number of arguments are matched in the SST kernel function, we attach a dummy descendant marked null to the slots not filled by an argument.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 52,
"text": "Figure 8b",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Structures for Proposition Re-Ranking",
"sec_num": "5.3"
},
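The seven-slot PAS layout with null padding can be sketched as a flat bracketed tree. The s-expression encoding, the slot labels, and the helper name below are illustrative assumptions; the point is only to show how null fillers keep structures with different argument counts comparable under the SST kernel.

```python
def build_pas(predicate_lemma, arguments, n_slots=7):
    # Build a flat PAS-like tree as a bracketed string: one slot holds a
    # pre-terminal 'rel' node with the lemmatized predicate as its leaf,
    # one slot per argument carries the role label and the subtree rooted
    # in the argument node, and remaining slots get a dummy 'null' filler.
    slots = [f"(SLOT (rel {predicate_lemma}))"]
    for role, subtree in arguments:
        slots.append(f"(SLOT ({role} {subtree}))")
    while len(slots) < n_slots:
        slots.append("(SLOT null)")
    return "(PAS " + " ".join(slots) + ")"
```

A proposition with two arguments thus still produces a seven-slot tree, so its fragments can match those of propositions with more or fewer arguments.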
{
"text": "The PAS tl (type-only, lemmatized PAS, see Figure 8c ) is a specialization of the PAS that only focuses on the syntax of the predicate-argument structure, namely, the type and relative position of each argument, minimizing the amount of lexical and syntactic information derived from the parse tree. The differences with the PAS are that: (1) each slot is attached to a pre-terminal node representing the argument type and a terminal node whose label indicates the syntactic type of the argument; and (2) the predicate word is lemmatized.",
"cite_spans": [],
"ref_spans": [
{
"start": 43,
"end": 52,
"text": "Figure 8c",
"ref_id": "FIGREF10"
}
],
"eq_spans": [],
"section": "Structures for Proposition Re-Ranking",
"sec_num": "5.3"
},
{
"text": "The next section presents the experiments used to evaluate the effectiveness of the proposed canonical structures in SRL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Structures for Proposition Re-Ranking",
"sec_num": "5.3"
},
{
"text": "The experiments aim to measure the contribution and the effectiveness of our proposed kernel engineering models and of the diverse structured features that we designed (Section 5). From this perspective, the role of feature extraction functions is not fundamental because the study carried out in Moschitti (2006a) strongly suggests that the SST (Collins and Duffy 2002) kernel produces higher accuracy than the PT kernel when dealing with constituent parse trees, which are adopted in our study. 6 We then selected the SST kernel and designed the following experiments:",
"cite_spans": [
{
"start": 297,
"end": 314,
"text": "Moschitti (2006a)",
"ref_id": "BIBREF20"
},
{
"start": 346,
"end": 370,
"text": "(Collins and Duffy 2002)",
"ref_id": "BIBREF4"
},
{
"start": 497,
"end": 498,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6."
},
{
"text": "(a) A study of canonical functions based on node marking for boundary detection and argument classification, that is, AST m 1 (Section 6.2). Moreover, as the standard features have shown to be effective, we combined them with AST m 1 based kernels on the boundary detection and classification tasks (Section 6.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6."
},
{
"text": "(b) We varied the amount of training data to demonstrate the higher generalization ability of tree kernels (Section 6.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6."
},
{
"text": "(c) Given the promising results of kernel engineering, we also applied it to solve a more complex task, namely, conflict resolution in SRL annotations (see Section 6.4). As this involves the complete predicate-argument structure, we could test advanced canonical functions generating AST n , AST ord n , and AST m n . (d) Previous work has shown that re-ranking is very important in boosting the accuracy of SRL. Therefore, we tested advanced canonical mappings, that is, those based on AST cm n , PAS, and PAS tl , on such tasks (Section 6.5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6."
},
{
"text": "The empirical evaluations were mostly carried out within the setting defined in the CoNLL 2005 shared task (Carreras and M\u00e0rquez 2005) . As a target data set, we used the PropBank 7 and the automatic Charniak parse trees of the sentences of Penn TreeBank 2 corpus 8 (Marcus, Santorini, and Marcinkiewicz 1993) from the CoNLL 2005 shared-task data. 9 We employed the SVM-light-TK software 10 , which encodes fast tree kernel evaluation (Moschitti 2006b) , and combinations between multiple feature vectors and trees in the SVM-light software (Joachims 1999) . We used the default regularization parameter (option -c) and \u03bb = 0.4 (see Moschitti [2004] ).",
"cite_spans": [
{
"start": 107,
"end": 134,
"text": "(Carreras and M\u00e0rquez 2005)",
"ref_id": "BIBREF2"
},
{
"start": 266,
"end": 309,
"text": "(Marcus, Santorini, and Marcinkiewicz 1993)",
"ref_id": "BIBREF18"
},
{
"start": 348,
"end": 349,
"text": "9",
"ref_id": null
},
{
"start": 435,
"end": 452,
"text": "(Moschitti 2006b)",
"ref_id": "BIBREF21"
},
{
"start": 541,
"end": 556,
"text": "(Joachims 1999)",
"ref_id": "BIBREF12"
},
{
"start": 633,
"end": 649,
"text": "Moschitti [2004]",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General Setup",
"sec_num": "6.1"
},
{
"text": "In these experiments, we measured the impact of node marking strategies on boundary detection (BD) and the complete SRL task, that is, BD and role classification (RC). We employed a configuration of the architecture described in Section 4 and previously 6 Of course the PT kernel may be much more accurate in processing PAS and PAS tl because these are not simply constituent parse trees. Nevertheless, a study of the PT kernel potential is beyond the purpose of this article. 7 http://www.cis.upenn.edu/\u223cace. 8 http://www.cis.upenn.edu/\u223ctreebank. 9 http://www.lsi.upc.edu/\u223csrlconll/. 10 http://ai-nlp.info.uniroma2.it/moschitti/. adopted in Moschitti et al. (2005b) , in which the simple conflict resolution heuristic is applied. The results were derived within the CoNLL setting by means of the related evaluator.",
"cite_spans": [
{
"start": 642,
"end": 666,
"text": "Moschitti et al. (2005b)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Testing Canonical Functions Based on Node Marking",
"sec_num": "6.2"
},
{
"text": "In more detail, in the BD experiments, we used the first million instances from the Penn TreeBank Sections 2-6 for training 11 and Section 24 for testing. Our classification model applied to this data replicates the results obtained in the CoNLL 2005 shared task, that is, the highest accuracy in BD among the systems using only one parse tree and one learning algorithm. For the complete SRL task, we used the previous BC and all the available data, that is, the sections from 2 to 21, for training the role multiclassifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing Canonical Functions Based on Node Marking",
"sec_num": "6.2"
},
{
"text": "It is worth mentioning that, as the automatic parse trees contain errors, some arguments cannot be associated with any covering node; thus we cannot extract a tree representation for them. In particular, Table 2 shows the number of arguments (column 2) for sections 2, 3, and 24 as well as the number of arguments that we could not take into account (Unrecoverable) due to the lack of parse tree nodes exactly covering their word spans. Note how Section 24 of the Penn TreeBank (which is not part of the Charniak training set) is much more affected by this problem.",
"cite_spans": [],
"ref_spans": [
{
"start": 204,
"end": 211,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Testing Canonical Functions Based on Node Marking",
"sec_num": "6.2"
},
{
"text": "Given this setting, the impact of node marking can be measured by comparing the AST 1 and the AST m 1 based kernels. The results are reported in the rows AST 1 and AST m 1 of Table 3 . Columns 2, 3, and 4 show their Precision, Recall, and F1 measure on BD and columns 5, 6, and 7 report the performance on SRL. We note that marking the argument node simplifies the generalization process as it improves both tasks by about 3.5 and 2.5 absolute percentage points, respectively. However, Row Poly shows that the polynomial kernel using state-of-the-art features (Moschitti et al. 2005b) outperforms AST m 1 by about 4.5 percentage points in BD and 8 points in the SRL task. The main reason is that the employed tree structures do not explicitly encode very important features like the passive voice or predicate position. In Moschitti (2004) , these are shown to be very effective especially when used in polynomial kernels. Of course, it is possible to engineer trees including these and other standard features with a canonical mapping, but the aim here is to provide new interesting representations rather than to abide by the simple exercise of representing already designed features within tree kernel functions. In other words, we follow the idea presented in Moschitti (2004) , where tree kernels were suggested as a means to derive new features rather than generate a stand-alone feature set.",
"cite_spans": [
{
"start": 560,
"end": 584,
"text": "(Moschitti et al. 2005b)",
"ref_id": "BIBREF22"
},
{
"start": 823,
"end": 839,
"text": "Moschitti (2004)",
"ref_id": "BIBREF19"
},
{
"start": 1264,
"end": 1280,
"text": "Moschitti (2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Testing Canonical Functions Based on Node Marking",
"sec_num": "6.2"
},
{
"text": "Rows Poly+AST 1 and Poly+AST m 1 investigate this possibility by presenting the combination of polynomial and tree kernels. Unfortunately, the results on both BD and SRL do not show enough improvement to justify the use of tree kernels; for example, Poly+AST m 1 improves Poly by only 0.52 in BD and 0.3 in SRL. The small improvement is intuitively due to the use of (1) a state-of-the-art model as a baseline and (2) a very large amount of training data which decreases the contribution of tree features. In the next section an analysis in terms of training data will shed some light on the role of tree kernels for BD and RC in SRL.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Testing Canonical Functions Based on Node Marking",
"sec_num": "6.2"
},
{
"text": "The previous section has shown that if a state-of-the-art model 12 is adopted, then the tree kernel contribution is marginal. On the contrary, if a non state-of-the-art model is adopted tree kernels can play a significant role. To verify this hypothesis, we tested the polynomial kernel over the standard feature vector proposed in Gildea and Jurafsky (2002) obtaining an F1 of 67.3, which is comparable with the AST m 1 model, that is 65.71. Moreover, a kernel combination produced a significant improvement of both models reaching an F1 of 70.4.",
"cite_spans": [
{
"start": 332,
"end": 358,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Tree Kernels for Boundary Detection and Argument Classification",
"sec_num": "6.3"
},
{
"text": "Thus, the role of tree kernels relates to the design of features for novel linguistic tasks for which the optimal data representation has not yet been developed. For example, although SRL has been studied for many years and many effective features have been designed, representations for languages like Arabic are still not very well understood and raise challenges in the design of effective predicate-argument descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Tree Kernels for Boundary Detection and Argument Classification",
"sec_num": "6.3"
},
{
"text": "However, this hypothesis on the usefulness of tree kernels is not completely satisfactory as the huge feature space produced by them should play a more important role in predicate-argument representation. For example, the many fragments extracted by an AST 1 provide a very promising back-off model for the Path feature, which should improve the generalization process of SVMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Tree Kernels for Boundary Detection and Argument Classification",
"sec_num": "6.3"
},
{
"text": "As back-off models show their advantages when the amount of training data is small, we experimented with Poly, AST 1 , AST m 1 , Poly+AST 1 , and Poly+AST m 1 and different bins of training data, starting from a very small set, namely, 10,000 instances (1%) to 1 million (100%) of instances. The results from the BD classifiers and the complete SRL task are very interesting and are illustrated by Figure 9 . We note several things. First, Figure 9a shows that with only 1% of data (i.e., 640 arguments) as positive examples, the F1 on BD of the AST m 1 kernel is surprisingly about 3 percentage points higher than the one obtained by the polynomial kernel (Poly) (i. e., the state of the art). When AST m 1 is combined with Poly the improvement reaches 5 absolute percentage points. This suggests that tree kernels should always be used when small training data sets are available.",
"cite_spans": [],
"ref_spans": [
{
"start": 398,
"end": 406,
"text": "Figure 9",
"ref_id": null
},
{
"start": 440,
"end": 449,
"text": "Figure 9a",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Role of Tree Kernels for Boundary Detection and Argument Classification",
"sec_num": "6.3"
},
{
"text": "Second, although the performance of AST 1 is much lower than all the other models, its combination with Poly produces results similar to Poly+AST m 1 , especially when the amount of training data increases. This, in agreement with the back-off property, indicates that the number of tree fragments is more relevant than their quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Role of Tree Kernels for Boundary Detection and Argument Classification",
"sec_num": "6.3"
},
{
"text": "Third, Figure 9b shows that as we increase training data, the advantage of using tree kernels decreases. This is rather intuitive as (i) in general less accurate data machine learning models trained with enough data can reach the accuracy of the most accurate models, and (ii) if the hypothesis that tree kernels provide back-off models is true, a lot of training data makes them less critical, for example, the probability of finding the Path feature of a test instance in the training set becomes high.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 16,
"text": "Figure 9b",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Role of Tree Kernels for Boundary Detection and Argument Classification",
"sec_num": "6.3"
},
{
"text": "Learning curves for BD (a and b) and the SRL task (c and d), where 100% of data corresponds to 1 million candidate argument nodes for boundary detection and 64,000 argument nodes for role classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 9",
"sec_num": null
},
{
"text": "Boundary detection accuracy (F1) on gold-standard parse trees and ambiguous structures employing the different conflict resolution methodologies described in Section 4.3. Finally, Figures 9c and 9d show learning curves 13 similar to Figures 9a and 9b , but with a reduced impact of tree kernels on the Poly model. This is due to the reduced impact of AST m 1 on role classification. Such findings are in agreement with the results in Moschitti (2004) , which show that for argument classification the SCF structure (a variant of the AST m n ) is more effective. Thus a comparison between learning curves of Poly and SCF on RC may show a behavior similar to Poly and AST m 1 for BD.",
"cite_spans": [
{
"start": 434,
"end": 450,
"text": "Moschitti (2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 180,
"end": 197,
"text": "Figures 9c and 9d",
"ref_id": null
},
{
"start": 233,
"end": 250,
"text": "Figures 9a and 9b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 4",
"sec_num": null
},
{
"text": "In these experiments, we are interested in (1) the evaluation of the accuracy of our tree kernel-based conflict resolution strategy and (2) studying the most appropriate structured features for the task. A first evaluation was carried out over gold-standard Penn TreeBank parses and PropBank annotations. We compared the alternative conflict resolution strategies implemented by our architecture (see Section 4.3), namely the random (RND), the heuristic (HEU), and a tree kernel-based disambiguator working with AST ord n structures. The disambiguators were run on the output of BC, that is, without any information about the candidate arguments' roles. BC was trained on Sections 2 to 7 with a high-recall linear kernel. We applied it to classify Sections 8 to 21 and obtained 2,988 NSTs containing at least one overlapping node. These structures generated 3,624 positive NSTs (i. e., correct structures) and 4,461 negative NSTs (incorrect structures) in which no overlap is present. We used them to train the AST ord n classifier. The F1 measure on the boundary detection task was evaluated on the 385 overlapping annotations of Section 23, consisting of 642 argument and 15,408 non-argument nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution Results",
"sec_num": "6.4"
},
{
"text": "The outcome of this experiment is summarized in Table 4 . We note two points. (1) The RND disambiguator (slightly) outperforms the HEU. This suggests that the heuristics that we implemented were inappropriate for solving the problem. It also underlines how difficult it is to explicitly choose the aspects that are relevant for a complex, non-local task such as overlap resolution. (2) The AST ord n classifier outperforms the other strategies by about 20 percentage points, that is, 91.11 vs. 73.13 and 71.50. This datum along with the previous one is a good demonstration of how tree kernels can be effectively exploited to describe phenomena whose relevant features are largely unknown or difficult to represent explicitly. It should be noted that a more accurate baseline can be provided by using the Viterbi-style search (see Section 4.4.1). However, the experiments in Section 6.5 show that the heuristics produce the same accuracy (at least when the complete task is carried out). These experiments suggest that tree kernels are promising methods for resolving annotation conflicts; thus, we tried to also select the most representative structured features (i. e., AST n , AST ord n , or AST m n ) when automatic parse trees are used. We trained BC on Sections 2-8, whereas, to achieve a very accurate argument classifier, we trained a role multi-classifier (RM) on Sections 2-21. Then, we trained the AST n , AST ord n , and AST m n classifiers on the output of BC. To test BC, RM, and the tree kernel classifiers, we ran two evaluations on Section 23 and Section 21. 14 Table 5 shows the F1 measure for the different tree kernels (columns 2, 3, and 4) for conflict resolution over the NSTs of Sections 21 and 23. Several points should be noted.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 4",
"ref_id": null
},
{
"start": 1579,
"end": 1586,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Conflict Resolution Results",
"sec_num": "6.4"
},
{
"text": "(1) The general performance is much lower than that achieved on gold-standard trees, as shown in Table 4 . This datum and the gap of about 6 percentage points between Sections 21 and 23 confirm the impact of parsing accuracy on the subtasks of the SRL process.",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conflict Resolution Results",
"sec_num": "6.4"
},
{
"text": "(2) The ordinal numbering of arguments (AST ord n ) and the role type information (AST m n ) provide tree kernels with more meaningful fragments because they improve the basic model by about 4 percentage points.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution Results",
"sec_num": "6.4"
},
{
"text": "(3) The deeper semantic information generated by the argument labels provides useful clues for selecting correct predicate-argument structures because the AST m n model improves AST ord n performance on both sections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conflict Resolution Results",
"sec_num": "6.4"
},
{
"text": "In these experiments, Section 23 was used for testing our proposition re-ranking. We employed a BC trained on Sections 2 to 8, whereas RM was trained on Sections 2-12. 15 In order to provide a probabilistic interpretation of the SVM output (see Section 4.4.1), we evaluated each classifier distribution parameter based on its output on Section 12. For computational complexity reasons, we decided to consider the five most likely labelings for each node and the five first alternatives output by the Viterbi algorithm (i. e., m = 5 and n = 5). With this set-up, we evaluated the accuracy lower and upper bounds of our system. As our baseline, we consider the accuracy of a re-ranker that always chooses the first alternative output from the Viterbi algorithm, that is, the most likely according to the joint inference model. This accuracy has been measured as 75.91 F1 percentage points; this is practically identical to the 75.89 obtained by applying heuristics to remove overlaps generated by BC. This does not depend on the bad quality of the five top labelings. Indeed, we selected the best alternative produced by the Viterbi algorithm according to the goldstandard score, and we obtained an F1 of 84.76 for n = 5. Thus, the critical aspect resides in the selection of the best annotations, which should be carried out by an automatic re-ranker.",
"cite_spans": [
{
"start": 168,
"end": 170,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposition Re-Ranking Results",
"sec_num": "6.5"
},
{
"text": "Rows 2 and 3 of Table 6 show the number of distinct propositions and alternative annotations output by the Viterbi algorithm for each of the employed sections. In row 3, the number of pair comparisons (i. e., the number of training/test examples for the classifier) is shown.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 23,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Proposition Re-Ranking Results",
"sec_num": "6.5"
},
{
"text": "Using this data, we carried out a complete SRL experiment, which is summarized in Table 7 . First, we compared the accuracy of the AST cm n , PAS, and PAS tl classifiers trained on Section 24 (in row 3, columns 2, 3, and 4) and discovered that the latter structure produces a noticeable F1 improvement, namely, 78.15 vs. 76.47 and 76.77, whereas the accuracy gap between the PAS and the AST cm n classifiers is very small, namely, 76.77 vs. 76.47 percentage points. We selected the most interesting structured feature, that is, the PAS tl , and extended it with the local (to each argument node) standard features commonly employed for the boundary detection and argument classification tasks, as in Haghighi, Toutanova, and Manning (2005) . This richer kernel (PAS tl +STD, column 5) was compared with the PAS tl one. The comparison was performed on two different training sets (rows 2 and 3): In both cases, the introduction of the standard features produced a performance decrement, most notably in the case of Section 12 (i. e., 82.07 vs. 75.06). Our best re-ranking kernel (i. e., the PAS tl ) was then employed in a larger experiment, using both Sections 12 and 24 for testing (row 4), achieving an F 1 measure of 78.44.",
"cite_spans": [
{
"start": 700,
"end": 739,
"text": "Haghighi, Toutanova, and Manning (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 82,
"end": 89,
"text": "Table 7",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Proposition Re-Ranking Results",
"sec_num": "6.5"
},
{
"text": "First, we note that the accuracy of the AST cm n and PAS classifiers is very similar (i. e., 76.77 vs. 76.47). This datum suggests that the intra-argument syntactic information is not critical for the re-ranking task, as including it or not in the learning algorithm does not lead to noticeable differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposition Re-Ranking Results",
"sec_num": "6.5"
},
{
"text": "Second, we note that the PAS tl kernel is much more effective than those based on AST cm n and PAS, which are always outperformed. This may be due to the fact that two AST cm n s (or PASs) always share a large number of substructures, because most alternative annotations tend to be very similar and the small differences among them only affect a small part of the encoding of syntactic information; on the other hand, the small amount of local parsing information encoded in the PAS tl s enables a good generalization process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposition Re-Ranking Results",
"sec_num": "6.5"
},
{
"text": "Finally, the introduction of the standard, local standard features in our re-ranking model caused a performance loss of about 0.5 percentage points on both Sections 12 and 24. This fact, which is in contrast with what has been shown in Haghighi, Toutanova, and Manning (2005) , might be the consequence of the small training sets that we employed. Indeed, local standard features tend to be very sparse and their effectiveness should be evaluated against a larger data set.",
"cite_spans": [
{
"start": 236,
"end": 275,
"text": "Haghighi, Toutanova, and Manning (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposition Re-Ranking Results",
"sec_num": "6.5"
},
{
"text": "The design of automatic systems for the labeling of semantic roles requires the solution of complex problems. Among other issues, feature engineering is made difficult by the structured nature of the data, that is, features should represent information expressed by automatically generated parse trees. This raises two main problems: (1) the modeling of effective features, partially solved for some subtasks in previous works, and (2) the implementation of the software for the extraction of a large number of such features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "A system completely (or largely) based on tree kernels alleviates both problems as (1) kernel functions automatically generate features and (2) only a procedure for the extraction of subtrees is needed. Although some of the manually designed features seem to be superior to those derived with tree kernels, their combination still seems worth applying. Moreover, tree kernels provide a back-off model that greatly outperforms state-of-the-art SRL models when the amount of training data is small.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "To demonstrate these points, we carried out a comprehensive study of the use of tree kernels for semantic role labeling by designing several canonical mappings. These correspond to the application of innovative tree kernel engineering techniques tailored to different stages of an SRL process. The experiments with these methods and SVMs on the data set provided by the CoNLL 2005 shared task (Carreras and M\u00e0rquez 2005) show that, first, tree kernels are a valid support to manually designed features for many stages of the SRL process. We have shown that our improved tree kernel (i.e., the one based on AST m 1 ) highly improves accuracy in both boundary detection and the SRL task when the amount of training data is small (e.g., 5 absolute percentage points over a state-ofthe-art boundary classifier). In the case of argument classification the improvement is less evident but still consistent, at about 3%.",
"cite_spans": [
{
"start": 393,
"end": 420,
"text": "(Carreras and M\u00e0rquez 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "Second, appropriately engineered tree kernels can replace standard features in many SRL subtasks. For example, in complex tasks such as conflict resolution or reranking, they provide an easy way to build new features that would be difficult to describe explicitly. More generally, tree kernels can be used to combine different sources of information for the design of complex learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "Third, in the specific re-ranking task, our structured features show a noticeable improvement over our baseline (i. e., about 2.5 percentage points). This could be increased considering that we have not been able to fully exploit the potential of our re-ranking model, whose theoretical upper bound is 6 percentage points away. Still, although we only used a small fraction of the available training data (i. e., only 2 sections out of 22 were used to train the re-ranker) our system's accuracy is in line with state-of-the-art systems (Carreras and M\u00e0rquez 2005) that do not employ tree kernels.",
"cite_spans": [
{
"start": 536,
"end": 563,
"text": "(Carreras and M\u00e0rquez 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "Finally, although the study carried out in this article is quite comprehensive, several issues should be considered in more depth in the future: (a) The tree feature extraction functions ST, SST, and PT should be studied in combination with the proposed canonical mappings. For example, as the PT kernel seems more suitable for the processing of dependency information, it would be interesting to apply it in an architecture using these kinds of syntactic parse trees (e. g., Chen and Rambow 2003) . In particular, the combination of different extraction functions on different syntactic views may lead to very good results.",
"cite_spans": [
{
"start": 476,
"end": 497,
"text": "Chen and Rambow 2003)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "(b) Once the set of the most promising kernels is established, it would be interesting to use all the available CoNLL 2005 data. This would allow us to estimate the potential of our approach by comparing it with previous work on a fairer basis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "(c) The use of fast tree kernels (Moschitti 2006a) along with the proposed tree representations makes the learning and classification much faster, so that the overall running time is comparable with polynomial kernels. However, when used with SVMs their running time on very large data sets (e. g., millions of instances) becomes prohibitive. Exploiting tree kernel-derived features in a more efficient way (e. g., by selecting the most relevant fragments and using them in an explicit space) is thus an interesting line of future research. Note that such fragments would be the product of a reverse engineering process useful to derive linguistic insights on semantic role theory.",
"cite_spans": [
{
"start": 33,
"end": 50,
"text": "(Moschitti 2006a)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "(d) As CoNLL 2005 (Punyakanok et al. 2005 has shown that multiple parse trees provide the most important boost to the accuracy of SRL systems, we would like to extend our model to work with multiple syntactic views of each input sentence.",
"cite_spans": [
{
"start": 7,
"end": 17,
"text": "CoNLL 2005",
"ref_id": null
},
{
"start": 18,
"end": 41,
"text": "(Punyakanok et al. 2005",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussions and Conclusions",
"sec_num": "7."
},
{
"text": "In PropBank notation, Arg0 and Arg1 represent the logical subject and the logical object of the target verbal predicate, respectively. C-V represents the particle of a phrasal-verb predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The AST n of a predicate p and its argument nodes {a 1 , . . . , a n }, will also be referred to as p {a 1 ,..., a n } .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We actually implemented the pseudo-code proposed inLin, Lin, and Weng (2003) which, with respect to Platt's original formulation, is theoretically demonstrated to converge and avoids some numerical difficulties that may arise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is true at least for core arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We assume that predicate-argument structures cannot be composed by more than six arguments, which is generally true.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This was the most expensive process in terms of training time, requiring more than one week.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The adopted model is the same as used inMoschitti et al. (2005b), which is the most accurate among the systems that use a single learning model, a single source of syntactic information, and no accurate inference mechanism. If tree kernels improved this basic model they would likely improve the accuracy of more complex systems as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that using all training data, all the models reach lower F1s than the respective values shown inTable 3. This happens because the data for training the role multiclassifier is restricted to the first million instances, in other words, about 64,000 out of the total 253,129 arguments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As Section 21 of the Penn TreeBank is part of the Charniak parser training set, the performance derived on its parse trees represents an upper bound for our classifiers, i. e., the results using a nearly ideal syntactic parser and role multiclassifier. 15 In these experiments we did not use tree kernels for BC and RM as we wanted to measure the impact of tree kernels only on the re-ranking stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This article is the result of research on kernel methods for Semantic Role Labeling which started in 2003 and went under the review of several program committees of different scientific communities, from which it highly benefitted. In this respect, we would like to thank the reviewers of the SRL special issue as well as those of the ACL, CoNLL, EACL, ECAI, ECML, HLT-NAACL, and ICML conferences. We are indebted to Silvia Quarteroni for her help in reviewing the English formulation of an earlier version of this article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Berkeley FrameNet project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "COLING-ACL '98: Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "86--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In COLING-ACL '98: Proceedings of the Conference, pages 86-90, Montr\u00e9al, Canada.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Introduction to the CoNLL-2004 shared task: Semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004 Workshop: Eighth Conference on Computational Natural Language Learning (CoNLL-2004)",
"volume": "",
"issue": "",
"pages": "89--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carreras, Xavier and Llu\u00eds M\u00e0rquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In HLT-NAACL 2004 Workshop: Eighth Conference on Computational Natural Language Learning (CoNLL-2004), pages 89-97, Boston, MA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to the CoNLL-2005 shared task: Semantic role labeling",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "152--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carreras, Xavier and Llu\u00eds M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 152-164, Ann Arbor, MI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Use of deep linguistic features for the recognition and labeling of semantic arguments",
"authors": [
{
"first": "John",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "41--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, John and Owen Rambow. 2003. Use of deep linguistic features for the recognition and labeling of semantic arguments. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 41-48, Sapporo, Japan.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Nigel",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "02",
"issue": "",
"pages": "263--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In ACL02, pages 263-270, Philadelphia, PA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dependency tree kernels for relation extraction",
"authors": [
{
"first": "Aron",
"middle": [],
"last": "Culotta",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Sorensen",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL04",
"volume": "",
"issue": "",
"pages": "423--429",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Culotta, Aron and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In ACL04, pages 423-429, Barcelona, Spain.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Kernel methods for relational learning",
"authors": [
{
"first": "Chad",
"middle": [],
"last": "Cumby",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ICML 2003",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cumby, Chad and Dan Roth. 2003. Kernel methods for relational learning. In Proceedings of ICML 2003, pages 107-114, Washington, DC.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The case for case",
"authors": [
{
"first": "Charles",
"middle": [
"J"
],
"last": "Fillmore",
"suffix": ""
}
],
"year": 1968,
"venue": "Universals in Linguistic Theory",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fillmore, Charles J. 1968. The case for case. In Emmon Bach and Robert T. Harms, editors, Universals in Linguistic Theory.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic labeling of semantic roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, Daniel and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3): 245-288.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A joint model for semantic role labeling",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "173--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haghighi, Aria, Kristina Toutanova, and Christopher Manning. 2005. A joint model for semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 173-176, Ann Arbor, MI.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic Structures, Current Studies in Linguistics Series",
"authors": [
{
"first": "Ray",
"middle": [],
"last": "Jackendoff",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jackendoff, Ray. 1990. Semantic Structures, Current Studies in Linguistics Series. The MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Making large-scale SVM learning practical",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Kernel Methods-Support Vector Learning",
"volume": "",
"issue": "",
"pages": "169--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, Thorsten. 1999. Making large-scale SVM learning practical. In B. Sch\u00f6lkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods-Support Vector Learning. MIT Press, Cambridge, MA, pages 169-184.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Speeding up training with tree kernels for node relation labeling",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "'",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EMNLP 2005",
"volume": "",
"issue": "",
"pages": "137--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazama, Jun'ichi and Kentaro Torisawa. 2005. Speeding up training with tree kernels for node relation labeling. In Proceedings of EMNLP 2005, pages 137-144, Toronto, Canada.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fast methods for kernel-based text analysis",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "24--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kudo, Taku and Yuji Matsumoto. 2003. Fast methods for kernel-based text analysis. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 24-31, Sapporo, Japan.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "English Verb Classes and Alternations",
"authors": [
{
"first": "Beth",
"middle": [],
"last": "Levin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levin, Beth. 1993. English Verb Classes and Alternations. The University of Chicago Press, Chicago, IL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A note on Platt's probabilistic outputs for support vector machines",
"authors": [
{
"first": "H.-T",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "C.-J",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Weng",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, H.-T., C.-J. Lin, and R. C. Weng. 2003. A note on Platt's probabilistic outputs for support vector machines. Technical report, National Taiwan University.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Senseval-3 task: Automatic labeling of semantic roles",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Litkowski",
"suffix": ""
}
],
"year": 2004,
"venue": "Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text",
"volume": "",
"issue": "",
"pages": "9--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Litkowski, Kenneth. 2004. Senseval-3 task: Automatic labeling of semantic roles. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 9-12, Barcelona, Spain.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building a large annotated corpus of English: The Penn treebank",
"authors": [
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus, M. P., B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics, 19:313-330.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A study on convolution kernels for shallow semantic parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42 th Conference on Association for Computational Linguistic (ACL-2004)",
"volume": "",
"issue": "",
"pages": "335--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, Alessandro. 2004. A study on convolution kernels for shallow semantic parsing. In Proceedings of the 42 th Conference on Association for Computational Linguistic (ACL-2004), pages 335-342, Barcelona, Spain.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Efficient convolution kernels for dependency and constituent syntactic trees",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of The 17th European Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "318--329",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, Alessandro. 2006a. Efficient convolution kernels for dependency and constituent syntactic trees. In Proceedings of The 17th European Conference on Machine Learning, pages 318-329, Berlin, Germany.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Engineering of syntactic features for shallow semantic parsing",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Bonaventura",
"middle": [],
"last": "Coppola",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2006)",
"volume": "",
"issue": "",
"pages": "48--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, Alessandro. 2006b. Making tree kernels practical for natural language learning. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2006), pages 113-120, Trento, Italy. Moschitti, Alessandro, Bonaventura Coppola, Daniele Pighin, and Roberto Basili. 2005a. Engineering of syntactic features for shallow semantic parsing. In Proceedings of the ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing, pages 48-56, Ann Arbor, MI.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Hierarchical semantic role labeling",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Ana-Maria",
"middle": [],
"last": "Giuglea",
"suffix": ""
},
{
"first": "Bonaventura",
"middle": [],
"last": "Coppola",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "201--204",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, Alessandro, Ana-Maria Giuglea, Bonaventura Coppola, and Roberto Basili. 2005b. Hierarchical semantic role labeling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 201-204, Ann Arbor, MI.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Tree kernel engineering in semantic role labeling systems",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
},
{
"first": "Daniele",
"middle": [],
"last": "Pighin",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Basili",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Learning Structured Information in Natural Language Applications",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moschitti, Alessandro, Daniele Pighin, and Roberto Basili. 2006. Tree kernel engineering in semantic role labeling systems. In Proceedings of the Workshop on Learning Structured Information in Natural Language Applications, EACL 2006, pages 49-56, Trento, Italy.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The Proposition Bank: An annotated corpus of semantic roles",
"authors": [
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "71--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Palmer, Martha, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1): 71-106.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Probabilistic outputs for support vector machines and comparison to regularized likelihood methods",
"authors": [
{
"first": "J",
"middle": [],
"last": "Platt",
"suffix": ""
}
],
"year": 1999,
"venue": "Advances in Large Margin Classifiers",
"volume": "",
"issue": "",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Platt, J. 1999. Probabilistic outputs for support vector machines and comparison to regularized likelihood methods. In A. J. Smola, P. Bartlett, B. Schoelkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press, Cambridge, MA, pages 61-74.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Support vector learning for semantic argument classification",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "Valerie",
"middle": [],
"last": "Krugler",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2005,
"venue": "Machine Learning",
"volume": "60",
"issue": "",
"pages": "11--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradhan, Sameer, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005a. Support vector learning for semantic argument classification. Machine Learning, 60(1-3):11-39.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Semantic role chunking combining complementary syntactic views",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [
"H"
],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "217--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradhan, Sameer, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005b. Semantic role chunking combining complementary syntactic views. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 217-220, Ann Arbor, MI.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Generalized inference with multiple semantic role labeling systems",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "181--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradhan, Sameer, Wayne Ward, Kadri Hacioglu, James Martin, and Daniel Jurafsky. 2005c. Semantic role labeling using different syntactic views. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 581-588, Ann Arbor, MI. Pradhan, Sameer S., Wayne H. Ward, Kadri Hacioglu, James H. Martin, and Dan Jurafsky. 2004. Shallow semantic parsing using support vector machines. In HLT-NAACL 2004: Main Proceedings, pages 233-240, Boston, MA. Punyakanok, Vasin, Peter Koomen, Dan Roth, and Wen-tau Yih. 2005. Generalized inference with multiple semantic role labeling systems. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 181-184, Ann Arbor, MI.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Kernel Methods for Pattern Analysis",
"authors": [
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shawe-Taylor, John and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A generative model for semantic role labeling",
"authors": [
{
"first": "Libin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Sarkar",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 2003,
"venue": "Empirical Methods for Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "397--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen, Libin, Anoop Sarkar, and Aravind K. Joshi. 2003. Using LTAG based features in parse reranking. In Empirical Methods for Natural Language Processing (EMNLP), pages 89-96, Sapporo, Japan. Thompson, Cynthia A., Roger Levy, and Christopher Manning. 2003. A generative model for semantic role labeling. In 14th European Conference on Machine Learning, pages 397-408, Cavtat, Croatia.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Applying spelling error correction techniques for improving semantic role labelling",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Canisius",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "van den Bosch",
"suffix": ""
},
{
"first": "Toine",
"middle": [],
"last": "Bogers",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)",
"volume": "",
"issue": "",
"pages": "229--232",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tjong Kim Sang, Erik, Sander Canisius, Antal van den Bosch, and Toine Bogers. 2005. Applying spelling error correction techniques for improving semantic role labelling. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 229-232, Ann Arbor, MI.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Joint learning improves semantic role labeling",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "589--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, Kristina, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 589-596, Ann Arbor, MI.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "The leaf path projection view of parse trees: Exploring string kernels for HPSG parse selection",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Penka",
"middle": [],
"last": "Markova",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "166--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toutanova, Kristina, Penka Markova, and Christopher Manning. 2004. The leaf path projection view of parse trees: Exploring string kernels for HPSG parse selection. In Proceedings of EMNLP 2004, pages 166-173, Barcelona, Spain.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Statistical Learning Theory",
"authors": [
{
"first": "Vladimir",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vapnik, Vladimir N. 1998. Statistical Learning Theory. John Wiley and Sons, New York.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Fast kernels on strings and trees",
"authors": [
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "A",
"middle": [
"J"
],
"last": "Smola",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "569--576",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vishwanathan, S. V. N. and A. J. Smola. 2002. Fast kernels on strings and trees. In Proceedings of Neural Information Processing Systems, pages 569-576, Vancouver, British Columbia.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Calibrating features for semantic role labeling",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP 2004",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, Nianwen and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP 2004, pages 88-94, Barcelona, Spain.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Kernel methods for relation extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zelenko",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Aone",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Richardella",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "1083--1106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zelenko, D., C. Aone, and A. Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3:1083-1106.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring syntactic features for relation extraction using a convolution tree kernel",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL, Main Conference",
"volume": "",
"issue": "",
"pages": "288--295",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Min, Jie Zhang, and Jian Su. 2006. Exploring syntactic features for relation extraction using a convolution tree kernel. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 288-295, New York, NY.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "[VP [V]] and [VP [NP]] are valid PTs. It is worth noting that PTs consider the position of the children as, for example, [A [B][C][D]] and [A [D][C][B]] only share single children, i.e., [A [B]], [A [C]], and [A [D]].",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Example of (a) ST, (b) SST, and (c) PT fragments.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "extract the feature representation, \u03c6(p, a), (e. g., attribute-values or tree fragments [see Section 3.1]); if the leaves of the subtree rooted in a correspond to all and only the words of one argument of p (i. e., a exactly covers an argument), add \u03c6(p, a) in E + (positive examples), otherwise add it in E \u2212 (negative examples).",
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"text": "Positive (framed) and negative (unframed) examples of candidate argument nodes for the propositions (a) [ Arg0 John] took [ Arg1 the book] and read its title and (b) [ Arg0 John] took the book and read [ Arg1 its title].",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "(a) A sentence parse tree, the correct AST n s associated with two different predicates (b,c), and (d) a correct AST 1 relative to the argument Arg1 its title of the predicate read.",
"uris": null
},
"FIGREF5": {
"type_str": "figure",
"num": null,
"text": "Parse tree of the example proposition [ Arg0 Paul] delivers [ Arg1 a talk in formal style].",
"uris": null
},
"FIGREF6": {
"type_str": "figure",
"num": null,
"text": "to the argument Arg1 a talk in formal style of the predicate delivers of the example parse tree shown inFigure 4.",
"uris": null
},
"FIGREF7": {
"type_str": "figure",
"num": null,
"text": "(a) AST 1 s and (b) AST m 1 s extracted for the same target argument with their respective (c,b) common fragment spaces.",
"uris": null
},
"FIGREF8": {
"type_str": "figure",
"num": null,
"text": "An overlap situation (a) and the candidate solutions resulting from the employment of the different marking strategies.",
"uris": null
},
"FIGREF9": {
"type_str": "figure",
"num": null,
"text": "tures. For example, for the first NST the fragments [NP-1 [NP PP]], [NP [DT NN]], and [PP [IN NP]] are generated. They no longer match with the fragments of the second NST [NP-0 [NP PP]], [NP-1 [DT NN]], and [PP-2 [IN NP]].",
"uris": null
},
"FIGREF10": {
"type_str": "figure",
"num": null,
"text": "Different representations of the same proposition.",
"uris": null
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td>Feature Name</td><td>Description</td></tr><tr><td>Predicate</td><td>Lemmatization of the predicate word</td></tr><tr><td>Path</td><td>Syntactic path linking the predicate and an argument,</td></tr><tr><td/><td>e. g., NN\u2191NP\u2191VP\u2193VBX</td></tr><tr><td>Partial path</td><td>Path feature limited to the branching of the argument</td></tr><tr><td>No-direction path</td><td>Like Path, but without traversal directions</td></tr><tr><td>Phrase type</td><td>Syntactic type of the argument node</td></tr><tr><td>Position</td><td>Relative position of the argument with respect to the predicate</td></tr><tr><td>Voice</td><td>Voice of the predicate, i. e., active or passive</td></tr><tr><td>Head word</td><td>Syntactic head of the argument phrase</td></tr><tr><td>Verb subcategorization</td><td>Production rule expanding the predicate parent node</td></tr><tr><td>Named entities</td><td>Classes of named entities that appear in the argument node</td></tr><tr><td>Head word POS</td><td>POS tag of the argument node head word (less sparse than</td></tr><tr><td/><td>Head word)</td></tr><tr><td>Verb clustering</td><td>Type of verb \u2192 direct object relation</td></tr><tr><td>Governing Category</td><td>Whether the candidate argument is the verb subject or object</td></tr><tr><td>Syntactic Frame</td><td>Position of the NPs surrounding the predicate</td></tr><tr><td>Verb sense</td><td>Sense information for polysemous verbs</td></tr><tr><td>Head word of PP</td><td>Enriched POS of prepositional argument nodes (e. g., PP-for, PP-in)</td></tr><tr><td>First and last word/POS</td><td>First and last words and POS tags of candidate argument phrases</td></tr><tr><td>Ordinal position</td><td>Absolute offset of a candidate argument within a proposition</td></tr><tr><td>Constituent tree distance</td><td>Distance from the predicate with respect to the parse tree</td></tr><tr><td>Constituent features</td><td>Description of the constituents surrounding the argument node</td></tr><tr><td>Temporal Cue Words</td><td>Temporal markers which are very distinctive of some roles</td></tr></table>",
"text": "Standard linguistic features employed by most SRL systems.",
"type_str": "table"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"text": "They include: the Phrase Type, Predicate Word, Head Word, Governing Category, Position, and Voice defined in Gildea and Jurafsky (2002); the Partial Path, No Direction Path, Constituent Tree Distance, Head Word POS, First and Last Word/POS, Verb Subcategorization, and Head Word of the Noun Phrase in the Prepositional Phrase proposed in Pradhan, Hacioglu, Krugler et al. (2005); and",
"type_str": "table"
},
"TABREF2": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"3\">Sec. Arguments Unrecoverable</td></tr><tr><td>2</td><td>198,373</td><td>454 (0.23%)</td></tr><tr><td>3</td><td>147,193</td><td>347 (0.24%)</td></tr><tr><td>24</td><td>139,454</td><td>731 (0.52%)</td></tr></table>",
"text": "Number of arguments (Arguments) and of unrecoverable arguments (Unrecoverable) due to parse tree errors in Sections 2, 3, and 24 of the Penn TreeBank/PropBank.",
"type_str": "table"
},
"TABREF3": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"7\">Comparison between different models on Boundary Detection and the complete Semantic Role</td></tr><tr><td colspan=\"7\">Labeling tasks. The training set is constituted by the first 1 million instances from Sections 02-06</td></tr><tr><td colspan=\"7\">for the boundary classifier and all arguments from Sections 02-21 for the role multiclassifier</td></tr><tr><td colspan=\"7\">(253,129 instances). The performance is measured against Section 24 (149,140 instances).</td></tr><tr><td/><td colspan=\"3\">Boundary Detection</td><td colspan=\"3\">Semantic Role Labeling</td></tr><tr><td>K e r n e l s</td><td>P</td><td>R</td><td>F 1</td><td>P</td><td>R</td><td>F 1</td></tr><tr><td>AST 1 AST m 1</td><td colspan=\"3\">75.75% 71.68% 73.66 77.32% 74.80% 76.04</td><td colspan=\"3\">64.71% 61.71% 63.17 66.58% 64.87% 65.71</td></tr><tr><td>Poly</td><td colspan=\"3\">82.18% 79.19% 80.66</td><td colspan=\"3\">75.86% 72.60% 73.81</td></tr><tr><td>Poly+AST 1 Poly+AST m 1</td><td colspan=\"3\">81.74% 80.71% 81.22 81.64% 80.73% 81.18</td><td colspan=\"3\">74.23% 73.62% 73.92 74.36% 73.87% 74.11</td></tr></table>",
"text": "",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table><tr><td>Target section</td><td>AST n</td><td>AST ord n</td><td>AST m n</td></tr><tr><td>21</td><td>73.7</td><td>77.3</td><td>78.7</td></tr><tr><td>23</td><td>68.9</td><td>71.2</td><td>72.1</td></tr></table>",
"text": "SRL accuracy on different PropBank target sections in terms of F1 measure of the different structured features employed for conflict resolution.",
"type_str": "table"
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Section 12 Section 23 Section 24</td></tr><tr><td>Propositions</td><td>4,899</td><td>5,267</td><td>3,248</td></tr><tr><td>Alternatives</td><td>24,494</td><td>26,325</td><td>16,240</td></tr><tr><td>Comparisons</td><td>74,650</td><td>81,162</td><td>48,582</td></tr></table>",
"text": "Number of propositions, alternative annotations (as output by the Viterbi algorithm), and pair comparisons (i. e., re-ranker input examples) for the PropBank sections used for the experiments.",
"type_str": "table"
},
"TABREF7": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">Training Section AST cm n</td><td>PAS</td><td colspan=\"2\">PAS tl PAS tl +STD</td></tr><tr><td>12</td><td>-</td><td>-</td><td>78.27</td><td>77.61</td></tr><tr><td>24</td><td>76.47</td><td colspan=\"2\">76.77 78.15</td><td>77.77</td></tr><tr><td>12+24</td><td>-</td><td>-</td><td>78.44</td><td>-</td></tr></table>",
"text": "Summary of the proposition re-ranking experiments with different training sets.",
"type_str": "table"
}
}
}
}