| { |
| "paper_id": "D08-1008", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:30:39.153714Z" |
| }, |
| "title": "Dependency-based Semantic Role Labeling of PropBank", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Lund University", |
| "location": { |
| "country": "Sweden" |
| } |
| }, |
| "email": "richard@cs.lth.se" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Lund University", |
| "location": { |
| "country": "Sweden" |
| } |
| }, |
| "email": "pierre@cs.lth.se" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a PropBank semantic role labeling system for English that is integrated with a dependency parser. To tackle the problem of joint syntactic-semantic analysis, the system relies on a syntactic and a semantic subcomponent. The syntactic model is a projective parser using pseudo-projective transformations, and the semantic model uses global inference mechanisms on top of a pipeline of classifiers. The complete syntactic-semantic output is selected from a candidate pool generated by the subsystems. We evaluate the system on the CoNLL-2005 test sets using segment-based and dependency-based metrics. Using the segment-based CoNLL-2005 metric, our system achieves a near state-of-the-art F1 figure of 77.97 on the WSJ+Brown test set, or 78.84 if punctuation is treated consistently. Using a dependency-based metric, the F1 figure of our system is 84.29 on the test set from CoNLL-2008. Our system is the first dependency-based semantic role labeler for PropBank that rivals constituent-based systems in terms of performance.", |
| "pdf_parse": { |
| "paper_id": "D08-1008", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a PropBank semantic role labeling system for English that is integrated with a dependency parser. To tackle the problem of joint syntactic-semantic analysis, the system relies on a syntactic and a semantic subcomponent. The syntactic model is a projective parser using pseudo-projective transformations, and the semantic model uses global inference mechanisms on top of a pipeline of classifiers. The complete syntactic-semantic output is selected from a candidate pool generated by the subsystems. We evaluate the system on the CoNLL-2005 test sets using segment-based and dependency-based metrics. Using the segment-based CoNLL-2005 metric, our system achieves a near state-of-the-art F1 figure of 77.97 on the WSJ+Brown test set, or 78.84 if punctuation is treated consistently. Using a dependency-based metric, the F1 figure of our system is 84.29 on the test set from CoNLL-2008. Our system is the first dependency-based semantic role labeler for PropBank that rivals constituent-based systems in terms of performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Automatic semantic role labeling (SRL), the task of determining who does what to whom, is a useful intermediate step in NLP applications performing semantic analysis. It has obvious applications for template-filling tasks such as information extraction and question answering (Surdeanu et al., 2003; Moschitti et al., 2003) . It has also been used in prototypes of NLP systems that carry out complex reasoning, such as entailment recognition systems Hickl et al., 2006) . In addition, role-semantic features have recently been used to extend vector-space representations in automatic document categorization (Persson et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 276, |
| "end": 299, |
| "text": "(Surdeanu et al., 2003;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 300, |
| "end": 323, |
| "text": "Moschitti et al., 2003)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 450, |
| "end": 469, |
| "text": "Hickl et al., 2006)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 608, |
| "end": 630, |
| "text": "(Persson et al., 2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The NLP community has recently devoted much attention to developing accurate and robust methods for performing role-semantic analysis automatically, and a number of multi-system evaluations have been carried out (Litkowski, 2004; Carreras and M\u00e0rquez, 2005; Baker et al., 2007; Surdeanu et al., 2008) . Following the seminal work of Gildea and Jurafsky (2002) , there have been many extensions in machine learning models, feature engineering (Xue and Palmer, 2004) , and inference procedures (Toutanova et al., 2005; Surdeanu et al., 2007; Punyakanok et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 229, |
| "text": "(Litkowski, 2004;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 230, |
| "end": 257, |
| "text": "Carreras and M\u00e0rquez, 2005;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 258, |
| "end": 277, |
| "text": "Baker et al., 2007;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 278, |
| "end": 300, |
| "text": "Surdeanu et al., 2008)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 333, |
| "end": 359, |
| "text": "Gildea and Jurafsky (2002)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 442, |
| "end": 464, |
| "text": "(Xue and Palmer, 2004)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 492, |
| "end": 516, |
| "text": "(Toutanova et al., 2005;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 517, |
| "end": 539, |
| "text": "Surdeanu et al., 2007;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 540, |
| "end": 564, |
| "text": "Punyakanok et al., 2008)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "With very few exceptions (e.g. Collobert and Weston, 2007) , published SRL methods have used some sort of syntactic structure as input (Gildea and Palmer, 2002; Punyakanok et al., 2008) . Most systems for automatic role-semantic analysis have used constituent syntax as in the Penn Treebank (Marcus et al., 1993) , although there has also been much research on the use of shallow syntax (Carreras and M\u00e0rquez, 2004) in SRL.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 58, |
| "text": "Collobert and Weston, 2007)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 135, |
| "end": 160, |
| "text": "(Gildea and Palmer, 2002;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 161, |
| "end": 185, |
| "text": "Punyakanok et al., 2008)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 291, |
| "end": 312, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 387, |
| "end": 415, |
| "text": "(Carreras and M\u00e0rquez, 2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In comparison, dependency syntax has received relatively little attention for the SRL task, despite the fact that dependency structures offer a more transparent encoding of predicate-argument relations. Furthermore, the few systems based on dependencies that have been presented have generally performed much worse than their constituent-based counterparts. For instance, Pradhan et al. (2005) reported that a system using a rule-based dependency parser achieved much inferior results compared to a system using a state-of-the-art statistical constituent parser: The F-measure on WSJ section 23 dropped from 78.8 to 47.2, or from 83.7 to 61.7 when using a head-based evaluation. In a similar vein, Swanson and Gordon (2006) reported that parse tree path features extracted from a rule-based dependency parser are much less reliable than those from a modern constituent parser.", |
| "cite_spans": [ |
| { |
| "start": 372, |
| "end": 393, |
| "text": "Pradhan et al. (2005)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 698, |
| "end": 723, |
| "text": "Swanson and Gordon (2006)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In contrast, we recently carried out a detailed comparison (Johansson and Nugues, 2008b) between constituent-based and dependency-based SRL systems for FrameNet, in which the results of the two types of systems where almost equivalent when using modern statistical dependency parsers. We suggested that the previous lack of progress in dependency-based SRL was due to low parsing accuracy. The experiments showed that the grammatical function information available in dependency representations results in a steeper learning curve when training semantic role classifiers, and it also seemed that the dependency-based role classifiers were more resilient to lexical problems caused by change of domain.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 88, |
| "text": "(Johansson and Nugues, 2008b)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The recent CoNLL-2008 Shared Task (Surdeanu et al., 2008 was an attempt to show that SRL can be accurately carried out using only dependency syntax. However, these results are not easy to compare to previously published results since the task definitions and evaluation metrics were different. This paper compares the best-performing system in the CoNLL-2008 Shared Task (Johansson and Nugues, 2008a) with previously published constituent-based SRL systems. The system carries out joint dependency-syntactic and semantic analysis. We first describe its implementation in Section 2, and then compare the system with the best system in the CoNLL-2005 Shared Task in Section 3. Since the outputs of the two systems are different, we carry out two types of evaluations: first by using the traditional segment-based metric used in the CoNLL-2005 Shared Task, and then by using the dependency-based metric from the CoNLL-2008 Shared Task. Both evaluations require a transformation of the output of one system: For the segmentbased metric, we have to convert the dependency-based output to segments; and for the dependencybased metric, a head-finding procedure is needed to select heads in segments. For the first time for a system using only dependency syntax, we report results for PropBank-based semantic role labeling of English that are close to the state of the art, and for some measures even superior.", |
| "cite_spans": [ |
| { |
| "start": 11, |
| "end": 21, |
| "text": "CoNLL-2008", |
| "ref_id": null |
| }, |
| { |
| "start": 22, |
| "end": 56, |
| "text": "Shared Task (Surdeanu et al., 2008", |
| "ref_id": null |
| }, |
| { |
| "start": 348, |
| "end": 385, |
| "text": "CoNLL-2008 Shared Task (Johansson and", |
| "ref_id": null |
| }, |
| { |
| "start": 386, |
| "end": 400, |
| "text": "Nugues, 2008a)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The training corpus that we used is the dependencyannotated Penn Treebank from the 2008 CoNLL Shared Task on joint syntactic-semantic analysis (Surdeanu et al., 2008) . Figure 1 shows a sentence annotated in this framework. The CoNLL task involved semantic analysis of predicates from Prop-Bank (for verbs, such as plan) and NomBank (for nouns, such as investment); in this paper, we report the performance on PropBank predicates only since we compare our system with previously published PropBank-based SRL systems. We model the problem of constructing a syntactic and a semantic graph as a task to be solved jointly. Intuitively, syntax and semantics are highly interdependent and semantic interpretation should help syntactic disambiguation, and joint syntacticsemantic analysis has a long tradition in deeplinguistic formalisms. Using a discriminative model, we thus formulate the problem of finding a syntactic tree\u0177 syn and a semantic graph\u0177 sem for a sentence x as maximizing a function F joint that scores the complete syntactic-semantic structure:", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 166, |
| "text": "(Surdeanu et al., 2008)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 169, |
| "end": 177, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Dependency Analysis", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u0177 syn ,\u0177 sem = arg max ysyn,ysem F joint (x, y syn , y sem )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Dependency Analysis", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The dependencies in the feature representation used to compute F joint determine the tractability of the search procedure needed to perform the maximization. To be able to use complex syntactic features such as paths when predicting semantic structures, exact search is clearly intractable. This is true even with simpler feature representations -the problem is a special case of multi-headed dependency analysis, which is NP-hard even if the number of heads is bounded (Chickering et al., 1994) . This means that we must resort to a simplification such as an incremental method or a reranking approach. We chose the latter option and thus created syntactic and semantic submodels. The joint syntactic-semantic prediction is selected from a small list of candidates generated by the respective subsystems. Figure 2 shows the architecture.", |
| "cite_spans": [ |
| { |
| "start": 470, |
| "end": 495, |
| "text": "(Chickering et al., 1994)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 806, |
| "end": 814, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Dependency Analysis", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We model the process of syntactic parsing of a sentence x as finding the parse tree\u0177 syn = arg max ysyn F syn (x, y syn ) that maximizes a scoring function F syn . The learning problem consists of fitting this function so that the cost of the predictions is as low as possible according to a cost function \u03c1 syn . In this work, we consider linear scoring functions of the following form:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "F syn (x, y syn ) = \u03a8 syn (x, y syn ) \u2022 w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where \u03a8 syn (x, y syn ) is a numeric feature representation of the pair (x, y syn ) and w a vector of feature weights. We defined the syntactic cost \u03c1 syn as the sum of link costs, where the link cost was 0 for a correct dependency link with a correct label, 0.5 for a correct link with an incorrect label, and 1 for an incorrect link.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "A widely used discriminative framework for fitting the weight vector is the max-margin model (Taskar et al., 2003) , which is a generalization of the well-known support vector machines to general cost-based prediction problems. Since the large number of training examples and features in our case make an exact solution of the max-margin optimization problem impractical, we used the online passive-aggressive algorithm (Crammer et al., 2006) , which approximates the optimization process in two ways:", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 114, |
| "text": "(Taskar et al., 2003)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 420, |
| "end": 442, |
| "text": "(Crammer et al., 2006)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 The weight vector w is updated incrementally, one example at a time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 For each example, only the most violated constraint is considered.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The algorithm is a margin-based variant of the perceptron (preliminary experiments show that it outperforms the ordinary perceptron on this task). Algorithm 1 shows pseudocode for the algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Algorithm 1 The Online PA Algorithm", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "input Training set T = {(x t , y t )} T t=1 Number of iterations N Regularization parameter C Initialize w to zeros repeat N times for (x t , y t ) in T let\u1ef9 t = arg max y F (x t , y) + \u03c1(y t , y) let \u03c4 t = min C, F (xt,\u1ef9t)\u2212F (xt,yt)+\u03c1(yt,\u1ef9t) \u03a8(x,yt)\u2212\u03a8(x,\u1ef9t) 2 w \u2190 w + \u03c4 t (\u03a8(x, y t ) \u2212 \u03a8(x,\u1ef9 t )) return w average", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We used a C value of 0.01, and the number of iterations was 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic Submodel", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The feature function \u03a8 syn is a factored representation, meaning that we compute the score of the complete parse tree by summing the scores of its parts, referred to as factors:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features and Search", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "\u03a8(x, y) \u2022 w = f \u2208y \u03c8(x, f ) \u2022 w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features and Search", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "We used a second-order factorization (McDonald and Pereira, 2006; Carreras, 2007) , meaning that the factors are subtrees consisting of four links: the governor-dependent link, its sibling link, and the leftmost and rightmost dependent links of the dependent.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 65, |
| "text": "(McDonald and Pereira, 2006;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 66, |
| "end": 81, |
| "text": "Carreras, 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features and Search", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "This factorization allows us to express useful features, but also forces us to adopt the expensive search procedure by Carreras (2007) , which extends Eisner's span-based dynamic programming algorithm (1996) to allow second-order feature dependencies. This algorithm has a time complexity of O(n 4 ), where n is the number of words in the sentence. The search was constrained to disallow multiple root links.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 134, |
| "text": "Carreras (2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features and Search", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "To evaluate the arg max in Algorithm 1 during training, we need to handle the cost function \u03c1 syn in addition to the factor scores. Since the cost function \u03c1 syn is based on the cost of single links, this can easily be integrated into the factor-based search.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features and Search", |
| "sec_num": "2.1.1" |
| }, |
| { |
| "text": "Although only 0.4% of the links in the training set are nonprojective, 7.6% of the sentences contain at least one nonprojective link. Many of these links represent long-range dependencies -such as wh-movement -that are valuable for semantic processing. Nonprojectivity cannot be handled by span-based dynamic programming algorithms. For parsers that consider features of single links only, the Chu-Liu/Edmonds algorithm can be used instead. However, this algorithm cannot be generalized to the second-order setting - McDonald and Pereira (2006) proved that this problem is NP-hard, and described an approximate greedy search algorithm.", |
| "cite_spans": [ |
| { |
| "start": 517, |
| "end": 544, |
| "text": "McDonald and Pereira (2006)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Nonprojective Links", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "To simplify implementation, we instead opted for the pseudo-projective approach (Nivre and Nilsson, 2005) , in which nonprojective links are lifted upwards in the tree to achieve projectivity, and special trace labels are used to enable recovery of the nonprojective links at parse time. The use of trace labels in the pseudo-projective transformation leads to a proliferation of edge label types: from 69 to 234 in the training set, many of which occur only once. Since the running time of our parser depends on the number of labels, we used only the 20 most frequent trace labels.", |
| "cite_spans": [ |
| { |
| "start": 80, |
| "end": 105, |
| "text": "(Nivre and Nilsson, 2005)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Handling Nonprojective Links", |
| "sec_num": "2.1.2" |
| }, |
| { |
| "text": "Our semantic model consists of three parts:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Submodel", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 A SRL classifier pipeline that generates a list of candidate predicate-argument structures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Submodel", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 A constraint system that filters the candidate list to enforce linguistic restrictions on the global configuration of arguments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Submodel", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "\u2022 A global reranker that assigns scores to predicate-argument structures in the filtered candidate list.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Submodel", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Rather than training the models on gold-standard syntactic input, we created an automatically parsed training set by 5-fold cross-validation. Training on automatic syntax makes the semantic classifiers more resilient to parsing errors, in particular adjunct labeling errors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Submodel", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The SRL pipeline consists of classifiers for predicate disambiguation, argument identification, and argument labeling. For the predicate disambiguation classifiers, we trained one subclassifier for each lemma. All classifiers in the pipeline were L2regularized linear logistic regression classifiers, implemented using the efficient LIBLINEAR package (Lin et al., 2008) . For multiclass problems, we used the one-vs-all binarization method, which makes it easy to prevent outputs not allowed by the PropBank frame.", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 369, |
| "text": "(Lin et al., 2008)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SRL Pipeline", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Since our classifiers were logistic, their output values could be meaningfully interpreted as probabilities. This allowed us to combine the scores from subclassifiers into a score for the complete predicate-argument structure. To generate the candidate lists used by the global SRL models, we applied beam search based on these scores using a beam width of 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SRL Pipeline", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "The argument identification classifier was preceded by a pruning step similar to the constituentbased pruning by Xue and Palmer (2004) .", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 134, |
| "text": "Xue and Palmer (2004)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "SRL Pipeline", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "The features used by the classifiers are listed in Table 1 , and are described in Appendix A. We selected the feature sets by greedy forward subset selection.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 51, |
| "end": 58, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "SRL Pipeline", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "ArgLab PREDWORD ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature PredDis ArgId", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 PREDLEMMA \u2022 PREDPARENTWORD/POS \u2022 CHILDDEPSET \u2022 \u2022 \u2022 CHILDWORDSET \u2022 CHILDWORDDEPSET \u2022 CHILDPOSSET \u2022 CHILDPOSDEPSET \u2022 DEPSUBCAT \u2022 PREDRELTOPARENT \u2022 PREDPARENTWORD/POS \u2022 PREDLEMMASENSE \u2022 \u2022 VOICE \u2022 \u2022 POSITION \u2022 \u2022 ARGWORD/POS \u2022 \u2022 LEFTWORD/POS \u2022 RIGHTWORD/POS \u2022 \u2022 LEFTSIBLINGWORD/POS \u2022 PREDPOS \u2022 \u2022 RELPATH \u2022 \u2022 VERBCHAINHASSUBJ \u2022 \u2022 CONTROLLERHASOBJ \u2022 PREDRELTOPARENT \u2022 \u2022 FUNCTION \u2022", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature PredDis ArgId", |
| "sec_num": null |
| }, |
| { |
| "text": "The following three global constraints were used to filter the candidates generated by the pipeline. CORE ARGUMENT CONSISTENCY. Core argument labels must not appear more than once.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistically Motivated Global Constraints", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "DISCONTINUITY CONSISTENCY. If there is a label C-X, it must be preceded by a label X. REFERENCE CONSISTENCY. If there is a label R-X and the label is inside an attributive relative clause, it must be preceded by a label X. Toutanova et al. (2005) have showed that a global model that scores the complete predicate-argument structure can lead to substantial performance gains. We therefore created a global SRL classifier using the following global features in addition to the features from the pipeline: CORE ARGUMENT LABEL SEQUENCE. The complete sequence of core argument labels. The sequence also includes the predicate and voice, for instance A0+break.01/Active+A1.", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 246, |
| "text": "Toutanova et al. (2005)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linguistically Motivated Global Constraints", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "MISSING CORE ARGUMENT LABELS. The set of core argument labels declared in the PropBank frame that are not present in the predicateargument structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate-Argument Reranker", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "Similarly to the syntactic submodel, we trained the global SRL model using the online passiveaggressive algorithm. The cost function \u03c1 was defined as the number of incorrect links in the predicate-argument structure. The number of iterations was 20 and the regularization parameter C was 0.01. Interestingly, we noted that the global SRL model outperformed the pipeline even when no global features were added. This shows that the global learning model can correct label bias problems introduced by the pipeline architecture.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Predicate-Argument Reranker", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "As described previously, we carried out reranking on the candidate set of complete syntactic-semantic structures. To do this, we used the top 16 trees from the syntactic module and applied a linear model:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Reranking", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "F joint (x, y syn , y sem ) = \u03a8 joint (x, y syn , y sem ) \u2022 w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Reranking", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Our baseline joint feature representation \u03a8 joint contained only three features: the log probability of the syntactic tree and the log probability of the semantic structure according to the pipeline and the global model, respectively. This model was trained on the complete training set using cross-validation. The probabilities were obtained using the multinomial logistic function (\"softmax\").", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Reranking", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We carried out an initial experiment with a more complex joint feature representation, but failed to improve over the baseline. Time prevented us from exploring this direction conclusively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Syntactic-Semantic Reranking", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "To compare our results with previously published results in SRL, we carried out an experiment comparing our system to the top system (Punyakanok et al., 2008) in the CoNLL-2005 Shared Task. However, comparison is nontrivial since the output of the CoNLL-2005 systems was a set of labeled segments, while the CoNLL-2008 systems (including ours) produced labeled semantic dependency links.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 158, |
| "text": "(Punyakanok et al., 2008)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparisons with Previous Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To have a fair comparison of our link-based system against previous segment-based systems, we carried out a two-way evaluation: In the first evaluation, the dependency-based output was converted to segments and evaluated using the segment scorer from CoNLL-2005, and in the second evaluation, we applied a head-finding procedure to the output of a segment-based system and scored the result using the link-based CoNLL-2008 scorer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparisons with Previous Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "It can be discussed which of the two metrics is most correlated with application performance. The traditional metric used in the CoNLL-2005 task treats SRL as a bracketing problem, meaning that the entities scored by the evaluation procedure are labeled snippets of text; however, it is questionable whether this is the proper way to evaluate a task whose purpose is to find semantic relations between logical entities. We believe that the same criticisms that have been leveled at the PARSEVAL metric for constituent structures are equally valid for the bracket-based evaluation of SRL systems. The inappropriateness of the traditional metric has led to a number of alternative metrics (Litkowski, 2004; Baker et al., 2007; Surdeanu et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 687, |
| "end": 704, |
| "text": "(Litkowski, 2004;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 705, |
| "end": 724, |
| "text": "Baker et al., 2007;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 725, |
| "end": 747, |
| "text": "Surdeanu et al., 2008)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparisons with Previous Results", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To be able to score the output of a dependency-based SRL system using the segment scorer, a conversion step is needed. Algorithm 2 shows how a set of segments is constructed from an argument dependency node. For each argument node, the algorithm computes the yield Y of the argument node, i.e. the set of dependency nodes to include in the bracketing. This set is then partitioned into contiguous parts, from which the segments are computed. In most cases, the yield is just the subtree dominated by the argument node. However, if the argument dominates the predicate, the branch containing the predicate is removed. Table 2 shows the performance figures of our system on the WSJ and Brown corpora: precision, recall, F1-measure, and complete proposition accuracy (PP). These figures are compared to the best-performing system in the CoNLL-2005 Shared Task (Punyakanok et al., 2008), referred to as Punyakanok in the table, and to the best currently published result (Surdeanu et al., 2007), referred to as Surdeanu. As a sanity check of the segment-creation algorithm, the table also shows the result of applying segment creation to gold-standard syntactic-semantic trees. We see that the two conversion procedures involved (constituent-to-dependency conversion by the CoNLL-2008 Shared Task organizers, and our dependency-to-segment conversion) work satisfactorily, although the process is not completely lossless.", |
| "cite_spans": [ |
| { |
| "start": 841, |
| "end": 851, |
| "text": "CoNLL-2005", |
| "ref_id": null |
| }, |
| { |
| "start": 852, |
| "end": 888, |
| "text": "Shared Task (Punyakanok et al., 2008", |
| "ref_id": null |
| }, |
| { |
| "start": 971, |
| "end": 994, |
| "text": "(Surdeanu et al., 2007)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 622, |
| "end": 629, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Segment-based Evaluation", |
| "sec_num": "3.1" |
| }, |
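The conversion described above can be sketched in a few lines of Python. This is an illustrative reconstruction of Algorithm 2, not the authors' implementation; the tree representation (a `children` map from token index to direct dependents) is an assumption made for the example.

```python
# Sketch of the dependency-to-segment conversion (Algorithm 2): compute the
# yield of the argument node, remove the branch containing the predicate if
# the argument dominates it, and split the rest into contiguous spans.

def yield_of(node, children):
    """Collect the set of nodes dominated by `node`, including itself."""
    nodes = {node}
    for child in children.get(node, []):
        nodes |= yield_of(child, children)
    return nodes

def segments_for_argument(arg, pred, children):
    """Return the list of (start, end) token spans for an argument node."""
    y = yield_of(arg, children)
    if pred in y and pred != arg:
        # Remove the direct-child subtree that contains the predicate.
        for child in children.get(arg, []):
            branch = yield_of(child, children)
            if pred in branch:
                y -= branch
    # Partition the remaining yield into maximal contiguous index ranges.
    spans, run = [], []
    for i in sorted(y):
        if run and i == run[-1] + 1:
            run.append(i)
        else:
            if run:
                spans.append((run[0], run[-1]))
            run = [i]
    if run:
        spans.append((run[0], run[-1]))
    return spans
```

For example, with `children = {2: [1, 3], 3: [4]}` and a predicate at token 4 dominated by the argument at token 2, the branch rooted at 3 is removed and a single segment covering tokens 1-2 remains.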
| { |
| "text": "During inspection of the output, we noted that many errors arise from inconsistent punctuation attachment in PropBank/Treebank. We therefore normalized the segments to exclude punctuation at the beginning or end of a segment. The results of this evaluation are shown in Table 3. The results on the WSJ test set clearly show that dependency-based SRL systems can rival constituent-based systems in terms of performance: our system clearly outperforms the Punyakanok system, and it achieves a higher recall and complete proposition accuracy than the Surdeanu system. We interpret the high recall as a result of the dependency-based syntactic representation, which makes the parse tree paths simpler and thus the arguments easier to find.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segment-based Evaluation", |
| "sec_num": "3.1" |
| }, |
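The punctuation normalization can be illustrated with a small sketch. The punctuation set and the span representation below are assumptions for the example, not taken from the paper:

```python
# Trim punctuation tokens from both ends of a segment, as in the
# normalization described above. PUNCT is an illustrative token set.
PUNCT = {",", ".", ":", ";", "``", "''", "(", ")", "-"}

def normalize_segment(tokens, start, end):
    """Shrink the inclusive token span [start, end] so it neither starts
    nor ends with a punctuation token; return None if nothing remains."""
    while start <= end and tokens[start] in PUNCT:
        start += 1
    while end >= start and tokens[end] in PUNCT:
        end -= 1
    return (start, end) if start <= end else None
```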
| { |
| "text": "For the Brown test set, on the other hand, the dependency-based system suffers from low precision compared to the constituent-based systems. Our error analysis indicates that the domain change caused problems with prepositional attachment for the dependency parser: it is well known that prepositional attachment is a highly lexicalized problem, and thus sensitive to domain changes. We believe that the constituent-based systems are more robust in this respect because they use a combination strategy, drawing on inputs from two different full constituent parsers, a clause bracketer, and a chunker. However, caution is needed when drawing conclusions from results on the Brown test set, which contains only 7,585 words, compared to the 59,100 words in the WSJ test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segment-based Evaluation", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "It has previously been noted (Pradhan et al., 2005) that a segment-based evaluation may be unfavorable to a dependency-based system, and that an evaluation that scores argument heads may be more indicative of its true performance. We thus carried out an evaluation using the evaluation script of the CoNLL-2008 Shared Task. In this evaluation method, an argument is counted as correctly identified if its head and label are correct. Note that this is not equivalent to the segment-based metric: in a perfectly identified segment, we may still pick out the wrong head, and if the head is correct, we may still infer an incorrect segment. The evaluation script also scores predicate disambiguation performance; we did not include this score since the 2005 systems did not output predicate sense identifiers.", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 50, |
| "text": "(Pradhan et al., 2005", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-based Evaluation", |
| "sec_num": "3.2" |
| }, |
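The head-and-label criterion can be sketched as follows. This is a simplified illustration of the scoring idea, not the official CoNLL-2008 evaluation script; the triple representation is an assumption.

```python
# Score semantic arguments under the dependency-based criterion: an argument
# is correct iff its (predicate, head, label) triple matches the gold standard.

def score_arguments(gold, predicted):
    """gold, predicted: sets of (predicate_index, head_index, label) triples.
    Returns (precision, recall, f1)."""
    correct = len(gold & predicted)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```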
| { |
| "text": "Since CoNLL-2005-style segments have no internal tree structure, it is nontrivial to extract a head. It is conceivable that the output of the parsers used by the Punyakanok system could be used to extract heads, but this is inadvisable because the Punyakanok system is an ensemble system, and a segment does not always exactly match a constituent in a parse tree. Furthermore, the CoNLL-2008 constituent-to-dependency conversion method uses a richer structure than just the raw constituents: empty categories, grammatical functions, and named entities. To recreate this additional information, we would have to apply automatic tools, which would make the results unreliable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-based Evaluation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Instead, we chose to compute an upper bound on the performance of the segment-based system. We applied a simple head-finding procedure (Algorithm 3) to find a set of head nodes for each segment. Since the CoNLL-2005 output does not include dependency information, the algorithm uses gold-standard dependencies and intersects the segments with the gold-standard segments. This gives us an upper bound: if a segment contains the correct head, it is always counted as correct.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dependency-based Evaluation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The algorithm looks for dependencies leaving the segment, and if multiple outgoing edges are found, a couple of simple heuristics are applied. We found that the best performance is achieved when selecting only one outgoing edge. \"Small clauses,\" which are split into an object and a predicative complement in the dependency framework, are the only cases where we select two heads. Table 4 shows the results of the dependency-based evaluation. In this evaluation, the dependency-based system has a higher F1-measure than the Punyakanok system on both test sets. This suggests that the main advantage of a dependency-based semantic role labeler is that it is better at finding the heads of semantic arguments, rather than at finding segments. The results are also interesting in comparison to the multi-view system described by Pradhan et al. (2005), which has a reported head F1-measure of 85.2 on the WSJ test set. That figure is not directly comparable with ours, however, since that system used a different head extraction mechanism.", |
| "cite_spans": [ |
| { |
| "start": 860, |
| "end": 881, |
| "text": "Pradhan et al. (2005)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 381, |
| "end": 388, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dependency-based Evaluation", |
| "sec_num": "3.2" |
| }, |
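The head-finding step can be sketched roughly as follows. This is a simplified reconstruction of Algorithm 3 under assumed data structures; it keeps only a single outgoing edge and omits the small-clause case, which the paper handles by selecting two heads.

```python
# Find the head candidates of a segment: the tokens inside the segment whose
# dependency heads lie outside it. The paper's tie-breaking heuristics are
# simplified here to "keep the leftmost outgoing edge".

def find_heads(segment, heads):
    """segment: set of token indices; heads: dict mapping each token to its
    head index (0 for the root). Returns a list with the selected head."""
    outgoing = [tok for tok in sorted(segment) if heads[tok] not in segment]
    # Simplified heuristic: select a single head even if several
    # dependency edges leave the segment.
    return outgoing[:1]
```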
| { |
| "text": "We have described a dependency-based system 1 for semantic role labeling of English in the PropBank framework. Our evaluations show that the performance of our system is close to the state of the art. This holds regardless of whether a segmentbased or a dependency-based metric is used. Interestingly, our system has a complete proposition accuracy that surpasses other systems by nearly 3 percentage points. Our system is the first semantic role labeler based only on syntactic dependency that achieves a competitive performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Evaluation and comparison are difficult issues, since the natural output of a dependency-based system is a set of semantic links rather than the segments normally produced by traditional systems. To be fair to both types of systems, we carried out a two-way evaluation: conversion of dependencies to segments for the dependency-based system, and head-finding heuristics for segment-based systems. However, the latter is difficult since no structure is available inside segments, and we had to resort to computing upper-bound results using gold-standard input; despite this, the dependency-based system clearly outperformed the upper bound on the performance of the segment-based system. The comparison may also be slightly misleading because the dependency-based system was optimized for the dependency metric, and previous systems for the segment metric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our evaluations suggest that the dependency-based SRL system is geared toward finding argument heads rather than argument text snippets, which is of course perfectly logical. Whether this is an advantage or a drawback will depend on the application: for instance, a template-filling system might need complete segments, while an SRL-based vector space representation for text categorization, or a reasoning application, might prefer heads only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In the future, we would like to further investigate whether syntactic and semantic analysis could be integrated more tightly. In this work, we used a simplistic loose coupling by means of reranking a small set of complete structures. The criticisms often leveled at reranking-based models clearly apply here too: the set of tentative analyses from the submodules is too small, and the correct analysis is often pruned too early. An example of a method to mitigate this shortcoming is the forest reranking of Huang (2008), in which complex features are evaluated as early as possible.", |
| "cite_spans": [ |
| { |
| "start": 523, |
| "end": 535, |
| "text": "Huang (2008)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Features Used in Predicate Disambiguation: PREDWORD, PREDLEMMA. The lexical form and lemma of the predicate. PREDPARENTWORD and PREDPARENTPOS.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Classifier Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Form and part-of-speech tag of the parent node of the predicate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Classifier Features", |
| "sec_num": null |
| }, |
| { |
| "text": "CHILDDEPSET, CHILDWORDSET, CHILDWORDDEPSET, CHILDPOSSET, CHILDPOSDEPSET. These features represent the set of dependents of the predicate using combinations of dependency labels, words, and parts of speech. DEPSUBCAT. Subcategorization frame: the concatenation of the dependency labels of the predicate dependents. PREDRELTOPARENT. Dependency relation between the predicate and its parent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Classifier Features", |
| "sec_num": null |
| }, |
| { |
| "text": "Features Used in Argument Identification and Labeling: PREDLEMMASENSE. The lemma and sense number of the predicate, e.g. give.01. VOICE. For verbs, this feature is Active or Passive.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Classifier Features", |
| "sec_num": null |
| }, |
| { |
| "text": "For nouns, it is not defined. POSITION. Position of the argument with respect to the predicate: Before, After, or On. ARGWORD and ARGPOS. Lexical form and part-of-speech tag of the argument node. LEFTWORD, LEFTPOS, RIGHTWORD, RIGHTPOS. Form/part-of-speech tag of the leftmost/rightmost dependent of the argument. LEFTSIBLINGWORD, LEFTSIBLINGPOS. Form/part-of-speech tag of the left sibling of the argument. PREDPOS. Part-of-speech tag of the predicate. RELPATH. A representation of the complex grammatical relation between the predicate and the argument. It consists of the sequence of dependency relation labels and link directions in the path between predicate and argument, e.g. IM\u2191OPRD\u2191OBJ\u2193. VERBCHAINHASSUBJ. Binary feature that is set to true if the predicate verb chain has a subject. The purpose of this feature is to resolve verb coordination ambiguity as in Figure 3. CONTROLLERHASOBJ. Binary feature that is true if the link between the predicate verb chain and its parent is OPRD and the parent has an object. This feature is meant to resolve control ambiguity as in Figure 4. FUNCTION. The grammatical function of the argument node. For direct dependents of the predicate, this is identical to RELPATH.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 868, |
| "end": 876, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1081, |
| "end": 1089, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Classifier Features", |
| "sec_num": null |
| }, |
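As an illustration of the RELPATH feature, the following sketch walks from the predicate and the argument up to their lowest common ancestor and concatenates dependency labels with direction arrows. The data structures and helper names are assumptions made for the example; the paper's implementation may differ in detail.

```python
# Compute a RELPATH-style feature: dependency labels on the path from the
# predicate up to the lowest common ancestor ("up" arrows), then down to
# the argument ("down" arrows), e.g. "IM↑OPRD↑OBJ↓".

def rel_path(pred, arg, head, deprel):
    """head: token -> parent index (0 = root); deprel: token -> label."""
    def ancestors(node):
        chain = [node]
        while head[node] != 0:
            node = head[node]
            chain.append(node)
        return chain

    pred_anc, arg_anc = ancestors(pred), ancestors(arg)
    # Lowest common ancestor: first node on the predicate's ancestor
    # chain that also appears on the argument's chain.
    common = next(n for n in pred_anc if n in set(arg_anc))
    up = [deprel[n] + "\u2191" for n in pred_anc[:pred_anc.index(common)]]
    down = [deprel[n] + "\u2193"
            for n in reversed(arg_anc[:arg_anc.index(common)])]
    return "".join(up + down)
```

For a tree where token 3 attaches to token 2 with label IM and token 2 attaches to the root token 1 with label OBJ, the path from predicate 3 to argument 1 is "IM↑OBJ↑".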
| { |
| "text": "Our system is freely available for download at http://nlp.cs.lth.se/lth_srl.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "SemEval task 19: Frame semantic structure extraction", |
| "authors": [ |
| { |
| "first": "Collin", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ellsworth", |
| "suffix": "" |
| }, |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of SemEval-2007", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval task 19: Frame semantic structure extraction. In Proceedings of SemEval-2007.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Introduction to the CoNLL-2004 shared task: Semantic role labeling", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role label- ing. In Proceedings of CoNLL-2004.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Introduction to the CoNLL-2005 shared task: Semantic role labeling", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of CoNLL-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Carreras and Llu\u00eds M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role label- ing. In Proceedings of CoNLL-2005.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Experiments with a higherorder projective dependency parser", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of CoNLL-2007", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Carreras. 2007. Experiments with a higher- order projective dependency parser. In Proceedings of CoNLL-2007.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Learning Bayesian networks: The combination of knowledge and statistical data", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "M" |
| ], |
| "last": "Chickering", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Geiger", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Heckerman", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David M. Chickering, Dan Geiger, and David Hecker- man. 1994. Learning Bayesian networks: The com- bination of knowledge and statistical data. Technical Report MSR-TR-94-09, Microsoft Research.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Fast semantic extraction using a novel neural network architecture", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL-2007", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert and Jason Weston. 2007. Fast semantic extraction using a novel neural network architecture. In Proceedings of ACL-2007.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Online passiveaggressive algorithms", |
| "authors": [ |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ofer", |
| "middle": [], |
| "last": "Dekel", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| }, |
| { |
| "first": "Shai", |
| "middle": [], |
| "last": "Shalev-Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "JMLR", |
| "volume": "", |
| "issue": "7", |
| "pages": "551--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Schwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. JMLR, 2006(7):551-585.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Three new probabilistic models for dependency parsing: An exploration", |
| "authors": [ |
| { |
| "first": "Jason", |
| "middle": [ |
| "M" |
| ], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of ICCL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceed- ings of ICCL.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Automatic labeling of semantic roles", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Computational Linguistics", |
| "volume": "28", |
| "issue": "3", |
| "pages": "245--288", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea and Daniel Jurafsky. 2002. Automatic la- beling of semantic roles. Computational Linguistics, 28(3):245-288.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The necessity of syntactic parsing for predicate argument recognition", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Gildea and Martha Palmer. 2002. The necessity of syntactic parsing for predicate argument recogni- tion. In Proceedings of the ACL-2002.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Robust textual inference via graph matching", |
| "authors": [ |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of EMNLP-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aria Haghighi, Andrew Y. Ng, and Christopher D. Man- ning. 2005. Robust textual inference via graph match- ing. In Proceedings of EMNLP-2005.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Recognizing textual entailment with LCC's GROUNDHOG systems", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Hickl", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeremy", |
| "middle": [], |
| "last": "Bensley", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Kirk", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Rink", |
| "suffix": "" |
| }, |
| { |
| "first": "Ying", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Second PASCAL Recognizing Textual Entailment Challenge", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Hickl, Jeremy Bensley, John Williams, Kirk Roberts, Bryan Rink, and Ying Shi. 2006. Recogniz- ing textual entailment with LCC's GROUNDHOG sys- tems. In Proceedings of the Second PASCAL Recog- nizing Textual Entailment Challenge.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Forest reranking: Discriminative parsing with non-local features", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-2008.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Dependency-based syntactic-semantic analysis with PropBank and NomBank", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the Shared Task Session of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Johansson and Pierre Nugues. 2008a. Dependency-based syntactic-semantic analysis with PropBank and NomBank. In Proceedings of the Shared Task Session of CoNLL-2008.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The effect of syntactic representation on semantic role labeling", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Johansson and Pierre Nugues. 2008b. The effect of syntactic representation on semantic role labeling. In Proceedings of COLING-2008.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Trust region Newton method for large-scale logistic regression", |
| "authors": [ |
| { |
| "first": "Chih-Jen", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruby", |
| "middle": [ |
| "C" |
| ], |
| "last": "Weng", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sathiya Keerthi", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "JMLR", |
| "volume": "", |
| "issue": "9", |
| "pages": "627--650", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chih-Jen Lin, Ruby C. Weng, and S. Sathiya Keerthi. 2008. Trust region Newton method for large-scale lo- gistic regression. JMLR, 2008(9):627-650.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Senseval-3 task: Automatic labeling of semantic roles", |
| "authors": [ |
| { |
| "first": "Ken", |
| "middle": [], |
| "last": "Litkowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of Senseval-3", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ken Litkowski. 2004. Senseval-3 task: Automatic label- ing of semantic roles. In Proceedings of Senseval-3.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Building a large annotated corpus of English: the Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated cor- pus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Online learning of approximate dependency parsing algorithms", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EACL-2006", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan McDonald and Fernando Pereira. 2006. On- line learning of approximate dependency parsing al- gorithms. In Proceedings of EACL-2006.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Open domain information extraction via automatic semantic labeling", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Mor\u0203rescu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanda", |
| "middle": [], |
| "last": "Harabagiu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of FLAIRS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessandro Moschitti, Paul Mor\u0203rescu, and Sanda Harabagiu. 2003. Open domain information extrac- tion via automatic semantic labeling. In Proceedings of FLAIRS.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Pseudo-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joakim Nivre and Jens Nilsson. 2005. Pseudo-projective dependency parsing. In Proceedings of ACL-2005.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Text categorization using predicate-argument structures", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Persson", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Nugues", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Persson, Richard Johansson, and Pierre Nugues. 2008. Text categorization using predicate-argument structures. Submitted.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Semantic role labeling using different syntactic views", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Wayne", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Kadri", |
| "middle": [], |
| "last": "Hacioglu", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Martin", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Wayne Ward, Kadri Hacioglu, James Martin, and Daniel Jurafsky. 2005. Semantic role la- beling using different syntactic views. In Proceedings of ACL-2005.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "The importance of syntactic parsing and inference in semantic role labeling", |
| "authors": [ |
| { |
| "first": "Vasin", |
| "middle": [], |
| "last": "Punyakanok", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Wen-Tau", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "2", |
| "pages": "257--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257-287.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Using predicate-argument structures for information extraction", |
| "authors": [ |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanda", |
| "middle": [], |
| "last": "Harabagiu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Aarseth", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of ACL-2003", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument struc- tures for information extraction. In Proceedings of ACL-2003.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Combination strategies for semantic role labeling", |
| "authors": [ |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "Pere", |
| "middle": [ |
| "R" |
| ], |
| "last": "Comas", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "29", |
| "issue": "", |
| "pages": "105--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihai Surdeanu, Llu\u00eds M\u00e0rquez, Xavier Carreras, and Pere R. Comas. 2007. Combination strategies for se- mantic role labeling. Journal of Artificial Intelligence Research, 29:105-151.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies", |
| "authors": [ |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Johansson", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Meyers", |
| "suffix": "" |
| }, |
| { |
| "first": "Llu\u00eds", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of CoNLL-2008", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu\u00eds M\u00e0rquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntac- tic and semantic dependencies. In Proceedings of CoNLL-2008.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "A comparison of alternative parse tree paths for labeling semantic roles", |
| "authors": [ |
| { |
| "first": "Reid", |
| "middle": [], |
| "last": "Swanson", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "S" |
| ], |
| "last": "Gordon", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of COLING/ACL-2006", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Reid Swanson and Andrew S. Gordon. 2006. A compari- son of alternative parse tree paths for labeling semantic roles. In Proceedings of COLING/ACL-2006.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Max-margin Markov networks", |
| "authors": [ |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| }, |
| { |
| "first": "Daphne", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of NIPS-2003", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Taskar, Carlos Guestrin, and Daphne Koller. 2003. Max-margin Markov networks. In Proceedings of NIPS-2003.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Joint learning improves semantic role labeling", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of ACL-2005.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Calibrating features for semantic role labeling", |
| "authors": [ |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of EMNLP-2004", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of EMNLP-2004.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "An example sentence annotated with syntactic and semantic dependency structures.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "text": "The architecture of the syntactic-semantic analyzer.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "text": "Coordination ambiguity: The subject I is in an ambiguous position with respect to drink. Subject/object control ambiguity: I is in an ambiguous position with respect to sleep.", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF0": { |
| "html": null, |
| "text": "", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "Algorithm 2 Segment creation from an argument dependency node.\ninput Predicate node p, argument node a\nif a does not dominate p\n    Y \u2190 {n : a dominates n}\nelse\n    c \u2190 the child of a that dominates p\n    Y \u2190 {n : a dominates n} \\ {n : c dominates n}\nend if\nS \u2190 partition of Y into contiguous subsets\nreturn {(min-index s, max-index s) : s \u2208 S}", |
| "content": "<table><tr><td>WSJ</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">82.22 77.72 79.90 57.24</td></tr><tr><td>Punyakanok</td><td colspan=\"4\">82.28 76.78 79.44 53.79</td></tr><tr><td>Surdeanu</td><td colspan=\"4\">87.47 74.67 80.56 51.66</td></tr><tr><td colspan=\"5\">Gold standard 97.38 96.77 97.08 93.20</td></tr><tr><td>Brown</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">68.79 61.87 65.15 32.34</td></tr><tr><td>Punyakanok</td><td colspan=\"4\">73.38 62.93 67.75 32.34</td></tr><tr><td>Surdeanu</td><td colspan=\"4\">81.75 61.32 70.08 34.33</td></tr><tr><td colspan=\"5\">Gold standard 97.22 96.55 96.89 92.79</td></tr><tr><td>WSJ+Brown</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">80.50 75.59 77.97 53.94</td></tr><tr><td>Punyakanok</td><td colspan=\"4\">81.18 74.92 77.92 50.95</td></tr><tr><td>Surdeanu</td><td colspan=\"4\">86.78 72.88 79.22 49.36</td></tr><tr><td colspan=\"5\">Gold standard 97.36 96.75 97.05 93.15</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "html": null, |
| "text": "Evaluation with unnormalized segments.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "html": null, |
| "text": "This table does not include the Surdeanu system since we did not have access to its output.", |
| "content": "<table><tr><td>WSJ</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">82.95 78.40 80.61 58.65</td></tr><tr><td>Punyakanok</td><td colspan=\"4\">82.67 77.14 79.81 54.55</td></tr><tr><td colspan=\"5\">Gold standard 97.85 97.24 97.54 94.34</td></tr><tr><td>Brown</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">70.84 63.71 67.09 36.94</td></tr><tr><td>Punyakanok</td><td colspan=\"4\">74.29 63.71 68.60 34.08</td></tr><tr><td colspan=\"5\">Gold standard 97.46 96.78 97.12 93.41</td></tr><tr><td>WSJ+Brown</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">81.39 76.44 78.84 55.77</td></tr><tr><td>Punyakanok</td><td colspan=\"4\">81.63 75.34 78.36 51.84</td></tr><tr><td colspan=\"5\">Gold standard 97.80 97.18 97.48 94.22</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "Evaluation with normalized segments.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "html": null, |
| "text": "Algorithm 3 Finding head nodes in a segment.\ninput Argument segment a\nif a overlaps with a segment in the gold standard\n    a \u2190 intersection of a and gold standard\nF \u2190 {n : governor of n outside a}\nif |F| = 1\n    return F\nremove punctuation nodes from F\nif |F| = 1\n    return F\nif F = {n1, n2, ...} where n1 is an object and n2 is the predicative part of a small clause\n    return {n1, n2}\nif F contains a node n that is a subject or an object\n    return {n}\nelse\n    return {n}, where n is the leftmost node in F\n\ndependency-based system is compared to the semantic dependency links automatically extracted from the segments of the Punyakanok system.", |
| "content": "<table><tr><td>WSJ</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">88.46 83.55 85.93 61.97</td></tr><tr><td colspan=\"5\">Punyakanok 87.25 81.59 84.32 58.17</td></tr><tr><td>Brown</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">77.67 69.63 73.43 41.32</td></tr><tr><td colspan=\"5\">Punyakanok 80.29 68.59 73.98 37.28</td></tr><tr><td>WSJ+Brown</td><td>P</td><td>R</td><td>F1</td><td>PP</td></tr><tr><td>Our system</td><td colspan=\"4\">87.07 81.68 84.29 59.22</td></tr><tr><td colspan=\"5\">Punyakanok 86.94 80.21 83.45 55.39</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "html": null, |
| "text": "Dependency-based evaluation.", |
| "content": "<table/>", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |