| { |
| "paper_id": "D11-1012", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:32:07.719357Z" |
| }, |
| "title": "A Joint Model for Extended Semantic Role Labeling", |
| "authors": [ |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Srikumar", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "vsrikum2@illinois.edu" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "danr@illinois.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents a model that extends semantic role labeling. Existing approaches independently analyze relations expressed by verb predicates or those expressed as nominalizations. However, sentences express relations via other linguistic phenomena as well. Furthermore, these phenomena interact with each other, thus restricting the structures they articulate. In this paper, we use this intuition to define a joint inference model that captures the inter-dependencies between verb semantic role labeling and relations expressed using prepositions. The scarcity of jointly labeled data presents a crucial technical challenge for learning a joint model. The key strength of our model is that we use existing structure predictors as black boxes. By enforcing consistency constraints between their predictions, we show improvements in the performance of both tasks without retraining the individual models.", |
| "pdf_parse": { |
| "paper_id": "D11-1012", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents a model that extends semantic role labeling. Existing approaches independently analyze relations expressed by verb predicates or those expressed as nominalizations. However, sentences express relations via other linguistic phenomena as well. Furthermore, these phenomena interact with each other, thus restricting the structures they articulate. In this paper, we use this intuition to define a joint inference model that captures the inter-dependencies between verb semantic role labeling and relations expressed using prepositions. The scarcity of jointly labeled data presents a crucial technical challenge for learning a joint model. The key strength of our model is that we use existing structure predictors as black boxes. By enforcing consistency constraints between their predictions, we show improvements in the performance of both tasks without retraining the individual models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The identification of semantic relations between sentence constituents has been an important task in NLP research. It finds applications in various natural language understanding tasks that require complex inference going beyond the surface representation. In the literature, semantic role extraction has been studied mostly in the context of verb predicates, using the Propbank annotation of Palmer et al. (2005) , and also for nominal predicates, using the Nombank corpus of Meyers et al. (2004) .", |
| "cite_spans": [ |
| { |
| "start": 393, |
| "end": 413, |
| "text": "Palmer et al. (2005)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 477, |
| "end": 497, |
| "text": "Meyers et al. (2004)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, sentences express semantic relations through other linguistic phenomena. For example, consider the following sentence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(1) The field goal by Brien changed the game in the fourth quarter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Verb centered semantic role labeling would identify the arguments of the predicate change as (a) The field goal by Brien (A0, the causer of the change), (b) the game (A1, the thing changing), and (c) in the fourth quarter (temporal modifier). However, this does not tell us that the scorer of the field goal was Brien, which is expressed by the preposition by. Also, note that the in indicates a temporal relation, which overlaps with the verb's analysis. In this paper, we propose an extension of the standard semantic role labeling task to include relations expressed by lexical items other than verbs and nominalizations. Further, we argue that there are interactions between the different phenomena which suggest that there is a benefit in studying them together. However, one key challenge is that large jointly labeled corpora do not exist. This motivates the need for novel learning and inference schemes that address the data problem and can still benefit from the interactions among the phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper has two main contributions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. From the machine learning standpoint, we propose a joint inference scheme to combine existing structure predictors for multiple linguistic phenomena. We do so using hard constraints that involve only the labels of the phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The strength of our model is that it is easily extensible, since adding new phenomena does not require fully retraining the joint model from scratch. Furthermore, our approach minimizes the need for extensive jointly labeled corpora and, instead, uses existing predictors as black boxes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2. From an NLP perspective, we motivate the extension of semantic role labeling beyond verbs and nominalizations. We instantiate our joint model for the case of extracting preposition and verb relations together. Our model uses existing systems that identify verb semantic roles and preposition object roles and jointly predicts the output of the two systems in the presence of linguistic constraints that enforce coherence between the predictions. We show that using constraints to combine models improves the performance on both tasks. Furthermore, since the constraints depend only on the labels of the two tasks and not on any specific dataset, our experiments also demonstrate that enforcing them allows for better domain adaptation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of the paper is organized as follows: We motivate the need for extending semantic role labeling and the necessity for joint inference in Section 2. In Section 3, we describe the component verb SRL and preposition role systems. The global model is defined in Section 4. Section 5 provides details on the coherence constraints we use and demonstrates the effectiveness of the joint model through experiments. Section 6 discusses our approach in comparison to existing work and Section 7 provides concluding remarks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Semantic Role Labeling has been extensively studied in the context of verbs and nominalizations. While this analysis is crucial to understanding a sentence, it is clear that in many natural language sentences, information is conveyed via other lexical items. Consider, for example, the following sentences:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(2) Einstein's theory of relativity changed physics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(3) The plays of Shakespeare are widely read.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(4) The bus, which was heading for Nairobi in Kenya, crashed in the Kabale district of Uganda.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The examples contain information that cannot be captured by analyzing the verbs and the nominalizations. In sentence (2), the possessive form tells us that the theory of relativity was discovered by Einstein. Furthermore, the theory is on the subject of relativity. The usage of the preposition of is different in sentence 3, where it indicates a creatorcreation relationship. In the last sentence, the same preposition tells us that the Kabale district is located in Uganda. Prepositions, compound nouns, possessives, adjectival forms and punctuation marks often express relations, the identification of which is crucial for text understanding tasks like recognizing textual entailment, paraphrasing and question answering.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The relations expressed by different linguistic phenomena often overlap. For example, consider the following sentence:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "(5) Construction of the library began in 1968.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The relation expressed by the nominalization construction recognizes the library as the argument of the predicate construct. However, the same analysis can also be obtained by identifying the sense of the preposition of, which tells us that the subject of the preposition is a nominalization of the underlying verb. A similar redundancy can be observed with analyses of the verb began and the preposition in. The above example motivates the following key intuition: The correct interpretation of a sentence is the one that gives a consistent analysis across all the linguistic phenomena expressed in it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "An inference mechanism that simultaneously predicts the structure for different phenomena should account for consistency between the phenomena. A model designed to address this has the following desiderata:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. It should account for the dependencies between phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2. It should be extensible to allow easy addition of new linguistic phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3. It should be able to leverage existing state-ofthe-art models with minimal use of jointly labeled data, which is expensive to obtain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Systems that are trained on each task independently do not account for the interplay between them. One approach for tackling this is to define pipelines, where the predictions for one of the tasks acts as the input for another. However, a pipeline does not capture the two-way dependency between the tasks. Training a fully joint model from scratch is also unrealistic because it requires text that is annotated with all the tasks, thus making joint training implausible from a learning theoretic perspective (See Punyakanok et al. (2005) for a discussion about the learning theoretic requirements of joint training.)", |
| "cite_spans": [ |
| { |
| "start": 514, |
| "end": 538, |
| "text": "Punyakanok et al. (2005)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Definition and Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Before defining our proposed model that captures the requirements listed in the previous section, we introduce the tasks we consider and their independently trained systems that we improve using the joint system. Though the model proposed here is general and can be extended to several linguistic phenomena, in this paper, we focus on relations expressed by verbs and prepositions. This section describes the tasks, the data sets we used for our experiments and the current state-of-the-art systems for these tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Individual Systems", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use the following sentence as our running example to illustrate the phenomena: The company calculated the price trends on the major stock markets on Monday.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and Individual Systems", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Prepositions indicate a relation between the attachment point of the preposition and its object. As we have seen, the same preposition can indicate different types of relations. In the literature, the polysemy of prepositions is addressed by The Preposition Project 1 of Litkowski and Hargraves (2005) , which is a large lexical resource for English that labels prepositions with their sense. This sense inventory formed the basis of the SemEval-2007 task of preposition word sense disambiguation of Litkowski and Hargraves (2007) . In our example, the first on 1 http://www.clres.com/prepositions.html would be labeled with the sense 8(3) which identifies the object of the preposition as the topic, while the second instance would be labeled as 17(8), which indicates that argument is the day of the occurrence.", |
| "cite_spans": [ |
| { |
| "start": 271, |
| "end": 301, |
| "text": "Litkowski and Hargraves (2005)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 500, |
| "end": 530, |
| "text": "Litkowski and Hargraves (2007)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The preposition sense inventory, while useful to identify the fine grained distinctions between preposition usage, defines a unique sense label for each preposition by indexing the definitions of the prepositions in the Oxford Dictionary of English. For example, in the phrase at noon, the at would be labeled with the sense 2(2), while the preposition in I will see you in an hour will be labeled 4(3). Note that both these (and also the second on in our running example) indicate a temporal relation, but are assigned different labels based on the preposition. To counter this problem we collapsed preposition senses that are semantically similar to define a new label space, which we refer to as Preposition Roles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We retrained classifiers for preposition sense for the new label space. Before describing the preposition role dataset, we briefly describe the datasets and the features for the sense problem. The best performing system at the SemEval-2007 shared task of preposition sense disambiguation (Ye and Baldwin (2007) ) achieves a mean precision of 69.3% for predicting the fine grained senses. Tratz and Hovy (2009) and Hovy et al. (2010) attained significant improvements in performance using features derived from the preposition's neighbors in the parse tree. We extended the feature set defined in the former for our independent system. Table 1 summarizes the rules for identifying the syntactically related words for each preposition. We used dependencies from the easy-first dependency parser of Goldberg and Elhadad (2010) .", |
| "cite_spans": [ |
| { |
| "start": 288, |
| "end": 310, |
| "text": "(Ye and Baldwin (2007)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 388, |
| "end": 409, |
| "text": "Tratz and Hovy (2009)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 414, |
| "end": 432, |
| "text": "Hovy et al. (2010)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 796, |
| "end": 823, |
| "text": "Goldberg and Elhadad (2010)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For each word extracted from these rules, the features include the word itself, its lemma, the POS tag, synonyms and hypernyms of the first WordNet sense and an indicator for capitalization. These features improved the accuracy of sense identification to 75.1% on the SemEval test set. In addition, we also added the following new features for each word:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1. Indicators for gerunds and nominalizations of verbs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2. The named entity tag (Person, Location or Organization) associated with a word, if any. We", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Id. Feature 1. Head noun/verb that dominates the preposition along with its modifiers 2. Head noun/verb that is dominated by the preposition along with its modifiers 3. Subject, negator and object(s) of the immediately dominating verb 4. Heads of sibling prepositions 5. Words withing a window of 5 centered at the preposition Table 1 : Features for preposition relation from Tratz and Hovy (2009) . These rules were used to identify syntactically related words for each preposition.", |
| "cite_spans": [ |
| { |
| "start": 376, |
| "end": 397, |
| "text": "Tratz and Hovy (2009)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 327, |
| "end": 334, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "used the state-of-the-art named entity tagger of Ratinov and Roth (2009) to label the text.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 72, |
| "text": "Ratinov and Roth (2009)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "3. Gazetteer features, which are active if a word is a part of a phrase that belongs to a gazetteer list. We used the gazetteer lists which were used by the NER system. We also used the CBC word clusters of Pantel and Lin (2002) as additional gazetteers and Brown cluster features as used by Ratinov and Roth (2009) and Koo et al. (2008) . Dahlmeier et al. (2009) annotated senses for the prepositions at, for, in, of, on, to and with in the sections 2-4 and 23 of the Wall Street Journal portion of the Penn Treebank 2 . We trained sense classifiers on both datasets using the Averaged Perceptron algorithm with the one-vs-all scheme using the Learning Based Java framework of Rizzolo and Roth (2010) 3 . Table 2 reports the performance of our sense disambiguation systems for the Treebank prepositions.", |
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 228, |
| "text": "Pantel and Lin (2002)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 292, |
| "end": 315, |
| "text": "Ratinov and Roth (2009)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 320, |
| "end": 337, |
| "text": "Koo et al. (2008)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 340, |
| "end": 363, |
| "text": "Dahlmeier et al. (2009)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 706, |
| "end": 713, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "As mentioned earlier, we collapsed the sense labels onto the newly defined preposition role labels. Table 3 shows this label set along with frequencies of the labels in the Treebank dataset. According to this labeling scheme, the first on in our running example will be labeled TOPIC and the second one will be labeled TEMPORAL 4 . We re-trained the sense disambiguation system to predict preposition roles. When trained on the Treebank data, our system attains an accuracy of 67.82% on Section 23 of the Treebank. We use this system as our independent baseline for preposition role identification.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 107, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preposition Relations", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The goal of verb Semantic Role Labeling (SRL) is to identify the predicate-argument structure defined by verbs in sentences. The CoNLL Shared Tasks of 2004 and 2005 (See Carreras and M\u00e0rquez (2004) , Carreras and M\u00e0rquez (2005) ) studied the identification of the predicate-argument structure of verbs using the PropBank corpus of Palmer et al. (2005) . Punyakanok et al. (2008) and Toutanova et al. (2008) used global inference to ensure that the predictions across all arguments of the same predicate are coherent. We re-implemented the system of Punyakanok et al. (2008) , which we briefly describe here, to serve as our baseline verb semantic role labeler 5 . We refer the reader to the original paper for further details. The verb SRL system of Punyakanok et al. (2008) consists of four stages -candidate generation, argument identification, argument classification and inference. The candidate generation stage involves using the heuristic of Xue and Palmer (2004) to generate an over-complete set of argument candidates for each predicate. The identification stage uses a classifier to prune the candidates. In the argument classification step, the candidates that remain after the identification step are assigned scores for the SRL arguments using a multiclass classifier. One of the labels of the classifier is \u2205, which indicates that the candidate is, in fact, not an argument. The inference step produces a combined prediction for all argument candidates of a verb proposition by enforcing global constraints.", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 197, |
| "text": "Carreras and M\u00e0rquez (2004)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 200, |
| "end": 227, |
| "text": "Carreras and M\u00e0rquez (2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 331, |
| "end": 351, |
| "text": "Palmer et al. (2005)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 354, |
| "end": 378, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 383, |
| "end": 406, |
| "text": "Toutanova et al. (2008)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 549, |
| "end": 573, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 750, |
| "end": 774, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 949, |
| "end": 970, |
| "text": "Xue and Palmer (2004)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The inference enforces the following structural and linguistic constraints: (1) Each candidate can have at most one label. (2) No duplicate core arguments. (3) No overlapping or embedding arguments. (4) Given the predicate, some argument classes are illegal. (5) If a candidate is labeled as an R-arg, then there should be one labeled as arg. (6) If a candidate is labeled as a C-arg, there should be one labeled arg that occurs before the C-arg.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Instead of using the identifier to filter candidates for the classifier, in our SRL system, we added the identifier to the global inference and enforced consistency constraints between the identifier and the argument classifier predictions -the identifier should predict that a candidate is an argument if, and only if, the argument classifier does not predict the label \u2205. This change is in keeping with the idea of using joint inference to combine independently learned systems, in this case, the argument identifier and the role classifier. Furthermore, we do not need to explicitly tune the identifier for high recall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We phrase the inference task as an integer linear program (ILP) following the approach developed in Roth and Yih (2004) . Integer linear programs were used by to add general constraints for inference with conditional random fields. ILPs have since been used successfully in many NLP applications involving complex structures - Punyakanok et al. (2008) for semantic role labeling, Riedel and Clarke (2006) and Martins et al. (2009) for dependency parsing and several others 6 .", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 119, |
| "text": "Roth and Yih (2004)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 327, |
| "end": 351, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 380, |
| "end": 404, |
| "text": "Riedel and Clarke (2006)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 409, |
| "end": 430, |
| "text": "Martins et al. (2009)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let v C i,a be the Boolean indicator variable that denotes that the i th argument candidate for a predicate is assigned a label a and let \u0398 C i,a represent the score assigned by the argument classifier for this decision. Similarly, let v I i denote the identifier decision for the i th argument candidate of the predicate and \u0398 I i denote its identifier score. Then, the objective of inference is to maximize the total score of the assignment", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "max v C ,v I i,a \u0398 C i,a v C i,a + i \u0398 I i v I i", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Here, v C and v I denote all the argument classifier and identifier variables respectively. This maximization is subject to the constraints described above, which can be transformed to linear (in)equalities. We denote these constraints as C SRL . In addition to C SRL which were defined by Punyakanok et al. (2008) , we also have the constraints linking the predictions of the identifier and classifier:", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 314, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "v C v,i,\u2205 + v I v,i = 1; \u2200v, i.", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Inference in our baseline SRL system is, thus, the maximization of the objective defined in (1) subject to constraints C SRL , the identifier-classifier constraints defined in (2) and the restriction of the variables to take values in {0, 1}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
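The inference just described can be sketched in miniature. The following pure-Python brute force stands in for the ILP solver used in the paper; the label set, the "NULL" name for the empty label ∅, the scores, and the reduced constraint set (only core-argument uniqueness plus the identifier-classifier link of Eq. (2)) are illustrative assumptions, not the paper's full C_SRL.

```python
from itertools import product

# Illustrative label set; "NULL" stands in for the empty label (an assumption
# of this sketch, not the paper's notation).
LABELS = ["A0", "A1", "AM-TMP", "NULL"]

def joint_srl_inference(cls_scores, id_scores):
    """Brute-force stand-in for the ILP of Eqs. (1)-(2) for one predicate.

    cls_scores: per-candidate dicts mapping a label a to the classifier
    score Theta^C_{i,a}; id_scores: identifier scores Theta^I_i.
    Constraints kept here: no duplicate core arguments, and the identifier
    fires iff the classifier label is not NULL (Eq. (2))."""
    n = len(id_scores)
    best, best_score = None, float("-inf")
    for labels in product(LABELS, repeat=n):
        # no duplicate core arguments (A0, A1)
        if any(sum(1 for a in labels if a == c) > 1 for c in ("A0", "A1")):
            continue
        # Eq. (2) forces the identifier decision: v^I_i = 1 - v^C_{i,NULL}
        ids = [0 if a == "NULL" else 1 for a in labels]
        score = sum(cls_scores[i][labels[i]] for i in range(n))
        score += sum(th * d for th, d in zip(id_scores, ids))
        if score > best_score:
            best, best_score = (tuple(labels), ids), score
    return best, best_score
```

When two candidates both locally prefer A0, the uniqueness constraint forces the second-best coherent assignment, which is exactly the kind of arbitration the global inference step performs.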
| { |
| "text": "To train the classifiers, we used parse trees from the Charniak and Johnson (2005) parser with the same feature representation as in the original system. We trained the classifiers on the standard Propbank training set using the one-vs-all extension of the average Perceptron algorithm. As with the preposition roles, we implemented our system using Learning Based Java of Rizzolo and Roth (2010) . We normalized all classifier scores using the softmax function. Compared to the 76.29% F1 score reported by Punyakanok et al. (2008) using single parse tree predictions from the parser, our system obtained 76.22% F1 score on section 23 of the Penn Treebank.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 82, |
| "text": "Charniak and Johnson (2005)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 373, |
| "end": 396, |
| "text": "Rizzolo and Roth (2010)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 507, |
| "end": 531, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Verb SRL", |
| "sec_num": "3.2" |
| }, |
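The softmax normalization of classifier scores mentioned above can be written directly; this is a generic sketch of the standard function, not code from the paper's system.

```python
import math

def softmax(scores):
    """Normalize raw classifier scores into a probability distribution.

    Subtracting the maximum score before exponentiating is the usual
    numerical-stability trick; it does not change the result."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Normalizing this way makes the scores of independently trained components comparable before they are combined in a single inference objective.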
| { |
| "text": "We now introduce our model that captures the needs identified in Section 2. The approach we develop in this paper follows the one proposed by Roth and Yih (2004) of training individual models and combining them at inference time. Our joint model is a Constrained Conditional Model (See Chang et al. 2011), which allows us to build upon existing learned models using declarative constraints. We represent our component inference problems as integer linear program instances. As we saw in Section 3.2, the inference for SRL is instantiated as an ILP problem. The problem of predicting preposition roles can be easily transformed into an ILP instance. Let v R p,r denote the decision variable that encodes the prediction that the preposition p is assigned a role r and let \u0398 R p,r denote its score. Let v R denote all the role variables for a sentence. Then role prediction is equivalent to the following maximization problem:", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 161, |
| "text": "Roth and Yih (2004)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "max v R p,r \u0398 R p,r \u2022 v R p,r", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "subj. to", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "r v R p,r = 1, \u2200p (4) v R p,r \u2208 {0, 1}, \u2200p, r.", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In general, let p denote a linguistic structure prediction task of interest and let P denote all such tasks. Let Z p denote the set of labels that the parts of the structure associated with phenomenon p can take. For example, for the SRL argument classification component, the parts of the structure are all the candidates that need to be labeled for a given sentence and the set Z p is the set of all argument labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For each phenomenon p \u2208 P, we use v p to denote its set of inference variables for a given sentence. Each inference variable v p Z,y \u2208 v p corresponds to the prediction that the part y has the label Z in the final structure. Each variable is associated with a score \u0398 p Z,y that is obtained from a learned score predictor. Let C p denote the structural constraints that are \"local\" to the phenomenon. Thus, for verb SRL, these would be the constraints defined in the previous section, and for preposition role, the only local constraint would be the constraint (4) defined above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The independent inference problem for the phenomenon p is the following integer program:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "max v p Z\u2208Z p v p v p Z,y \u2022 \u0398 p Z,y ,", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "subj. to", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "C p (v p ),", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "v p Z,y \u2208 {0, 1}, \u2200v p Z,y . (8)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As a technical point, this defines one inference problem per sentence, rather than per predicate as in the verb SRL system of Punyakanok et al. (2008) . This simple extension enabled Surdeanu et al. (2007) to study the impact of incorporating crosspredicate constraints for verb SRL. In this work, this extension allows us to incorporate cross-phenomena inference.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 150, |
| "text": "Punyakanok et al. (2008)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 183, |
| "end": 205, |
| "text": "Surdeanu et al. (2007)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Joint Model for Verbs and Prepositions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We consider the problem of jointly predicting several phenomena incorporating linguistic knowledge that enforce consistency between the output labels. Suppose p 1 and p 2 are two phenomena. If z p 1 1 is a label associated with the former and z p 2 1 , z p 2 2 , \u2022 \u2022 \u2022 are labels associated with the latter, we consider constraints of the form", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "z p 1 1 \u2192 z p 2 1 \u2228 z p 2 2 \u2228 \u2022 \u2022 \u2022 \u2228 z p 2 n", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We expand this language of constraints by allowing the specification of pre-conditions for a constraint to apply. This allows us to enforce constraints of the form \"If an argument that starts with the preposition 'at' is labeled AM-TMP, then the preposition can be labeled either NUMERIC/LEVEL or TEMPO-RAL.\" This constraint is universally quantified for all arguments that satisfy the precondition of starting with the preposition at. Given a first-order constraint in this form and an input sentence, suppose the inference variable", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "v p 1 1 is a grounding of z p 1 1 and v p 2 1 , v p 2 2 , \u2022 \u2022", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2022 are groundings of the right hand labels such that the preconditions are satisfied, then the constraint can be phrased as the following linear inequality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u2212v p 1 1 + i v p 2 i \u2265 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In the context of the preposition role and verb SRL, we consider constraints between labels for a preposition and SRL argument candidates that begin with that preposition. This restriction forms the precondition for all the joint constraints considered in this paper. Since the joint constraints involve only the labels, they can be derived either manually from the definition of the tasks or using statistical relation learning techniques. In addition to mining constraints of the form (9), we also use manually specified joint constraints. The constraints used in our experiments are described further in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In general, let J denote a set of pairwise joint constraints. The joint inference problem can be phrased as that of maximizing the score of the assignment subject to the structural constraints of each phenomenon (C p ) and the joint linguistic constraints (J). However, since, the individual tasks were not trained on the same datasets, the scoring functions need not be in the same numeric scale. In our model, each label Z for a phenomenon p is associated with a scoring function \u0398 p Z,y for a part y. To scale the scoring functions, we associate each label with a parameter \u03bb p Z . This gives us the following integer linear program for joint inference:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "max v p\u2208P Z\u2208Z p \u03bb p Z y p v p Z,y \u2022 \u0398 p Z,y , (10) subj. to C p (v p ), \u2200p \u2208 P (11) J(v),", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "v p Z,y \u2208 {0, 1}, \u2200v p Z,y .", |
| "eq_num": "(13)" |
| } |
| ], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Here, v is the vector of inference variables which is obtained by stacking all the inference variables of each phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For our experiments, we use a cutting plane solver to solve the integer linear program as in Riedel (2009) . This allows us to solve the inference problem without explicitly having to instantiate all the joint constraints.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 106, |
| "text": "Riedel (2009)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Joint inference", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Given the individual models and the constraints, we only need to learn the scaling parameters \u03bb p Z . Note that the number of scaling parameters is the total number of labels. When we jointly predict verb SRL and preposition role, we have 22 preposition roles (from table 3), one SRL identifier label and 54 SRL argument classifier labels. Thus we learn only 77 parameters for our joint model. This means that we only need a very small dataset that is jointly annotated with all the phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning to rescale the individual systems", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We use the Structure Perceptron of Collins (2002) to learn the scaling weights. Note that for learning the scaling weights, we need each label to be associated with a real-valued feature. Given an assignment of the inference variables v, the value of the feature corresponding to the label Z of task p is given by the sum of scores of all parts in the structure for p that have been assigned this label, i.e.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 49, |
| "text": "Collins (2002)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning to rescale the individual systems", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "y p v p Z,y \u2022\u0398 p Z,y", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning to rescale the individual systems", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ". This feature is computed for the gold and the predicted structures and is used for updating the weights.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning to rescale the individual systems", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In this section, we describe our experimental setup and evaluate the performance of our approach. The research question addressed by the experiments is the following: Given independently trained systems for verb SRL and preposition roles, can their performance be improved using joint inference between the two tasks? To address this, we report the results of the following two experiments:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "1. First, we compare the joint system against the baseline systems and with pipelines in both directions. In this setting, both base systems are trained on the Penn Treebank data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "2. Second, we show that using joint inference can provide strong a performance gain even when the underlying systems are trained on different domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In all experiments, we report the F1 measure for the verb SRL performance using the CoNLL 2005 evaluation metric and the accuracy for the preposition role labeling task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For both the verb SRL and preposition roles, we used the first 500 sentences of section 2 of the Penn Treebank corpus to train our scaling parameters. For the first set of experiments, we trained our underlying systems on the rest of the available Penn Treebank training data for each task. For the adaptation experiment, we train the role classifier on the Se-mEval data (restricted to the same Treebank prepositions). In both cases, we report performance on section 23 of the Treebank.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We mined consistency constraints from the sections 2, 3 and 4 of the Treebank data. As mentioned in Section 4.1, we considered joint constraints relating preposition roles to verb argument candidates that start with the preposition. We identified the following types of constraints: (1) For each preposition, the set of invalid verb arguments and preposition roles. (2) For each preposition role, the set of allowed verb argument labels if the role occurred more than ten times in the data, and (3) For each verb argument, the set of allowed preposition roles, similarly with a support of ten. Note that, while the constraints were obtained from jointly labeled data, the constraints could be written down because they encode linguistic intuition about the labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The following is a constraint extracted from the data, which applies to the preposition with:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "srlarg(A2) \u2192 prep-role(ATTRIBUTE) \u2228 prep-role(CAUSE) \u2228 prep-role(INSTRUMENT) \u2228 prep-role(OBJECTOFVERB) \u2228 prep-role(PARTWHOLE) \u2228 prep-role(PARTICIPANT/ACCOMPAINER) \u2228 prep-role(PROFESSIONALASPECT).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "This constraint says that if any candidate that starts with with is labeled as an A2, then the preposition can be labeled only with one of the roles on the right hand side.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Some of the mined constraints have negated variables to enforce that a role or an argument label should not be allowed. These can be similarly converted to linear inequalities. See Rizzolo and Roth (2010) for a further discussion about converting logical expressions into linear constraints.", |
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 204, |
| "text": "Rizzolo and Roth (2010)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In addition to these constraints that were mined from data, we also enforce the following handwritten constraints: (1) If the role of a verb attached preposition is labeled TEMPORAL, then there should be a verb predicate for which this prepositional phrase is labeled AM-TMP. (2) For verb attached prepositions, if the preposition is labeled with one of ACTIVITY, ENDCONDITION, INSTRUMENT or PROFESSIONALASPECT, there should be at least one predicate for which the corresponding prepositional phrase is not labeled \u2205.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The conversion of the first constraint to a linear inequality is similar to the earlier cases. For each of the roles in the second constraint, let r denote a role variable that assigns the label to some preposition. Suppose there are n SRL candidates across all verb predicates begin with that preposition, and let s 1 , s 2 , \u2022 \u2022 \u2022 , s n denote the SRL variables that assign these candidates to the label \u2205. Then the second constraint corresponds to the following inequality:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "r + n i=1 s i \u2264 n", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Constraints", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "First, we compare our approach to the performance of the baseline independent systems and to pipelines in both directions in Table 4 . For one pipeline, we added the prediction of the baseline preposition role system as an additional feature to both the identifier and the argument classifier for argument candidates that start with a preposition. Similarly, for the second pipeline, we added the SRL predictions as features for prepositions that were the first word of an SRL argument. In all cases, we performed five-fold cross validation to train the classifiers.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 132, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The results show that both pipelines improve performance. This justifies the need for a joint system because the pipeline can improve only one of the tasks. The last line of the table shows that the joint inference system improves upon both the baselines. We achieve this improvement without retraining the underlying models, as done in the case of the pipelines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "On analyzing the output of the systems, we found that the SRL precision improved by 2.75% but the Table 4 : Performance of the joint system, compared to the individual systems and the pipelines. All performance measures are reported on Section 23 of the Penn Treebank. The verb SRL systems were trained on sections 2-21, while the preposition role classifiers were trained on sections 2-4. For the joint inference system, the scaling parameters were trained on the first 500 sentences of section 2, which were held out. All the improvements in this table are statistically significant at the 0.05 level.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 98, |
| "end": 105, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "recall decreased by 0.98%, contributing to the overall F1 improvement. The decrease in recall is due to the joint hard constraints that prohibit certain assignments to the variables which would have otherwise been possible. Note that, for a given sentence, even if the joint constraints affect only a few argument candidates directly, they can alter the labels of the other candidates via the \"local\" SRL constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Consider the following example of the system output which highlights the effect of the constraints.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "(6) Weatherford said market conditions led to the cancellation of the planned exchange.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The independent preposition role system incorrectly identifies the to as a LOCATION. The semantic role labeling component identifies the phrase to the cancellation of the planned exchange as the A2 of the verb led. One of the constraints mined from the data prohibits the label LOCATION for the preposition to if the argument it starts is labeled A2. This forces the system to change the preposition label to the correct one, namely ENDCONDITION. Both the independent and the joint systems also label the preposition of as OBJECTOFVERB, which indicates that the phrase the planned exchange is the object of the deverbal noun cancellation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results of joint learning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Our second experiment compares the performance of the preposition role classifier that has been trained on the SemEval dataset with and without joint constraints. Note that Table 2 in Section 3, shows the drop in performance when applying the preposition sense classifier. We see that the SemEvaltrained preposition role classifier (baseline in the table) achieves an accuracy of 53.29% when tested on the Treebank dataset. Using this classifier jointly with the verb SRL classifier via joint constraints gets an improvement of almost 3 percent in accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 173, |
| "end": 180, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Effect of constraints on adaptation", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Preposition Role (Accuracy) Baseline 53.29 Joint inference 56.22 Table 5 : Performance of the SemEval-trained preposition role classifier, when tested on the Treebank dataset with and without joint inference with the verb SRL system. The improvement, in this case is statistically significant at the 0.01 level using the sign test.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Setting", |
| "sec_num": null |
| }, |
| { |
| "text": "The primary reason for this improvement, even without re-training the classifier, is that the constraints are defined using only the labels of the systems. This avoids the standard adaptation problems of differing vocabularies and unseen features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setting", |
| "sec_num": null |
| }, |
| { |
| "text": "6 Discussion and Related work Roth and Yih (2004) formulated the problem of extracting entities and relations as an integer linear program, allowing them to use global structural constraints at inference time even though the component classifiers were trained independently. In this paper, we use this idea to combine classifiers that were trained for two different tasks on different datasets using constraints to encode linguistic knowledge.", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 49, |
| "text": "Roth and Yih (2004)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setting", |
| "sec_num": null |
| }, |
| { |
| "text": "In the recent years, we have seen several joint models that combine two or more NLP tasks . Andrew et al. (2004) studied verb subcategorization and sense disambiguation of verbs by treating it as a problem of learning with partially labeled structures and proposed to use EM to train the joint model. Finkel and Manning (2009) modeled the task of named entity recognition together with parsing. Meza-Ruiz and Riedel (2009) modeled verb SRL, predicate identification and predicate sense recognition jointly using Markov Logic. Henderson et al. (2008) was designed for jointly learning to predict syntactic and semantic dependencies. Dahlmeier et al. (2009) addressed the problem of jointly learning verb SRL and preposition sense using the Penn Treebank annotation that was introduced in that work. The key difference between these and the model presented in this paper lies in the simplicity of our model and its easy extensibility because it leverages existing trained systems. Moreover, our model has the advantage that the complexity of the joint parameters is small, hence does not require a large jointly labeled dataset to train the scaling parameters.", |
| "cite_spans": [ |
| { |
| "start": 301, |
| "end": 326, |
| "text": "Finkel and Manning (2009)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 409, |
| "end": 422, |
| "text": "Riedel (2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 526, |
| "end": 549, |
| "text": "Henderson et al. (2008)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 632, |
| "end": 655, |
| "text": "Dahlmeier et al. (2009)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setting", |
| "sec_num": null |
| }, |
| { |
| "text": "Our approach is conceptually similar to that of Rush et al. (2010) , which combined separately trained models by enforcing agreement using global inference and solving its linear programming relaxation. They applied this idea to jointly predict dependency and phrase structure parse trees and on the task of predicting full parses together with part-ofspeech tags. The main difference in our approach is that we treat the scaling problem as a separate learning problem in itself and train a joint model specifically for re-scaling the output of the trained systems.", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 66, |
| "text": "Rush et al. (2010)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setting", |
| "sec_num": null |
| }, |
| { |
| "text": "The SRL combination system of Surdeanu et al. (2007) studied the combination of three different SRL systems using constraints and also by training secondary scoring functions over the individual systems. Their approach is similar to the one presented in this paper in that, unlike standard reranking, as in Collins (2000) , we entertain all possible solutions during inference, while reranking approaches train a discriminative scorer for the top-K solutions of an underlying system. Unlike the SRL combination system, however, our approach spans multiple phenomena. Moreover, in contrast to their re-scoring approaches, we do not define joint features drawn from the predictions of the underlying components to define our global model. We consider the tasks verb SRL and preposition roles and combine their predictions to provide a richer semantic annotation of text. This approach can be easily extended to include systems that predict structures for other linguistic phenomena because we do not retrain the underlying systems. The semantic relations can be enriched by incorporating more linguistic phenomena such as nominal SRL, defined by the Nombank annotation scheme of Meyers et al. (2004) , the preposition function analysis of O'Hara and Wiebe (2009) and noun compound analysis as defined by Girju (2007) and Girju et al. (2009) and others. This presents an exciting direction for future work.", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 52, |
| "text": "Surdeanu et al. (2007)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 307, |
| "end": 321, |
| "text": "Collins (2000)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1177, |
| "end": 1197, |
| "text": "Meyers et al. (2004)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1248, |
| "end": 1260, |
| "text": "Wiebe (2009)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 1302, |
| "end": 1314, |
| "text": "Girju (2007)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1319, |
| "end": 1338, |
| "text": "Girju et al. (2009)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Setting", |
| "sec_num": null |
| }, |
| { |
| "text": "This paper presents a strategy for extending semantic role labeling without the need for extensive retraining or data annotation. While standard semantic role labeling focuses on verb and nominal relations, sentences can express relations using other lexical items also. Moreover, the different relations interact with each other and constrain the possible structures that they can take. We use this intuition to define a joint model for inference. We instantiate our model using verb semantic role labeling and preposition role labeling and show that, using linguistic constraints between the tasks and minimal joint learning, we can improve the performance of both tasks. The main advantage of our approach is that we can use existing trained models without re-training them, thus making it easy to extend this work to include other linguistic phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "This dataset does not annotate all prepositions and restricts itself mainly to prepositions that start a Propbank argument. The data is available at http://nlp.comp.nus. edu.sg/corpora3 Learning Based Java can be downloaded from http:// cogcomp.cs.illinois.edu.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The mapping from the preposition senses to the roles defines a new dataset and is available for download at http: //cogcomp.cs.illinois.edu/.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The verb SRL system be downloaded from http:// cogcomp.cs.illinois.edu/page/software", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The primary advantage of using ILP for inference is that this representation enables us to add arbitrary coherence constraints between the phenomena. If the underlying optimization problem itself is tractable, then so is the corresponding integer program. However, other approaches to solve the constrained maximization problem can also be used for inference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The authors thank the members of the Cognitive Computation Group at the University of Illinois for insightful discussions and the anonymous reviewers for valuable feedback.This research is supported by the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, ndings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reect the view of the DARPA, AFRL, or the US government.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Verb sense and subcategorization: Using joint inference to improve performance on complementary tasks", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Andrew", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Grenager", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Andrew, T. Grenager, and C. D. Manning. 2004. Verb sense and subcategorization: Using joint inference to improve performance on complementary tasks. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Introduction to the CoNLL-2004 shared tasks: Semantic role labeling", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "X. Carreras and L. M\u00e0rquez. 2004. Introduction to the CoNLL-2004 shared tasks: Semantic role labeling. In Proceedings of CoNLL-2004.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Introduction to the CoNLL-2005 shared task: Semantic role labeling", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of CoNLL-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "X. Carreras and L. M\u00e0rquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of CoNLL-2005.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Structured learning with constrained conditional models", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Chang, L. Ratinov, and D. Roth. 2011. Structured learning with constrained conditional models. Machine Learning (To appear).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Coarse-to-fine n-best parsing and maxent discriminative reranking", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Charniak and M. Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In ACL.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Discriminative reranking for natural language parsing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Collins. 2000. Discriminative reranking for natural language parsing. In ICML.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Joint learning of preposition senses and semantic roles of prepositional phrases", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Dahlmeier", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "T" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Schultz", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Dahlmeier, H. T. Ng, and T. Schultz. 2009. Joint learning of preposition senses and semantic roles of prepositional phrases. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Joint parsing and named entity recognition", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. R. Finkel and C. D. Manning. 2009. Joint parsing and named entity recognition. In NAACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Classification of semantic relations between nominals. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Girju", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Nastase", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Szpakowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Girju, P. Nakov, V. Nastase, S. Szpakowicz, P. Turney, and D. Yuret. 2009. Classification of semantic relations between nominals. Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Improving the interpretation of noun phrases with cross-linguistic information", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Girju", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Girju. 2007. Improving the interpretation of noun phrases with cross-linguistic information. In ACL.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "An efficient algorithm for easy-first non-directional dependency parsing", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Elhadad", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Goldberg and M. Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In NAACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A latent variable model of synchronous parsing for syntactic and semantic dependencies", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Henderson", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Musillo", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Titov", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Henderson, P. Merlo, G. Musillo, and I. Titov. 2008. A latent variable model of synchronous parsing for syntactic and semantic dependencies. In CoNLL.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "What's in a preposition? dimensions of sense disambiguation for an interesting word class", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Tratz", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Coling 2010: Posters", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Hovy, S. Tratz, and E. Hovy. 2010. What's in a preposition? dimensions of sense disambiguation for an interesting word class. In Coling 2010: Posters.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Simple semisupervised dependency parsing", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Koo", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Koo, X. Carreras, and M. Collins. 2008. Simple semi-supervised dependency parsing. In ACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The preposition project", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Litkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Hargraves", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Second ACL-SIGSEM Workshop on the Linguistic Dimensions of Prepositions and their Use in Computational Linguistics Formalisms and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Litkowski and O. Hargraves. 2005. The preposition project. In Proceedings of the Second ACL-SIGSEM Workshop on the Linguistic Dimensions of Prepositions and their Use in Computational Linguistics Formalisms and Applications.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Semeval-2007 task 06: Word-sense disambiguation of prepositions", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Litkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Hargraves", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "SemEval-2007: 4th International Workshop on Semantic Evaluations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Litkowski and O. Hargraves. 2007. Semeval-2007 task 06: Word-sense disambiguation of prepositions. In SemEval-2007: 4th International Workshop on Semantic Evaluations.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Concise integer linear programming formulations for dependency parsing", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Martins", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Xing", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Martins, N. A. Smith, and E. Xing. 2009. Concise integer linear programming formulations for dependency parsing. In ACL.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The NomBank project: An interim report", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Meyers", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Reeves", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Macleod", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Szekely", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Zielinska", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Meyers, R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The NomBank project: An interim report. In HLT-NAACL 2004 Workshop: Frontiers in Corpus Annotation.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Jointly identifying predicates, arguments and senses using markov logic", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Meza-Ruiz", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Meza-Ruiz and S. Riedel. 2009. Jointly identifying predicates, arguments and senses using Markov logic. In NAACL.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Exploiting semantic role resources for preposition disambiguation", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "O'hara", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. O'Hara and J. Wiebe. 2009. Exploiting semantic role resources for preposition disambiguation. Computational Linguistics, 35(2), June.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "The proposition bank: An annotated corpus of semantic roles", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Kingsbury", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Palmer, P. Kingsbury, and D. Gildea. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1).", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Discovering word senses from text", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "The Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel and D. Lin. 2002. Discovering word senses from text. In The Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Learning and inference over constrained output", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Punyakanok", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zimak", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Punyakanok, D. Roth, W. Yih, and D. Zimak. 2005. Learning and inference over constrained output. In IJCAI.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "The importance of syntactic parsing and inference in semantic role labeling", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Punyakanok", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Punyakanok, D. Roth, and W. Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Design challenges and misconceptions in named entity recognition", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Ratinov and D. Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Incremental integer linear programming for non-projective dependency parsing", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Riedel and J. Clarke. 2006. Incremental integer linear programming for non-projective dependency parsing. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Cutting plane MAP inference for Markov logic", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "SRL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Riedel. 2009. Cutting plane MAP inference for Markov logic. In SRL 2009.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Learning Based Java for rapid development of NLP systems", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Rizzolo", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Rizzolo and D. Roth. 2010. Learning Based Java for rapid development of NLP systems. In Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A linear programming formulation for global inference in natural language tasks", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Roth and W. Yih. 2004. A linear programming formulation for global inference in natural language tasks. In CoNLL.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Integer linear programming inference for conditional random fields", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Yih", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Roth and W. Yih. 2005. Integer linear programming inference for conditional random fields. In ICML.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "On dual decomposition and linear programming relaxations for natural language processing", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Sontag", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. M. Rush, D. Sontag, M. Collins, and T. Jaakkola. 2010. On dual decomposition and linear programming relaxations for natural language processing. In EMNLP. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Combination strategies for semantic role labeling", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Carreras", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "R" |
| ], |
| "last": "Comas", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "J. Artif. Int. Res", |
| "volume": "29", |
| "issue": "", |
| "pages": "105--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Surdeanu, L. M\u00e0rquez, X. Carreras, and P. R. Comas. 2007. Combination strategies for semantic role labeling. J. Artif. Int. Res., 29:105-151, June.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A global joint model for semantic role labeling", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Computational Linguistics", |
| "volume": "34", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Toutanova, A. Haghighi, and C. D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34(2).", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Disambiguation of preposition sense using linguistically motivated features", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Tratz", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "NAACL: Student Research Workshop and Doctoral Consortium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Tratz and D. Hovy. 2009. Disambiguation of preposition sense using linguistically motivated features. In NAACL: Student Research Workshop and Doctoral Consortium.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Calibrating features for semantic role labeling", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Xue and M. Palmer. 2004. Calibrating features for semantic role labeling. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "MELB-YB: Preposition Sense Disambiguation Using Rich Semantic Features", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Ye", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Ye and T. Baldwin. 2007. MELB-YB: Preposition Sense Disambiguation Using Rich Semantic Features. In SemEval-2007.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF1": { |
| "text": "Preposition sense performance. This table reports accuracy of sense prediction on the prepositions that have been annotated for the Penn Treebank dataset.", |
| "content": "<table><tr><td>Role</td><td>Train</td><td>Test</td></tr><tr><td>ACTIVITY</td><td>57</td><td>23</td></tr><tr><td>ATTRIBUTE</td><td>119</td><td>51</td></tr><tr><td>BENEFICIARY</td><td>78</td><td>17</td></tr><tr><td>CAUSE</td><td>255</td><td>116</td></tr><tr><td>CONCOMITANT</td><td>156</td><td>74</td></tr><tr><td>ENDCONDITION</td><td>88</td><td>66</td></tr><tr><td>EXPERIENCER</td><td>88</td><td>42</td></tr><tr><td>INSTRUMENT</td><td>37</td><td>19</td></tr><tr><td>LOCATION</td><td>1141</td><td>414</td></tr><tr><td>MEDIUMOFCOMMUNICATION</td><td>39</td><td>30</td></tr><tr><td>NUMERIC/LEVEL</td><td>301</td><td>174</td></tr><tr><td>OBJECTOFVERB</td><td>365</td><td>112</td></tr><tr><td>OTHER</td><td>65</td><td>49</td></tr><tr><td>PARTWHOLE</td><td>485</td><td>133</td></tr><tr><td>PARTICIPANT/ACCOMPANIER</td><td>122</td><td>58</td></tr><tr><td>PHYSICALSUPPORT</td><td>32</td><td>18</td></tr><tr><td>POSSESSOR</td><td>195</td><td>56</td></tr><tr><td>PROFESSIONALASPECT</td><td>24</td><td>10</td></tr><tr><td>RECIPIENT</td><td>150</td><td>70</td></tr><tr><td>SPECIES</td><td>240</td><td>58</td></tr><tr><td>TEMPORAL</td><td>582</td><td>270</td></tr><tr><td>TOPIC</td><td>148</td><td>54</td></tr></table>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "text": "Preposition role data statistics for the Penn Treebank preposition dataset.", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |