| { |
| "paper_id": "C08-1002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:25:36.629379Z" |
| }, |
| "title": "A Supervised Algorithm for Verb Disambiguation into VerbNet Classes", |
| "authors": [ |
| { |
| "first": "Omri", |
| "middle": [], |
| "last": "Abend", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "ICNC Hebrew University of Jerusalem", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "VerbNet (VN) is a major large-scale English verb lexicon. Mapping verb instances to their VN classes has been proven useful for several NLP tasks. However, verbs are polysemous with respect to their VN classes. We introduce a novel supervised learning model for mapping verb instances to VN classes, using rich syntactic features and class membership constraints. We evaluate the algorithm in both in-domain and corpus adaptation scenarios. In both cases, we use the manually tagged Semlink WSJ corpus as training data. For indomain (testing on Semlink WSJ data), we achieve 95.9% accuracy, 35.1% error reduction (ER) over a strong baseline. For adaptation, we test on the GENIA corpus and achieve 72.4% accuracy with 10.7% ER. This is the first large-scale experimentation with automatic algorithms for this task.", |
| "pdf_parse": { |
| "paper_id": "C08-1002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "VerbNet (VN) is a major large-scale English verb lexicon. Mapping verb instances to their VN classes has been proven useful for several NLP tasks. However, verbs are polysemous with respect to their VN classes. We introduce a novel supervised learning model for mapping verb instances to VN classes, using rich syntactic features and class membership constraints. We evaluate the algorithm in both in-domain and corpus adaptation scenarios. In both cases, we use the manually tagged Semlink WSJ corpus as training data. For indomain (testing on Semlink WSJ data), we achieve 95.9% accuracy, 35.1% error reduction (ER) over a strong baseline. For adaptation, we test on the GENIA corpus and achieve 72.4% accuracy with 10.7% ER. This is the first large-scale experimentation with automatic algorithms for this task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The organization of verbs into classes whose members exhibit similar syntactic and semantic behavior has been discussed extensively in the linguistics literature (see e.g. (Levin and Rappaport Hovav, 2005; Levin, 1993) ). Such an organization helps in avoiding lexicon representation redundancy and enables generalizations across similar verbs. It can also be of great practical use, e.g. in compensating NLP statistical models for data sparseness. Indeed, Levin's seminal work had motivated c 2008.", |
| "cite_spans": [ |
| { |
| "start": 172, |
| "end": 205, |
| "text": "(Levin and Rappaport Hovav, 2005;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 206, |
| "end": 218, |
| "text": "Levin, 1993)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved. much research aimed at automatic discovery of verb classes (see Section 2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "VerbNet (VN) (Kipper et al., 2000; Kipper-Schuler, 2005 ) is a large scale, publicly available domain independent verb lexicon that builds on Levin classes and extends them with new verbs, new classes, and additional information such as semantic roles and selectional restrictions. VN classes were proven beneficial for Semantic Role Labeling (SRL) (Swier and Stevenson, 2005) , Semantic Parsing (Shi and Mihalcea, 2005) and building conceptual graphs (Hensman and Dunnion, 2004 ). Levin-inspired classes have been used in several NLP tasks, such as Machine Translation (Dorr, 1997) and Document Classification (Klavans and Kan, 1998) .", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 34, |
| "text": "(Kipper et al., 2000;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 35, |
| "end": 55, |
| "text": "Kipper-Schuler, 2005", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 349, |
| "end": 376, |
| "text": "(Swier and Stevenson, 2005)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 396, |
| "end": 420, |
| "text": "(Shi and Mihalcea, 2005)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 452, |
| "end": 478, |
| "text": "(Hensman and Dunnion, 2004", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 570, |
| "end": 582, |
| "text": "(Dorr, 1997)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 611, |
| "end": 634, |
| "text": "(Klavans and Kan, 1998)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many applications that use VN need to map verb instances onto their VN classes. However, verbs are polysemous with respect to VN classes. Semlink is a dataset that maps each verb instance in the WSJ Penn Treebank to its VN class. The mapping has been created using a combination of automatic and manual methods. have used Semlink to improve SRL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we present the first large-scale experimentation with a supervised machine learning classification algorithm for disambiguating verb instances to their VN classes. We use rich syntactic features extracted from a treebank-style parse tree, and utilize a learning algorithm capable of imposing class membership constraints, thus taking advantage of the nature of our task. We use Semlink as the training set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We evaluate our algorithm in both in-domain and corpus adaptation scenarios. In the former, we test on the WSJ (using Semlink again), obtaining 95.9% accuracy with 35.1% error reduction (ER) over a strong baseline (most frequent class) when using a modern statistical parser. In the corpus adaptation scenario, we disambiguate verbs in sentences taken from outside the training domain. Since the manual annotation of new corpora is costly, and since VN is designed to be a domain independent resource, adaptation results are important to the usability in NLP in practice. We manually annotated 400 sentences from GE-NIA (Kim et al., 2003) , a medical domain corpus 1 . Testing on these, we achieved 72.4% accuracy with 10.7% ER. Our adaptation scenario is complete in the sense that the parser we use was also trained on a different corpus (WSJ). We also report experiments done using gold-standard (manually created) parses.", |
| "cite_spans": [ |
| { |
| "start": 620, |
| "end": 638, |
| "text": "(Kim et al., 2003)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The most relevant previous works addressing verb instance class classification are (Lapata and Brew, 2004; Li and Brew, 2007; Girju et al., 2005) . The former two do not use VerbNet and their experiments were narrower than ours, so we cannot compare to their results. The latter mapped to VN, but used a preliminary highly restricted setup where most instances were monosemous. For completeness, we compared our method to theirs 2 , achieving similar results.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 106, |
| "text": "(Lapata and Brew, 2004;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 107, |
| "end": 125, |
| "text": "Li and Brew, 2007;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 126, |
| "end": 145, |
| "text": "Girju et al., 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We review related work in Section 2, and discuss the task in Section 3. Section 4 introduces the model, Section 5 describes the experimental setup, and Section 6 presents our results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "VerbNet. VN is a major electronic English verb lexicon. It is organized in a hierarchical structure of classes and sub-classes, each sub-class inheriting the full characterization of its super-class. VN is built on a refinement of the Levin classes, the intersective Levin classes (Dang et al., 1998) , aimed at achieving more coherent classes both semantically and syntactically. VN was also substantially extended (Kipper et al., 2006) using the Levin classes extension proposed in (Korhonen and Briscoe, 2004) . VN today contains 3626 verb lemmas (forms), organized in 237 main classes having 4991 verb types (we refer to a lemma with an ascribed class as a type). Of the 3626 lemmas, 912 are polysemous (i.e., appear in more than a single class). VN's significant coverage of the English verb lexicon is demonstrated by the 75.5% coverage of VN classes over PropBank 3 instances . Each class contains rich semantic information, including semantic roles of the arguments augmented with selectional restrictions, and possible subcategorization frames consisting of a syntactic description and semantic predicates with temporal information. VN thematic roles are relatively coarse, vs. the situation-specific FrameNet role system or the verb-specific PropBank role system, enabling generalizations across a wide semantic scope. Swier and Stevenson (2005) and used VN for SRL.", |
| "cite_spans": [ |
| { |
| "start": 281, |
| "end": 300, |
| "text": "(Dang et al., 1998)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 416, |
| "end": 437, |
| "text": "(Kipper et al., 2006)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 484, |
| "end": 512, |
| "text": "(Korhonen and Briscoe, 2004)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1329, |
| "end": 1355, |
| "text": "Swier and Stevenson (2005)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Verb type classification. Quite a few works have addressed the issue of verb type classification and in particular classification to 'Levin inspired' classes (e.g., (Schulte im Walde, 2000; Merlo and Stevenson, 2001) ). Such work is not comparable to ours, as it deals with verb type (sense) rather than verb token (instance) classification.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 189, |
| "text": "'Levin inspired' classes (e.g., (Schulte im Walde, 2000;", |
| "ref_id": null |
| }, |
| { |
| "start": 190, |
| "end": 216, |
| "text": "Merlo and Stevenson, 2001)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Verb token classification. Lapata and Brew (2004) dealt with classification to Levin classes of polysemous verbs. They established a prior from the BNC in an unsupervised manner. They also showed that this prior helps in the training of a naive Bayes classifier employed to distinguish between possible verb classes of a given verb in a given frame (when the ambiguity is not solved by knowing the frame alone). Li and Brew (2007) extended this model by proposing a method to train the class disambiguator without using hand-tagged data. While these papers have good results, their experimental setup was rather narrow and used only at most 67 polysemous verbs (in 4 frames). VN includes 912 polysemous verbs, of which 695 appeared in our in-domain experiments. Girju et al. (2005) performed the only previous work we are aware of that addresses the problem of token level verb disambiguation into VN classes. They treated the task as a supervised learning problem, proposing features based on a POS tagger, a Chunker and a named entity classifier. In order to create the data 4 , they used a mapping between Propbank rolesets and VN classes, and took the instances in WSJ sections 15-18,20,21 that were annotated by Propbank and for which the roleset determines the VN class uniquely. This resulted in most instances being in fact monosemous. Their experiment was conducted in a WSJ in-domain scenario, and in a much narrower scope than in this paper. They had 870 (39 polysemous) unique verb lemmas, compared to 2091 (695 polysemous) in our in-domain scenario. They did not test their model in an adaptation scenario. The scope and difficulty contrast between our setup and theirs are demonstrated by the large differences in the number of instances and in the percentage of polysemous instances: 972/12431 (7.8%) in theirs, compared to 49571/84749 (58.5%) in our in-domain scenario (training+test). We compared our method to theirs for completeness and achieved similar results.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 49, |
| "text": "Lapata and Brew (2004)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 412, |
| "end": 430, |
| "text": "Li and Brew (2007)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 762, |
| "end": 781, |
| "text": "Girju et al. (2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Semlink. The Semlink project aims to create a mapping of PropBank, FrameNet (Baker et al., 1998) , Word-Net (henceforth WN) and VN to one another, thus allowing these resources to synergize. In addition, the project includes the most extensive token mapping of verbs to their VN classes available today. It covers all verbs in the WSJ sections of the Penn Treebank within VN coverage (out of 113K verb instances, 97K have lemmas present in VN).", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 96, |
| "text": "(Baker et al., 1998)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Polysemy is a major issue in NLP. Verbs are not an exception, resulting in a single verb form (lemma) appearing in more than a single class. This polysemy is also present in the original Levin classification, where polysemous classes account for more than 48% of the BNC verb instances (Lapata and Brew, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 286, |
| "end": 309, |
| "text": "(Lapata and Brew, 2004)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Given a verb instance whose lemma is within the coverage of VN, given the sentence in which it appears, given a parse tree of this sentence (see below), and given the VN resource, our task is to classify the verb instance to its correct VN class. There are currently 237 possible classes 5 . Each verb has only a few possible classes (no more than 10, but only about 2.5 on the average over the polysemous verbs). Depending on the application, the parse tree for the sentence may be either a gold standard parse or a parse tree generated by a parser. We have experimented with both options.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The task can be viewed in two complementary ways: per-class and per-verb type. The perclass perspective takes into consideration the small number of classes relative to the number of types 6 . A classifier may gather valuable information for all members of a certain VN class, without seeing all of its members in the training data. From this perspective the task resembles POS tagging. In both tasks there are many dozens (or more) of possible labels, while each word has only a small subset of possible labels. Different words may receive the same label.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The per-verb perspective takes into consideration the special properties of every verb type. Even the best lexicons necessarily ignore certain idiosyncratic characteristics of the verb when assigning it to a certain class. If a verb appears many times in the corpus, it is possible to estimate its parameters to a reasonable reliability, and thus to use its specific distributional properties for disambiguation. Viewed in this manner, the task resembles a word sense disambiguation (WSD) task: each verb has a small distinct set of senses (types), and no two different verbs have the same sense.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The similarity to WSD suggests that our task might be solved by WN sense disambiguation followed by a mapping from WN to VN. However, good results are not to be expected, due to the medium quality of today's WSD algorithms and because the mapping between WN and VN is both incomplete and many-to-many 7 . Even for a perfect WN WSD algorithm, the resulting WN synset may not be mapped to VN at all or may be mapped onto multiple VN classes. We experimented with this method and obtained results below the MF baseline we used 8 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The above discussion does not rule out the possibility of obtaining reasonable results through applying a high quality WSD engine followed by a WN to VN mapping. However, there are much fewer VN classes than WN classes per verb. This may result in the WSD engine learning many distinctions that are not useful in this context, which may in turn jeopardize its performance with respect to our task. Moreover, a word sense may belong to a single verb only while a VN class contains many verbs. Consequently, the performance on a certain lemma may benefit from training instances of other lemmas.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Note that our task is not reducible to VN frame identification (a non-trivial task given the richness of the information used to define a frame in VN). Although the categorizing criterion for Levin's classification is the subset of frames the verb may appear in (equivalently, the diathesis alternations the verbal proposition may perform), knowing only the frame in which an instance appears does not suffice, as frames are shared among classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Nature of the Task", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As common in supervised learning models, we encode the verb instances into feature vectors and then apply a learning algorithm to induce a classifier. We first discuss the feature set and then the learning algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Features. Our feature set heavily relies on syntactic annotation. Dorr and Jones (1996) showed that perfect knowledge of the allowable syntactic frames for a verb allows 98% accuracy in type assignment to Levin classes. This motivates the encoding of the syntactic structure of the sentence as features, since we have no access to all frames, only to the one appearing in the sentence.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 87, |
| "text": "Dorr and Jones (1996)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Since some verbs may appear in the same syntactic frame in different VN classes, a model relying on the syntactic frame alone would not be able to disambiguate instances of these verbs when appearing in those frames. Hence our features include lexical context words. The parse tree enables us to use words that appear in specific syntactic slots rather than in a linear window around the verb. To this end, we use the head words of the neighboring constituents. The definition of the head of a constituent is given in (Collins, 1999) .", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 533, |
| "text": "(Collins, 1999)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our feature set is comprised of two parallel sets of features. The first contains features extracted from the parse tree and the verb's lemma as a standalone feature. In the second set, each feature is a conjunction of a feature from the first set with the verb's lemma. By doing so we created a general feature space shared by all verbs, and replications of it for each and every verb. This feature selection strategy was chosen in view of the two perspectives on the task (per-class and per-verb) discussed in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our first set of features encodes the verb's context as inferred from the sentence's parse tree ( ure 1). We attempt to encode both the syntactic frame, by encoding the tree structure, and the argument preferences, by encoding the head words of the arguments and their POS. The restriction on the verb's parent being the head constituent of its grandparent is done in order to focus on the correct verb in verb series such as 'intend to run'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The 3rd cell in the table makes use of a 'second head word' node, defined as follows. Consider a left sibling (right siblings are addressed analogously) M of the verb's node. Take the node H in the subtree of M where M 's head appears. H is a descendent of a node J which is a child of M . The 'second head word' node is J's sibling on the right. For example, in the sentence We went to school (see Figure 2 ) the head word of the PP 'to school' is 'to', and the 'second head word' node is 'school'. The rationale is that 'school' could be a useful feature for 'went', in addition to 'to', which is highly polysemous (note that it is also a feature for 'went', in the 1st and 2nd cells of the table). The voice feature was computed using a simple heuristic based on the verb's POS tag (past participle) and presence of auxiliary verbs to its left. The current set of features does not detect verb particle constructions. We leave this for future research.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 399, |
| "end": 407, |
| "text": "Figure 2", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Learning Algorithm. Our learning task can be formulated as follows. Let x i denote the feature vector of an instance i, and let X denote the space of all such feature vectors. The subset of possible labels for x i is denoted by C i , and the correct label by c i \u2208 C i . We denote the label space by S. Let T be the training set of instances", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "T = {< x 1 , C 1 , c 1 >, < x 2 , C 2 , c 2 >, ..., < x n , C n , c n > } \u2286 (X \u00d7 2 S \u00d7 S) n ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where n is the size of the training set. Let < x n+1 , C n+1 >\u2208 (X \u00d7 2 S ) be a new instance. Our task is to select which of the labels in C n+1 is its correct label c n+1 (x n+1 does not have to be a previously observed lemma, but its lemma must appear in a VN class).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The structure of the task lets us apply a learning algorithm that is especially appropriate for it. What we need is an algorithm that allows us to restrict the possible labels of each instance, both in training and in testing. The sequential model algorithm presented by Even-Zohar and Roth (2001) directly supports this requirement. We use the SNOW learning architecture for multi-class classification (Roth, 1998) , which contains an implementation of that algorithm 9 .", |
| "cite_spans": [ |
| { |
| "start": 286, |
| "end": 297, |
| "text": "Roth (2001)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 403, |
| "end": 415, |
| "text": "(Roth, 1998)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Learning Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We used SemLink VN annotations and parse trees on sections 02-21 of the WSJ Penn Treebank for training, and section 00 as a development set, as is common in the parsing community. We performed two parallel sets of experiments, one using manually created gold standard parse trees and one using parse trees created by a state-of-the-art 9 Experiments on development data revealed that for verbs for which almost all of the training instances are mapped to the same VN class, it is most beneficial to select that class. Thus, where more than 90% of the training instances of a verb are mapped to the same class, our algorithm mapped the instances of the verb to that class regardless of the context. parser (Charniak and Johnson, 2005 ) (Note that this parser does not output function tags). The parser was also trained on sections 02-21 and tuned on section 00 10 . Consequently, our adaptation scenario is a full adaptation situation in which both the parser and the VerbNet training data are not in the test domain. Note that generative parser adaptation results are known to be of much lower quality than in-domain results (Lease and Charniak, 2005) . The quality of the discriminative parser we used did indeed decrease in our adaptation scenario (Section 7).", |
| "cite_spans": [ |
| { |
| "start": 705, |
| "end": 732, |
| "text": "(Charniak and Johnson, 2005", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1125, |
| "end": 1151, |
| "text": "(Lease and Charniak, 2005)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The training data included 71209 VN in-scope instances (of them 41753 polysemous) and the development 3624 instances (2203 polysemous). An 'in-scope' instance is one that appears in VN and is tagged with a verb POS. The same trained model was used in both the in-domain and adaptation scenarios, which only differ in their test sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In-Domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Tests were held on sections 01,22,23,24 of WSJ PTB. Test data includes all inscope instances for which there is a SemLink annotation, yielding 13540 instances, 7798 (i.e., 57.6%) of them polysemous.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Adaptation. For the testing we annotated sentences from GENIA (Kim et al., 2003 ) (version 3.0.2). The GENIA corpus is composed of MED-LINE abstracts related to transcription factors in human blood cells. We annotated 400 sentences from the corpus, each including at least one inscope verb instance. We took the first 400 sentences from the corpus that met that criterion 11 . After cleaning some GENIA POS inconsistencies, this amounts to 690 in-scope instances (380 of them polysemous). The tagging was done by two annotators with an inter-annotator agreement rate of 80.35% and Kappa 67.66%.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 79, |
| "text": "(Kim et al., 2003", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Baselines. We used two baselines, random and most frequent (MF). The random baseline selects uniformly and independently one of the possible classes of the verb. The most frequent (MF) baseline selects the most frequent class of the verb in the training data for verbs seen while training, and selects in random for the unseen ones. Consequently, it obtains a perfect score over the monosemous verbs. This baseline is a strong one and is common in disambiguation tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We repeated all of the setup above in two sce-narios. In the first (main) scenario, in-scope instances were always mapped to VN classes, while in the second ('other is possible' (OIP)) scenario, in-scope instances were allowed to be tagged (during training) and classified (during test) as not belonging to any existing VN class 12 . In all cases, out-of-scope instances (verbs whose lemmas do not appear in VN) were ignored. For the OIP scenario, we used a different 'other' label for each of the lemmas, not a single label shared by them all. Table 1 shows our results. In addition to the overall results, we also show results for the polysemous ones alone, since the task is trivial for the monosemous ones. The results using gold standard parses effectively set an upper bound on our model's performance, while those using statistical parser output demonstrate its current usability. In-Domain. Results are shown in the WSJ \u2192 WSJ columns of Table 1 . Using gold standard parses (top), we achieve 96.42% accuracy overall. Over the polysemous verbs, the accuracy is 93.68%. This translates to an error reduction over the MF baseline of 43.35% overall and 43.22% for the polysemous verbs. In the 'other is possible' scenario (right), we obtained 36.67% error reduction. Using a state-of-the-art parser (Charniak and Johnson, 2005 ) (bottom), we experienced some degradation of the results (as expected), but they remained significantly above baseline. We achieve 95.9% accuracy overall and 92.77% for the polysemous verbs, which translates to about 35.13% and 35.04% error reduction respectively. In the OIP scenario, we obtained 28.95% error reduction.", |
| "cite_spans": [ |
| { |
| "start": 1303, |
| "end": 1330, |
| "text": "(Charniak and Johnson, 2005", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 545, |
| "end": 552, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 945, |
| "end": 952, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "5" |
| }, |
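The error-reduction figures quoted throughout follow the standard formula ER = (model - baseline) / (100 - baseline), as given in the caption of Table 1. A minimal sketch (the helper name is ours, not from the paper):

```python
def error_reduction(model_acc: float, baseline_acc: float) -> float:
    """Error reduction (in %) of a model over a baseline.

    Both arguments are accuracies given as percentages; the result is the
    fraction of the baseline's errors that the model eliminates.
    """
    return 100.0 * (model_acc - baseline_acc) / (100.0 - baseline_acc)

# Recovers the adaptation figure reported later: 72.4% model accuracy
# vs. the 69.09% MF baseline on GENIA.
print(round(error_reduction(72.4, 69.09), 2))  # -> 10.71
```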
| { |
| "text": "The results of the random baseline for the in-domain scenario are substantially worse than those of the MF baseline. On the WSJ, the random baseline scored 66.97% accuracy in the main scenario and 37.51% in the OIP scenario.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Adaptation. Here we test our model's ability to generalize across domains. Since VN is intended as a domain-independent resource, we hope to acquire statistics that are relevant across domains as well, enabling us to automatically map verbs in domains of various genres. The results are shown in the WSJ \u2192 GENIA columns of Table 1. When using gold standard parses, our model scored 73.16% accuracy, which translates to about 13.17% ER on GENIA. Interestingly, we experienced very little degradation in the results when moving to parser output, achieving 72.4% accuracy, which translates to 10.71% error reduction over the MF baseline. The random baseline on GENIA was again worse than MF, obtaining 66.04% accuracy compared to 69.09% for MF (in the OIP scenario, 39.12% compared to 46.41%).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 333, |
| "end": 340, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Run-time performance. Given a parsed corpus, our main model trains and runs in no more than a few minutes for a training set of \u223c60K instances and a test set of \u223c11K instances, on a 2.40GHz Pentium 4 CPU with 1GB of main memory. The bottleneck in tagging large corpora with our model is thus most likely the running time of current parsers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this paper we introduced a new statistical model for automatically mapping verb instances into VerbNet classes, and presented the first large-scale experiments for this task, for both in-domain and corpus adaptation scenarios.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Using gold standard parse trees, we achieved 96.42% accuracy on WSJ test data, a 43.35% error reduction over a strong baseline. For adaptation to the GENIA corpus, we showed 13.17% error reduction over the baseline. A surprising result in the adaptation setting is how little difference using gold standard parses makes compared to parser output, especially given the relatively low performance of today's parsers in the adaptation task (91.4% F-score in the WSJ in-domain scenario compared to 81.24% F-score when parsing our GENIA test set). This is an interesting direction for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We also conducted some preliminary experiments in order to shed light on several aspects of the task. The experiments reported below were conducted on the development data, using gold standard parse trees.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "First, motivated by the close connection between WSD and our task (see Section 3), we conducted an experiment to test the applicability of using a WSD engine. In addition to the experiments listed above, we attempted to encode the output of a modern WSD engine (the VBCollocations model of SenseLearner 2.0 (Mihalcea and Csomai, 2005)), both by encoding the synset of the verb instance (if one exists) as a feature, and by encoding each possible mapped class of the WSD engine's output synset as a feature.", |
| "cite_spans": [ |
| { |
| "start": 312, |
| "end": 339, |
| "text": "(Mihalcea and Csomai, 2005)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "There are k such features if there are k possible classes.13 There was no improvement over the previous model. A possible reason for this is the performance of the WSD engine itself (e.g., 56.1% precision on the verbs in the Senseval-3 all-words task data). Naturally, more research is needed to establish better methods of incorporating WSD information into this task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
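The synset-to-class feature encoding described above can be sketched as follows (the feature-string format and the mapping dictionary are illustrative, not the authors' actual implementation):

```python
def wsd_features(synset, synset_to_vn):
    """Encode a WSD engine's output synset as binary feature names:
    one feature for the synset itself, plus one per VN class it maps to
    (k features for k possible classes)."""
    if synset is None:  # the engine produced no sense for this instance
        return []
    feats = ["wsd_synset=" + synset]
    feats += ["wsd_vn_class=" + c for c in synset_to_vn.get(synset, [])]
    return feats

# Hypothetical mapping: one WN sense of 'run' linked to two VN classes.
mapping = {"run.v.01": ["run-51.3.2", "meander-47.7"]}
print(wsd_features("run.v.01", mapping))
```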
| { |
| "text": "Second, we studied the relative usefulness of class-level information as opposed to verb-idiosyncratic information in the VN disambiguation task. To address this question, we measured the accuracy of our model given only the per-class features (the first feature set excluding the verb's lemma feature), and then given only the per-verb features (the conjunction of the first set with the verb's lemma). We obtained 94.82% accuracy in the per-class experiment and 95.51% in the per-verb experiment, compared to 95.95% when using both, in the in-domain gold standard scenario. The MF baseline scored 92.45% on this development set. These results, in which the per-class experiment comes closest to the MF baseline, indicate that combining both approaches in the construction of the classifier is justified.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
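The per-verb feature set described above is simply each per-class feature conjoined with the verb's lemma; a sketch (the feature-string format is our own):

```python
def per_verb_features(per_class_feats, lemma):
    """Conjoin every per-class feature with the verb's lemma,
    yielding lexicalized (per-verb) copies of the same features."""
    return [f + "&lemma=" + lemma for f in per_class_feats]

print(per_verb_features(["voice=active", "n_right_siblings=2"], "give"))
```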
| { |
| "text": "Third, we studied the importance of a learning algorithm that utilizes the task's structure (mapping into a large label space in which each instance can be mapped to only a small subspace). We chose the algorithm of (Even-Zohar and Roth, 2001) in light of this requirement. We conducted an experiment in which we omitted these per-instance restrictions on the label space, effectively allowing each verb to take every label in the label space. We obtained 94.54% accuracy (27.68% error reduction), compared to 95.95% accuracy (46.36% error reduction) when using the restrictions. These results indicate that although our feature set keeps us substantially above baseline even without the restrictions, using them boosts our results even further. This result differs from that of (Girju et al., 2005), where the unconstrained (flat) model performed significantly below baseline.", |
| "cite_spans": [ |
| { |
| "start": 238, |
| "end": 248, |
| "text": "Roth, 2001", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 836, |
| "end": 856, |
| "text": "(Girju et al., 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
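The per-instance label-space restriction studied above can be sketched as score masking at prediction time (our illustration; the lexicon entries, class labels, and score values are invented):

```python
# Which VN classes each lemma is licensed to take (hypothetical entries).
LEXICON = {"give": ["give-13.1", "contribute-13.2"], "see": ["see-30.1"]}

def predict(lemma, scores, constrained=True):
    """Pick the best-scoring class; when constrained, only classes that
    the lexicon licenses for this lemma are considered."""
    if constrained and lemma in LEXICON:
        allowed = {c: s for c, s in scores.items() if c in LEXICON[lemma]}
        if allowed:  # fall back to the flat model if nothing is licensed
            scores = allowed
    return max(scores, key=scores.get)

scores = {"give-13.1": 0.4, "contribute-13.2": 0.9, "see-30.1": 1.3}
print(predict("give", scores))                     # -> contribute-13.2
print(predict("give", scores, constrained=False))  # -> see-30.1
```

The flat variant corresponds to the unconstrained experiment: the top-scoring label wins even when the lexicon does not license it for the verb.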
| { |
| "text": "As noted earlier, instance-level verb classification into Levin-inspired classes is far from exhaustively explored. We intend to make our implementation of the model available to the community, to enable others to engage in further research on this task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our annotations will be made available to the community. 2 Using the same sentences and instances, obtained from the authors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "PropBank (Palmer et al., 2005) is a corpus annotation of the WSJ sections of the Penn Treebank with the semantic roles of each verbal proposition. 4 Semlink was not available then.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We ignore sub-class distinctions. This is justified since in 98.2% of the in-coverage instances in Semlink, knowing the verb and its class suffices for knowing its exact sub-class.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "237 classes vs. 4991 types. 7 In the WN-to-VN mapping built into VN, 14.69% of the covered WN synsets were mapped to more than a single VN class. 8 We used the publicly available SenseLearner 2.0, the VBCollocations model. We chose a VN class containing the lemma at random when a single mapping was not specified. We obtained 67.74% accuracy on section 00 of the WSJ, which is less than the MF baseline. See Sections 5 and 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For the very few sentences out of coverage for the parser, we used the MF baseline (see below).11 Discarding the first 120 sentences, which were used to design the annotator guidelines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "i.e., including instances tagged by SemLink as 'none'.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The mapping is many-to-many and partial. To overcome the first issue, given a WN sense of the verb, we encoded all possible VN classes that correspond to it. To overcome the second, we treated a verb in a certain VN class, for which the mapping to WN was available, as one that can be mapped to all WN senses of the verb.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Acknowledgements. We would like to thank Dan Roth, Mark Sammons and Ran Luria for their help.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The Berkeley FrameNet Project. Proc. of the 36th Meeting of the ACL and the 17th COLING", |
| "authors": [ |
| { |
| "first": "Collin", |
| "middle": [ |
| "F" |
| ], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Charles", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fillmore", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Lowe", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Collin F. Baker, Charles J. Fillmore and John B. Lowe, 1998. The Berkeley FrameNet Project. Proc. of the 36th Meeting of the ACL and the 17th COLING.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Coarseto-fine n-best parsing and maxent discriminative reranking", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the 43rd Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak and Mark Johnson, 2005. Coarse- to-fine n-best parsing and maxent discriminative reranking. Proc. of the 43rd Meeting of the ACL.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Head-driven statistical models for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins, 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, Univer- sity of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Investigating regular sense extensions based on intersective Levin classes", |
| "authors": [ |
| { |
| "first": "Hoa", |
| "middle": [ |
| "Trang" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| }, |
| { |
| "first": "Karin", |
| "middle": [], |
| "last": "Kipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Rosenzweig", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of the 36th Meeting of the ACL and the 17th COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hoa Trang Dang, Karin Kipper, Martha Palmer and Joseph Rosenzweig, 1998. Investigating regular sense extensions based on intersective Levin classes. Proc. of the 36th Meeting of the ACL and the 17th COLING.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Large-Scale Dictionary Construction for Foreign Language Tutoring and Interlingual Machine Translation", |
| "authors": [ |
| { |
| "first": "Bonnie", |
| "middle": [ |
| "J" |
| ], |
| "last": "Dorr", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Machine Translation", |
| "volume": "12", |
| "issue": "", |
| "pages": "1--55", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bonnie J. Dorr, 1997. Large-Scale Dictionary Con- struction for Foreign Language Tutoring and Inter- lingual Machine Translation. Machine Translation, 12:1-55.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Role of Word Sense Disambiguation in Lexical Acquisition: Predicting Semantics from Syntactic Cues", |
| "authors": [ |
| { |
| "first": "Bonnie", |
| "middle": [ |
| "J" |
| ], |
| "last": "Dorr", |
| "suffix": "" |
| }, |
| { |
| "first": "Douglas", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proc. of the 16th COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bonnie J. Dorr and Douglas Jones, 1996. Role of Word Sense Disambiguation in Lexical Acquisition: Pre- dicting Semantics from Syntactic Cues. Proc. of the 16th COLING.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A Sequential Model for Multi-Class Classification", |
| "authors": [ |
| { |
| "first": "Yair", |
| "middle": [], |
| "last": "Even-Zohar", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of the 2001 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yair Even-Zohar and Dan Roth, 2001. A Sequential Model for Multi-Class Classification. Proc. of the 2001 Conference on Empirical Methods in Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Token-level Disambiguation of VerbNet classes", |
| "authors": [ |
| { |
| "first": "Roxana", |
| "middle": [], |
| "last": "Girju", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Sammons", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "The Interdisciplinary Workshop on Verb Features and Verb Classes", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roxana Girju, Dan Roth and Mark Sammons, 2005. Token-level Disambiguation of VerbNet classes. The Interdisciplinary Workshop on Verb Features and Verb Classes.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Automatically building conceptual graphs using VerbNet and WordNet. International Symposium on Information and Communication Technologies (ISICT)", |
| "authors": [ |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Hensman", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Dunnion", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Svetlana Hensman and John Dunnion, 2004. Automat- ically building conceptual graphs using VerbNet and WordNet. International Symposium on Information and Communication Technologies (ISICT).", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "GENIA corpus -a semantically annotated corpus for bio-textmining", |
| "authors": [ |
| { |
| "first": "Jin-Dong", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomoko", |
| "middle": [], |
| "last": "Ohta", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuka", |
| "middle": [], |
| "last": "Teteisi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun'ichi", |
| "middle": [], |
| "last": "Tsujii", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Bioinformatics", |
| "volume": "19", |
| "issue": "", |
| "pages": "180--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jin-Dong Kim, Tomoko Ohta, Yuka Teteisi and Jun'ichi Tsujii, 2003. GENIA corpus -a seman- tically annotated corpus for bio-textmining. Bioin- formatics, 19:i180-i182, Oxford U. Press 2003.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Class-Based Construction of a Verb Lexicon", |
| "authors": [ |
| { |
| "first": "Karin", |
| "middle": [], |
| "last": "Kipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoa", |
| "middle": [ |
| "Trang" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of the 17th National Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karin Kipper, Hoa Trang Dang and Martha Palmer, 2000. Class-Based Construction of a Verb Lexicon. Proc. of the 17th National Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon", |
| "authors": [ |
| { |
| "first": "Karin", |
| "middle": [], |
| "last": "Kipper", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Schuler", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karin Kipper-Schuler, 2005. VerbNet: A Broad- Coverage, Comprehensive Verb Lexicon. Ph. D. the- sis, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Extending VerbNet with Novel Verb Classes", |
| "authors": [ |
| { |
| "first": "Karin", |
| "middle": [], |
| "last": "Kipper", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Neville", |
| "middle": [], |
| "last": "Ryant", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of the 5th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karin Kipper, Anna Korhonen, Neville Ryant and Martha Palmer, 2006. Extending VerbNet with Novel Verb Classes. Proc. of the 5th International Conference on Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Role of verbs in document analysis", |
| "authors": [ |
| { |
| "first": "Judith", |
| "middle": [], |
| "last": "Klavans", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of the 36th Meeting of the ACL and the 17th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Judith Klavans and Min-Yen Kan, 1998. Role of verbs in document analysis. Proc. of the 36th Meeting of the ACL and the 17th International Conference on Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Extended Lexical-Semantic Classification of English Verbs", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Briscoe", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of the 42nd Meeting of the ACL, Workshop on Computational Lexical Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anna Korhonen and Ted Briscoe, 2004. Extended Lexical-Semantic Classification of English Verbs. Proc. of the 42nd Meeting of the ACL, Workshop on Computational Lexical Semantics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Verb Class Disambiguation using Informative Priors. Computational Linguistics", |
| "authors": [ |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "30", |
| "issue": "", |
| "pages": "45--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mirella Lapata and Chris Brew, 2004. Verb Class Disambiguation using Informative Priors. Compu- tational Linguistics, 30(1):45-73", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Towards a Syntactic Account of Punctuation", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Lease", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the 2nd International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Lease and Eugene Charniak, 2005. Towards a Syntactic Account of Punctuation. Proc. of the 2nd International Joint Conference on Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "English Verb Classes And Alternations: A Preliminary Investigation", |
| "authors": [ |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beth Levin, 1993. English Verb Classes And Alterna- tions: A Preliminary Investigation. The University of Chicago Press.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Argument Realization", |
| "authors": [ |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| }, |
| { |
| "first": "Malka", |
| "middle": [ |
| "Rappaport" |
| ], |
| "last": "Hovav", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beth Levin and Malka Rappaport Hovav, 2005. Argu- ment Realization. Cambridge University Press.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Disambiguating Levin Verbs Using Untagged Data", |
| "authors": [ |
| { |
| "first": "Jianguo", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of the 2007 International Conference on Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jianguo Li and Chris Brew, 2007. Disambiguating Levin Verbs Using Untagged Data. Proc. of the 2007 International Conference on Recent Advances in Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Combining Lexical Resources: Mapping Between PropBank and VerbNet", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Loper", |
| "suffix": "" |
| }, |
| { |
| "first": "Yi", |
| "middle": [], |
| "last": "Szu-Ting", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of the 7th International Workshop on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edward Loper, Szu-ting Yi and Martha Palmer, 2007. Combining Lexical Resources: Mapping Between PropBank and VerbNet. Proc. of the 7th Inter- national Workshop on Computational Linguistics, Tilburg, the Netherlands.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Automatic Verb-Classification Based On Statistical Distribution Of Argument Structure", |
| "authors": [ |
| { |
| "first": "Paola", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "3", |
| "pages": "373--408", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paola Merlo and Suzanne Stevenson. 2001. Automatic Verb-Classification Based On Statistical Distribu- tion Of Argument Structure. Computational Linguis- tics, 27(3):373-408.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Sense-Learner: word sense disambiguation for all words in unrestricted text", |
| "authors": [ |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Andras", |
| "middle": [], |
| "last": "Csomai", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the 43rd Meeting of the ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rada Mihalcea and Andras Csomai 2005. Sense- Learner: word sense disambiguation for all words in unrestricted text. Proc. of the 43rd Meeting of the ACL , Poster Session.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "The proposition bank: A corpus annotated with semantic roles", |
| "authors": [ |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Kingsbury", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "1", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martha Palmer, Daniel Gildea and Paul Kingsbury, 2005. The proposition bank: A corpus annotated with semantic roles. Computational Linguistics, 31(1).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Learning to resolve natural language ambiguities: A unified approach", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. of the 15th National Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Roth, 1998. Learning to resolve natural language ambiguities: A unified approach. Proc. of the 15th National Conference on Artificial Intelligence", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Clustering verbs semantically according to their alternation behavior", |
| "authors": [ |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of the 18th COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sabine Schulte im Walde, 2000. Clustering verbs se- mantically according to their alternation behavior. Proc. of the 18th COLING.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing", |
| "authors": [ |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the International Conference on Intelligent Text Processing and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lei Shi and Rada Mihalcea, 2005. Putting pieces to- gether: Combining FrameNet, VerbNet and WordNet for robust semantic parsing. Proc. of the Interna- tional Conference on Intelligent Text Processing and Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Exploiting a Verb Lexicon in Automatic Semantic Role Labelling", |
| "authors": [ |
| { |
| "first": "Robert", |
| "middle": [ |
| "S" |
| ], |
| "last": "Swier", |
| "suffix": "" |
| }, |
| { |
| "first": "Suzanne", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the 2005 conference on empirical methods in natural language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert S. Swier and Suzanne Stevenson, 2005. Ex- ploiting a Verb Lexicon in Automatic Semantic Role Labelling. Proc. of the 2005 conference on empirical methods in natural language processing.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Can Semantic Roles Generalize Across Genres?", |
| "authors": [ |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Szu-Ting Yi", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Loper", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of the 2007 conference of the north american chapter of the association for computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Szu-ting Yi, Edward Loper and Martha Palmer, 2007. Can Semantic Roles Generalize Across Genres? Proc. of the 2007 conference of the north american chapter of the association for computational linguis- tics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "text": "Fig-First Feature SetThe stemmed head words, POS, parse tree labels, function tags, and ordinals of the verb's right k r siblings (k r is the maximum number of right siblings in the corpus. These are at most 5k r different features). The stemmed head words, POS, labels, function tags and ordinals of the verb's left k l siblings, as above. The stemmed head word & POS of the 'second head word' nodes on the left and right (see text for precise definition). All of the above features employed on the siblings of the parent of the verb (only if the verb's parent is the head constituent of its grandparent) The number of right/left siblings of the verb. The number of right/left siblings of the verb's parent. The parse tree label of the verb's parent. The verb's voice (active or passive). The verb's lemma.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "text": "The first set of features in our model. All of them are binary. The final feature set includes two sets: the set here, and a set obtained by its conjunction with the verb's lemma.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "text": "An example parse tree for the 'second head word' feature.", |
| "num": null, |
| "uris": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "text": "", |
| "num": null, |
| "uris": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "Accuracy and error reduction (ER) results (in percents) for our model and the MF baseline. Error reduction is computed as M ODEL\u2212M F 100\u2212M F . Results are given for the WSJ and GENIA corpora test sets. The top table is for a model receiving gold standard parses of the test data. The bottom is for a model using", |
| "html": null |
| } |
| } |
| } |
| } |