| { |
| "paper_id": "Y09-1003", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:42:40.050320Z" |
| }, |
| "title": "Automatic Lexical Classification -Balancing between Machine Learning and Linguistics", |
| "authors": [ |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Cambridge", |
| "location": { |
| "addrLine": "Computer Laboratory 15 JJ Thomson Avenue", |
| "postCode": "CB3 0GD", |
| "settlement": "Cambridge", |
| "country": "UK" |
| } |
| }, |
| "email": "alk23@cl.cam.ac.uk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Verb classifications have been used to support a number of practical tasks and applications, such as parsing, information extraction, question-answering, and machine translation. However, large-scale exploitation of verb classes in real-world or domain-sensitive tasks has not been possible because existing manually built classifications are incomprehensive. This paper describes recent and ongoing research on extending and acquiring lexical classifications automatically. The automatic approach is attractive since it is cost-effective and opens up the opportunity of learning and tuning lexical classifications for the application and domain in question. However, the development of an optimal approach is challenging, and requires not only expertise in machine learning but also a good understanding of the linguistic principles of lexical classification.", |
| "pdf_parse": { |
| "paper_id": "Y09-1003", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Verb classifications have been used to support a number of practical tasks and applications, such as parsing, information extraction, question-answering, and machine translation. However, large-scale exploitation of verb classes in real-world or domain-sensitive tasks has not been possible because existing manually built classifications are incomprehensive. This paper describes recent and ongoing research on extending and acquiring lexical classifications automatically. The automatic approach is attractive since it is cost-effective and opens up the opportunity of learning and tuning lexical classifications for the application and domain in question. However, the development of an optimal approach is challenging, and requires not only expertise in machine learning but also a good understanding of the linguistic principles of lexical classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Verb classifications have attracted a great deal of interest in both linguistics and natural language processing (NLP). They have proved useful for various important tasks and applications, including e.g. computational lexicography, parsing, word sense disambiguation, semantic role labeling, information extraction, question-answering, and machine translation (Swier and Stevenson, 2004; Dang, 2004; Shi and Mihalcea, 2005; Kipper et al., 2008; Zapirain et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 361, |
| "end": 388, |
| "text": "(Swier and Stevenson, 2004;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 389, |
| "end": 400, |
| "text": "Dang, 2004;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 401, |
| "end": 424, |
| "text": "Shi and Mihalcea, 2005;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 425, |
| "end": 445, |
| "text": "Kipper et al., 2008;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 446, |
| "end": 468, |
| "text": "Zapirain et al., 2008)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Particularly useful are classes which capture generalizations over a range of (cross-)linguistic properties, such as the ones proposed by Levin (1993) . Being defined in terms of similar meaning and (morpho-)syntactic behaviour of words, these classes generally incorporate a wider range of properties than e.g. classes defined solely on semantic grounds (Miller, 1995) .", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 150, |
| "text": "Levin (1993)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 355, |
| "end": 369, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For example, verbs which share the meaning component of 'manner of motion ' (e.g. travel, run, walk) , behave similarly in terms of subcategorization (e.g. I travelled/ran/walked, I travelled/ran/walked to London, I travelled/ran/walked five miles) and usually have zero-related nominals (e.g. a run, a walk) can be grouped to the same lexical class. Such verb classes can be identified across the entire lexicon and they can also apply across languages, since the basic meaning components they are comprised of are cross-linguistically applicable or overlapping.", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 100, |
| "text": "' (e.g. travel, run, walk)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While the classes do not provide means for full semantic inferencing, they can offer a powerful tool for generalization, abstraction and prediction which is beneficial for practical tasks. Fundamentally, the classes are a critical component of any system which needs mapping from surface realization of arguments to predicate-argument structure. As the classes capture higher level abstractions they can be used as a principled means to abstract away from individual words when required. For example, they can be utilized to organize a default inheritance hierarchy which effectively captures generalizations over words and predicts much of the syntactic/semantic behaviour of a new word simply by associating it with an appropriate class. The predictive power of the classes can help compensate for lack of sufficient data. In addition, the classes have theoretical benefits. For example, classified data can be used to evaluate empirical claims of different linguistic and psycholinguistic theories.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although lexical classes have proved helpful for a number of (multilingual) tasks, their largescale exploitation in real-world or highly domain-sensitive tasks has been limited because no fully accurate or comprehensive lexical classification is available. There is no such resource because manual classification of large numbers of words has proved very time-consuming. Class-based differences are typically manifested in differences in the statistics over usages of syntactic-semantic features. This statistical information is difficult collect by hand as it is highly domain-sensitive, i.e. it varies with predominant word senses, which change across corpora and domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In recent years, automatic induction of verb classes from corpus data has become increasingly popular (Merlo and Stevenson, 2001; Schulte im Walde, 2006; Joanis et al., 2008; Sun et al., 2008; Li and Brew, 2008; Korhonen et al., 2008; \u00d3 S\u00e9aghdha and Copestake, 2008; Vlachos et al., 2009) . This work is important as it opens up the opportunity of learning and tuning classifications for the application and domain in question. Automatic classification is not only cost-effective but it also gathers the important statistical information as side effect of the acquisition process and can easily be applied to new domains and usage patterns provided relevant corpus data is available.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 129, |
| "text": "(Merlo and Stevenson, 2001;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 130, |
| "end": 153, |
| "text": "Schulte im Walde, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 154, |
| "end": 174, |
| "text": "Joanis et al., 2008;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 175, |
| "end": 192, |
| "text": "Sun et al., 2008;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 193, |
| "end": 211, |
| "text": "Li and Brew, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 212, |
| "end": 234, |
| "text": "Korhonen et al., 2008;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 235, |
| "end": 266, |
| "text": "\u00d3 S\u00e9aghdha and Copestake, 2008;", |
| "ref_id": null |
| }, |
| { |
| "start": 267, |
| "end": 288, |
| "text": "Vlachos et al., 2009)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To date, a variety of approaches have been proposed for verb classification and applied to general English and other languages. Both supervised and unsupervised machine learning (ML) methods have been used to classify a variety of features extracted from raw, tagged and/or parsed corpus data. Although the results have been generally encouraging, the accuracy of automatic classification shows room for improvement. After providing a short introduction to the basic principles of manual verb classification, this paper reviews recent research in automatic classification -particularly focussing on work conducted in English -and discusses then the various current challenges that need to be met for substantial further advances. Meeting these challenges requires solid expertise in both machine learning and (computational) linguistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The largest and most widely deployed verb classification in English is the classification of Levin (1993) . This classification provides a summary of the variety of theoretical research done on lexical-semantic verb classification over the past decades. Verbs which display the same or a similar set of diathesis alternations in the realization of their argument structure are assumed to share certain meaning components and are organized into a semantically coherent class. Although alternations are chosen as the primary means for identifying verb classes, additional properties related to subcategorization, morphology and extended meanings of verbs are taken into account as well. For instance, the Levin class of \"Break Verbs\" (class 45.1), which refers to actions that bring about a change in the material integrity of some entity, is characterized by its participation (1-3) or non-participation (4-6) in the following alternations and other constructions (7-8): ", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 105, |
| "text": "Levin (1993)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Classification", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To date, most work on automatic verb classification has focussed on English (Joanis et al., 2008; Sun et al., 2008; Li and Brew, 2008; \u00d3 S\u00e9aghdha and Copestake, 2008; Vlachos et al., 2009; Sun and Korhonen, 2009) , although some work has also been done on other languages, in particular on German (Schulte im Walde, 2006) , and recently also on sub-languages (Korhonen et al., 2008) . In this section, we provide an overview of recent work mostly conducted on English (for other languages and domains, see section 5). We will first describe the features and techniques used for classification, and then evaluation and performance of current systems.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 97, |
| "text": "(Joanis et al., 2008;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 98, |
| "end": 115, |
| "text": "Sun et al., 2008;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 116, |
| "end": 134, |
| "text": "Li and Brew, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 135, |
| "end": 166, |
| "text": "\u00d3 S\u00e9aghdha and Copestake, 2008;", |
| "ref_id": null |
| }, |
| { |
| "start": 167, |
| "end": 188, |
| "text": "Vlachos et al., 2009;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 189, |
| "end": 212, |
| "text": "Sun and Korhonen, 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 309, |
| "end": 321, |
| "text": "Walde, 2006)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 359, |
| "end": 382, |
| "text": "(Korhonen et al., 2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Verb Classification -the State of the Art", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As discussed above in section 2, the main feature of manual verb classification is a diathesis alternation which manifests at the level of syntax in alternating sets of subcategorization frames (SCFs). Since automatic detection of diathesis alternations is challenging (McCarthy, 2001) , most work on automatic classification has focussed on syntactic features, exploiting the fact that similar alternations tend to result in similar syntactic behaviour. The syntactic features have been shallow syntactic slots (e.g. NPs preceding or following the verb) extracted using a lemmatizer or a chunker, or verb SCFs extracted using a chunker or a parser. These both feature types have been refined with information about prepositional preferences (PPs) of verbs. Joanis et al. (2008) have reported better results using syntactic slots, while several others have obtained good results using SCFs, e.g. (Schulte im Walde, 2006; Li and Brew, 2008; Sun and Korhonen, 2009) . While SCFs correspond better (than syntactic slots) with the features used in manual work, optimal results have required including in SCFs also additional information about adjuncts (not only arguments) of verbs (Sun et al., 2008) which are typically not used in manual classification.", |
| "cite_spans": [ |
| { |
| "start": 269, |
| "end": 285, |
| "text": "(McCarthy, 2001)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 758, |
| "end": 778, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 908, |
| "end": 920, |
| "text": "Walde, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 921, |
| "end": 939, |
| "text": "Li and Brew, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 940, |
| "end": 963, |
| "text": "Sun and Korhonen, 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 1178, |
| "end": 1196, |
| "text": "(Sun et al., 2008)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Recent research has also experimented with replacing or supplementing SCFs with information about basic lexical context (co-occurrences (COs)) of verbs, or lexical preferences (LPs) in specific grammatical relations (GRs) associated with verbs in parsed data (for example, the type and frequency of prepositions in the indirect object relation) (Li and Brew, 2008; Sun and Korhonen, 2009) . Some experiments have also explored the usefulness of verb tense (e.g. the part-of-speech tags of verbs), voice (the knowledge whether the verb was used in active or passive) and/or aspect for verb classification (Joanis et al., 2008; Korhonen et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 345, |
| "end": 364, |
| "text": "(Li and Brew, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 365, |
| "end": 388, |
| "text": "Sun and Korhonen, 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 604, |
| "end": 625, |
| "text": "(Joanis et al., 2008;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 626, |
| "end": 648, |
| "text": "Korhonen et al., 2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "While most work has focussed on syntactic or lexical features, a few attempts have also been made to refine syntactic features with semantic information about verb selectional preferences (SPs). Following Merlo and Stevenson (2001) , Joanis et al. (2008) used an simple 'animacy' feature which was determined by classifying e.g. pronouns and proper names in data to this single SP class. Joanis (2002) employed as SP models the top level WordNet (Miller, 1995) classes (Schulte im Walde (2006) tried a similar approach for German). Recently, Sun and Korhonen (2009) experimented with automatically acquired SPs. The latter were obtained by clustering argument head data in GRs related to specific verbs.", |
| "cite_spans": [ |
| { |
| "start": 205, |
| "end": 231, |
| "text": "Merlo and Stevenson (2001)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 234, |
| "end": 254, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 388, |
| "end": 401, |
| "text": "Joanis (2002)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 446, |
| "end": 460, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 542, |
| "end": 565, |
| "text": "Sun and Korhonen (2009)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Finally, combinations of lexical, syntactic, semantic and other features have been explored. (Joanis et al., 2008; Sun et al., 2008; Li and Brew, 2008; \u00d3 S\u00e9aghdha and Copestake, 2008) . Unsupervised methods have the benefit that they can be used to to discover novel information from corpus data. The latter is particularly useful for supplementing or improving existing classifications or learning new classifications for languages and domains where no manually built classifications are available. Again a range of methods have been explored, including e.g. the K means, Expectation-Maximization, spectral clustering, Information Bottleneck, Probabilistic Latent Semantic Analysis, cost-based pairwise clustering (Brew and Schulte im Walde, 2002; Schulte im Walde, 2006; Korhonen et al., 2008; Sun and Korhonen, 2009; Vlachos et al., 2009) . Both soft and hard clustering methods have been tried, but attempts to deal with polysemy (the fact that many verbs can be classified in more than one class) have not been successful yet (see section 4).", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 114, |
| "text": "(Joanis et al., 2008;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 115, |
| "end": 132, |
| "text": "Sun et al., 2008;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 133, |
| "end": 151, |
| "text": "Li and Brew, 2008;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 152, |
| "end": 183, |
| "text": "\u00d3 S\u00e9aghdha and Copestake, 2008)", |
| "ref_id": null |
| }, |
| { |
| "start": 715, |
| "end": 748, |
| "text": "(Brew and Schulte im Walde, 2002;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 749, |
| "end": 772, |
| "text": "Schulte im Walde, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 773, |
| "end": 795, |
| "text": "Korhonen et al., 2008;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 796, |
| "end": 819, |
| "text": "Sun and Korhonen, 2009;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 820, |
| "end": 841, |
| "text": "Vlachos et al., 2009)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Research on automatic verb classification has typically been evaluated against a manually constructed gold standard. The subsequent sections describe the most commonly used gold standards, evaluation measures, and test sets, and compares the performance of the state-of-the-art approaches for English which have been evaluated using these test sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The most common evaluation resource in English verb classification has been that of Levin (1993) supplemented with additional information from VerbNet or WordNet. In particular, two gold standards based on (Levin, 1993) have been used to evaluate much of the recent research:", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 96, |
| "text": "Levin (1993)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 206, |
| "end": 219, |
| "text": "(Levin, 1993)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold standards", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "GS1 The gold standard of Joanis et al. (2008) provides a classification of 835 verbs into 15 (some coarse, some fine-grained) Levin classes. We consider here the '14 way' version of this resource because this corresponds the closest to the target (Levin's fine-grained) classification 2 . When the frequency-based selection criteria of Joanis et al. (2008) is applied and the class imbalance is restricted to 1:1.5, GS1 provides a classification of 205 verbs in 10-15 classes.", |
| "cite_spans": [ |
| { |
| "start": 25, |
| "end": 45, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 336, |
| "end": 356, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gold standards", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "The gold standard of Sun et al. (2008) classifies 204 medium-high frequency verbs to 17 fine-grained Levin classes, so that each class has 12 member verbs. Table 1 from (Sun and Korhonen, 2009) shows the classes in GS1 and GS2.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 38, |
| "text": "Sun et al. (2008)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 169, |
| "end": 193, |
| "text": "(Sun and Korhonen, 2009)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 156, |
| "end": 163, |
| "text": "Table 1", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "GS2", |
| "sec_num": null |
| }, |
| { |
| "text": "The classification techniques have been typically applied to large cross-domain corpora and evaluated (against a chosen gold standard) using various measures. Although the measures have differed (e.g. for supervised or unsupervised approaches), the general tendency has been to prefer measures which are (i) applicable to all classification methods under comparison, (ii) deliver a numerical value easy to interpret and (ii) preferably do not introduce biases towards specific numbers of classes or class sizes. The measures mentioned here are measures that have been used to evaluate many of the recent clustering approaches compared in the following section: A modified purity (mPUR) is a global measure which evaluates the mean precision of clusters. Each cluster is associated with its prevalent class. The number of verbs in a cluster K that take this class is denoted by n prevalent (K). Verbs that do not take it are considered as errors. Clusters where n prevalent (K) = 1 are disregarded as not to introduce a bias towards singletons:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and evaluation measures", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "mPUR = \u03a3_{n_prevalent(k_i) \u2265 2} n_prevalent(k_i) / number of verbs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and evaluation measures", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "To give an idea of how current approaches perform, we examined the recent supervised and unsupervised works on general English verb classification which were evaluated on GS1 and GS2 using either the evaluation measures described in the previous section or measures comparable to them. These works are summarized in Table 2 . ACC and F-measure are shown for GS1 and GS2, respectively 3 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 316, |
| "end": 323, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance", |
| "sec_num": "3.3.3" |
| }, |
| { |
| "text": "On GS1 4 , the best performing supervised method reported so far is that of Li and Brew (2008) . integrating COs, SCFs and/or LPs were extracted from a large corpus using a lemmatizer and a grammatical parser. The combination of COs and SCFs gave the best result, shown in the table. Joanis et al. (2008) have reported the second best supervised result on GS1, using Support Vector Machines for classification. They compared various features derived from linguistic analysis and extracted using shallow syntactic processing (mainly chunking): syntactic slots, slot overlaps, tense, voice, aspect, and animacy of NPs. They concluded that syntactic information about core constituents occurring with a verb (syntactic slots) is most important to verb classification. Stevenson and Joanis (2003) reached a similar conclusion in their unsupervised experiment on GS1. A feature set similar to that of Joanis et al. (2008) was employed (features were selected in a semi-supervised fashion) and hierarchical clustering was used.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 94, |
| "text": "Li and Brew (2008)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 284, |
| "end": 304, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 765, |
| "end": 792, |
| "text": "Stevenson and Joanis (2003)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 896, |
| "end": 916, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance", |
| "sec_num": "3.3.3" |
| }, |
| { |
| "text": "The recent unsupervised method of Sun and Korhonen (2009) performs better on GS1 than the unsupervised method of Joanis et al. (2008) and nearly as well as the supervised approach of Joanis et al. (2008) . Sun and Korhonen used a variation of spectral clustering based on the MNCut algorithm (Meila and Shi, 2001) and experimented with a variety of features (e.g. COs, SCFs, LPs, voice, tense), including also semantic ones (SPs). The features were extracted using a SCF acquisition system which makes use of a grammatical parser. The SPs were obtained by clustering argument head data in relevant syntactic slots. The best result was obtained when using SCFs in conjunction with SPs.", |
| "cite_spans": [ |
| { |
| "start": 34, |
| "end": 57, |
| "text": "Sun and Korhonen (2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 113, |
| "end": 133, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 183, |
| "end": 203, |
| "text": "Joanis et al. (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 292, |
| "end": 313, |
| "text": "(Meila and Shi, 2001)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Li and Brew used Bayesian Multinomial Regression for classification. A range of feature sets", |
| "sec_num": null |
| }, |
| { |
| "text": "On GS2, the best performing supervised method so far is that of\u00d3 S\u00e9aghdha and Copestake (2008) which employs a distributional kernel method to classify SCF features parameterized for prepositions in the automatically acquired VALEX SCF lexicon. Using exactly the same data and feature set, Sun et al. (2008) obtained a slightly lower result when using a supervised method (Gaussian) and a notably lower result when using an unsupervised method (pairwise clustering). The recent unsupervised approach of Sun and Korhonen (2009) (discussed above with GS1) outperforms these both methods on this gold standard when SCFs are used in conjunction with automatically acquired SPs.", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 307, |
| "text": "Sun et al. (2008)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 503, |
| "end": 526, |
| "text": "Sun and Korhonen (2009)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Li and Brew used Bayesian Multinomial Regression for classification. A range of feature sets", |
| "sec_num": null |
| }, |
| { |
| "text": "Although this brief comparison focuses on recent work on English classification and does not cover approaches evaluated on other gold standards, languages or domains, it does serve to summarise the state of the art: current approaches perform at their best around 66 accuracy and 80 F measure. While this performance is clearly better than the baseline (chance) performance on the task and is likely to be high enough to benefit many practical tasks, it is still much lower than the realistic upper bound for the task: Merlo and Stevenson (2001) estimated that the accuracy of classification performed by experts in lexical classification is likely to be around 85%.", |
| "cite_spans": [ |
| { |
| "start": 519, |
| "end": 545, |
| "text": "Merlo and Stevenson (2001)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Li and Brew used Bayesian Multinomial Regression for classification. A range of feature sets", |
| "sec_num": null |
| }, |
| { |
| "text": "This section discusses the various challenges that need to be met in order to improve the state of the art further.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Current Challenges", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Section 3.1 reviewed the features employed in verb classification so far. Section 3.3.3 showed that to date, syntactic features (syntactic slots and SCFs) have been the most useful features in verb classification. Although semantic features play a key role in manual verb classification and could thus be expected to offer a considerable contribution to automatic classification, they have not proved equally successful. Until recently, no significant additional improvement was reported using verb SPs (Joanis, 2002; Schulte im Walde, 2006) . This was surprising since SPs are strong indicators of diathesis alternations (McCarthy, 2001 ) and fairly precise semantic descriptions can be assigned to the majority of Levin classes (Kipper-Schuler, 2005) . However, in their recent experiment, Sun and Korhonen (2009) obtained a considerable improvement using SPs in conjunction with syntactic features on both GS1 and GS2, although they used a fully unsupervised approach to both verb clustering and SP acquisition. This suggests that NLP and ML techniques have now developed to the point where the use of deeper, theoretically-motivated features is becoming feasible. Yet high accuracy SP acquisition from undisambiguated corpus data is still an unmet challenge and is especially complex in the context of verb classification where SP models are needed for specific syntactic slots for which the data may be sparse. Recently a number of techniques have been proposed which may offer ideas for further improvement of the approach (Erk, 2007; Bergsma et al., 2008; Schulte im Walde et al., 2008) . The number and type (and combination) of GRs for which SPs can be reliably acquired, especially when the data is sparse, requires also further investigation.", |
| "cite_spans": [ |
| { |
| "start": 503, |
| "end": 517, |
| "text": "(Joanis, 2002;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 518, |
| "end": 541, |
| "text": "Schulte im Walde, 2006)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 622, |
| "end": 637, |
| "text": "(McCarthy, 2001", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 730, |
| "end": 752, |
| "text": "(Kipper-Schuler, 2005)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 792, |
| "end": 815, |
| "text": "Sun and Korhonen (2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 1529, |
| "end": 1540, |
| "text": "(Erk, 2007;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1541, |
| "end": 1562, |
| "text": "Bergsma et al., 2008;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1563, |
| "end": 1593, |
| "text": "Schulte im Walde et al., 2008)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "However, the main semantic features in manual classification are actually diathesis alternations. Some studies have attempted automatic alternation detection using WordNet classes as SP models (Lapata, 1999; McCarthy, 2001 ), but no recent large-scale work has been conducted, and no attepts have been made to detect alternations in a fully unsupervised fashion. The time may now be ripe for this research and its integration in verb classification. The development of an optimal approach will require a good understanding of the linguistic basis of verb classification as well as adequate NLP and ML expertise. The approach will need to be general enough to cover most types of alternations, efficient enough for a large scale use and resistant to the problems of sparse data.", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 207, |
| "text": "(Lapata, 1999;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 208, |
| "end": 222, |
| "text": "McCarthy, 2001", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In section 3.2 we reviewed various supervised and unsupervised methods that have been used for automatic classification. For optimal results, the choice of a machine learning method is not random but involves understanding of the basic principles of the method and its suitability for the data and the task. For example, Sun and Korhonen (2009) obtained promising results in their recent experiment with SP features not only because the features made theoretical sense but also because the clustering method (spectral clustering) was particularly suited for the resulting, high dimensional feature space. Novel ML methods have been developed recently which combine clustering with an element of guidance based on a prior intuition and have useful properties such as not having to define the number of clusters in advance (e.g. unsupervised and constrained Dirichlet Process Mixture Models for verb clustering by Vlachos et al. (2009) ). This shows the benefit of following the recent developments in the ML community. However, semi-supervised approaches have not been used for the task yet (except for the sub-task of feature selection) although they are wellknown in the NLP community and would combine the benefits of supervised and unsupervised approaches (Abney, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 344, |
| "text": "Sun and Korhonen (2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 912, |
| "end": 933, |
| "text": "Vlachos et al. (2009)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 1259, |
| "end": 1272, |
| "text": "(Abney, 2008)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Polysemy is frequent in language. In particular, many high frequency verbs have several senses and can therefore be members of several classes. Most work on automatic classification has bypassed this issue by assuming a single class for each verb -usually the one corresponding to its predominating (the most frequent sense) in language according to e.g. WordNet. This is not only unrealistic thinking of real-world application of verb classes but also the predominating sense is not static but varies across domains and sub-languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polysemy", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Few attempts have been made to address this problem. Korhonen et al. (2003) performed a clustering experiment with highly polysemous verbs. They constructed a polysemous gold standard for c. 200 English verbs and examined whether a soft clustering method (Information Bottleneck) could be used to assign these verbs to several classes. The clustering turned out hard, with the majority of verbs being assigned to one class only. Yet the investigation showed that polysemy has a considerable impact on verb classification: optimal results were obtained with when clustering was evaluated against the polysemous gold standard, not the monosemous version of it which assumed the predominant sense according to WordNet.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 75, |
| "text": "Korhonen et al. (2003)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polysemy", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Clearly polysemy is an issue that needs to be dealt with, and this amounts to both extending gold standards to capture non-predominant senses as well as finding a suitable ML method. Recently a multi-label classification method was used for supervised adjective classification Boleda et al. (2007) which might yield useful results also with verbs. Also methods for modelling the overlap between lexical categories might be of use.", |
| "cite_spans": [ |
| { |
| "start": 277, |
| "end": 297, |
| "text": "Boleda et al. (2007)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Polysemy", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Most work on verb classification has been conducted on English. Considerable research has also been done on German (Schulte im Walde, 2006), but only small scale experiments exist on other languages, e.g. Chinese, Italian (Merlo et al., 2002) , Spanish (Ferrer, 2004) and Japanese (Oishi and Matsumoto, 1997) . Evaluating the applicability of classification techniques to several languages is critical for both theoretical and practical reasons; for 1) improving the accuracy, scalability and robustness of the techniques mainly developed for English or German, 2) advancing work in other languages, 3) gaining a better understanding of the language-specific / cross-linguistic components of lexical information (e.g. the extent to which the features used for English or German are also valid for other languages), and 4) in a long term, improving the performance of such multilingual NLP applications (e.g. machine translation, information extraction) which can benefit from verb classes.", |
| "cite_spans": [ |
| { |
| "start": 222, |
| "end": 242, |
| "text": "(Merlo et al., 2002)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 253, |
| "end": 267, |
| "text": "(Ferrer, 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 281, |
| "end": 308, |
| "text": "(Oishi and Matsumoto, 1997)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other languages and domains", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The same can be said also about different domains and sub-languages. The only work (which we are aware of) which has applied verb classification technology to a specific domain is that of Korhonen et al. (2008) . This work focussed on the important domain of biomedicine for which no large verb classification was available. It involved learning a classification using clustering technology originally developed for general English. The experiment revealed interesting facts about automatic classification, e.g. the fact that domain-specific classifications can be very different from general classifications (even the shared verb classes may have a specialised, narrower meaning). Also, the features performed differently than in general language classification. The fact that many domains tend to be more uniform or conventionalized in terms of language use has many consequences for automatic classification which require further investigation.", |
| "cite_spans": [ |
| { |
| "start": 188, |
| "end": 210, |
| "text": "Korhonen et al. (2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Other languages and domains", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Most evaluation has been quantitative in nature and involved the gold standards discussed earlier in section 3.3.1. While these gold standards provide suitably small test sets for thorough evaluation, it would be important to also investigate the extent to which existing approaches generalise across the entire language. Whilst the classification of over 5000 word senses offered by VerbNet may not be fully comprehensive, it does offer a valuable larger resource for evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "For many languages and domains, no evaluation resources are available. Both manual (Kipper et al., 2008) and semi-automatic methods (Korhonen et al., 2008) have been proposed for building gold standards from scratch. For example, in the recent work on biomedical verb classification, human experts (linguists and biologists) constructed a gold standard by examining verb classes formed on the basis on syntactic similarity and deciding which ones of them were also semantically related (Korhonen et al., 2008) . However, such work requires not only clear guidelines but also adequate linguistic and/or domain expertise.", |
| "cite_spans": [ |
| { |
| "start": 83, |
| "end": 104, |
| "text": "(Kipper et al., 2008)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 132, |
| "end": 155, |
| "text": "(Korhonen et al., 2008)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 486, |
| "end": 509, |
| "text": "(Korhonen et al., 2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "Some of the works have supplemented quantitative evaluation with qualitative analysis. This has required also linguistic (or domain) expertise, and interestingly, has not only helped to find error types but has often also shown that automatic classification can discover novel, valuable information in data, e.g. classes which are actually related although distinct in a gold standard or classes which are distinct in a gold standard although ought to be related (Schulte im Walde, 2006; Sun et al., 2008; Korhonen et al., 2008; Vlachos et al., 2009) . Qualitative evaluation can thus show the true potential of automatic classification and is therefore vital for further development of classification technology. Equally important is evaluation in the context of practical tasks and applications. To the best of our knowledge, no approaches to automatic verb classification have been evaluated in this manner, although the work on automatic verb classification is largely motivated by the practical potential of accurate and relevant classifications.", |
| "cite_spans": [ |
| { |
| "start": 475, |
| "end": 487, |
| "text": "Walde, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 488, |
| "end": 505, |
| "text": "Sun et al., 2008;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 506, |
| "end": 528, |
| "text": "Korhonen et al., 2008;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 529, |
| "end": 550, |
| "text": "Vlachos et al., 2009)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "During the past years, a lot has been achieved in automatic verb classification. Yet a lot remains to be done in terms of improving and extending current technology and applying it to larger data sets and novel (sub-)languages. This paper has discussed the various areas which require further improvement (ranging from features to evaluation techniques) and highlighted the fact that further improvements can only be obtained by combining the best available (computational) linguistic and ML expertise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "23rd Pacific Asia Conference on Language, Information and Computation, pages 19-28", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "See http://verbs.colorado.edu/verb-index/index.php for details.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "However, the correspondence is not perfect with half of the classes including two or more Levin's classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "A smaller-scale version of this comparison was presented in(Sun and Korhonen, 2009).4 Note that the different experiments did not necessarily employ identical sub-sets of GS1 so are not entirely comparable.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Semisupervised Learning for Computational Linguistics", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Abney", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abney, S. 2008. Semisupervised Learning for Computational Linguistics. Chapman and Hall.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Discriminative learning of selectional preference from unlabeled text", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bergsma", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Goebel", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bergsma, S., D. Lin, and R. Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Modelling polysemy in adjective classes by multi-label classification", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Boleda", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Badia", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Boleda, G., S. Schulte im Walde, and T. Badia. 2007. Modelling polysemy in adjective classes by multi-label classification. In Proc. of EMNLP-CoNLL.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Spectral clustering for German verbs", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brew, C. and S. Schulte im Walde. 2002. Spectral clustering for German verbs. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Investigations into the Role of Lexical Semantics in Word Sense Disambiguation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "T" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dang, H. T. 2004. Investigations into the Role of Lexical Semantics in Word Sense Disambiguation. PhD thesis, CIS, University of Pennsylvania.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A simple, similarity-based model for selectional preferences", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erk, K. 2007. A simple, similarity-based model for selectional preferences. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Towards a semantic classification of spanish verbs based on subcategorisation information", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [ |
| "E" |
| ], |
| "last": "Ferrer", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the ACL 2004 Workshop on Student Research", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferrer, E. E. 2004. Towards a semantic classification of spanish verbs based on subcategorisation information. In Proceedings of the ACL 2004 Workshop on Student Research.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Automatic Verb Classification Using a General Feature Space", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Joanis", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joanis, E. 2002. Automatic Verb Classification Using a General Feature Space. Master's thesis, University of Toronto.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A general feature space for automatic verb classification", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Joanis", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "James", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Natural Language Engineering", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Joanis, E., S. Stevenson, and D. James. 2008. A general feature space for automatic verb classifi- cation. Natural Language Engineering.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A large-scale classification of English verbs. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Kipper", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ryant", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kipper, K., A. Korhonen, N. Ryant, and M. Palmer. 2008. A large-scale classification of English verbs. Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "VerbNet: A broad-coverage, comprehensive verb lexicon", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Kipper-Schuler", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kipper-Schuler, K. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "The Choice of Features for Classification of Verbs in Biomedical Texts", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Krymolowski", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Collier", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Korhonen, A., Y. Krymolowski, and N. Collier. 2008. The Choice of Features for Classification of Verbs in Biomedical Texts. In Proc. of COLING.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Clustering polysemic subcategorization frame distributions semantically", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Krymolowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Marx", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "64--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Korhonen, A., Y. Krymolowski, and Z. Marx. 2003. Clustering polysemic subcategorization frame distributions semantically. In Proc. of ACL, pages 64-71.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Acquiring lexical generalizations from corpora: A case study for diathesis alternations", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lapata, M. 1999. Acquiring lexical generalizations from corpora: A case study for diathesis alternations. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "English verb classes and alternations: A preliminary investigation", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Levin", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Levin, B. 1993. English verb classes and alternations: A preliminary investigation. Chicago, IL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Which Are the Best Features for Automatic Verb Classification", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Brew", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, J. and C. Brew. 2008. Which Are the Best Features for Automatic Verb Classification. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Alternations, Subcategorization Frames and Selectional Preferences", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Mccarthy", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McCarthy, D. 2001. Lexical Acquisition at the Syntax-Semantics Interface: Diathesis Alternations, Subcategorization Frames and Selectional Preferences. PhD thesis, University of Sussex, UK.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A random walks view of spectral segmentation", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Meila", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meila, M. and J. Shi. 2001. A random walks view of spectral segmentation. AISTATS.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Automatic verb classification based on statistical distributions of argument structure", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "", |
| "pages": "373--408", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Merlo, P. and S. Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27:373-408.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "A multilingual paradigm for automatic verb classification", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Merlo", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Tsang", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Allaria", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Merlo, P., S. Stevenson, V. Tsang, and G. Allaria. 2002. A multilingual paradigm for automatic verb classification. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "WordNet: A lexical database for English", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the ACM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Miller, G. A. 1995. WordNet: A lexical database for English. Communications of the ACM.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Detecting the organization of semantic subclasses of Japanese verbs", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Oishi", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "In International Journal of Corpus Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "65--89", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Oishi, A. and Y. Matsumoto. 1997. Detecting the organization of semantic subclasses of Japanese verbs. In International Journal of Corpus Linguistics, volume 2, pages 65-89.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Semantic classification with distributional kernels", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "O S\u00e9aghdha", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Copestake", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O S\u00e9aghdha, D. and A. Copestake. 2008. Semantic classification with distributional kernels. In Proc. of COLING.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Experiments on the automatic induction of German semantic verb classes", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schulte im Walde, S. 2006. Experiments on the automatic induction of German semantic verb classes. Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Combining EM Training and the MDL Principle for an Automatic Verb Classification incorporating Selectional Preferences", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Schulte Im Walde", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Hying", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Scheible", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "496--504", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Schulte im Walde, S., C. Hying, C. Scheible, and H. Schmid. 2008. Combining EM Training and the MDL Principle for an Automatic Verb Classification incorporating Selectional Preferences. In Proc. of ACL, pages 496-504.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of CICLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shi, L. and R. Mihalcea. 2005. Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing. In Proc. of CICLING.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Semi-supervised verb class discovery using noisy features", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Joanis", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of HLT-NAACL 2003", |
| "volume": "", |
| "issue": "", |
| "pages": "71--78", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stevenson, S. and E. Joanis. 2003. Semi-supervised verb class discovery using noisy features. In Proc. of HLT-NAACL 2003, pages 71-78.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Improving Verb Clustering with Automatically Acquired Selectional Preferences", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, L. and A. Korhonen. 2009. Improving Verb Clustering with Automatically Acquired Selec- tional Preferences. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Verb class discovery from rich syntactic data", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Krymolowski", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Lecture Notes in Computer Science", |
| "volume": "4919", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sun, L., A. Korhonen, and Y. Krymolowski. 2008. Verb class discovery from rich syntactic data. Lecture Notes in Computer Science, 4919:16.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Unsupervised semantic role labelling", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Swier", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Stevenson", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Swier, R. and S. Stevenson. 2004. Unsupervised semantic role labelling. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Unsupervised and constrained Dirichlet process mixture models for verb clustering", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Vlachos", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Ghahramani", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of the Workshop on Geometrical Models of Natural Language Semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vlachos, A., A. Korhonen, and Z. Ghahramani. 2009. Unsupervised and constrained Dirichlet process mixture models for verb clustering. In Proc. of the Workshop on Geometrical Models of Natural Language Semantics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Robustness and generalization of role sets: Prop-Bank vs. VerbNet", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zapirain", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "M\u00e0rquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zapirain, B., E. Agirre, and L. M\u00e0rquez. 2008. Robustness and generalization of role sets: PropBank vs. VerbNet. In Proc. of ACL.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "The weighted class accuracy (ACC) is the proportion of members of dominant clusters DOM-CLUST_i within all classes c_i: ACC = (sum_{i=1}^{C} number of verbs in DOM-CLUST_i) / (number of verbs). mPUR and ACC have been used as measures of precision (P) and recall (R), respectively. F-measure has been calculated as the harmonic mean of P and R: F = (2 \u2022 mPUR \u2022 ACC) / (mPUR + ACC). The random baseline (BL) is typically calculated as follows: BL = 1 / number of classes.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "Both supervised and unsupervised machine learning (ML) methods have been used to classify verbs on the basis of the features discussed in the above section. Supervised methods yield optimal performance where adequate and accurate training data are available. A wide range of methods have been employed, including K Nearest Neighbours, Maximum Entropy, Support Vector Machines, Gaussian and Distributional Kernel methods, and Bayesian Multinomial Regression, among others.", |
| "content": "<table/>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "text": "Levin classes in GS1 and GS2", |
| "content": "<table><tr><td>GS1</td><td/><td>GS2</td><td/></tr><tr><td>Object Drop</td><td>26.{1,3,7}</td><td>Remove</td><td>10.1</td></tr><tr><td>Recipient</td><td>13.{1,3}</td><td>Send</td><td>11.1</td></tr><tr><td>Admire</td><td>31.2</td><td>Get</td><td>13.5.1</td></tr><tr><td>Amuse</td><td>31.1</td><td>Hit</td><td>18.1</td></tr><tr><td>Run</td><td>51.3.2</td><td>Amalgamate</td><td>22.2</td></tr><tr><td>Sound</td><td>43.2</td><td>Characterize</td><td>29.2</td></tr><tr><td>Light & Substance</td><td>43.{1,4}</td><td>Peer Amuse</td><td>30.3 31.1</td></tr><tr><td>Cheat</td><td>10.6</td><td>Correspond</td><td>36.1</td></tr><tr><td>Steal & Remove</td><td>10.{5,1}</td><td>Manner of speaking Say</td><td>37.3 37.7</td></tr><tr><td>Wipe</td><td>10.4.{1,2}</td><td>Nonverbal expression</td><td>40.2</td></tr><tr><td>Spray / Load</td><td>9.7</td><td>Light</td><td>43.1</td></tr><tr><td>Fill</td><td>9.8</td><td>Other change of state</td><td>45.4</td></tr><tr><td>Putting</td><td>9.1-6</td><td>Mode with motion</td><td>47.3</td></tr><tr><td>Change of State</td><td>45.1-4</td><td>Run</td><td>51.3.2</td></tr><tr><td/><td/><td>Put</td><td>9.1</td></tr></table>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Performance of recent approaches", |
| "content": "<table><tr><td/><td/><td>Method</td><td>Result</td></tr><tr><td/><td>Li et al. 2008</td><td>supervised</td><td>66.3</td></tr><tr><td>GS1</td><td>Joanis et al. 2008</td><td>supervised</td><td>58.4</td></tr><tr><td/><td>Stevenson et al. 2003</td><td>semi-supervised</td><td>29</td></tr><tr><td/><td>Stevenson et al. 2003</td><td>unsupervised</td><td>31</td></tr><tr><td/><td>Sun and Korhonen 2009</td><td>unsupervised</td><td>57.55</td></tr><tr><td>GS2</td><td>Sun et al. 2008</td><td>supervised</td><td>62.50</td></tr><tr><td/><td>Sun et al. 2008</td><td>unsupervised</td><td>51.6</td></tr><tr><td/><td>\u00d3 S\u00e9aghdha et al. 2008</td><td>supervised</td><td>67.3</td></tr><tr><td/><td>Sun and Korhonen 2009</td><td>unsupervised</td><td>80.35</td></tr></table>", |
| "num": null, |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |