| { |
| "paper_id": "N09-1002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:43:25.868089Z" |
| }, |
| "title": "Integrating Knowledge for Subjectivity Sense Labeling", |
| "authors": [ |
| { |
| "first": "Yaw", |
| "middle": [], |
| "last": "Gyamfi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pittsburgh", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pittsburgh", |
| "location": {} |
| }, |
| "email": "wiebe@cs.pitt.edu" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of North Texas", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Cem", |
| "middle": [], |
| "last": "Akkaya", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Pittsburgh", |
| "location": {} |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper introduces an integrative approach to automatic word sense subjectivity annotation. We use features that exploit the hierarchical structure and domain information in lexical resources such as WordNet, as well as other types of features that measure the similarity of glosses and the overlap among sets of semantically related words. Integrated in a machine learning framework, the entire set of features is found to give better results than any individual type of feature.", |
| "pdf_parse": { |
| "paper_id": "N09-1002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper introduces an integrative approach to automatic word sense subjectivity annotation. We use features that exploit the hierarchical structure and domain information in lexical resources such as WordNet, as well as other types of features that measure the similarity of glosses and the overlap among sets of semantically related words. Integrated in a machine learning framework, the entire set of features is found to give better results than any individual type of feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Automatic extraction of opinions, emotions, and sentiments in text (subjectivity analysis) to support applications such as product review mining, summarization, question answering, and information extraction is an active area of research in NLP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many approaches to opinion, sentiment, and subjectivity analysis rely on lexicons of words that may be used to express subjectivity. However, words may have both subjective and objective senses, which is a source of ambiguity in subjectivity and sentiment analysis. We show that even words judged in previous work to be reliable clues of subjectivity have significant degrees of subjectivity sense ambiguity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To address this ambiguity, we present a method for automatically assigning subjectivity labels to word senses in a taxonomy, which uses new features and integrates more diverse types of knowledge than in previous work. We focus on nouns, which are challenging and have received less attention in automatic subjectivity and sentiment analysis.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A common approach to building lexicons for subjectivity analysis is to begin with a small set of seeds which are prototypically subjective (or positive/negative, in sentiment analysis), and then follow semantic links in WordNet-like resources. By far, the emphasis has been on horizontal relations, such as synonymy and antonymy. Exploiting vertical links opens the door to taking into account the information content of ancestor concepts of senses with known and unknown subjectivity. We develop novel features that measure the similarity of a target word sense with a seed set of senses known to be subjective, where the similarity between two concepts is determined by the extent to which they share information, measured by the information content associated with their least common subsumer (LCS). Further, particularizing the LCS features to domain greatly reduces calculation while still maintaining effective features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We find that our new features do lead to significant improvements over methods proposed in previous work, and that the combination of all features gives significantly better performance than any single type of feature alone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We also ask, given that there are many approaches to finding subjective words, if it would make sense for word-and sense-level approaches to work in tandem, or should we best view them as competing approaches? We give evidence suggesting that first identifying subjective words and then disambiguating their senses would be an effective approach to subjectivity sense labeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There are several motivations for assigning subjectivity labels to senses. First, (Wiebe and Mihalcea, 2006) provide evidence that word sense labels, together with contextual subjectivity analysis, can be exploited to improve performance in word sense disambiguation. Similarly, given subjectivity sense labels, word-sense disambiguation may potentially help contextual subjectivity analysis. In addition, as lexical resources such as WordNet are developed further, subjectivity labels would provide principled criteria for refining word senses, as well as for clustering similar meanings to create more coursegrained sense inventories.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 108, |
| "text": "(Wiebe and Mihalcea, 2006)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For many opinion mining applications, polarity (positive, negative) is also important. The overall framework we envision is a layered approach: classifying instances as objective or subjective, and further classifying the subjective instances by polarity. Decomposing the problem into subproblems has been found to be effective for opinion mining. This paper addresses the first of these subproblems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We adopt the definitions of subjective and objective from Wiebe and Mihalcea (2006) (hereafter WM) . Subjective expressions are words and phrases being used to express opinions, emotions, speculations, etc. WM give the following examples:", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 98, |
| "text": "Wiebe and Mihalcea (2006) (hereafter WM)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "His alarm grew. He absorbed the information quickly. UCC/Disciples leaders roundly condemned the Iranian President's verbal assault on Israel.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Background", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Polarity (also called semantic orientation) is also important to NLP applications in sentiment analysis and opinion extraction. In review mining, for example, we want to know whether an opinion about a product is positive or negative. Even so, we believe there are strong motivations for a separate subjective/objective (S/O) classification as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "First, expressions may be subjective but not have any particular polarity. An example given by (Wilson et al., 2005) is Jerome says the hospital feels no different than a hospital in the states. An NLP application system may want to find a wide range of private states attributed to a person, such as their motivations, thoughts, and speculations, in addition to their positive and negative sentiments.", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 116, |
| "text": "(Wilson et al., 2005)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "Second, distinguishing S and O instances has often proven more difficult than subsequent polarity classification. Researchers have found this at various levels of analysis, including the manual annotation of phrases (Takamura et al., 2006) , sentiment classification of phrases (Wilson et al., 2005) , sentiment tagging of words (Andreevskaia and Bergler, 2006b) , and sentiment tagging of word senses (Esuli and Sebastiani, 2006a) . Thus, effective methods for S/O classification promise to improve performance for sentiment classification. In fact, researchers in sentiment analysis have realized benefits by decomposing the problem into S/O and polarity classification (Yu and Hatzivassiloglou, 2003; Pang and Lee, 2004; Wilson et al., 2005; Kim and Hovy, 2006) . One reason is that different features may be relevant for the two subproblems. For example, negation features are more important for polarity classification than for subjectivity classification.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 239, |
| "text": "(Takamura et al., 2006)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 278, |
| "end": 299, |
| "text": "(Wilson et al., 2005)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 329, |
| "end": 362, |
| "text": "(Andreevskaia and Bergler, 2006b)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 402, |
| "end": 431, |
| "text": "(Esuli and Sebastiani, 2006a)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 672, |
| "end": 703, |
| "text": "(Yu and Hatzivassiloglou, 2003;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 704, |
| "end": 723, |
| "text": "Pang and Lee, 2004;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 724, |
| "end": 744, |
| "text": "Wilson et al., 2005;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 745, |
| "end": 764, |
| "text": "Kim and Hovy, 2006)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that some of our features require vertical links that are present in WordNet for nouns and verbs but not for other parts of speech. Thus we address nouns (leaving verbs to future work). There are other motivations for focusing on nouns. Relatively little work in subjectivity and sentiment analysis has focused on subjective nouns. Also, a study (Bruce and Wiebe, 1999) showed that, of the major parts of speech, nouns are the most ambiguous with respect to the subjectivity of their instances.", |
| "cite_spans": [ |
| { |
| "start": 351, |
| "end": 374, |
| "text": "(Bruce and Wiebe, 1999)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "Turning to word senses, we adopt the definitions from WM. First, subjective: \"Classifying a sense as S means that, when the sense is used in a text or conversation, we expect it to express subjectivity; we also expect the phrase or sentence containing it to be subjective [WM, .\"", |
| "cite_spans": [ |
| { |
| "start": 272, |
| "end": 276, |
| "text": "[WM,", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "In WM, it is noted that sentences containing objective senses may not be objective, as in the sentence Will someone shut that darn alarm off? Thus, objective senses are defined as follows: \"Classifying a sense as O means that, when the sense is used in a text or conversation, we do not expect it to express subjectivity and, if the phrase or sentence containing it is subjective, the subjectivity is due to something else [WM, p 3] .\"", |
| "cite_spans": [ |
| { |
| "start": 423, |
| "end": 427, |
| "text": "[WM,", |
| "ref_id": null |
| }, |
| { |
| "start": 428, |
| "end": 432, |
| "text": "p 3]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "The following subjective examples are given in WM:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "His alarm grew. alarm, dismay, consternation -(fear resulting from the awareness of danger) => fear, fearfulness, fright -(an emotion experienced in anticipation of some specific pain or danger (usually accompanied by a desire to flee or fight))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "What's the catch? catch -(a hidden drawback; \"it sounds good but what's the catch?\") => drawback -(the quality of being a hindrance; \"he pointed out all the drawbacks to my plan\")", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "The following objective examples are given in WM:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "The alarm went off. alarm, warning device, alarm system -(a device that signals the occurrence of some undesirable event) => device -(an instrumentality invented for a particular purpose; \"the device is small enough to wear on your wrist\"; \"a device intended to conserve water\")", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "He sold his catch at the market. catch, haul -(the quantity that was caught; \"the catch was only 10 fish\") => indefinite quantity -(an estimated quantity)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "WM performed an agreement study and report that good agreement (\u03ba=0.74) can be achieved between human annotators labeling the subjectivity of senses. For a similar task, (Su and Markert, 2008) also report good agreement.", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 192, |
| "text": "(Su and Markert, 2008)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "What's the catch?", |
| "sec_num": null |
| }, |
| { |
| "text": "Many methods have been developed for automatically identifying subjective (opinion, sentiment, attitude, affect-bearing, etc.) words, e.g., (Turney, 2002; Riloff and Wiebe, 2003; Kim and Hovy, 2004; Taboada et al., 2006; Takamura et al., 2006) .", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 154, |
| "text": "(Turney, 2002;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 155, |
| "end": 178, |
| "text": "Riloff and Wiebe, 2003;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 179, |
| "end": 198, |
| "text": "Kim and Hovy, 2004;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 199, |
| "end": 220, |
| "text": "Taboada et al., 2006;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 221, |
| "end": 243, |
| "text": "Takamura et al., 2006)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Five groups have worked on subjectivity sense labeling. WM and Su and Markert (2008) (Valitutti et al., 2004) assign polarity labels.", |
| "cite_spans": [ |
| { |
| "start": 63, |
| "end": 84, |
| "text": "Su and Markert (2008)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 85, |
| "end": 109, |
| "text": "(Valitutti et al., 2004)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "WM, SM, and ES have evaluated their systems against manually annotated word-sense data. WM's annotations are described above; SM's are similar. In the scheme ES use (Cerini et al., 2007) , senses are assigned three scores, for positivity, negativity, and neutrality. There is no unambiguous mapping between the labels of WM/SM and ES, first because WM/SM use distinct classes and ES use numerical ratings, and second because WM/SM distinguish between objective senses on the one hand and neutral subjective senses on the other, while those are both neutral in the scheme used by ES.", |
| "cite_spans": [ |
| { |
| "start": 165, |
| "end": 186, |
| "text": "(Cerini et al., 2007)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "WM use an unsupervised corpus-based approach, in which subjectivity labels are assigned to word senses based on a set of distributionally similar words in a corpus annotated with subjective expressions. SM explore methods that use existing resources that do not require manually annotated data; they also implement a supervised system for comparison, which we will call SMsup. The other three groups start with positive and negative seed sets and expand them by adding synonyms and antonyms, and traversing horizontal links in WordNet. AB, ES, and SMsup additionally use information contained in glosses; AB also use hyponyms; SMsup also uses relation and POS features. AB perform multiple runs of their system to assign fuzzy categories to senses. ES use a semi-supervised, multiple-classifier learning approach. In a later paper, (Esuli and Sebastiani, 2007) , ES again use information in glosses, applying a random walk ranking algorithm to a graph in which synsets are linked if a member of the first synset appears in the gloss of the second.", |
| "cite_spans": [ |
| { |
| "start": 832, |
| "end": 860, |
| "text": "(Esuli and Sebastiani, 2007)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Like ES and SMsup, we use machine learning, but with more diverse sources of knowledge. Further, several of our features are novel for the task. The LCS features (Section 6.1) detect subjectivity by measuring the similarity of a candidate word sense with a seed set. WM also use a similarity measure, but as a way to filter the output of a measure of distributional similarity (selecting words for a given word sense), not as we do to cumulatively calculate the subjectivity of a word sense. Another novel aspect of our similarity features is that they are particularized to domain, which greatly reduces calculation. The domain subjectivity LCS features (Section 6.2) are also novel for our task. So is augmenting seed sets with monosemous words, for greater coverage without requiring human intervention or sacrificing quality. Note that none of our features as we specifically define them has been used in previous work; combining them together, our approach outperforms previous approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use the subjectivity lexicon of (Wiebe and Riloff, 2005) 1 both to create a subjective seed set and to create the experimental data sets. The lexicon is a list of words and phrases that have subjective uses, though only word entries are used in this paper (i.e., we do not address phrases at this point). Some entries are from manually developed resources, including the General Inquirer, while others were derived from corpora using automatic methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon and Annotations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Through manual review and empirical testing on data, (Wiebe and Riloff, 2005) divided the clues into strong (strongsubj) and weak (weaksubj) subjectivity clues. Strongsubj clues have subjective meanings with high probability, and weaksubj clues have subjective meanings with lower probability.", |
| "cite_spans": [ |
| { |
| "start": 53, |
| "end": 77, |
| "text": "(Wiebe and Riloff, 2005)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon and Annotations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To support our experiments, we annotated the senses 2 of polysemous nouns selected from the lexicon, using WM's annotation scheme described in Section 2. Due to time constraints, only some of the data was labeled through consensus labeling by two annotators; the rest was labeled by one annotator.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon and Annotations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Overall, 2875 senses for 882 words were annotated. Even though all are senses of words from the subjectivity lexicon, only 1383 (48%) of the senses are subjective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon and Annotations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The words labeled strongsubj are in fact less ambiguous than those labeled weaksubj in our analysis, thus supporting the reliability classifications in the lexicon. 55% (1038/1924) of the senses of strongsubj words are subjective, while only 36% (345/951) of the senses of weaksubj words are subjective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon and Annotations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For the analysis in Section 7.3, we form subsets of the data annotated here to test performance of our method on different data compositions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicon and Annotations", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Both subjective and objective seed sets are used to define the features described below. For seeds, a large number is desirable for greater coverage, although high quality is also important. We begin to build our subjective seed set by adding the monosemous strongsubj nouns of the subjectivity lexicon (there are 397 of these). Since they are monosemous, they pose no problem of sense ambiguity. We then expand the set with their hyponyms, as they were found useful in previous work by AB (2006b; 2006a) . This yields a subjective seed set of 645 senses. After removing the word senses that belong to the same synset, so that only one word sense per synset is left, we ended up with 603 senses.", |
| "cite_spans": [ |
| { |
| "start": 490, |
| "end": 504, |
| "text": "(2006b; 2006a)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed Sets", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To create the objective seed set, two annotators manually annotated 800 random senses from Word-Net, and selected for the objective seed set the ones they both agreed are clearly objective. This creates an objective seed set of 727. Again we removed multiple senses from the same synset leaving us with 722. The other 73 senses they annotated are added to the mixed data set described below. As this sampling shows, WordNet nouns are highly skewed toward objective senses, so finding an objective seed set is not difficult.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed Sets", |
| "sec_num": "5" |
| }, |
| { |
| "text": "This feature measures the similarity of a target sense with members of the subjective seed set. Here, similarity between two senses is determined by the extent to which they share information, measured by using the information content associated with their least common subsumer. For an intuition behind this feature, consider this example. In WordNet, the hypernym of the \"strong criticism\" sense of attack is criticism. Several other negative subjective senses are descendants of criticism, including the relevant senses of fire, thrust, and rebuke. Going up one more level, the hypernym of criticism is the \"expression of disapproval\" meaning of disapproval, which has several additional negative subjective descendants, such as the \"expression of opposition and disapproval\" sense of discouragement. Our hypothesis is that the cases where subjectivity is preserved in the hypernym structure, or where hypernyms do lead from subjective senses to others, are the ones that have the highest least common subsumer score with the seed set of known subjective senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We calculate similarity using the informationcontent based measure proposed in (Resnik, 1995) , as implemented in the WordNet::Similarity package (using the default option in which LCS values are computed over the SemCor corpus). 3 Given a taxonomy such as WordNet, the information content associated with a concept is determined as the likelihood of encountering that concept, defined as \u2212log(p(C)), where p(C) is the probability of seeing concept C in a corpus. The similarity between two concepts is then defined in terms of information content as:", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 93, |
| "text": "(Resnik, 1995)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 230, |
| "end": 231, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "LCS s (C 1 , C 2 ) = max[\u2212log(p(C))],", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "where C is the concept that subsumes both C 1 and C 2 and has the highest information content (i.e., it is the least common subsumer (LCS)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "For this feature, a score is assigned to a target sense based on its semantic similarity to the members of a seed set; in particular, the maximum such similarity is used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "For a target sense t and a seed set S, we could have used the following score:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Score(t, S) = max s\u2208S LCS s (t, s)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "However, several researchers have noted that subjectivity may be domain specific. A version of WordNet exists, WordNet Domains (Gliozzo et al., 2005) , which associates each synset with one of the domains in the Dewey Decimal library classification. After sorting our subjective seed set into different domains, we observed that over 80% of the subjective seed senses are concentrated in six domains (the rest are distributed among 35 domains).", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 149, |
| "text": "(Gliozzo et al., 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Thus, we decided to particularize the semantic similarity feature to domain, such that only the subset of the seed set in the same domain as the target sense is used to compute the feature. This involves much less calculation, as LCS values are calculated only with respect to a subset of the seed set. We hypothesized that this would still be an effective feature, while being more efficient to calculate. This will be important when this method is applied to large resources such as the entire WordNet.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "Thus, for seed set S and target sense t which is in domain D, the feature is defined as the following score:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "SenseLCSscore(t, D, S) = max d\u2208D\u2229S LCS s (t, d)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The seed set is a parameter, so we could have defined a feature reflecting similarity to the objective seed set as well. Since WordNet is already highly skewed toward objective noun senses, any naive classifier need only guess the majority class for high accuracy for the objective senses. We in-cluded only a subjective feature to put more emphasis on the subjective senses. In the future, features could be defined with respect to objectivity, as well as polarity and other properties of subjectivity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sense Subjectivity LCS Feature", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We also include a feature reflecting the subjectivity of the domain of the target sense. Domains are assigned scores as follows. For domain D and seed set S:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Subjectivity LCS Score", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "DomainLCSscore(D, S) = ave d\u2208D\u2229S M emLCSscore(d, D, S)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Subjectivity LCS Score", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "where:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Subjectivity LCS Score", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "M emLCSscore(d, D, S) = max d i \u2208D\u2229S,d i =d LCS s (d, d i )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Subjectivity LCS Score", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The value of this feature for a sense is the score assigned to that sense's domain.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Domain Subjectivity LCS Score", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "This feature is based on the intersection between the set of senses related (via WordNet relations) to the target sense and the set of senses related to members of a seed set. First, for the target sense and each member of the seed set, a set of related senses is formed consisting of its synonyms, antonyms and direct hypernyms as defined by WordNet. For a sense s, R(s) is s together with its related senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Common Related Senses", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Then, given a target sense t and a seed set S we compute an average percentage overlap as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Common Related Senses", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "RelOverlap(t, S) = s i \u2208S |R(t)\u2229R(s i )| max (|R(t)|,|R(s i )|) |S|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Common Related Senses", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The value of a feature is its score. Two features are included in the experiments below, one for each of the subjective and objective seed sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Common Related Senses", |
| "sec_num": "6.3" |
| }, |
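As a sketch, RelOverlap can be computed directly from the related-sense sets. The senses and R(.) sets below are toy values of our own, not data from the paper.

```python
def rel_overlap(t, seed_set, related):
    # Average over seed senses s_i of |R(t) ∩ R(s_i)| / max(|R(t)|, |R(s_i)|).
    rt = related[t]
    total = sum(len(rt & related[s]) / max(len(rt), len(related[s]))
                for s in seed_set)
    return total / len(seed_set)

# R(s): the sense itself plus its synonyms, antonyms, and direct hypernyms.
related = {"t": {"t", "x", "y"},
           "s1": {"s1", "x"},
           "s2": {"s2", "x", "y", "z"}}
score = rel_overlap("t", ["s1", "s2"], related)
```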
| { |
| "text": "These features are Lesk-style features (Lesk, 1986) that exploit overlaps between glosses of target and seed senses. We include two types in our work.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 51, |
| "text": "(Lesk, 1986)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Gloss-based features", |
| "sec_num": "6.4" |
| }, |
| { |
| "text": "Features For a sense s, gloss(s) is the set of stems in the gloss of s (excluding stop words). Then, given a tar-get sense t and a seed set S, we compute an average percentage overlap as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Average Percentage Gloss Overlap", |
| "sec_num": "6.4.1" |
| }, |
| { |
| "text": "GlOverlap(t, S) = s i \u2208S | gloss(t)\u2229\u222a r\u2208R(s i ) gloss(r) | max (|gloss(t)|,|\u222a r\u2208R(s i ) gloss(r)|) |S|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Average Percentage Gloss Overlap", |
| "sec_num": "6.4.1" |
| }, |
| { |
| "text": "As above, R(s) is considered for each seed sense s, but now only the target sense t is considered, not R(t). We did this because we hypothesized that the gloss can provide sufficient context for a given target sense, so that the addition of related words is not necessary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Average Percentage Gloss Overlap", |
| "sec_num": "6.4.1" |
| }, |
| { |
| "text": "We include two features, one for each of the subjective and objective seed sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Average Percentage Gloss Overlap", |
| "sec_num": "6.4.1" |
| }, |
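GlOverlap can be sketched the same way. Again the glosses and related-sense sets are toy stand-ins of our own, with stop words assumed already removed.

```python
def gl_overlap(t, seed_set, gloss, related):
    # Average over seed senses s_i of
    #   |gloss(t) ∩ U_i| / max(|gloss(t)|, |U_i|),
    # where U_i is the union of the glosses of all senses in R(s_i).
    gt = gloss[t]
    total = 0.0
    for s in seed_set:
        union = set().union(*(gloss[r] for r in related[s]))
        total += len(gt & union) / max(len(gt), len(union))
    return total / len(seed_set)

# Toy stem sets for glosses (stems are hypothetical).
gloss = {"t": {"good", "feel"},
         "s1": {"good"}, "r1": {"feel", "emot"},
         "s2": {"bad", "judg"}}
related = {"s1": {"s1", "r1"}, "s2": {"s2"}}
score = gl_overlap("t", ["s1", "s2"], gloss, related)
```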
| { |
| "text": "For this feature we also consider overlaps of stems in glosses (excluding stop words). The overlaps considered are between the gloss of the target sense t and the glosses of R(s) for all s in a seed set (for convenience, we will refer to these as seedRelationSets).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vector Gloss Overlap Features", |
| "sec_num": "6.4.2" |
| }, |
| { |
| "text": "A vector of stems is created, one for each stem (excluding stop words) that appears in a gloss of a member of seedRelationSets. If a stem in the gloss of the target sense appears in this vector, then the vector entry for that stem is the total count of that stem in the glosses of the target sense and all members of seedRelationSets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vector Gloss Overlap Features", |
| "sec_num": "6.4.2" |
| }, |
| { |
| "text": "A feature is created for each vector entry whose value is the count at that position. Thus, these features consider counts of individual stems, rather than average proportions of overlaps, as for the previous type of gloss feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vector Gloss Overlap Features", |
| "sec_num": "6.4.2" |
| }, |
| { |
| "text": "Two vectors of features are used, one where the seed set is the subjective seed set, and one where it is the objective seed set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vector Gloss Overlap Features", |
| "sec_num": "6.4.2" |
| }, |
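The vector construction can be sketched as follows, under our reading that glosses are stem lists and entries are token counts; the function name and the stems are hypothetical.

```python
from collections import Counter

def gloss_vector_features(target_gloss, seed_relation_glosses):
    # One entry per stem occurring in any gloss of seedRelationSets.
    seed_counts = Counter()
    for g in seed_relation_glosses:
        seed_counts.update(g)
    target_counts = Counter(target_gloss)
    # Entry = total count over the target gloss plus all seed-related glosses,
    # but only if the stem also appears in the target gloss; otherwise 0.
    return {stem: (seed_counts[stem] + target_counts[stem]) if stem in target_counts else 0
            for stem in seed_counts}

feats = gloss_vector_features(["strong", "feel"],
                              [["strong", "emot"], ["strong", "judg"]])
```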
| { |
| "text": "In summary, we use the following features (here, SS is the subjective seed set and OS is the objective one). ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Summary", |
| "sec_num": "6.5" |
| }, |
| { |
| "text": "We perform 10-fold cross validation experiments on several data sets, using SVM light (Joachims, 1999) 4 under its default settings. Based on our random sampling of WordNet, it appears that WordNet nouns are highly skewed toward objective senses. (Esuli and Sebastiani, 2007) argue that random sampling from WordNet would yield a corpus mostly consisting of objective (neutral) senses, which would be \"pretty useless as a benchmark for testing derived lexical resources for opinion mining [p. 428] .\" So, they use a mixture of subjective and objective senses in their data set.", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 275, |
| "text": "(Esuli and Sebastiani, 2007)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 489, |
| "end": 497, |
| "text": "[p. 428]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "7" |
| }, |
| { |
| "text": "To create a mixed corpus for our task, we annotated a second random sample from WordNet (which is as skewed as the previously mentioned one). We added together all of the senses of words in the lexicon which we annotated, the leftover senses from the selection of objective seed senses, and this new sample. We removed duplicates, multiple senses from the same synset, and any senses belonging to the same synset in either of the seed sets. This resulted in a corpus of 2354 senses, 993 (42.18%) of which are subjective and 1361 (57.82%) of which are objective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "7" |
| }, |
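The reported class distribution follows directly from the corpus counts:

```python
subjective, objective = 993, 1361
total = subjective + objective                         # 2354 senses
pct_subjective = round(100 * subjective / total, 2)    # 42.18
pct_objective = round(100 * objective / total, 2)      # 57.82
```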
| { |
| "text": "The results with all of our features on this mixed corpus are given in Row 1 of Table 1. In Table 1 , the first column identifies the features, which in this case is all of them. The next three columns show overall accuracy, and precision and recall for finding subjective senses. The baseline accuracy for the mixed data set (guessing the more frequent class, which is objective) is 57.82%. As the table shows, the accuracy is substantially above baseline. 5", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 92, |
| "end": 99, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In this section, we seek to gain insights by performing ablation studies, evaluating our method on different data compositions, and comparing our results to previous results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis and Discussion", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Since there are several features, we divided them into sets for the ablation studies. The vector-ofgloss-words features are the most similar to ones used in previous work. Thus, we opted to treat them as one ablation group (Gloss vector). The Overlaps group includes the RelOverlap(t, SS), RelOverlap(t, OS), GlOverlap(t, SS), and GlOverlap(t, OS) features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ablation Studies", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Finally, the LCS group includes the SenseLCSscore and the DomainLCSscore features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ablation Studies", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "There are two types of ablation studies. In the first, one group of features at a time is included. Those results are in the middle section of Table 1 . Thus, for example, the row labeled LCS in this section is for an experiment using only the LCS features. In comparison to performance when all features are used, F-measure for the Overlaps and LCS ablations is significantly different at the p < .01 level, and, for the Gloss Vector ablation, it is significantly different at the p = .052 level (one-tailed t-test). Thus, all of the features together have better performance than any single type of feature alone.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 143, |
| "end": 150, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ablation Studies", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "In the second type of ablation study, we use all the features minus one group of features at a time. The results are in the bottom section of Table 1 . Thus, for example, the row labeled LCS in this section is for an experiment using all but the LCS features. F-measures for LCS and Gloss vector are significantly different at the p = .056 and p = .014 levels, respectively. However, F-measure for the Overlaps ablation is not significantly different (p = .39). These results provide evidence that LCS and Gloss vector are better together than either of them alone.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 149, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ablation Studies", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "Several methods have been developed for identifying subjective words. Perhaps an effective strategy would be to begin with a word-level subjectivity lexicon, and then perform subjectivity sense labeling to sort the subjective from objective senses of those words. We also wondered about the relative effectiveness of our method on strongsubj versus weaksubj clues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results on Different Data Sets", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "To answer these questions, we apply the full model (again in 10-fold cross validation experiments) to data sets composed of senses of polysemous words in the subjectivity lexicon. To support comparison, all of the data sets in this section have a 50%-50% objective/subjective distribution. 6 The results are presented in Table 2. For comparison, the first row repeats the results for the mixed corpus from Table 1. The second row shows results for a corpus of senses of a mixture of strongsubj and weaksubj words. The corpus was created by selecting a mixture of strongsubj and weaksubj words, extracting their senses and the S/O labels applied to them in Section 4, and then randomly removing senses of the more frequent class until the distribution is uniform. We see that the results on this corpus are better than on the mixed data set, even though the baseline accuracy is lower and the corpus is smaller. This supports the idea that an effective strategy would be to first identify opinion-bearing words, and then apply our method to those words to sort out their subjective and objective senses.", |
| "cite_spans": [ |
| { |
| "start": 290, |
| "end": 291, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 321, |
| "end": 329, |
| "text": "Table 2.", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results on Different Data Sets", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "The third row shows results for a weaksubj subset of the strong+weak corpus and the fourth shows results for a strongsubj subset that is of the same size. As expected, the results for the weaksubj senses are lower while those for the strongsubj senses are higher, as weaksubj clues are more ambiguous.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results on Different Data Sets", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "WM and SM address the same task as we do. To compare our results to theirs, we apply our full model (in 10-fold cross validation experiments) to their data sets. 7 Table 3 has the WM data set results. WM rank their senses and present their results in the form of precision recall curves. The second row of Table 3 shows their results at the recall level achieved by our method (66%). Their precision at that level is substantially below ours.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 171, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 306, |
| "end": 313, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparisons with Previous Work", |
| "sec_num": "7.4" |
| }, |
| { |
| "text": "Turning to ES, to create S/O annotations, we applied the following heuristic mapping (which is also used by SM for the purpose of comparison): any sense for which the sum of positive and negative scores is greater than or equal to 0.5 is S, otherwise it is O. We then evaluate the mapped tags against the gold standard of WM. The results are in Row 3 of Table 3 . Note that this mapping is not fair to Sen-tiWordNet, as the tasks are quite different, and we do not believe any conclusions can be drawn. We include the results to eliminate the possibility that their method is as good ours on our task, despite the differences between the tasks. Table 4 has the results for the noun subset of SM's 7 The WM data set is available at http://www.cs.pitt.edu/www.cs.pitt.edu/\u02dcwiebe. ES applied their method in (2006b) to WordNet, and made the results available as SentiWordNet at http://sentiwordnet.isti.cnr.it/. data set, which is the data set used by ES, reannotated by SM. CV* is their supervised system and SL* is their best non-supervised one. Our method has higher F-measure than the others. 8 Note that the focus of SM's work is not supervised machine learning.", |
| "cite_spans": [ |
| { |
| "start": 697, |
| "end": 698, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 354, |
| "end": 361, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 645, |
| "end": 652, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Comparisons with Previous Work", |
| "sec_num": "7.4" |
| }, |
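The S/O mapping heuristic described above amounts to a one-line threshold test; the function name and the threshold parameter are our own labels for it.

```python
def map_scores_to_so(pos_score, neg_score, threshold=0.5):
    # A sense is subjective (S) iff its positive and negative scores
    # sum to at least the threshold; otherwise it is objective (O).
    return "S" if pos_score + neg_score >= threshold else "O"
```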
| { |
| "text": "In this paper, we introduced an integrative approach to automatic subjectivity word sense labeling which combines features exploiting the hierarchical structure and domain information of WordNet, as well as similarity of glosses and overlap among sets of semantically related words. There are several contributions. First, we learn several things. We found (in Section 4) that even reliable lists of subjective (opinion-bearing) words have many objective senses. We asked if word-and sense-level approaches could be used effectively in tandem, and found (in Section 7.3) that an effective strategy is to first identify opinion-bearing words, and then apply our method to sort out their subjective and objective senses. We also found (in Section 7.2) that the entire set of features gives better results than any individual type of feature alone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Second, several of the features are novel for our task, including those exploiting the hierarchical structure of a lexical resource, domain information, and relations to seed sets expanded with monosemous senses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Finally, the combination of our particular features is effective. For example, on senses of words from a subjectivity lexicon, accuracies range from 20 to 29 percentage points above baseline. Further, our combination of features outperforms previous approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Available at http://www.cs.pitt.edu/mpqa 2 In WordNet 2.0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://search.cpan.org/dist/WordNet-Similarity/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://svmlight.joachims.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that, because the majority class is O, baseline recall (and thus F-measure) is 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "As with the mixed data set, we removed from these data sets multiple senses from the same synset and any senses in the same synset in either of the seed sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We performed the same type of evaluation as in SM's paper. That is, we assign a subjectivity label to one word sense for each synset, which is the same as applying a subjectivity label to a synset as a whole as done by SM.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported in part by National Science Foundation awards #0840632 and #0840608. The authors are grateful to Fangzhong Su and Katja Markert for making their data set available, and to the three paper reviewers for their helpful suggestions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Mining wordnet for a fuzzy sentiment: Sentiment tag extraction from wordnet glosses", |
| "authors": [ |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Andreevskaia", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Bergler", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 11rd Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alina Andreevskaia and Sabine Bergler. 2006a. Mining wordnet for a fuzzy sentiment: Sentiment tag extrac- tion from wordnet glosses. In Proceedings of the 11rd Conference of the European Chapter of the Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Sentiment tag extraction from wordnet glosses", |
| "authors": [ |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Andreevskaia", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Bergler", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 5th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alina Andreevskaia and Sabine Bergler. 2006b. Sen- timent tag extraction from wordnet glosses. In Pro- ceedings of 5th International Conference on Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Recognizing subjectivity: A case study of manual tagging", |
| "authors": [ |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Bruce", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Natural Language Engineering", |
| "volume": "5", |
| "issue": "2", |
| "pages": "187--205", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rebecca Bruce and Janyce Wiebe. 1999. Recognizing subjectivity: A case study of manual tagging. Natural Language Engineering, 5(2):187-205.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Micro-wnop: A gold standard for the evaluation of automatically compiled lexical resources for opinion mining", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cerini", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Campagnoni", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Demontis", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Formentelli", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Gandini", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Language resources and linguistic theory: Typology, second language acquisition", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Cerini, V. Campagnoni, A. Demontis, M. Formentelli, and C. Gandini. 2007. Micro-wnop: A gold standard for the evaluation of automatically compiled lexical re- sources for opinion mining. In Language resources and linguistic theory: Typology, second language ac- quisition, English linguistics. Milano.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Determining term subjectivity and term orientation for opinion mining", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "11th Meeting of the European Chapter", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006a. Determin- ing term subjectivity and term orientation for opinion mining. In 11th Meeting of the European Chapter of the Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Senti-WordNet: A publicly available lexical resource for opinion mining", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 5th Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006b. Senti- WordNet: A publicly available lexical resource for opinion mining. In Proceedings of the 5th Conference on Language Resources and Evaluation, Genova, IT.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "PageRanking wordnet synsets: An application to opinion mining", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "424--431", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2007. PageRank- ing wordnet synsets: An application to opinion min- ing. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 424- 431, Prague, Czech Republic, June.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Automatic acquisition of domain specific lexicons", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gliozzo", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Strapparava", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Avanzo", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Irst", |
| "suffix": "" |
| }, |
| { |
| "first": ".", |
| "middle": [ |
| "T" |
| ], |
| "last": "Italy", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Advances in Kernel Methods -Support Vector Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Gliozzo, C. Strapparava, E. d'Avanzo, and B. Magnini. 2005. Automatic acquisition of domain specific lexicons. Tech. report, IRST, Italy. T. Joachims. 1999. Making large-scale SVM learning practical. In B. Scholkopf, C. Burgess, and A. Smola, editors, Advances in Kernel Methods -Support Vector Learning, Cambridge, MA. MIT-Press.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Determining the sentiment of opinions", |
| "authors": [ |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Soo", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Twentieth International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1267--1373", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Soo-Min Kim and Eduard Hovy. 2004. Determining the sentiment of opinions. In Proceedings of the Twentieth International Conference on Computational Linguis- tics, pages 1267-1373, Geneva, Switzerland.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Identifying and analyzing judgment opinions", |
| "authors": [ |
| { |
| "first": "Min", |
| "middle": [], |
| "last": "Soo", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "200--207", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Soo-Min Kim and Eduard Hovy. 2006. Identifying and analyzing judgment opinions. In Proceedings of Empirical Methods in Natural Language Processing, pages 200-207, New York.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "E" |
| ], |
| "last": "Lesk", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Proceedings of the SIGDOC Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M.E. Lesk. 1986. Automatic sense disambiguation us- ing machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the SIGDOC Conference 1986, Toronto, June.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lillian", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "271--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the Annual Meeting of the Association for Computational Linguis- tics , pages 271-278, Barcelona, ES. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Using information content to evaluate semantic similarity in a taxonomy", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proc. International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Resnik. 1995. Using information content to eval- uate semantic similarity in a taxonomy. In Proc. Inter- national Joint Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Learning extraction patterns for subjective expressions", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "105--112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Riloff and J. Wiebe. 2003. Learning extraction pat- terns for subjective expressions. In Conference on Empirical Methods in Natural Language Processing, pages 105-112.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "From word to sense: a case study of subjectivity recognition", |
| "authors": [ |
| { |
| "first": "Fangzhong", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Katja", |
| "middle": [], |
| "last": "Markert", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 22nd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fangzhong Su and Katja Markert. 2008. From word to sense: a case study of subjectivity recognition. In Proceedings of the 22nd International Conference on Computational Linguistics, Manchester.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Methods for creating semantic orientation databases", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Taboada", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Anthony", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Voll", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of 5th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Taboada, C. Anthony, and K. Voll. 2006. Methods for creating semantic orientation databases. In Pro- ceedings of 5th International Conference on Language Resources and Evaluation .", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Latent variable models for semantic orientations of phrases", |
| "authors": [ |
| { |
| "first": "Hiroya", |
| "middle": [], |
| "last": "Takamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Manabu", |
| "middle": [], |
| "last": "Okumura", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 11th Meeting of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2006. Latent variable models for semantic orientations of phrases. In Proceedings of the 11th Meeting of the European Chapter of the Association for Computational Linguistics, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "417--424", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 417-424, Philadelphia.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Developing affective lexical resources", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Valitutti", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlo", |
| "middle": [], |
| "last": "Strapparava", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliviero", |
| "middle": [], |
| "last": "Stock", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "PsychNology Journal", |
| "volume": "2", |
| "issue": "1", |
| "pages": "61--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessandro Valitutti, Carlo Strapparava, and Oliviero Stock. 2004. Developing affective lexical resources. PsychNology Journal, 2(1):61-83.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Word sense and subjectivity", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Wiebe and R. Mihalcea. 2006. Word sense and subjectivity. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, Sydney, Australia.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Creating subjective and objective sentence classifiers from unannotated texts", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 6th International Conference on Intelligent Text Processing and Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "486--497", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe and Ellen Riloff. 2005. Creating subjective and objective sentence classifiers from unannotated texts. In Proceedings of the 6th International Conference on Intelligent Text Processing and Computational Linguistics, pages 486-497, Mexico City, Mexico.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Recognizing contextual polarity in phrase-level sentiment analysis", |
| "authors": [ |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Human Language Technologies Conference/Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "347--354", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Human Language Technologies Conference/Conference on Empirical Methods in Natural Language Processing, pages 347-354, Vancouver, Canada.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences", |
| "authors": [ |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Vasileios", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "129--136", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Conference on Empirical Methods in Natural Language Processing, pages 129-136, Sapporo, Japan.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "html": null, |
| "text": "(hereafter SM) assign S/O labels to senses, while Esuli and Sebastiani (hereafter ES) (2006a; 2007), Andreevskaia and Bergler (hereafter AB) (2006b; 2006a), and", |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "Results for SM Corpus (484 senses, 76.9% O)", |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |