| { |
| "paper_id": "Q13-1009", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:08:30.661385Z" |
| }, |
| "title": "Using Pivot-Based Paraphrasing and Sentiment Profiles to Improve a Subjectivity Lexicon for Essay Data", |
| "authors": [ |
| { |
| "first": "Beata", |
| "middle": [ |
| "Beigman" |
| ], |
| "last": "Klebanov", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "bbeigmanklebanov@ets.org" |
| }, |
| { |
| "first": "Nitin", |
| "middle": [], |
| "last": "Madnani", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "nmadnani@ets.org" |
| }, |
| { |
| "first": "Jill", |
| "middle": [], |
| "last": "Burstein", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "jburstein@ets.org" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We demonstrate a method of improving a seed sentiment lexicon developed on essay data by using a pivot-based paraphrasing system for lexical expansion coupled with sentiment profile enrichment using crowdsourcing. Profile enrichment alone yields up to 15% improvement in the accuracy of the seed lexicon on 3-way sentence-level sentiment polarity classification of essay data. Using lexical expansion in addition to sentiment profiles provides a further 7% improvement in performance. Additional experiments show that the proposed method is also effective with other subjectivity lexicons and in a different domain of application (product reviews).", |
| "pdf_parse": { |
| "paper_id": "Q13-1009", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We demonstrate a method of improving a seed sentiment lexicon developed on essay data by using a pivot-based paraphrasing system for lexical expansion coupled with sentiment profile enrichment using crowdsourcing. Profile enrichment alone yields up to 15% improvement in the accuracy of the seed lexicon on 3-way sentence-level sentiment polarity classification of essay data. Using lexical expansion in addition to sentiment profiles provides a further 7% improvement in performance. Additional experiments show that the proposed method is also effective with other subjectivity lexicons and in a different domain of application (product reviews).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In almost any sub-field of computational linguistics, creation of working systems starts with an investment in manually-generated or manually-annotated data for computational exploration. In subjectivity and sentiment analysis, annotation of training and testing data and construction of subjectivity lexicons have been the loci of costly labor investment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many subjectivity lexicons are mentioned in the literature. The two large manually-built lexicons for English -the General Inquirer (Stone et al., 1966) and the lexicon provided with the Opinion-Finder distribution (Wiebe and Riloff, 2005) -are available for research and education only 1 and under GNU GPL license that disallows their incorporation into proprietary materials, 2 respectively. Those wishing to integrate sentiment analysis into products, along with those studying subjectivity in languages other than English, or for specific domains such as finance, or for particular genres such as MySpace comments, reported construction of lexicons (Taboada et al., 2011; Loughran and McDonald, 2011; Thelwall et al., 2010; Rao and Ravichandran, 2009; Jijkoun and Hofmann, 2009; Pitel and Grefenstette, 2008; Mihalcea et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 152, |
| "text": "(Stone et al., 1966)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 215, |
| "end": 239, |
| "text": "(Wiebe and Riloff, 2005)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 653, |
| "end": 675, |
| "text": "(Taboada et al., 2011;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 676, |
| "end": 704, |
| "text": "Loughran and McDonald, 2011;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 705, |
| "end": 727, |
| "text": "Thelwall et al., 2010;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 728, |
| "end": 755, |
| "text": "Rao and Ravichandran, 2009;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 756, |
| "end": 782, |
| "text": "Jijkoun and Hofmann, 2009;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 783, |
| "end": 812, |
| "text": "Pitel and Grefenstette, 2008;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 813, |
| "end": 835, |
| "text": "Mihalcea et al., 2007)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we address the step of expanding a small-scale, manually-built subjectivity lexicon (a seed lexicon, typically for the domain or language in question) into a much larger but noisier lexicon using an automatic procedure. We present a novel expansion method using a state-of-the-art paraphrasing system. The expansion yields a 4-fold increase in lexicon size; yet, the expansion alone is insufficient to improve performance on sentence-level sentiment polarity classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we test the hypothesis that the effectiveness of the expansion is hampered by (1) introduction of opposite-polarity items, such as introducing resolute as an expansion of forceful, or remarkable as an expansion of peculiar;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "(2) introduction of weakly polar, neutral, or ambiguous words as expansions of polar seed words, such as generating concern as an expansion of anxiety or future as an expansion of aftermath; 3 (3) inability to distinguish stronger or clear-cut sentiment from weaker or ambiguous sentiment and to make differential use of the two.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We address items (1) and (2) by enriching the lexicon with sentiment profiles (section 3), and propose a way of effectively utilizing this information for the sentence-level sentiment polarity classification task (sections 5 and 6). Profile-enrichment alone yields up to 15% increase in performance for the seed lexicon when using different machine learning algorithms; paraphraser-based expansion with sentiment profiles improves performance by an additional 7%. Overall, we observe an improvement of up to 25% in classification accuracy over the seed lexicon without profiles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In section 7, we present comparative evaluations, demonstrating the competitiveness of the expanded and profile-enriched lexicon, as well as the effectiveness of the expansion and enrichment paradigm presented here for different subjectivity lexicons, different lexical expansion methods, and in a different domain of application (product reviews).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of our sentiment analysis project is to allow for the identification of sentiment in sentences that appear in essay responses to a variety of tasks designed to test English proficiency in both native- and non-native-speaker populations, in standardized assessment as well as in instructional settings. In order to allow for the future use of the sentiment analyzer in a proprietary product and to ensure its fit to the test-taker essay domain, we began our work with the construction of a seed lexicon relying on our materials (section 2.1). We then used a statistical paraphrasing system to expand the seed lexicon (section 2.2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Building Subjectivity Lexicons", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In order to inform the process of lexicon construction, we randomly sampled 5,000 essays from a corpus of about 100,000 essays containing writing samples across many topics. Essays were responses to several different writing assignments, including graduate school entrance exams, non-native English speaker proficiency exams, and professional licensure exams. Our seed lexicon is a combination of (1) positive and negative sentiment words manually selected from a full list of word types in these data, and (2) words marked in a small-scale annotation of a sample of sentences from these data for all positive and negative words. A more detailed description of the construction of the seed lexicon can be found in Beigman Klebanov et al (2012) . The seed lexicon contains 749 single words, 406 positive and 343 negative.", |
| "cite_spans": [ |
| { |
| "start": 722, |
| "end": 743, |
| "text": "Klebanov et al (2012)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Seed Lexicon", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We used a pivot-based lexical and phrasal paraphrase generation system (Madnani and Dorr, 2013) . The paraphraser implements the pivot-based method as described by Bannard and Callison-Burch (2005) with several additional filtering mechanisms to increase the precision of the extracted pairs. The pivot-based method utilizes the inherent monolingual semantic knowledge from bilingual corpora: We first identify phrasal correspondences between English and a given foreign language F, then map from English to English by following translation units from English to the other language and back. For example, if the two English phrases e1 and e2 both correspond to the same foreign phrase f, then they may be considered to be paraphrases of each other with the following probability:", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 95, |
| "text": "(Madnani and Dorr, 2013)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 164, |
| "end": 197, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expanded Lexicon", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "p(e1|e2) \u2248 p(e1|f) p(f|e2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expanded Lexicon", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "If there are several pivot phrases that link the two English phrases, then they are all used in computing the probability:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expanded Lexicon", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "p(e1|e2) \u2248 \u03a3_f p(e1|f) p(f|e2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Expanded Lexicon", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Some examples of expansions generated by the paraphraser are shown in Table 1 (seed/expansion pairs: abuse/exploitation, costly/onerous, accuse/reproach, dangerous/unsafe, anxiety/disquiet, improve/reinforce, conflict/crisis, invaluable/precious). More details about this kind of approach can be found in Bannard and Callison-Burch (2005) . We use the French-English parallel corpus (approximately 1.2 million sentences) from the corpus of European parliamentary proceedings (Koehn, 2005) as the data on which pivoting is performed to extract the paraphrases. However, the base paraphrase system is susceptible to large amounts of noise due to the imperfect bilingual word alignments. Therefore, we implement additional heuristics in order to minimize the number of noisy paraphrase pairs (Madnani and Dorr, 2013) . For example, one such heuristic filters out pairs where a function word may have been inferred as a paraphrase of a content word. For the lexicon expansion experiment reported here, we use the top 15 single-word paraphrases for every word from the seed lexicon, excluding morphological variants of the seed word. This process results in an expanded lexicon of 2,994 different words, 1,666 positive and 1,761 negative (433 words are in both the positive and the negative lists). The expanded lexicon includes the seed lexicon.", |
| "cite_spans": [ |
| { |
| "start": 305, |
| "end": 338, |
| "text": "Bannard and Callison-Burch (2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 475, |
| "end": 488, |
| "text": "(Koehn, 2005)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 789, |
| "end": 813, |
| "text": "(Madnani and Dorr, 2013)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 70, |
| "end": 77, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Expanded Lexicon", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Let \u03b3_w be the sentiment profile of the word w.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inducing sentiment profiles", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u03b3_w = (p^pos_w, p^neg_w, p^neu_w)", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Inducing sentiment profiles", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where \u03a3_{i\u2208{pos,neg,neu}} p^i_w = 1. Thus, a sentiment profile of a word is essentially a 3-sided coin, corresponding to its probability of coming out positive, negative, and neutral, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inducing sentiment profiles", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our goal is to estimate the profile using outcomes of multiple trials as follows. For every word, a person is shown the word and asked whether it is positive, negative, or neutral. A person's decision is modeled as flipping the coin corresponding to the word, and recording the outcome -positive, negative, or neutral. We run N=20 such trials for every word in the expanded lexicon using the CrowdFlower crowdsourcing site, 4 for a total cost of $800. We use the maximum likelihood estimate of the sentiment profile:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "p^i_w = n^i_w / N (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where n^i_w is the number of the N trials on the word w that fell in cell i \u2208 {pos, neg, neu}. Table 2 shows some estimated profiles.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 95, |
| "end": 102, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Following Goodman (1965) and Quesenberry and Hurst (1964), we calculate confidence intervals for the parameters p^i_w:", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 24, |
| "text": "Goodman (1965)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 29, |
| "end": 57, |
| "text": "Quesenberry and Hurst (1964)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(p^i_w)^\u2212 = (B + 2n^i_w \u2212 T)/(2(N + B))", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "(p^i_w)^+ = (B + 2n^i_w + T)/(2(N + B)) (4) where T = \u221a(B[B + 4n^i_w(N \u2212 n^i_w)/N])", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For confidence 1 \u2212 \u03b1 that all p^i_w, i \u2208 {pos, neg, neu}, are simultaneously within their respective intervals, the value of B is determined as the upper (\u03b1/3)\u00d7100th percentile of the \u03c7\u00b2 distribution with one degree of freedom. We use \u03b1=0.1, resulting in B=4.55. The resulting interval is about 0.2 around the estimated value when p^i_w is close to 0.5, and somewhat narrower for p^i_w closer to 0 or 1. We will use this information when inducing features from the profiles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating sentiment profiles", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The estimated sentiment profiles per word allow us to visualize the distributions of the two lexicons. In Figure 1 , we plot the number of entries in the lexicon as a function of the difference in positive and negative parts of the profile, in 0.2-wide bins. Thus, a word w would be in the second-leftmost bin if \u22120.8 < (p^pos_w \u2212 p^neg_w) < \u22120.6. While the expansion process more than doubles the number of words in the highest bins for both the positive and the negative polarity, it clearly introduces a large number of words in the low- and medium bins into the lexicon. It is in this sense that the expansion process is noisy; apparently, seed words with clear and strong polarity are often expanded into low intensity, neutral, or ambiguous ones, as in pairs like absurd/laughable, deadly/fateful, anxiety/concern shown in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 106, |
| "end": 114, |
| "text": "Figure 1", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 829, |
| "end": 836, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sentiment distributions of the lexicons", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The most popular seed expansion methods discussed in the literature are based on WordNet (Miller, 1995) or another lexicographic resource, on distributional similarity with the seeds, or on a mixture thereof (Cruz et al., 2011; Baccianella et al., 2010; Velikovich et al., 2010; Qiu et al., 2009; Mohammad et al., 2009; Esuli and Sebastiani, 2006; Kim and Hovy, 2004; Andreevskaia and Bergler, 2006; Hu and Liu, 2004; Kanayama and Nasukawa, 2006; Strapparava and Valitutti, 2004; Kamps et al., 2004; Takamura et al., 2005; Turney and Littman, 2003; Hatzivassiloglou and McKeown, 1997) . The paraphrase-based expansion method is in the distributional similarity camp; we also experimented with WordNet-based expansion as described in section 7.2. The task of assigning sentiment profiles to words in a sentiment lexicon has been addressed in the literature. SentiWordNet assigns profiles to all words in WordNet based on propagation algorithms from a small seed set manually annotated by a small number of judges (Baccianella et al., 2010; Cerini et al., 2007) . Andreevskaia and Bergler (2006) used graph propagation algorithms on WordNet to assign centrality scores in positive and negative categories; a similar approach based on web-scale co-occurrence graphs is discussed in Velikovich et al (2010) . Thelwall et al (2010) manually annotated a set of words for strength of sentiment and used machine learning to fine-tune it. Taboada et al (2011) produced an expert annotation of their lexicon with strength of sentiment. Subasic and Huettner (2001) manually built an affect lexicon with intensities. Wiebe and Riloff (2005) classified lexicon entries into weakly and strongly subjective, based on their relative frequency of appearance in subjective versus objective contexts in a large annotated dataset.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 103, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 208, |
| "end": 227, |
| "text": "(Cruz et al., 2011;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 228, |
| "end": 253, |
| "text": "Baccianella et al., 2010;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 254, |
| "end": 278, |
| "text": "Velikovich et al., 2010;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 279, |
| "end": 296, |
| "text": "Qiu et al., 2009;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 297, |
| "end": 319, |
| "text": "Mohammad et al., 2009;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 320, |
| "end": 347, |
| "text": "Esuli and Sebastiani, 2006;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 348, |
| "end": 367, |
| "text": "Kim and Hovy, 2004;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 368, |
| "end": 399, |
| "text": "Andreevskaia and Bergler, 2006;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 400, |
| "end": 417, |
| "text": "Hu and Liu, 2004;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 418, |
| "end": 446, |
| "text": "Kanayama and Nasukawa, 2006;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 447, |
| "end": 479, |
| "text": "Strapparava and Valitutti, 2004;", |
| "ref_id": null |
| }, |
| { |
| "start": 480, |
| "end": 499, |
| "text": "Kamps et al., 2004;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 500, |
| "end": 522, |
| "text": "Takamura et al., 2005;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 523, |
| "end": 548, |
| "text": "Turney and Littman, 2003;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 549, |
| "end": 584, |
| "text": "Hatzivassiloglou and McKeown, 1997)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1012, |
| "end": 1038, |
| "text": "(Baccianella et al., 2010;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1039, |
| "end": 1059, |
| "text": "Cerini et al., 2007)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1062, |
| "end": 1093, |
| "text": "Andreevskaia and Bergler (2006)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1279, |
| "end": 1302, |
| "text": "Velikovich et al (2010)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1430, |
| "end": 1450, |
| "text": "Taboada et al (2011)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1526, |
| "end": 1553, |
| "text": "Subasic and Huettner (2001)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1605, |
| "end": 1628, |
| "text": "Wiebe and Riloff (2005)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our sentiment profiles are best thought of as relatively fine-grained priors for the sentiment expressed by a given word out-of-context. These reflect a mixture of strength of sentiment (p^pos_good > p^pos_decent), contextual ambiguity (concern can be interpreted as similar to worry or to care, as in \"Her condition was causing concern\" versus \"He showed genuine concern for her\"), and dominance of a polar connotation (abandon has p^neg_w = 1; it has a negative overtone even if the actual sense is not that of desert but of vacate, as in \"You must abandon your office\").", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To the best of our knowledge, this paper presents the first attempt to integrate judgements obtained through crowdsourcing on a large scale into a sentiment lexicon, showing the effectiveness of this lexicon-enrichment procedure for a sentiment classification task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To evaluate the usefulness of the lexicons, we use them to generate features for machine learning systems, and compare performance on 3-way sentencelevel sentiment polarity classification. To ensure robustness of the observed trends, we experiment with a number of machine learning algorithms: SVM Linear and RBF, Na\u00efve Bayes, Logistic Regression (using WEKA (Hall et al., 2009) ), and c5.0 Decision Trees (Quinlan, 1993 ). 5", |
| "cite_spans": [ |
| { |
| "start": 347, |
| "end": 378, |
| "text": "(using WEKA (Hall et al., 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 406, |
| "end": 420, |
| "text": "(Quinlan, 1993", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using profiles for sentence-level sentiment polarity classification", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We generated the data for training and testing the machine learning systems as follows. We used our pool of 100,000 essays to sample a second, non-overlapping set of 5,000 essays, so that no essay used for lexicon development appears in this set. From these essays, we randomly sampled 550 sentences and submitted them to sentiment polarity annotation by two experienced research assistants; 50 double-annotated sentences showed \u03ba=0.8. The TEST set contains the 43 double-annotated sentences on which the annotators agreed, and an additional 238 sentences sampled from the 500 single-annotated sentences, 281 sentences in total. The category distribution in the TEST set is 46.6% neutral, 32.4% positive, and 21% negative. The TRAIN set contains the remaining sentences, plus positive, negative, and neutral sentences annotated during lexicon development, for a total of 1,631 sentences. The category distribution in TRAIN is 39% neutral, 35% positive, 26% negative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Our goal is to evaluate the impact of sentiment profiles on sentence-level sentiment polarity classification for the seed and the expanded lexicons, while also looking for the most effective ways to represent this information for machine learners.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From lexicons to features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We implement two baseline systems. The first provides the machine learner with the most detailed information contained in a lexicon: BL-full has 2 features for every lexicon word, taking the values (1,0) for a positive match in a sentence, (0,1) for a negative match, (1,1) for a word in both the positive and negative parts of the lexicon, and (0,0) otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From lexicons to features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The second baseline provides the machine learner with only summary information about the overall sentiment of the sentence. BL-sum uses only 2 features: (1) the total count of positive words in the sentence; (2) the total count of negative words in the sentence, according to the given lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From lexicons to features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For the sentiment-enriched runs, we construct a number of representations: Int-full, Int-sum, Int-bin, and Int-c. Int-full and Int-sum are parallel to the respective baseline systems. Int-full represents each lexicon word as 2 features corresponding to the word's estimated p^pos_w and p^neg_w, providing the most detailed information to the machine learner. In the Int-sum condition, we use p^pos_w and p^neg_w for every word to induce 2 features: (1) the sum of positive probabilities of all words in the sentence; (2) the sum of negative probabilities for all words in the sentence, according to the given lexicon.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From lexicons to features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For Int-bin runs, we use bins of size 0.2 - half of the maximal confidence interval - to group together words with close estimates. We produce 10 features. For the positive bins, the 5 features count the number of words in the sentence that fall in bin_i, 1 \u2264 i \u2264 5, respectively, that is, words with 0.2(i \u2212 1) < p^pos_w \u2264 0.2i. Bin 1 also includes words with p^pos_w = 0, since these cannot be distinguished with high confidence from p^pos_w = 0.1. Note that we do not provide a scale; we merely represent different ranges with different features. This should allow the machine learners the flexibility to weight the different bins differently when inducing classifiers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "From lexicons to features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The Int-c condition represents a coarse-grained setting. We produce 4 features, two for each polarity: (1) the number of words such that 0 \u2264 p^pos_w < 0.4; (2) the number of words such that 0.4 \u2264 p^pos_w \u2264 1; similarly for the negative polarity. Table 3 summarizes conditions and features. L is a lexicon; L^pos is the part of the lexicon containing positive words (same for negatives); S is a sentence for which a feature vector is built; A = L \u2229 S. For all w \u2208 L \u2212 S in the -full conditions, w is represented with (0,0). The best baseline (BL-full or BL-sum) is determined for each combination of machine learner and lexicon. The results show that (1) Int-bin > Int-sum > BL = Int-c = Int-full;", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 246, |
| "end": 253, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "From lexicons to features", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "BL-full (2|L| features): (1_{L^pos \u2229 S}(w), 1_{L^neg \u2229 S}(w)). BL-sum (2 features): f_1 = |{w : w \u2208 L^pos \u2229 S}|, f_2 = |{w : w \u2208 L^neg \u2229 S}|. Int-full (2|L| features): (p^pos_w, p^neg_w) \u2200w \u2208 A. Int-sum (2 features): (\u03a3_{w\u2208A} p^pos_w, \u03a3_{w\u2208A} p^neg_w). Int-bin (10 features): f_1 = |{w \u2208 A : 0 \u2264 p^pos_w \u2264 0.2}|, ..., f_10 = |{w \u2208 A : 0.8 < p^neg_w \u2264 1}|. Int-c (4 features): f_1 = |{w \u2208 A : 0 \u2264 p^pos_w < 0.4}|, ..., f_4 = |{w \u2208 A : 0.4 \u2264 p^neg_w \u2264 1}|.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Cond. #F Feature Description", |
| "sec_num": null |
| }, |
| { |
| "text": "(2) Expanded > Seed under Int condition. All inequalities are statistically significant at p=0.05 (see caption of Table 4 for details).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 114, |
| "end": 121, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "First, both the seed and the expanded lexicons benefit from profile enrichment, although, as predicted, the expanded lexicon yields larger gains due to its more varied profiles: The seed lexicon gains up to 15% in accuracy (c5.0 BL-sum vs. Int-bin), while the expanded lexicon gains up to 30%, as SVM RBF scores go up from 0.495 to 0.644. Second, observe that profiling allows the expanded lexicon to leverage its improved coverage: While it is inferior to the best baseline run with the seed lexicon for all systems, it succeeds in improving the seed lexicon accuracies by 5%-12% across the different systems for the Int-bin runs. The best run of the expanded lexicon (Int-bin for SVM RBF) improves upon the best run of the seed lexicon (Int-sum for SVM linear) by 7%, demonstrating the success of the paraphraser-based expansion once profiles are taken into account. Overall, comparing the best baseline for the seed lexicon with the Int-bin condition of the expanded lexicon, we observe an improvement between 5% (0.598 to 0.626 for Na\u00efve Bayes) and 25% (0.512 to 0.641 for c5.0), proving the effectiveness of the paraphrase-based expansion with profile enrichment paradigm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Third, representing profiles using 10 bins (Int-bin) provides a small but consistent improvement over the summary representation (Int-sum) that sums the positivity and negativity of the sentiment-bearing words in a sentence, over the coarse-grained representation (Int-c), and over the full-information representation (Int-full). Even Na\u00efve Bayes and SVM linear, known to work well with large feature sets, show better performance in the Int-bin condition for the expanded lexicon. The results indicate that an intermediate degree of detail, between the summary-only and coarse-grained representations on the one hand and the full-information representation on the other, is the best choice in our setting.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this section, we present comparative evaluations of the approach proposed in this paper with respect to related work. We show that the paraphrase expansion + profile enrichment solution proposed in this paper is effective for our task beyond off-the-shelf solutions, and that its effectiveness generalizes to sentiment analysis in a different domain. We also show that profile enrichment can be effectively coupled with other methods of lexical expansion, although the paraphraser-based expansion receives a larger boost in performance from profile enrichment than the alternative expansion methods we consider.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparative Evaluations", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In section 7.1, we demonstrate that the paraphrase-based expansion and profile enrichment yield superior performance on our data relative to state-of-the-art subjectivity lexicons: OpinionFinder, General Inquirer, and SentiWordNet. In section 7.2, we show that profile enrichment can be effectively coupled with other methods of lexical expansion, such as a WordNet-based expansion and an expansion that utilizes Lin's distributional thesaurus. However, we find that the paraphraser-based expansion benefits the most from profile enrichment, and attains better performance on our data than the alternative expansion methods. In section 7.3, we show that the paraphrase-based expansion and profile enrichment paradigm is effective for other subjectivity lexicons on other data. We use a dataset of product reviews annotated for sentence-level positivity and negativity as new data for evaluation (Hu and Liu, 2004). We use subsets of OpinionFinder, General Inquirer, and the sentiment lexicon from Hu and Liu (2004). We demonstrate that paraphrase-based expansion and profile enrichment improve the accuracy of sentiment classification of product reviews for every lexicon and machine learner combination; the magnitude of improvement is 5% on average.",
| "cite_spans": [ |
| { |
| "start": 891, |
| "end": 909, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 990, |
| "end": 1007, |
| "text": "Hu and Liu (2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparative Evaluations", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Had we been able to use the OpinionFinder or the General Inquirer lexicons (OFL and GIL) as-is, how would the results have compared to those attained using our lexicons? We performed the baseline runs with both lexicons; OFL accuracies were 0.544-0.594 across machine learning systems, and GIL's were 0.491-0.584 (see the GIL column in Table 5).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 323, |
| "end": 330, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Competitiveness of the Expanded Lexicon", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We also experimented with using the weaksubj and strongsubj labels in OFL as somewhat parallel distinctions to the ones presented here (see section 4, Related Work, for a more detailed discussion). We used the (1,0,0) profile for strong positives, (0.3,0,0.7) for weak positives, (0,1,0) for strong negatives, and (0,0.3,0.7) for weak negatives, and ran all the feature representations discussed in section 5.2. Table 5, column OFL, shows the best run for every machine learning system, across the different feature representations, choosing the better performing run between vanilla OFL and the version enriched with weak/strong distinctions. Table 5: Performance of different lexicons on essay data using various machine learning systems. For each system and lexicon, the best performance across the applicable feature representations from section 5.2 and the variants (see text) is shown. The Seed BL column shows the best baseline performance of our seed lexicon, before paraphraser expansion and profile enrichment were applied. The Exp. column shows the performance of the Int-bin feature representation for the expanded lexicon after profile enrichment.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 408, |
| "end": 415, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 672, |
| "end": 679, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Competitiveness of the Expanded Lexicon", |
| "sec_num": "7.1" |
| }, |
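The weak/strong profile assignment just described can be sketched as a simple lookup. The function name is ours; the (positive, negative, neutral) triplets are the values given in the text.

```python
# Map OpinionFinder polarity/strength labels to (pos, neg, neutral) profiles,
# using the values given in the text. The helper name is ours.
def ofl_profile(polarity, strength):
    profiles = {
        ("positive", "strongsubj"): (1.0, 0.0, 0.0),
        ("positive", "weaksubj"):   (0.3, 0.0, 0.7),
        ("negative", "strongsubj"): (0.0, 1.0, 0.0),
        ("negative", "weaksubj"):   (0.0, 0.3, 0.7),
    }
    return profiles[(polarity, strength)]

print(ofl_profile("positive", "weaksubj"))  # (0.3, 0.0, 0.7)
```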
| { |
| "text": "Additionally, we experimented with SentiWordNet (Baccianella et al., 2010). SentiWordNet is a resource for opinion mining built on top of WordNet, which assigns each synset in WordNet a score triplet (positive, negative, and objective), indicating the strength of each of these three properties for the words in the synset. The SentiWordNet annotations were automatically generated, starting with a set of manually labeled synsets. Currently, SentiWordNet includes an automatic annotation for all the synsets in WordNet, totaling more than 100,000 words. It is therefore the largest-scale lexicon with intensity information that is currently available.",
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 75, |
| "text": "(Baccianella et al., 2010)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Competitiveness of the Expanded Lexicon", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "Since SentiWordNet assigns scores to synsets and since our data is not sense-tagged, we induced SentiWordNet scores in the following ways. We part-of-speech tagged our train and test data using the Stanford tagger (Toutanova et al., 2003). Then, we took the SentiWordNet scores for the top sense for the given part-of-speech (SWN-1). In a different variant, we took a weighted average of the scores for the different senses, using the weighting algorithm provided on the SentiWordNet website 6 (SWN-2). Table 5, column SWN, shows the best performance figures between SWN-1 and SWN-2, across the feature representations in section 5.2.",
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 234, |
| "text": "(Toutanova et al., 2003)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 322, |
| "end": 329, |
| "text": "(SWN-1)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 497, |
| "end": 504, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Competitiveness of the Expanded Lexicon", |
| "sec_num": "7.1" |
| }, |
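The two induction variants can be sketched as follows. This is illustrative only: the sense scores are toy values, and the 1/rank weighting below is a common rank-based scheme that stands in for the weighting algorithm from the SentiWordNet website, whose exact details we do not reproduce here.

```python
# Illustrative sketch of the two SentiWordNet score-induction variants.
# `senses` lists (pos_score, neg_score) pairs in sense-rank order for a
# word under a given part of speech.

def swn_first_sense(senses):
    """SWN-1: take the scores of the top-ranked sense."""
    return senses[0]

def swn_weighted(senses):
    """SWN-2-style: rank-weighted average of sense scores (1/rank weights;
    an assumption standing in for the website's algorithm)."""
    weights = [1.0 / rank for rank in range(1, len(senses) + 1)]
    total = sum(weights)
    pos = sum(w * p for w, (p, _) in zip(weights, senses)) / total
    neg = sum(w * n for w, (_, n) in zip(weights, senses)) / total
    return pos, neg

senses = [(0.75, 0.0), (0.25, 0.125), (0.0, 0.5)]  # toy sense scores
print(swn_first_sense(senses))
print(swn_weighted(senses))
```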
| { |
| "text": "The comparative results in Table 5 clearly show that while our vanilla seed lexicon performs comparably to off-the-shelf lexicons on our data, the paraphraser-expanded lexicon with sentiment profiles outperforms OpinionFinder, General Inquirer, and SentiWordNet.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 34, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Competitiveness of the Expanded Lexicon", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We presented a novel lexicon expansion method using a paraphrasing system. We also experimented with more standard methods, using WordNet and distributional similarity (Beigman Klebanov et al., 2012; Esuli and Sebastiani, 2006; Kim and Hovy, 2004; Andreevskaia and Bergler, 2006; Hu and Liu, 2004; Kanayama and Nasukawa, 2006; Strapparava and Valitutti, 2004; Kamps et al., 2004; Takamura et al., 2005; Turney and Littman, 2003; Hatzivassiloglou and McKeown, 1997). Specifically, we implemented a WordNet (Miller, 1995) based expansion that uses the 3 most frequent synonyms of the top sense of the seed word (WN-e). We also implemented a method based on distributional similarity: Using Lin's proximity-based thesaurus (Lin, 1998), trained on our in-house essay data as well as on well-formed newswire texts, we took all words with a proximity score > 1.80 to any of the seed lexicon words (Lin-e). Just like the paraphraser lexicon, both perform worse than the seed lexicon in 9 out of 10 baseline runs (BL-sum and BL-full conditions for the 5 machine learners).",
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 199, |
| "text": "(Beigman Klebanov et al., 2012;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 200, |
| "end": 227, |
| "text": "Esuli and Sebastiani, 2006;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 228, |
| "end": 247, |
| "text": "Kim and Hovy, 2004;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 248, |
| "end": 279, |
| "text": "Andreevskaia and Bergler, 2006;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 280, |
| "end": 297, |
| "text": "Hu and Liu, 2004;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 298, |
| "end": 326, |
| "text": "Kanayama and Nasukawa, 2006;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 327, |
| "end": 359, |
| "text": "Strapparava and Valitutti, 2004;", |
| "ref_id": null |
| }, |
| { |
| "start": 360, |
| "end": 379, |
| "text": "Kamps et al., 2004;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 380, |
| "end": 402, |
| "text": "Takamura et al., 2005;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 403, |
| "end": 428, |
| "text": "Turney and Littman, 2003;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 429, |
| "end": 464, |
| "text": "Hatzivassiloglou and McKeown, 1997)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 506, |
| "end": 520, |
| "text": "(Miller, 1995)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 721, |
| "end": 732, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 892, |
| "end": 899, |
| "text": "(Lin-e)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Profile Enrichment with Other Lexical Expansion Methods", |
| "sec_num": "7.2" |
| }, |
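The thesaurus-based expansion (Lin-e) can be sketched as a threshold filter over proximity scores. The toy proximity table, threshold application, and helper name here are ours; a real run would use Lin's distributional thesaurus trained on essay and newswire data.

```python
# Sketch of the thesaurus-based expansion (Lin-e): collect every word whose
# proximity score to any seed-lexicon word exceeds a threshold (1.80 in the
# paper). The toy similarity table below is illustrative, not real data.
def lin_expand(seed, proximity, threshold=1.80):
    expanded = set(seed)
    for (w1, w2), score in proximity.items():
        if score > threshold and (w1 in seed or w2 in seed):
            expanded.update((w1, w2))
    return expanded

proximity = {("good", "great"): 2.1, ("good", "fine"): 1.5, ("bad", "awful"): 1.9}
print(sorted(lin_expand({"good", "bad"}, proximity)))  # ['awful', 'bad', 'good', 'great']
```

Raising the threshold makes the expansion more conservative, which is the knob behind the size differences among Lin-e, WN-e, and the paraphraser-based lexicon noted above.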
| { |
| "text": "To test the effect of profile enrichment, all words in WN-e and Lin-e underwent profile estimation as described in section 3.1, yielding lexicons WN-e-p and Lin-e-p, respectively. Figure 2 shows the distributions. WN-e-p and Lin-e-p exhibit similar trends to those of the paraphraser. Substituting WN-e-p for the Expanded data in Table 4, we find the same relationships between the different feature sets: Int-bin>Int-sum>Int-full=BL. For Lin-e-p, Int-sum deteriorates: Int-bin>Int-sum=Int-full=BL. For the 20 runs in the Int condition, Paraphraser>WN-e-p>Lin-e-p. 7 Note that this is also the order of lexicon sizes: Lin-e is the most conservative expansion (1,907 words), WN-e is second with 2,527 words, and the lexicon expanded using paraphrasing is the largest with 2,994 words. Table 6 shows the performance of Lin-e-p, WN-e-p, and of the Expanded lexicon from Table 4 using the Int-bin feature representation. The average relative improvements over the best baseline range from 6.6% to 14.6% for the different expansion methods. Profile induction appears to be a powerful lexicon clean-up procedure that works especially well with more aggressive and thus potentially noisier expansions: The machine learners depress low-intensity and ambiguous expansions, thereby allowing the effective utilization of the improved coverage of sentiment-bearing vocabulary.",
| "cite_spans": [ |
| { |
| "start": 564, |
| "end": 565, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 180, |
| "end": 188, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 328, |
| "end": 335, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 787, |
| "end": 794, |
| "text": "Table 6", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 870, |
| "end": 877, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sentiment Profile Enrichment with Other Lexical Expansion Methods", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "In order to check whether the paraphrase-based expansion and profile enrichment paradigm discussed in this paper generalizes to other subjectivity lexicons and domains of application, we experimented with a product reviews dataset (Hu and Liu, 2004) and additional lexicons as follows.",
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 357, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effectiveness of the Paraphrase Expansion with Profile Enrichment Paradigm in a Different Domain", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "We use the OpinionFinder and General Inquirer lexicons (OFL and GIL) as before, as well as the lexicon of positive and negative sentiment and opinion words available along with the (Hu and Liu, 2004) product reviews dataset (HL). 8 Since each of these lexicons contains more than 3,000 words, enrichment of the full lexicons with profiles is beyond the financial scope of our project. We therefore restrict each of the lexicons to the size of their overlap with our seed lexicon (see 2.1); the overlaps have between 415 and 467 words. These restricted lexicons are our initial lexicons for the new experiment, paralleling the role of the seed lexicon in the experiments on essay data.",
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 195, |
| "text": "(Hu and Liu, 2004)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 225, |
| "end": 226, |
| "text": "8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicons", |
| "sec_num": "7.3.1" |
| }, |
| { |
| "text": "For each of the 3 initial lexicons L, L\u2208{OFL, GIL, HL}, we follow the paraphrase-based expansion as described in section 2.2. This results in about 4.5-fold expansion of each lexicon, the new lexicons L-e, L\u2208{OFL, GIL, HL}, numbering between 2,015 and 2,167 words. Both the initial and the expanded lexicons now undergo profile enrichment as described in section 3.1, producing lexicons L-p and L-e-p, L\u2208{OFL, GIL, HL}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexicons", |
| "sec_num": "7.3.1" |
| }, |
| { |
| "text": "We use the dataset from Hu and Liu (2004) 9 that contains reviews of 5 products from amazon.com: two digital cameras, a DVD player, an MP3 player, and a cellular phone. The reviews are annotated at the sentence level with a label that describes the particular feature that is the subject of the positive or negative evaluation, along with the polarity and extent of the evaluation. For example, the sentence \"The phone book is very user-friendly and the speakerphone is excellent\" is labeled as PHONE BOOK[+2], SPEAKERPHONE[+2], while the sentence \"I am bored with the silver look\" is labeled LOOK[\u22121]. We used all sentences that were labeled with a numerical score for at least one feature, removing a small number of sentences labeled with both positive and negative scores for different features. 10 We used the sign of the numerical score to label the sentences as positive or negative. The resulting dataset consists of 1,695 sentences, 1,061 positive and 634 negative; accuracy for a majority baseline on this dataset is 0.626. Our experiments on this dataset are done using 5-fold cross-validation. Table 7 shows classification accuracies for the product review data using different lexicons and machine learners. We observe that the combination of paraphrase-based expansion and profile enrichment (L-e-p column in the table) resulted in improved performance over the initial lexicon (L column in the table) in all cases, with an average gain of 5% in accuracy.",
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 41, |
| "text": "Hu and Liu (2004)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 788, |
| "end": 790, |
| "text": "10", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1094, |
| "end": 1101, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "7.3.2" |
| }, |
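The sentence-labeling step described above can be sketched as follows. The regular expression and function name are ours, and we assume ASCII +/- markers in the feature annotations.

```python
import re

# Sketch of the sentence-labeling step for the Hu and Liu (2004) reviews:
# extract signed feature scores such as "PHONE BOOK[+2]" or "LOOK[-1]",
# label the sentence by the sign, and drop mixed-polarity sentences.
SCORE = re.compile(r"\[([+-]\d+)\]")

def sentence_label(annotation):
    scores = [int(s) for s in SCORE.findall(annotation)]
    if not scores:
        return None                      # no numerically scored feature
    signs = {s > 0 for s in scores}
    if len(signs) > 1:
        return None                      # mixed positive and negative: removed
    return "positive" if scores[0] > 0 else "negative"

print(sentence_label("PHONE BOOK[+2], SPEAKERPHONE[+2]"))  # positive
print(sentence_label("LOOK[-1]"))                          # negative
```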
| { |
| "text": "Furthermore, the contributions of the expansion and the profile enrichment are complementary, since their combination performs better than each in isolation. We note that profile enrichment alone for the initial lexicon did not yield an improvement. This can be explained by the fact that the initial lexicons are highly polar, so profiles provide little additional information: The percentage of words with p^pos \u2265 0.8 or p^neg \u2265 0.8 is 84%, 86% and 91% for GIL, OFL, and HL-derived lexicons, respectively. In contrast, for the expanded lexicons, these percentages are 51%, 53%, and 56%; these lexicons benefit from profile enrichment.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "7.3.3" |
| }, |
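The polarity-concentration statistic reported above (the share of lexicon words whose positive or negative profile score is at least 0.8) can be computed as in this sketch; the toy profiles and the function name are ours.

```python
# Share of lexicon words with a positive or negative profile score >= 0.8.
def highly_polar_share(profiles):
    polar = sum(1 for p_pos, p_neg in profiles if p_pos >= 0.8 or p_neg >= 0.8)
    return polar / len(profiles)

# Toy (p_pos, p_neg) profiles; values are illustrative.
lexicon = [(0.9, 0.0), (0.1, 0.85), (0.5, 0.3), (0.4, 0.4)]
print(highly_polar_share(lexicon))  # 0.5
```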
| { |
| "text": "We demonstrated a method of improving a seed sentiment lexicon by using a pivot-based paraphrasing system for lexical expansion and sentiment profile enrichment using crowdsourcing. Profile enrichment alone yielded up to 15% improvement in the performance of the seed lexicon on the task of 3-way sentence-level sentiment polarity classification of test-taker essay data. While the lexical expansion on its own failed to improve upon the performance of the seed lexicon, it became much more effective on top of sentiment profiles, generating a 7% performance boost over the best profile-enriched run with the seed lexicon. Overall, paraphrase-based expansion coupled with profile enrichment yields up to a 25% improvement in accuracy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Additionally, we showed that our paraphrase-expanded and profile-enriched lexicon performs significantly better on our data than off-the-shelf subjectivity lexicons, namely, OpinionFinder, General Inquirer, and SentiWordNet. Furthermore, our results suggest that paraphrase-based expansion derives more benefit from profiles than two competing expansion mechanisms based on WordNet and on Lin's distributional thesaurus.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Finally, we demonstrated the effectiveness of the paraphraser-based expansion with profile enrichment paradigm on a different dataset. We used the Hu and Liu (2004) product review data with sentence-level sentiment polarity labels. Paraphrase-based expansion with profile enrichment yielded improved performance across all lexicons and machine learning algorithms we tried, with an average improvement of 5% in classification accuracy.",
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 160, |
| "text": "Hu and Liu (2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Recent literature argues that sentiment polarity is a property of word senses, rather than of words (Gyamfi et al., 2009; Su and Markert, 2008; Wiebe and Mihalcea, 2006) , although Dragut et al (2012) successfully operate with \"mostly negative\" and \"mostly positive\" words based on the polarity distributions of word senses. In future work, we plan to address sense disambiguation for words that have multiple senses with very different sentiment, such as stress as either anxiety (negative) or emphasis (neutral).",
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 121, |
| "text": "(Gyamfi et al., 2009;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 122, |
| "end": 143, |
| "text": "Su and Markert, 2008;", |
| "ref_id": null |
| }, |
| { |
| "start": 144, |
| "end": 169, |
| "text": "Wiebe and Mihalcea, 2006)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 181, |
| "end": 200, |
| "text": "Dragut et al (2012)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "8" |
| }, |
| { |
| "text": "http://www.wjh.harvard.edu/ inquirer/j1 1/manual/ 2 http://www.gnu.org/copyleft/gpl.html", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Table 2 and Figure 1 provide support for these assessments.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "available from http://rulequest.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "All > are significant at p=0.05 using the Wilcoxon test.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cs.uic.edu/\u223cliub/FBS/sentimentanalysis.html#lexicon", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cs.uic.edu/\u223cliub/FBS/sentimentanalysis.html#datasets, the link under \"Customer Review Datasets (5 products)\". 10 Such as \"The headset that comes with the phone has good sound volume but it hurts the ears like you cannot imagine!\"",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": " Table 7: Accuracies on product review data. For each machine learner and lexicon, the best baseline performance is shown as L for the initial lexicon and as L-e for the paraphrase-expanded lexicon. L-p and L-e-p show the performance of the Int-bin feature set on the profile-enriched initial and paraphrase-expanded lexicons, respectively. The three initial lexicons L are OpinionFinder (OFL), General Inquirer (GIL), and (Hu and Liu, 2004) (HL), each intersected with our seed lexicon. Sizes of the initial and expanded lexicons are provided.",
| "cite_spans": [ |
| { |
| "start": 420, |
| "end": 437, |
| "text": "(Hu and Liu, 2004", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1, |
| "end": 8, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Machine", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Mining WordNet for a fuzzy sentiment: Sentiment tag extraction of WordNet glosses", |
| "authors": [ |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Andreevskaia", |
| "suffix": "" |
| }, |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Bergler", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "209--216", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alina Andreevskaia and Sabine Bergler. 2006. Mining WordNet for a fuzzy sentiment: Sentiment tag extrac- tion of WordNet glosses. In Proceedings of EACL, pages 209-216, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "SENTIWORDNET 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining", |
| "authors": [ |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Baccianella", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "2200--2204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Sebas- tiani. 2010. SENTIWORDNET 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. In Proceedings of LREC, pages 2200-2204, Malta.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Paraphrasing with bilingual parallel corpora", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Bannard", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "597--604", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Proceed- ings of ACL, pages 597-604, Ann Arbor, MI.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Building sentiment lexicon(s) from scratch for essay data", |
| "authors": [ |
| { |
| "first": "Jill", |
| "middle": [], |
| "last": "Beata Beigman Klebanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Nitin", |
| "middle": [], |
| "last": "Burstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Madnani", |
| "suffix": "" |
| }, |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Faulkner", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tetreault", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Beata Beigman Klebanov, Jill Burstein, Nitin Madnani, Adam Faulkner, and Joel Tetreault. 2012. Build- ing sentiment lexicon(s) from scratch for essay data. In Proceedings of the 13th International Conference on Intelligent Text Processing and Computational Lin- guistics (CICLing), New Delhi, India, March.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Micro-WNOp: A gold standard for the evaluation of automatically compiled lexical resources for opinion mining", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cerini", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Compagnoni", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Demontis", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Formentelli", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Gandini", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Language resources and linguistic theory: Typology", |
| "volume": "", |
| "issue": "", |
| "pages": "200--210", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Cerini, V. Compagnoni, A. Demontis, M. Formentelli, and G. Gandini. 2007. Micro-WNOp: A gold stan- dard for the evaluation of automatically compiled lexi- cal resources for opinion mining. In Andrea Sanso, editor, Language resources and linguistic theory: Ty- pology, second language acquisition, pages 200-210.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Automatic expansion of feature-level opinion lexicons", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Ferm\u00edn", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "A" |
| ], |
| "last": "Cruz", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "Javier" |
| ], |
| "last": "Troyano", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Ortega", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Enr\u00edquez", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis", |
| "volume": "", |
| "issue": "", |
| "pages": "125--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ferm\u00edn L. Cruz, Jos\u00e9 A. Troyano, F. Javier Ortega, and Fernando Enr\u00edquez. 2011. Automatic expansion of feature-level opinion lexicons. In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, pages 125-131, Portland, Oregon, June.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Polarity consistency checking for sentiment dictionaries", |
| "authors": [ |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Dragut", |
| "suffix": "" |
| }, |
| { |
| "first": "Hong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasad", |
| "middle": [], |
| "last": "Sistla", |
| "suffix": "" |
| }, |
| { |
| "first": "Weiyi", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "997--1005", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eduard Dragut, Hong Wang, Clement Yu, Prasad Sistla, and Weiyi Meng. 2012. Polarity consistency check- ing for sentiment dictionaries. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 997-1005, Jeju Island, Korea, July. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Determining term subjectivity and term orientation for opinion mining", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Esuli", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "193--200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006. Determin- ing term subjectivity and term orientation for opinion mining. In Proceedings of EACL, pages 193-200, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "On Simultaneous Confidence Intervals for Multinomial Proportions", |
| "authors": [ |
| { |
| "first": "Leo", |
| "middle": [ |
| "A" |
| ], |
| "last": "Goodman", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Technometrics", |
| "volume": "7", |
| "issue": "2", |
| "pages": "247--254", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leo A. Goodman. 1965. On Simultaneous Confidence Intervals for Multinomial Proportions. Technometrics, 7(2):247-254.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Integrating knowledge for subjectivity sense labeling", |
| "authors": [ |
| { |
| "first": "Yaw", |
| "middle": [], |
| "last": "Gyamfi", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Cem", |
| "middle": [], |
| "last": "Akkaya", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "The WEKA Data Mining Software: An Update. SIGKDD Explorations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaw Gyamfi, Janyce Wiebe, Rada Mihalcea, and Cem Akkaya. 2009. Integrating knowledge for subjectivity sense labeling. In Proceedings of NAACL, pages 10- 18, Boulder, CO. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA Data Mining Software: An Update. SIGKDD Explorations, 11.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Predicting the semantic orientation of adjectives", |
| "authors": [ |
| { |
| "first": "Vasileios", |
| "middle": [], |
| "last": "Hatzivassiloglou", |
| "suffix": "" |
| }, |
| { |
| "first": "Kathleen", |
| "middle": [], |
| "last": "Mckeown", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "174--181", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasileios Hatzivassiloglou and Kathleen McKeown. 1997. Predicting the semantic orientation of adjec- tives. In Proceedings of ACL, pages 174-181, Madrid, Spain.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Mining and summarizing customer reviews", |
| "authors": [ |
| { |
| "first": "Minqing", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "168--177", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, pages 168-177, Seattle, WA.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Generating a Non-English Subjectivity Lexicon: Relations That Matter", |
| "authors": [ |
| { |
| "first": "Valentin", |
| "middle": [], |
| "last": "Jijkoun", |
| "suffix": "" |
| }, |
| { |
| "first": "Katja", |
| "middle": [], |
| "last": "Hofmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "398--405", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valentin Jijkoun and Katja Hofmann. 2009. Gener- ating a Non-English Subjectivity Lexicon: Relations That Matter. In Proceedings of EACL, pages 398-405, Athens, Greece.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Using WordNet to measure semantic orientation of adjectives", |
| "authors": [ |
| { |
| "first": "Jaap", |
| "middle": [], |
| "last": "Kamps", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "Marx", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Mokken", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "1115--1118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jaap Kamps, Maarten Marx, Robert Mokken, and Maarten de Rijke. 2004. Using WordNet to measure semantic orientation of adjectives. In Proceedings of LREC, pages 1115-1118, Lisbon, Portugal.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Fully automatic Lexicon Expansion for Domain-oriented Sentiment Analysis", |
| "authors": [ |
| { |
| "first": "Hiroshi", |
| "middle": [], |
| "last": "Kanayama", |
| "suffix": "" |
| }, |
| { |
| "first": "Tetsuya", |
| "middle": [], |
| "last": "Nasukawa", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "355--363", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroshi Kanayama and Tetsuya Nasukawa. 2006. Fully automatic Lexicon Expansion for Domain-oriented Sentiment Analysis. In Proceedings of EMNLP, pages 355-363, Syndey, Australia.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Determining the sentiment of opinions", |
| "authors": [ |
| { |
| "first": "Soo-Min", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Edward", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "1367--1373", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Soo-Min Kim and Edward Hovy. 2004. Determining the sentiment of opinions. In Proceedings of COLING, pages 1367-1373, Geneva, Switzerland.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "EUROPARL: A Parallel corpus for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Machine Translation Summit", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Koehn. 2005. EUROPARL: A Parallel corpus for Statistical Machine Translation. In Proceedings of the Machine Translation Summit.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Automatic retrieval and clustering of similar words", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "768--774", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of ACL, pages 768-774, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "When is a Liability not a Liability? Textual Analysis, Dictionaries, and 10-Ks", |
| "authors": [ |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Loughran", |
| "suffix": "" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Finance", |
| "volume": "66", |
| "issue": "", |
| "pages": "35--65", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tim Loughran and Bill McDonald. 2011. When is a Li- ability not a Liability? Textual Analysis, Dictionaries, and 10-Ks. Journal of Finance, 66:35-65.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Generating Targeted Paraphrases for Improved Translation", |
| "authors": [ |
| { |
| "first": "Nitin", |
| "middle": [], |
| "last": "Madnani", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "ACM Transactions on Intelligent Systems and Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nitin Madnani and Bonnie Dorr. 2013. Generating Tar- geted Paraphrases for Improved Translation. ACM Transactions on Intelligent Systems and Technology, to appear.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Learning multilingual subjective language via crosslingual projections", |
| "authors": [ |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Carmen", |
| "middle": [], |
| "last": "Banea", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "976--983", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross- lingual projections. In Proceedings of ACL, pages 976-983, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "WordNet: A lexical database", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Communications of the ACM", |
| "volume": "38", |
| "issue": "", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Miller. 1995. WordNet: A lexical database. Communications of the ACM, 38:39-41.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Generating high-coverage semantic orientation lexicons from overtly marked words and a thesaurus", |
| "authors": [ |
| { |
| "first": "Saif", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Cody", |
| "middle": [], |
| "last": "Dunne", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "599--608", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif Mohammad, Cody Dunne, and Bonnie Dorr. 2009. Generating high-coverage semantic orientation lexi- cons from overtly marked words and a thesaurus. In Proceedings of EMNLP, pages 599-608, Singapore, August.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Semiautomatic building method for a multidimensional affect dictionary for a new language", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Pitel", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [], |
| "last": "Grefenstette", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Pitel and Gregory Grefenstette. 2008. Semi- automatic building method for a multidimensional af- fect dictionary for a new language. In Proceedings of LREC, Marrakech, Morocco.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Expanding domain sentiment lexicon through double propagation", |
| "authors": [ |
| { |
| "first": "Guang", |
| "middle": [], |
| "last": "Qiu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiajun", |
| "middle": [], |
| "last": "Bu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chun", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 21st international jont conference on Artifical intelligence, IJCAI'09", |
| "volume": "", |
| "issue": "", |
| "pages": "1199--1204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guang Qiu, Bing Liu, Jiajun Bu, and Chun Chen. 2009. Expanding domain sentiment lexicon through double propagation. In Proceedings of the 21st international jont conference on Artifical intelligence, IJCAI'09, pages 1199-1204.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Large sample simultaneous confidence intervals for multinomial proportions", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Quesenberry", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Hurst", |
| "suffix": "" |
| } |
| ], |
| "year": 1964, |
| "venue": "Technometrics", |
| "volume": "6", |
| "issue": "", |
| "pages": "191--195", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Quesenberry and D. Hurst. 1964. Large sample si- multaneous confidence intervals for multinomial pro- portions. Technometrics, 6:191-195.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "C4.5: Programs for machine learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Quinlan", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. R. Quinlan. 1993. C4.5: Programs for machine lear- ning. Morgan Kaufmann Publishers.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Semisupervised polarity lexicon induction", |
| "authors": [ |
| { |
| "first": "Delip", |
| "middle": [], |
| "last": "Rao", |
| "suffix": "" |
| }, |
| { |
| "first": "Deepak", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "675--682", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Delip Rao and Deepak Ravichandran. 2009. Semi- supervised polarity lexicon induction. In Proceedings of EACL, pages 675-682, Athens.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "The General Inquirer: A Computer Approach to Content Analysis", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Stone", |
| "suffix": "" |
| }, |
| { |
| "first": "Dexter", |
| "middle": [], |
| "last": "Dunphy", |
| "suffix": "" |
| }, |
| { |
| "first": "Marshall", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Ogilvie", |
| "suffix": "" |
| } |
| ], |
| "year": 1966, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philip Stone, Dexter Dunphy, Marshall Smith, and Daniel Ogilvie. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT Press.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Eliciting Subjectivity and Polarity Judgements on Word Senses", |
| "authors": [], |
| "year": 2008, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "825--832", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "WordNet-affect: an affective extension of WordNet. In Proceedings of LREC, pages 1083-1086, Lisbon, Portugal. Fangzhong Su and Katja Markert. 2008. Eliciting Subjectivity and Polarity Judgements on Word Senses. In Proceedings of COLING, pages 825-832, Manch- ester, UK.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Affect analysis of text using fuzzy semantic typing", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Subasic", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Huettner", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "IEEE Transactions on Fuzzy Systems", |
| "volume": "9", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Subasic and A. Huettner. 2001. Affect analysis of text using fuzzy semantic typing. IEEE Transactions on Fuzzy Systems, 9(4).", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Lexicon-Based Method for Sentiment Analysis", |
| "authors": [ |
| { |
| "first": "Maite", |
| "middle": [], |
| "last": "Taboada", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Brooke", |
| "suffix": "" |
| }, |
| { |
| "first": "Milan", |
| "middle": [], |
| "last": "Tofiloski", |
| "suffix": "" |
| }, |
| { |
| "first": "Kimberly", |
| "middle": [], |
| "last": "Voll", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Stede", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Computational Linguistics", |
| "volume": "37", |
| "issue": "2", |
| "pages": "267--307", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly Voll, and Manfred Stede. 2011. Lexicon-Based Method for Sentiment Analysis. Computational Lin- guistics, 37(2):267-307.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Extracting semantic orientation of words using spin model", |
| "authors": [ |
| { |
| "first": "Hiroya", |
| "middle": [], |
| "last": "Takamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Manabu", |
| "middle": [], |
| "last": "Okumura", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "133--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroya Takamura, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientation of words using spin model. In Proceedings of ACL, pages 133-140, Ann Arbor, MI.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Sentiment strength detection in short informal text", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Thelwall", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevan", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgios", |
| "middle": [], |
| "last": "Paltoglou", |
| "suffix": "" |
| }, |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Arvid", |
| "middle": [], |
| "last": "Kappas", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of the American Society for Information Science and Technology", |
| "volume": "61", |
| "issue": "12", |
| "pages": "2544--2558", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text. Journal of the Amer- ican Society for Information Science and Technology, 61(12):2544-2558.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network", |
| "authors": [ |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoram", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "252--259", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-Rich Part-of- Speech Tagging with a Cyclic Dependency Network. In Proceedings of HLT-NAACL, pages 252-259.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Measuring praise and criticism: Inference of semantic orientation from association", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Littman", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACM Transactions on Information Systems", |
| "volume": "21", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Turney and Michael Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21(4):315346.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "The viability of Web-derived polarity lexicons", |
| "authors": [ |
| { |
| "first": "Leonid", |
| "middle": [], |
| "last": "Velikovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Sasha", |
| "middle": [], |
| "last": "Blair-Goldensohn", |
| "suffix": "" |
| }, |
| { |
| "first": "Kerry", |
| "middle": [], |
| "last": "Hannan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "777--785", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Leonid Velikovich, Sasha Blair-Goldensohn, Kerry Han- nan, and Ryan McDonald. 2010. The viability of Web-derived polarity lexicons. In Proceedings of NAACL, pages 777-785, Los Angeles, CA.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Word sense and subjectivity", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "1065--1072", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe and Rada Mihalcea. 2006. Word sense and subjectivity. In Proceedings of ACL, pages 1065- 1072, Sydney, Australia.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Creating subjective and objective sentence classifiers from unannotated texts", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of CICLING", |
| "volume": "", |
| "issue": "", |
| "pages": "486--497", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe and Ellen Riloff. 2005. Creating subjec- tive and objective sentence classifiers from unanno- tated texts. In Proceedings of CICLING (invited pa- per), pages 486-497, Mexico City.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "text": "Sentiment distributions for the seed (left) and the expanded (right) lexicons.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "text": "6 http://sentiwordnet.isti.cnr.it/, under \"Sample code.\" Sentiment profile distributions forLin-e-p (left) and WN-e-p (right) lexicons.", |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "Examples of paraphraser expansions.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF2": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>: Examples of estimated sentiment profiles. Words in gray are expansions generated from words in the preceding row; note the difference in the profiles.</td></tr></table>" |
| }, |
| "TABREF3": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>" |
| }, |
| "TABREF4": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>shows classification accuracies for 5 ma-</td></tr><tr><td>chine learning systems across 6 conditions, for the</td></tr><tr><td>seed and the expanded lexicons.</td></tr><tr><td>Let BL denote the best-performing baseline (BL-</td></tr></table>" |
| }, |
| "TABREF5": { |
| "text": "Classification accuracies on TEST set.", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Majo-rity baseline corresponds to classifying all sentences as neutral. The best performance is boldfaced. Let BL stand for the best-performing baseline (BL-full or BL-sum) for a combination of machine learner and lexicon. We use Wilcoxon Signed-Rank test, reporting the num-ber of signed ranks (N) and the sum of signed ranks (W). Statistically significant results at p=0.05 are: Int-sum > BL (N=10, W=43); Int-bin > BL (N=10, W=48); Int-bin > Int-sum (N=10, W=43); Int-bin > Int-full (N=10, W=47); Int-sum > Int-full (N=10, W=37); Int-bin > Int-</td></tr></table>" |
| }, |
| "TABREF7": { |
| "text": "Performance of WordNet-based, Lin-based, and Paraphraser-based expansions with profile enrichment in the Int-bin condition. Seed BL column shows the best baseline performance of the seed lexicon -before expansion and profile enrichment were applied. The last line shows the average relative gain over the best baseline calculated as AG lex = \u03a3 m\u2208M", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Lexm\u2212SeedBLm</td></tr><tr><td>SeedBLm</td></tr></table>" |
| } |
| } |
| } |
| } |