{
"paper_id": "P04-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:44:46.685201Z"
},
"title": "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853-7501",
"region": "NY"
}
},
"email": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Cornell University Ithaca",
"location": {
"postCode": "14853-7501",
"region": "NY"
}
},
"email": "llee@cs.cornell.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as \"thumbs up\" or \"thumbs down\". To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints.",
"pdf_parse": {
"paper_id": "P04-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Sentiment analysis seeks to identify the viewpoint(s) underlying a text span; an example application is classifying a movie review as \"thumbs up\" or \"thumbs down\". To determine this sentiment polarity, we propose a novel machine-learning method that applies text-categorization techniques to just the subjective portions of the document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs; this greatly facilitates incorporation of cross-sentence contextual constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The computational treatment of opinion, sentiment, and subjectivity has recently attracted a great deal of attention (see references), in part because of its potential applications. For instance, information-extraction and question-answering systems could flag statements and queries regarding opinions rather than facts (Cardie et al., 2003) . Also, it has proven useful for companies, recommender systems, and editorial sites to create summaries of people's experiences and opinions that consist of subjective expressions extracted from reviews (as is commonly done in movie ads) or even just a review's polarity -positive (\"thumbs up\") or negative (\"thumbs down\"). Document polarity classification poses a significant challenge to data-driven methods, resisting traditional text-categorization techniques (Pang, Lee, and Vaithyanathan, 2002) . Previous approaches focused on selecting indicative lexical features (e.g., the word \"good\"), classifying a document according to the number of such features that occur anywhere within it. In contrast, we propose the following process: (1) label the sentences in the document as either subjective or objective, discarding the latter; and then (2) apply a standard machine-learning classifier to the resulting extract. This can prevent the polarity classifier from considering irrelevant or even potentially misleading text: for example, although the sentence \"The protagonist tries to protect her good name\" contains the word \"good\", it tells us nothing about the author's opinion and in fact could well be embedded in a negative movie review. Also, as mentioned above, subjectivity extracts can be provided to users as a summary of the sentiment-oriented content of the document.",
"cite_spans": [
{
"start": 321,
"end": 342,
"text": "(Cardie et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 808,
"end": 844,
"text": "(Pang, Lee, and Vaithyanathan, 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our results show that the subjectivity extracts we create accurately represent the sentiment information of the originating documents in a much more compact form: depending on choice of downstream polarity classifier, we can achieve highly statistically significant improvement (from 82.8% to 86.4%) or maintain the same level of performance for the polarity classification task while retaining only 60% of the reviews' words. Also, we explore extraction methods based on a minimum cut formulation, which provides an efficient, intuitive, and effective means for integrating inter-sentence-level contextual information with traditional bag-of-words features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One can consider document-level polarity classification to be just a special (more difficult) case of text categorization with sentiment- rather than topic-based categories. Hence, standard machine-learning classification techniques, such as support vector machines (SVMs), can be applied to the entire documents themselves, as was done by Pang, Lee, and Vaithyanathan (2002) . We refer to such classification techniques as default polarity classifiers.",
"cite_spans": [
{
"start": 340,
"end": 375,
"text": "Pang, Lee, and Vaithyanathan (2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.1"
},
{
"text": "However, as noted above, we may be able to improve polarity classification by removing objective sentences (such as plot summaries in a movie review). We therefore propose, as depicted in Figure 1 , to first employ a subjectivity detector that determines whether each sentence is subjective or not: discarding the objective ones creates an extract that should better represent a review's subjective content to a default polarity classifier. To our knowledge, previous work has not integrated sentence-level subjectivity detection with document-level sentiment polarity. Yu and Hatzivassiloglou (2003) provide methods for sentence-level analysis and for determining whether a document is subjective or not, but do not combine these two types of algorithms or consider document polarity classification. The motivation behind the single-sentence selection method of Beineke et al. (2004) is to reveal a document's sentiment polarity, but they do not evaluate the polarity-classification accuracy that results.",
"cite_spans": [
{
"start": 571,
"end": 601,
"text": "Yu and Hatzivassiloglou (2003)",
"ref_id": "BIBREF25"
},
{
"start": 864,
"end": 885,
"text": "Beineke et al. (2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 188,
"end": 197,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Architecture",
"sec_num": "2.1"
},
{
"text": "As with document-level polarity classification, we could perform subjectivity detection on individual sentences by applying a standard classification algorithm on each sentence in isolation. However, modeling proximity relationships between sentences would enable us to leverage coherence: text spans occurring near each other (within discourse boundaries) may share the same subjectivity status, other things being equal (Wiebe, 1994) .",
"cite_spans": [
{
"start": 422,
"end": 435,
"text": "(Wiebe, 1994)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context and Subjectivity Detection",
"sec_num": "2.2"
},
{
"text": "We would therefore like to supply our algorithms with pair-wise interaction information, e.g., to specify that two particular sentences should ideally receive the same subjectivity label but not state which label this should be. Incorporating such information is somewhat unnatural for classifiers whose input consists simply of individual feature vectors, such as Naive Bayes or SVMs, precisely because such classifiers label each test item in isolation. One could define synthetic features or feature vectors to attempt to overcome this obstacle. However, we propose an alternative that avoids the need for such feature engineering: we use an efficient and intuitive graph-based formulation relying on finding minimum cuts. Our approach is inspired by Blum and Chawla (2001) , although they focused on similarity between items (the motivation being to combine labeled and unlabeled data), whereas we are concerned with physical proximity between the items to be classified; indeed, in computer vision, modeling proximity information via graph cuts has led to very effective classification (Boykov, Veksler, and Zabih, 1999) . Figure 2 shows a worked example of the concepts in this section.",
"cite_spans": [
{
"start": 754,
"end": 776,
"text": "Blum and Chawla (2001)",
"ref_id": "BIBREF3"
},
{
"start": 1091,
"end": 1125,
"text": "(Boykov, Veksler, and Zabih, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1128,
"end": 1136,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Context and Subjectivity Detection",
"sec_num": "2.2"
},
{
"text": "Suppose we have n items x_1, ..., x_n to divide into two classes C_1 and C_2, and we have access to two types of information:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "\u2022 Individual scores ind_j(x_i): non-negative estimates of each x_i's preference for being in C_j based on just the features of x_i alone; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "\u2022 Association scores assoc(x_i, x_k): non-negative estimates of how important it is that x_i and x_k be in the same class. 1 We would like to maximize each item's \"net happiness\": its individual score for the class it is assigned to, minus its individual score for the other class. But, we also want to penalize putting tightly-associated items into different classes. Thus, after some algebra, we arrive at the following optimization problem: assign the x_i's to C_1 and C_2 so as to minimize the partition cost",
"cite_spans": [
{
"start": 123,
"end": 124,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "\\sum_{x \\in C_1} \\mathrm{ind}_2(x) + \\sum_{x \\in C_2} \\mathrm{ind}_1(x) + \\sum_{x_i \\in C_1,\\, x_k \\in C_2} \\mathrm{assoc}(x_i, x_k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
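To make the partition cost concrete, here is a brute-force check (an illustrative sketch, not the authors' code) that evaluates the cost for every possible choice of C_1, using the example weights that appear in Figure 2 later in this section:

```python
from itertools import combinations

# Example weights from Figure 2 (illustrative values, not learned scores)
ind1 = {"Y": 0.8, "M": 0.5, "N": 0.1}   # preference for C1
ind2 = {"Y": 0.2, "M": 0.5, "N": 0.9}   # preference for C2
assoc = {("Y", "M"): 1.0, ("Y", "N"): 0.1, ("M", "N"): 0.2}

def partition_cost(c1):
    """Sum of ind2 over C1, ind1 over C2, and assoc over pairs split by the cut."""
    c2 = set(ind1) - set(c1)
    cost = sum(ind2[x] for x in c1) + sum(ind1[x] for x in c2)
    cost += sum(w for (a, b), w in assoc.items() if (a in c1) != (b in c1))
    return cost

# enumerate all 2^3 = 8 choices of C1; {Y, M} wins with cost 1.1
all_c1 = [frozenset(c) for r in range(4) for c in combinations(ind1, r)]
best = min(all_c1, key=partition_cost)
```

With only three items the enumeration is trivial; the point of the minimum-cut reduction described next is to avoid exactly this exponential search.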
{
"text": "The problem appears intractable, since there are 2^n possible binary partitions of the x_i's. However, suppose we represent the situation in the following manner. Build an undirected graph G with vertices {v_1, ..., v_n, s, t}; the last two are, respectively, the source and sink. Add",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "n edges (s, v_i), each with weight ind_1(x_i), and n edges (v_i, t), each with weight ind_2(x_i). Finally, add \\binom{n}{2} edges (v_i, v_k), each with weight assoc(x_i, x_k).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "Then, cuts in G are defined as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "Definition 1 A cut (S, T) of G is a partition of the nodes of G into sets S = {s} \u222a S' and T = {t} \u222a T', where s \u2209 S' and t \u2209 T'. Its cost cost(S, T) is the sum of the weights of all edges crossing from S to T. A minimum cut of G is one of minimum cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "The figure shows the graph for the three-item example, with brackets enclosing the edge weights: ind_1(Y)=.8, ind_2(Y)=.2; ind_1(M)=.5, ind_2(M)=.5; ind_1(N)=.1, ind_2(N)=.9; assoc(Y,M)=1.0; assoc(Y,N)=.1; assoc(M,N)=.2. The accompanying table lists, for each choice of C_1, the individual penalties, the association penalties, and the total cost: {Y,M}: .2+.5+.1, .1+.2, 1.1; (none): .8+.5+.1, 0, 1.4; {Y,M,N}: .2+.5+.9, 0, 1.6; {Y}: .2+.5+.1, 1.0+.1, 1.9; {N}: .8+.5+.9, .1+.2, 2.5; {M}: .8+.5+.1, 1.0+.2, 2.6; {Y,N}: .2+.5+.9, 1.0+.2, 2.8; {M,N}: .8+.5+.9, 1.0+.1, 3.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "Figure 2: Graph for classifying three items. Brackets enclose example values; here, the individual scores happen to be probabilities. Based on individual scores alone, we would put Y (\"yes\") in C_1, N (\"no\") in C_2, and be undecided about M (\"maybe\"). But the association scores favor cuts that put Y and M in the same class, as shown in the table. Thus, the minimum cut, indicated by the dashed line, places M together with Y in C_1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "Observe that every cut corresponds to a partition of the items and has cost equal to the partition cost. Thus, our optimization problem reduces to finding minimum cuts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
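Because the reduction is exact, any max-flow routine recovers the optimal partition. The following sketch (illustrative only; it uses a simple Edmonds-Karp max-flow, whereas the authors used off-the-shelf max-flow code) builds the Figure 2 graph and reads the C_1 side off the minimum cut:

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max-flow on a capacity matrix; returns (flow value, source side of a min cut)."""
    n = len(cap)
    flow = [[0.0] * n for _ in range(n)]

    def augmenting_path():
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if v not in parent and cap[u][v] - flow[u][v] > 1e-9:
                    parent[v] = u
                    if v == t:
                        return parent
                    queue.append(v)
        return None

    value = 0.0
    while (parent := augmenting_path()) is not None:
        # walk back from t to s, collecting the path edges
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][w] - flow[u][w] for u, w in path)
        for u, w in path:
            flow[u][w] += bottleneck
            flow[w][u] -= bottleneck  # residual bookkeeping
        value += bottleneck
    # vertices still reachable from s in the residual graph form the C1 side
    side, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if v not in side and cap[u][v] - flow[u][v] > 1e-9:
                side.add(v)
                queue.append(v)
    return value, side

# Vertices: 0 = source s, 1 = Y, 2 = M, 3 = N, 4 = sink t (Figure 2 weights)
ind1 = {1: 0.8, 2: 0.5, 3: 0.1}   # edges (s, v_i)
ind2 = {1: 0.2, 2: 0.5, 3: 0.9}   # edges (v_i, t)
assoc = {(1, 2): 1.0, (1, 3): 0.1, (2, 3): 0.2}
cap = [[0.0] * 5 for _ in range(5)]
for v, w in ind1.items():
    cap[0][v] = w
for v, w in ind2.items():
    cap[v][4] = w
for (u, v), w in assoc.items():
    cap[u][v] = cap[v][u] = w

cost, c1_side = max_flow_min_cut(cap, 0, 4)  # cost 1.1; c1_side == {0, 1, 2}, i.e. Y and M join s
```

The recovered cut cost, 1.1, matches the cheapest row of the table in Figure 2, and Y and M land on the source (C_1) side, as the caption describes.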
{
"text": "Practical advantages As we have noted, formulating our subjectivity-detection problem in terms of graphs allows us to model item-specific and pairwise information independently. Note that this is a very flexible paradigm. For instance, it is perfectly legitimate to use knowledge-rich algorithms employing deep linguistic knowledge about sentiment indicators to derive the individual scores. And we could also simultaneously use knowledge-lean methods to assign the association scores. Interestingly, Yu and Hatzivassiloglou (2003) compared an individual-preference classifier against a relationship-based method, but didn't combine the two; the ability to coordinate such algorithms is precisely one of the strengths of our approach. But a crucial advantage specific to the utilization of a minimum-cut-based approach is that we can use maximum-flow algorithms with polynomial asymptotic running times -and near-linear running times in practice -to exactly compute the minimum-cost cut(s), despite the apparent intractability of the optimization problem (Cormen, Leiserson, and Rivest, 1990; Ahuja, Magnanti, and Orlin, 1993) . 2 In contrast, other graph-partitioning problems that have been previously used to formulate NLP classification problems 3 are NP-complete (Hatzivassiloglou and McKeown, 1997; Agrawal et al., 2003; Joachims, 2003) .",
"cite_spans": [
{
"start": 501,
"end": 531,
"text": "Yu and Hatzivassiloglou (2003)",
"ref_id": "BIBREF25"
},
{
"start": 1055,
"end": 1092,
"text": "(Cormen, Leiserson, and Rivest, 1990;",
"ref_id": "BIBREF6"
},
{
"start": 1093,
"end": 1126,
"text": "Ahuja, Magnanti, and Orlin, 1993)",
"ref_id": "BIBREF1"
},
{
"start": 1129,
"end": 1130,
"text": "2",
"ref_id": null
},
{
"start": 1268,
"end": 1304,
"text": "(Hatzivassiloglou and McKeown, 1997;",
"ref_id": null
},
{
"start": 1305,
"end": 1326,
"text": "Agrawal et al., 2003;",
"ref_id": "BIBREF0"
},
{
"start": 1327,
"end": 1342,
"text": "Joachims, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cut-based classification",
"sec_num": "2.3"
},
{
"text": "Our experiments involve classifying movie reviews as either positive or negative, an appealing task for several reasons. First, as mentioned in the introduction, providing polarity information about reviews is a useful service: witness the popularity of www.rottentomatoes.com. Second, movie reviews are apparently harder to classify than reviews of other products (Turney, 2002; Dave, Lawrence, and Pennock, 2003) . Third, the correct label can be extracted automatically from rating information (e.g., number of stars). Our data 4 contains 1000 positive and 1000 negative reviews all written before 2002, with a cap of 20 reviews per author (312 authors total) per category. We refer to this corpus as the polarity dataset.",
"cite_spans": [
{
"start": 365,
"end": 379,
"text": "(Turney, 2002;",
"ref_id": "BIBREF22"
},
{
"start": 380,
"end": 414,
"text": "Dave, Lawrence, and Pennock, 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Framework",
"sec_num": "3"
},
{
"text": "We tested support vector machines (SVMs) and Naive Bayes (NB). Following Pang et al. (2002) , we use unigram-presence features: the ith coordinate of a feature vector is 1 if the corresponding unigram occurs in the input text, 0 otherwise. (For SVMs, the feature vectors are length-normalized). Each default document-level polarity classifier is trained and tested on the extracts formed by applying one of the sentence-level subjectivity detectors to reviews in the polarity dataset.",
"cite_spans": [
{
"start": 73,
"end": 91,
"text": "Pang et al. (2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Default polarity classifiers",
"sec_num": null
},
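A minimal sketch of the unigram-presence features just described (the four-word vocabulary is invented for illustration; in the paper the vocabulary comes from the training corpus):

```python
import math

# Illustrative vocabulary only; the real feature set is induced from training data.
vocab = ["good", "bad", "boring", "brilliant"]

def presence_features(text, normalize=False):
    """Unigram presence: 1.0 if the word occurs anywhere in the text, else 0.0."""
    tokens = set(text.lower().split())
    vec = [1.0 if w in tokens else 0.0 for w in vocab]
    if normalize:  # length-normalization, as used for the SVM runs
        norm = math.sqrt(sum(x * x for x in vec)) or 1.0
        vec = [x / norm for x in vec]
    return vec

features = presence_features("a good good film")  # [1.0, 0.0, 0.0, 0.0]: presence, not counts
```

Note that repeating "good" does not change the vector; presence, rather than frequency, is the feature.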
{
"text": "To train our detectors, we need a collection of labeled sentences. Riloff and Wiebe (2003) state that \"It is [very hard] to obtain collections of individual sentences that can be easily identified as subjective or objective\"; the polarity-dataset sentences, for example, have not been so annotated. 5 Fortunately, we were able to mine the Web to create a large, automatically labeled sentence corpus 6 . To gather subjective sentences (or phrases), we collected 5000 movie-review snippets (e.g., \"bold, imaginative, and impossible to resist\") from www.rottentomatoes.com. To obtain (mostly) objective data, we took 5000 sentences from plot summaries available from the Internet Movie Database (www.imdb.com). We only selected sentences or snippets at least ten words long and drawn from reviews or plot summaries of movies released post-2001, which prevents overlap with the polarity dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjectivity dataset",
"sec_num": null
},
{
"text": "As noted above, we can use our default polarity classifiers as \"basic\" sentence-level subjectivity detectors (after retraining on the subjectivity dataset) to produce extracts of the original reviews. We also create a family of cut-based subjectivity detectors; these take as input the set of sentences appearing in a single document and determine the subjectivity status of all the sentences simultaneously using per-item and pairwise relationship information. Specifically, for a given document, we use the construction in Section 2.3 to build a graph wherein the source s and sink t correspond to the class of subjective and objective sentences, respectively, and each internal node v_i corresponds to the document's ith sentence s_i. We can set the individual scores ind_1(s_i) to Pr_sub^NB(s_i) and ind_2(s_i) to 1 \u2212 Pr_sub^NB(s_i), as shown in Figure 3 , where Pr_sub^NB(s) denotes Naive Bayes' estimate of the probability that sentence s is subjective; or, we can use the weights produced by the SVM classifier instead. 7 If we set all the association scores to zero, then the minimum-cut classification of the sentences is the same as that of the basic subjectivity detector. Alternatively, we incorporate the degree of proximity between pairs of sentences, controlled by three parameters. The threshold T specifies the maximum distance two sentences can be separated by and still be considered proximal. 5 We therefore could not directly evaluate sentence-classification accuracy on the polarity dataset.",
"cite_spans": [],
"ref_spans": [
{
"start": 863,
"end": 871,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Subjectivity detectors",
"sec_num": null
},
{
"text": "6 Available at www.cs.cornell.edu/people/pabo/movie-review-data/, sentence corpus version 1.0. 7 We converted SVM output d_i, which is a signed distance (negative=objective) from the separating hyperplane, to nonnegative numbers by",
"cite_spans": [
{
"start": 95,
"end": 96,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subjectivity detectors",
"sec_num": null
},
{
"text": "ind_1(s_i) \\overset{\\mathrm{def}}{=} \\begin{cases} 1 & d_i > 2; \\\\ (2 + d_i)/4 & -2 \\le d_i \\le 2; \\\\ 0 & d_i < -2. \\end{cases}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjectivity detectors",
"sec_num": null
},
{
"text": "and ind_2(s_i) = 1 \u2212 ind_1(s_i). Note that scaling is employed only for consistency; the algorithm itself does not require probabilities for individual scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjectivity detectors",
"sec_num": null
},
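The conversion in footnote 7 can be written directly as a small function (a sketch; `d` is the signed SVM distance from the separating hyperplane):

```python
def ind1(d):
    """Rescale a signed SVM distance d (negative = objective) into [0, 1], per footnote 7."""
    if d > 2:
        return 1.0
    if d < -2:
        return 0.0
    return (2 + d) / 4

def ind2(d):
    # the two scores are complementary by construction
    return 1 - ind1(d)

midpoint = ind1(0)  # 0.5: a sentence on the hyperplane gets no preference either way
```

Distances beyond magnitude 2 are simply clipped, so strongly classified sentences get the extreme scores 0 or 1.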
{
"text": "The non-increasing function f(d) specifies how the influence of proximal sentences decays with respect to distance d; in our experiments, we tried f(d) = 1, e^{1\u2212d}, and 1/d^2. The constant c controls the relative influence of the association scores: a larger c makes the minimum-cut algorithm more loath to put proximal sentences in different classes. With these in hand 8 , we set (for j > i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjectivity detectors",
"sec_num": null
},
{
"text": "\\mathrm{assoc}(s_i, s_j) \\overset{\\mathrm{def}}{=} \\begin{cases} f(j-i) \\cdot c & \\text{if } (j-i) \\le T; \\\\ 0 & \\text{otherwise.} \\end{cases}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subjectivity detectors",
"sec_num": null
},
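The association-score schedule above can be sketched as follows (the particular T and c defaults here are arbitrary illustrations; the paper selects them from T ∈ {1, 2, 3} and c ∈ [0, 1]):

```python
import math

def assoc(i, j, T=3, f=lambda d: math.exp(1 - d), c=0.5):
    """assoc(s_i, s_j) = f(j - i) * c when the distance is within threshold T, else 0 (assumes j > i)."""
    d = j - i
    return f(d) * c if d <= T else 0.0

nearby = assoc(1, 2)   # adjacent sentences: f(1) * c = e^0 * 0.5 = 0.5
distant = assoc(1, 5)  # distance 4 exceeds T = 3, so 0.0
```

With the decaying choice f(d) = e^{1-d}, adjacent sentences receive the full weight c and the tie weakens geometrically until the threshold cuts it off entirely.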
{
"text": "Below, we report average accuracies computed by ten-fold cross-validation over the polarity dataset. Section 4.1 examines our basic subjectivity extraction algorithms, which are based on individual-sentence predictions alone. Section 4.2 evaluates the more sophisticated form of subjectivity extraction that incorporates context information via the minimum-cut paradigm. As we will see, the use of subjectivity extracts can in the best case provide satisfying improvement in polarity classification, and otherwise can at least yield polarity-classification accuracies indistinguishable from employing the full review. At the same time, the extracts we create are both smaller on average than the original document and more effective as input to a default polarity classifier than the same-length counterparts produced by standard summarization tactics (e.g., first- or last-N sentences). We therefore conclude that subjectivity extraction produces effective summaries of document sentiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "As noted in Section 3, both Naive Bayes and SVMs can be trained on our subjectivity dataset and then used as a basic subjectivity detector. The former has somewhat better average ten-fold cross-validation performance on the subjectivity dataset (92% vs. 90%), and so for space reasons, our initial discussions will focus on the results attained via NB subjectivity detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "Employing the extracts created by the NB subjectivity detector (Extract NB ) as input to the NB default polarity classifier yields 86.4% accuracy, better than the 82.8% obtained on the full review (Full review); indeed, the difference is highly statistically significant (p < 0.01, paired t-test). With SVMs as the polarity classifier instead, the Full review performance rises to 87.15%, but comparison via the paired t-test reveals that this is statistically indistinguishable from the 86.4% that is achieved by running the SVM polarity classifier on Extract NB input. (More improvements to extraction performance are reported later in this section.) These findings indicate 10 that the extracts preserve (and, in the NB polarity-classifier case, apparently clarify) the sentiment information in the originating documents, and thus are good summaries from the polarity-classification point of view. Further support comes from a \"flipping\" experiment: if we give as input to the default polarity classifier an extract consisting of the sentences labeled objective, accuracy drops dramatically to 71% for NB and 67% for SVMs. This confirms our hypothesis that sentences discarded by the subjectivity extraction process are indeed much less indicative of sentiment polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "Moreover, the subjectivity extracts are much more compact than the original documents (an important feature for a summary to have): they contain on average only about 60% of the source reviews' words. (This word preservation rate is plotted along the x-axis in the graphs in Figure 5 .) This prompts us to study how much reduction of the original documents subjectivity detectors can perform and still accurately represent the texts' sentiment information.",
"cite_spans": [],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "We can create subjectivity extracts of varying lengths by taking just the N most subjective sentences 11 from the originating review. 10 Recall that direct evidence is not available because the polarity dataset's sentences lack subjectivity labels.",
"cite_spans": [
{
"start": 134,
"end": 136,
"text": "10",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "As one baseline to compare against, we take the canonical summarization standard of extracting the first N sentences -in general settings, authors often begin documents with an overview. We also consider the last N sentences: in many documents, concluding material may be a good summary, and www.rottentomatoes.com tends to select \"snippets\" from the end of movie reviews (Beineke et al., 2004) . Finally, as a sanity check, we include results from the N least subjective sentences according to Naive Bayes. Figure 4 shows the polarity classifier results as N ranges between 1 and 40. Our first observation is that the NB detector provides very good \"bang for the buck\": with subjectivity extracts containing as few as 15 sentences, accuracy is quite close to what one gets if the entire review is used. In fact, for the NB polarity classifier, just using the 5 most subjective sentences is almost as informative as the Full review while containing on average only about 22% of the source reviews' words. 11 These are the N sentences assigned the highest probability by the basic NB detector, regardless of whether their probabilities exceed 50% and so would actually be classified as subjective by Naive Bayes. For reviews with fewer than N sentences, the entire review will be returned.",
"cite_spans": [
{
"start": 372,
"end": 394,
"text": "(Beineke et al., 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 508,
"end": 516,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "Also, it so happens that at N = 30, performance is actually slightly better than (but statistically indistinguishable from) Full review even when the SVM default polarity classifier is used (87.2% vs. 87.15%). 12 This suggests potentially effective extraction alternatives other than using a fixed probability threshold (which resulted in the lower accuracy of 86.4% reported above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "Furthermore, we see in Figure 4 that the N most-subjective-sentences method generally outperforms the other baseline summarization methods (which perhaps suggests that sentiment summarization cannot be treated the same as topic-based summarization, although this conjecture would need to be verified on other domains and data). It's also interesting to observe how much better the last N sentences are than the first N sentences; this may reflect a (hardly surprising) tendency for movie-review authors to place plot descriptions at the beginning rather than the end of the text and conclude with overtly opinionated statements.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "12 Note that roughly half of the documents in the polarity dataset contain more than 30 sentences (average=32.3, standard deviation 15).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Basic subjectivity extraction",
"sec_num": "4.1"
},
{
"text": "The previous section demonstrated the value of subjectivity detection. We now examine whether context information, particularly regarding sentence proximity, can further improve subjectivity extraction. As discussed in Sections 2.2 and 3, contextual constraints are easily incorporated via the minimum-cut formalism but are not natural inputs for standard Naive Bayes and SVMs. Figure 5 shows the effect of adding in proximity information. Extract NB+Prox and Extract SVM+Prox are the graph-based subjectivity detectors using Naive Bayes and SVMs, respectively, for the individual scores; we depict the best performance achieved by a single setting of the three proximity-related edge-weight parameters over all ten data folds 13 (parameter selection was not a focus of the current work). The two comparisons we are most interested in are Extract NB+Prox versus Extract NB and Extract SVM+Prox versus Extract SVM .",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 386,
"text": "Figure 5",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Incorporating context information",
"sec_num": "4.2"
},
{
"text": "We see that the context-aware graph-based subjectivity detectors tend to create extracts that are more informative (statistically significantly so (paired t-test) for SVM subjectivity detectors only), although these extracts are longer than their context-blind counterparts. We note that the performance enhancements cannot be attributed entirely to the mere inclusion of more sentences regardless of whether they are subjective or not -one counterargument is that Full review yielded substantially worse results for the NB default polarity classifier -and at any rate, the graph-derived extracts are still substantially more concise than the full texts. Now, while incorporating a bias for assigning nearby sentences to the same category into NB and SVM subjectivity detectors seems to require some non-obvious feature engineering, we also wish to investigate whether our graph-based paradigm makes better use of contextual constraints that can be (more or less) easily encoded into the input of standard classifiers. For illustrative purposes, we consider paragraph-boundary information, looking only at SVM subjectivity detection for simplicity's sake. 13 Parameters are chosen from T \u2208 {1, 2, 3}, f(d) \u2208 {1, e^{1\u2212d}, 1/d^2}, and c \u2208 [0, 1] at intervals of 0.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating context information",
"sec_num": "4.2"
},
{
"text": "It seems intuitively plausible that paragraph boundaries (an approximation to discourse boundaries) loosen coherence constraints between nearby sentences. To capture this notion for minimum-cut-based classification, we can simply reduce the association scores for all pairs of sentences that occur in different paragraphs by multiplying them by a cross-paragraph-boundary weight w \u2208 [0, 1]. For standard classifiers, we can employ the trick of having the detector treat paragraphs, rather than sentences, as the basic unit to be labeled. This enables the standard classifier to utilize coherence between sentences in the same paragraph; on the other hand, it also (probably unavoidably) poses a hard constraint that all of a paragraph's sentences get the same label, which increases noise sensitivity. 14 Our experiments reveal the graph-cut formulation to be the better approach: for both default polarity classifiers (NB and SVM), some choice of parameters (including w) for Extract SVM+Prox yields statistically significant improvement over its paragraph-unit non-graph counterpart (NB: 86.4% vs. 85.2%; SVM: 86.15% vs. 85.45%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating context information",
"sec_num": "4.2"
},
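The minimum-cut construction with the cross-paragraph weight w can be sketched as follows. This is a minimal illustration, not the authors' implementation: the score names `ind` and `assoc`, the source/sink labels, and the textbook Edmonds-Karp max-flow routine are all assumptions (a real system would use a faster solver, as the paper does). Each sentence v gets an edge from the source with capacity ind(v) (paid if v is labeled objective) and an edge to the sink with capacity 1 - ind(v) (paid if v is labeled subjective); association edges between nearby sentences are multiplied by w when the pair straddles a paragraph boundary.

```python
from collections import deque

def min_cut_source_side(cap, s, t):
    """Edmonds-Karp max-flow over a capacity dict {(u, v): c}; returns
    the set of nodes on the source side of a minimum s-t cut."""
    res = dict(cap)                      # residual capacities
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
        res.setdefault((v, u), 0.0)      # reverse edges start empty
    while True:
        parent = {s: None}               # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:              # no path left: parent = source side
            return set(parent)
        path, v = [], t                  # recover the path and its bottleneck
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck

def subjective_sentences(ind, assoc, paragraph, w=0.5):
    """ind: {sent: subjectivity score in [0, 1]};
    assoc: {(u, v): association score}; paragraph: {sent: paragraph id}.
    Sentence ids must not collide with the reserved labels "s" and "t"."""
    cap = {}
    def add(u, v, c):
        cap[(u, v)] = cap.get((u, v), 0.0) + c
    for v, p in ind.items():
        add("s", v, p)          # paid if v ends up labeled objective
        add(v, "t", 1.0 - p)    # paid if v ends up labeled subjective
    for (u, v), a in assoc.items():
        if paragraph[u] != paragraph[v]:
            a *= w              # loosen coherence across paragraph boundaries
        add(u, v, a)            # symmetric: separating u from v costs a
        add(v, u, a)
    return min_cut_source_side(cap, "s", "t") - {"s"}

labels = subjective_sentences(
    ind={1: 0.9, 2: 0.8, 3: 0.1},
    assoc={(1, 2): 0.5, (2, 3): 0.5},
    paragraph={1: "A", 2: "A", 3: "B"},
    w=0.2,
)
# labels == {1, 2}: the two subjective-leaning sentences fall on the
# source side of the cut; the cross-paragraph weight lets sentence 3 go.
```

Because every term in the cut cost is a single edge capacity, the optimal labeling is found exactly by one max-flow computation, which is what makes the contextual constraints cheap to incorporate.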
{
"text": "We examined the relation between subjectivity detection and polarity classification, showing that subjectivity detection can compress reviews into much shorter extracts that still retain polarity information at a level comparable to that of the full review. In fact, for the Naive Bayes polarity classifier, the subjectivity extracts are shown to be more effective input than the originating document, which suggests that they are not only shorter, but also \"cleaner\" representations of the intended polarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "We have also shown that employing the minimum-cut framework results in the development of efficient algorithms for sentiment analysis. Utilizing contextual information via this framework can lead to statistically significant improvement in polarity-classification accuracy. Directions for future research include developing parameterselection techniques, incorporating other sources of contextual cues besides sentence proximity, and investigating other means for modeling such information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Asymmetry is allowed, but we used symmetric scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code available at http://www.avglab.com/andrew/soft.html. Graph-based approaches to general clustering problems are too numerous to mention here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at www.cs.cornell.edu/people/pabo/movie-review-data/ (review corpus version 2.0).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Parameter training is driven by optimizing the performance of the downstream polarity classifier rather than the detector itself because the subjectivity dataset's sentences come from different reviews, and so are never proximal. This result and others are depicted in Figure 5; for now, consider only the y-axis in those plots.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, in the data we used, boundaries may have been missed due to malformed html.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Eric Breck, Claire Cardie, Rich Caruana, Yejin Choi, Shimon Edelman, Thorsten Joachims, Jon Kleinberg, Oren Kurland, Art Munson, Vincent Ng, Fernando Pereira, Ves Stoyanov, Ramin Zabih, and the anonymous reviewers for helpful comments. This paper is based upon work supported in part by the National Science Foundation under grants ITR/IM IIS-0081334 and IIS-0329064, a Cornell Graduate Fellowship in Cognitive Studies, and by an Alfred P. Sloan Research Fellowship. Any opinions, findings, and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the National Science Foundation or Sloan Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mining newsgroups using networks arising from social behavior",
"authors": [
{
"first": "Rakesh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Sridhar",
"middle": [],
"last": "Rajagopalan",
"suffix": ""
},
{
"first": "Ramakrishnan",
"middle": [],
"last": "Srikant",
"suffix": ""
},
{
"first": "Yirong",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2003,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "529--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Agrawal, Rakesh, Sridhar Rajagopalan, Ramakrish- nan Srikant, and Yirong Xu. 2003. Mining news- groups using networks arising from social behav- ior. In WWW, pages 529-535.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Network Flows: Theory, Algorithms, and Applications",
"authors": [
{
"first": "Ravindra",
"middle": [],
"last": "Ahuja",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"L"
],
"last": "Magnanti",
"suffix": ""
},
{
"first": "James",
"middle": [
"B"
],
"last": "Orlin",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahuja, Ravindra, Thomas L. Magnanti, and James B. Orlin. 1993. Network Flows: Theory, Algorithms, and Applications. Prentice Hall.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Exploring sentiment summarization",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Beineke",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2004,
"venue": "AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications (AAAI tech report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Beineke, Philip, Trevor Hastie, Christopher Man- ning, and Shivakumar Vaithyanathan. 2004. Exploring sentiment summarization. In AAAI Spring Symposium on Exploring Attitude and Af- fect in Text: Theories and Applications (AAAI tech report SS-04-07).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning from labeled and unlabeled data using graph mincuts",
"authors": [
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Shuchi",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2001,
"venue": "Intl. Conf. on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Blum, Avrim and Shuchi Chawla. 2001. Learning from labeled and unlabeled data using graph min- cuts. In Intl. Conf. on Machine Learning (ICML), pages 19-26.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Fast approximate energy minimization via graph cuts",
"authors": [
{
"first": "Yuri",
"middle": [],
"last": "Boykov",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Veksler",
"suffix": ""
},
{
"first": "Ramin",
"middle": [],
"last": "Zabih",
"suffix": ""
}
],
"year": 1999,
"venue": "Intl. Conf. on Computer Vision (ICCV)",
"volume": "23",
"issue": "",
"pages": "1222--1239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Boykov, Yuri, Olga Veksler, and Ramin Zabih. 1999. Fast approximate energy minimization via graph cuts. In Intl. Conf. on Computer Vision (ICCV), pages 377-384. Journal version in IEEE Trans. Pattern Analysis and Machine Intelligence (PAMI) 23(11):1222-1239, 2001.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Combining low-level and summary representations of opinions for multiperspective question answering",
"authors": [
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "Diane",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2003,
"venue": "AAAI Spring Symposium on New Directions in Question Answering",
"volume": "",
"issue": "",
"pages": "20--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cardie, Claire, Janyce Wiebe, Theresa Wilson, and Diane Litman. 2003. Combining low-level and summary representations of opinions for multi- perspective question answering. In AAAI Spring Symposium on New Directions in Question An- swering, pages 20-27.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Introduction to Algorithms",
"authors": [
{
"first": "Thomas",
"middle": [
"H"
],
"last": "Cormen",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"E"
],
"last": "Leiserson",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"L"
],
"last": "Rivest",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algo- rithms. MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Yahoo! for Amazon: Extracting market sentiment from stock message boards",
"authors": [
{
"first": "Sanjiv",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2001,
"venue": "Asia Pacific Finance Association Annual Conf. (APFA)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Das, Sanjiv and Mike Chen. 2001. Yahoo! for Amazon: Extracting market sentiment from stock message boards. In Asia Pacific Finance Associ- ation Annual Conf. (APFA).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Mining the peanut gallery: Opinion extraction and semantic classification of product reviews",
"authors": [
{
"first": "Kushal",
"middle": [],
"last": "Dave",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Lawrence",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Pennock",
"suffix": ""
}
],
"year": 2003,
"venue": "WWW",
"volume": "",
"issue": "",
"pages": "519--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dave, Kushal, Steve Lawrence, and David M. Pen- nock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In WWW, pages 519-528.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Opinion classification through information extraction",
"authors": [
{
"first": "Luca",
"middle": [],
"last": "Dini",
"suffix": ""
},
{
"first": "Giampaolo",
"middle": [],
"last": "Mazzini",
"suffix": ""
}
],
"year": 2002,
"venue": "Intl. Conf. on Data Mining Methods and Databases for Engineering, Finance and Other Fields",
"volume": "",
"issue": "",
"pages": "299--310",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dini, Luca and Giampaolo Mazzini. 2002. Opin- ion classification through information extraction. In Intl. Conf. on Data Mining Methods and Databases for Engineering, Finance and Other Fields, pages 299-310.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A system for affective rating of texts",
"authors": [
{
"first": "Stephen",
"middle": [
"D"
],
"last": "Durbin",
"suffix": ""
},
{
"first": "J",
"middle": [
"Neal"
],
"last": "Richter",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Warner",
"suffix": ""
}
],
"year": 2003,
"venue": "KDD Wksp. on Operational Text Classification Systems (OTC-3)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Durbin, Stephen D., J. Neal Richter, and Doug Warner. 2003. A system for affective rating of texts. In KDD Wksp. on Operational Text Classi- fication Systems (OTC-3).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Predicting the semantic orientation of adjectives",
"authors": [
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mc-Keown",
"suffix": ""
}
],
"year": 1997,
"venue": "35th ACL/8th EACL",
"volume": "",
"issue": "",
"pages": "174--181",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hatzivassiloglou, Vasileios and Kathleen Mc- Keown. 1997. Predicting the semantic orienta- tion of adjectives. In 35th ACL/8th EACL, pages 174-181.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transductive learning via spectral graph partitioning",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2003,
"venue": "Intl. Conf. on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joachims, Thorsten. 2003. Transductive learning via spectral graph partitioning. In Intl. Conf. on Machine Learning (ICML).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A model of textual affect sensing using real-world knowledge",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Henry",
"middle": [],
"last": "Lieberman",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Selker",
"suffix": ""
}
],
"year": 2003,
"venue": "Intelligent User Interfaces (IUI)",
"volume": "",
"issue": "",
"pages": "125--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Hugo, Henry Lieberman, and Ted Selker. 2003. A model of textual affect sensing using real-world knowledge. In Intelligent User Inter- faces (IUI), pages 125-132.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Text mining as a social thermometer",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Montes-y-G\u00f3mez",
"suffix": ""
},
{
"first": "Aurelio",
"middle": [],
"last": "L\u00f3pez-L\u00f3pez",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Gelbukh",
"suffix": ""
}
],
"year": 1999,
"venue": "IJCAI Wksp. on Text Mining",
"volume": "",
"issue": "",
"pages": "103--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Montes-y-G\u00f3mez, Manuel, Aurelio L\u00f3pez-L\u00f3pez, and Alexander Gelbukh. 1999. Text mining as a social thermometer. In IJCAI Wksp. on Text Min- ing, pages 103-107.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mining product reputations on the web",
"authors": [
{
"first": "Satoshi",
"middle": [],
"last": "Morinaga",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Yamanishi",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Tateishi",
"suffix": ""
},
{
"first": "Toshikazu",
"middle": [],
"last": "Fukushima",
"suffix": ""
}
],
"year": 2002,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "341--349",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Morinaga, Satoshi, Kenji Yamanishi, Kenji Tateishi, and Toshikazu Fukushima. 2002. Mining prod- uct reputations on the web. In KDD, pages 341- 349. Industry track.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Thumbs up? Sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, Bo, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Senti- ment classification using machine learning techniques. In EMNLP, pages 79-86.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications. AAAI technical report",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Shanahan",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qu, Yan, James Shanahan, and Janyce Wiebe, edi- tors. 2004. AAAI Spring Symposium on Explor- ing Attitude and Affect in Text: Theories and Ap- plications. AAAI technical report SS-04-07.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning extraction patterns for subjective expressions",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2003,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riloff, Ellen and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning subjective nouns using extraction pattern bootstrapping",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "Theresa",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2003,
"venue": "Conf. on Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riloff, Ellen, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Conf. on Natural Lan- guage Learning (CoNLL), pages 25-32.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Affect analysis of text using fuzzy semantic typing",
"authors": [
{
"first": "Pero",
"middle": [],
"last": "Subasic",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Huettner",
"suffix": ""
}
],
"year": 2001,
"venue": "IEEE Trans. Fuzzy Systems",
"volume": "9",
"issue": "4",
"pages": "483--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Subasic, Pero and Alison Huettner. 2001. Af- fect analysis of text using fuzzy semantic typing. IEEE Trans. Fuzzy Systems, 9(4):483-496.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "An operational system for detecting and tracking opinions in on-line discussion",
"authors": [
{
"first": "Richard",
"middle": [
"M"
],
"last": "Tong",
"suffix": ""
}
],
"year": 2001,
"venue": "SIGIR Wksp. on Operational Text Classification",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong, Richard M. 2001. An operational system for detecting and tracking opinions in on-line discus- sion. SIGIR Wksp. on Operational Text Classifi- cation.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2002,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "417--424",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turney, Peter. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. In ACL, pages 417-424.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Tracking point of view in narrative",
"authors": [
{
"first": "Janyce",
"middle": [
"M"
],
"last": "Wiebe",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "233--287",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiebe, Janyce M. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2):233- 287.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques",
"authors": [
{
"first": "Jeonghee",
"middle": [],
"last": "Yi",
"suffix": ""
},
{
"first": "Tetsuya",
"middle": [],
"last": "Nasukawa",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Niblack",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Intl. Conf. on Data Mining (ICDM)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi, Jeonghee, Tetsuya Nasukawa, Razvan Bunescu, and Wayne Niblack. 2003. Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques. In IEEE Intl. Conf. on Data Mining (ICDM).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Vasileios",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 2003,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, Hong and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separat- ing facts from opinions and identifying the polar- ity of opinion sentences. In EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Polarity classification via subjectivity detection.",
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"text": "a partition of its nodes into sets S = {s} \u222a S and T = {t} \u222a T , where s \u2208 S , t \u2208 T . Its cost cost(S, T ) is the sum of the weights of all edges crossing from S to T . A minimum cut of G is one of minimum cost.",
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"text": "",
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"text": "Graph-cut-based creation of subjective extracts.",
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"text": "Accuracies using N-sentence extracts for NB (left) and SVM (right) default polarity classifiers.",
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"uris": null,
"text": "Word preservation rate vs. accuracy, NB (left) and SVMs (right) as default polarity classifiers. Also indicated are results for some statistical significance tests.",
"num": null,
"type_str": "figure"
}
}
}
}