| { |
| "paper_id": "Y09-1033", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:43:24.882575Z" |
| }, |
| "title": "Sentiment Classification Considering Negation and Contrast Transition *", |
| "authors": [ |
| { |
| "first": "Shoushan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The Hong Kong Polytechnic University", |
| "location": {} |
| }, |
| "email": "shoushan.li@gmail.com" |
| }, |
| { |
| "first": "Chu-Ren", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The Hong Kong Polytechnic University", |
| "location": {} |
| }, |
| "email": "churenhuang@gmail.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Negation and contrast transition are two kinds of linguistic phenomena which are popularly used to reverse the sentiment polarity of some words and sentences. In this paper, we propose an approach to incorporate their classification information into our sentiment classification system: First, we classify sentences into sentiment reversed and non-reversed parts. Then, represent them as two different bags-of-words. Third, present three general strategies to do classification with two-bag-of-words modeling. We collect a large-scale product reviews involving five domains and conduct our experiments on them. The experimental results show that incorporating both negation and contrast transition information is effective and performs robustly better than traditional machine learning approach (based on one-bag-of-words modeling) across five different domains.", |
| "pdf_parse": { |
| "paper_id": "Y09-1033", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Negation and contrast transition are two kinds of linguistic phenomena which are popularly used to reverse the sentiment polarity of some words and sentences. In this paper, we propose an approach to incorporate their classification information into our sentiment classification system: First, we classify sentences into sentiment reversed and non-reversed parts. Then, represent them as two different bags-of-words. Third, present three general strategies to do classification with two-bag-of-words modeling. We collect a large-scale product reviews involving five domains and conduct our experiments on them. The experimental results show that incorporating both negation and contrast transition information is effective and performs robustly better than traditional machine learning approach (based on one-bag-of-words modeling) across five different domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Sentiment classification is a task to classify text according to sentimental polarities of opinions they contain (e.g., favorable or unfavorable). This task has received considerable interests in computational linguistic community due to its wide applications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the latest studies of this task, machine learning techniques become the state-of-the-art approach and have achieved much better results than some rule-based approaches (Kennedy and Inkpen, 2006; Pang et al., 2002) . In machine learning approach, a document (text) is usually modeled as a bag-of-words, a set of words without any word order or syntactic relation information. Therefore, the whole sentimental orientation is highly influenced by the sentiment polarity of each word. Notice that although each word takes a fixed sentiment polarity itself, its polarity contributed to the whole sentence or document might be completely the opposite. Negation and contrast transition are exactly the two kinds of linguistic phenomena which are able to reverse the sentiment polarity. For example, see a sentence containing negation \"this movie is not good\" and another sentence containing contrast transition \"this mouse is good looking, but it works terribly\". The sentiment polarity of the word good in these two sentences is positive but the whole sentences are negative. Therefore, we can see that the whole sentiment is not necessarily the sum of the parts (Turney, 2002) . This phenomenon is one main reason why machine learning often fails to classify some testing samples (Dredze et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 197, |
| "text": "(Kennedy and Inkpen, 2006;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 198, |
| "end": 216, |
| "text": "Pang et al., 2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1160, |
| "end": 1174, |
| "text": "(Turney, 2002)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1278, |
| "end": 1299, |
| "text": "(Dredze et al., 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Fortunately, a language usually has some special words which indicate the possible polarity shift of a word or even a sentence. These words are called contextual valence shifters (CVSs) which can cause the valence of a lexical item to shift from one pole to the other or, less forcefully, even to modify the valence towards a more neutral position (Polanyi and Zaenen, 2006) . Generally speaking, CVSs are classified into two categories: sentence-based and discourse-based (Polanyi and Zaenen, 2006) . Sentence-based CVSs are responsible for shifting valence of some words in a sentence. The most obvious shifters are negatives, such as not, none, never, nothing, and hardly. These shifts usually reverse the sentiment polarity of some words. Other sentence-based shifters can be intensifiers (e.g., rather, very), modal operators (e.g., if), etc. Discourse-based CVSs often indicate the valence shifting in the context. Some connectives, such as however, but, and notwithstanding, belong to this type.", |
| "cite_spans": [ |
| { |
| "start": 348, |
| "end": 374, |
| "text": "(Polanyi and Zaenen, 2006)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 473, |
| "end": 499, |
| "text": "(Polanyi and Zaenen, 2006)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we mainly focus on sentiment shifting including negation and contrast transition because this kind of shifting often fully reverses the sentiment polarity and thus mostly reflects the weakness of those machine learning approaches based on one-bag-of-words modeling. Other types of shifting, for instance, intensification with intensifiers (e.g., rather, very) is capable of changing the intension of some words but would not reverse their polarities.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Note that contrast transition is one special type of transition and is used to express contradiction or contrast when connecting one paragraph, sentence, clause or word with the other. It is distinguished from other types of transitions by different connectives. For contrast transitions, the connectives are some CVSs like however, but, and notwithstanding while others use some different connectives, e.g., conclusion transition takes the connectives like therefore, in a word, in summary, and in brief.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To incorporate sentiment reversing information into a machine learning approach, we first segment the whole document into sub-sentences. We then partition them into two groups: one includes those called sentiment-reversed sentences and the other includes those called sentiment-non-reversed sentences. As a result, each document is represented as two-bags-ofwords rather than traditional one-bag-of-words. Finally, we propose the classification algorithm to do the classification on the text with two-bags-of-words modeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The remainder of this paper is organized as follows. Section 2 introduces the related work on CVS applications in sentiment classification. Section 3 presents our approach in detail. Experimental results are presented and analyzed in Section 4. Finally, Section 5 draws our conclusions and outlines the future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "During recent several years, various of issues have been studied for sentiment classification, such as feature extraction (Riloff et al., 2006) , domain adaptation (Blitzer et al., 2007) and multi-domain learning (Li and Zong, 2008) . For a detailed survey of this research field, see Pang and Lee (2008) . However, most studies directly borrow machine learning approach from traditional topic-based text classification and very few work are focus on incorporating linguistic knowledge that sentiment text particularly contains, e.g., valence shifting phenomena and comparative sentences (Jindal and Liu, 2006) . Pang et al. (2002) first employ machine learning approach to sentiment classification and find that machine learning methods definitely outperform human-produced baselines. In their approach, they consider negation by adding the tag NOT to every word between a negation word (not, isn't, didn't, etc.) and the first punctuation mark following the negation word. But their results show that adding negation has a very negligible and on average slightly harmful effect on the performance. Kennedy and Inkpen (2006) check three types of CVSs: negatives, intensifiers, and diminishers and add their valence shifting bigrams as additional features. Their results show that considering CVSs greatly improve the performances of term-counting approach. But as far as machine learning approach is concerned, the improvement is very slight (less than 1%). Na et al. (2004) attempt to model negation more accurately and achieve a satisfactory improvement. However, they need to do part-of-speech to get negation phrases and their baseline performance itself is very low (less than 80%).", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 143, |
| "text": "(Riloff et al., 2006)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 164, |
| "end": 186, |
| "text": "(Blitzer et al., 2007)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 213, |
| "end": 232, |
| "text": "(Li and Zong, 2008)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 285, |
| "end": 304, |
| "text": "Pang and Lee (2008)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 588, |
| "end": 610, |
| "text": "(Jindal and Liu, 2006)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 613, |
| "end": 631, |
| "text": "Pang et al. (2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 888, |
| "end": 914, |
| "text": "(not, isn't, didn't, etc.)", |
| "ref_id": null |
| }, |
| { |
| "start": 1100, |
| "end": 1125, |
| "text": "Kennedy and Inkpen (2006)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 1459, |
| "end": 1475, |
| "text": "Na et al. (2004)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Different from all the above work, our approach is easy to implement and need no additional features (e.g., bi-gram, part-of-speech tag). Furthermore, our approach is capable of considering both negation and contrast transition. In our view, only considering negation is not enough since there are some negation sentences appear in a contrast transition structure. For example, this mouse is not good looking, but it works perfect and I like it. Apparently, only considering negation is still difficult to give an correct sentiment classification in this case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In a standard machine learning classification problem, we seek a predictor f (also called a classifier) that maps an input vector x to the corresponding class label y. The predictor is trained on a finite set of labeled examples (X, Y) which are drawn from an unknown distribution D. The learning objective is to minimize the expected error, i.e., , arg min", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "( ( ), ) f X Y f L f X Y \u2208 = \u2211 \u0397 (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where L is a prescribed loss function and H is a set of functions called the hypothesis space, which consists of functions from x to y.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "As a linear classifier, the predictor takes the form ( )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "T i i f X w X = .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Then a regularized form of formula (1) is often used as below, which always has a unique and numerically stable solution", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "2 2 , arg min ( , ) 2 T w X Y w L w X Y w \u03bb = + \u2211 (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where 2 2 w = T w w and \u03bb is a non-negative regularization parameter. If 0 \u03bb = , the problem is unregularized. Solving (2) with stochastic gradient descent (SGD), we get the standard SGD online updating strategy as following (Zhang, 2004) ", |
| "cite_spans": [ |
| { |
| "start": 225, |
| "end": 238, |
| "text": "(Zhang, 2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "1 1 1 1 1\u02c6\u02c6( ( , ) ) T t t t t t t t t w w S w L w X Y X \u03b7 \u03bb \u2212 \u2212 \u2212 \u2212 \u2032 = \u2212 + (3) where 1 ( , ) ( , ) L p y L p y p \u2202 \u2032 = \u2202 and ( , ) t t", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "X Y is the instance we are observing at the t-th step. The matrix S can be regarded as a pre-conditioner. For simplicity, we assume it to be a constant matrix. 0 t \u03b7 > is a appropriately chosen learning rate parameter. The whole algorithm is described in Figure 1 (Zhang, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 264, |
| "end": 277, |
| "text": "(Zhang, 2004)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 255, |
| "end": 263, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Classification Algorithm", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Initialize 0 w for t=1,2, ... Draw ( , t t X Y ) randomly from D. Update 1 t w \u2212 as 1 1 1 1 1\u02c6\u02c6( ( , ) ) T t t t t t t t t w w S w L w X Y X \u03b7 \u03bb \u2212 \u2212 \u2212 \u2212 \u2032 = \u2212 + end", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Algorithm (standard SGD)", |
| "sec_num": null |
| }, |
| { |
| "text": "In traditional text classification tasks, a text T (e.g., document, sentence) are modeled as one bag-of-words and the input vector of the text is constructed from weights of the words (also called terms) 1 ( ,..., )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "N t t .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this paper, we focus on document-based sentiment classification. Specifically, the terms are possibly words, word n-grams, or even phrases extracted from the training data, with N being the number of terms. The weights are statistic information of these terms, e.g., tf, tf idf \u22c5 . Then the text T is represented as a vector ( ) X T , i.e., 1 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "( ) ( ), ( ), ... , ( ) N X T sta t sta t sta t =< >", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "(4) The output label y has a value of 1 or -1 representing a positive or negative sentiment polarity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "As a special case of text classification, sentiment classification applies bag-of-words model directly for a long time. Although machine learning with this text modeling approach has shown to perform much better than some rule-based approaches, e.g., term-counting approach, the achieved performance is much worse than traditional topic-based text classification. Compared to topic-based classification, one big challenge in sentiment classification is that sentiment polarity of one word is not always consistent with the whole orientation of the text. Consider the following two sentences: a1. This is not a good movie and I hate it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "a2. This is such a good movie and I do not hate it at all.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Because they are represented as almost the same bag-of-words, their classification results would be the same when applying machine learning with one-bag-of-words modeling. But their sentiment polarities are obviously different from each other. Therefore, traditional bag-ofwords modeling is not appropriate for sentiment classification to some extent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Instead of considering a text as a bag-of-words, we propose a new text modeling approach which considers a text as two bags-of-words. Specifically, a text T, either for training or testing, is partitioned into two sub-texts: sentiment-reversed part re T and sentiment-non-reversed part non T . Sentiment-reversed part ideally contains those sentences which holds words with the opposite sentiment polarity compared to the whole document's. Formally, a text T consists of multiple sentences, i.e., Suppose each sentence takes a sentiment-reversed tagging V which represents whether it is a sentimentreversed sentence ( ( ) 1 V s = ) or not ( ( ) 1 V s = \u2212 ). Originally, every sentence is assigned the same tagging value of -1, i.e., ( ) 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "o i V s = \u2212 , 1, 2,..., i m = .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Text Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We assume the sentences as the basic text unit and each one would be assigned a tag. Actually, the ideal basic text unit should be something like clauses rather than sentences (we call them sub-sentences). For example, b1. This is not a good movie and I hate it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Segmentation", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "b2. I like it because I didn't want to transfer video.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Segmentation", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Although these two sentences contain negation, it is unsuitable to put the whole sentence into the sentiment-reversed part. A better way is to first segment the sentences into subsentences and assign each one the sentiment-reversed tagging.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Segmentation", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We implement a simple approach to segment a document into sub-sentences. First, we do segmentation merely with the punctuations, such as period, comma, and interrogation mark. Then, we use some manually-collected key words, such as and, because and since for further segmentation. These key words are used to introduce various complex sentences with clauses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence Segmentation", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "A language usually has some special words called CWSs to indicate possible sentiment shifting of a word or a sentence. As mentioned in the introduction, two kinds of CWSs are commonly used to indicate valence switching: negatives and contrast transition connectives. We would use these CWSs to tag sentence to be a sentiment-reversed sentence or not.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "If the sentence i s contains k negatives, we update the tagging value as following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "( ) ( ) ( 1) k Neg i o i V s V s = \u00d7 \u2212", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "As for transition connectives, we first need to recognize which related sentences are possible to be sentiment-reversed. Different from negatives, each transition connective has its own rule to pick sentiment-reversed sentences around it. Here, we only focus on two transition connectives: but and however because they appear most frequently and more likely to really reverse the sentiment polarity. If the connective is but, the sentence before it might be sentiment reversing. If the connective is however, there might be not only one sentiment-reversed sentence before it. We only pick the nearest one as the sentiment-reversed sentence to avoid introducing too many noises. Overall, if the sentence i s appears before but or however, we update its tagging value as following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "( ) ( ) ( 1) Tran i Neg i V s V s = \u00d7 \u2212", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Then, we get the sentiment-reversed part re T and sentiment-non-reversed part non T as follows. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "It is worth pointing out that the sentiment-reversed sentences obtained by our approach sometimes are not really sentiment reversed. This is due to some mistakes in sentence segmentation and reversed-sentiment detection. Meanwhile, some real sentiment-reversed sentences are not able to be recognized. Consider the following sentence: c1. It could have been a great product. I dislike it, however.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "The sub-sentence (I dislike it) before however is actually not sentiment-reversed but the previous sentence (It could have been a great product) is. In fact, recognizing those sentimentreversed sentences can hardly perform perfectly and it might be as difficult as sentiment classification itself. Nevertheless, our main objective here is to build an approach which is able to incorporate the sentiment reversing information. As a preliminary step, we try to recognize most sentiment-reversed sentences and decrease their influence to the whole sentiment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment-reversed Sentence Detection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In this section, we propose three general strategies for classifying the text with two-bags-ofwords modeling: (1) remove the sentiment-reversed part; (2) tune the parameters of the sentiment-reversed part according to those learned from the sentiment-non-reversed part; (3) simultaneity learn both sentiment-reversed and sentiment-non-reversed parts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "The first naive strategy, called remove strategy, is to directly remove the sentiment-reversed part considering that they might badly influence the whole sentiment. Accordingly, the text is represented as a bag-of-words which only contains the words in all sentiment-non-reversed text, i.e., non T . Then, the words in non T are used to generate input vectors N X for each document. The learning objective is to minimize the following expected error 2 2 , arg min ( ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "T n n n N n w X Y w L w X Y w \u03bb = + \u2211", |
| "eq_num": ") 2 n" |
| } |
| ], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "In the testing phase, the label Y \u2032 of one sample N X \u2032 is estimated as ( )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "T n N Y Sgn w X \u2032 \u2032 = (10) Where ( ) Sgn x is defined as 1 0 ( ) 0 0 -1 0 if x Sgn x if x if x > \uf8f1 \uf8f4 = = \uf8f2 \uf8f4 < \uf8f3 (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "The second strategy, called shift strategy, takes the same learning process as the first strategy in the training phase but perform different estimation in the testing phase. Since the sentences in the sentiment-reversed part are possibly expressing the reversed polarities, we would like to shift the parameters \u02c6n w when they are applied to the sentiment-reversed text. Thus the label Y \u2032 of one sample", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "( N X \u2032 , N re X \u2212 \u2032 ) is estimated as \u02c6( ( 1) ) T T n N n N re Y Sgn w X w X \u2212 \u2032 \u2032 \u2032 = + \u2212 \u22c5 (12)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "where N re X \u2212 \u2032 represents the input vector of the sentiment-reversed text. Here, N X \u2032 and N re X \u2212 \u2032 are generated from the same term set as the first strategy, i.e., the words in non T .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "The third strategy, called joint strategy, simultaneity learning both sentiment-reversed and sentiment-non-reversed parts. In the training phase, the learning objective is to minimize the following expected error", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "2 2 2 2 , ,, arg min ( , ) 2 2 n r T T n r n r n N r R re n r w w X Y w w L w X w X Y w w \u03bb \u03bb \u2212 = + + + \u2211 (13)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "where R re X \u2212 represents the input vector of the sentiment-reversed text. Here, N X and R re X \u2212 are generated from different term sets: the words in non T and in re T respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "In the testing phase, the label Y \u2032 of one sample", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "( N X \u2032 , R re X \u2212 \u2032 ) is estimated as \u02c6( ) T T n N r R re Y Sgn w X w X \u2212 \u2032 \u2032 \u2032 = +", |
| "eq_num": "(" |
| } |
| ], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "14) Although all strategies are expressed in terms of linear classifiers, the corresponding ideas for the first and third strategies are general for any other classification algorithms. Overall speaking, only the third one really utilizes both the reversed-sentiment and non-reversed sentiment information for learning. Also, it shares the similar computational complexity as traditional machine learning approaches based on one-bag-of-words modeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentiment Classification", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "Data Set: There are some famous public data sets available for sentiment classification studies. Among them, Cornell movie-review dataset 1 (Pang and Lee, 2004) and product reviews 2 (Blitzer et al., 2007) are most popularly used. Both of them are 2-category (positive and negative) tasks and each consists of 2,000 reviews in a domain. The results in some previous work are sometimes not consistent due to the application of different domains of reviews when negation is considered (Pang et al., 2002 and Na et al., 2004) . Thus we follow the way of Blitzer et al. (2007) to collect more data involving data in our experiments. Specifically, we totally collect 5 domains of reviews from Amazon.cn, namely Book, Camera, HD (Hard Disk), Health and Kitchen. Each domain consists of 2,400 reviews and each category (negative or positive) contains 1,200 reviews.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 160, |
| "text": "(Pang and Lee, 2004)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 183, |
| "end": 205, |
| "text": "(Blitzer et al., 2007)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 483, |
| "end": 505, |
| "text": "(Pang et al., 2002 and", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 506, |
| "end": 522, |
| "text": "Na et al., 2004)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 551, |
| "end": 572, |
| "text": "Blitzer et al. (2007)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Experiment Implementation: We perform 5-fold cross validation in all experiments. That is to say, the dataset in each domain is randomly and evenly split into 5 folds. Then we use each 4 folds for training and the remaining 1 fold for testing. We use accuracy to measure the classification performances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
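The 5-fold protocol above can be sketched as follows; the function name and the toy data are hypothetical, and the paper's actual shuffling procedure is not specified beyond "randomly and evenly":

```python
import random

def five_fold_splits(docs, seed=0):
    """Randomly and evenly split a dataset into 5 folds and yield
    (train, test) index pairs, each fold serving once as the test set."""
    idx = list(range(len(docs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::5] for i in range(5)]  # round-robin: even fold sizes
    for k in range(5):
        test = folds[k]
        train = [i for j in range(5) if j != k for i in folds[j]]
        yield train, test

# Toy corpus of 10 documents; accuracy would be averaged over the 5 runs.
docs = [f"doc{i}" for i in range(10)]
sizes = [(len(tr), len(te)) for tr, te in five_fold_splits(docs)]
print(sizes)  # [(8, 2), (8, 2), (8, 2), (8, 2), (8, 2)]
```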
| { |
| "text": "Features: The features are single words with a BOOL weight (0 or 1), representing the presence or absence of a feature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
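Mapping a document to such Boolean features can be sketched directly; the vocabulary below is a hypothetical toy example:

```python
def bool_features(bag_of_words, vocabulary):
    """Map a bag of words to a Boolean feature vector: 1 if the
    vocabulary word is present in the document, else 0."""
    return [1 if w in bag_of_words else 0 for w in vocabulary]

vocab = ["good", "bad", "waste", "great"]
print(bool_features({"good", "great", "price"}, vocab))  # [1, 0, 0, 1]
```

Words outside the vocabulary (here "price") are simply ignored, and repeated occurrences do not change the weight.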
| { |
| "text": "Classification Algorithm: We use SGD linear predictors with Huber function as the loss function (Zhang, 2004) . Compared to support vector machine (SVM), SGD linear classifier not only performs online learning but also gives comparable or even better results. We compare the two classification algorithms with the Cornell movie-review data set (Pang and Lee, 2004) . The 5-fold cross validation average results are 0.843 by SVM and 0.859 by SGD, from which we can see that SGD outperforms SVM (implemented with LIBSVM 3 with linear kernel). Actually, similar conclusion can be found in Dredze et al. (2008) .", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 109, |
| "text": "(Zhang, 2004)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 344, |
| "end": 364, |
| "text": "(Pang and Lee, 2004)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 586, |
| "end": 606, |
| "text": "Dredze et al. (2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setup", |
| "sec_num": "4.1" |
| }, |
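A minimal sketch of SGD training with a modified-Huber-style loss (a smooth, robust hinge for classification) is shown below. The learning rate, epoch count, and toy data are hypothetical, and this omits the regularization and scheduling a real implementation such as Zhang (2004) would use:

```python
def sgd_modified_huber(data, dim, lr=0.1, epochs=5):
    """Train a linear classifier by stochastic gradient descent with a
    modified-Huber-style loss. `data` holds (x, y) pairs where x is a
    feature list and y is in {-1, +1}."""
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in data:
            z = y * sum(wi * xi for wi, xi in zip(w, x))  # signed margin
            if z >= 1.0:
                g = 0.0              # correct with margin: no update
            elif z >= -1.0:
                g = 2.0 * (1.0 - z)  # quadratic region of the loss
            else:
                g = 4.0              # linear region: bounded gradient
            for i, xi in enumerate(x):
                w[i] += lr * g * y * xi  # step toward a larger margin
    return w

# Toy linearly separable data with Boolean features.
data = [([1, 0], 1), ([0, 1], -1), ([1, 1], 1)]
w = sgd_modified_huber(data, dim=2)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
         for x, _ in data]
print(preds)  # [1, -1, 1]
```

The bounded gradient in the linear region is what makes the Huber-type loss less sensitive to outliers than a squared loss, while keeping updates as cheap as a perceptron step.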
| { |
| "text": "Before classification, each document is necessarily partitioned into two sub-texts: sentimentreversed part and sentiment-non-reversed part. To achieve that, we use some CVSs to classify those segmented sub-sentences into two categories: sentiment-reversed and sentiment-nonreversed. Specifically, negatives are used to recognize the negation sentences and the connectives of 'but' and 'however' are used to recognize contrast transition sentences. First of all, let us see the distribution of these negation sentences and contrast transition sentences in our review corpus. Figure 2 (left) shows the proportions of negation sentences to all sentences in negative and positive reviews respectively. The proportion is computed in each domain. From Figure 2 , we can see that negation sentences occur frequently in reviews and are more likely expressed in negative reviews. The proportion of negation sentences in negative reviews is about 8%, which is about twice as the one in positive reviews. This result agrees with our general knowledge that people are more likely to use negation sentences when expressing their negative opinions.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 746, |
| "end": 754, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distribution of Negation and Contrast Transition Sentences", |
| "sec_num": "4.2" |
| }, |
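The keyword-based partition described above can be sketched as follows; the trigger-word sets here are hypothetical stand-ins, not the paper's full inventory of negatives and connectives:

```python
# Hypothetical trigger-word lists for illustration only.
NEGATIONS = {"not", "no", "never", "don't"}
CONTRAST = {"but", "however"}

def partition_review(sentences):
    """Split a review into a sentiment-reversed bag (words from sentences
    containing a negation or contrast-transition trigger) and a
    non-reversed bag, each kept as its own bag of words."""
    reversed_bag, non_reversed_bag = set(), set()
    for s in sentences:
        words = set(s.lower().split())
        if words & NEGATIONS or words & CONTRAST:
            reversed_bag |= words
        else:
            non_reversed_bag |= words
    return non_reversed_bag, reversed_bag

non_rev, rev = partition_review(
    ["the screen is great", "but the battery is not good"])
print(sorted(rev))  # the 'but'/'not' sentence goes to the reversed bag
```

Each bag is then turned into its own Boolean feature vector, giving the two-bag-of-words representation used by the strategies above.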
| { |
| "text": "Figure 2 (right) shows the proportions of contrast transition sentences to all sentences in negative and positive reviews respectively. From this figure, we can also see that contrast sentences are more likely expressed in negative reviews than in positive reviews. Compared to negation sentences, contrast transition sentences are much fewer. Table 1 shows the classification results of different strategies when only negation is considered for sentiment-reversed sentence detection. Baseline shows the results of using all unigrams with one-bag-of-words modeling. Let us compare the results between the baseline and each strategy. First, comparing baseline and remove strategy, we find that simply removing all negation sentences is not helpful. Sometimes, the performances even decrease more than one percent (see the domain of health).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 344, |
| "end": 351, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Distribution of Negation and Contrast Transition Sentences", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Second, comparing baseline and switch strategy, we find that switch strategy is worse and always harmful for sentiment classification. This is different from our first thought of this strategy. But after close thinking of it, we would notice that assigning all the words a negative parameter, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Results with Different Strategies", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "* N w \u2212", |
| "eq_num": "( 1)" |
| } |
| ], |
| "section": "Classification Results with Different Strategies", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "in a sentiment-reversed sentence is not reasonable. In fact, it is only necessary to assign a positive parameter to those words which express sentiment. Moreover, some words are commonly used in both negation and non-negation sentences for expressing the same sentiment polarity. For example, see the word waste in the following two sentences. d1. It is a waste of your money.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classification Results with Different Strategies", |
| "sec_num": "4.3" |
| }, |
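The switch strategy's failure mode can be made concrete with a small sketch: a single weight vector is used and features from the sentiment-reversed bag enter with flipped sign. The weights and two-word vocabulary below are hypothetical:

```python
def predict_switch(w, x_n, x_r):
    """Switch-strategy prediction: one shared weight vector, with the
    reversed-bag features negated, i.e. the score is w^T (x_n - x_r)."""
    score = sum(wi * (a - b) for wi, a, b in zip(w, x_n, x_r))
    return 1 if score >= 0 else -1

# Vocabulary ["waste", "good"]; "waste" is a strong negative word.
w = [-2.0, 1.0]
# "Do not waste your money": "waste" lands in the reversed bag, so its
# negative weight is flipped and the (negative) review scores positive.
print(predict_switch(w, [0, 0], [1, 0]))  # -(-2.0) = 2.0 -> prints 1
```

This is exactly the d1/d2 problem: waste stays negative in both sentences, so blindly negating its weight inside negation sentences misclassifies d2.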
| { |
| "text": "Third, comparing to baseline, we find that joint strategy is successful and consistently improves the performance. But the improved performances in some domains are insignificant (less than 0.5% in camera). Therefore, it is not strange that the conclusions in Pang et al. (2002) and Na et al. (2004) is a little different from each other. Whether inducing negation is effective or not is influenced by the application domains. Table 2 shows the classification results of different strategies when only contrast transition is considered for sentiment-reversed sentence detection. Let us compare the results between the baseline and each strategy respectively.", |
| "cite_spans": [ |
| { |
| "start": 260, |
| "end": 278, |
| "text": "Pang et al. (2002)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 283, |
| "end": 299, |
| "text": "Na et al. (2004)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 427, |
| "end": 434, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "d2. Do not waste your money.", |
| "sec_num": null |
| }, |
| { |
| "text": "First, quite different from the case of negation, simply removing the contrast transition sentences can always improve classification performances. We think this is mainly because the amount of transition sentences is much less than negation sentences. Removing them is beneficial for deleting classification noise without losing too much useful classification information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "d2. Do not waste your money.", |
| "sec_num": null |
| }, |
| { |
| "text": "Second, switch strategy generally fails to improve the performance. It can only make very small improvement in the domain of health Third, joint strategy is still effective in dealing with contrast transition. However, some results are no better than remove strategy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "d2. Do not waste your money.", |
| "sec_num": null |
| }, |
| { |
| "text": "Overall speaking, contrast transition is also helpful for classification. But the improved performances are a little lower than the ones by using negation. This is mainly because negation appears more often than contrast transition, which makes the sentences' sentiment reversed more frequently. Table 3 shows the classification results of different strategies when both negation and contrast transition are considered for sentiment-reversed sentence detection. Apparently, joint strategy is more powerful than the other two strategies and consistently achieves much better classification results than the baseline (The improved accuracy is no less than 1% in all domains).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 296, |
| "end": 303, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "d2. Do not waste your money.", |
| "sec_num": null |
| }, |
| { |
| "text": "Comparing the results in Table 3 to the results in Table 1 or Table 2 , we can conclude that considering both negation and contrast transition is generally a better choice than considering only one of them.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 25, |
| "end": 32, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 51, |
| "end": 69, |
| "text": "Table 1 or Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "d2. Do not waste your money.", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we propose an approach for incorporating sentiment reversing information into machine learning based sentiment classification system. Specifically, we consider two kinds of linguistic phenomena: negation and contrast transition, which are popularly used to reverse the sentiment polarity. Experimental results on a newly collected corpus show that simply removing the contrast transition sentences is helpful but it is not effective for negation. Furthermore, we see that our approach with joint strategy is able to robustly improve the performances across all five domains.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In our approach, we only use negation and contrast transition keywords to detect sentiment reversed sentences. In addition, there certainly exist some other structures which can reverse the sentiment polarity of a word or sentence. In our future work, we hope to find some more effective detection approaches and consider more structures to recognize sentiment-reversed sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "23rd Pacific Asia Conference on Language, Information and Computation, pages 297-306", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cs.cornell.edu/People/pabo/movie-review-data/ 2 http://www.seas.upenn.edu/~mdredze/datasets/sentiment/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.csie.ntu.edu.tw/~cjlin/libsvm/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of Annual Meeting on Association for Computational Linguistics (ACL-07)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blitzer, J., M. Dredze and F. Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of Annual Meeting on Association for Computational Linguistics (ACL-07).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Confidence-weighted Linear Classification", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of International Conference on Machine Learning (ICML-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dredze, M., K. Crammer and F. Pereira. 2008. Confidence-weighted Linear Classification. In Proceedings of International Conference on Machine Learning (ICML-08).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Identifying Comparative Sentences in Text Documents", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Jindal", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 29th Annual International ACM SIGIR Conference on Research & Development on Information Retrieval (SIGIR-06)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jindal, N. and B. Liu. 2006. Identifying Comparative Sentences in Text Documents. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research & Development on Information Retrieval (SIGIR-06).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Sentiment Classification of Movie Reviews using Contextual Valence Shifters", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kennedy", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Inkpen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computational Intelligence", |
| "volume": "22", |
| "issue": "2", |
| "pages": "110--125", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kennedy, A. and D. Inkpen. 2006. Sentiment Classification of Movie Reviews using Contextual Valence Shifters . Computational Intelligence, Vol. 22, No. 2, pp. 110-125.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Multi-domain Sentiment Classification", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Zong", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of Annual Meeting of the Association for Computational Linguistics: Human Language Technology (ACL-08: HLT)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, S. and C. Zong. 2008. Multi-domain Sentiment Classification. In Proceedings of Annual Meeting of the Association for Computational Linguistics: Human Language Technology (ACL-08: HLT).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Effectiveness of Simple Linguistic Processing in Automatic Sentiment Classification of Product Reviews", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Na", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sui", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Khoo", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Chan", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Conference of the International Society for Knowledge Organization (ISKO)", |
| "volume": "", |
| "issue": "", |
| "pages": "49--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Na, J., H. Sui, C. Khoo, S. Chan and Y. Zhou. 2004. Effectiveness of Simple Linguistic Processing in Automatic Sentiment Classification of Product Reviews. In Conference of the International Society for Knowledge Organization (ISKO), pages 49-54.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Thumbs up? Sentiment Classification using Machine Learning Techniques", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang, B., L. Lee and S. Vaithyanathan. 2002. Thumbs up? Sentiment Classification using Machine Learning Techniques. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-02).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A Sentimental Education: Sentiment Analysis using Subjectivity Summarization based on Minimum Cuts", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of Annual Meeting on Association for Computational Linguistics (ACL-04)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang, B. and L. Lee. 2004. A Sentimental Education: Sentiment Analysis using Subjectivity Summarization based on Minimum Cuts. In Proceedings of Annual Meeting on Association for Computational Linguistics (ACL-04).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Opinion Mining and Sentiment Analysis. Foundation and Trends in Information Retrieval", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "2", |
| "issue": "", |
| "pages": "1--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang, B. and L. Lee. 2008. Opinion Mining and Sentiment Analysis. Foundation and Trends in Information Retrieval, 2(1-2):1-135.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Contextual Valence Shifters", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Polanyi", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Zaenen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computing attitude and affect in text: Theory and application", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Polanyi, L. and A. Zaenen. 2006. Contextual Valence Shifters. In Computing attitude and affect in text: Theory and application. Springer Verlag.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Feature Subsumption for Opinion Analysis", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Patwardhan", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riloff, E., S. Patwardhan and J. Wiebe. 2006. Feature Subsumption for Opinion Analysis. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-06).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Thumbs Up or Thumbs Down? Sentiment Orientation Applied to Unsupervised Classification of Reviews", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of Annual Meeting on Association for Computational Linguistics (ACL-02)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Turney, P. 2002. Thumbs Up or Thumbs Down? Sentiment Orientation Applied to Unsupervised Classification of Reviews. In Proceedings of Annual Meeting on Association for Computational Linguistics (ACL-02).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Solving Large Scale Linear Prediction Problems using Stochastic Gradient Descent Algorithms", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, T. 2004. Solving Large Scale Linear Prediction Problems using Stochastic Gradient Descent Algorithms. In Proceedings of International Conference on Machine Learning (ICML-04).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Standard online SGD algorithm", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "text": "The proportion of negation (left) and transition (right) sentences in negative and positive reviews", |
| "num": null, |
| "type_str": "figure", |
| "uris": null |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table><tr><td>Domain</td><td>Baseline</td><td/><td>Negation</td><td/></tr><tr><td/><td/><td>Remove</td><td>Switch</td><td>Joint</td></tr><tr><td>Book</td><td>0.849</td><td>0.845</td><td>0.834</td><td>0.860</td></tr><tr><td>Camera</td><td>0.920</td><td>0.912</td><td>0.907</td><td>0.924</td></tr><tr><td>HD</td><td>0.934</td><td>0.929</td><td>0.917</td><td>0.946</td></tr><tr><td>Health</td><td>0.841</td><td>0.830</td><td>0.819</td><td>0.854</td></tr><tr><td>Kitchen</td><td>0.860</td><td>0.861</td><td>0.858</td><td>0.872</td></tr></table>", |
| "type_str": "table", |
| "text": "The classification results of different strategies when only considering negation", |
| "num": null |
| }, |
| "TABREF2": { |
| "html": null, |
| "content": "<table><tr><td>Domain</td><td>Baseline</td><td/><td>Transition</td><td/></tr><tr><td/><td/><td>Remove</td><td>Switch</td><td>Joint</td></tr><tr><td>Book</td><td>0.849</td><td>0.850</td><td>0.843</td><td>0.848</td></tr><tr><td>Camera</td><td>0.920</td><td>0.924</td><td>0.917</td><td>0.930</td></tr><tr><td>HD</td><td>0.934</td><td>0.934</td><td>0.930</td><td>0.939</td></tr><tr><td>Health</td><td>0.841</td><td>0.848</td><td>0.845</td><td>0.854</td></tr><tr><td>Kitchen</td><td>0.860</td><td>0.865</td><td>0.855</td><td>0.864</td></tr></table>", |
| "type_str": "table", |
| "text": "The classification results of different strategies when only considering contrast transition", |
| "num": null |
| }, |
| "TABREF3": { |
| "html": null, |
| "content": "<table><tr><td>Domain</td><td>Baseline</td><td colspan=\"2\">Negation + Transition</td><td/></tr><tr><td/><td/><td>Remove</td><td>Switch</td><td>Joint</td></tr><tr><td>Book</td><td>0.849</td><td>0.847</td><td>0.821</td><td>0.863</td></tr><tr><td>Camera</td><td>0.920</td><td>0.919</td><td>0.900</td><td>0.930</td></tr><tr><td>HD</td><td>0.934</td><td>0.923</td><td>0.913</td><td>0.946</td></tr><tr><td>Health</td><td>0.841</td><td>0.848</td><td>0.812</td><td>0.864</td></tr><tr><td>Kitchen</td><td>0.860</td><td>0.861</td><td>0.852</td><td>0.873</td></tr></table>", |
| "type_str": "table", |
| "text": "The classification results of different strategies when considering both negation and contrast transition", |
| "num": null |
| } |
| } |
| } |
| } |