{
"paper_id": "P11-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:47:22.401779Z"
},
"title": "Automatically Extracting Polarity-Bearing Topics for Cross-Domain Sentiment Classification",
"authors": [
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {},
"email": "y.he@open.ac.uk"
},
{
"first": "Chenghua",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Exeter",
"location": {
"postCode": "EX4 4QF",
"settlement": "Exeter",
"country": "UK"
}
},
"email": ""
},
{
"first": "Harith",
"middle": [],
"last": "Alani",
"suffix": "",
"affiliation": {},
"email": "h.alani@open.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Joint sentiment-topic (JST) model was previously proposed to detect sentiment and topic simultaneously from text. The only supervision required by JST model learning is domain-independent polarity word priors. In this paper, we modify the JST model by incorporating word polarity priors through modifying the topic-word Dirichlet priors. We study the polarity-bearing topics extracted by JST and show that by augmenting the original feature space with polarity-bearing topics, the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criteria for cross-domain sentiment classification, our proposed approach performs either better or comparably compared to previous approaches. Nevertheless, our approach is much simpler and does not require difficult parameter tuning.",
"pdf_parse": {
"paper_id": "P11-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Joint sentiment-topic (JST) model was previously proposed to detect sentiment and topic simultaneously from text. The only supervision required by JST model learning is domain-independent polarity word priors. In this paper, we modify the JST model by incorporating word polarity priors through modifying the topic-word Dirichlet priors. We study the polarity-bearing topics extracted by JST and show that by augmenting the original feature space with polarity-bearing topics, the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criteria for cross-domain sentiment classification, our proposed approach performs either better or comparably compared to previous approaches. Nevertheless, our approach is much simpler and does not require difficult parameter tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Given a piece of text, sentiment classification aims to determine whether the semantic orientation of the text is positive, negative or neutral. Machine learning approaches to this problem (?; ?; ?; ?; ?; ?) typically assume that classification models are trained and tested using data drawn from some fixed distribution. However, in many practical cases, we may have plentiful labeled examples in the source domain, but very few or no labeled examples in the target domain with a different distribution. For example, we may have many labeled books reviews, but we are interested in detecting the polarity of electronics reviews. Reviews for different produces might have widely different vocabularies, thus classifiers trained on one domain often fail to produce satisfactory results when shifting to another domain. This has motivated much research on sentiment transfer learning which transfers knowledge from a source task or domain to a different but related task or domain (?; ?; ?; ?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Joint sentiment-topic (JST) model (?; ?) was extended from the latent Dirichlet allocation (LDA) model (?) to detect sentiment and topic simultaneously from text. The only supervision required by JST learning is domain-independent polarity word prior information. With prior polarity words extracted from both the MPQA subjectivity lexicon 1 and the appraisal lexicon 2 , the JST model achieves a sentiment classification accuracy of 74% on the movie review data 3 and 71% on the multi-domain sentiment dataset 4 . Moreover, it is also able to extract coherent and informative topics grouped under different sentiment. The fact that the JST model does not required any labeled documents for training makes it desirable for domain adaptation in sentiment classification. Many existing approaches solve the sentiment transfer problem by associating words from different domains which indicate the same sentiment (?; ?). Such an association mapping problem can be naturally solved by the posterior inference in the JST model. Indeed, the polarity-bearing topics extracted by JST essentially capture sentiment associations among words from different domains which effectively overcome the data distribution difference between source and target domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The previously proposed JST model uses the sentiment prior information in the Gibbs sampling inference step that a sentiment label will only be sampled if the current word token has no prior sentiment as defined in a sentiment lexicon. This in fact implies a different generative process where many of the word prior sentiment labels are observed. The model is no longer \"latent\". We propose an alternative approach by incorporating word prior polarity information through modifying the topic-word Dirichlet priors. This essentially creates an informed prior distribution for the sentiment labels and would allow the model to actually be latent and would be consistent with the generative story.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We study the polarity-bearing topics extracted by the JST model and show that by augmenting the original feature space with polarity-bearing topics, the performance of in-domain supervised classifiers learned from augmented feature representation improves substantially, reaching the state-of-the-art results of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using simple feature augmentation, our proposed approach outperforms the structural correspondence learning (SCL) (?) algorithm and achieves comparable results to the recently proposed spectral feature alignment (SFA) method (?). Nevertheless, our approach is much simpler and does not require difficult parameter tuning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We proceed with a review of related work on sentiment domain adaptation. We then briefly describe the JST model and present another approach to incorporate word prior polarity information into JST learning. We subsequently show that words from different domains can indeed be grouped under the same polarity-bearing topic through an illustration of example topic words extracted by JST before proposing a domain adaptation approach based on JST. We verify our proposed approach by conducting experiments on both the movie review data and the multi-domain sentiment dataset. Finally, we conclude our work and outline future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been significant amount of work on algorithms for domain adaptation in NLP. Earlier work treats the source domain data as \"prior knowledge\" and uses maximum a posterior (MAP) estimation to learn a model for the target domain data under this prior distribution (?). Chelba and Acero (?) also uses the source domain data to estimate prior distribution but in the context of a maximum entropy (ME) model. The ME model has later been studied in (?) for domain adaptation where a mixture model is defined to learn differences between domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other approaches rely on unlabeled data in the target domain to overcome feature distribution differences between domains. Motivated by the alternating structural optimization (ASO) algorithm (?) for multi-task learning, Blitzer et al. (?) proposed structural correspondence learning (SCL) for domain adaptation in sentiment classification. Given labeled data from a source domain and unlabeled data from target domain, SCL selects a set of pivot features to link the source and target domains where pivots are selected based on their common frequency in both domains and also their mutual information with the source labels.",
"cite_spans": [
{
"start": 221,
"end": 239,
"text": "Blitzer et al. (?)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There has also been research in exploring careful structuring of features for domain adaptation. Daum\u00e9 (?) proposed a kernel-mapping function which maps both source and target domains data to a high-dimensional feature space so that data points from the same domain are twice as similar as those from different domains. Dai et al.(?) proposed translated learning which uses a language model to link the class labels to the features in the source spaces, which in turn is translated to the features in the target spaces. Dai et al. (?) further proposed using spectral learning theory to learn an eigen feature representation from a task graph representing features, instances and class labels. In a similar vein, Pan et al. (?) proposed the spectral feature alignment (SFA) algorithm where some domainindependent words are used as a bridge to construct a bipartite graph to model the co-occurrence relationship between domain-specific words and domain-independent words. Feature clusters are generated by co-align domain-specific and domainindependent words.",
"cite_spans": [
{
"start": 320,
"end": 333,
"text": "Dai et al.(?)",
"ref_id": null
},
{
"start": 520,
"end": 534,
"text": "Dai et al. (?)",
"ref_id": null
},
{
"start": 712,
"end": 726,
"text": "Pan et al. (?)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Graph-based approach has also been studied in (?) where a graph is built with nodes denoting documents and edges denoting content similarity between documents. The sentiment score of each unlabeled documents is recursively calculated until convergence from its neighbors the actual labels of source domain documents and pseudo-labels of target document documents. This approach was later extended by simultaneously considering relations between documents and words from both source and target domains (?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "More recently, Seah et al. (?) addressed the issue when the predictive distribution of class label given input data of the domains differs and proposed Predictive Distribution Matching SVM learn a robust classifier in the target domain by leveraging the labeled data from only the relevant regions of multiple sources.",
"cite_spans": [
{
"start": 15,
"end": 30,
"text": "Seah et al. (?)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Assume that we have a corpus with a collection of D documents denoted by C",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "= {d 1 , d 2 , ..., d D }; each document in the corpus is a sequence of N d words denoted by d = (w 1 , w 2 , ..., w N d )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": ", and each word in the document is an item from a vocabulary index with V distinct terms denoted by {1, 2, ..., V }. Also, let S be the number of distinct sentiment labels, and T be the total number of topics. The generative process in JST which corresponds to the graphical model shown in Figure ? ?(a) is as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 290,
"end": 298,
"text": "Figure ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "\u2022 For each document d, choose a distribution \u03c0 d \u223c Dir(\u03b3). \u2022 For each sentiment label l under document d, choose a distribution \u03b8 d,l \u223c Dir(\u03b1). \u2022 For each word w i in document d -choose a sentiment label l i \u223c Mult(\u03c0 d ), -choose a topic z i \u223c Mult(\u03b8 d,l i ), -choose a word w i from \u03d5 l i z i , a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "Multinomial distribution over words conditioned on topic z i and sentiment label l i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "Gibbs sampling was used to estimate the posterior distribution by sequentially sampling each variable of interest, z t and l t here, from the distribution over that variable given the current values of all other variables and data. Letting the superscript \u2212t denote a quantity that excludes data from t th position, the conditional posterior for z t and l t by marginalizing out the random variables \u03d5, \u03b8, and \u03c0 is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "w \u0219 \u0133 \u012e z \u0215 Nd S*T \u028c \u0216 D l S (a) JST model. w \u0219 \u0133 \u012e z \u0215 Nd S*T \u028c \u0216 D l S S \u021c S (b) Modified JST model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "P (z t = j, l t = k|w, z \u2212t , l \u2212t , \u03b1, \u03b2, \u03b3) \u221d N \u2212t wt,j,k + \u03b2 N \u2212t j,k + V \u03b2 \u2022 N \u2212t j,k,d + \u03b1 j,k N \u2212t k,d + j \u03b1 j,k \u2022 N \u2212t k,d + \u03b3 N \u2212t d + S\u03b3 . (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "where N wt,j,k is the number of times word w t appeared in topic j and with sentiment label k, N j,k is the number of times words assigned to topic j and sentiment label k, N j,k,d is the number of times a word from document d has been associated with topic j and sentiment label k, N k,d is the number of times sentiment label k has been assigned to some word tokens in document d, and N d is the total number of words in the document collection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "In the modified JST model as shown in Figure ??(b), we add an additional dependency link of \u03d5 on the matrix \u03bb of size S \u00d7 V which we use to encode word prior sentiment information into the JST model. For each word w \u2208 {1, ..., V }, if w is found in the sentiment lexicon, for each l \u2208 {1, ..., S}, the element \u03bb lw is updated as follows",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 44,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb lw = 1 if S(w) = l 0 otherwise ,",
"eq_num": "(2)"
}
],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "where the function S(w) returns the prior sentiment label of w in a sentiment lexicon, i.e. neutral, posi- tive or negative. The matrix \u03bb can be considered as a transformation matrix which modifies the Dirichlet priors \u03b2 of size S \u00d7 T \u00d7 V , so that the word prior polarity can be captured. For example, the word \"excellent\" with index i in the vocabulary has a positive polarity. The corresponding row vector in \u03bb is [0, 1, 0] with its elements representing neutral, positive, and negative. For each topic j, multiplying \u03bb li with \u03b2 lji , only the value of \u03b2 lposji is retained, and \u03b2 lneuji and \u03b2 lnegji are set to 0. Thus, the word \"excellent\" can only be drawn from the positive topic word distributions generated from a Dirichlet distribution with parameter \u03b2 lpos .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Sentiment-Topic (JST) Model",
"sec_num": "3"
},
{
"text": "The JST model allows clustering different terms which share similar sentiment. In this section, we study the polarity-bearing topics extracted by JST. We combined reviews from the source and target domains and discarded document labels in both domains. There are a total of six different combinations. We then run JST on the combined data sets and listed some of the topic words extracted as shown in Table ? ?. Words in each cell are grouped under one topic and the upper half of the table shows topic words under the positive sentiment label while the lower half shows topic words under the negative sentiment label.",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 408,
"text": "Table ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Polarity Words Extracted by JST",
"sec_num": "4"
},
{
"text": "We can see that JST appears to better capture sentiment association distribution in the source and target domains. For example, in the DVD+Elec. set, words from the DVD domain describe a rock concert DVD while words from the Electronics domain are likely relevant to stereo amplifiers and receivers, and yet they are grouped under the same topic by the JST model. Checking the word coverage in each domain reveals that for example \"bass\" seldom appears in the DVD domain, but appears more often in the Electronics domain. Likewise, in the Book+Kitch. set, \"stainless\" rarely appears in the Book domain and \"interest\" does not occur often in the Kitchen domain and they are grouped under the same topic. These observations motivate us to explore polaritybearing topics extracted by JST for cross-domain sentiment classification since grouping words from different domains but bearing similar sentiment has the effect of overcoming the data distribution difference of two domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarity Words Extracted by JST",
"sec_num": "4"
},
{
"text": "Given input data x and a class label y, labeled patterns of one domain can be drawn from the joint distribution P (x, y) = P (y|x)P (x). Domain adaptation usually assume that data distribution are different in source and target domains, i.e., P s (x) = P t (x). The task of domain adaptation is to predict the label y t i corresponding to x t i in the target domain. We assume that we are given two sets of training data, D s and D t , the source domain and target domain data sets, respectively. In the multiclass classification problem, the source domain data consist of labeled instances,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "D s = {(x s n ; y s n ) \u2208 X \u00d7 Y : 1 \u2264 n \u2264 N s },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "where X is the input space and Y is a finite set of class labels. No class label is given in the target domain,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "D t = {x t n \u2208 X : 1 \u2264 n \u2264 N t , N t N s }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "Algorithm ?? shows how to perform domain adaptation using the JST model. The source and target domain data are first merged with document labels discarded. A JST model is then learned from the merged corpus to generate polaritybearing topics for each document. The original documents in the source domain are augmented with those polarity-bearing topics as shown in Step 4 of Algorithm ??, where l i z i denotes a combination of sentiment label l i and topic z i for word w i . Finally, feature selection is performed according to the information gain criteria and a classifier is then trained from the source domain using the new document representations. The target domain documents are also encoded in a similar way with polarity-bearing topics added into their feature representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "Algorithm 1 Domain adaptation using JST.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "Input: The source domain data D s = {(x s n ; y s n ) \u2208 X \u00d7 Y : 1 \u2264 n \u2264 N s }, the target domain data, D t = {x t n \u2208 X : 1 \u2264 n \u2264 N t , N t N s } Output: A sentiment classifier for the target domain D t 1: Merge D s and D t with document labels discarded, D = {(x s n , 1 \u2264 n \u2264 N s ; x t n , 1 \u2264 n \u2264 N t } 2: Train a JST model on D 3: for each document x s n = (w 1 , w 2 , ..., w m ) \u2208 D s do 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "Augment document with polarity-bearing topics generated from JST,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "x s n = (w 1 , w 2 , ..., w m , l 1 z 1 , l 2 z 2 , ..., l m z m ) 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "Add {x s n ; y s n } into a document pool B 6: end for 7: Perform feature selection using IG on B 8: Return a classifier, trained on B",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "As discussed in Section ?? that the JST model directly models P (l|d), the probability of sentiment label given document, and hence document polarity can be classified accordingly. Since JST model learning does not require the availability of document labels, it is possible to augment the source domain data by adding most confident pseudo-labeled documents from the target domain by the JST model as shown in Algorithm ??.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation using JST",
"sec_num": "5"
},
{
"text": "We evaluate our proposed approach on the two datasets, the movie review (MR) data and the multidomain sentiment (MDS) dataset. The movie review data consist of 1000 positive and 1000 negative movie reviews drawn from the IMDB movie archive while the multi-domain sentiment dataset contains four different types of product reviews extracted from Amazon.com including Book, DVD, Electronics and Kitchen appliances. Each category Algorithm 2 Adding pseudo-labeled documents. Input: The target domain data, D t = {x t n \u2208 X : Infer its sentiment class label from JST as l n = arg max s P (l|x t n ; \u039b) 4:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "1 \u2264 n \u2264 N t , N t N s },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "if P (l n |x t n ; \u039b) > \u03c4 then 5:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "Add labeled sample (x t n , l n ) into a document pool B 6: end if 7: end for of product reviews comprises of 1000 positive and 1000 negative reviews and is considered as a domain. Preprocessing was performed on both of the datasets by removing punctuation, numbers, nonalphabet characters and stopwords. The MPQA subjectivity lexicon is used as a sentiment lexicon in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "While the original JST model can produce reasonable results with a simple symmetric Dirichlet prior, here we use asymmetric prior \u03b1 over the topic proportions which is learned directly from data using a fixed-point iteration method (?).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "In our experiment, \u03b1 was updated every 25 iterations during the Gibbs sampling procedure. In terms of other priors, we set symmetric prior \u03b2 = 0.01 and \u03b3 = (0.05\u00d7L)/S, where L is the average document length, and the value of 0.05 on average allocates 5% of probability mass for mixing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "6.1"
},
{
"text": "We performed 5-fold cross validation for the performance evaluation of supervised sentiment classification. Results reported in this section are averaged over 10 such runs. We have tested several classifiers including Na\u00efve Bayes (NB) and support vector machines (SVMs) from WEKA 5 , and maximum entropy (ME) from MALLET 6 . All parameters are set to their default values except the Gaussian prior variance is set to 0.1 for the ME model training. The results show that ME consistently outperforms NB and SVM on average. Thus, we only report results from ME trained on document vectors with each term weighted according to its frequency. The only parameter we need to set is the number of topics T . It has to be noted that the actual number of feature clusters is 3 \u00d7 T . For example, when T is set to 5, there are 5 topic groups under each of the positive, negative, or neutral sentiment labels and hence there are altogether 15 feature clusters. The generated topics for each document from the JST model were simply added into its bag-of-words (BOW) feature representation prior to model training. Figure ?? shows the classification results on the five different domains by varying the number of topics from 1 to 200. It can be observed that the best classification accuracy is obtained when the number of topics is set to 1 (or 3 feature clusters). Increasing the number of topics results in the decrease of accuracy though it stabilizes after 15 topics. Nevertheless, when the number of topics is set to 15, using JST feature augmentation still outperforms ME without feature augmentation (the baseline model) in all of the domains. It is worth pointing out that the JST model with single topic becomes the standard LDA model with only three sentiment topics. Nevertheless, we have proposed an effective way to incorporate domain-independent word polarity prior information into model learning. As will be shown later in Table ? ? 
that the JST model with word polarity priors incorporated performs significantly better than the LDA model without incorporating such prior information.",
"cite_spans": [],
"ref_spans": [
{
"start": 1101,
"end": 1110,
"text": "Figure ??",
"ref_id": null
},
{
"start": 1926,
"end": 1933,
"text": "Table ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supervised Sentiment Classification",
"sec_num": "6.2"
},
{
"text": "For comparison purpose, we also run the LDA model and augmented the BOW features with the generated topics in a similar way. The best accuracy was obtained when the number of topics is set to 15 in the LDA model. Table ? ? shows the classification accuracy results with or without feature augmentation. We have performed significance test and found that LDA performs statistically significant better than Baseline according to a paired t-test with p < 0.005 for the Kitchen domain and with p < 0.001 for all the other domains. JST performs statistically significant better than both Baseline and LDA with p < 0.001. We also compare our method with other recently proposed approaches. Yessenalina et al. (?) explored different methods to automatically generate annotator rationales to improve sentiment classification accuracy. Our method using JST feature augmentation consistently performs better than their approach (denoted as [YE10] in Table ? ?). They further proposed a two-level structured model (?) for document-level sentiment classification. The best accuracy obtained on the MR data is 93.22% with the model being initialized with sentence-level human annotations, which is still worse than ours. Li et al. (?) adopted a two-stage process by first classifying sentences as personal views and impersonal views and then using an ensemble method to perform sentiment classification. Their method (denoted as [LI10] in Table ? ?) performs worse than either LDA or JST feature augmentation. To the best of our knowledge, the results achieved using JST feature augmentation are the state-of-the-art for both the MR and the MDS datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 220,
"text": "Table ?",
"ref_id": null
},
{
"start": 940,
"end": 947,
"text": "Table ?",
"ref_id": null
},
{
"start": 1426,
"end": 1433,
"text": "Table ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Supervised Sentiment Classification",
"sec_num": "6.2"
},
{
"text": "We conducted domain adaptation experiments on the MDS dataset comprising of four different domains, Book (B), DVD (D), Electronics (E), and Kitchen appliances (K). We randomly split each do-main data into a training set of 1,600 instances and a test set of 400 instances. A classifier trained on the training set of one domain is tested on the test set of a different domain. We preformed 5 random splits and report the results averaged over 5 such runs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation",
"sec_num": "6.3"
},
{
"text": "We compare our proposed approaches with two baseline models. The first one (denoted as \"Base\" in Table ? ?) is an ME classifier trained without adaptation. LDA results were generated from an ME classifier trained on document vectors augmented with topics generated from the LDA model. The number of topics was set to 15. JST results were obtained in a similar way except that we used the polaritybearing topics generated from the JST model. We also tested with adding pseudo-labeled examples from the JST model into the source domain for ME classifier training (following Algorithm ??), denoted as \"JST-PL\" in Table ? ?. The document sentiment classification probability threshold \u03c4 was set to 0.8. Finally, we performed feature selection by selecting the top 2000 features according to the information gain criteria (\"JST-IG\") 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 104,
"text": "Table ?",
"ref_id": null
},
{
"start": 610,
"end": 617,
"text": "Table ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": null
},
{
"text": "There are altogether 12 cross-domain sentiment classification tasks. We showed the adaptation loss results in Table ? ? where the result for each domain and for each method is averaged over all three possible adaptation tasks by varying the source domain. The adaptation loss is calculated with respect to the in-domain gold standard classification result. For example, the in-domain goal standard for the Book domain is 79.96%. For adapting from DVD to Book, baseline achieves 72.25% and JST gives 76.45%. The adaptation loss is 7.71 for baseline and 3.51 for JST.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": null
},
{
"text": "It can be observed from Table ? ? that LDA only improves slightly compared to the baseline with an error reduction of 11%. JST further reduces the error due to transfer by 27%. Adding pseudo-labeled examples gives a slightly better performance compared to JST with an error reduction of 36%. With feature selection, JST-IG outperforms all the other approaches with a relative error reduction of 53%. 7.9 7.7 6.3 5.4 3.9 Kitch.",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": null
},
{
"text": "7.6 7.6 6.9 6.1 4.4 Average 8.6 7.7 6.3 5.5 4.1 Table 3 : Adaptation loss with respect to the in-domain gold standard. The last row shows the average loss over all the four domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 48,
"end": 55,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline Models",
"sec_num": null
},
{
"text": "There is only one parameters to be set in the JST-IG approach, the number of topics. We plot the classification accuracy versus different topic numbers in Figure ? ? with the number of topics varying between 1 and 200, corresponding to feature clusters varying between 3 and 600. It can be observed that for the relatively larger Book and DVD data sets, the accuracies peaked at topic number 10, whereas for the relatively smaller Electronics and Kitchen data sets, the best performance was obtained at topic number 50. Increasing topic numbers results in the decrease of classification accuracy. Manually examining the extracted polarity topics from JST reveals that when the topic number is small, each topic cluster contains well-mixed words from different domains. However, when the topic number is large, words under each topic cluster tend to be dominated by a single domain.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Parameter Sensitivity",
"sec_num": null
},
{
"text": "We compare in Figure ? ? our proposed approach with two other domain adaptation algorithms for sentiment classification, SCL and SFA. Each set of bars represent a cross-domain sentiment classification task. The thick horizontal lines are in-domain sentiment classification accuracies. It is worth noting that our in-domain results are slightly different from those reported in (?; ?) due to different random splits. Our proposed JST-IG approach outperforms SCL in average and achieves comparable results to SFA. While SCL requires the construction of a reasonable number of auxiliary tasks that are useful to model \"pivots\" and \"non-pivots\", SFA relies on a good selection of domain-independent features for the construction of bipartite feature graph before running spectral clustering to derive feature clusters. 129 On the contrary, our proposed approach based on the JST model is much simpler and yet still achieves comparable results.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure ?",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Existing Approaches",
"sec_num": null
},
{
"text": "In this paper, we have studied polarity-bearing topics generated from the JST model and shown that by augmenting the original feature space with polaritybearing topics, the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance on both the movie review data and the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criteria for cross-domain sentiment classification, our proposed approach outperforms SCL and gives similar results as SFA. Nevertheless, our approach is much simpler and does not require difficult parameter tuning. There are several directions we would like to explore in the future. First, polarity-bearing topics generated by the JST model were simply added into the original feature space of documents, it is worth investigating attaching different weight to each topic maybe in proportional to the posterior probability of sentiment label and topic given a word estimated by the JST model. Second, it might be interesting to study the effect of introducing a tradeoff parameter to balance the effect of original and new features. Finally, our experimental results show that adding pseudo-labeled examples by the JST model does not appear to be effective. We could possibly explore instance weight strategies (?) on both pseudo-labeled examples and source domain training examples in order to improve the adaptation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "http://www.cs.pitt.edu/mpqa/ 2 http://lingcog.iit.edu/arc/appraisal_ lexicon_2007b.tar.gz 3 http://www.cs.cornell.edu/people/pabo/ movie-review-data 4 http://www.cs.jhu.edu/\u02dcmdredze/ datasets/sentiment/index2.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.cs.waikato.ac.nz/ml/weka/ 6 http://mallet.cs.umass.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Both values of 0.8 and 2000 were set arbitrarily after an initial run on some held-out data; they were not tuned to optimize test performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported in part by the EC-FP7 projects ROBUST (grant number 257859).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A framework for learning predictive structures from multiple tasks and unlabeled data",
"authors": [
{
"first": "R",
"middle": [
"K"
],
"last": "Ando",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "The Journal of Machine Learning Research",
"volume": "6",
"issue": "",
"pages": "1817--1853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R.K. Ando and T. Zhang. 2005. A framework for learn- ing predictive structures from multiple tasks and un- labeled data. The Journal of Machine Learning Re- search, 6:1817-1853.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Customizing sentiment classifiers to new domains: a case study",
"authors": [
{
"first": "A",
"middle": [],
"last": "Aue",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Aue and M. Gamon. 2005. Customizing sentiment classifiers to new domains: a case study. In Proceed- ings of Recent Advances in Natural Language Process- ing (RANLP).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "J. Mach. Learn. Res",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification",
"authors": [
{
"first": "J",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Blitzer, M. Dredze, and F. Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adap- tation for sentiment classification. In ACL, page 440- 447.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Adaptation of maximum entropy classifier: Little data can help a lot",
"authors": [
{
"first": "C",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Chelba and A. Acero. 2004. Adaptation of maxi- mum entropy classifier: Little data can help a lot. In EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Translated learning: Transfer learning across different feature spaces",
"authors": [
{
"first": "W",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Xue",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "353--360",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Dai, Y. Chen, G.R. Xue, Q. Yang, and Y. Yu. 2008. Translated learning: Transfer learning across different feature spaces. In NIPS, pages 353-360.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Eigentransfer: a unified framework for transfer learning",
"authors": [
{
"first": "W",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "G",
"middle": [
"R"
],
"last": "Xue",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2009,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "193--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Dai, O. Jin, G.R. Xue, Q. Yang, and Y. Yu. 2009. Eigentransfer: a unified framework for transfer learn- ing. In ICML, pages 193-200.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Domain adaptation for statistical classifiers",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "26",
"issue": "1",
"pages": "101--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Daum\u00e9 III and D. Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelli- gence Research, 26(1):101-126.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "H",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "256--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Daum\u00e9. 2007. Frustratingly easy domain adaptation. In ACL, pages 256-263.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Instance weighting for domain adaptation in NLP",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "C",
"middle": [
"X"
],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "264--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Jiang and C.X. Zhai. 2007. Instance weighting for domain adaptation in NLP. In ACL, pages 264-271.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Sentiment classification of movie reviews using contextual valence shifters",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kennedy",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Inkpen",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Intelligence",
"volume": "22",
"issue": "2",
"pages": "110--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Kennedy and D. Inkpen. 2006. Sentiment clas- sification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2):110-125.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Employing personal/impersonal views in supervised and semi-supervised sentiment classification",
"authors": [
{
"first": "S",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "C",
"middle": [
"R"
],
"last": "Huang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "S",
"middle": [
"Y M"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "414--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Li, C.R. Huang, G. Zhou, and S.Y.M. Lee. 2010. Employing personal/impersonal views in supervised and semi-supervised sentiment classification. In ACL, pages 414-423.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Joint sentiment/topic model for sentiment analysis",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 18th ACM international conference on Information and knowledge management (CIKM)",
"volume": "",
"issue": "",
"pages": "375--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Lin and Y. He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM international conference on Information and knowl- edge management (CIKM), pages 375-384.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A Comparative Study of Bayesian Models for Unsupervised Sentiment Detection",
"authors": [
{
"first": "C",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Everson",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 14th Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "144--152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Lin, Y. He, and R. Everson. 2010. A Compara- tive Study of Bayesian Models for Unsupervised Sen- timent Detection. In Proceedings of the 14th Confer- ence on Computational Natural Language Learning (CoNLL), pages 144-152.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Structured models for fine-to-coarse sentiment analysis",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Kerry",
"middle": [],
"last": "Hannan",
"suffix": ""
},
{
"first": "Tyler",
"middle": [],
"last": "Neylon",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Wells",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Reynar",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "432--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In ACL, pages 432- 439.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Estimating a Dirichlet distribution",
"authors": [
{
"first": "T",
"middle": [],
"last": "Minka",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Minka. 2003. Estimating a Dirichlet distribution. Technical report.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cross-domain sentiment classification via spectral feature alignment",
"authors": [
{
"first": "S",
"middle": [
"J"
],
"last": "Pan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "J",
"middle": [
"T"
],
"last": "Sun",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th international conference on World Wide Web (WWW)",
"volume": "",
"issue": "",
"pages": "751--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.J. Pan, X. Ni, J.T. Sun, Q. Yang, and Z. Chen. 2010. Cross-domain sentiment classification via spectral fea- ture alignment. In Proceedings of the 19th interna- tional conference on World Wide Web (WWW), pages 751-760.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "271--278",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental educa- tion: sentiment analysis using subjectivity summariza- tion based on minimum cuts. In ACL, page 271-278.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Thumbs up?: sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using ma- chine learning techniques. In EMNLP, pages 79-86.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Supervised and unsupervised PCFG adaptation to novel domains",
"authors": [
{
"first": "B",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bacchiani",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "126--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Roark and M. Bacchiani. 2003. Supervised and un- supervised PCFG adaptation to novel domains. In NAACL-HLT, pages 126-133.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Predictive Distribution Matching SVM for Multi-domain Learning",
"authors": [
{
"first": "C",
"middle": [
"W"
],
"last": "Seah",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Tsang",
"suffix": ""
},
{
"first": "Y",
"middle": [
"S"
],
"last": "Ong",
"suffix": ""
},
{
"first": "K",
"middle": [
"K"
],
"last": "Lee",
"suffix": ""
}
],
"year": 2010,
"venue": "ECML-PKDD",
"volume": "",
"issue": "",
"pages": "231--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.W. Seah, I. Tsang, Y.S. Ong, and K.K. Lee. 2010. Pre- dictive Distribution Matching SVM for Multi-domain Learning. In ECML-PKDD, pages 231-247.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Using appraisal groups for sentiment analysis",
"authors": [
{
"first": "Casey",
"middle": [],
"last": "Whitelaw",
"suffix": ""
},
{
"first": "Navendu",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Argamon",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACM international conference on Information and Knowledge Management (CIKM)",
"volume": "",
"issue": "",
"pages": "625--631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casey Whitelaw, Navendu Garg, and Shlomo Argamon. 2005. Using appraisal groups for sentiment analysis. In Proceedings of the ACM international conference on Information and Knowledge Management (CIKM), pages 625-631.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Graph ranking for sentiment transfer",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2009,
"venue": "ACL-IJCNLP",
"volume": "",
"issue": "",
"pages": "317--320",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Wu, S. Tan, and X. Cheng. 2009. Graph ranking for sentiment transfer. In ACL-IJCNLP, pages 317-320.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "MIEA: a Mutual Iterative Enhancement Approach for Cross-Domain Sentiment Classification",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Duan",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "1327--1335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Wu, S. Tan, X. Cheng, and M. Duan. 2010. MIEA: a Mutual Iterative Enhancement Approach for Cross- Domain Sentiment Classification. In COLING, page 1327-1335.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatically generating annotator rationales to improve sentiment classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2010,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "336--341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Yessenalina, Y. Choi, and C. Cardie. 2010a. Auto- matically generating annotator rationales to improve sentiment classification. In ACL, pages 336-341.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Multi-Level Structured Models for Document-Level Sentiment Classification",
"authors": [
{
"first": "A",
"middle": [],
"last": "Yessenalina",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yue",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2010,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1046--1056",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Yessenalina, Y. Yue, and C. Cardie. 2010b. Multi- Level Structured Models for Document-Level Senti- ment Classification. In EMNLP, pages 1046-1056.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Adding redundant features for CRFs-based sentence sentiment classification",
"authors": [
{
"first": "Jun",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Gen",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "117--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun Zhao, Kang Liu, and Gen Wang. 2008. Adding re- dundant features for CRFs-based sentence sentiment classification. In EMNLP, pages 117-126.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "JST model and its modified version.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Classification accuracy vs. no. of topics.",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Adapted to Electronics and Kitchen data sets.",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Classification accuracy vs. no. of topics.",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Adapted to Electronics and Kitchen data sets.",
"num": null
},
"FIGREF5": {
"type_str": "figure",
"uris": null,
"text": "Comparison with existing approaches.",
"num": null
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td/><td>Book</td><td>DVD</td><td>Book</td><td colspan=\"4\">Elec. Book Kitch. DVD Elec. DVD</td><td>Kitch.</td><td>Elec.</td><td>Kitch.</td></tr><tr><td/><td colspan=\"8\">recommend funni interest pictur interest qualiti concert sound movi recommend sound</td><td>pleas</td></tr><tr><td>Pos.</td><td>highli easi</td><td colspan=\"7\">cool entertain knowledg paper polit servic favorit bass classic perfect topic clear success easili rock listen stori highli</td><td>excel satisfi</td><td>look worth</td></tr><tr><td/><td>depth</td><td colspan=\"2\">awesom follow</td><td colspan=\"4\">color clearli stainless sing amaz fun</td><td>great</td><td>perform materi</td></tr><tr><td/><td>strong</td><td>worth</td><td>easi</td><td colspan=\"3\">accur popular safe</td><td>talent acoust charact</td><td>qulati</td><td>comfort profession</td></tr><tr><td/><td>mysteri</td><td>cop</td><td colspan=\"6\">abus problem bore return bore poorli horror cabinet tomtom elimin</td></tr><tr><td>Neg.</td><td>fbi investig</td><td colspan=\"6\">shock question poor tediou heavi prison mislead design cheat stick stupid replac scari plot low alien</td><td>break install</td><td>region regardless error cheapli</td></tr><tr><td/><td>death</td><td>escap</td><td>point</td><td>case</td><td colspan=\"3\">crazi defect stori avoid evil</td><td>drop</td><td>code</td><td>plain</td></tr><tr><td/><td>report</td><td>dirti</td><td>disagre</td><td>flaw</td><td>hell</td><td colspan=\"2\">mess terribl crap dead</td><td>gap</td><td>dumb incorrect</td></tr></table>",
"num": null,
"html": null,
"text": "Extracted polarity words by JST on the combined data sets."
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>[YE10]</td><td colspan=\"2\">91.78 82.75 82.85 84.55 87.9</td></tr><tr><td>[LI10]</td><td>-</td><td>79.49 81.65 83.64 85.65</td></tr></table>",
"num": null,
"html": null,
"text": "Method MR MDS Book DVD Elec. Kitch. Baseline 82.53 79.96 81.32 83.61 85.82 LDA 83.76 84.32 85.62 85.4 87.68 JST 94.98 89.95 91.7 88.25 89.85"
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Supervised sentiment classification accuracy."
}
}
}
}