{
"paper_id": "D16-1023",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:36:14.161526Z"
},
"title": "Learning Sentence Embeddings with Auxiliary Tasks for Cross-Domain Sentiment Classification",
"authors": [
{
"first": "Jianfei",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore Management University",
"location": {}
},
"email": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {},
"email": "jingjiang@smu.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we study cross-domain sentiment classification with neural network architectures. We borrow the idea from Structural Correspondence Learning and use two auxiliary tasks to help induce a sentence embedding that supposedly works well across domains for sentiment classification. We also propose to jointly learn this sentence embedding together with the sentiment classifier itself. Experiment results demonstrate that our proposed joint model outperforms several state-of-theart methods on five benchmark datasets.",
"pdf_parse": {
"paper_id": "D16-1023",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we study cross-domain sentiment classification with neural network architectures. We borrow the idea from Structural Correspondence Learning and use two auxiliary tasks to help induce a sentence embedding that supposedly works well across domains for sentiment classification. We also propose to jointly learn this sentence embedding together with the sentiment classifier itself. Experiment results demonstrate that our proposed joint model outperforms several state-of-theart methods on five benchmark datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the growing need of correctly identifying the sentiments expressed in subjective texts such as product reviews, sentiment classification has received continuous attention in the NLP community for over a decade (Pang et al., 2002; Pang and Lee, 2004; Hu and Liu, 2004; Choi and Cardie, 2008; Nakagawa et al., 2010) . One of the big challenges of sentiment classification is how to adapt a sentiment classifier trained on one domain to a different new domain. This is because sentiments are often expressed with domain-specific words and expressions. For example, in the Movie domain, words such as moving and engaging are usually positive, but they may not be relevant in the Restaurant domain. Since labeled data is expensive to obtain, it would be very useful if we could adapt a model trained on a source domain to a target domain.",
"cite_spans": [
{
"start": 215,
"end": 234,
"text": "(Pang et al., 2002;",
"ref_id": "BIBREF23"
},
{
"start": 235,
"end": 254,
"text": "Pang and Lee, 2004;",
"ref_id": "BIBREF21"
},
{
"start": 255,
"end": 272,
"text": "Hu and Liu, 2004;",
"ref_id": "BIBREF12"
},
{
"start": 273,
"end": 295,
"text": "Choi and Cardie, 2008;",
"ref_id": "BIBREF6"
},
{
"start": 296,
"end": 318,
"text": "Nakagawa et al., 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Much work has been done in sentiment analysis to address this domain adaptation problem (Blitzer et al., 2007; Pan et al., 2010; Bollegala et al., 2011; Ponomareva and Thelwall, 2012; Bollegala et al., 2016) . Among them, an appealing method is the Structural Correspondence Learning (SCL) method (Blitzer et al., 2007) , which uses pivot feature prediction tasks to induce a projected feature space that works well for both the source and the target domains. The intuition behind is that these pivot prediction tasks are highly correlated with the original task. For sentiment classification, Blitzer et al. (2007) first chose pivot words which have high mutual information with the sentiment labels, and then set up the pivot prediction tasks to be the predictions of each of these pivot words using the other words.",
"cite_spans": [
{
"start": 88,
"end": 110,
"text": "(Blitzer et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 111,
"end": 128,
"text": "Pan et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 129,
"end": 152,
"text": "Bollegala et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 153,
"end": 183,
"text": "Ponomareva and Thelwall, 2012;",
"ref_id": "BIBREF24"
},
{
"start": 184,
"end": 207,
"text": "Bollegala et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 297,
"end": 319,
"text": "(Blitzer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 594,
"end": 615,
"text": "Blitzer et al. (2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the original SCL method is based on traditional discrete feature representations and linear classifiers. In recent years, with the advances of deep learning in NLP, multi-layer neural network models such as RNNs and CNNs have been widely used in sentiment classification and achieved good performance (Socher et al., 2013; Dong et al., 2014a; Dong et al., 2014b; Kim, 2014; Tang et al., 2015) . In these models, dense, real-valued feature vectors and non-linear classification functions are used. By using real-valued word embeddings pre-trained from a large corpus, these models can take advantage of the embedding space that presumably better captures the syntactic and semantic similarities between words. And by using non-linear functions through multi-layer neural networks, these models represent a more expressive hypothesis space. Therefore, it would be interesting to explore how these neural network models could be extended for cross-domain sentiment classification.",
"cite_spans": [
{
"start": 310,
"end": 331,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 332,
"end": 351,
"text": "Dong et al., 2014a;",
"ref_id": "BIBREF8"
},
{
"start": 352,
"end": 371,
"text": "Dong et al., 2014b;",
"ref_id": "BIBREF9"
},
{
"start": 372,
"end": 382,
"text": "Kim, 2014;",
"ref_id": "BIBREF14"
},
{
"start": 383,
"end": 401,
"text": "Tang et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There has been some recent studies on neural network-based domain adaptation (Glorot et al., 2011; Chen et al., 2012; Yang and Eisenstein, 2014) . They use Stacked Denoising Auto-encoders (SDA) to induce a hidden representation that presumably works well across domains. However, SDA is fully unsupervised and does not consider the end task we need to solve, i.e., the sentiment classification task. In contrast, the idea behind SCL is to use carefullychosen auxiliary tasks that correlate with the end task to induce a hidden representation. Another line of work aims to learn a low dimensional representation for each feature in both domains based on predicting its neighboring features (Yang and Eisenstein, 2015; Bollegala et al., 2015) . Different from these methods, we aim to directly learn sentence embeddings that work well across domains.",
"cite_spans": [
{
"start": 77,
"end": 98,
"text": "(Glorot et al., 2011;",
"ref_id": "BIBREF11"
},
{
"start": 99,
"end": 117,
"text": "Chen et al., 2012;",
"ref_id": "BIBREF5"
},
{
"start": 118,
"end": 144,
"text": "Yang and Eisenstein, 2014)",
"ref_id": "BIBREF28"
},
{
"start": 188,
"end": 193,
"text": "(SDA)",
"ref_id": null
},
{
"start": 689,
"end": 716,
"text": "(Yang and Eisenstein, 2015;",
"ref_id": "BIBREF29"
},
{
"start": 717,
"end": 740,
"text": "Bollegala et al., 2015)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we aim to extend the main idea behind SCL to neural network-based solutions to sentiment classification to address the domain adaptation problem. Specifically, we borrow the idea of using pivot prediction tasks from SCL. But instead of learning thousands of pivot predictors and performing singular value decomposition on the learned weights, which all relies on linear transformations, we introduce only two auxiliary binary prediction tasks and directly learn a non-linear transformation that maps an input to a dense embedding vector. Moreover, different from SCL and the auto-encoderbased methods, in which the hidden feature representation and the final classifier are learned sequentially, we propose to jointly learn the hidden feature representation together with the sentiment classification model itself, and we show that joint learning works better than sequential learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We conduct experiments on a number of different source and target domains for sentence-level sentiment classification. We show that our proposed method is able to achieve the best performance compared with a number of baselines for most of these domain pairs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Domain Adaptation: Domain adaptation is a general problem in NLP and has been well studied in recent years (Blitzer et al., 2006; Daum\u00e9 III, 2007; Jiang and Zhai, 2007; Dredze and Crammer, 2008; Titov, 2011; Yu and Jiang, 2015) . For sentiment classification, most existing domain adaptation methods are based on traditional discrete feature representations and linear classifiers. One line of work focuses on inducing a general lowdimensional cross-domain representation based on the co-occurrences of domain-specific and domainindependent features (Blitzer et al., 2007; Pan et al., 2010; Pan et al., 2011) . Another line of work tries to derive domain-specific sentiment words (Bollegala et al., 2011; Li et al., 2012) . Our proposed method is similar to the first line of work in that we also aim to learn a general, cross-domain representation (sentence embeddings in our case).",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "(Blitzer et al., 2006;",
"ref_id": "BIBREF0"
},
{
"start": 130,
"end": 146,
"text": "Daum\u00e9 III, 2007;",
"ref_id": "BIBREF7"
},
{
"start": 147,
"end": 168,
"text": "Jiang and Zhai, 2007;",
"ref_id": "BIBREF13"
},
{
"start": 169,
"end": 194,
"text": "Dredze and Crammer, 2008;",
"ref_id": "BIBREF10"
},
{
"start": 195,
"end": 207,
"text": "Titov, 2011;",
"ref_id": "BIBREF27"
},
{
"start": 208,
"end": 227,
"text": "Yu and Jiang, 2015)",
"ref_id": "BIBREF30"
},
{
"start": 550,
"end": 572,
"text": "(Blitzer et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 573,
"end": 590,
"text": "Pan et al., 2010;",
"ref_id": "BIBREF19"
},
{
"start": 591,
"end": 608,
"text": "Pan et al., 2011)",
"ref_id": "BIBREF20"
},
{
"start": 680,
"end": 704,
"text": "(Bollegala et al., 2011;",
"ref_id": "BIBREF2"
},
{
"start": 705,
"end": 721,
"text": "Li et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A recent trend of deep learning enhances various kinds of neural network models for sentiment classification, including Convolutional Neural Networks (CNNs), Recursive Neural Network (ReNNs) and Recurrent Neural Network (RNNs), which have been shown to achieve competitive results across different benchmarks (Socher et al., 2013; Dong et al., 2014a; Dong et al., 2014b; Kim, 2014; Tang et al., 2015 ). Inspired by their success in standard indomain settings, it is intuitive for us to apply these neural network models to domain adaptation settings. Denoising Auto-encoders for Domain Adaptation: Denoising Auto-encoders have been extensively studied in cross-domain sentiment classification, since the representations learned through multilayer neural networks are robust against noise during domain adaptation. The initial application of this idea is to directly employ stacked denoising autoencoders (SDA) by reconstructing the original features from data that are corrupted with noise (Glorot et al., 2011), and Chen et al. (2012) proposed to analytically marginalize out the corruption during SDA training. Later Yang and Eisenstein (2014) further showed that their proposed structured dropout noise strategy can dramatically improve the efficiency without sacrificing the accuracy. However, these methods are still based on traditional discrete representation and do not exploit the idea of using auxiliary tasks that are related to the end task. In contrast, the sentence embeddings learned from our method are derived from real-valued feature vectors and rely on related auxiliary tasks.",
"cite_spans": [
{
"start": 309,
"end": 330,
"text": "(Socher et al., 2013;",
"ref_id": "BIBREF25"
},
{
"start": 331,
"end": 350,
"text": "Dong et al., 2014a;",
"ref_id": "BIBREF8"
},
{
"start": 351,
"end": 370,
"text": "Dong et al., 2014b;",
"ref_id": "BIBREF9"
},
{
"start": 371,
"end": 381,
"text": "Kim, 2014;",
"ref_id": "BIBREF14"
},
{
"start": 382,
"end": 399,
"text": "Tang et al., 2015",
"ref_id": "BIBREF26"
},
{
"start": 1017,
"end": 1035,
"text": "Chen et al. (2012)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Networks for Sentiment Classification:",
"sec_num": null
},
{
"text": "In this section we present our sentence embeddingbased domain adaptation method for sentiment classification. We first introduce the necessary notation and an overview of our method. we then delve into the details of the method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We assume that each input is a piece of text consisting of a sequence of words. For the rest of this paper, we assume each input is a sentence, although our method is general enough for longer pieces of text. Let x = (x 1 , x 2 , . . .) denote a sentence where each",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": "x i \u2208 {1, 2, . . . , V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": "} is a word in the vocabulary and V is the vocabulary size. Let the sentiment label of x be y \u2208 {+, \u2212} where + denotes a positive sentiment and \u2212 a negative sentiment. We further assume that we are given a set of labeled training sentences from a source domain, denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": "D s = {(x s i , y s i )} N s i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": ". Also, we have a set of unlabeled sentences from a target domain, denoted by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": "D t = {x t i } N t i=1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": "Our goal is to learn a good sentiment classifier from both D s and D t such that the classifier works well on the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
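To make the notation concrete, here is a minimal Python sketch of the two data sets (the variable names and toy values are ours, purely for illustration):

```python
# Minimal sketch of the data setup (toy values; word ids index the vocabulary).
V = 100                                        # vocabulary size
D_s = [([3, 17, 42], '+'), ([8, 5, 99], '-')]  # labeled source sentences (x, y)
D_t = [[7, 23, 4], [66, 2, 31, 9]]             # unlabeled target sentences x
```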
{
"text": "A baseline solution without considering any domain difference is to simply train a classifier using D s , and with the recent advances in neural networkbased methods to sentence classification, we consider a baseline that uses a multi-layer neural network such as a CNN or an RNN to perform the classification task. To simplify the discussion and focus on the domain adaptation ideas we propose, we will leave the details of the neural network model we use in Section 3.5. For now, we assume that a multilayer neural network is used to transform each input x into a sentence embedding vector z. Let us use f \u0398 to denote the transformation function parameterized by \u0398, that is, z = f \u0398 (x). Next, we assume that a linear classifier such as a softmax classifier is learned to map z to a sentiment label y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
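As a sketch of this baseline (a hedged PyTorch-style rendering; `encoder` stands in for the multi-layer network f_\u0398, whose CNN instantiation is described in Section 3.5):

```python
import torch.nn as nn

# Hypothetical sketch of the non-adaptive baseline: z = f_Theta(x), then a
# linear (softmax) classifier maps z to a sentiment label.
class BaselineSentimentClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden_dim: int = 100, num_classes: int = 2):
        super().__init__()
        self.encoder = encoder                 # f_Theta: word ids -> sentence embedding z
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)                    # sentence embedding
        return self.classifier(z)              # logits; softmax is applied inside the loss
```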
{
"text": "We introduce two auxiliary tasks which presumably are highly correlated with the sentiment classification task itself. Labels for these auxiliary tasks can be automatically derived from unlabeled data in both the source and the target domains. With the help of the two auxiliary tasks, we learn a non-linear transformation function f \u0398 from unlabeled data and use it to derive a sentence embedding vector z from sentence x, which supposedly works better across domains. Finally we use the source domain's training data to learn a linear classifier on the representation z \u2295 z , where \u2295 is the operator that concatenates two vectors. Figure 1 gives the outline of our method.",
"cite_spans": [],
"ref_spans": [
{
"start": 633,
"end": 641,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Notation and Method Overview",
"sec_num": "3.1"
},
{
"text": "Our two auxiliary tasks are about whether an input sentence contains a positive or negative domainindependent sentiment word. The intuition is the following. If we have a list of domain-independent positive sentiment words, then an input sentence that contains one of these words, regardless of the domain the sentence is from, is more likely to contain an overall positive sentiment. For example, a sentence containing the word good is likely to be overall positive. Moreover, the rest of the sentence excluding the word good may contain domain-specific words or expressions that also convey a positive sentiment. For example, in the sentence \"The laptop is good and goes really fast,\" we can see that the word fast is a domain-specific sentiment word, and its sentiment polarity correlates with that of the word good, which is domain-independent. Therefore, we can hide the domain-independent positive words in a sentence and try to use the other words in the sentence to predict whether the original sentence contains a domain-independent positive word. There are two things to note about this auxiliary task: (1) The label of the task can be automatically derived provided that we have the domain-independent positive word list. (2) The task is closely related to the original task of sentence-level sentiment classification. Similarly, we can introduce a task to predict the existence of a domain-independent negative sentiment word in a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Auxiliary Tasks",
"sec_num": "3.2"
},
{
"text": "Formally, let us assume that we have two domainindependent sentiment word lists, one for the positive sentiment and the other for the negative sentiment. Details of how these lists are obtained will be given in Section 3.5. Borrowing the term from SCL, we refer to these sentiment words as pivot words. For each sentence x, we replace all the occurrences of these pivot words with a special token UNK. Let g(\u2022) be a function that denotes this procedure, that is, g(x) is the resulting sentence with UNK tokens. We then introduce two binary labels for g(x). The first label u indicates whether the original sentence x contains at least one domainindependent positive sentiment word, and the second label v indicates whether x contains at least one domain-independent negative sentiment word. Figure 1 shows an example sentence x, its modified version g(x) and the labels u and v for x. We further use",
"cite_spans": [],
"ref_spans": [
{
"start": 791,
"end": 799,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Auxiliary Tasks",
"sec_num": "3.2"
},
{
"text": "D a = {(x i , u i , v i )} N a i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Auxiliary Tasks",
"sec_num": "3.2"
},
{
"text": "to denote a set of training sentences for the auxiliary tasks. Note that the sentences in D a can be from the sentences in D s and D t , but they can also be from other unlabeled sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Auxiliary Tasks",
"sec_num": "3.2"
},
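A minimal sketch of how g(\u00b7) and the auxiliary labels u and v can be derived from the pivot lists (the helper name is ours; the UNK convention follows the paper):

```python
UNK = "UNK"

def make_auxiliary_example(words, positive_pivots, negative_pivots):
    """Replace pivot words with UNK and derive the two binary labels.

    words: list of tokens in the sentence x.
    Returns (g_x, u, v): the masked sentence g(x), u = 1 iff x contained a
    domain-independent positive pivot, v = 1 iff it contained a negative one.
    """
    u = any(w in positive_pivots for w in words)
    v = any(w in negative_pivots for w in words)
    g_x = [UNK if (w in positive_pivots or w in negative_pivots) else w
           for w in words]
    return g_x, int(u), int(v)

# Example: "the laptop is good and goes really fast"
g_x, u, v = make_auxiliary_example(
    "the laptop is good and goes really fast".split(),
    positive_pivots={"good", "great"}, negative_pivots={"bad", "poor"})
# g_x == ['the', 'laptop', 'is', 'UNK', 'and', 'goes', 'really', 'fast'], u == 1, v == 0
```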
{
"text": "With the two auxiliary tasks, we can learn a neural network model in a standard way to produce sentence embeddings that work well for the auxiliary tasks. Specifically, we still use \u0398 to denote the parameters of the neural network that produces the sentence embeddings (and f \u0398 the corresponding transformation function), and we use \u03b2 + and \u03b2 \u2212 to denote the parameters of two softmax classifiers for the two auxiliary tasks, respectively. Using crossentropy loss, we can learn \u0398 by minimizing the following loss function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "J(\u0398 , \u03b2 + , \u03b2 \u2212 ) = \u2212 (x,u,v)\u2208D a log p(u|f \u0398 (g(x)); \u03b2 + ) + log p(v|f \u0398 (g(x)); \u03b2 \u2212 ) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "where p(y|z; \u03b2) is the probability of label y given vector z and parameter \u03b2 under softmax regression.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Domain Adaptation",
"sec_num": "3.3"
},
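In a PyTorch-style sketch (our naming: `aux_encoder` plays the role of f_\u0398', and the two linear heads stand in for \u03b2_+ and \u03b2_\u2212), this loss could be computed per batch as:

```python
import torch.nn.functional as F

def auxiliary_loss(aux_encoder, head_pos, head_neg, g_x, u, v):
    """Cross-entropy loss of the two auxiliary tasks for one batch from D_a.

    g_x: batch of masked sentences g(x); u, v: 0/1 label tensors (dtype long).
    head_pos / head_neg: nn.Linear softmax heads standing in for beta_+ / beta_-.
    """
    z_prime = aux_encoder(g_x)                        # z' = f_Theta'(g(x))
    loss_u = F.cross_entropy(head_pos(z_prime), u)    # -log p(u | z'; beta_+)
    loss_v = F.cross_entropy(head_neg(z_prime), v)    # -log p(v | z'; beta_-)
    return loss_u + loss_v
```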
{
"text": "With the learned \u0398 , we can derive a sentence embedding z from any sentence. Although we could simply use this embedding z for sentiment classification through another softmax classifier, this may not be ideal because z is transformed from g(x), which has the domain-independent sentiment words removed. Similar to SCL and some other previous work, we concatenate the embedding vector z with the standard embedding vector z for the final classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Embeddings for Domain Adaptation",
"sec_num": "3.3"
},
{
"text": "Although we can learn \u0398 using D a as a first step, here we also explore a joint learning setting. In this setting, \u0398 is learned together with the neural network model used for the end task, i.e., sentiment classification. This way, the learning of \u0398 depends not only on D a but also on D s , i.e., the sentimentlabeled training data from the source domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "Specifically, we use \u0398 to denote the parameters for a neural network that takes the original sentence x and transforms it to a sentence embedding (and f \u0398 the corresponding transformation function). We use \u03b3 to denote the parameters of a softmax classifier that operates on the concatenated sentence embedding z \u2295 z for sentiment classification. With joint learning, we try to minimize the following loss func-tion:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "J(\u0398, \u0398 , \u03b3, \u03b2 + , \u03b2 \u2212 ) = \u2212 (x,y)\u2208D s log p(y|f \u0398 (x) \u2295 f \u0398 (g(x)); \u03b3) \u2212 (x,u,v)\u2208D a log p(u|f \u0398 (g(x)); \u03b2 + ) + log p(v|f \u0398 (g(x)); \u03b2 \u2212 ) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "We can see that this loss function contains two parts. The first part is the cross-entropy loss based on the true sentiment labels of the sentences in D s . The second part is the loss based on the auxiliary tasks and the data D a , which are derived from unlabeled sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
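A sketch of the joint objective under the same assumed components (adding `encoder` for f_\u0398 and `sentiment_head` for \u03b3); how the D_s and D_a mini-batches are scheduled is our simplification:

```python
import torch
import torch.nn.functional as F

def joint_loss(encoder, aux_encoder, sentiment_head, head_pos, head_neg,
               x, y, g_x_s,       # a batch from D_s (g_x_s = g(x) for those sentences)
               g_x_a, u, v):      # a batch from D_a
    # Supervised part: softmax over the concatenated embedding z (+) z'.
    z = encoder(x)
    z_prime = aux_encoder(g_x_s)
    sup = F.cross_entropy(sentiment_head(torch.cat([z, z_prime], dim=-1)), y)
    # Auxiliary part on (automatically labeled) unlabeled data.
    z_prime_a = aux_encoder(g_x_a)
    aux = (F.cross_entropy(head_pos(z_prime_a), u)
           + F.cross_entropy(head_neg(z_prime_a), v))
    return sup + aux
```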
{
"text": "Finally, to make a prediction on a sentence, we use the learned \u0398 and \u0398 to derive a sentence embedding f \u0398 (x) \u2295 f \u0398 (g(x)), and then use the softmax classifier parameterized by the learned \u03b3 to make the final prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Learning",
"sec_num": "3.4"
},
{
"text": "In this section we explain some of the model details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "3.5"
},
{
"text": "Recall that the two auxiliary tasks depend on two domain-independent sentiment word lists, i.e., pivot word lists. Different from Blitzer et al. (2007) , we employ weighted log-likelihood ratio (WLLR) to select the most positive and negative words in both domains as pivots. The reason is that in our preliminary experiments we observe that mutual information (used by Blitzer et al. (2007) ) is biased towards low frequency words. Some high frequency words including good and great are scored low. In comparison, WLLR does not have this issue. The same observation was also reported previously by Li et al. (2009) .",
"cite_spans": [
{
"start": 130,
"end": 151,
"text": "Blitzer et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 369,
"end": 390,
"text": "Blitzer et al. (2007)",
"ref_id": "BIBREF1"
},
{
"start": 598,
"end": 614,
"text": "Li et al. (2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Selection",
"sec_num": null
},
{
"text": "More specifically, we first tokenize the sentences in D s and D t and perform part-of-speech tagging using the NLTK toolkit. Next, we extract only adjectives, adverbs and verbs with a frequency of at least 3 in the source domain and at least 3 in the target domain. We also remove negation words such as not and stop words using a stop word list. We then measure each remaining candidate word's relevance to the positive and the negative classes based on D s by computing the following scores:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Selection",
"sec_num": null
},
{
"text": "r(w, y) =p(w|y) logp (w|y) p(w|\u0233) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Selection",
"sec_num": null
},
{
"text": "where w is a word, y \u2208 {+, \u2212} is a sentiment label, y is the opposite label of y, andp(w|y) is the empirical probability of observing w in sentences labeled with y. We can then rank the candidate words in decreasing order of r(w, +) and r(w, \u2212). Finally, we select the top 25% from each ranked list as the final lists of pivot words for the positive and the negative sentiments. Some manual inspection shows that most of these words are indeed domain-independent sentiment words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pivot Word Selection",
"sec_num": null
},
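A sketch of this WLLR-based pivot selection (the add-one smoothing and the exact shape of `candidates` are our assumptions; the 25% cut-off follows the paper):

```python
import math
from collections import Counter

def wllr_pivots(sentences, labels, candidates, top_fraction=0.25):
    """Rank candidate words by r(w, y) = p(w|y) * log(p(w|y) / p(w|y_bar)).

    sentences: token lists from D_s; labels: '+'/'-' per sentence.
    candidates: adjectives/adverbs/verbs frequent in both domains (per the paper).
    Returns (positive_pivots, negative_pivots).
    """
    cand = set(candidates)
    counts = {'+': Counter(), '-': Counter()}
    totals = {'+': 0, '-': 0}
    for words, y in zip(sentences, labels):
        counts[y].update(set(words) & cand)   # sentence-level occurrence counts
        totals[y] += 1

    def r(w, y):
        y_bar = '-' if y == '+' else '+'
        p = (counts[y][w] + 1) / (totals[y] + 2)          # add-one smoothing (our choice)
        p_bar = (counts[y_bar][w] + 1) / (totals[y_bar] + 2)
        return p * math.log(p / p_bar)

    k = max(1, int(top_fraction * len(cand)))
    pos = sorted(cand, key=lambda w: r(w, '+'), reverse=True)[:k]
    neg = sorted(cand, key=lambda w: r(w, '-'), reverse=True)[:k]
    return set(pos), set(neg)
```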
{
"text": "Our framework is general and potentially we can use any neural network model to transform an input sentence to a sentence embedding vector. In this paper, we adopt a CNN-based approach because it has been shown to work well for sentiment classification. Specifically, each word (including the token UNK) is represented by a word embedding vector. Let W \u2208 R d\u00d7V denote the lookup table for words, where each column is a d-dimensional embedding vector for a word type. Two separate CNNs are used to process x and g(x), and their mechanisms are the same. For a word x i in each CNN, the embedding vectors inside a window of size n centered at i are concatenated into a new vector, which we refer to as e i \u2208 R nd . A convolution operation is then performed by applying a filter F \u2208 R h\u00d7nd on e i to produce a hidden vector h i = m(Fe i + b), where b \u2208 R h is a bias vector and m is an elementwise non-linear transformation function. Note that we pad the original sequence in front and at the back to ensure that at each position i we have n vectors to be combined into h i . After the convolution operation is applied to the whole sequence, we obtain H = [h 1 , h 2 , . . .], and we apply a max-over-time pooling operator to take the maximum value of each row of H to obtain an overall hidden vector, i.e., z for x and z for g(x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Model",
"sec_num": null
},
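A PyTorch-style sketch of this encoder (using Conv1d, which is mathematically equivalent to applying F \u2208 R^{h\u00d7nd} to the concatenated window vectors; d = 300, n = 3, h = 100 follow Section 4.2):

```python
import torch
import torch.nn as nn

class CNNEncoder(nn.Module):
    """Sketch of f_Theta: embed words, convolve over n-grams, max-over-time pool."""
    def __init__(self, vocab_size, d=300, n=3, h=100):
        super().__init__()
        self.lookup = nn.Embedding(vocab_size, d)        # W in R^{d x V}
        # Conv1d with kernel n over d channels reproduces the filter F in R^{h x nd};
        # padding=n//2 pads the sequence in front and at the back as in the paper.
        self.conv = nn.Conv1d(d, h, kernel_size=n, padding=n // 2)

    def forward(self, x):                                # x: (batch, seq_len) word ids
        e = self.lookup(x).transpose(1, 2)               # (batch, d, seq_len)
        H = torch.relu(self.conv(e))                     # h_i = m(F e_i + b), m = ReLU
        return H.max(dim=2).values                       # max-over-time pooling -> z
```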
{
"text": "It is worth noting that the two neural networks corresponding to f \u0398 and f \u0398 share the same word embedding lookup table. This lookup table is initialized with word embeddings from word2vec 1 and is updated during our learning process. Note that the token UNK is initialized as a zero vector and never updated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Network Model",
"sec_num": null
},
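One way to realize a shared lookup table whose UNK row stays a frozen zero vector (the gradient-hook trick is our assumption, not necessarily the authors' implementation):

```python
import torch
import torch.nn as nn

UNK_ID = 0                                   # id of the UNK token (our convention)
shared_lookup = nn.Embedding(10000, 300)     # one table W shared by f_Theta and f_Theta'

with torch.no_grad():
    shared_lookup.weight[UNK_ID].zero_()     # UNK starts as a zero vector
# Zero the UNK row's gradient so that row is never updated during training.
shared_lookup.weight.register_hook(
    lambda g: g.index_fill(0, torch.tensor([UNK_ID]), 0.0))
# Both CNNs would then be built around this same `shared_lookup` module.
```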
{
"text": "Although our method is inspired by SCL, there are a number of major differences: (1) Our method is based on neural network models with continuous, dense feature representations and non-linear transformation functions. SCL is based on discrete, sparse feature vectors and linear transformations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Differences from SCL",
"sec_num": "3.6"
},
{
"text": "(2) Although our pivot word selection is similar to that of SCL, in the end we only use two auxiliary tasks while SCL uses much more pivot prediction tasks. 3We can directly learn the transformation function f \u0398 that produces the hidden representation, while SCL relies on SVD to learn the projection function. 4We perform joint learning of the auxiliary tasks and the end task, i.e., sentiment classification, while SCL performs the learning in a sequential manner. To evaluate our proposed method, we conduct experiments using five benchmark data sets. The data sets are summarized in Table 1 . Movie1 2 and Movie2 3 are movie reviews labeled by Pang and Lee (2005) and Socher et al. (2013) , respectively. Camera 4 are reviews of digital products such as MP3 players and cameras (Hu and Liu, 2004) . Laptop and Restaurant 5 are laptop and restaurant reviews taken 2 https://www.cs.cornell.edu/people/pabo/ movie-review-data/ 3 http://nlp.stanford.edu/sentiment/ 4 http://www.cs.uic.edu/\u02dcliub/FBS/ sentiment-analysis.html 5 Note that the original data set is for aspect-level sentiment analysis. We remove sentences with opposite polarities towards different aspects, and use the consistent polarity as the sentencelevel sentiment of each remaining sentence.",
"cite_spans": [
{
"start": 648,
"end": 667,
"text": "Pang and Lee (2005)",
"ref_id": "BIBREF22"
},
{
"start": 672,
"end": 692,
"text": "Socher et al. (2013)",
"ref_id": "BIBREF25"
},
{
"start": 782,
"end": 800,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 587,
"end": 594,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Differences from SCL",
"sec_num": "3.6"
},
{
"text": "We consider 18 pairs of data sets where the two data sets come from different domains. 6 For neural network-based methods, we randomly pick 200 sentences from the target domain as the development set for parameter tuning, and the rest of the data from the target domain as the test data.",
"cite_spans": [
{
"start": 87,
"end": 88,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Differences from SCL",
"sec_num": "3.6"
},
{
"text": "We consider the following baselines: Naive is a non-domain-adaptive baseline based on bag-of-word representations. SCL is our implementation of the Structural Correspondence Learning method. We set the number of induced features K to 100 and rescale factor \u03b1 = 5, and we use 1000 pivot words based on our preliminary experiments. mDA is our implementation of marginalized Denoising Auto-encoders (Chen et al., 2012) , one of the state-of-the-art domain adaptation methods, which learns a shared hidden representation by reconstructing pivot features from corrupted inputs. Following Yang and Eisenstein (2014) , we employ the efficient and effective structured dropout noise strategy without any parameter. The top 500 features are chosen as pivots based on our preliminary experiments. NaiveNN is a non-domain-adaptive baseline based on CNN, as described in Section 3.5. Aux-NN is a simple combination of our auxiliary tasks with NaiveNN, which treats the derived label of two auxiliary tasks as two features and then appends them to the hidden representation learned from CNN, followed by a softmax classifier. SCL-NN is a naive combination of SCL with NaiveNN, which appends the induced representation from SCL to the hidden representation learned from CNN, followed by a softmax classifier. mDA-NN is similar to SCL-NN but uses the hidden representation derived from mDA. Sequential is our proposed method without joint learning, which first learns \u0398 based on D a and then learns \u0398 and \u03b3 based on D s with fixed \u0398 . Joint is our proposed joint learning method, that is, we jointly learn \u0398 and \u0398 . For Naive, SCL and mDA, we use LibLinear 7 to train linear classifiers and use its default hyperparameters. In all the tasks, we use unigrams and bigrams with a frequency of at least 4 as features for classification. For the word embeddings, we set the dimension d to 300. For CNN, we set the window size to 3. Also, the size of the hidden representations z and z is set to 100. Following Kim (2014) , the non-linear activation function in CNN is Relu, the mini-batch size is 50, the dropout rate \u03b1 equals 0.5, and the hyperparameter for the l 2 norms is set to be 3. For Naive, SCL and mDA, we do not use the 200 sentences in the development set for tuning parameters. Hence, for fair comparison, we also include settings where the 200 sentences are added to the training set. We denote these settings by ++.",
"cite_spans": [
{
"start": 396,
"end": 415,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 583,
"end": 609,
"text": "Yang and Eisenstein (2014)",
"ref_id": "BIBREF28"
},
{
"start": 1990,
"end": 2000,
"text": "Kim (2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines and Hyperparameters",
"sec_num": "4.2"
},
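For reference, the reported hyperparameters gathered into a single config sketch (the key names are ours):

```python
# Hyperparameters reported in Section 4.2, collected as a config sketch.
CONFIG = {
    "word_dim": 300,              # d, word embedding dimension
    "cnn_window": 3,              # n, convolution window size
    "hidden_size": 100,           # size of z and of z'
    "activation": "relu",
    "mini_batch_size": 50,
    "dropout_rate": 0.5,
    "l2_norm_constraint": 3,
    "pivot_top_fraction": 0.25,   # top 25% of ranked candidates become pivots
    "min_ngram_freq": 4,          # unigram/bigram cutoff for the linear baselines
}
```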
{
"text": "In Table 2 , we report the results of all the methods. It is easy to see that the performance of Naive is very limited, and the incorporation of 200 reviews in the development set (Naive++) brings in 4.3% of improvement on average. SCL++ and mDA++ can further improve the average accuracy respectively by 0.8% and 1.9%, which verifies the usefulness of these two domain adaptation methods. However, we can easily see that the performance of these domain adaptation methods based on discrete, bag-of-word representations is even much lower than the nondomain-adaptive method on continuous representations (NaiveNN). This confirms that it is useful to develop domain adaptation methods based on embedding vectors and neural network models.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Moreover, we can find that the performance of simply appending two features from auxiliary tasks to NaiveNN (i.e., Aux-NN) is quite close to that of NaiveNN on most data set pairs, which shows that it is not ideal for domain adaptation. In addition, although the shared hidden representations derived from SCL and mDA are based on traditional bag-of-word representations, SCL-NN and mDA-NN can still improve the performance of NaiveNN on most data set pairs, which indicates that the derived shared hidden representations by SCL and by mDA can generalize better across domains and are generally useful for domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Finally, it is easy to see that our method with joint learning outperforms SCL-NN on almost all the data set pairs. And in comparison with mDA-NN, our method with joint learning can also outper- form it on most data set pairs, especially when the size of the labeled data in the source domain is relatively large. Furthermore, we can easily observe that for our method, joint learning generally works better than sequential learning. All these observations show the advantage of our joint learning method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In Table 3 , we also show the comparison between mDA-NN and our model under a setting some labeled target data is used. Specifically, we randomly select 100 sentences from the development set and mix them with the training set. We can observe that our method Joint outperforms NaiveNN and mDA-NN by 1.2% and 0.6%, respectively, which further confirms the effectiveness of our model. But, in comparison with the setting where no target data is available, the average improvement of our method over NaiveNN is relatively small.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Hence, to give a deeper analysis, we further show the comparison of Joint and NaiveNN with respect to the number of labeled target data in Figure 2 . Note that for space limitation, we only present the results on MV2 \u2192 RT and MV2 \u2192 CR. Similar trends have been observed on other data set pairs. As we can see from Figure 2 , the difference between the performance of NaiveNN and that of Joint gradually decreases with the increase of the number of labeled target data. This indicates that our joint model is much more effective when no or small number of labeled target data is available.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 314,
"end": 322,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "To obtain a better understanding of our method, we conduct a case study where the source is CR and the target is RT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "For each sentiment polarity, we try to extract the most useful trigrams for the final predictions. Recall that our CNN models use a window size of 3, which corresponds to trigrams. By tracing the final prediction scores back through the neural network, we are able to locate the trigrams which have contributed the most through max-pooling. In Table 4 , we present the most useful trigrams of each polarity extracted by NaiveNN and by the two components of our sequential and joint method. Sequentialoriginal and Joint-original refer to the CNN corresponding to f \u0398 while Sequential-auxiliary and Joint-auxiliary refer to the CNN corresponding to f \u0398 , which is related to the auxiliary tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 344,
"end": 351,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
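A sketch of this tracing under the assumed CNNEncoder from Section 3.5: because max-over-time pooling picks one position per hidden dimension, each dimension of z maps back to a single window (trigram), and the softmax weights indicate which dimensions matter most (function and argument names are ours):

```python
import torch

def most_useful_trigrams(encoder, class_weight_row, x, words, n=3, top_k=5):
    """Trace pooled hidden dimensions back to the trigrams that produced them.

    encoder: the assumed CNNEncoder sketch; class_weight_row: the softmax weight
    row (size h) for the predicted class; x: (1, seq_len) word ids; words: the
    corresponding tokens, padded with '*' at both ends as in the paper.
    """
    e = encoder.lookup(x).transpose(1, 2)                 # (1, d, seq_len)
    H = torch.relu(encoder.conv(e))                       # (1, h, seq_len)
    values, positions = H.max(dim=2)                      # winning position per dimension
    contribution = class_weight_row * values.squeeze(0)   # per-dimension score
    for dim in contribution.topk(top_k).indices.tolist():
        i = positions[0, dim].item()                      # center of the winning window
        yield words[max(0, i - n // 2): i + n // 2 + 1]
```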
{
"text": "In Table 4 , we can easily observe that for NaiveNN, the most important trigrams are domainindependent, which contain some general sentiment words like good, great and disappointing. For our sequential model, the most important trigrams captured by Sequential-original are similar to NaiveNN, but due to the removal of the pivot words in each sentence, the most important trigrams extracted by Sequential-auxiliary are domain-specific, including target-specific sentiment words like oily, friendly and target-specific aspect words like flavor, atmosphere. But since aspect words are irrelevant to our sentiment classification Table 4 : Comparison of the most useful trigrams chosen by our method and by NaiveNN on CR \u2192 RT. Here * denotes a \"padding\", which we added at the beginning and the end of each sentence. The domain-specific sentiment words are in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": null
},
{
"start": 626,
"end": 633,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "task, it might bring in some noise and affect the performance of our sequential model. In contrast to Sequential-auxiliary, Joint-auxiliary is jointly learnt with the sentiment classification task, and it is easy to see that most of its extracted trigrams are target-specific sentiment words. Also, for Jointoriginal, since we share the word embeddings of two components and do not remove any pivot, it is intuitive to see that the extracted trigrams contain both domain-independent and domain-specific sentiment words. These observations agree with our motivations behind the model. Finally, we also sample several sentences from the test dataset, i.e., RT, to get a deeper insight of our joint model. Although NaiveNN and Sequential correctly predict sentiments of the following two sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "1. \"I've also been amazed at all the new additions in the past few years: A new Jazz Bar, the most fantastic Dining Garden, the Best Thin Crust Pizzas, and now a Lasagna Menu which is to die for!\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "2. \"The have a great cocktail with Citrus Vodka and lemon and lime juice and mint leaves that is to die for!\" Both of them give wrong predictions on another three sentences containing to die for:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "3. \"Try their chef's specials-they are to die for.\" 4. \"Their tuna tartar appetizer is to die for.\" 5. \"It's to die for!\". However, since to die for co-occurs with some general sentiment words like fantastic, best and great in previous two sentences, our joint model can implicitly learn that to die for is highly correlated with the positive sentiment via our auxiliary tasks, and ultimately make correct predictions for the latter three sentences. This further indicates that our joint model can identify more domain-specific sentiment words in comparison with NaiveNN and Sequential, and therefore improve the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study",
"sec_num": "4.4"
},
{
"text": "We presented a domain adaptation method for sentiment classification based on sentence embeddings. Our method induces a sentence embedding that works well across domains, based on two auxiliary tasks. We also jointly learn the cross-domain sentence embedding and the sentiment classifier. Experiment results show that our proposed joint method can outperform several highly competitive domain adaptation methods on 18 source-target pairs using five benchmark data sets. Moreover, further analysis confirmed that our method is able to pick up domain-specific sentiment words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "https://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because Movie1 and Movie2 come from the same domain, we do not take this pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.csie.ntu.edu.tw/cjlin/ liblinear/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research is supported by the Singapore National Research Foundation under its International Research Centre@Singapore Funding Initiative and administered by the IDM Programme Office, Media Development Authority (MDA).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgment",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Domain adaptation with structural correspondence learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "120--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proceedings of the 2006 Con- ference on Empirical Methods in Natural Language Processing, pages 120-128. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Asso- ciation of Computational Linguistics, pages 440-447, Prague, Czech Republic, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "132--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, David Weir, and John Carroll. 2011. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment clas- sification. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 132- 141. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised cross-domain word representation learning",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Takanori",
"middle": [],
"last": "Maehara",
"suffix": ""
},
{
"first": "Ken-Ichi",
"middle": [],
"last": "Kawarabayashi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "730--740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Takanori Maehara, and Ken-ichi Kawarabayashi. 2015. Unsupervised cross-domain word representation learning. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Con- ference on Natural Language Processing (Volume 1: Long Papers), pages 730-740, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Cross-domain sentiment classification using sentiment sensitive embeddings",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Tingting",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Goulermas",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Knowledge & Data Engineering",
"volume": "6",
"issue": "2",
"pages": "398--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Tingting Mu, and John Goulermas. 2016. Cross-domain sentiment classification using sentiment sensitive embeddings. IEEE Transactions on Knowledge & Data Engineering, 6(2):398-410.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Marginalized denoising autoencoders for domain adaptation",
"authors": [
{
"first": "Minmin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhixiang",
"middle": [
"Eddie"
],
"last": "Xu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 29th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minmin Chen, Zhixiang Eddie Xu, Kilian Q. Weinberger, and Fei Sha. 2012. Marginalized denoising autoen- coders for domain adaptation. In Proceedings of the 29th International Conference on Machine Learning.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning with compositional semantics as structural inference for subsentential sentiment analysis",
"authors": [
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "793--801",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yejin Choi and Claire Cardie. 2008. Learning with com- positional semantics as structural inference for subsen- tential sentiment analysis. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 793-801. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Frustratingly easy domain adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "256--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adapta- tion. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256- 263.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Adaptive recursive neural network for target-dependent twitter sentiment classification",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "49--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014a. Adaptive recursive neural network for target-dependent twitter sentiment classi- fication. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 49-54, Baltimore, Mary- land, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adaptive multi-compositionality for recursive neural models with applications to sentiment analysis",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2014,
"venue": "Twenty-Eighth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Furu Wei, Ming Zhou, and Ke Xu. 2014b. Adaptive multi-compositionality for recursive neural models with applications to sentiment analysis. In Twenty-Eighth AAAI Conference on Artificial Intelli- gence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Online methods for multi-domain learning and adaptation",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "689--697",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Dredze and Koby Crammer. 2008. Online meth- ods for multi-domain learning and adaptation. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing, pages 689-697.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Domain adaptation for large-scale sentiment classification: A deep learning approach",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Twenty-eight International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In In Pro- ceedings of the Twenty-eight International Conference on Machine Learning.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168-177. ACM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Instance weighting for domain adaptation in nlp",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "264--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Jiang and ChengXiang Zhai. 2007. Instance weight- ing for domain adaptation in nlp. In Proceedings of the 45th Annual Meeting of the Association of Compu- tational Linguistics, pages 264-271.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sen- tence classification. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751. Association for Computational Linguistics, October.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A framework of feature selection methods for text categorization",
"authors": [
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "2",
"issue": "",
"pages": "692--700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shoushan Li, Rui Xia, Chengqing Zong, and Chu-Ren Huang. 2009. A framework of feature selection meth- ods for text categorization. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 692-700. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Cross-domain co-extraction of sentiment and topic lexicons",
"authors": [
{
"first": "Fangtao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sinno",
"middle": [
"Jialin"
],
"last": "Pan",
"suffix": ""
},
{
"first": "Ou",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xi- aoyan Zhu. 2012. Cross-domain co-extraction of sen- timent and topic lexicons. In Proceedings of the 50th",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Annual Meeting of the Association for Computational Linguistics: Long Papers",
"authors": [],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "410--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 410-419. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dependency tree-based sentiment classification using crfs with hidden variables",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "786--794",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Nakagawa, Kentaro Inui, and Sadao Kurohashi. 2010. Dependency tree-based sentiment classification using crfs with hidden variables. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Com- putational Linguistics, pages 786-794. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cross-domain sentiment classification via spectral feature alignment",
"authors": [
{
"first": "Sinno",
"middle": [
"Jialin"
],
"last": "Pan",
"suffix": ""
},
{
"first": "Xiaochuan",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Jian-Tao",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th international conference on World wide web",
"volume": "",
"issue": "",
"pages": "751--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain senti- ment classification via spectral feature alignment. In Proceedings of the 19th international conference on World wide web, pages 751-760. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Domain adaptation via transfer component analysis",
"authors": [
{
"first": "Sinno",
"middle": [
"Jialin"
],
"last": "Pan",
"suffix": ""
},
{
"first": "Ivor",
"middle": [
"W"
],
"last": "Tsang",
"suffix": ""
},
{
"first": "James",
"middle": [
"T"
],
"last": "Kwok",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE Transactions on",
"volume": "22",
"issue": "2",
"pages": "199--210",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang. 2011. Domain adaptation via transfer component analysis. Neural Networks, IEEE Transac- tions on, 22(2):199-210.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd annual meeting on Association for Computational Linguistics, page 271. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd annual meeting on Association for Computational Lin- guistics, page 271. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "115--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang and Lillian Lee. 2005. Seeing stars: Ex- ploiting class relationships for sentiment categoriza- tion with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Compu- tational Linguistics, pages 115-124. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Thumbs up?: sentiment classification using machine learning techniques",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Shivakumar",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using ma- chine learning techniques. In Proceedings of the ACL- 02 conference on Empirical methods in natural lan- guage processing-Volume 10, pages 79-86. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Do neighbours help?: an exploration of graph-based algorithms for cross-domain sentiment classification",
"authors": [
{
"first": "Natalia",
"middle": [],
"last": "Ponomareva",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "655--665",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Natalia Ponomareva and Mike Thelwall. 2012. Do neighbours help?: an exploration of graph-based al- gorithms for cross-domain sentiment classification. In Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, pages 655-665. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Recursive deep models for semantic compositionality over a sentiment treebank",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Perelygin",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1631--1642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Pro- ceedings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing, pages 1631- 1642, Seattle, Washington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Document modeling with gated recurrent neural network for sentiment classification",
"authors": [
{
"first": "Duyu",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1422--1432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duyu Tang, Bing Qin, and Ting Liu. 2015. Docu- ment modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1422-1432, Lisbon, Por- tugal, September. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Domain adaptation by constraining inter-domain variability of latent feature representation",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "62--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Titov. 2011. Domain adaptation by constraining inter-domain variability of latent feature representa- tion. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 62-71.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fast easy unsupervised domain adaptation with marginalized structured dropout",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "538--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang and Jacob Eisenstein. 2014. Fast easy unsuper- vised domain adaptation with marginalized structured dropout. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 538-544.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Unsupervised multi-domain adaptation with feature embeddings",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "672--682",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang and Jacob Eisenstein. 2015. Unsupervised multi-domain adaptation with feature embeddings. In Proceedings of the North American Chapter of the As- sociation for Computational Linguistics, pages 672- 682.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A hassle-free unsupervised domain adaptation method using instance similarity features",
"authors": [
{
"first": "Jianfei",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "168--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfei Yu and Jing Jiang. 2015. A hassle-free unsuper- vised domain adaptation method using instance sim- ilarity features. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguis- tics and the 7th International Joint Conference on Nat- ural Language Processing (Volume 2: Short Papers), pages 168-173, Beijing, China, July. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "The Outline of our Proposed Method.",
"uris": null
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"text": "The influence of the number of labeled target data.",
"uris": null
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td/><td>Auxiliary Tasks</td></tr><tr><td>CNN/RNN</td><td>CNN/RNN</td></tr><tr><td>Shared</td><td/></tr><tr><td>Lookup</td><td/></tr><tr><td>Table</td><td/></tr><tr><td>Original Sentence</td><td>New Sentence without Pivots</td></tr></table>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "Statistics of our data sets.",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"text": "TaskMethodSource Target Naive Naive++ SCL++ mDA++ NaiveNN Aux-NN SCL-NN mDA-NN Sequential Joint",
"content": "<table><tr><td>MV1</td><td>LT</td><td>0.656</td><td>0.739</td><td>0.742</td><td>0.742</td><td>0.773</td><td>0.779</td><td>0.776</td><td>0.780</td><td>0.774</td><td>0.804 *</td></tr><tr><td>MV1</td><td>RT</td><td>0.625</td><td>0.742</td><td>0.750</td><td>0.761</td><td>0.802</td><td>0.794</td><td>0.817</td><td>0.819</td><td>0.814</td><td>0.825 *</td></tr><tr><td>MV1</td><td>CR</td><td>0.609</td><td>0.684</td><td>0.688</td><td>0.688</td><td>0.721</td><td>0.717</td><td>0.734</td><td>0.730</td><td>0.717</td><td>0.747 *</td></tr><tr><td>MV2</td><td>LT</td><td>0.699</td><td>0.760</td><td>0.765</td><td>0.772</td><td>0.805</td><td>0.811</td><td>0.800</td><td>0.811</td><td>0.808</td><td>0.827 *</td></tr><tr><td>MV2</td><td>RT</td><td>0.696</td><td>0.761</td><td>0.768</td><td>0.778</td><td>0.813</td><td>0.819</td><td>0.824</td><td>0.825</td><td>0.833</td><td>0.840 *</td></tr><tr><td>MV2</td><td>CR</td><td>0.644</td><td>0.697</td><td>0.705</td><td>0.706</td><td>0.738</td><td>0.732</td><td>0.736</td><td>0.756</td><td>0.745</td><td>0.768 *</td></tr><tr><td>CR</td><td>LT</td><td>0.780</td><td>0.791</td><td>0.802</td><td>0.806</td><td>0.848</td><td>0.848</td><td>0.846</td><td>0.850</td><td>0.856</td><td>0.858 *</td></tr><tr><td>CR</td><td>RT</td><td>0.746</td><td>0.784</td><td>0.782</td><td>0.789</td><td>0.827</td><td>0.835</td><td>0.841</td><td>0.839</td><td>0.835</td><td>0.844 *</td></tr><tr><td>CR</td><td>MV1</td><td>0.593</td><td>0.597</td><td>0.612</td><td>0.612</td><td>0.685</td><td>0.689</td><td>0.689</td><td>0.692</td><td>0.687</td><td>0.696 *</td></tr><tr><td>CR</td><td>MV2</td><td>0.609</td><td>0.629</td><td>0.644</td><td>0.640</td><td>0.735</td><td>0.726</td><td>0.734</td><td>0.731</td><td>0.735</td><td>0.736</td></tr><tr><td>LT</td><td>RT</td><td>0.736</td><td>0.781</td><td>0.800</td><td>0.810</td><td>0.819</td><td>0.820</td><td>0.823</td><td>0.852</td><td>0.841</td><td>0.840</td></tr><tr><td>LT</td><td>MV1</td><td>0.574</td><td>0.601</td><td>0.612</td><td>0.630</td><td>0.711</td><td>0.703</td><td>0.702</td><td>0.709</td><td>0.705</td><td>0.707</td></tr><tr><td>LT</td><td>MV2</td><td>0.588</td><td>0.632</td><td>0.645</td><td>0.663</td><td>0.742</td><td>0.745</td><td>0.739</td><td>0.747</td><td>0.746</td><td>0.747</td></tr><tr><td>LT</td><td>CR</td><td>0.736</td><td>0.762</td><td>0.768</td><td>0.780</td><td>0.791</td><td>0.796</td><td>0.803</td><td>0.819</td><td>0.803</td><td>0.817</td></tr><tr><td>RT</td><td>LT</td><td>0.732</td><td>0.777</td><td>0.777</td><td>0.799</td><td>0.817</td><td>0.822</td><td>0.831</td><td>0.826</td><td>0.828</td><td>0.834 *</td></tr><tr><td>RT</td><td>MV1</td><td>0.580</td><td>0.604</td><td>0.618</td><td>0.643</td><td>0.721</td><td>0.726</td><td>0.724</td><td>0.734</td><td>0.722</td><td>0.724</td></tr><tr><td>RT</td><td>MV2</td><td>0.605</td><td>0.630</td><td>0.633</td><td>0.664</td><td>0.761</td><td>0.762</td><td>0.756</td><td>0.772</td><td>0.757</td><td>0.765</td></tr><tr><td>RT</td><td>CR</td><td>0.689</td><td>0.708</td><td>0.704</td><td>0.732</td><td>0.764</td><td>0.772</td><td>0.759</td><td>0.774</td><td>0.772</td><td>0.779 *</td></tr><tr><td colspan=\"2\">Average</td><td>0.661</td><td>0.704</td><td>0.712</td><td>0.723</td><td>0.770</td><td>0.772</td><td>0.774</td><td>0.781</td><td>0.777</td><td>0.787</td></tr></table>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"html": null,
"text": "Comparison of classification accuracies of different methods.",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"type_str": "table",
"html": null,
"text": "Comparison of our method Joint with NaiveNN and mDA-NN in a setting where some labeled target data is used.",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"html": null,
"text": "disgusting * *, it is not, * * great, good * *,* * best, * i love, NaiveNN slow * *, * too bad, * * terrible, place is not, was very good,* * excellent, unpleasant experience *, would not go, * the only wonderful * *, * * amazing, * * nice disgusting * *, disappointing * *, * * terrible * * great, good * *, * * best, * i love, Sequential-original expensive * *, it is not, unpleasant experience *, * highly recommended, * * excellent, slow * *, * too bad, probably would not, awful * * wonderful * *, is amazing *, is the perfect disgusting * *, never go back, money * *,",
"content": "<table><tr><td>Method</td><td>Negative Sentiment</td><td>Positive Sentiment</td></tr><tr><td/><td colspan=\"2\">disappointing * *, delicious * *, friendly * *, food * *,</td></tr><tr><td colspan=\"2\">Sequential-auxiliary rude * *, flavor * *, * this place, oily * *,</td><td>food is UNK, * highly UNK, fresh * *,</td></tr><tr><td/><td>prices * *, inedible ! *, this place survives</td><td>atmosphere * *, * i highly, nyc * *</td></tr><tr><td/><td>disgusting * *, soggy * *, disappointing * *,</td><td>* * great, good * *, * * best, * i love,</td></tr><tr><td>Joint-original</td><td>* too bad, * would never, it is not, rude * *,</td><td>* * amazing, delicious * *,</td></tr><tr><td/><td>* * terrible, place is not, disappointment * *</td><td>back * *, * i highly, of my favorite</td></tr><tr><td/><td>soggy * *, disgusting * *, rude * *,</td><td>delicious * *, go back *, is always fresh,</td></tr><tr><td>Joint-auxiliary</td><td>disappointment * *, not go back, was not fresh,</td><td>friendly * *, to die for, also very UNK,</td></tr><tr><td/><td>prices * *, inedible ! *, oily * *, overpriced * *</td><td>of my favorite, food * *, * i highly, delicious ! *</td></tr></table>"
}
}
}
}