| { |
| "paper_id": "Y18-1005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:36:41.435629Z" |
| }, |
| "title": "Domain Adaptation for Sentiment Analysis using Keywords in the Target Domain as the Learning Weight", |
| "authors": [ |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Bai", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ibaraki University", |
| "location": { |
| "addrLine": "Sciences 4-12-1 Nakanarusawa", |
| "postCode": "316-8511", |
| "settlement": "Hitachi", |
| "region": "Ibaraki JAPAN" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Hiroyuki", |
| "middle": [], |
| "last": "Shinnou", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ibaraki University", |
| "location": { |
| "addrLine": "Sciences 4-12-1 Nakanarusawa", |
| "postCode": "316-8511", |
| "settlement": "Hitachi", |
| "region": "Ibaraki JAPAN" |
| } |
| }, |
| "email": "hiroyuki.shinnou.0828@vc.ibaraki.ac.jp" |
| }, |
| { |
| "first": "Kanako", |
| "middle": [], |
| "last": "Komiya", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Ibaraki University", |
| "location": { |
| "addrLine": "Sciences 4-12-1 Nakanarusawa", |
| "postCode": "316-8511", |
| "settlement": "Hitachi", |
| "region": "Ibaraki JAPAN" |
| } |
| }, |
| "email": "kanako.komiya.nlp@vc.ibaraki.ac.jp" |
| } |
| ], |
"year": "2018",
| "venue": null, |
| "identifiers": {}, |
"abstract": "This paper proposes a new method of instance-based domain adaptation for sentiment analysis. First, our method defines the likelihood of being a keyword, through the value of the inverse document frequency (IDF), for each word in the documents of the target domain. Next, the keyword content rate of a document is calculated using these keyword likelihoods, and domain adaptation is performed by giving the keyword content rate of each document in the source domain as its weight. An experiment on an Amazon dataset demonstrates the effectiveness of our proposed method. Although the instance-based method alone has not shown great effectiveness, the advantages of combining the instance-based and feature-based methods are shown in this paper.",
| "pdf_parse": { |
| "paper_id": "Y18-1005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "This paper proposes a new method of instance-based domain adaptation for sentiment analysis. First, our method defines the likelihood of being a keyword, through the value of the inverse document frequency (IDF), for each word in the documents of the target domain. Next, the keyword content rate of a document is calculated using these keyword likelihoods, and domain adaptation is performed by giving the keyword content rate of each document in the source domain as its weight. An experiment on an Amazon dataset demonstrates the effectiveness of our proposed method. Although the instance-based method alone has not shown great effectiveness, the advantages of combining the instance-based and feature-based methods are shown in this paper.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "This paper proposes a new method of instance-based domain adaptation for sentiment analysis. Sentiment analysis involves judging the polarity, positive or negative, of a review such as a movie review. This is one of the document classification tasks, and supervised learning can be used to solve it. However, if the domain of the test data is different from the domain of the training data (for example, book reviews), the accuracy of the classifier obtained through standard supervised learning is reduced. This is the problem of domain shift.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The solution to this problem is domain adaptation. Domain adaptation can be roughly divided into two categories: feature-based and instance-based (Pan and Yang, 2010). In summary, both are weighted-learning methods, but feature-based methods give weights to features and instance-based methods give weights to instances.",
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 166, |
| "text": "(Pan and Yang, 2010)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Here, we present a new instance-based method. Generally, an instance-based method assumes a covariate shift and gives each instance a weight based on the probability density ratio between the target domain and the source domain. However, the computational cost of estimating this ratio is high. The method presented here is simple, and its effect is better than that of methods using a typical probability density ratio.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our method first defines l_w, the likelihood that the word w is a keyword, using the IDF in the target domain. Using l_w, the weight of a review x in the source domain is set to its keyword content rate w_x. After that, weighted learning is performed by giving w_x to each document x in the source domain to overcome the domain shift.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In the experiment, we used the Amazon dataset (Blitzer et al., 2007), and compared our proposed method with two typical instance-based methods: unconstrained least squares importance fitting (uLSIF) (Yamada et al., 2011) using the probability density ratio, and the method that defines weights through a Naive Bayes model (Shinnou and Sasaki, 2014), to demonstrate the effectiveness of the proposed method.",
"cite_spans": [
{
"start": 46,
"end": 68,
"text": "(Blitzer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 199,
"end": 220,
"text": "(Yamada et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 322,
"end": 348,
"text": "(Shinnou and Sasaki, 2014)",
"ref_id": "BIBREF7"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Domain adaptation is roughly divided into two types: the supervised approach, which uses labeled data in the target domain, and the unsupervised approach, which does not. For the supervised approach, Daum\u00e9's method (Daum\u00e9 III, 2007) has become a standard method because of its simplicity and high ability.",
"cite_spans": [
{
"start": 215,
"end": 232,
"text": "(Daum\u00e9 III, 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
| }, |
| { |
"text": "The method in the current research is an unsupervised approach. Unsupervised approaches can be further divided into two types: feature-based and instance-based (Pan and Yang, 2010). Both are weighted-learning methods; feature-based methods give weights to features and instance-based methods give weights to instances. Among feature-based methods, the most representative is structural correspondence learning (SCL) (Blitzer et al., 2006). In addition, CORAL has attracted much attention in recent years for its simplicity and high ability. Moreover, feature-based methods with deep learning (Glorot et al., 2011), the extended CORAL, and adversarial networks (Ganin and Lempitsky, 2015) (Tzeng et al., 2017) are also considered the state of the art.",
"cite_spans": [
{
"start": 416,
"end": 438,
"text": "(Blitzer et al., 2006)",
"ref_id": "BIBREF0"
},
{
"start": 592,
"end": 613,
"text": "(Glorot et al., 2011)",
"ref_id": "BIBREF4"
},
{
"start": 660,
"end": 687,
"text": "(Ganin and Lempitsky, 2015)",
"ref_id": "BIBREF3"
},
{
"start": 688,
"end": 708,
"text": "(Tzeng et al., 2017)",
"ref_id": null
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "On the other hand, instance-based methods have not been studied as much as feature-based methods. The instance-based method assumes a covariate shift, that is, P_S(c|x) = P_T(c|x) and P_S(x) \u2260 P_T(x). Under a covariate shift, P_T(c|x) can be obtained by weighted learning that uses the probability density ratio r = P_T(x)/P_S(x) as the weight of each document x in the source data. There are a variety of methods for calculating the probability density ratio. The simplest way is to estimate P_S(x) and P_T(x) directly, but for complex models this becomes difficult. Thus, methods that model the probability density ratio directly have been studied. Among these, uLSIF (Yamada et al., 2011) is widely used because its time complexity is relatively small. However, when the problem is limited to natural language processing, P(x) over bag-of-words features can be modeled by a Naive Bayes model. Therefore, (Shinnou and Sasaki, 2014) defined P_R(x), the probability of x in domain R, as follows:",
"cite_spans": [
{
"start": 670,
"end": 691,
"text": "(Yamada et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 907,
"end": 933,
"text": "(Shinnou and Sasaki, 2014)",
"ref_id": "BIBREF7"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "P_R(x) = \u220f_{i=1}^{n} P_R(f_i)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": ", where x denotes a data point in the domain R and x has a set of features, that is,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "x = {f_1, f_2, \u2026, f_n}.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "They also obtain P_R(f_i) using the following equation:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "P_R(f) = (n(R, f) + 1) / (N(R) + 2)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": ". Here, n(R, f) is the frequency of feature f in the domain R, and N(R) is the number of data in the domain R. Therefore, the probability density ratio is obtained as follows:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "r = P_T(x) / P_S(x) = ((n(T, f) + 1) / (N(T) + 2)) \u00b7 ((N(S) + 2) / (n(S, f) + 1)) (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 Proposed Method", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
"text": "The likelihood of the word w being a keyword in the target domain is l_w, and l_w is set as the value of the IDF of w in the target domain:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood of the Keyword in the Target Domain", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "l_w = log(N / d_w) + 1",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood of the Keyword in the Target Domain", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Here, N is the number of articles in the article collection in the target domain, and d_w is the number of articles in that collection containing the word w.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Likelihood of the Keyword in the Target Domain", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "We set the weight w_x of an instance x in the source domain as follows. The document x consists of the words {w_i}_{i=1}^{K}, and the frequency of the word w_i within x is f_i. Using these, w_x is given by the following equation:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Content of Keywords in the Source Case", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "w_x = (1 / \u2211_{i=1}^{K} f_i) \u2211_{i=1}^{K} f_i \u00b7 l_{w_i}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Content of Keywords in the Source Case", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The Amazon dataset (Blitzer et al., 2007) used in the experiment is the processed_acl.tar.gz file from the following website:",
| "cite_spans": [ |
| { |
| "start": 19, |
| "end": 41, |
| "text": "(Blitzer et al., 2007)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Experiment",
"sec_num": "4"
| }, |
| { |
"text": "https://www.cs.jhu.edu/~mdredze/datasets/sentiment/",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Experiment",
"sec_num": "4"
| }, |
| { |
"text": "The data include books (B), dvd (D), electronics (E), and kitchen (K). The number of files contained in each domain is shown in Table 2. There are 1000 positive data and 1000 negative data in each domain, and these 2000 data are used as the training data for that domain. The learning algorithm is an SVM with scikit-learn. The kernel is linear, the value of the C parameter is fixed at 0.1, and the scikit-learn SVM supports weighted learning 1 , so the scikit-learn SVM is used here. The 12 domain adaptations are listed below. Table 1 shows the results of two methods for determining the probability density ratio, uLSIF (Yamada et al., 2011) and the Naive Bayes method using Equation 1, and the proposed method. NONE in Table 1 means that no domain adaptation method was used: the classifier trained on the training data of the source domain was simply applied to the test data in the target domain. In addition, IDEAL is the result of training the classifier on the training data in the target domain and applying it to the test data in the target domain.",
"cite_spans": [
{
"start": 624,
"end": 645,
"text": "(Yamada et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 530,
"end": 537,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 724,
"end": 731,
"text": "Table 1",
"ref_id": "TABREF1"
}
| ], |
| "eq_spans": [], |
"section": "Experiment",
"sec_num": "4"
| }, |
| { |
"text": "The 12 domain adaptations are: B\u2192D, B\u2192E, B\u2192K, D\u2192B, D\u2192E, D\u2192K, E\u2192B, E\u2192D, E\u2192K, K\u2192B, K\u2192D, and K\u2192E.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Experiment",
"sec_num": "4"
| }, |
| { |
"text": "A comparison of the instance-weighting methods uLSIF, NB, and our method shows that the highest accuracy in six of the 12 domain adaptations is obtained by our method, and the highest accuracy in the remaining six is obtained by NB. Taking the average over the 12 adaptations, the accuracy of our method is higher than that of NB, so our method is an excellent instance-weighting method. 1 http://scikit-learn.org/stable/auto_examples/svm/plot_weighted_samples.html",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Experiment",
"sec_num": "4"
| }, |
| { |
"text": "Comparing NONE in Table 1 with the instance-weighting methods (uLSIF, NB, and our method), it is clear that NONE has a high accuracy. For these data, the instance-based methods alone have no effect on domain adaptation. However, feature-based and instance-based methods are easy to combine. Here, the four domain adaptations B\u2192E, D\u2192B, E\u2192K, and K\u2192D are used; SCL is applied first to transform the feature vectors of the data, and then weighted learning on the transformed vectors is performed using the proposed method. The results are shown in Table 3. The CORAL results in Table 3 are taken from .",
"cite_spans": [],
"ref_spans": [
{
"start": 544,
"end": 551,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 574,
"end": 581,
"text": "Table 3",
"ref_id": "TABREF3"
}
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "From Table 3, it can be seen that SCL alone has no effect. Even after SCL is combined with our method, the accuracy is not high enough. However, when SCL is combined with the proposed method, the precision over SCL alone is improved.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 5, |
| "end": 12, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "AutoEncoder 1 W 2 W T S x x / T S x x / S x NN ' y w y y loss ) , ' ( Learning S x encoded y w label weight", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "Figure 1: AE+NN+Weighted-Learning. The positive effect of combining the instance-based and feature-based methods can be confirmed. There are many ways of using the feature-based method in addition to SCL, so the result can be improved by combining these techniques with the proposed method.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "In addition, although a weighted-learning SVM is used in this paper, weighted learning is easy to realize in a neural network by multiplying the value of the loss function by the instance weight. There are many options for domain adaptation methods based on deep learning, and combining these with the instance-based method is easy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "As a simple example, we used an AutoEncoder (AE) as the feature-based method. Using the AE, the dimension of the data in the source and target domains was reduced, that is, the data were encoded. In learning and testing, we used the concatenation of the original data and the encoded data instead of the original data. In learning, as described above, the value of the loss function was multiplied by the weight obtained by our method and taken as the loss value (Figure 1). Only the experiment for B\u2192E was performed, and the results in Table 4 and Figure 2 were obtained. In addition, in this experiment, neural network learning was stopped at 50 epochs, and the accuracy is the result of evaluating the model obtained after 50 epochs of learning. Moreover, the dimension was reduced to 200. Every method used the same multi-layer perceptron, which has three layers.",
"cite_spans": [],
"ref_spans": [
{
"start": 464,
"end": 472,
"text": "Figure 1",
"ref_id": null
},
{
"start": 537,
"end": 544,
"text": "Table 4",
"ref_id": "TABREF5"
}
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The use of the connected data of the original data and the encoded data is a feature-based method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "Figure 2 shows that this feature-based method NN+AE improves the precision of the standard neural network NN, and combining NN+AE and our method further improves it. This result shows that the combination of the feature-based method and the instance-based method is easy to realize in neural network learning and is effective. In the future, we are planning to design a domain adaptation method in this framework.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "This paper proposed a method for instance-based domain adaptation of sentiment analysis. In outline, the method uses the IDF in the target domain to set the likelihood that each word is a keyword, computes the keyword content rate of each document in the source domain, and uses the keyword content rate as the learning weight. In the experiment, we compared our proposed method with two typical instance-based methods: uLSIF, which uses the probability density ratio, and the method that defines weights through a Naive Bayes model. Although using an instance-based method alone has a very small effect on domain adaptation, the advantage of combining the instance-based and feature-based methods is confirmed in this paper. Further, the combination is easy to implement in a neural network model. Thus, we will investigate this approach in future work.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "32nd Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Domain adaptation with structural correspondence learning", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "EMNLP-2006", |
| "volume": "", |
| "issue": "", |
| "pages": "120--128", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In EMNLP-2006, pages 120-128.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Biographies, Bollywood, Boom-boxes and Blenders: Domain adaptation for Sentiment Classification", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACL-2007", |
| "volume": "", |
| "issue": "", |
| "pages": "440--447", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain adaptation for Sentiment Classification. In ACL-2007, pages 440-447.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Frustratingly easy domain adaptation", |
| "authors": [ |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACL-2007", |
| "volume": "", |
| "issue": "", |
| "pages": "256--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adapta- tion. In ACL-2007, pages 256-263.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Unsupervised domain adaptation by backpropagation", |
| "authors": [ |
| { |
| "first": "Yaroslav", |
| "middle": [], |
| "last": "Ganin", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [ |
| "S" |
| ], |
| "last": "Lempitsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "1180--1189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaroslav Ganin and Victor S. Lempitsky. 2015. Unsu- pervised domain adaptation by backpropagation. In ICML, pages 1180-1189.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach", |
| "authors": [ |
| { |
| "first": "Xavier", |
| "middle": [], |
| "last": "Glorot", |
| "suffix": "" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bordes", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ICML-11", |
| "volume": "", |
| "issue": "", |
| "pages": "513--520", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach. In ICML- 11, pages 513-520.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A survey on transfer learning. Knowledge and Data Engineering", |
| "authors": [ |
| { |
| "first": "Qiang", |
| "middle": [], |
| "last": "Sinno Jialin Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "IEEE Transactions on", |
| "volume": "22", |
| "issue": "10", |
| "pages": "1345--1359", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. Knowledge and Data Engineering, IEEE Transactions on, 22(10):1345-1359.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Domain Adaptations for Word Sense Disambiguation under the Problem of Covariate Shift", |
| "authors": [ |
| { |
| "first": "Hiroyuki", |
| "middle": [], |
| "last": "Shinnou", |
| "suffix": "" |
| }, |
| { |
| "first": "Minoru", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Natural Language Processing", |
| "volume": "21", |
| "issue": "1", |
| "pages": "61--79", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroyuki Shinnou and Minoru Sasaki. 2014. Domain Adaptations for Word Sense Disambiguation under the Problem of Covariate Shift (in Japanese). Journal of Natural Language Processing, 21(1):61-79.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Deep coral: Correlation alignment for deep domain adaptation", |
| "authors": [ |
| { |
| "first": "Baochen", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Saenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Computer Vision-ECCV 2016 Workshops", |
| "volume": "", |
| "issue": "", |
| "pages": "443--450", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baochen Sun and Kate Saenko. 2016. Deep coral: Cor- relation alignment for deep domain adaptation. In Computer Vision-ECCV 2016 Workshops, pages 443- 450.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Return of Frustratingly Easy Domain Adaptation", |
| "authors": [ |
| { |
| "first": "Baochen", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiashi", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Saenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baochen Sun, Jiashi Feng, and Kate Saenko. 2016. Re- turn of Frustratingly Easy Domain Adaptation. AAAI.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Relative density-ratio estimation for robust distribution comparison", |
| "authors": [ |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "Taiji", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Takafumi", |
| "middle": [], |
| "last": "Kanamori", |
| "suffix": "" |
| }, |
| { |
| "first": "Hirotaka", |
| "middle": [], |
| "last": "Hachiya", |
| "suffix": "" |
| }, |
| { |
| "first": "Masashi", |
| "middle": [], |
| "last": "Sugiyama", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Neural Computation", |
| "volume": "25", |
| "issue": "5", |
| "pages": "1370--1370", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Makoto Yamada, Taiji Suzuki, Takafumi Kanamori, Hi- rotaka Hachiya, and Masashi Sugiyama. 2011. Rel- ative density-ratio estimation for robust distribution comparison. Neural Computation, 25(5):1370-1370.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Pacific Asia Conference on Language, Information and Computation Hong Kong", |
| "authors": [], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pacific Asia Conference on Language, Information and Computation Hong Kong, 1-3 December 2018 Copyright 2018 by the authors", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Weighted-Learning by neural network", |
| "uris": null |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td/><td colspan=\"3\">: Experimental result</td><td/></tr><tr><td>IDEAL</td><td>NONE</td><td>uLSIF</td><td>NB</td><td>our</td></tr><tr><td/><td/><td/><td/><td>method</td></tr><tr><td>B D 0.822</td><td>0.806</td><td>0.806</td><td>0.811</td><td>0.809</td></tr><tr><td>B E 0.852</td><td>0.761</td><td>0.756</td><td>0.755</td><td>0.765</td></tr><tr><td>B K 0.878</td><td>0.845</td><td>0.778</td><td>0.779</td><td>0.785</td></tr><tr><td>D B 0.831</td><td>0.762</td><td>0.733</td><td>0.745</td><td>0.741</td></tr><tr><td>D E 0.852</td><td>0.761</td><td>0.748</td><td>0.753</td><td>0.758</td></tr><tr><td>D K 0.878</td><td>0.795</td><td>0.773</td><td>0.782</td><td>0.789</td></tr><tr><td>E B 0.831</td><td>0.712</td><td>0.714</td><td>0.723</td><td>0.719</td></tr><tr><td>E D 0.822</td><td>0.722</td><td>0.708</td><td>0.723</td><td>0.714</td></tr><tr><td>E K 0.878</td><td>0.849</td><td>0.854</td><td>0.857</td><td>0.855</td></tr><tr><td>K B 0.831</td><td>0.713</td><td>0.707</td><td>0.714</td><td>0.715</td></tr><tr><td>K D 0.822</td><td>0.740</td><td>0.733</td><td>0.723</td><td>0.736</td></tr><tr><td>K E 0.852</td><td>0.842</td><td>0.847</td><td>0.852</td><td>0.845</td></tr><tr><td>Average 0.846</td><td>0.776</td><td>0.763</td><td>0.768</td><td>0.769</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td colspan=\"4\">: The number of files in each domain</td></tr><tr><td/><td colspan=\"3\">positive negative test data</td></tr><tr><td>books</td><td>1,000</td><td>1,000</td><td>4,465</td></tr><tr><td>dvd</td><td>1,000</td><td>1,000</td><td>3,586</td></tr><tr><td>electronics</td><td>1,000</td><td>1,000</td><td>5,681</td></tr><tr><td>kitchen</td><td>1,000</td><td>1,000</td><td>5,945</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>IDEAL</td><td>NONE</td><td>CORAL</td><td>our</td><td>SCL</td><td>SCL + our method</td></tr><tr><td/><td/><td/><td>method</td><td/><td/></tr><tr><td>B E 0.852</td><td>0.761</td><td>0.763</td><td>0.760</td><td>0.757</td><td>0.756</td></tr><tr><td>D B 0.831</td><td>0.762</td><td>0.783</td><td>0.756</td><td>0.732</td><td>0.733</td></tr><tr><td>E K 0.878</td><td>0.849</td><td>0.836</td><td>0.849</td><td>0.852</td><td>0.853</td></tr><tr><td>K D 0.822</td><td>0.740</td><td>0.739</td><td>0.743</td><td>0.732</td><td>0.733</td></tr><tr><td>Average 0.846</td><td>0.778</td><td>0.780</td><td>0.777</td><td>0.768</td><td>0.769</td></tr></table>", |
| "html": null, |
| "text": "Combination of feature-based method and instance-based method", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td>NN+AE+WT 0.7697</td></tr><tr><td>0.78</td></tr><tr><td>0.77</td></tr><tr><td>0.76</td></tr><tr><td>0.75</td></tr><tr><td>precision</td></tr><tr><td>0.74</td></tr><tr><td>NN+AE 0.7667</td></tr><tr><td>0.73</td></tr><tr><td>0.72</td></tr><tr><td>NN 0.7618</td></tr><tr><td>0.71</td></tr><tr><td>0.7</td></tr><tr><td>1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50</td></tr><tr><td>epoch</td></tr><tr><td>Hong Kong, 1-3 December 2018</td></tr><tr><td>Copyright 2018 by the authors</td></tr></table>", |
| "html": null, |
"text": "FIG. 2 shows that this feature-based method NN+AE improves the precision of the standard neural network NN. Moreover, combining",
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td/><td/><td colspan=\"3\">: weighted-learning by neural network</td></tr><tr><td>IDEAL</td><td>NONE</td><td>NN</td><td>NN+AE</td><td>NN+AE+WT</td></tr><tr><td>0.852</td><td>0.761</td><td>0.7618</td><td>0.7667</td><td>0.7697</td></tr></table>", |
| "html": null, |
| "text": "", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |