{
"paper_id": "R19-1025",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:02:44.481922Z"
},
"title": "Self-Adaptation for Unsupervised Domain Adaptation",
"authors": [
{
"first": "Xia",
"middle": [],
"last": "Cui",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Liverpool",
"location": {
"addrLine": "Ashton Street",
"postCode": "L69 3BX",
"settlement": "Liverpool",
"country": "United Kingdom"
}
},
"email": "xia.cui@liverpool.ac.uk"
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Liverpool",
"location": {
"addrLine": "Ashton Street",
"postCode": "L69 3BX",
"settlement": "Liverpool",
"country": "United Kingdom"
}
},
"email": "danushka.bollegala@liverpool.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to those of SOTA methods, including self-training, tri-training, and neural adaptation.",
"pdf_parse": {
"paper_id": "R19-1025",
"_pdf_hash": "",
"abstract": [
{
"text": "Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among the nearest neighbours with opposite labels in the source domain. Next, we project the source domain labelled data using the learnt projection and train a classifier for the target class prediction. We then use the trained classifier to predict pseudo labels for the target domain unlabelled data. Finally, we learn a projection for the target domain as we did for the source domain using the pseudo-labelled target domain data, where we maximise the distance between nearest neighbours having opposite pseudo labels. Experiments on a standard benchmark dataset for domain adaptation show that the proposed method consistently outperforms numerous baselines and returns competitive results comparable to those of SOTA methods, including self-training, tri-training, and neural adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A machine learning model trained using data from one domain (source domain) might not necessarily perform well on a different (target) domain when their distributions are different. Domain adaptation (DA) considers the problem of adapting a machine learning model such as a classifier that is trained using a source domain to a target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In particular, in Unsupervised Domain Adaptation (UDA) (Blitzer et al., 2006, 2007; Pan et al., 2010) we do not assume the availability of any labelled instances from the target domain; instead, we are given a set of labelled instances from the source domain and unlabelled instances from both the source and target domains.",
"cite_spans": [
{
"start": 55,
"end": 76,
"text": "(Blitzer et al., 2006",
"ref_id": "BIBREF3"
},
{
"start": 77,
"end": 100,
"text": "(Blitzer et al., , 2007",
"ref_id": "BIBREF2"
},
{
"start": 101,
"end": 118,
"text": "Pan et al., 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two main approaches to UDA can be identified in prior work: projection-based methods and self-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Projection-based methods for UDA learn an embedding space where the distributions of features in the source and the target domains become closer to each other than they were in the original feature spaces (Blitzer et al., 2006). For this purpose, the union of the source and target feature spaces is split into domain-independent features (often referred to as pivots) and domain-specific features using heuristics such as mutual information or the frequency of a feature in a domain. A projection is then learnt between those two feature spaces and used to adapt a classifier trained on the source domain labelled data. For example, methods based on spectral feature alignment via graph decomposition (Pan et al., 2010) or autoencoders (Louizos et al., 2015) have been proposed for this purpose.",
"cite_spans": [
{
"start": 207,
"end": 229,
"text": "(Blitzer et al., 2006)",
"ref_id": "BIBREF3"
},
{
"start": 718,
"end": 736,
"text": "(Pan et al., 2010)",
"ref_id": "BIBREF18"
},
{
"start": 753,
"end": 775,
"text": "(Louizos et al., 2015)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Self-training (Yarowsky, 1995; Abney, 2007) is a technique to iteratively increase a set of labelled instances by training a classifier using the current labelled instances and applying the trained classifier to predict pseudo-labels for unlabelled instances. Highly confident predictions are then appended to the current labelled dataset, thereby increasing the number of labelled instances. The process is iterated until no additional pseudo-labelled instances can be found. Self-training provides a direct solution to the lack of labelled data in the target domain in UDA (McClosky et al., 2006; Reichart and Rappoport, 2007; Drury et al., 2011). Specifically, the source domain's labelled instances are used to initialise the self-training process, and during subsequent iterations labels are inferred for the target domain's unlabelled instances, which can be used to train a classifier for the task of interest.",
"cite_spans": [
{
"start": 14,
"end": 30,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF26"
},
{
"start": 31,
"end": 42,
"text": "Abney, 2007",
"ref_id": "BIBREF0"
},
{
"start": 567,
"end": 594,
"text": "UDA (McClosky et al., 2006;",
"ref_id": null
},
{
"start": 595,
"end": 624,
"text": "Reichart and Rappoport, 2007;",
"ref_id": "BIBREF21"
},
{
"start": 625,
"end": 644,
"text": "Drury et al., 2011)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So far, projection-based and self-training approaches for UDA have been explored separately. An interesting research question, which we ask and answer positively in this paper, is whether we can improve the performance of projection-based UDA methods using self-training. In particular, recent work on UDA (Morerio et al., 2018) has shown that minimising the entropy of a classifier on its predictions in the source and target domains is equivalent to learning a projection space that maximises the correlation between source and target instances. Motivated by these developments, we propose Self-Adapt, a method that combines the complementary strengths of projection-based and self-training methods for UDA.",
"cite_spans": [
{
"start": 300,
"end": 322,
"text": "(Morerio et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our proposed method consists of three steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 First, using labelled instances from the source domain we learn a projection (S prj ) that maximises the distance between each source domain labelled instance and its nearest neighbours with opposite labels. Intuitively, this process will learn a projected feature space in the source domain where the margin between the opposite labelled nearest neighbours is maximised, thereby minimising the risk of misclassifications. We project the source domain's labelled instances using S prj for the purpose of training a classifier for predicting the target task labels such as positive/negative sentiment in cross-domain sentiment classification or part-of-speech tags in cross-domain part-of-speech tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Second, we use the classifier trained in the previous step to assign pseudo labels to the (unlabelled) target domain instances. Different strategies can be used for this label inference process, such as selecting instances with the highest classifier confidence as in self-training, or checking the agreement among multiple classifiers as in tri-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Third, we use the pseudo-labelled target domain instances to learn a projection for the target domain (T prj ) following the same procedure used to learn S prj . Specifically, we learn a projected feature space in the target domain where the margin between the opposite pseudo-labelled nearest neighbours is maximised. We project labelled instances in the source domain and pseudo-labelled instances in the target domain respectively using S prj and T prj , and use those projected instances to learn a classifier for the target task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
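The three steps above can be sketched end-to-end on toy data. This is a structural sketch only: the identity matrices stand in for the learnt projections S_prj and T_prj, and a nearest-centroid model stands in for the paper's logistic regression classifier; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-d vectors standing in for BonG document embeddings (illustrative).
src_pos = rng.normal([+2.0, 0.0], 0.3, size=(30, 2))   # S_L^+
src_neg = rng.normal([-2.0, 0.0], 0.3, size=(30, 2))   # S_L^-
tgt_unlab = np.vstack([rng.normal([+2.0, 1.0], 0.3, size=(30, 2)),
                       rng.normal([-2.0, 1.0], 0.3, size=(30, 2))])  # T_U

# Step 1 (stand-in): S_prj learnt from source labels; identity placeholder.
S_prj = np.eye(2)

def fit_centroids(pos, neg):
    # Nearest-centroid classifier standing in for logistic regression.
    return pos.mean(axis=0), neg.mean(axis=0)

def p_positive(x, c_pos, c_neg):
    # P(y = +1 | x) via a sigmoid over the squared-distance difference.
    z = np.sum((x - c_neg) ** 2) - np.sum((x - c_pos) ** 2)
    return 1.0 / (1.0 + np.exp(-z))

c_pos, c_neg = fit_centroids(src_pos @ S_prj.T, src_neg @ S_prj.T)

# Step 2: assign pseudo labels to confidently classified target instances.
tau = 0.9
pseudo_pos = [x for x in tgt_unlab if p_positive(x, c_pos, c_neg) > tau]
pseudo_neg = [x for x in tgt_unlab if p_positive(x, c_pos, c_neg) < 1 - tau]

# Step 3 (stand-in): T_prj learnt from pseudo labels; identity placeholder.
# Train the final classifier on projected source + pseudo-labelled target data.
T_prj = np.eye(2)
final_pos = np.vstack([src_pos @ S_prj.T, np.array(pseudo_pos) @ T_prj.T])
final_neg = np.vstack([src_neg @ S_prj.T, np.array(pseudo_neg) @ T_prj.T])
c_pos, c_neg = fit_centroids(final_pos, final_neg)
```

The point of the sketch is the data flow: source labels drive the first projection and classifier, confident target predictions become pseudo labels, and the final classifier sees both projected sets.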
{
"text": "As an evaluation task, we perform cross-domain sentiment classification on the Amazon multi-domain sentiment dataset (Blitzer et al., 2007). Although most prior work on UDA has used this dataset as a standard evaluation benchmark, the evaluations have been limited to the four domains books, dvds, electronic appliances and kitchen appliances. We too report performances on those four domains for ease of comparison against prior work. However, to reliably estimate the generalisability of the proposed method, we perform an additional extensive evaluation using 16 other domains included in the original version of the Amazon multi-domain sentiment dataset.",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Blitzer et al., 2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Results from the cross-domain sentiment classification reveal several interesting facts. A baseline that uses S prj alone still outperforms a no-adaptation baseline, i.e. a classifier trained using only the source domain's labelled instances and applied to the target domain's test instances. This result shows that it is useful to consider the label distribution available in the source domain to learn a projection, even though it might differ from that in the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "On the other hand, training a classifier using the pseudo-labelled target domain instances alone, without learning T prj further improves performance. This result shows that pseudo labels inferred for the target domain unlabelled instances can be used to overcome the issue of lack of labelled instances in the target domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Moreover, if we further use the pseudo-labelled instances to learn T prj , then we see a significant improvement of performance across all domain pairs, suggesting that UDA can benefit from both projection learning and self-training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "These experimental results support our claim that it is beneficial to combine projection-based and self-training-based UDA approaches. Moreover, our proposed method outperforms all self-training based domain adaptation methods such as tri-training (Zhou and Li, 2005; S\u00f8gaard, 2010) and is competitive against neural domain adaptation methods. Self-training (Yarowsky, 1995) has been adapted to various cross-domain NLP tasks such as document classification (Raina et al., 2007), POS tagging (McClosky et al., 2006; Reichart and Rappoport, 2007) and sentiment classification (Drury et al., 2011). Although different variants of self-training algorithms have been proposed (Abney, 2007; Yu and K\u00fcbler, 2011), a common recipe can be recognised involving the following three steps: (a) initialise the training dataset, L = S L , to the labelled data in the source domain, and train a classifier for the target task using L; (b) apply the classifier trained in step (a) to the unlabelled data in the target domain T U , and append the most confident predictions identified by the classifier (e.g. higher than a pre-defined confidence threshold \u03c4) to the labelled dataset L; (c) repeat steps (a) and (b) until no additional high-confidence predictions can be appended to L.",
"cite_spans": [
{
"start": 247,
"end": 266,
"text": "(Zhou and Li, 2005;",
"ref_id": "BIBREF28"
},
{
"start": 267,
"end": 281,
"text": "S\u00f8gaard, 2010)",
"ref_id": "BIBREF24"
},
{
"start": 344,
"end": 360,
"text": "(Yarowsky, 1995)",
"ref_id": "BIBREF26"
},
{
"start": 444,
"end": 464,
"text": "(Raina et al., 2007)",
"ref_id": "BIBREF20"
},
{
"start": 479,
"end": 502,
"text": "(McClosky et al., 2006;",
"ref_id": "BIBREF16"
},
{
"start": 503,
"end": 532,
"text": "Reichart and Rappoport, 2007)",
"ref_id": "BIBREF21"
},
{
"start": 562,
"end": 582,
"text": "(Drury et al., 2011)",
"ref_id": "BIBREF11"
},
{
"start": 659,
"end": 672,
"text": "(Abney, 2007;",
"ref_id": "BIBREF0"
},
{
"start": 673,
"end": 693,
"text": "Yu and K\u00fcbler, 2011)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
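The three-step recipe (a)-(c) can be written as a short generic loop. The centroid-based stand-in classifier and the toy data below are hypothetical; the loop structure itself follows the recipe in the text.

```python
import numpy as np

def fit(X, y):
    # Toy stand-in classifier: class centroids (not the paper's classifier).
    return X[y == 1].mean(axis=0), X[y == -1].mean(axis=0)

def p_positive(model, x):
    c_pos, c_neg = model
    z = np.sum((x - c_neg) ** 2) - np.sum((x - c_pos) ** 2)
    return 1.0 / (1.0 + np.exp(-z))   # confident when far from the boundary

def self_train(S_L, y_S, T_U, tau=0.9, max_iter=10):
    """Classic self-training: (a) train on L; (b) append confident
    predictions on T_U to L; (c) repeat until nothing new is added."""
    X, y = S_L.copy(), y_S.copy()
    remaining = list(range(len(T_U)))
    for _ in range(max_iter):
        model = fit(X, y)                                   # step (a)
        newly = [(i, 1 if p_positive(model, T_U[i]) > tau else -1)
                 for i in remaining
                 if p_positive(model, T_U[i]) > tau
                 or p_positive(model, T_U[i]) < 1 - tau]    # step (b)
        if not newly:                                       # step (c): stop
            break
        X = np.vstack([X] + [T_U[i][None, :] for i, _ in newly])
        y = np.concatenate([y, [lab for _, lab in newly]])
        taken = {i for i, _ in newly}
        remaining = [i for i in remaining if i not in taken]
    return X, y

rng = np.random.default_rng(1)
S = np.vstack([rng.normal([+2, 0], 0.3, (20, 2)),
               rng.normal([-2, 0], 0.3, (20, 2))])
y_S = np.array([1] * 20 + [-1] * 20)
T_U = np.vstack([rng.normal([+2, 0.5], 0.3, (20, 2)),
                 rng.normal([-2, 0.5], 0.3, (20, 2))])
X_big, y_big = self_train(S, y_S, T_U)
```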
{
"text": "Another popular approach for inferring labels for the target domain is co-training (Blum and Mitchell, 1998), where the availability of multiple views of the feature space is assumed. In the simplest case, where there are two views available for the instances, a separate classifier is trained using the source domain's labelled instances that involve features from a particular view only. Next, the two classifiers are used to predict pseudo labels for the target domain unlabelled instances. If the two classifiers agree on the label for a particular unlabelled instance, then that label is assigned to that instance. Co-training has been applied to UDA (Yu and K\u00fcbler, 2011; Chen et al., 2011), where the feature spaces in the source and target domains were considered as the multiple views. The performance of co-training depends on the complementarity of the information captured by the different feature spaces. Therefore, it is important to carefully design multiple feature spaces when performing UDA. In contrast, our proposed method requires neither such multiple views nor the training of multiple classifiers for the purpose of assigning pseudo labels to the target domain unlabelled instances, which makes the proposed method easy to implement.",
"cite_spans": [
{
"start": 83,
"end": 108,
"text": "(Blum and Mitchell, 1998)",
"ref_id": "BIBREF4"
},
{
"start": 657,
"end": 678,
"text": "(Yu and K\u00fcbler, 2011;",
"ref_id": "BIBREF27"
},
{
"start": 679,
"end": 697,
"text": "Chen et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Tri-training (Zhou and Li, 2005) relaxes the requirement of co-training for the feature spaces to be sufficient and redundant views. Specifically, in tri-training, as the name implies, three separate classifiers are trained using bootstrapped subsets of instances sampled from the labelled instances. If at least two out of the three classifiers agree upon a label for an unlabelled instance, that label is assigned to the unlabelled instance. S\u00f8gaard (2010) proposed a variation of tri-training (i.e. tri-training with diversification) that diversifies the sampling process and reduces the number of additional instances, requiring exactly two out of the three classifiers to agree upon a label and the third classifier to disagree. It has been shown that the classic tri-training algorithm, when applied to UDA, acts as a strong baseline that outperforms even more complex SoTA neural adaptation methods (Ruder and Plank, 2018). As later shown in our experiments, the proposed Self-Adapt method consistently outperforms self-training, tri-training and tri-training with diversification across most of the domain pairs considered.",
"cite_spans": [
{
"start": 13,
"end": 32,
"text": "(Zhou and Li, 2005)",
"ref_id": "BIBREF28"
},
{
"start": 448,
"end": 462,
"text": "S\u00f8gaard (2010)",
"ref_id": "BIBREF24"
},
{
"start": 917,
"end": 940,
"text": "(Ruder and Plank, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
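The two agreement rules (classic tri-training and the diversified variant) are easy to make concrete; the function names below are ours.

```python
def tri_train_label(preds):
    """Classic tri-training: if at least two of the three classifier
    predictions agree, return that label, otherwise None."""
    a, b, c = preds
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None

def tri_train_diversified_label(preds):
    """Sogaard (2010)'s variant: exactly two must agree and the
    third must disagree, otherwise the instance is not labelled."""
    a, b, c = preds
    if a == b and a != c:
        return a
    if a == c and a != b:
        return a
    if b == c and b != a:
        return b
    return None

tri_train_label((1, 1, -1))               # -> 1  (two agree)
tri_train_diversified_label((1, 1, 1))    # -> None (no disagreement)
```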
{
"text": "Projection-based approaches for UDA learn a (possibly lower-dimensional) projection where the difference between the source and target feature spaces is reduced. For example, Structural Correspondence Learning (SCL) (Blitzer et al., 2006, 2007) learns a projection using a set of domain-invariant common features called pivots. Different strategies have been proposed in the literature for finding pivots for different tasks, such as the frequency of a feature in a domain for cross-domain POS tagging (Blitzer et al., 2006; Cui et al., 2017a), mutual information (Blitzer et al., 2007) and pointwise mutual information (Bollegala et al., 2011, 2015) for cross-domain sentiment classification. Cui et al. (2017b) proposed a method for learning the appropriateness of a feature as a pivot (pivothood) from the data during training, without requiring any heuristics. Although we use projections in the proposed method, unlike prior work on projection-based UDA, we do not require splitting the feature space into domain-independent and domain-specific features. Moreover, we learn two separate projections for each of the source and target domains, which gives us more flexibility to address the domain-specific constraints in the learnt projections. Maximum Mean Discrepancy (MMD) (Gretton et al., 2006) has been used for further promoting an invariant projected feature space. Ganin et al. (2016) proposed the Domain Adversarial Neural Network (DANN) to learn features that combine the discriminative power of a classifier and the domain-invariance of the projection space, simultaneously learning adaptable and discriminative projections. Saito et al. (2017) proposed a deep tri-training method with three neural networks, two for pseudo-labelling the target unlabelled data and another for learning a discriminator using the inferred pseudo labels for the target domain. Ruder and Plank (2018) proposed Multi-task Tri-training (MT-Tri) based on tri-training and Bi-LSTMs. They show that tri-training is a competitive baseline and rivals more complex neural adaptation methods. Although MT-Tri does not outperform SoTA on cross-domain sentiment classification tasks, their proposal reduces the time and space complexity required by classical tri-training.",
"cite_spans": [
{
"start": 216,
"end": 237,
"text": "(Blitzer et al., 2006",
"ref_id": "BIBREF3"
},
{
"start": 238,
"end": 261,
"text": "(Blitzer et al., , 2007",
"ref_id": "BIBREF2"
},
{
"start": 517,
"end": 539,
"text": "(Blitzer et al., 2006;",
"ref_id": "BIBREF3"
},
{
"start": 540,
"end": 558,
"text": "Cui et al., 2017a)",
"ref_id": "BIBREF9"
},
{
"start": 580,
"end": 602,
"text": "(Blitzer et al., 2007)",
"ref_id": "BIBREF2"
},
{
"start": 710,
"end": 728,
"text": "Cui et al. (2017b)",
"ref_id": "BIBREF10"
},
{
"start": 1304,
"end": 1326,
"text": "(Gretton et al., 2006)",
"ref_id": "BIBREF13"
},
{
"start": 1388,
"end": 1407,
"text": "Ganin et al. (2016)",
"ref_id": "BIBREF12"
},
{
"start": 1645,
"end": 1664,
"text": "Saito et al. (2017)",
"ref_id": "BIBREF23"
},
{
"start": 1879,
"end": 1901,
"text": "Ruder and Plank (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As stated above, our proposed method, Self-Adapt, differs from the prior work discussed above in that it (a) does not require pivots, (b) does not require multiple feature views, (c) learns two different projections for the source and target domains, and (d) combines a projection step and a self-training step in a non-iterative manner to improve performance in UDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Self-Adaptation (Self-Adapt)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In UDA, we are given a set of positively (S + L ) and negatively (S \u2212 L ) labelled instances for a source domain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "S (S L = S + L \u222aS \u2212 L )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ", and sets of unlabelled instances S U and T U for the source domain S and the target domain T , respectively. Given a dataset D, we are required to learn a classifier f (x, y; D) that returns the probability of a test instance x taking a label y. For simplicity, we consider pairwise adaptation from a single source to a single target, and binary (y \u2208 {\u22121, 1}) classification as the target task. However, Self-Adapt can be easily extended to multi-domain and multi-class UDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We represent an instance (document/review) x by a bag-of-n-gram (BonG) embedding (Arora et al., 2018), where we add the pre-trained d-dimensional word embeddings w \u2208 R d for the words w \u2208 x to create a d-dimensional feature vector x \u2208 R d representing x. Self-Adapt consists of three steps: (a) learning a source projection using S L (Section 3.1); (b) pseudo-labelling T U using a classifier trained on the projected S L (Section 3.2); (c) learning a target projection using the pseudo-labelled target instances, and then learning a classifier f for the target task (Section 3.3).",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "(Arora et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
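The BonG construction simply sums pre-trained vectors of the tokens in a document (here simplified to unigrams; the paper also uses bigrams). The tiny 3-d embedding table below is a hypothetical stand-in for GloVe.

```python
import numpy as np

def bong_embedding(document, embeddings, d):
    """Bag-of-n-gram style embedding: sum the pre-trained vectors of the
    tokens appearing in the document (unigrams only, for brevity)."""
    vec = np.zeros(d)
    for token in document.split():
        if token in embeddings:
            vec += embeddings[token]
    return vec

# Hypothetical 3-d "pre-trained" embeddings standing in for GloVe vectors.
emb = {"good": np.array([1.0, 0.0, 0.0]),
       "bad":  np.array([-1.0, 0.0, 0.0]),
       "book": np.array([0.0, 1.0, 0.0])}
x = bong_embedding("good good book", emb, 3)   # -> [2., 1., 0.]
```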
{
"text": "In UDA, the adaptation task does not vary between the source and target domains. Therefore, we can use S L to learn a projection for the source domain S prj where the separation between an instance x \u2208 S L and its opposite-labelled nearest neighbours is maximised. Specifically, for an instance x we represent the set of its k nearest neighbours NN(x, D, k), selected from a set D, by a vector \u03c6(x, D, k), the weighted sum of the embeddings of the neighbours given by (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source Projection Learning (S prj )",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\phi(x, D, k) = \\sum_{u \\in NN(x, D, k)} \\theta(x, u) u",
"eq_num": "(1)"
}
],
"section": "Source Projection Learning (S prj )",
"sec_num": "3.1"
},
{
"text": "Here, the weight \u03b8(x, u) is computed using the cosine similarity between u and x, and is normalised s.t. \u2211 u\u2208NN(x,D,k) \u03b8(x, u) = 1. Other similarity measures, such as Euclidean distance, can also be used instead of cosine (Van Asch and Daelemans, 2016). Then, S prj is defined by the projection matrices A + \u2208 R d\u00d7d and A \u2212 \u2208 R d\u00d7d and is learnt by maximising the objective O L given by (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source Projection Learning (S prj )",
"sec_num": "3.1"
},
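Eq. (1) with normalised cosine weights can be sketched in a few lines of NumPy; the variable names are ours, and ties/degenerate weights are not handled.

```python
import numpy as np

def phi(x, D, k):
    """Weighted sum of the k nearest neighbours of x in D (Eq. 1),
    with cosine-similarity weights normalised to sum to one."""
    sims = D @ x / (np.linalg.norm(D, axis=1) * np.linalg.norm(x) + 1e-12)
    nn = np.argsort(-sims)[:k]       # indices of the k most similar rows
    w = sims[nn] / sims[nn].sum()    # normalise the weights to sum to 1
    return w @ D[nn]

# Toy check: x is closest to rows 0 and 2, so phi blends those two rows.
D = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
v = phi(np.array([1.0, 0.1]), D, k=2)
```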
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O_L(A_+, A_-) = \\sum_{x \\in S_L^+} \\lVert A_+ x - A_- \\phi(x, S_L^-, k) \\rVert_2^2 + \\sum_{x \\in S_L^-} \\lVert A_- x - A_+ \\phi(x, S_L^+, k) \\rVert_2^2",
"eq_num": "(2)"
}
],
"section": "OL(A+,",
"sec_num": null
},
{
"text": "We initialise A + and A \u2212 to the identity matrix I \u2208 R d\u00d7d and apply Adam (Kingma and Ba, 2014) to find their optimal values, denoted respectively by A * + and A * \u2212 . Finally, we project S L using the learnt S prj to obtain a projected set of source domain labelled instances S *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OL(A+,",
"sec_num": null
},
{
"text": "L = A * + \u2022 S + L \u222a A * \u2212 \u2022 S \u2212 L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OL(A+,",
"sec_num": null
},
{
"text": "Here, we use the notation A \u2022 D = {Ax|x \u2208 D} to indicate the application of a projection matrix A \u2208 R d\u00d7d on elements x \u2208 R d in a dataset D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OL(A+,",
"sec_num": null
},
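Learning A+ and A- can be sketched as gradient ascent on the objective of Eq. (2): each step pushes the projection of a labelled instance away from the projection of its opposite-labelled neighbour summary. Plain gradient steps stand in for the Adam optimiser used in the paper, and the step size, step count and toy data are illustrative only.

```python
import numpy as np

def phi(x, D, k):
    # Cosine-weighted k-NN summary (Eq. 1), weights normalised to one.
    sims = D @ x / (np.linalg.norm(D, axis=1) * np.linalg.norm(x) + 1e-12)
    nn = np.argsort(-sims)[:k]
    w = sims[nn] / sims[nn].sum()
    return w @ D[nn]

def learn_projection(pos, neg, k=2, steps=10, lr=0.01):
    """Gradient ascent on Eq. (2): maximise || A+ x - A- phi(x, neg, k) ||^2
    over positives plus the symmetric term over negatives."""
    d = pos.shape[1]
    A_pos, A_neg = np.eye(d), np.eye(d)
    for _ in range(steps):
        G_pos, G_neg = np.zeros((d, d)), np.zeros((d, d))
        for x in pos:
            f = phi(x, neg, k)
            r = A_pos @ x - A_neg @ f        # residual of one objective term
            G_pos += 2 * np.outer(r, x)      # d/dA+ of ||r||^2
            G_neg -= 2 * np.outer(r, f)      # d/dA- of ||r||^2
        for x in neg:
            f = phi(x, pos, k)
            r = A_neg @ x - A_pos @ f
            G_neg += 2 * np.outer(r, x)
            G_pos -= 2 * np.outer(r, f)
        A_pos += lr * G_pos                  # ascent: grow the separation
        A_neg += lr * G_neg
    return A_pos, A_neg

pos = np.array([[1.0, 0.2], [1.0, -0.2], [0.9, 0.0]])
neg = np.array([[-1.0, 0.1], [-1.0, -0.1], [-0.9, 0.0]])
A_pos, A_neg = learn_projection(pos, neg)
```

After a few ascent steps the projected class means are further apart than the original ones, which is exactly the margin-widening effect the objective encodes.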
{
"text": "In UDA, we do not have labelled data for the target domain. To overcome this issue, inspired by prior work on self-training approaches to UDA, we train a classifier f (x, y; S * L ) on S * L first and then use this classifier to assign pseudo labels for the target domain's unlabelled data T U , if the classifier",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo Label Generation (PL)",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 Pseudo Label Generation Input: source domain positively labelled data S + L , source domain negatively labelled data S \u2212 L , source domain positive transformation matrix A + , source domain negative transformation matrix A \u2212 , target domain unlabelled data T U , a set of target classes Y = {+1, \u22121}, classification confidence threshold \u03c4 . Output: target domain pseudo-labelled data T L S * L \u2190 A * + S + L \u222a A * \u2212 S \u2212 L T L \u2190 \u2205 for x \u2208 T U do y \u2208 Y , p(t = y|x) = f (x, y; S * L ) {probability that x belongs to class y} if p(t = y|x) > \u03c4 then T L \u2190 T L \u222a {(x, y)} end if end for return T L",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo Label Generation (PL)",
"sec_num": "3.2"
},
{
"text": "is more confident than a pre-defined threshold \u03c4 . Algorithm 1 returns a pseudo-labelled dataset T L for the target domain. According to the classical self-training (Yarowsky, 1995; Abney, 2007) , T L will be appended to S * L and the classifier is retrained on this extended labelled dataset. The process is repeated until no further unlabelled instances can be assigned labels with confidence higher than \u03c4 . However, in our preliminary experiments, we found that this process does not improve the performance in UDA beyond the first iteration. Therefore, we limit the number of iterations to one as shown in Algorithm 1. Doing so also speeds up the training process over classical self-training, which retrains the classifier and iterates.",
"cite_spans": [
{
"start": 165,
"end": 181,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF26"
},
{
"start": 182,
"end": 194,
"text": "Abney, 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pseudo Label Generation (PL)",
"sec_num": "3.2"
},
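Algorithm 1 (single-pass pseudo-labelling) can be sketched as below. A tiny hand-rolled logistic regression trained by plain gradient descent, without regularisation, stands in for the paper's l2-regularised classifier, and the toy data are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pseudo_label(S_pos, S_neg, T_U, tau=0.9, steps=300, lr=0.1):
    """Train f on the projected source data, then keep a target instance x
    with label y only if p(t = y | x) exceeds tau (one pass, no iteration)."""
    X = np.vstack([S_pos, S_neg])
    y = np.concatenate([np.ones(len(S_pos)), np.zeros(len(S_neg))])
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):                 # logistic regression via GD
        g = sigmoid(X @ w + b) - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    T_L = []
    for x in T_U:
        p = sigmoid(x @ w + b)             # p(t = +1 | x)
        if p > tau:
            T_L.append((x, +1))
        elif 1 - p > tau:                  # p(t = -1 | x) > tau
            T_L.append((x, -1))
    return T_L

rng = np.random.default_rng(2)
S_pos = rng.normal([+2, 0], 0.3, (20, 2))  # stands for the projected S_L^+
S_neg = rng.normal([-2, 0], 0.3, (20, 2))
T_U = np.vstack([rng.normal([+2, 0.5], 0.3, (15, 2)),
                 rng.normal([-2, 0.5], 0.3, (15, 2))])
T_L = pseudo_label(S_pos, S_neg, T_U)
```

Note that, as in the text, only one labelling pass is made: the classifier is not retrained on T_L inside this function.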
{
"text": "Armed with the pseudo-labelled data generated via Algorithm 1, we can now learn a projection for the target domain, T prj , following the same procedure we proposed for learning S prj in Section 3.1. Specifically, T prj is defined by the two target-domain projection matrices B + \u2208 R d\u00d7d and B \u2212 \u2208 R d\u00d7d that maximise the distance between each pseudo-labelled target instance x and its k opposite-labelled nearest neighbours selected from the positively (T + L ) and negatively (T \u2212 L ) pseudo-labelled instances. The objective O L for this optimisation problem is given by (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Projection Learning (T prj )",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O_L(B_+, B_-) = \\sum_{x \\in T_L^+} \\lVert B_+ x - B_- \\phi(x, T_L^-, k) \\rVert_2^2 + \\sum_{x \\in T_L^-} \\lVert B_- x - B_+ \\phi(x, T_L^+, k) \\rVert_2^2",
"eq_num": "(3)"
}
],
"section": "Target Projection Learning (T prj )",
"sec_num": "3.3"
},
{
"text": "Likewise with S prj , B + and B \u2212 are initialised to the identify matrix I \u2208 R d\u00d7d , and Adam is used to find their minimisers denoted respectively by B * + and B * \u2212 . We project the target domain pseudo-labelled data using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Projection Learning (T prj )",
"sec_num": "3.3"
},
{
"text": "T prj to obtain T * L = B * + \u2022 T + L \u222a B * \u2212 \u2022 T \u2212 L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Target Projection Learning (T prj )",
"sec_num": "3.3"
},
{
"text": "Finally, we train a classifier f (x, y; S * L \u222a T * L ) for the target task using both source and target projected labelled instances S * L \u222a T * L . Any binary classifier can be used for this purpose. In our experiments, we use an \u21132-regularised logistic regression following prior work in UDA (Blitzer et al., 2006; Pan et al., 2010; Bollegala et al., 2013). Moreover, by using a simple linear classifier, we can decouple the projection learning step from the target classification task, thereby more directly evaluating the performance of the former.",
"cite_spans": [
{
"start": 291,
"end": 313,
"text": "(Blitzer et al., 2006;",
"ref_id": "BIBREF3"
},
{
"start": 314,
"end": 331,
"text": "Pan et al., 2010;",
"ref_id": "BIBREF18"
},
{
"start": 332,
"end": 355,
"text": "Bollegala et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Target Projection Learning (T prj )",
"sec_num": "3.3"
},
{
"text": "Our proposed method does not assume any information about the target task and can in principle be applied to any domain adaptation task. We use cross-domain sentiment classification as the evaluation task in this paper because it has been used extensively in prior work on UDA, thereby enabling us to directly compare the performance of our proposed method against previously proposed UDA methods. In particular, we use the Amazon multi-domain sentiment dataset, originally created by Blitzer et al. (2007), as a benchmark dataset in our experiments. This dataset includes Amazon product reviews from four categories: Books (B), DVDs (D), Electronic Appliances (E) and Kitchen Appliances (K). Considering each category as a domain, we can generate 4 \u00d7 3 = 12 pair-wise adaptation tasks, each involving a single source and a single target domain.",
"cite_spans": [
{
"start": 485,
"end": 506,
"text": "Blitzer et al. (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "An Amazon product review is assigned 1-5 star rating and product reviews with 4 or 5 stars are labelled as positive, whereas 1 or 2 star reviews are labelled as negative. 3 star reviews are ignored because of their ambiguity. In addition to the labelled reviews, the Amazon multi-domain dataset contains a large number of unlabelled reviews for each domain. We use the official balanced train and test dataset splits, which has 800 (pos), 800 (neg) training instances and 200 (pos), 200 (neg) test instances for each domain. We name this dataset as the Multi-domain Adaptation Dataset (MAD).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
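The task setup above is mechanical and can be sketched directly; the helper name `star_to_label` is illustrative, not from the paper.

```python
from itertools import permutations

# The four MAD domains; ordered (source, target) pairs give
# 4 x 3 = 12 single-source pair-wise adaptation tasks.
domains = ["B", "D", "E", "K"]
tasks = list(permutations(domains, 2))

def star_to_label(stars):
    """Map a review's 1-5 star rating to a binary sentiment label;
    ambiguous 3-star reviews are discarded (returned as None)."""
    if stars >= 4:
        return "positive"
    if stars <= 2:
        return "negative"
    return None
```

The same construction with 16 domains yields the 16 x 15 = 240 tasks of the extended dataset.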
{
"text": "One issue that is often raised with MAD is that it contains only four domains. In order to robustly evaluate the performance of an UDA method we must evaluate on multiple domains. Therefore, in addition to MAD, we also evaluate on an extended dataset that contains 16 domains. We name this dataset as the Extended Multi-domain Adaptation Dataset (EMAD). The reviews for the 16 domains contained in EMAD were also collected by Blitzer et al. (2007) , but were not used in the evaluations. The same star-based procedure used in MAD is used to label the reviews in EMAD. We randomly select 20% of the available labelled reviews as test data and construct a balanced training dataset from the rest of the labelled reviews (i.e. for each domain we have equal number of positive and negative labelled instances in the train datasets). Likewise in MAD, we generate 16 2 = 240 pair-wise domain adaptation tasks from EMAD.",
"cite_spans": [
{
"start": 426,
"end": 447,
"text": "Blitzer et al. (2007)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We train an 2 regularised logistic regression as the target (sentiment) classifier, in which we tune the regularisation coefficient using validation data. We randomly select 10% from the target domain labelled data, which is separate from the train or test data. We tune regularisation coefficient in [0.001, 0.01, 0.1, 1]. We use 300 dimensional pre-trained GloVe word embeddings (Pennington et al., 2014) to create BonG embeddings for uni and bigrams. We found that a maximum of 100 epochs to be sufficient to reach convergence in all projection learning tasks for all domains. The source code implementation of self-adapt will be made publicly available upon paper acceptance.",
"cite_spans": [
{
"start": 381,
"end": 406,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
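The BonG feature construction can be sketched as follows. This is a hedged sketch under an assumed composition operator, since the text does not spell it out: each uni- or bigram is represented by the sum of its words' GloVe vectors, and the document by the sum over all of its n-grams; the function name and toy vectors are hypothetical.

```python
import numpy as np

def bong_embedding(tokens, word_vectors, dim):
    """Bag-of-n-grams (BonG) document embedding over uni- and bigrams.

    Assumed composition: each n-gram is the sum of its word vectors and
    the document is the sum over all n-grams; out-of-vocabulary words
    are skipped.
    """
    doc = np.zeros(dim)
    # enumerate unigrams and consecutive-token bigrams
    ngrams = [(t,) for t in tokens] + list(zip(tokens, tokens[1:]))
    for gram in ngrams:
        vecs = [word_vectors[w] for w in gram if w in word_vectors]
        if vecs:
            doc += np.sum(vecs, axis=0)
    return doc

# Toy 3-dimensional "GloVe" vectors for illustration only.
glove = {"good": np.array([1.0, 0.0, 0.0]),
         "movie": np.array([0.0, 1.0, 0.0])}
vec = bong_embedding(["good", "movie"], glove, 3)
```

With real 300-dimensional GloVe vectors, `dim` would be 300 and `word_vectors` the pre-trained embedding table.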
{
"text": "Our proposed method consists of 3 main steps as described in Section 3: source projection learning, pseudo labelling and target projection learning. Using MAD, in Table 1 , we compare the relative effectiveness of these three steps towards the overall performance in UDA using k = 1 for all the steps. Specifically, we consider the following baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "NA: No-adaptation. Learn a classifier from S L and simply use it to classify sentiment on target domain test instances, without performing any domain adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "S prj : Learn a source projection S prj and apply it to project",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "S L to obtain S * L = A * + \u2022 S + L \u222a A * \u2212 \u2022 S \u2212 L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "Train a sentiment classifier using S * L and use it to classify target domain test instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "S prj +PL: Use the classifier trained using S * L on target domain unlabelled data to create a pseudolabelled dataset T L . Train a sentiment classifier on S * L \u222a T L and use it to classify target domain test instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "S prj +T prj : This is the proposed method including all three steps. A target projection T prj is learnt using T L and is applied to obtain T *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "L = B * + \u2022 T + L \u222a B * \u2212 \u2022 T \u2212 L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "Finally, a sentiment classifier is trained using S * L \u222a T * L and used to classify target domain test instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
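The full S prj +T prj pipeline can be sketched end-to-end. This is a toy sketch only: the per-class projections are identity placeholders (the real A*, B* are learnt by separating opposite-label nearest neighbours), a nearest-centroid classifier stands in for the logistic regression, the confidence filtering by \u03c4 is omitted for brevity, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical labelled source data and unlabelled target data.
S_X = rng.normal(size=(60, 5)); S_y = rng.integers(0, 2, 60)
S_X[S_y == 1] += 3.0
T_X = rng.normal(size=(40, 5)); T_true = rng.integers(0, 2, 40)
T_X[T_true == 1] += 3.0

def learn_projection(X, y):
    # Placeholder for S_prj / T_prj learning: identity matrices per class.
    d = X.shape[1]
    return np.eye(d), np.eye(d)

def project(X, y, A_pos, A_neg):
    # Apply the positive-class projection to positive instances and
    # the negative-class projection to negative instances.
    return np.where((y == 1)[:, None], X @ A_pos.T, X @ A_neg.T)

def centroid_predict(X, Xtr, ytr):
    # Simple stand-in classifier: nearest class centroid.
    mu = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    return np.argmin(((X[:, None, :] - mu) ** 2).sum(-1), axis=1)

# Step 1: source projection; Step 2: pseudo-label the target data;
# Step 3: target projection; then train the final classifier on the union.
A_pos, A_neg = learn_projection(S_X, S_y)
S_star = project(S_X, S_y, A_pos, A_neg)
T_pseudo = centroid_predict(T_X, S_star, S_y)
B_pos, B_neg = learn_projection(T_X, T_pseudo)
T_star = project(T_X, T_pseudo, B_pos, B_neg)
X_all, y_all = np.vstack([S_star, T_star]), np.concatenate([S_y, T_pseudo])
preds = centroid_predict(T_X, X_all, y_all)
acc = (preds == T_true).mean()
```

The baselines in the ablation correspond to stopping this pipeline early: NA skips every step, S prj stops after the source projection, and S prj +PL stops after pseudo-labelling.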
{
"text": "With all methods, we keep k = 1 in the nearest neighbour feature representation in (1) for the ease of comparisons. Confidence threshold \u03c4 is tuned in the range [0.5, 0.9] using cross-validation. From Table 1 we see that S prj consistently outperforms NA, showing that even without using any information from the target domain, it is still useful to learn source domain projections that discriminates instances with opposite labels. When we perform pseudo labelling on top of source projection learning (S prj +PL) we see a slight but consistent improvement in all domain-pairs. However, when we use the pseudo labelled instances to learn a target projection (S prj +T prj ) we obtain the best performance in all domain-pairs. Moreover, the obtained results are significantly better under the stricter p < 0.001 level over the NA baseline in 7 out of 12 domain-pairs. Table 3 shows the classification accuracy for the EMAD. Due to the limited availability of space, we show the average classification accuracy for adapting to the same target domain instead of showing all 240 domain-pairs for EMAD. Likewise in MAD, we see in EMAD we obtain the best results when we use both source and target projections. Interestingly, we see that the proposed method adapting well even to the domains Table 1 : Target domain test data classification accuracy for the different steps in the proposed method (k = 1). S \u2212 T denotes adapting from a source S to a target T domain. The best result for each domain-pair is bolded. Statistically significant improvements over NA according to the binomial exact test are shown by \"*\" and \"**\" respectively at p = 0.01 and p = 0.001 levels.",
"cite_spans": [],
"ref_spans": [
{
"start": 201,
"end": 208,
"text": "Table 1",
"ref_id": null
},
{
"start": 868,
"end": 875,
"text": "Table 3",
"ref_id": null
},
{
"start": 1287,
"end": 1294,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
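The significance test reported in the captions can be sketched. This is one common instantiation of a binomial exact test for comparing two classifiers (a sign/McNemar-style test over the instances where the two classifiers disagree); the paper does not spell out its exact formulation, so the function below is an assumption, not the authors' procedure.

```python
from math import comb

def binomial_exact_p(b, c):
    """Two-sided exact binomial (sign) test.

    b: test instances the proposed method classifies correctly but the
    baseline does not; c: the reverse. Under the null hypothesis each
    disagreement is a fair coin, so the p-value is the two-sided tail
    of Binomial(b + c, 0.5).
    """
    n, k = b + c, max(b, c)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)
```

For example, if the method wins 8 of 10 disagreements, the improvement is not significant at p = 0.01, whereas 20 wins out of 22 would be significant even at p = 0.001.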
{
"text": "with smaller numbers of unlabelled data such as gourmet food (168 labelled and 267 unlabelled train instances). This is encouraging because it shows that the proposed method can overcome the lack of labelled instances via pseudo labelling and projection learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of Projection Learning and Pseudo-Labelling",
"sec_num": "4.1"
},
{
"text": "Ruder and Plank (2018) evaluated classical general-purpose semi-supervised learning methods proposed for inducing pseudo labels for unlabelled instances using a seed set of labelled instances in the context of UDA. They found that tri-training to outperform more complex neural SoTA UDA methods. Considering the fact that Self-Adapt is performing pseudo-labelling, similar to other self-training methods, it is interesting to see how well it compares against classical selftraining methods for inducing labels (Yarowsky, 1995; Abney, 2007; Zhou and Li, 2005; S\u00f8gaard, 2010) when applied to UDA. Specifically, we consider the classical self-training (Yarowsky, 1995; Abney, 2007 ) (Self), Tri-training (Zhou and Li, 2005) (Tri) and Tri-training with diversification (S\u00f8gaard, 2010) (Tri-D) . For each of those methods, we use the labelled data in the source domain as seeds and induce labels for the unlabelled data in the target domain. Table 2 reports the results on MAD.",
"cite_spans": [
{
"start": 510,
"end": 526,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF26"
},
{
"start": 527,
"end": 539,
"text": "Abney, 2007;",
"ref_id": "BIBREF0"
},
{
"start": 540,
"end": 558,
"text": "Zhou and Li, 2005;",
"ref_id": "BIBREF28"
},
{
"start": 559,
"end": 573,
"text": "S\u00f8gaard, 2010)",
"ref_id": "BIBREF24"
},
{
"start": 649,
"end": 665,
"text": "(Yarowsky, 1995;",
"ref_id": "BIBREF26"
},
{
"start": 666,
"end": 677,
"text": "Abney, 2007",
"ref_id": "BIBREF0"
},
{
"start": 701,
"end": 726,
"text": "(Zhou and Li, 2005) (Tri)",
"ref_id": null
},
{
"start": 765,
"end": 788,
"text": "(S\u00f8gaard, 2010) (Tri-D)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 937,
"end": 944,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparisons against Self-Training",
"sec_num": "4.2"
},
{
"text": "We re-implement the classical self-training methods considered by Ruder and Plank (2018) and evaluated them against the proposed self-adapt on the same datasets, feature representations and settings to conduct a fair comparison. All classical self-training methods were trained using the source domain labelled instances S L as seed data. As discussed in Section 3.2, similar to Self-Adapt, we observed that the performance did not significantly increase beyond the first iteration for any of the classical self-training methods in UDA. Consequently, we compare all classical self-training methods for their peak performance, obtained after the first iteration. We tune the confidence threshold \u03c4 for each method using validation data and found the optimal value of \u03c4 to fall in the range [0.6, 0.9]. k is a hyperparameter selected using validation dataset for Self-Adapt in comparison.",
"cite_spans": [
{
"start": 66,
"end": 88,
"text": "Ruder and Plank (2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against Self-Training",
"sec_num": "4.2"
},
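The \u03c4-thresholded pseudo-labelling that these self-training comparisons share can be sketched as follows; the function and array names are illustrative.

```python
import numpy as np

def select_pseudo_labelled(probs, tau=0.7):
    """Keep unlabelled target instances whose maximum class probability
    under the source-trained classifier reaches the confidence
    threshold tau (tuned on validation data)."""
    confidence = probs.max(axis=1)
    keep = np.where(confidence >= tau)[0]      # indices of kept instances
    pseudo = probs[keep].argmax(axis=1)        # their hard pseudo labels
    return keep, pseudo

probs = np.array([[0.90, 0.10],   # confident class 0 -> kept
                  [0.55, 0.45],   # low confidence    -> dropped
                  [0.20, 0.80]])  # confident class 1 -> kept
keep, pseudo = select_pseudo_labelled(probs, tau=0.7)
```

Raising \u03c4 trades coverage of the unlabelled data for cleaner pseudo labels, which is what the validation-based tuning over [0.5, 0.9] balances.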
{
"text": "Experimental results on MAD and EMAD are shown respectively in Tables 2 and 4. From those Tables, we see that Self-Adapt for most of the domain pairs performs similarly or slightly worse than NA. Although Tri and Tri-D outperform NA on all cases, we found that those two methods are highly sensitive to the seed instances used to initialise the pseudo-labelling process. We find the proposed Self-Adapt to outperform all classical self-training based methods in 11 out of 12 domain pairs in MAD and in all 16 target domains in EMAD, showing a strong and robust improvement over classical self-training methods. This result shows that by combining source and target domain projections with self-training, we can obtain superior performance in UDA in comparison to using classical self-training methods alone. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against Self-Training",
"sec_num": "4.2"
},
{
"text": "S-T NA Self Tri Tri-D Self-Adapt B-D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparisons against Self-Training",
"sec_num": "4.2"
},
{
"text": "In Table 3 : Average classification accuracy on each target domain in EMAD for the steps in the proposed method (k = 1).",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparisons against Neural UDA",
"sec_num": "4.3"
},
{
"text": "ric Tri-training (Saito et al., 2017 ) (Asy-Tri), and Multi-task Tri-training (Ruder and Plank, 2018) (MT-Tri) . We select these methods as they are the current SoTA for UDA on MAD, and report the results from the original publications in Table 5 . Although only in 3 out of 12 domain-pairs Self-Adapt is obtaining the best performance, the difference of performance between DANN and Self-Adapt is not statistically significant. Although MT-Tri is outperforming Self-Adapt in 8 domainpairs, it is noteworthy that MT-Tri is using a larger feature space than that of Self-Adapt. Specifically, MT-Tri is using 5000 dimensional tf-idf weighted unigram and bigram vectors for representing reviews, whereas we Self-Adapt uses a 300 dimensional BonG representation computed using pre-trained GloVe vectors. Moreover, prior work on neural UDA have not used the entire unlabelled datasets and have sampled a smaller subset due to computational feasibility. For example, MT-Tri uses only 2000 unlabelled instances for each domain despite the fact that the original unlabelled datasets contain much larger numbers of reviews. This is not only a waste of available data but it is also non-obvious as how to subsample unlabelled data for training. Our preliminary experiments revealed that the performance of neural UDA methods to be sensitive to the unlabelled datasets used. 3 On the other hand, Self-Adapt does not require sub-sampling of unlabelled data and uses all the available unlabelled data for UDA. During the pseudo-labelling step, Self-Adapt automatically selects a subset of unlabelled target in- stances that are determined to be confident by the classifier more than a pre-defined threshold \u03c4 . The ability to operate on a lower-dimensional feature space and obviating the need to subsample unlabelled data are properties of the proposed method that are attractive when applying UDA methods on large datasets and across multiple domains. 
Table 5 : Classification accuracy compared with neural adaptation methods. The best result is bolded. Statistically significant improvements over DANN according to the binomial exact test are shown by \"*\" and \"**\" respectively at p = 0.01 and p = 0.001 levels.",
"cite_spans": [
{
"start": 17,
"end": 36,
"text": "(Saito et al., 2017",
"ref_id": "BIBREF23"
},
{
"start": 78,
"end": 110,
"text": "(Ruder and Plank, 2018) (MT-Tri)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 239,
"end": 246,
"text": "Table 5",
"ref_id": "TABREF3"
},
{
"start": 1941,
"end": 1948,
"text": "Table 5",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Comparisons against Neural UDA",
"sec_num": "4.3"
},
{
"text": "We proposed Self-Adapt, an UDA method that combines projection learning and self-training. Our experimental results on two datasets for crossdomain sentiment classification show that projection learning and self-training have complementary strengths and jointly contribute to improve UDA performance. In future, we plan to apply Self-Adapt to other UDA tasks in NLP such as cross-domain POS tagging and NER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "we consider the terms project and embed as synonymous in this paper",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "tion methods(Louizos et al., 2015;Ganin et al., 2016;Saito et al., 2017;Ruder and Plank, 2018).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Multiple reviews might exist for the same product within the same domain. Products are not shared across domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Unfortunately, the source code for MT-Tri was not available for us to run this method with the same set of features and unlabelled dataset that we used. Therefore, we report the results from the original publication.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Semisupervised Learning for Computational Linguistics",
"authors": [
{
"first": "Steven",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven Abney. 2007. Semisupervised Learning for Computational Linguistics, 1st edition. Chapman & Hall/CRC.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A compressed sensing view of unsupervised text embeddings, bag-of-n-grams, and LSTMs",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2018. A compressed sensing view of unsupervised text embeddings, bag-of-n-grams, and LSTMs. In International Conference on Learn- ing Representations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classi- fication. In Proc. of ACL, pages 440-447.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Domain adaptation with structural correspondence learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "120--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proc. of EMNLP, pages 120-128.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Combining labeled and unlabeled data with co-training",
"authors": [
{
"first": "Avrim",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the eleventh annual conference on Computational learning theory",
"volume": "",
"issue": "",
"pages": "92--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Avrim Blum and Tom Mitchell. 1998. Combining la- beled and unlabeled data with co-training. In Pro- ceedings of the eleventh annual conference on Com- putational learning theory, pages 92-100. ACM.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cross-domain sentiment classification using sentiment sensitive embeddings",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Tingting",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Yannis",
"middle": [],
"last": "Goulermas",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE Transactions on Knowledge and Data Engineering (TKDE)",
"volume": "28",
"issue": "2",
"pages": "398--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, Tingting Mu, and Yannis Gouler- mas. 2015. Cross-domain sentiment classifica- tion using sentiment sensitive embeddings. IEEE Transactions on Knowledge and Data Engineering (TKDE), 28(2):398-410.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL/HLT'11",
"volume": "",
"issue": "",
"pages": "132--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, David Weir, and John Carroll. 2011. Using multiple sources to construct a senti- ment sensitive thesaurus for cross-domain sentiment classification. In ACL/HLT'11, pages 132 -141.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Cross-domain sentiment classification using a sentiment sensitive thesaurus",
"authors": [
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2013,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "25",
"issue": "8",
"pages": "1719--1731",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danushka Bollegala, David Weir, and John Carroll. 2013. Cross-domain sentiment classification using a sentiment sensitive thesaurus. IEEE Transactions on Knowledge and Data Engineering, 25(8):1719 - 1731.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Co-training for domain adaptation",
"authors": [
{
"first": "Minmim",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Q",
"middle": [],
"last": "Kilian",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Weinberger",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blitzer",
"suffix": ""
}
],
"year": 2011,
"venue": "NIPS'11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minmim Chen, Kilian Q. Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In NIPS'11.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Effect of data imbalance on unsupervised domain adaptation of part-of-speech tagging and pivot selection strategies",
"authors": [
{
"first": "Xia",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Frans",
"middle": [],
"last": "Coenen",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the Wokshop on Learning With Imbalanced Domains: Theory and Applications (LIDTA) at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)",
"volume": "",
"issue": "",
"pages": "103--115",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xia Cui, Frans Coenen, and Danushka Bollegala. 2017a. Effect of data imbalance on unsupervised domain adaptation of part-of-speech tagging and pivot selection strategies. In Proc. of the Wokshop on Learning With Imbalanced Domains: Theory and Applications (LIDTA) at the European Confer- ence on Machine Learning and Principles and Prac- tice of Knowledge Discovery in Databases (ECML- PKDD), pages 103-115.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "TSP: Learning task-specific pivots for unsupervised domain adaptation",
"authors": [
{
"first": "Xia",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Frans",
"middle": [],
"last": "Coenen",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)",
"volume": "",
"issue": "",
"pages": "754--771",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xia Cui, Frans Coenen, and Danushka Bollegala. 2017b. TSP: Learning task-specific pivots for un- supervised domain adaptation. In Proc. of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), pages 754-771.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Guided self training for sentiment classification",
"authors": [
{
"first": "Brett",
"middle": [],
"last": "Drury",
"suffix": ""
},
{
"first": "Lu\u00eds",
"middle": [],
"last": "Torgo",
"suffix": ""
},
{
"first": "Jose Joao",
"middle": [],
"last": "Almeida",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brett Drury, Lu\u00eds Torgo, and Jose Joao Almeida. 2011. Guided self training for sentiment classification. In Proceedings of Workshop on Robust Unsupervised and Semisupervised Methods in Natural Language Processing, pages 9-16.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Domain-adversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Machine Learning Research",
"volume": "17",
"issue": "59",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Lavi- olette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural net- works. Journal of Machine Learning Research, 17(59):1-35.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A kernel method for the two-sample-problem",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Gretton",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Karsten",
"suffix": ""
},
{
"first": "Malte",
"middle": [],
"last": "Borgwardt",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Rasch",
"suffix": ""
},
{
"first": "Alex",
"middle": [
"J"
],
"last": "Sch\u00f6lkopf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smola",
"suffix": ""
}
],
"year": 2006,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "513--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Gretton, Karsten M Borgwardt, Malte Rasch, Bernhard Sch\u00f6lkopf, and Alex J Smola. 2006. A kernel method for the two-sample-problem. In Ad- vances in neural information processing systems, pages 513-520.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The variational fair autoencoder",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Louizos",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Swersky",
"suffix": ""
},
{
"first": "Yujia",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard S. Zemel. 2015. The varia- tional fair autoencoder. CoRR, abs/1511.00830.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "337--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark John- son. 2006. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Compu- tational Linguistics, pages 337-344. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Minimal-entropy correlation alignment for unsupervised deep domain adaptation",
"authors": [
{
"first": "Pietro",
"middle": [],
"last": "Morerio",
"suffix": ""
},
{
"first": "Jacopo",
"middle": [],
"last": "Cavazza",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Murino",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pietro Morerio, Jacopo Cavazza, and Vittorio Murino. 2018. Minimal-entropy correlation alignment for unsupervised deep domain adaptation. In Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cross-domain sentiment classification via spectral feature alignment",
"authors": [
{
"first": "Xiaochuan",
"middle": [],
"last": "Sinno Jialin Pan",
"suffix": ""
},
{
"first": "Jian-Tao",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of WWW",
"volume": "",
"issue": "",
"pages": "751--760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain senti- ment classification via spectral feature alignment. In Proc. of WWW, pages 751-760.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Glove: global vectors for word representation",
"authors": [
{
"first": "Jeffery",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proc. of Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffery Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: global vectors for word representation. In Proc. of Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Self-taught learning: Transfer learning from unlabeled data",
"authors": [
{
"first": "Rajat",
"middle": [],
"last": "Raina",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Battle",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Packer",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learn- ing: Transfer learning from unlabeled data. In ICML'07.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets",
"authors": [
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL 2007",
"volume": "",
"issue": "",
"pages": "616--623",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL 2007, pages 616-623.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Strong baselines for neural semi-supervised learning under domain shift",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Plank",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "1044--1054",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder and Barbara Plank. 2018. Strong baselines for neural semi-supervised learning under domain shift. pages 1044-1054.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Asymmetric tri-training for unsupervised domain adaptation",
"authors": [
{
"first": "Kuniaki",
"middle": [],
"last": "Saito",
"suffix": ""
},
{
"first": "Yoshitaka",
"middle": [],
"last": "Ushiku",
"suffix": ""
},
{
"first": "Tatsuya",
"middle": [],
"last": "Harada",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2988--2997",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kuniaki Saito, Yoshitaka Ushiku, and Tatsuya Harada. 2017. Asymmetric tri-training for unsupervised do- main adaptation. In International Conference on Machine Learning, pages 2988-2997.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Simple semi-supervised training of part-of-speech taggers",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the ACL 2010 Conference Short Papers, ACLShort '10",
"volume": "",
"issue": "",
"pages": "205--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard. 2010. Simple semi-supervised train- ing of part-of-speech taggers. In Proceedings of the ACL 2010 Conference Short Papers, ACLShort '10, pages 205-208, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Predicting the effectiveness of self-training: Application to sentiment classification",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Van Asch",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Daelemans",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1601.03288"
]
},
"num": null,
"urls": [],
"raw_text": "Vincent Van Asch and Walter Daelemans. 2016. Pre- dicting the effectiveness of self-training: Appli- cation to sentiment classification. arXiv preprint arXiv:1601.03288.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting on Association for Computational Linguistics, ACL '95",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky. 1995. Unsupervised word sense dis- ambiguation rivaling supervised methods. In Pro- ceedings of the 33rd Annual Meeting on Association for Computational Linguistics, ACL '95, pages 189- 196, Stroudsburg, PA, USA. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Filling the gap: Semi-supervised learning for opinion detection across domains",
"authors": [
{
"first": "Ning",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Sandra",
"middle": [],
"last": "K\u00fcbler",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "200--209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ning Yu and Sandra K\u00fcbler. 2011. Filling the gap: Semi-supervised learning for opinion detec- tion across domains. In Proceedings of the Fif- teenth Conference on Computational Natural Lan- guage Learning, pages 200-209. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Tri-training: exploiting unlabeled data using three classifiers",
"authors": [
{
"first": "Zhi-Hua",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2005,
"venue": "IEEE Transactions on Knowledge and Data Engineering",
"volume": "17",
"issue": "11",
"pages": "1529--1541",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi-Hua Zhou and Ming Li. 2005. Tri-training: ex- ploiting unlabeled data using three classifiers. IEEE Transactions on Knowledge and Data Engineering, 17(11):1529-1541.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Neural adaptation methods have recently reported SOTA results for UDA. Louizos et al. (2015) proposed a Variational Fair Autoencoder (VFAE) to learn an invariant representation for a domain.",
"num": null,
"uris": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>: Target domain test data classification ac-</td></tr><tr><td>curacy of classical self-training methods when ap-</td></tr><tr><td>plied to UDA.</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td>, we compare Self-Adapt against</td></tr><tr><td>the following neural UDA methods on MAD:</td></tr><tr><td>Variational Fair Autoencoder (Louizos et al.,</td></tr><tr><td>2015) (VFAE), Domain-adversarial Neural Net-</td></tr><tr><td>works (Ganin et al., 2016) (DANN), Asymmet-</td></tr></table>",
"num": null,
"html": null,
"text": ""
},
"TABREF5": {
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null,
"text": "Average classification accuracy on each target domain in EMAD of classical self-training based methods when applied to UDA."
}
}
}
}