{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:21:25.461175Z"
},
"title": "Improving Cross-Lingual Sentiment Analysis via Conditional Language Adversarial Nets",
"authors": [
{
"first": "Hemanth",
"middle": [
"Sai"
],
"last": "Kandula",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tufts University",
"location": {}
},
"email": ""
},
{
"first": "Bonan",
"middle": [],
"last": "Min",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tufts University",
"location": {}
},
"email": "bonan.min@raytheon.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sentiment analysis has come a long way for high-resource languages due to the availability of large annotated corpora. However, it still suffers from a lack of training data for low-resource languages. To tackle this problem, we propose the Conditional Language Adversarial Network (CLAN), an end-to-end neural architecture for cross-lingual sentiment analysis without cross-lingual supervision. CLAN differs from prior work in that it allows the adversarial training to be conditioned on both learned features and the sentiment prediction, increasing the discriminativity of learned representations in the cross-lingual setting. Experimental results demonstrate that CLAN outperforms previous methods on the multilingual multi-domain Amazon review dataset. Our source code is released at https://github.com/hemanthkandula/clan.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Sentiment analysis has come a long way for high-resource languages due to the availability of large annotated corpora. However, it still suffers from a lack of training data for low-resource languages. To tackle this problem, we propose the Conditional Language Adversarial Network (CLAN), an end-to-end neural architecture for cross-lingual sentiment analysis without cross-lingual supervision. CLAN differs from prior work in that it allows the adversarial training to be conditioned on both learned features and the sentiment prediction, increasing the discriminativity of learned representations in the cross-lingual setting. Experimental results demonstrate that CLAN outperforms previous methods on the multilingual multi-domain Amazon review dataset. Our source code is released at https://github.com/hemanthkandula/clan.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent success in sentiment analysis (Sun et al., 2019; Howard and Ruder, 2018; Brahma, 2018) is largely due to the availability of large-scale annotated datasets (Maas et al., 2011; Zhang et al., 2015; Rosenthal et al., 2017). However, such success cannot be replicated in low-resource languages because of the lack of labeled data for training Machine Learning models.",
"cite_spans": [
{
"start": 37,
"end": 54,
"text": "Sun et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 55,
"end": 78,
"text": "Howard and Ruder, 2018;",
"ref_id": "BIBREF8"
},
{
"start": 79,
"end": 92,
"text": "Brahma, 2018)",
"ref_id": "BIBREF1"
},
{
"start": 162,
"end": 181,
"text": "(Maas et al., 2011;",
"ref_id": "BIBREF12"
},
{
"start": 182,
"end": 201,
"text": "Zhang et al., 2015;",
"ref_id": "BIBREF25"
},
{
"start": 202,
"end": 225,
"text": "Rosenthal et al., 2017)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As it is prohibitively expensive to obtain training data for all languages of interest, cross-lingual sentiment analysis (CLSA) ( Barnes et al., 2018; Zhou et al., 2016b; Xu and Wan, 2017; Wan, 2009; Demirtas and Pechenizkiy, 2013; Xiao and Guo, 2012; Zhou et al., 2016a) offers the possibility of learning sentiment classification models for a target language using only annotated data from a different source language where large annotated data is available. These models often rely on bilingual lexicons, pre-trained cross-lingual word embeddings, or Machine Translation to bridge the gap between the source and target languages.",
"cite_spans": [
{
"start": 130,
"end": 150,
"text": "Barnes et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 151,
"end": 170,
"text": "Zhou et al., 2016b;",
"ref_id": "BIBREF27"
},
{
"start": 171,
"end": 188,
"text": "Xu and Wan, 2017;",
"ref_id": "BIBREF22"
},
{
"start": 189,
"end": 199,
"text": "Wan, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 200,
"end": 231,
"text": "Demirtas and Pechenizkiy, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 232,
"end": 251,
"text": "Xiao and Guo, 2012;",
"ref_id": "BIBREF21"
},
{
"start": 252,
"end": 271,
"text": "Zhou et al., 2016a)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "CLIDSA/CLCDSA (Feng and Wan, 2019) is the first end-to-end CLSA model that does not require cross-lingual supervision, which may not be available for low-resource languages.",
"cite_spans": [
{
"start": 14,
"end": 34,
"text": "(Feng and Wan, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose the Conditional Language Adversarial Network (CLAN) for end-to-end CLSA. Similar to prior work, CLAN performs CLSA without using any cross-lingual supervision. Differing from prior work, CLAN incorporates conditional language adversarial training to learn language-invariant features by conditioning on both learned feature representations (or features for short) and sentiment predictions, thereby increasing the features' discriminativity in the cross-lingual setting. Our contributions are threefold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We develop the Conditional Language Adversarial Network (CLAN), which is designed to learn language-invariant features that are also discriminative for sentiment classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Experiments on the multilingual multidomain Amazon review dataset (Prettenhofer and Stein, 2010) show that CLAN outperforms all previous methods for both in-domain and cross-domain CLSA tasks.",
"cite_spans": [
{
"start": 68,
"end": 98,
"text": "(Prettenhofer and Stein, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 t-SNE visualization of the held-out examples shows that the learned features align well across languages, indicating that CLAN is able to learn language invariant features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Cross-lingual sentiment analysis (CLSA): Several CLSA methods (Wan, 2009; Demirtas and Pechenizkiy, 2013; Xiao and Guo, 2012; Zhou et al., 2016a) rely on Machine Translation (MT) to provide supervision across languages. MT, often trained from parallel corpora, may not be available for low-resource languages. Other CLSA methods (Barnes et al., 2018; Zhou et al., 2016b; Xu and Wan, 2017) use bilingual lexicons or cross-lingual word embeddings (CLWE) to project words with similar meanings from different languages into nearby spaces, enabling the training of cross-lingual sentiment classifiers. CLWE often depends on a bilingual lexicon (Barnes et al., 2018) or parallel or comparable corpora (Mogadala and Rettinger, 2016; Vuli\u0107 and Moens, 2016). Recently, CLWE methods that rely on no parallel resources have been proposed (Lample and Conneau, 2019), but they require very large monolingual corpora to train. The work most closely related to ours is (Feng and Wan, 2019), which does not rely on cross-lingual resources. Different from the language adversarial network used in (Feng and Wan, 2019), our work performs cross-lingual sentiment analysis using conditional language adversarial training, which allows the language-invariant features to be specialized for sentiment class predictions. Figure 1: CLAN architecture. We illustrate with a source language l_s = English (solid line) and a target language l_t = French (dotted line). x_{l_s}, x_{l_t} are sentences in l_s and l_t; f_{l_s}, f_{l_t} are the features extracted by the language model for x_{l_s} and x_{l_t}; and g_{l_s}, g_{l_t} are the sentiment predictions for x_{l_s} and x_{l_t}, respectively. The sentiment classification loss J^{l_s}_{senti} is trained only on x_{l_s}, for which sentiment labels are available, while the language discriminator is trained on both x_{l_s} and x_{l_t}.",
"cite_spans": [
{
"start": 62,
"end": 73,
"text": "(Wan, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 74,
"end": 105,
"text": "Demirtas and Pechenizkiy, 2013;",
"ref_id": "BIBREF5"
},
{
"start": 106,
"end": 125,
"text": "Xiao and Guo, 2012;",
"ref_id": "BIBREF21"
},
{
"start": 126,
"end": 145,
"text": "Zhou et al., 2016a;",
"ref_id": "BIBREF26"
},
{
"start": 146,
"end": 156,
"text": "Wan, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 157,
"end": 176,
"text": "Xiao and Guo, 2012)",
"ref_id": "BIBREF21"
},
{
"start": 362,
"end": 383,
"text": "(Barnes et al., 2018;",
"ref_id": "BIBREF0"
},
{
"start": 384,
"end": 403,
"text": "Zhou et al., 2016b;",
"ref_id": "BIBREF27"
},
{
"start": 404,
"end": 421,
"text": "Xu and Wan, 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1190,
"end": 1211,
"text": "(Barnes et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1246,
"end": 1276,
"text": "(Mogadala and Rettinger, 2016;",
"ref_id": "BIBREF14"
},
{
"start": 1277,
"end": 1299,
"text": "Vuli\u0107 and Moens, 2016)",
"ref_id": "BIBREF19"
},
{
"start": 1487,
"end": 1507,
"text": "(Feng and Wan, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 1614,
"end": 1634,
"text": "(Feng and Wan, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Adversarial training for domain adaptation: Our approach draws inspiration from Domain-Adversarial Training of Neural Networks (DANN) (Ganin et al., 2016) and Conditional Adversarial Domain Adaptation (CDAN) (Long et al., 2018). DANN (Ganin et al., 2016) trains a feature generator to minimize the classification loss, and a domain discriminator to distinguish the domain the input instances come from. It attempts to learn domain-invariant features that deceive the domain discriminator while learning to predict the correct sentiment labels. CDAN (Long et al., 2018) additionally conditions the discriminator on both the extracted features and the class predictions to improve discriminativity.",
"cite_spans": [
{
"start": 126,
"end": 146,
"text": "(Ganin et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 200,
"end": 219,
"text": "(Long et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 227,
"end": 246,
"text": "(Ganin et al., 2016",
"ref_id": "BIBREF7"
},
{
"start": 548,
"end": 567,
"text": "(Long et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Language Model: Given a sentence x, we compute the probability of seeing a word w_k given the previous words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "p(x) = \u220f_{k=1}^{|x|} P(w_k | w_1, ..., w_{k\u22121}):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "we first pass the input words through the embedding layer of language l, parameterized by \u03b8^l_{emb}. The embedding for word w_k is w_k. We then pass the word embeddings through two LSTM layers, parameterized by \u03b8_1 and \u03b8_2, that are shared across all languages and all domains, to generate hidden states (z_1, z_2, ..., z_{|x|}) that can be considered as features for CLSA: h_k = LSTM(h_{k\u22121}, w_k; \u03b8_1), and z_k = LSTM(z_{k\u22121}, h_k; \u03b8_2). We then use a linear decoding layer, parameterized by \u03b8^l_{dec}, with a softmax for next-word prediction. To summarize, the LM objective for l is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "J^l_{lm}(\u03b8^l_{emb}, \u03b8_1, \u03b8_2, \u03b8^l_{dec}) = E_{x \u223c L_l}[\u2212(1/|x|) \u2211_{k=1}^{|x|} log p(w_k | w_1, ..., w_{k\u22121})]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "where x \u223c L l indicates that x is sampled from text in language l.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "Sentiment Classifier: We use a linear classifier that takes the average of the final hidden states, (1/|x|) \u2211_{k=1}^{|x|} z_k, as input features, followed by a softmax to output sentiment labels. The objective is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "J^l_{senti}(\u03b8^l_{emb}, \u03b8_1, \u03b8_2, \u03b8^l_{senti}) = E_{(x,y) \u223c C^l_{senti}}[\u2212log p(y|x)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "where (x, y) \u223c C^l_{senti} indicates that the sentence x and its label y are sampled from the labeled examples in language l, and \u03b8^l_{senti} denotes the parameters of the linear sentiment classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "Conditional Language Adversarial Training To force the features to be language invariant, we adopted conditional adversarial training (Long et al., 2018) : a language discriminator is trained to predict language ID given the features by minimizing the cross-entropy loss, while the LM is trained to fool the discriminator by maximizing the loss:",
"cite_spans": [
{
"start": 134,
"end": 153,
"text": "(Long et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "J^l_{adv_lang}(\u03b8_{emb}, \u03b8_1, \u03b8_2, \u03b8_{dis_lang}) = E_{(x,l)}[\u2212log p(l | f(x) \u2297 g(x))]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "where f(x), g(x), and l \u2208 L are the features extracted by the LM for input sentence x, its sentiment prediction, and its language ID, respectively; \u03b8_{emb} = \u03b8^1_{emb} \u2295 \u03b8^2_{emb} \u2295 ... \u2295 \u03b8^{|L|}_{emb} denotes the parameters of all embedding layers, and \u03b8_{dis_lang} denotes the parameters of the language discriminator. We use multilinear conditioning (Long et al., 2018) by conditioning the discriminator on the cross-covariance f(x) \u2297 g(x).",
"cite_spans": [
{
"start": 330,
"end": 349,
"text": "(Long et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "A key innovation is the conditional language adversarial training: the multilinear conditioning enables manipulation of the multiplicative interactions across features and class predictions. Such interactions capture the cross-covariance between the language invariant features and classifier predictions to improve discriminability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "The Full Model Putting all components together, the final objective function is the following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "J(\u03b8_{emb}, \u03b8_{lstm}, \u03b8_{dec}, \u03b8_{senti}, \u03b8_{dis_lang}) = \u2211_{(l,d)} [J^l_{lm} + \u03b1 J^l_{senti} \u2212 \u03b2 J^l_{adv_lang}]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "where \u03b8_{lstm} = \u03b8_1 \u2295 \u03b8_2 denotes the parameters of the LSTM layers, \u03b8_{dec} = \u03b8^1_{dec} \u2295 \u03b8^2_{dec} \u2295 ... \u2295 \u03b8^{|L|}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Language Adversarial",
"sec_num": "3"
},
{
"text": "dec denotes the parameters of all decoding layers, and \u03b1 and \u03b2 are hyperparameters controlling the relative importance of the sentiment classification and language adversarial training objectives. The parameters \u03b8_{dis_lang} are trained to maximize the full objective function while the others are trained to minimize it:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "\u03b8_{dis_lang} = arg max_{\u03b8_{dis_lang}} J, (\u03b8_{emb}, \u03b8_{lstm}, \u03b8_{dec}, \u03b8_{senti}) = arg min_{\u03b8_{emb}, \u03b8_{lstm}, \u03b8_{dec}, \u03b8_{senti}} J. 4 Experiments. Datasets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "We evaluate CLAN on the Webis-CLS-10 dataset (Prettenhofer and Stein, 2010), which consists of Amazon product reviews from 4 languages and 3 domains. Following prior work, we use English as the source language and the other languages as target languages. For each language-domain pair there are 2,000 training documents, 2,000 test documents, and 9,000-50,000 unlabeled documents, depending on the language-domain pair (details are in Prettenhofer and Stein, 2010). Implementation details: The models are implemented in PyTorch (Paszke et al., 2019). All models are trained on four NVIDIA 1080 Ti GPUs. We tokenized text using NLTK (Loper and Bird, 2002). For each language, we kept the 15,000 most frequent words in the vocabulary, since a bigger vocabulary leads to under-fitting and much longer training time. We set the word embedding size to 600 for the language model, and use 300 neurons for the hidden layer in the sentiment classifier. We set \u03b1 = 0.02 and \u03b2 = 0.1 for all experiments. All weights of CLAN were trained end-to-end using the Adam optimizer with a learning rate of 0.03. We train the models for a maximum of 50,000 iterations with early stopping (training typically stops at 3,000-4,000 iterations) to avoid over-fitting.",
"cite_spans": [
{
"start": 46,
"end": 76,
"text": "(Prettenhofer and Stein, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 433,
"end": 462,
"text": "Prettenhofer and Stein, 2010)",
"ref_id": "BIBREF16"
},
{
"start": 527,
"end": 548,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF15"
},
{
"start": 631,
"end": 653,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "Experiment results: We follow the experiment setting described in (Feng and Wan, 2019). Table 1a and 1b show the accuracy of CLAN compared to prior methods for the in-domain CLSA and cross-domain CLSA tasks, respectively. We compare CLAN to the following methods: CL-SCL, BiDRL, UMM, CLDFA, CNN-BE (Ziser and Reichart, 2018), PBLM-BE (Ziser and Reichart, 2018), and A-SCL (Ziser and Reichart, 2018), methods that require cross-lingual supervision (Prettenhofer and Stein, 2010). [Table 1, CL-SCL row: 79.5 76.9 77.7 78.0 | 78.4 78.8 77.9 78.3 | 73.0 71.0 75.1 73.0]",
"cite_spans": [
{
"start": 66,
"end": 86,
"text": "(Feng and Wan, 2019)",
"ref_id": "BIBREF6"
},
{
"start": 300,
"end": 326,
"text": "(Ziser and Reichart, 2018)",
"ref_id": "BIBREF28"
},
{
"start": 371,
"end": 397,
"text": "(Ziser and Reichart, 2018)",
"ref_id": "BIBREF28"
},
{
"start": 449,
"end": 479,
"text": "(Prettenhofer and Stein, 2010)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "[Table 1 header: English-German | English-French | English-Japanese, each with columns B, D, M, AVG. First row label: CL-SCL]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "[Table 1 rows: BiDRL (Zhou et al., 2016a): 84.1 84.0 84.6 84.2 | 84.3 83.6 82.5 83.4 | 73.1 76.7 78.7 76.1; UMM (Xu and Wan, 2017): 81.6 81.2 81.3 81.3 | 80.2 80.2 79.4 79.9 | 71.2 72.5 75.3 73.0; CLDFA (Xu and Yang, 2017): 83.9 83.1 79.0 82.0 | 83.3 82.5 83.3 83.0 | 77.3 80.5 76.4 78.0; MAN-MoE (Chen et al., 2019): 82.4 78.8 77.1 79.4 | 81.1 84.2 80.9 82.0 | 62.7 69.1 72.6 68.1] MWE uses (Conneau et al., 2017) to generate cross-lingual word embeddings. CLIDSA/CLCDSA (Feng and Wan, 2019) uses language adversarial training. We refer readers to the corresponding papers for details of each model. As shown in Table 1a and 1b, CLAN outperforms all prior methods in 11 out of 12 settings for cross-domain CLSA, and outperforms all prior methods in 8 out of 9 settings for in-domain CLSA. On average, CLAN achieves state-of-the-art performance on all language pairs for both in-domain and cross-domain CLSA tasks.",
"cite_spans": [
{
"start": 6,
"end": 26,
"text": "(Zhou et al., 2016a)",
"ref_id": "BIBREF26"
},
{
"start": 91,
"end": 109,
"text": "(Xu and Wan, 2017)",
"ref_id": "BIBREF22"
},
{
"start": 176,
"end": 195,
"text": "(Xu and Yang, 2017)",
"ref_id": "BIBREF23"
},
{
"start": 264,
"end": 283,
"text": "(Chen et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 348,
"end": 370,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 374,
"end": 396,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF4"
},
{
"start": 454,
"end": 474,
"text": "(Feng and Wan, 2019)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 595,
"end": 603,
"text": "Table 1a",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "Analysis of results: To understand what features CLAN learned to enable CLSA, we probed CLAN by visualizing, via t-SNE (Maaten and Hinton, 2008), the distribution of features extracted by the language model from held-out examples. The plots are in Figure 2. [Figure 2 caption: t-SNE plots of the distributions of features extracted from CLAN's language model, trained via the in-domain CLSA task. Red and blue dots represent features extracted from the source and target language held-out sentences, respectively. EN, DE, FR, and JA refer to English, German, French, and Japanese, respectively.] The t-SNE plots show that the feature distributions for sentences in the source and target languages align well, indicating that CLAN is able to learn language-invariant features. To further look into what CLAN learns, we manually inspected 50 examples that CLAN classified correctly but the prior models did not: for example, in the books domain in German, CLAN correctly classified \"unterhaltsam und etwas lustig\" (\"entertaining and a little funny\") as positive, and also correctly classified the following text as positive: \"ein buch dass mich gefesselt hat...Dieses Buch ist absolut nichts f\u00fcr schwache Nerven oder Moralisten\" (\"a book that captivated me...this book is absolutely not for the faint of heart or moralists!\"). This indicates that CLAN is able to learn better lexical, syntactic and semantic features.",
"cite_spans": [
{
"start": 208,
"end": 233,
"text": "(Maaten and Hinton, 2008)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 2",
"ref_id": null
},
{
"start": 782,
"end": 790,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "|L|",
"sec_num": null
},
{
"text": "We present Conditional Language Adversarial Networks for cross-lingual sentiment analysis, and show that it achieves state-of-the-art performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "This work was supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA Contract No. 2019-19051600006 under the BETTER program. The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bilingual sentiment embeddings: Joint projection of sentiment across languages",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Barnes",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
},
{
"first": "Sabine",
"middle": [],
"last": "Schulte Im Walde",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2483--2493",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1231"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2018. Bilingual sentiment embeddings: Joint projection of sentiment across languages. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2483-2493, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improved sentence modeling using suffix bidirectional lstm",
"authors": [
{
"first": "Siddhartha",
"middle": [],
"last": "Brahma",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.07340"
]
},
"num": null,
"urls": [],
"raw_text": "Siddhartha Brahma. 2018. Improved sentence model- ing using suffix bidirectional lstm. arXiv preprint arXiv:1805.07340.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multisource cross-lingual model transfer: Learning what to share",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Hassan Awadallah",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3098--3112",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1299"
]
},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Ahmed Hassan Awadallah, Hany Has- san, Wei Wang, and Claire Cardie. 2019. Multi- source cross-lingual model transfer: Learning what to share. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3098-3112, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Crosslingual polarity detection with machine translation",
"authors": [
{
"first": "Erkin",
"middle": [],
"last": "Demirtas",
"suffix": ""
},
{
"first": "Mykola",
"middle": [],
"last": "Pechenizkiy",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Second International Workshop on Issues of Sentiment Discovery and Opinion Mining",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erkin Demirtas and Mykola Pechenizkiy. 2013. Cross- lingual polarity detection with machine translation. In Proceedings of the Second International Work- shop on Issues of Sentiment Discovery and Opinion Mining, pages 1-8.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards a unified end-to-end approach for fully unsupervised crosslingual sentiment analysis",
"authors": [
{
"first": "Yanlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "1035--1044",
"other_ids": {
"DOI": [
"10.18653/v1/K19-1097"
]
},
"num": null,
"urls": [],
"raw_text": "Yanlin Feng and Xiaojun Wan. 2019. Towards a unified end-to-end approach for fully unsupervised cross- lingual sentiment analysis. In Proceedings of the 23rd Conference on Computational Natural Lan- guage Learning (CoNLL), pages 1035-1044, Hong Kong, China. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Domainadversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "March",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Machine Learning Research",
"volume": "17",
"issue": "59",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pas- cal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario March, and Victor Lempitsky. 2016. Domain- adversarial training of neural networks. Journal of Machine Learning Research, 17(59):1-35.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.06146"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Univer- sal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07291"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Conditional adversarial domain adaptation",
"authors": [
{
"first": "Mingsheng",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Zhangjie",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems 31",
"volume": "",
"issue": "",
"pages": "1640--1650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mingsheng Long, ZHANGJIE CAO, Jianmin Wang, and Michael I Jordan. 2018. Conditional adversar- ial domain adaptation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1640-1650. Curran Associates, Inc.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nltk: The natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "63--70",
"other_ids": {
"DOI": [
"10.3115/1118108.1118117"
]
},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Com- putational Linguistics -Volume 1, ETMTNLP '02, page 63-70, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Learning word vectors for sentiment analysis",
"authors": [
{
"first": "Andrew",
"middle": [
"L"
],
"last": "Maas",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"E"
],
"last": "Daly",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"T"
],
"last": "Pham",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "142--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analy- sis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Hu- man Language Technologies, pages 142-150, Port- land, Oregon, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of machine learning research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579-2605.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bilingual word embeddings from parallel and nonparallel corpora for cross-language text classification",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Mogadala",
"suffix": ""
},
{
"first": "Achim",
"middle": [],
"last": "Rettinger",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "692--702",
"other_ids": {
"DOI": [
"10.18653/v1/N16-1083"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Mogadala and Achim Rettinger. 2016. Bilin- gual word embeddings from parallel and non- parallel corpora for cross-language text classifica- tion. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 692-702, San Diego, California. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Crosslanguage text classification using structural correspondence learning",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Benno",
"middle": [],
"last": "Stein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1118--1127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Prettenhofer and Benno Stein. 2010. Cross- language text classification using structural corre- spondence learning. In Proceedings of the 48th Annual Meeting of the Association for Computa- tional Linguistics, pages 1118-1127, Uppsala, Swe- den. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SemEval-2017 task 4: Sentiment analysis in twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "502--518",
"other_ids": {
"DOI": [
"10.18653/v1/S17-2088"
]
},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in twit- ter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "How to fine-tune bert for text classification?",
"authors": [
{
"first": "Chi",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Yige",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "China National Conference on Chinese Computational Linguistics",
"volume": "",
"issue": "",
"pages": "194--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bilingual distributed word representations from documentaligned comparable data",
"authors": [
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Marie-Francine",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "953--994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan Vuli\u0107 and Marie-Francine Moens. 2016. Bilingual distributed word representations from document- aligned comparable data. Journal of Artificial Intel- ligence Research, 55:953-994.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Co-training for cross-lingual sentiment classification",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "235--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan. 2009. Co-training for cross-lingual senti- ment classification. In Proceedings of the Joint Con- ference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 235-243, Suntec, Singapore. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multi-view Ad-aBoost for multilingual subjectivity analysis",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Yuhong",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2012,
"venue": "Mumbai, India. The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "2851--2866",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Xiao and Yuhong Guo. 2012. Multi-view Ad- aBoost for multilingual subjectivity analysis. In Pro- ceedings of COLING 2012, pages 2851-2866, Mum- bai, India. The COLING 2012 Organizing Commit- tee.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Towards a universal sentiment classifier in multiple languages",
"authors": [
{
"first": "Kui",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "511--520",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1053"
]
},
"num": null,
"urls": [],
"raw_text": "Kui Xu and Xiaojun Wan. 2017. Towards a universal sentiment classifier in multiple languages. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 511- 520, Copenhagen, Denmark. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Cross-lingual distillation for text classification",
"authors": [
{
"first": "Ruochen",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.02073"
]
},
"num": null,
"urls": [],
"raw_text": "Ruochen Xu and Yiming Yang. 2017. Cross-lingual distillation for text classification. arXiv preprint arXiv:1705.02073.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Russ",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural in- formation processing systems, pages 5753-5763.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Character-level convolutional networks for text classification",
"authors": [
{
"first": "Xiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "649--657",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in neural information pro- cessing systems, pages 649-657.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Attention-based LSTM network for cross-lingual sentiment classification",
"authors": [
{
"first": "Xinjie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "247--256",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1024"
]
},
"num": null,
"urls": [],
"raw_text": "Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016a. Attention-based LSTM network for cross-lingual sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 247-256, Austin, Texas. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Cross-lingual sentiment classification with bilingual document representation learning",
"authors": [
{
"first": "Xinjie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1403--1412",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1133"
]
},
"num": null,
"urls": [],
"raw_text": "Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016b. Cross-lingual sentiment classification with bilingual document representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1403-1412, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep pivot-based modeling for cross-language cross-domain transfer with minimal guidance",
"authors": [
{
"first": "Yftah",
"middle": [],
"last": "Ziser",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "238--249",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1022"
]
},
"num": null,
"urls": [],
"raw_text": "Yftah Ziser and Roi Reichart. 2018. Deep pivot-based modeling for cross-language cross-domain transfer with minimal guidance. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 238-249, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "shows the architecture of CLAN. It has three components: a multilingual language model (LM) that extracts features from the input sentences, a sentiment classifier built atop of the fea-tures extracted by the LM, and a conditional language adversarial trainer to force the features to be language invariant. All three components are jointly optimized in a single end-to-end neural architecture, allowing CLAN to learn cross-lingual features and to capture multiplicative interactions between the features and sentiment predictions. The resulting cross-lingual features are specialized for each sentiment class.CLAN aims at solving the cross-lingual multidomain sentiment analysis task. Formally, given a set of domains D and a set of languages L, CLAN consists of the following components:\u2022Sentiment classifier: train on (l s , d s ) (sentiment labels are available) and test on (l t , d t ) (no sentiment labels), in which l s , l t \u2208 L, l s = l t and d s , d t \u2208 D. CLAN works for both variants of the CLSA problem: in-domain CLSA where d s = d t , and cross-domain CLSA where d s = d t . \u2022 Language model: train on (l, d) in which l \u2208 L, d \u2208 D. \u2022 Language discriminator: train on (l, d) in which l \u2208 L and d \u2208 D. The language IDs are known. Language Model (LM): For a sentence"
},
"TABREF0": {
"html": null,
"type_str": "table",
"text": "Accuracy for cross-domain CLSA. Six domain pairs were generated for each language pair. S and T refers to the source and target domains, respectively.",
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"6\">.1 76.8 74.7 75.8 76.3 78.7 71.6 75.5</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"4\">CLIDSA (Feng and Wan, 2019)</td><td colspan=\"11\">86.6 84.6 85.0 85.4 87.2 87.9 87.1 87.4 79.3 81.9 84.0 81.7</td></tr><tr><td/><td colspan=\"2\">CLAN</td><td/><td colspan=\"11\">88.2 84.5 86.3 86.3 88.6 88.7 87.7 88.3 82.0 84.1 85.1 83.7</td></tr><tr><td/><td/><td/><td/><td/><td colspan=\"4\">(a) Accuracy for in-domain CLSA.</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td colspan=\"3\">English-German</td><td/><td/><td/><td colspan=\"3\">English-French</td><td/><td/></tr><tr><td>S T</td><td colspan=\"14\">D B M B B D M D B M D M AVG D B M B B D M D B M D M AVG</td></tr><tr><td>CNN-BE</td><td>62.8</td><td>63.8</td><td>65.3</td><td>68.7</td><td>71.6</td><td>72.0</td><td>67.3 69.5</td><td>59.7</td><td>63.7</td><td>65.7</td><td colspan=\"2\">65.9</td><td>67.0</td><td>65.2</td></tr><tr><td>DCI</td><td>67.1</td><td>60.6</td><td>66.9</td><td>66.7</td><td>68.9</td><td>68.2</td><td>66.4 71.2</td><td>65.4</td><td>69.1</td><td>67.5</td><td colspan=\"2\">66.7</td><td>71.4</td><td>68.6</td></tr><tr><td>CL-SCL</td><td>65.9</td><td>62.5</td><td>65.1</td><td>65.2</td><td>71.2</td><td>69.8</td><td>66.7 70.3</td><td>63.8</td><td>68.8</td><td>66.8</td><td colspan=\"2\">66.0</td><td>70.1</td><td>67.6</td></tr><tr><td>A-SCL</td><td>67.9</td><td>63.7</td><td>68.7</td><td>63.8</td><td>69.0</td><td>70.1</td><td>67.2 68.6</td><td>66.1</td><td>69.2</td><td>69.4</td><td colspan=\"2\">66.7</td><td>68.1</td><td>68.0</td></tr><tr><td>A-S-SR</td><td>68.3</td><td>62.5</td><td>69.4</td><td>69.9</td><td>70.2</td><td>69</td><td>67.4 69.3</td><td>68.9</td><td>70.9</td><td>70.7</td><td>67</td><td/><td>71.4</td><td>69.7</td></tr><tr><td colspan=\"2\">PBLM+BE 78.7</td><td>78.6</td><td>80.6</td><td>79.2</td><td>81.7</td><td>78.5</td><td>79.5 81.1</td><td>74.7</td><td>76.3</td><td>75.0</td><td 
colspan=\"2\">75.1</td><td>76.8</td><td>76.5</td></tr><tr><td>CLCDSA</td><td>85.4</td><td>81.7</td><td>79.3</td><td>81.0</td><td>83.4</td><td>81.7</td><td>82.0 86.2</td><td>81.8</td><td>84.3</td><td>82.8</td><td colspan=\"2\">83.7</td><td>85.0</td><td>83.9</td></tr><tr><td>CLAN</td><td>86.9</td><td>85.1</td><td>82.4</td><td>81.6</td><td>83</td><td>83.8</td><td>83.8 87.3</td><td>85.5</td><td>85.3</td><td>83.9</td><td colspan=\"2\">85.5</td><td>85.7</td><td>85.5</td></tr><tr><td>(b)</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"num": null
},
"TABREF1": {
"html": null,
"type_str": "table",
"text": "Accuracy of CLSA methods on Websis-CLS-10. Top scores are shown in bold. D, M, B refers to DVD, music, and books, respectively. AVG refers to the average of scores per each language pair.",
"content": "<table/>",
"num": null
}
}
}
}