{
"paper_id": "K19-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:06:21.938703Z"
},
"title": "Multilingual model using cross-task embedding projection",
"authors": [
{
"first": "Jin",
"middle": [],
"last": "Sakuma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "jsakuma@tkl.iis.u-tokyo.ac.jp"
},
{
"first": "Naoki",
"middle": [],
"last": "Yoshinaga",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Tokyo",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a method for applying a neural network trained on one (resource-rich) language for a given task to other (resource-poor) languages. We accomplish this by inducing a mapping from pre-trained cross-lingual word embeddings to the embedding layer of the neural network trained on the resource-rich language. To perform element-wise cross-task embedding projection, we introduce locally linear mapping, which assumes and preserves the local topology across the semantic spaces before and after the projection. Experimental results on topic classification and sentiment analysis tasks showed that the fully task-specific multilingual model obtained using our method outperformed existing multilingual models whose embedding layers are fixed to pre-trained cross-lingual word embeddings. 1",
"pdf_parse": {
"paper_id": "K19-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a method for applying a neural network trained on one (resource-rich) language for a given task to other (resource-poor) languages. We accomplish this by inducing a mapping from pre-trained cross-lingual word embeddings to the embedding layer of the neural network trained on the resource-rich language. To perform element-wise cross-task embedding projection, we introduce locally linear mapping, which assumes and preserves the local topology across the semantic spaces before and after the projection. Experimental results on topic classification and sentiment analysis tasks showed that the fully task-specific multilingual model obtained using our method outperformed existing multilingual models whose embedding layers are fixed to pre-trained cross-lingual word embeddings. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Deep neural networks have improved the accuracy of various natural language processing (NLP) tasks by performing representation learning with massive annotated datasets. However, the annotations in NLP depend on the target language as well as the task, and it is unrealistic to prepare such extensive annotated datasets for every pair of language and task. As a result, we can only obtain an accurate model for a few resource-rich languages such as English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To overcome this problem, researchers have attempted to make models trained with massive annotated datasets in a resource-rich language (hereafter, source language) applicable to a resource-poor language (target language) that has no annotated datasets (Ruder et al., 2019 ) ( \u00a7 2). These methods utilize language-universal word representations, namely cross-lingual word embeddings, to 1 All the code is available at: https://github.com/ jyori112/task-spec Figure 1 : Locally linear mapping for the sentiment analysis task. The relationship between \"merveilleux (wonderful)\" and its neighboring English words, \"wonderful\" and \"good,\" is preserved after projection. absorb the differences among languages in the vocabularies of neural network models; specifically, these multilingual models are trained with embedding layers fixed to pre-trained cross-lingual word embeddings. However, because those embedding layers are not optimized for the target task, the resulting model cannot exploit the true potential of representation learning, as demonstrated by Kim (2014) and our experimental results ( \u00a7 5.1).",
"cite_spans": [
{
"start": 253,
"end": 272,
"text": "(Ruder et al., 2019",
"ref_id": "BIBREF17"
},
{
"start": 387,
"end": 388,
"text": "1",
"ref_id": null
},
{
"start": 1054,
"end": 1064,
"text": "Kim (2014)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 458,
"end": 466,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose methods of projecting pre-trained cross-lingual word embeddings to the word embeddings of a fully task-specific neural network, all of whose parameters are optimized on the training data in a source language, to realize a fully task-specific multilingual model ( \u00a7 3). In addition to a naive linear projection, we present an element-wise projection method inspired by locally linear embeddings used for dimension reduction (Roweis and Saul, 2000) . This method is built on the assumption that local topology is preserved between the semantic spaces of word embeddings in two NLP tasks; that is, adequately close words in pre-trained cross-lingual word embeddings will have similar representations even in the task-specific semantic space (Figure 1 ). We first represent the general cross-lingual word embedding of a word in the target language by a weighted linear combination of the general cross-lingual word embeddings of its k neighboring words in the source language. We then use the weights to compute a task-specific word embedding of the target word as a linear combination of the task-specific word embeddings of the k neighboring source words ( \u00a7 3.2).",
"cite_spans": [
{
"start": 425,
"end": 448,
"text": "(Roweis and Saul, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 735,
"end": 744,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
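The neighborhood step described above can be illustrated with a toy example. This is a hand-made sketch (the vectors and the tiny vocabulary are invented for illustration, not taken from any real embeddings): find a target word's k nearest source words by cosine similarity in the shared cross-lingual space, mirroring the "merveilleux" example of Figure 1.

```python
import numpy as np

# Toy shared cross-lingual space: source words plus one French target word.
# The vectors are hand-crafted so "merveilleux" lands near "wonderful"/"good".
src_words = ["wonderful", "good", "market", "economy"]
X_gen = np.array([[0.9, 0.1, 0.0],
                  [0.8, 0.3, 0.0],
                  [0.0, 0.1, 0.9],
                  [0.1, 0.0, 0.8]])
y_merveilleux = np.array([0.85, 0.2, 0.05])

def k_nearest(y, X, words, k):
    # cosine similarity between the target vector and every source vector
    sims = (X @ y) / (np.linalg.norm(X, axis=1) * np.linalg.norm(y))
    return [words[i] for i in np.argsort(-sims)[:k]]

neighbours = k_nearest(y_merveilleux, X_gen, src_words, k=2)
```

With these toy vectors the two nearest source words are "wonderful" and "good", as in Figure 1.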
{
"text": "We evaluate our method on topic classification and sentiment analysis tasks ( \u00a7 4). We first obtain a task-specific neural network using annotated corpora in the source language (English) and then induce task-specific cross-lingual word embeddings for the target languages to apply the accurate taskspecific neural network to those languages. Experimental results demonstrate that our method has improved the classification accuracy of the multilingual model (Duong et al., 2017 ) in most of the task-language pairs ( \u00a7 5).",
"cite_spans": [
{
"start": 459,
"end": 478,
"text": "(Duong et al., 2017",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contributions are as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We established a method of obtaining fully task-specific multilingual models by learning a cross-task embedding projection ( \u00a7 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Our cross-task projection is simple and has an analytical solution with one hyperparameter; the solution is a global optimum ( \u00a7 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We confirmed the limitation of the traditional multilingual model with embedding layers fixed to pre-trained cross-lingual word embeddings ( \u00a7 5.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We showed the effectiveness of our method over the existing models ( \u00a7 5.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The lack of resources in resource-poor languages has been a deeply rooted problem in NLP, and much research has been devoted to mitigating this problem by transferring models across languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "An intuitive approach to realize the cross-lingual transfer of a model is to utilize machine translation by translating either the training set or the model input (Wan, 2009) . Instead of translating, Meng et al. (2012) leverage a parallel corpus of the source and target languages to obtain a cross-lingual mixture model that bridges the language gap. Xu and Wan (2017) also utilize a parallel corpus with word alignment to train a multilingual model for the sentiment analysis task. While some of these methods do not rely on an annotated corpus in the target language, they rely heavily on cross-lingual resources such as parallel corpora and are thus not applicable to resource-poor languages.",
"cite_spans": [
{
"start": 163,
"end": 174,
"text": "(Wan, 2009)",
"ref_id": "BIBREF19"
},
{
"start": 201,
"end": 219,
"text": "Meng et al. (2012)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual models using parallel corpora",
"sec_num": null
},
{
"text": "Multilingual models with cross-lingual word embeddings Another method to obtain multilingual models is to fix the embedding layer of a neural network to pre-trained cross-lingual word embeddings. Many existing studies have implemented this for various tasks in the unsupervised scenario (Duong et al., 2017; Can et al., 2018) , where no annotated corpus is available in the target language as in our setting, and the supervised scenario (Pappas and Popescu-Belis, 2017; Upadhyay et al., 2018) , where a small annotated corpus is available in the target language. Another study enhanced this method by employing language-adversarial networks (Chen et al., 2018) . These methods do not induce task-specific word embeddings, thereby failing to exploit the true potential of neural networks, as we confirm in \u00a7 5.",
"cite_spans": [
{
"start": 290,
"end": 310,
"text": "(Duong et al., 2017;",
"ref_id": "BIBREF4"
},
{
"start": 311,
"end": 328,
"text": "Can et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 423,
"end": 455,
"text": "(Pappas and Popescu-Belis, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 456,
"end": 478,
"text": "Upadhyay et al., 2018)",
"ref_id": "BIBREF18"
},
{
"start": 625,
"end": 644,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual models using parallel corpora",
"sec_num": null
},
{
"text": "Multilingual models with character embeddings Several studies utilize character-level embeddings shared across languages to obtain multilingual models (Kim et al., 2017; Yang et al., 2017) . An obvious weak point of these methods is that they do not apply to distant language pairs with different alphabets. In contrast, our method relies only on cross-lingual word embeddings, which are obtainable regardless of the alphabets of the languages (Artetxe et al., 2018) .",
"cite_spans": [
{
"start": 151,
"end": 169,
"text": "(Kim et al., 2017;",
"ref_id": "BIBREF8"
},
{
"start": 170,
"end": 188,
"text": "Yang et al., 2017)",
"ref_id": null
},
{
"start": 443,
"end": 465,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual models using parallel corpora",
"sec_num": null
},
{
"text": "Task-specific word embeddings Few efforts have previously been made to obtain cross-lingual task-specific word embeddings. Gouws and S\u00f8gaard (2015) obtain task-specific cross-lingual word embeddings by constructing a task-specific bilingual dictionary, which defines \"equivalent classes\" designed for the given task instead of equivalent semantics. Although they successfully obtained task-specific cross-lingual word embeddings for POS tagging and supersense tagging tasks, the open problems are how to define a task-specific bilingual dictionary for many other tasks and the cost of developing such resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual models using parallel corpora",
"sec_num": null
},
{
"text": "Feng and Wan (2019) exploit multi-task learning to induce cross-lingual task-specific word embeddings for the sentiment analysis task. This method is tailored to the sentiment analysis task and is thus not applicable to other tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual models using parallel corpora",
"sec_num": null
},
{
"text": "Our method first learns a neural network model by optimizing it on the annotated corpus in the source language. It then induces a projection from the semantic space of general cross-lingual word embeddings to the semantic space of the optimized embedding layer, to make the model applicable to languages other than the source language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fully task-specific multilingual model",
"sec_num": "3"
},
{
"text": "The entire framework of obtaining a fully task-specific multilingual model is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "Step 1 (train task-specific neural network) First, we train a neural network f (\u2022; X spec , \u03b8) on an annotated corpus in the source language. The embedding layer, X spec , of the resulting neural network consists of task-specific word embeddings of the source language, and \u03b8 is the collection of the other parameters. At this point, this neural network is only applicable to the source language since we do not have task-specific word embeddings Y spec of the target language in the same semantic space as X spec .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "Step 2 (induce cross-lingual word embeddings) Next, we obtain general cross-lingual word embeddings {X gen , Y gen } in the same semantic space from raw monolingual corpora where X gen and Y gen are cross-lingual word embeddings of the source and target languages, respectively. Without loss of generality, we assume that X gen and X spec are aligned so that X gen i and X spec i represent the same word. We utilize unsupervised cross-lingual word embeddings such as (Artetxe et al., 2018) that do not require any cross-lingual resources to maximize the applicability of our approach.",
"cite_spans": [
{
"start": 467,
"end": 489,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "Step 3 (learn cross-task embedding projection) Then, we induce a cross-task projection \u03c6 from the general cross-lingual word embeddings {X gen , Y gen } obtained in Step 2 to the task-specific word embeddings of the source language X spec obtained in Step 1, which computes the task-specific word embeddings of the target language Y spec . We explain the details of this core part in \u00a7 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
{
"text": "Step 4 (obtain task-specific multilingual model) Finally, we replace embedding layer X spec of the neural network f (\u2022; X spec , \u03b8) trained in Step 1 with Y spec induced in Step 3 to obtain a task-specific neural network f (\u2022; Y spec , \u03b8) applicable to the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "3.1"
},
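The four steps can be sketched end-to-end. Everything below is a stand-in: `train_task_model` fakes Step 1, Step 2's cross-lingual embeddings are random matrices, and Step 3 uses a simple least-squares projection in place of the paper's locally linear mapping. The point is only to show how the embedding layer X spec is swapped for Y spec in Step 4.

```python
import numpy as np

def train_task_model(X_spec_init, labels):
    """Step 1 (stand-in): pretend training returns a tuned embedding layer
    X_spec and classifier parameters theta."""
    rng = np.random.default_rng(0)
    X_spec = X_spec_init + 0.1 * rng.standard_normal(X_spec_init.shape)
    theta = rng.standard_normal((X_spec.shape[1], len(set(labels))))
    return X_spec, theta

def project(Y_gen, X_gen, X_spec):
    """Step 3 (stand-in): a least-squares linear projection; the paper's
    preferred choice is the locally linear mapping."""
    W, *_ = np.linalg.lstsq(X_gen, X_spec, rcond=None)
    return Y_gen @ W

def classify(tokens, emb, theta):
    """Step 4: bag-of-embeddings model applied with a swapped embedding layer."""
    h = emb[tokens].mean(axis=0)
    return int(np.argmax(h @ theta))

# Step 2 is assumed done: X_gen / Y_gen already live in one shared space.
rng = np.random.default_rng(1)
X_gen = rng.standard_normal((50, 8))   # source-language general embeddings
Y_gen = rng.standard_normal((40, 8))   # target-language general embeddings
X_spec, theta = train_task_model(X_gen, labels=[0, 1] * 25)
Y_spec = project(Y_gen, X_gen, X_spec)          # Step 3
pred = classify([0, 3, 7], Y_spec, theta)       # Step 4, on a target sentence
```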
{
"text": "Here, we explain the detailed construction of our cross-task projection \u03c6 for cross-lingual word embeddings used in Step 3 of \u00a7 3.1. Given general cross-lingual word embeddings, X gen and Y gen , of the source and target languages and task-specific word embeddings X spec of the source language, we compute task-specific word embeddings Y spec of the target language in the same semantic space as X spec . In what follows, we propose two simple methods to obtain such a projection: a linear projection and a locally linear mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cross-task embedding projection",
"sec_num": "3.2"
},
{
"text": "One naive approach is to regard general and task-specific word embeddings as embeddings of two distinct languages and to exploit a mapping method developed for cross-lingual word embeddings (Mikolov et al., 2013 ). 2 Concretely, we train a transformation matrix W that maps general word embeddings to task-specific word embeddings by minimizing",
"cite_spans": [
{
"start": 190,
"end": 211,
"text": "(Mikolov et al., 2013",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linear projection",
"sec_num": null
},
{
"text": "\\hat{W} = \\arg \\min_W \\sum_{i=1}^{|V_X|} \\| W X_i^{gen} - X_i^{spec} \\|^2 \\quad (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear projection",
"sec_num": null
},
{
"text": "where |V X | is the vocabulary size of the source language. Then, we compute the task-specific word embeddings of the target language, \u0176 spec :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear projection",
"sec_num": null
},
{
"text": "\\hat{Y}_i^{spec} = \\hat{W} Y_i^{gen} .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear projection",
"sec_num": null
},
{
"text": "(2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Linear projection",
"sec_num": null
},
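The linear projection of Eqs. (1)-(2) amounts to an ordinary least-squares fit. Here is a minimal numpy sketch under the assumption that embeddings are stored as rows of matrices; the data is synthetic (generated from a known matrix), so the recovered W is exact here, which will not hold for real embeddings.

```python
import numpy as np

# Eq. (1): fit W minimizing sum_i ||W x_i^gen - x_i^spec||^2.
# Eq. (2): map the target-language general embeddings with the fitted W.
rng = np.random.default_rng(0)

d = 6
X_gen = rng.standard_normal((200, d))   # source general embeddings (rows)
W_true = rng.standard_normal((d, d))    # synthetic ground-truth mapping
X_spec = X_gen @ W_true.T               # toy task-specific source embeddings

# Rows are embeddings, so Eq. (1) becomes the least-squares problem
# X_gen @ W.T ~= X_spec, solved column-wise by lstsq.
W_T, *_ = np.linalg.lstsq(X_gen, X_spec, rcond=None)
W = W_T.T

Y_gen = rng.standard_normal((30, d))    # target general embeddings
Y_spec = Y_gen @ W.T                    # Eq. (2)
```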
{
"text": "A possible limitation of the above linear projection method is its lack of representation power. Our experimental results indicate that it fails to obtain a precise cross-task embedding projection due to the difference in topology between the general and task-specific semantic spaces ( \u00a7 5). Therefore, we introduce an element-wise mapping method inspired by locally linear embeddings (Roweis and Saul, 2000), a dimension reduction technique. Our method assumes that the local topology among nearest neighbors is consistent between two NLP tasks (here, language modeling and the target task); in other words, synonyms will have a similar role across NLP tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "We build the cross-task projection as follows. First, for each word i in the target language, we take its k nearest neighboring words in the source language, N gen i , in the semantic space of the general cross-lingual word embeddings, where k is a hyperparameter and cosine similarity is the metric. We next obtain the reconstruction weights,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "\u03b1 ij \u2208 R, that restore Y gen i as a linear combination of the X gen j \u2208 N gen i by optimizing \\hat{\\alpha}_i = \\arg \\min_{\\alpha_i} \\| Y_i^{gen} - \\sum_{j \\in N_i^{gen}} \\alpha_{ij} X_j^{gen} \\|^2 \\quad (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "subject to the constraint \\sum_{j} \\alpha_{ij} = 1. The solution to this optimization problem can be given analytically using the method of Lagrange multipliers as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\hat{\\alpha}_{ij} = \\frac{\\sum_{l} (C_i^{-1})_{jl}}{\\sum_{j,l} (C_i^{-1})_{jl}}",
"eq_num": "(4)"
}
],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "C_{i,jl} = ( Y_i^{gen} - X_j^{gen} ) \\cdot ( Y_i^{gen} - X_l^{gen} )",
"eq_num": "(5)"
}
],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "(see Appendix A for the detailed derivation). We can thereby find the global optimum through this analytical solution with simple computation. We then compute Y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "spec i using \\hat{\\alpha}_i by Y_i^{spec} = \\sum_{j \\in N_i^{gen}} \\hat{\\alpha}_{ij} X_j^{spec} ,",
"eq_num": "(6)"
}
],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "assuming that the local topology among N gen i is preserved before and after the projection. The resulting Y spec is in the same semantic space as X spec . By setting a large k = |N gen i | in the projection, we can handle words in the target language that have no direct translation in the source language (e.g., amiga, female friend in Spanish).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
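Eqs. (3)-(6) can be sketched in a few lines of numpy, again assuming embeddings are stored as rows; the data is synthetic and the small ridge term added to C is a numerical-stability heuristic of this sketch, not part of the paper's derivation.

```python
import numpy as np

def llm_project(Y_gen, X_gen, X_spec, k=4, eps=1e-8):
    """Locally linear mapping sketch: for each target word, reconstruct it from
    its k nearest source neighbours in the general space (Eqs. 3-5), then reuse
    the weights in the task-specific space (Eq. 6)."""
    Xn = X_gen / np.linalg.norm(X_gen, axis=1, keepdims=True)
    Yn = Y_gen / np.linalg.norm(Y_gen, axis=1, keepdims=True)
    Y_spec = np.empty((Y_gen.shape[0], X_spec.shape[1]))
    for i, y in enumerate(Y_gen):
        nbrs = np.argsort(-(Yn[i] @ Xn.T))[:k]     # k nearest by cosine
        diff = y - X_gen[nbrs]                     # rows: y - x_j, shape (k, d)
        C = diff @ diff.T                          # Eq. (5): Gram of differences
        C += eps * np.trace(C) * np.eye(k)         # stability ridge (sketch only)
        Cinv = np.linalg.inv(C)
        alpha = Cinv.sum(axis=1) / Cinv.sum()      # Eq. (4); sums to one
        Y_spec[i] = alpha @ X_spec[nbrs]           # Eq. (6)
    return Y_spec

rng = np.random.default_rng(0)
X_gen = rng.standard_normal((100, 10))    # source general embeddings
X_spec = rng.standard_normal((100, 10))   # toy task-specific source embeddings
Y_gen = rng.standard_normal((20, 10))     # target general embeddings
Y_spec = llm_project(Y_gen, X_gen, X_spec, k=4)
```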
{
"text": "Hyperparameter search In general, we choose a hyperparameter that performs best on development data in the target task and language. However, since we assume that no annotated data is available in the target language, we cannot exploit development data in the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
{
"text": "To address this issue, we apply our cross-task projection to the source language with various values of the hyperparameter k; namely, we represent X gen i considering its k nearest neighbors X gen j (j \u2260 i). We then choose the k that yields the best model performance with the resulting embeddings on the development data of the target task in the source language. In \u00a7 5.2, we report results with this language-universal yet task-specific tuning method. We also report results of a language- and task-specific tuning method that assumes minimal development data in the target language, in addition to a naive method of fixing k = 1, which is equivalent to word-by-word translation. Furthermore, we investigate the effect of the value of k in detail in \u00a7 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Locally linear mapping",
"sec_num": null
},
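The tuning procedure reduces to simple model selection over candidate values of k. A minimal sketch follows; `dev_accuracy` is a hypothetical stand-in (with mocked scores) for training and evaluating the classifier with source-language embeddings self-projected using each candidate k, and the candidate grid is invented for illustration.

```python
# Hyperparameter search sketch: project the *source* language onto itself with
# each candidate k (each word rebuilt from its k nearest neighbours, excluding
# itself) and keep the k that scores best on source-language dev data.

def dev_accuracy(k):
    # Stand-in: in practice, evaluate the task model on the source-language
    # development set using the self-projected embeddings for this k.
    mock_scores = {1: 0.71, 5: 0.78, 10: 0.80, 50: 0.76}
    return mock_scores[k]

candidates = [1, 5, 10, 50]          # illustrative grid, not the paper's
best_k = max(candidates, key=dev_accuracy)
```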
{
"text": "We conduct a series of experiments to evaluate our fully task-specific multilingual models ( \u00a7 3) obtained by our cross-task projections of cross-lingual word embeddings ( \u00a7 3.2). Our method is language- and task-independent and is applicable to the various tasks where existing multilingual models are applicable. We adopted a topic classification task and a sentiment analysis task as the target tasks for evaluation in various languages. Topic classification is the task of predicting the topic of a given document. For this task, we use English (en) as the source language, and Spanish (es), German (de), Danish (da), French (fr), Italian (it), Dutch (nl), Portuguese (pt), and Swedish (sv) as the target languages. We use the RCV1/RCV2 dataset (Lewis et al., 2004) for this task, following Duong et al. (2017) . This dataset contains news articles in various languages with labels of four categories: Corporate/Industrial, Economics, Government/Social, and Markets. For the English dataset, we randomly chose 10,000 examples as test data, another 10,000 examples as development data, and the rest as training data. For the other languages, we randomly selected 1000 examples as test data, another 1000 examples (for Danish, 100 examples) as development data, and the rest as training data. Among the development data, we randomly chose 100 samples as the development data for an alternative, language-specific tuning of k ( \u00a7 3.2). The summary of the resulting dataset is shown in Table 1 . Sentiment analysis is the task of predicting a polarity label of the writer's attitude toward a given text. We design this task as a three-class classification into positive, negative, and neutral labels. We use datasets from two domains, restaurant reviews and product reviews, to conduct this experiment. In both domains, we consider the most resource-rich language, English (en), as the source language and the other languages (Spanish (es), Dutch (nl), and Turkish (tr) for the restaurant review domain, and German (de), French (fr), and Japanese (ja) for the product review domain) as the target languages.",
"cite_spans": [
{
"start": 743,
"end": 763,
"text": "(Lewis et al., 2004)",
"ref_id": "BIBREF11"
},
{
"start": 789,
"end": 808,
"text": "Duong et al. (2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 1480,
"end": 1487,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "To train models in the restaurant review domain, we use the Yelp Review dataset 3 which consists of a set of restaurant reviews with numerical ratings in the range of 1-5 given by the reviewers. We label the reviews with ratings of 1 or 2 as negative, those with ratings of 4 or 5 as positive, and those with a rating of 3 as neutral. Then, we randomly chose 100,000 examples as test data, another 100,000 examples as development data, and the rest as training data. For evaluation in the target languages, we use a subset of the ABSA dataset (Pontiki et al., 2016) , which consists of restaurant reviews in English, Spanish, Dutch, and Turkish, with each sentence annotated with a polarity label of positive, negative, or neutral. For each language, we randomly chose 100 sentences as development data for the alternative, language-specific tuning of k ( \u00a7 3.2) and the rest as test data.",
"cite_spans": [
{
"start": 533,
"end": 555,
"text": "(Pontiki et al., 2016)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "For experiments in the product review domain, we use the Amazon Multilingual Review dataset 4 which consists of a set of product reviews in English, German, French, and Japanese with numerical ratings given in the same manner as the Yelp Review dataset; we also label the reviews in the same manner. For the English dataset, we randomly sample 100,000 examples as development data, another 100,000 examples as test data, and the remaining 6,731,166 examples as training data. For the other languages, we randomly chose 10,000 examples as development data, another 10,000 examples as test data, and the rest as training data. Among the development data, we randomly chose 100 examples as development data for the alternative, language-specific tuning of k. The summary of the resulting datasets is as follows (training / development / test): Amazon: English (en) 6,731,166 / 100,000 / 100,000; German (de) 659,121 / 10,000 / 10,000; French (fr) 234,080 / 10,000 / 10,000; Japanese (ja) 242,431 / 10,000 / 10,000. General cross-lingual word embeddings were obtained using a state-of-the-art unsupervised method with a self-learning framework (Artetxe et al., 2018) . 5 This method takes monolingual word embeddings of two languages and learns a mapping between them to obtain cross-lingual word embeddings. For the monolingual word embeddings, we used pre-trained word embeddings available online (Grave et al., 2018) . 6 They are 300-dimensional word embeddings obtained by applying the subword-information skip-gram model (Bojanowski et al., 2017) to the Wikipedia corpus.",
"cite_spans": [
{
"start": 1090,
"end": 1112,
"text": "(Artetxe et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 1115,
"end": 1116,
"text": "5",
"ref_id": null
},
{
"start": 1341,
"end": 1361,
"text": "(Grave et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 1364,
"end": 1365,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Preprocessing We use the tokenizer of Europarl tools 7 to tokenize all datasets except for Japanese. For Japanese, we use MeCab v0.996 8 with IPA dictionary v2.7.0. After tokenization, the tokens are lowercased to match vocabularies of the pretrained word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Models To evaluate the impact of our task-specific word embeddings on multilingual models and to compare the two cross-task embedding projection methods we proposed in \u00a7 3, we compare the following five models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "CLWE fixed trains a bag-of-embeddings model in the target language with its embedding layer fixed to the pre-trained cross-lingual word embeddings. The model feeds the dimension-wise average of the embeddings of all input tokens into a feedforward neural network with one hidden layer. This model is analogous to (Duong et al., 2017) except that they use the summation weighted by tf \u2022 idf.",
"cite_spans": [
{
"start": 308,
"end": 328,
"text": "(Duong et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
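A minimal numpy sketch of such a bag-of-embeddings classifier follows. The random weights stand in for trained parameters, and the sizes merely match the 300-dimensional setup described later; the actual models are trained with Adam and may differ in details.

```python
import numpy as np

# CLWE-fixed-style baseline: average the (frozen) cross-lingual embeddings of
# the input tokens, then apply a one-hidden-layer feedforward classifier.
rng = np.random.default_rng(0)

vocab, d, hidden, n_classes = 1000, 300, 300, 4
E = rng.standard_normal((vocab, d))            # frozen cross-lingual embeddings
W1 = rng.standard_normal((d, hidden)) * 0.01   # untrained stand-in weights
W2 = rng.standard_normal((hidden, n_classes)) * 0.01

def predict_proba(token_ids):
    h = E[token_ids].mean(axis=0)              # dimension-wise average
    h = np.maximum(h @ W1, 0.0)                # hidden layer (ReLU)
    z = h @ W2
    z -= z.max()                               # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()

probs = predict_proba([3, 17, 256, 999])       # class distribution for one doc
```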
{
"text": "CLWE fixed + NNmap adds two embedding-wise hidden layers to the original feedforward neural network in CLWE fixed. This is aimed at giving the network the capability of acquiring task-specific word embeddings by enhancing its representational power.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "CLWE opt (LP) is CLWE fixed with embedding layer updated; we made this model crosslingual by the linear projection ( \u00a7 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "CLWE opt (LLM) is CLWE fixed with the embedding layer updated; we made this model cross-lingual by the locally linear mapping ( \u00a7 3.2). We report results with the three strategies to tune the hyperparameter k for cross-task projection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "Monolingual has the same network as CLWE fixed but with the embedding layer updated; we trained the model with datasets in the same language as the test data. We present this result to show the upper bound of model accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "The dimensions of all the layers of the above five models are 300, and they are all optimized with the Adam optimizer (Kingma and Ba, 2014) . We train each model multiple times with different initializations of the model parameters and report the average accuracy; hyperparameter tuning is conducted independently for each model.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "(Kingma and Ba, 2014)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "4"
},
{
"text": "We evaluate the models in cross-lingual settings to confirm how well our method produces task-specific cross-lingual word embeddings (Table 3 and Table 4 ). Prior to reporting the results, we confirm the impact of task-specific word embeddings in neural networks through experiments in a monolingual setting in English (Table 5) .",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "(Table 3",
"ref_id": "TABREF6"
},
{
"start": 145,
"end": 152,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 318,
"end": 327,
"text": "(Table 5)",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "(Table 6 : Nearest neighbors of some words in the semantic space of general and task-specific word embeddings.) We examine the impact of optimizing the embedding layer of a neural network to the given task on model accuracy through experiments in",
"cite_spans": [],
"ref_spans": [
{
"start": 135,
"end": 142,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impact of task-specific word embeddings",
"sec_num": "5.1"
},
{
"text": "English by comparing Monolingual to Monolingual fixed, which is the same network as Monolingual with the embedding layer fixed to general word embeddings. We show the results of the topic classification and sentiment analysis tasks in Table 5 . In both tasks, Monolingual outperformed Monolingual fixed by a wide margin, which indicates that task-specific word embeddings are indeed crucial to obtain better model performance. This result motivates us to learn task-specific cross-lingual word embeddings to exploit the fully task-specific neural network. Table 3 and Table 4 report the classification accuracy of the models on topic classification and sentiment analysis, respectively. All models are trained in English and evaluated in the target languages. CLWE opt with the hyperparameter k tuned on the source language successfully outperformed the two baselines, CLWE fixed and CLWE fixed + NNmap, in all task-language pairs except for English-German in the topic classification task and English-Japanese in the sentiment analysis task. This result indicates the importance of task-specific word representations in the multilingual model and that our projection successfully induced task-specific cross-lingual word embeddings. Although we gained some improvements by tuning k to the target language using the minimal development set in some configurations, the gains are smaller than the gains over the two baselines. This implies that k is more sensitive to the target task than to the target language, which we discuss further in \u00a7 5.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 231,
"end": 238,
"text": "Table 5",
"ref_id": "TABREF9"
},
{
"start": 553,
"end": 572,
"text": "Table 3 and Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Impact of task-specific word embeddings",
"sec_num": "5.1"
},
{
"text": "In some languages, CLWE fixed + NNmap yields even lower classification accuracy than CLWE fixed. We hypothesize that, with more layers, the model becomes more sensitive to small differences in word representations, so the noise in pre-trained cross-lingual word embeddings degrades the model accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance of multilingual models",
"sec_num": "5.2"
},
{
"text": "Comparing CLWE opt (LLM) to CLWE opt (LP), we found that our locally linear mapping outperforms the linear projection method for cross-task embedding projection. In some configurations, the performance of CLWE opt (LP) degrades significantly. These results indicate that the topologies of the general and task-specific embedding spaces differ so much that simple projection methods such as linear projection are inappropriate. We further discuss the difference in the topologies of the general and task-specific embedding spaces in \u00a7 5.3 by examining the nearest neighbors of some target words in the semantic spaces of general and task-specific cross-lingual word embeddings (Table 6) .",
"cite_spans": [],
"ref_spans": [
{
"start": 697,
"end": 706,
"text": "(Table 6)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of multilingual models",
"sec_num": "5.2"
},
{
"text": "In all configurations where a sufficient dataset is available in the target language, Monolingual outperformed the cross-lingual models by a wide margin. This indicates that there is still room for improvement in cross-lingual models. Figure 2 : Distribution of the reconstruction weights \u03b1\u0302 for the nearest words of the target words and the other nearest neighbors.",
"cite_spans": [],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance of multilingual models",
"sec_num": "5.2"
},
{
"text": "We conduct further investigation to gain a more profound understanding of our method and the resulting task-specific cross-lingual word embeddings. We first analyze the task-specific cross-lingual word embeddings through the nearest neighbors of some words. We next investigate the distribution of the reconstruction weights to see the impact of the k nearest neighbors other than the nearest one. We then evaluate the sensitivity of the model accuracy to the value of k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "Properties of task-specific embeddings Here, we examine the properties of task-specific word embeddings obtained using our cross-task projection. For this purpose, we present the nearest neighbors of frequent words in the tasks in various embeddings in English and French. Table 6a shows the nearest neighbors of \"excellent,\" \"terrible,\" and \"economic\" in the general word embeddings and in the embedding layers of the models optimized for the training data in English. In the general embeddings, words are close to semantically or syntactically similar words, while the task-specific word embeddings show different properties specific to the target tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 269,
"end": 277,
"text": "Table 6a",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "In the embedding layer optimized for topic classification, we found \"economic\" to be close to \"imf (International Monetary Fund)\" or \"wto (World Trade Organization).\" Even though they are semantically distinct, they all strongly indicate the Economy label. In contrast, the nearest neighbors of \"excellent\" and \"terrible\" are noisy since they do not contribute to the topic classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "The embedding layers optimized for sentiment analysis exhibit different properties. While the nearest neighbors of \"excellent\" and \"terrible\" are not semantically close to them, they all indicate positive and negative polarities, respectively, in the corresponding domains. However, the nearest neighbors of \"economic\" are noisy, as they do not contribute to the task. Table 6b shows the nearest neighbors of \"excellent (excellent),\" \"terrible (terrible),\" and \"\u00e9conomie (economy)\" in French in the general word embeddings (General) and the task-specific word embeddings obtained using our cross-task projection (LLM). The general embeddings exhibit properties similar to the English ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 350,
"text": "Table 6b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "The LLM embeddings for the topic classification task have \"fmi (IMF; International Monetary Fund)\" and \"conjoncture (conjuncture)\" as nearest neighbors of \"\u00e9conomie.\" This indicates that our cross-task projection successfully obtains word embeddings optimized for the task, since these words are strong signals of the Economy label. For sentiment analysis, the word embeddings obtained by our cross-task projection on the Amazon dataset capture \"extraordinary\" and \"parfaite,\" which strongly indicate positive polarity, as the nearest neighbors of \"excellent.\" In contrast, the words strongly associated with negative polarity, \"d\u00e9bile\" and \"stupide,\" are the nearest neighbors of \"terrible\" in the embedding space. These properties suggest that our cross-task projection successfully obtains task-specific cross-lingual word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "To see how much the nearest neighbors of the target words contribute to the projection, we investigate the distribution of \u03b1\u0302 induced by Eq. 3. Figure 2 shows the distribution of the absolute value of \u03b1\u0302 for the nearest neighbor of the target word and the other nearest neighbors. For this experiment, we used k tuned on the source language. Even though the nearest words tend to have slightly higher values of \u03b1\u0302 than the other nearest neighbors, the difference is not significant for most configurations. This observation indicates that all of the k nearest neighbors contribute to the projection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distribution of the reconstruction weights",
"sec_num": null
},
{
"text": "Sensitivity to hyperparameter k We proposed three strategies for tuning the hyperparameter k of our locally linear mapping for cross-task embedding projection of cross-lingual word embeddings: tuning on the development data in the source language as described in \u00a7 3.2, preparing small development data (100 samples) in the target languages, or fixing k = 1. Revisiting the results in Table 3 and Table 4 , for the topic classification task the classification accuracy of the models is consistent across all tuning methods (Table 3) , while for the sentiment analysis task fixing k = 1 yields lower classification accuracy (Table 4) . Here, we conduct further analysis to gain a more profound understanding of the effect of the value of k. Figure 3 depicts the classification accuracy of the models on the test sets while varying k in the topic classification and sentiment analysis tasks. Across languages, a smaller value of k yields better performance for the topic classification task, while a larger value of k yields better performance for the sentiment analysis task. These results indicate that the best value of k is language-independent, and thus tuning k on the development set of the source language suffices to achieve good results.",
"cite_spans": [],
"ref_spans": [
{
"start": 378,
"end": 398,
"text": "Table 3 and Table 4",
"ref_id": "TABREF6"
},
{
"start": 525,
"end": 534,
"text": "(Table 3)",
"ref_id": "TABREF6"
},
{
"start": 625,
"end": 634,
"text": "(Table 4)",
"ref_id": "TABREF7"
},
{
"start": 737,
"end": 745,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Distribution of the reconstruction weights",
"sec_num": null
},
{
"text": "We proposed a method to obtain a fully task-specific multilingual model without relying on any cross-lingual resources or annotated corpora in the target language by means of a cross-task embedding projection. Because a naive linear projection imposes too strong an assumption on the topologies of the two embedding spaces, we presented an effective method for cross-task embedding projection named locally linear mapping, which assumes and preserves the local topology across the semantic spaces before and after the projection. Experimental results demonstrated that locally linear mapping successfully obtains task-specific word embeddings of the target language, and that the resulting fully task-specific multilingual model exhibits better accuracy than the existing multilingual model that fixes its embedding layer to general word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We plan to evaluate our method on various NLP tasks, languages, and neural network models, and investigate the results to devise an adaptive method to tune k for individual words. A Derivation of the locally linear mapping",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Recall that X^{gen} and Y^{gen} represent the general cross-lingual word embeddings of the source and target languages, respectively. Also, for each word i in the target language, we denote the set of its k nearest neighbors among the source-language words in the semantic space of the general cross-lingual word embeddings as N^{gen}_i . We reconstruct Y^{gen}_i as the linear combination",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "\\sum_{j \\in N^{gen}_i} \\alpha_{ij} X^{gen}_j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "where \\alpha_i is the weight vector that we optimize. The reconstruction error \\epsilon_i is given as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "\\epsilon_i = \\left\\| Y^{gen}_i - \\sum_{j \\in N^{gen}_i} \\alpha_{ij} X^{gen}_j \\right\\|^2 = \\left\\| \\sum_{j \\in N^{gen}_i} \\alpha_{ij} \\left( Y^{gen}_i - X^{gen}_j \\right) \\right\\|^2 = \\sum_{j \\in N^{gen}_i} \\sum_{l \\in N^{gen}_i} \\alpha_{ij} \\alpha_{il} C_{ijl},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "where C_i \\in \\mathbb{R}^{k \\times k} is the covariance matrix,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "C_{ijl} = \\left( Y^{gen}_i - X^{gen}_j \\right)^{\\top} \\left( Y^{gen}_i - X^{gen}_l \\right).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We minimize this reconstruction error under the constraint \\sum_{j \\in N^{gen}_i} \\alpha_{ij} = 1. Applying the method of Lagrange multipliers, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "L = \\sum_{j \\in N^{gen}_i} \\sum_{l \\in N^{gen}_i} \\alpha_{ij} \\alpha_{il} C_{ijl} - \\lambda \\left( \\sum_{j \\in N^{gen}_i} \\alpha_{ij} - 1 \\right).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "We then solve \\partial L / \\partial \\alpha_{ij} = \\partial L / \\partial \\lambda = 0 to obtain",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "\\hat{\\alpha}_{ij} = \\frac{\\sum_{l} (C_i^{-1})_{jl}}{\\sum_{j'} \\sum_{l} (C_i^{-1})_{j'l}}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "The resulting value of \\hat{\\alpha}_i is then used to compute the task-specific word embedding of i as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Y^{spec}_i = \\sum_{j \\in N^{gen}_i} \\hat{\\alpha}_{ij} X^{spec}_j,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "where X^{spec} denotes the task-specific word embeddings of the source language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Although orthogonal mapping (Xing et al., 2015) is reported to perform better for inducing cross-lingual word embeddings, it performed worse for our purpose in preliminary experiments, probably due to its strong constraint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.yelp.com/dataset 4 https://s3.amazonaws.com/amazon-reviews-pds/readme.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/artetxem/vecmap 6 https://fasttext.cc/docs/en/crawl-vectors.html 7 http://www.statmt.org/europarl/ 8 https://taku910.github.io/mecab/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We deeply thank Satoshi Tohda for proofreading the draft of our paper. We also thank Dr. Junpei Komiyama for checking the mathematics. This research was supported by NII CRIS Contract Research 2019.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "789--798",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1073"
]
},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 789-798.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics (TACL), 5:135- 146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multilingual sentiment analysis: An RNN-based framework for limited data",
"authors": [
{
"first": "F",
"middle": [],
"last": "Ethem",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Can",
"suffix": ""
}
],
"year": 2018,
"venue": "ACM SIGIR 2018 Workshop on Learning from Limited or Noisy Data (LND4IR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ethem F. Can, Aysu Ezen-Can, and Fazli Can. 2018. Multilingual sentiment analysis: An RNN-based framework for limited data. In ACM SIGIR 2018 Workshop on Learning from Limited or Noisy Data (LND4IR).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adversarial deep averaging networks for cross-lingual sentiment classification",
"authors": [
{
"first": "Xilun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Athiwaratkun",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Kilian",
"middle": [],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "6",
"issue": "",
"pages": "557--570",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00039"
]
},
"num": null,
"urls": [],
"raw_text": "Xilun Chen, Yu Sun, Ben Athiwaratkun, Claire Cardie, and Kilian Weinberger. 2018. Adversarial deep av- eraging networks for cross-lingual sentiment classi- fication. Transactions of the Association for Com- putational Linguistics (TACL), 6:557-570.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual training of crosslingual word embeddings",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Duong",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Kanayama",
"suffix": ""
},
{
"first": "Tengfei",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "894--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2017. Multilingual training of crosslingual word embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 894-904.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Learning bilingual sentiment-specific word embeddings without cross-lingual supervision",
"authors": [
{
"first": "Yanlin",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "420--429",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1040"
]
},
"num": null,
"urls": [],
"raw_text": "Yanlin Feng and Xiaojun Wan. 2019. Learning bilin- gual sentiment-specific word embeddings without cross-lingual supervision. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 420-429.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Simple task-specific bilingual word embeddings",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)",
"volume": "",
"issue": "",
"pages": "1386--1390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws and Anders S\u00f8gaard. 2015. Sim- ple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies (NAACL-HLT), pages 1386-1390.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning word vectors for 157 languages",
"authors": [
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "3483--3487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learn- ing word vectors for 157 languages. In Proceed- ings of the Eleventh International Conference on Language Resources and Evaluation (LREC), pages 3483-3487.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cross-lingual transfer learning for POS tagging without cross-lingual resources",
"authors": [
{
"first": "Joo-Kyung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Young-Bum",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2832--2838",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1302"
]
},
"num": null,
"urls": [],
"raw_text": "Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual re- sources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2832-2838.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1746--1751",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1181"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1746-1751.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the third International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proceedings of the third International Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "RCV1: A new benchmark collection for text categorization research",
"authors": [
{
"first": "David",
"middle": [
"D"
],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Tony",
"middle": [
"G"
],
"last": "Rose",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Machine Learning Research",
"volume": "5",
"issue": "",
"pages": "361--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D. Lewis, Yiming Yang, Tony G. Rose, and Fei Li. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5(Apr):361-397.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cross-lingual mixture model for sentiment classification",
"authors": [
{
"first": "Xinfan",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ge",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "572--581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Ge Xu, and Houfeng Wang. 2012. Cross-lingual mixture model for sentiment classification. In Pro- ceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), pages 572-581.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "Computing Research Repository",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1309.4168"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for ma- chine translation. Computing Research Repository, arXiv:1309.4168. Version 1.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multilingual hierarchical attention networks for document classification",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Pappas",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the eighth International Joint Conference on Natural Language Processing (EACL)",
"volume": "",
"issue": "",
"pages": "1015--1025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Pappas and Andrei Popescu-Belis. 2017. Multilingual hierarchical attention networks for doc- ument classification. In Proceedings of the eighth International Joint Conference on Natural Lan- guage Processing (EACL), pages 1015-1025.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Semeval-2016 task 5: Aspect based sentiment analysis",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Pontiki",
"suffix": ""
},
{
"first": "Dimitris",
"middle": [],
"last": "Galanis",
"suffix": ""
},
{
"first": "Haris",
"middle": [],
"last": "Papageorgiou",
"suffix": ""
},
{
"first": "Ion",
"middle": [],
"last": "Androutsopoulos",
"suffix": ""
},
{
"first": "Suresh",
"middle": [],
"last": "Manandhar",
"suffix": ""
},
{
"first": "Al-",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Smadi",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Orphee",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Veronique",
"middle": [],
"last": "De Clercq",
"suffix": ""
},
{
"first": "Marianna",
"middle": [],
"last": "Hoste",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Apidianaki",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Tannier",
"suffix": ""
},
{
"first": "Evgeniy",
"middle": [],
"last": "Loukachevitch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kotelnikov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval)",
"volume": "",
"issue": "",
"pages": "19--30",
"other_ids": {
"DOI": [
"10.18653/v1/S16-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Moham- mad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphee De Clercq, Veronique Hoste, Marianna Apidianaki, Xavier Tannier, Na- talia Loukachevitch, Evgeniy Kotelnikov, N\u00faria Bel, Salud Mar\u00eda Jim\u00e9nez-Zafra, and G\u00fcl\u015fen Eryigit. 2016. Semeval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval), pages 19-30.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Nonlinear dimensionality reduction by locally linear embedding",
"authors": [
{
"first": "T",
"middle": [],
"last": "Sam",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"K"
],
"last": "Roweis",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saul",
"suffix": ""
}
],
"year": 2000,
"venue": "Science",
"volume": "290",
"issue": "5500",
"pages": "2323--2326",
"other_ids": {
"DOI": [
"10.1126/science.290.5500.2323"
]
},
"num": null,
"urls": [],
"raw_text": "Sam T. Roweis and Lawrence K. Saul. 2000. Nonlin- ear dimensionality reduction by locally linear em- bedding. Science, 290(5500):2323-2326.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A survey of cross-lingual word embedding models",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
}
],
"year": 2019,
"venue": "Journal of Artificial Intelligence Research (JAIR)",
"volume": "65",
"issue": "",
"pages": "569--631",
"other_ids": {
"DOI": [
"10.1613/jair.1.11640"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Ruder, Ivan Vuli\u0107, and Anders S\u00f8gaard. 2019. A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research (JAIR), 65:569-631.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Almost) zeroshot cross-lingual spoken language understanding",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "6034--6038",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2018.8461905"
]
},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Manaal Faruqui, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2018. (Almost) zero- shot cross-lingual spoken language understanding. In Proceedings of the 2018 IEEE International Con- ference on Acoustics, Speech and Signal Processing (ICASSP), pages 6034-6038.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Co-training for cross-lingual sentiment classification",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the fourth International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP)",
"volume": "",
"issue": "",
"pages": "235--243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan. 2009. Co-training for cross-lingual sen- timent classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the fourth International Joint Conference on Natural Language Processing of the AFNLP (ACL- IJCNLP), pages 235-243.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Normalized word embedding and orthogonal transform for bilingual word translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Chao",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yiye",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1104"
]
},
"num": null,
"urls": [],
"raw_text": "Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal trans- form for bilingual word translation. In Proceed- ings of the 2015 Conference of the North American",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 2",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Classification accuracy as a function of k in cross-task embedding projection.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Number of examples for topic classification."
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Dataset Language</td><td>train</td><td>dev.</td><td>test</td></tr><tr><td>Yelp</td><td colspan=\"4\">English (en) 5,796,996 100,000 100,000</td></tr><tr><td/><td>English (en)</td><td>-</td><td>100</td><td>1462</td></tr><tr><td>ABSA</td><td>Spanish (es) Dutch (nl)</td><td>--</td><td>100 100</td><td>1237 1125</td></tr><tr><td/><td>Turkish (tr)</td><td>-</td><td>100</td><td>855</td></tr></table>",
"text": ""
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Number of examples for sentiment analysis."
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td/><td>Amazon</td><td/><td>Yelp -ABSA</td><td/></tr><tr><td>Method</td><td>en-de en-fr en-ja</td><td colspan=\"3\">en-es en-nl en-tr</td></tr><tr><td>CLWE fixed</td><td>0.798 0.805 0.798</td><td colspan=\"3\">0.731 0.675 0.591</td></tr><tr><td>CLWE fixed + NNmap</td><td>0.798 0.803 0.784</td><td colspan=\"3\">0.748 0.665 0.556</td></tr><tr><td>CLWE opt (LP)</td><td>0.797 0.804 0.779</td><td colspan=\"3\">0.725 0.655 0.605</td></tr><tr><td>CLWE opt (LLM)</td><td/><td/><td/><td/></tr><tr><td>k = 1</td><td>0.813 0.811 0.764</td><td colspan=\"3\">0.731 0.680 0.569</td></tr><tr><td>k tuned to task</td><td>0.815 0.812 0.785</td><td colspan=\"3\">0.759 0.684 0.616</td></tr><tr><td colspan=\"2\">k tuned to task/language 0.815 0.810 0.777</td><td colspan=\"3\">0.766 0.719 0.617</td></tr><tr><td>Monolingual</td><td>0.879 0.857 0.838</td><td>-</td><td>-</td><td>-</td></tr></table>",
"text": "Classification accuracy of topic classification task in cross-lingual settings. The underlined values indicate that, among the three trials, the worst model of CLWE opt (LLM) outperforms the best model of CLWE fixed."
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF8": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table><tr><td>Method</td><td>Topic Class.</td><td colspan=\"2\">Senti. Analysis</td></tr><tr><td/><td/><td>Amazon</td><td>Yelp</td></tr><tr><td>Monolingual fixed</td><td>0.921</td><td>0.828</td><td>0.799</td></tr><tr><td>Monolingual</td><td>0.980</td><td>0.872</td><td>0.866</td></tr></table>",
"text": "for training. We conduct all experiments three times with"
},
"TABREF9": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Classification accuracy of monolingual models in English."
},
"TABREF12": {
"html": null,
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 1006-1011. Kui Xu and Xiaojun Wan. 2017. Towards a universal sentiment classifier in multiple languages. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 511-520. Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tagging with hierarchical recurrent networks. In Processing of the fifth International Conference on Learning Representations (ICLR)."
}
}
}
}