{
"paper_id": "D18-1034",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:46:29.839850Z"
},
"title": "Neural Cross-Lingual Named Entity Recognition with Minimal Resources",
"authors": [
{
"first": "Jiateng",
"middle": [],
"last": "Xie",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "jiatengx@cs.cmu.edu"
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "zhiliny@cs.cmu.edu"
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": "gneubig@cs.cmu.edu"
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Washington",
"location": {}
},
"email": "nasmith@cs.washington.edu"
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Carnegie Mellon University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "For languages with no annotated resources, unsupervised transfer of natural language processing models such as named-entity recognition (NER) from resource-rich languages would be an appealing capability. However, differences in words and word order across languages make it a challenging problem. To improve mapping of lexical items across languages, we propose a method that finds translations based on bilingual word embeddings. To improve robustness to word order differences, we propose to use self-attention, which allows for a degree of flexibility with respect to word order. We demonstrate that these methods achieve state-of-the-art or competitive NER performance on commonly tested languages under a cross-lingual setting, with much lower resource requirements than past approaches. We also evaluate the challenges of applying these methods to Uyghur, a lowresource language. 1",
"pdf_parse": {
"paper_id": "D18-1034",
"_pdf_hash": "",
"abstract": [
{
"text": "For languages with no annotated resources, unsupervised transfer of natural language processing models such as named-entity recognition (NER) from resource-rich languages would be an appealing capability. However, differences in words and word order across languages make it a challenging problem. To improve mapping of lexical items across languages, we propose a method that finds translations based on bilingual word embeddings. To improve robustness to word order differences, we propose to use self-attention, which allows for a degree of flexibility with respect to word order. We demonstrate that these methods achieve state-of-the-art or competitive NER performance on commonly tested languages under a cross-lingual setting, with much lower resource requirements than past approaches. We also evaluate the challenges of applying these methods to Uyghur, a lowresource language. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entity recognition (NER), the task of detecting and classifying named entities from text into a few predefined categories such as people, locations or organizations, has seen the state-of-theart greatly advanced by the introduction of neural architectures (Collobert et al., 2011; Huang et al., 2015; Chiu and Nichols, 2016; Lample et al., 2016; Yang et al., 2016; Ma and Hovy, 2016; Peters et al., 2017; Liu et al., 2018; Peters et al., 2018 ). However, the success of these methods is highly dependent on a reasonably large amount of annotated training data, and thus it remains a challenge to apply these models to languages with limited amounts of labeled data. Cross-lingual NER attempts to address this challenge by transferring knowledge from a high-resource source language with abundant entity labels to a low-resource target language with few or no labels. Specifically, in this paper we attempt to tackle the extreme scenario of unsupervised transfer, where no labeled data is available in the target language. Within this paradigm, there are two major challenges to tackle: how to effectively perform lexical mapping between the languages, and how to address word order differences.",
"cite_spans": [
{
"start": 262,
"end": 286,
"text": "(Collobert et al., 2011;",
"ref_id": "BIBREF8"
},
{
"start": 287,
"end": 306,
"text": "Huang et al., 2015;",
"ref_id": "BIBREF19"
},
{
"start": 307,
"end": 330,
"text": "Chiu and Nichols, 2016;",
"ref_id": "BIBREF7"
},
{
"start": 331,
"end": 351,
"text": "Lample et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 352,
"end": 370,
"text": "Yang et al., 2016;",
"ref_id": "BIBREF46"
},
{
"start": 371,
"end": 389,
"text": "Ma and Hovy, 2016;",
"ref_id": "BIBREF27"
},
{
"start": 390,
"end": 410,
"text": "Peters et al., 2017;",
"ref_id": "BIBREF35"
},
{
"start": 411,
"end": 428,
"text": "Liu et al., 2018;",
"ref_id": "BIBREF26"
},
{
"start": 429,
"end": 448,
"text": "Peters et al., 2018",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To cope with the first challenge of lexical mapping, a number of methods use parallel corpora to project annotations between languages through word alignment (Ehrmann et al., 2011; Kim et al., 2012; Wang and Manning, 2014; Ni et al., 2017) . Since parallel corpora may not be always available, Mayhew et al. (2017) proposed a \"cheap translation\" approach that uses a bilingual dictionary to perform word-level translation. The above approaches provide a reasonable proxy for the actual labeled training data, largely because the words that participate in entities can be translated relatively reliably given extensive parallel dictionaries or corpora (e.g., with 1 million word pairs or sentences). Additionally, as a side benefit of having explicitly translated words, models can directly exploit features extracted from the surface forms (e.g. through character-level neural feature extractors), which has proven essential for high accuracy in the monolingual scenario (Ma and Hovy, 2016) . However, these methods are largely predicated on the availability of large-scale parallel resources, and thus, their applicability to lowresource languages is limited.",
"cite_spans": [
{
"start": 158,
"end": 180,
"text": "(Ehrmann et al., 2011;",
"ref_id": "BIBREF12"
},
{
"start": 181,
"end": 198,
"text": "Kim et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 199,
"end": 222,
"text": "Wang and Manning, 2014;",
"ref_id": "BIBREF45"
},
{
"start": 223,
"end": 239,
"text": "Ni et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 294,
"end": 314,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 971,
"end": 990,
"text": "(Ma and Hovy, 2016)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In contrast, it is also possible to learn lexical mappings through bilingual word embeddings (BWE). These bilingual embeddings can be obtained by using a small dictionary to project two sets of embeddings into a consistent space (Mikolov et al., 2013a; Faruqui and Dyer, 2014; Artetxe et al., 2016; Smith et al., 2017) , or even in an entirely unsupervised manner using adversarial training or identical character strings (Zhang et al., 2017; Artetxe et al., 2017; Lample et al., 2018) . Many approaches in the past have leveraged the shared embedding space for cross-lingual applications (Guo et al., 2015; Ammar et al., 2016b; Zhang et al., 2016; Fang and Cohn, 2017) , including NER (Bharadwaj et al., 2016; Ni et al., 2017) . The minimal dependency on parallel resources makes the embedding-based method much more suitable for low-resource languages. However, since different languages have different linguistic properties, it is hard, if not impossible, to align the two embedding spaces perfectly (see Figure 1 ). Meanwhile, because surface forms are not available, character-level features cannot be used, resulting in reduced tagging accuracy (as demonstrated in our experiments).",
"cite_spans": [
{
"start": 229,
"end": 252,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF30"
},
{
"start": 253,
"end": 276,
"text": "Faruqui and Dyer, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 277,
"end": 298,
"text": "Artetxe et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 299,
"end": 318,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 422,
"end": 442,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF49"
},
{
"start": 443,
"end": 464,
"text": "Artetxe et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 465,
"end": 485,
"text": "Lample et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 589,
"end": 607,
"text": "(Guo et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 608,
"end": 628,
"text": "Ammar et al., 2016b;",
"ref_id": "BIBREF1"
},
{
"start": 629,
"end": 648,
"text": "Zhang et al., 2016;",
"ref_id": "BIBREF50"
},
{
"start": 649,
"end": 669,
"text": "Fang and Cohn, 2017)",
"ref_id": "BIBREF14"
},
{
"start": 686,
"end": 710,
"text": "(Bharadwaj et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 711,
"end": 727,
"text": "Ni et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 1008,
"end": 1016,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the above issues, we propose a new lexical mapping approach that combines the advantages of both discrete dictionary-based methods and continuous embedding-based methods. Specifically, we first project embeddings of different languages into the shared BWE space, then learn discrete word translations by looking for nearest neighbors in this projected space, and finally train a model on the translated data. This allows our method to inherit the benefits of both embedding-based and dictionary-based methods: its resource requirements are low as in the former, but it suffers less from misalignment of the embedding spaces and has access to character-level information like the latter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Turning to differences in word ordering, to our knowledge there are no methods that explicitly deal with this problem in unsupervised crosslingual transfer for NER. Our second contribution is a method to alleviate this issue by incorporating an order-invariant self-attention mechanism (Vaswani et al., 2017; Lin et al., 2017) into our neural architecture. Self-attention allows re-ordering of information within a particular encoded sequence, which makes it possible to account for word order differences between the source and the target languages.",
"cite_spans": [
{
"start": 286,
"end": 308,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF44"
},
{
"start": 309,
"end": 326,
"text": "Lin et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our experiments, we start with models trained in English as the source language on the CoNLL 2002 and 2003 datasets and transfer them into Spanish, Dutch, and German as the target languages. Our approach obtains new state-of-the-art cross-lingual results in Spanish and Dutch, and competitive results in German, even without a dictionary, completely removing the need for resources such as Wikipedia and parallel corpora. Next, we transfer English using the same approach into Uyghur, a truly low-resource language. With significantly fewer cross-lingual resources, our approach can still perform competitively with previous best results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We establish our problem setting ( \u00a72.1), then present our methods in detail ( \u00a72.2), and provide some additional motivation ( \u00a72.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "2"
},
{
"text": "NER takes a sentence as the input and outputs a sequence of labels corresponding to the named entity categories of the words in the sentence, such as location, organization, person, or none. In standard supervised NER, we are provided with a labeled corpus of sentences in the target language along with tags indicating which spans correspond to entities of each type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "2.1"
},
{
"text": "As noted in the introduction, we study the problem of unsupervised cross-lingual NER: given labeled training data only in a separate source language, we aim to learn a model that is able to perform NER in the target language. This transfer can be performed using a variety of resources, including parallel corpora (T\u00e4ckstr\u00f6m et al., 2012; Ni et al., 2017) , Wikipedia (Nothman et al., 2013) , and large dictionaries (Ni et al., 2017; Mayhew et al., 2017) . In this work, we limit ourselves to a setting where we have the following resources, making us comparable to other methods such as Mayhew et al. (2017) and Ni et al. (2017) :",
"cite_spans": [
{
"start": 314,
"end": 338,
"text": "(T\u00e4ckstr\u00f6m et al., 2012;",
"ref_id": "BIBREF40"
},
{
"start": 339,
"end": 355,
"text": "Ni et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 368,
"end": 390,
"text": "(Nothman et al., 2013)",
"ref_id": "BIBREF33"
},
{
"start": 416,
"end": 433,
"text": "(Ni et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 434,
"end": 454,
"text": "Mayhew et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 588,
"end": 608,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 613,
"end": 629,
"text": "Ni et al. (2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "2.1"
},
{
"text": "\u2022 Labeled training data in the source language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "2.1"
},
{
"text": "\u2022 Monolingual corpora in both source and target languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "2.1"
},
{
"text": "\u2022 A dictionary, either a small pre-existing one, or one induced by unsupervised methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Setting",
"sec_num": "2.1"
},
{
"text": "Our method follows the process below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "1. Train separate word embeddings using monolingual corpora using standard embedding training methods ( \u00a72.2.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "2. Project word embeddings in the two languages into a shared embedding space by optimizing Figure 1 : Example of the result of our approach on Spanish-English words not included in the dictionary (embeddings are reduced to 2 dimensions for visual clarity). We first project word embeddings into a shared space, and then use the nearest neighbors for word translation. Notice that the word pairs are not perfectly aligned in the shared embedding space, but after word translation we obtain correct alignments.",
"cite_spans": [],
"ref_spans": [
{
"start": 92,
"end": 100,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "the word embedding alignment using the given dictionary ( \u00a72.2.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "3. For each word in the source language training data, translate it by finding its nearest neighbor in the shared embedding space ( \u00a72.2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "4. Train an NER model using the translated words along with the named entity tags from the English corpus ( \u00a72.2.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "We consider each in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "2.2"
},
{
"text": "Given text in the source and target language, we first independently learn word embedding matrices X and Y in the source and target languages respectively. These embeddings can be learned on monolingual text in both languages with any of the myriad of word embedding methods (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017) .",
"cite_spans": [
{
"start": 275,
"end": 298,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF31"
},
{
"start": 299,
"end": 323,
"text": "Pennington et al., 2014;",
"ref_id": "BIBREF34"
},
{
"start": 324,
"end": 348,
"text": "Bojanowski et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Monolingual Embeddings",
"sec_num": "2.2.1"
},
{
"text": "Next, we learn a cross-lingual projection of X and Y into a shared space. Assume we are given a dictionary",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "{x i , y i } D i=1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": ", where x i and y i denote the embeddings of a word pair. Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "X D = [x 1 , x 2 , \u2022 \u2022 \u2022 , x D ] and Y D = [y 1 , y 2 , \u2022 \u2022 \u2022 , y D ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "denote two embedding matrices consisting of word pairs from the dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "Following previous work (Zhang et al., 2016; Artetxe et al., 2016; Smith et al., 2017) , we optimize the following objective:",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Zhang et al., 2016;",
"ref_id": "BIBREF50"
},
{
"start": 45,
"end": 66,
"text": "Artetxe et al., 2016;",
"ref_id": "BIBREF2"
},
{
"start": 67,
"end": 86,
"text": "Smith et al., 2017)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "min W d i=1 W x i \u2212 y i 2 s.t. W W = I,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "where W is a square parameter matrix. This ob-jective can be further simplified as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "max W Tr(X D W Y D ) s.t. W W = I.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "Here, the transformation matrix W is constrained to be orthogonal so that the dot product similarity of words is invariant with respect to the transformation both within and across languages. To optimize the above objective (the Procrustes problem), we decompose the matrix Y D X D using singular value decomposition. Let the results be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "Y D X D = U V , then W = U V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "gives the exact solution. We define the similarity matrix between X and Y to be S = Y W X = Y U (XV ) , where each column contains the cosine similarity between source word x i and all target words y i . We can then define X = XV and Y = Y U , which are X and Y transformed into a shared embedding space.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
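The exact Procrustes solution described above (SVD of Y_D X_D^\top, then W = UV^\top) can be sketched in NumPy. This is an illustrative sketch, not the authors' implementation; it assumes row-major embedding matrices whose rows are the dictionary word pairs.

```python
import numpy as np

def procrustes(X_D, Y_D):
    """Orthogonal Procrustes: find orthogonal W minimizing
    sum_i ||W x_i - y_i||^2.

    X_D, Y_D: (D, d) arrays; row i holds the embeddings of the i-th
    dictionary word pair. Returns the (d, d) mapping W = U V^T, where
    Y_D^T X_D = U S V^T is a singular value decomposition.
    """
    U, _, Vt = np.linalg.svd(Y_D.T @ X_D)
    return U @ Vt

# Toy check: if Y is an exact orthogonal transform of X, Procrustes recovers it.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a random orthogonal map
Y = X @ Q.T                                   # y_i = Q x_i
W = procrustes(X, Y)
assert np.allclose(W, Q, atol=1e-6)
```

For real use, `scipy.linalg.orthogonal_procrustes` solves the same problem; the hand-rolled version above just makes the U V^T construction explicit.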
{
"text": "To refine the alignment in this shared space further, we iteratively perform a self-learning refinement step k 2 times by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "1. Using the aligned embeddings to generate a new dictionary that consists of mutual nearest neighbors obtained using the same metric as introduced below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "2. Solving the Procrustes problem based on the newly generated dictionary to get a new set of bilingual embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
{
"text": "The bilingual embeddings at the end of the kth step, X k and Y k , will be used to perform translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Bilingual Embeddings",
"sec_num": "2.2.2"
},
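The two-step self-learning loop above can be sketched as follows. This is a minimal sketch with assumed names (`refine`, `seed_pairs`), using plain cosine similarity in place of the CSLS metric for brevity.

```python
import numpy as np

def refine(X, Y, seed_pairs, k=3):
    """Alternate (1) mutual-nearest-neighbor dictionary induction and
    (2) solving the Procrustes problem on the current dictionary.

    X, Y: row-normalized source/target embedding matrices.
    seed_pairs: list of (src_idx, tgt_idx) seed-dictionary pairs.
    """
    pairs = list(seed_pairs)
    for _ in range(k):
        src, tgt = map(list, zip(*pairs))
        U, _, Vt = np.linalg.svd(Y[tgt].T @ X[src])
        W = U @ Vt                      # Procrustes solution on current dict
        sim = (X @ W.T) @ Y.T           # similarities in the shared space
        fwd, bwd = sim.argmax(axis=1), sim.argmax(axis=0)
        # new dictionary: mutual nearest neighbors only
        pairs = [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
    return W, pairs
```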
{
"text": "To learn actual word translations, we next proceed to perform nearest-neighbor search in the common space. Instead of using a common distance metric such as cosine similarity, we adopt the cross-domain similarity local scaling (CSLS) metric (Lample et al., 2018) , which is designed to address the hubness problem common to the shared embedding space (Dinu and Baroni, 2014) . Specifically,",
"cite_spans": [
{
"start": 241,
"end": 262,
"text": "(Lample et al., 2018)",
"ref_id": "BIBREF23"
},
{
"start": 351,
"end": 374,
"text": "(Dinu and Baroni, 2014)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Translations",
"sec_num": "2.2.3"
},
{
"text": "CSLS(x i , y j ) = 2 cos(x i , y j ) \u2212 r T (x i ) \u2212 r S (y j ) where r T (x i ) = 1 K yt\u2208N T (x i ) cos(x i , y t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Translations",
"sec_num": "2.2.3"
},
{
"text": "denotes the mean cosine similarity between x i and its K neighbors y t . Using this metric, we find translations for each source word s by selecting target wordt s wheret s = arg max t CSLS(x s , y t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Word Translations",
"sec_num": "2.2.3"
},
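A dense NumPy sketch of CSLS and the arg-max translation rule. Function names are ours, not the paper's; embeddings are assumed row-wise L2-normalized so dot products equal cosine similarities.

```python
import numpy as np

def csls(X, Y, K=10):
    """CSLS similarity between all rows of X (source) and Y (target).

    Penalizes 'hub' words by subtracting each side's mean similarity
    to its K nearest neighbors on the other side.
    """
    cos = X @ Y.T                    # (n_src, n_tgt) cosine matrix
    k = min(K, X.shape[0], Y.shape[0])
    # r_T(x_i): mean similarity of x_i to its k nearest target words
    r_T = np.sort(cos, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    # r_S(y_j): mean similarity of y_j to its k nearest source words
    r_S = np.sort(cos, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return 2 * cos - r_T - r_S

def translate(X, Y, K=10):
    """For each source word, pick the target index maximizing CSLS."""
    return csls(X, Y, K).argmax(axis=1)
```

With realistic vocabulary sizes the full cosine matrix is large; a practical implementation would compute it in chunks, but the penalty terms are the same.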
{
"text": "Finally, we translate the entire English NER training data into the target language by taking English sentences S = s 1 , s 2 , ..., s n and translating them into target sentencesT =t 1 ,t 2 , ...,t n . The label of each English word is copied to be the label of the target word. We can then train an NER model directly using the translated data. Notably, because the model has access to the surface forms of the target sentences, it can use the character sequences of the target language as part of its input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training the NER Model",
"sec_num": "2.2.4"
},
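The label-preserving word-by-word translation might look like the following sketch. All names here are illustrative (not from the paper's code), and the capitalization handling described in the experiments section is omitted.

```python
def translate_sentence(words, tags, vocab_index, targets, translation):
    """'Cheap' word-level translation of one labeled sentence.

    vocab_index: source word -> source vocabulary index.
    targets: target vocabulary (index -> word).
    translation: source index -> chosen target index (e.g., by CSLS
    nearest neighbor). OOV words are kept as-is; every word keeps its
    original NER tag.
    """
    out = []
    for w in words:
        i = vocab_index.get(w.lower())
        out.append(targets[translation[i]] if i is not None else w)
    return list(zip(out, tags))
```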
{
"text": "During learning, all word embeddings are normalized to lie on the unit ball, allowing every training pair an equal contribution to the objective and improving word translation accuracy (Artetxe et al., 2016) . When training the NER model, however, we do not normalize the word embeddings, because preliminary experiments showed the original unnormalized embeddings gave superior results. We suspect this is due to frequency information conveyed by vector length, an important signal for NER. (Named entities appear less frequently in the monolingual corpus.) Figure 1 shows an example of the embeddings and translations learned with our approach trained on Spanish and English data from the experiments (see \u00a74 for more details). As shown in the figure, there is usually a noticeable difference between the word embeddings of a word pair in different languages, which is inevitable because different languages have distinct traits and different monolingual data, and as a result it is intrinsically hard to learn a perfect alignment. This indicates that models trained directly on data using the source Instead of directly modeling the shared embedding space (Guo et al., 2015; Zhang et al., 2016; Fang and Cohn, 2017; Ni et al., 2017) , we leverage the shared embedding space for word translation. As shown in Figure 1 , unaligned word pairs can still be translated correctly with our method, as the embeddings are still closer to the correct translations than the closest incorrect one.",
"cite_spans": [
{
"start": 185,
"end": 207,
"text": "(Artetxe et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 1159,
"end": 1177,
"text": "(Guo et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 1178,
"end": 1197,
"text": "Zhang et al., 2016;",
"ref_id": "BIBREF50"
},
{
"start": 1198,
"end": 1218,
"text": "Fang and Cohn, 2017;",
"ref_id": "BIBREF14"
},
{
"start": 1219,
"end": 1235,
"text": "Ni et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 559,
"end": 567,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1311,
"end": 1319,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training the NER Model",
"sec_num": "2.2.4"
},
{
"text": "We describe the model we use to perform NER. We will first describe the basic hierarchical neural CRF tagging model (Lample et al., 2016; Ma and Hovy, 2016; Yang et al., 2016) , and introduce the self-attention mechanism that we propose to deal with divergence of word order.",
"cite_spans": [
{
"start": 116,
"end": 137,
"text": "(Lample et al., 2016;",
"ref_id": "BIBREF22"
},
{
"start": 138,
"end": 156,
"text": "Ma and Hovy, 2016;",
"ref_id": "BIBREF27"
},
{
"start": 157,
"end": 175,
"text": "Yang et al., 2016)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NER Model Architecture",
"sec_num": "3"
},
{
"text": "The hierarchical CRF model consists of three components: a character-level neural network, either an RNN or a CNN, that allows the model to capture subword information, such as morphological variations and capitalization patterns; a wordlevel neural network, usually an RNN, that consumes word representations and produces context sensitive hidden representations for each word; and a linear-chain CRF layer that models the dependency between labels and performs inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural CRF",
"sec_num": "3.1"
},
{
"text": "In this paper, we closely follow the architecture proposed by Lample et al. (2016) , and use bidirectional LSTMs for both the character level and word level neural networks. Specifically, given an input sequence of words (w 1 , w 2 , ..., w n ), and each word's corresponding character sequence, the model first produces a representation for each word, x i , by concatenating its character representation with its word embedding. Subsequently, the word representations of the input se- ",
"cite_spans": [
{
"start": 62,
"end": 82,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural CRF",
"sec_num": "3.1"
},
{
"text": "quence (x 1 , x 2 , \u2022 \u2022 \u2022 , x n )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Neural CRF",
"sec_num": "3.1"
},
{
"text": "The training-time inputs to our model are in essence corrupted sentences from the target language (e.g., Spanish), which have a different order from natural target sentences. We propose to alleviate this problem by adding a self-attention layer (Vaswani et al., 2017) on top of the wordlevel Bi-LSTM. Self-attention provides each word with a context feature vector based on all the words of a sentence. As the context vectors are obtained irrespective of the words' positions in a sentence, at test time, the model is more likely to see vectors similar to those seen at training time, which we posit introduces a level of flexibility with respect to the word order, and thus may allow for better generalization.",
"cite_spans": [
{
"start": 245,
"end": 267,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
{
"text": "Let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
{
"text": "H = [h 1 , h 2 , \u2022 \u2022 \u2022 , h n ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
{
"text": "be a sequence of word-level hidden representations. We apply a single layer MLP on H to obtain the queries Q and keys K = tanh(HW + b), where W \u2208 R d\u00d7d is a parameter matrix and b \u2208 R d is a bias term, with d being the hidden state size. The output of attention layer is defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
{
"text": "H a = softmax(QK ) (E \u2212 I)H = [h a 1 , h a 2 , ..., h a 3 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
{
"text": "where I is an identity matrix and E is an all-one matrix. The term (E \u2212 I) serves as an attention mask that prevents the weights from centering on the word itself, as we would like to provide each word with sentence level context. The outputs from the self-attention layer are then concatenated with the original hidden representations to form the final inputs to the CRF layer, which are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
{
"text": "([h 1 , h a 1 ], [h 2 , h a 2 ], ..., [h 3 , h a 3 ]).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Self-Attention",
"sec_num": "3.2"
},
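A minimal NumPy sketch of this masked self-attention layer. It assumes queries and keys share the single MLP (Q = K = tanh(HW + b)) and that the (E − I) mask zeroes each word's attention weight on itself after the softmax; this is a reading of the formula above, not the authors' exact implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_features(H, W, b):
    """Order-invariant self-attention over Bi-LSTM states H (n, d).

    The (E - I) mask removes each word's weight on itself, so the
    context vector h^a_i summarizes the *other* words of the sentence
    (weights therefore no longer sum exactly to 1). Returns the CRF
    inputs: original states concatenated with context vectors.
    """
    n = H.shape[0]
    Q = K = np.tanh(H @ W + b)
    A = softmax(Q @ K.T, axis=-1) * (np.ones((n, n)) - np.eye(n))
    Ha = A @ H                      # context features [h^a_1, ..., h^a_n]
    return np.concatenate([H, Ha], axis=-1)
```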
{
"text": "To examine the effectiveness of both of our proposed methods, we conduct four sets of experiments. First, we evaluate our model both with and without provided dictionaries on a benchmark NER dataset and compare with previous state-ofthe-art results. Second, we compare our methods against a recently proposed dictionary-based translation baseline (Mayhew et al., 2017) by directly applying our model on their translated data. 3 Subsequently, we conduct an ablation study to further understand our proposed methods. Lastly, we apply our methods to a truly low-resource language, Uyghur.",
"cite_spans": [
{
"start": 347,
"end": 368,
"text": "(Mayhew et al., 2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We evaluate our proposed methods on the benchmark CoNLL 2002 and 2003 NER datasets (Tjong Kim Sang, 2002 ; Tjong Kim Sang and De Meulder, 2003) , which contain 4 European languages, English, German, Dutch and Spanish. For all experiments, we use English as the source language and translate its training data into the target language. We train a model on the translated data, and test it on the target language. For each experiment, we run our models 5 times using different seeds and report the mean and standard deviation, as suggested by Reimers and Gurevych (2017) . Word Embeddings For all languages, we use two different embedding methods, fastText (Bojanowski et al., 2017) and GloVe (Pennington et al., 2014) , to perform word-embedding based translations and train the NER model, respectively. For fastText, we use the publicly available embeddings trained on Wikipedia for all languages. For GloVe, we use the publicly available embeddings pre-trained on Gigaword and Wikipedia for English. For Spanish, German and Dutch, we use Spanish Gigaword and Wikipedia, German WMT News Crawl data and Wikipedia, and Dutch Wikipedia, respectively, to train the GloVe word embeddings. We use a vocabulary size of 100,000 for both embedding methods.",
"cite_spans": [
{
"start": 90,
"end": 104,
"text": "Kim Sang, 2002",
"ref_id": "BIBREF41"
},
{
"start": 113,
"end": 143,
"text": "Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF42"
},
{
"start": 541,
"end": 568,
"text": "Reimers and Gurevych (2017)",
"ref_id": "BIBREF37"
},
{
"start": 655,
"end": 680,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 691,
"end": 716,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
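The word-embedding-based translation step retrieves, for each source word, a nearest neighbor in the shared embedding space; the paper's footnote notes k = 3 is used for the CSLS similarity of Lample et al. (2018). A minimal sketch of CSLS retrieval, assuming NumPy; the toy matrices and function name are illustrative, not the paper's released code:

```python
import numpy as np

def csls_translations(X, Y, k=3):
    """For each source word (a row of X, already mapped into the target
    space), return the index of its CSLS nearest neighbor among target
    words (rows of Y). CSLS discounts 'hub' words that are close to
    everything; k = 3 follows the paper's footnote."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = X @ Y.T                                    # cosine similarities
    # mean similarity of each source word to its k nearest targets
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    # mean similarity of each target word to its k nearest sources
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return (2 * sims - r_src - r_tgt).argmax(axis=1)

# Toy example: two mapped source vectors and three target vectors.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
idx = csls_translations(X, Y)
```

Here each source word is matched to the target row with the same direction; the hub penalty (the two mean-similarity terms) lowers the score of the third, in-between target vector.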
{
"text": "Dictionary We consider three different settings to obtain the seed dictionary, including two methods that do not use parallel resources:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "1. Use identical character strings shared between the two vocabularies as the seed dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "2. Lample et al. (2018) 's method of using adversarial learning to induce a mapping that aligns the two embedding spaces, and the mutual nearest neighbors in the shared space will be used as a dictionary. The learning procedure is formulated as a two player game, where a discriminator is trained to distinguish words from the two embedding spaces, and a linear mapping is trained to align the two embedding spaces and thus fool the discriminator.",
"cite_spans": [
{
"start": 3,
"end": 23,
"text": "Lample et al. (2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
{
"text": "3. Use a provided dictionary. In our experiments, we use the ones provided by Lample et al. (2018), 4 each of which contain 5,000 source words and about 10,000 entries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
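The mutual-nearest-neighbor criterion in setting 2 can be sketched directly: once the two spaces are aligned, a pair (i, j) is kept as a dictionary entry only when each word is the other's nearest neighbor. A sketch with NumPy on toy aligned vectors; this is not the actual MUSE implementation:

```python
import numpy as np

def mutual_nn_dictionary(X, Y):
    """Induce a seed dictionary from two aligned embedding matrices: keep
    (i, j) only when target j is the nearest neighbor of source i AND
    source i is the nearest neighbor of target j (mutual NN)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = Xn @ Yn.T
    fwd = sims.argmax(axis=1)   # best target for each source word
    bwd = sims.argmax(axis=0)   # best source for each target word
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]

# Toy aligned spaces where source word 0 matches target word 1 and vice versa.
pairs = mutual_nn_dictionary(np.array([[1.0, 0.0], [0.0, 1.0]]),
                             np.array([[0.0, 1.0], [1.0, 0.0]]))
```

The mutual constraint filters out one-directional matches, which tend to be noise from hub words.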
{
"text": "Translation We follow the general procedure described in Section 2, and replace each word from the English training data with its corresponding word in the target language. For out-ofvocabulary (OOV) words, we simply keep them as-is. We capitalize the resulting sentences following the pattern of the original English words. Note that for German, simply following the English capitalization pattern does not work, because all nouns in German are capitalized. To handle this problem, we count the number of times each word is capitalized in Wikipedia, and capitalize the word if the probability is greater than 0.6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
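The replacement-plus-capitalization procedure above can be sketched per token; `lexicon` and `cap_prob` are hypothetical lookup tables (a bilingual dictionary and per-word capitalization frequencies counted over Wikipedia), not resources shipped with the paper:

```python
def translate_token(word, lexicon, cap_prob=None, threshold=0.6):
    """Word-by-word translation: look up the lowercased word, keep OOV
    words as-is, and restore capitalization. By default the source
    token's pattern is copied; if a target-side capitalization table is
    given (the German case), a word is capitalized whenever its observed
    capitalization probability exceeds the threshold."""
    tgt = lexicon.get(word.lower(), word)        # OOV words are kept as-is
    if cap_prob is not None:                     # German-style corpus rule
        if cap_prob.get(tgt.lower(), 0.0) > threshold:
            return tgt.capitalize()
        return tgt.lower()
    return tgt.capitalize() if word[0].isupper() else tgt  # copy source pattern

# Copy the English pattern (Spanish/Dutch) vs. the corpus-count rule (German).
spanish = translate_token("House", {"house": "casa"})
german = translate_token("house", {"house": "haus"}, cap_prob={"haus": 0.95})
```

The German branch ignores the source pattern entirely, which is the point: German noun capitalization carries information English capitalization does not.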
{
"text": "Network Parameters For our experiments, we set the character embedding size to be 25, character level LSTM hidden size to be 50, and word level LSTM hidden size to be 200. For OOV words, we initialize an unknown embedding by uniformly sampling from range [\u2212 3 emb , + 3 emb ], where emb is the size of embedding, 100 in our case. We replace each number with 0 when used as input to the character level Bi-LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
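The OOV initialization above is a one-liner: the uniform range [-sqrt(3/emb), +sqrt(3/emb)] gives each component variance 1/emb (the variance of U(-a, a) is a^2/3), so the vector has unit squared norm in expectation. A sketch assuming NumPy:

```python
import numpy as np

def init_unknown_embedding(emb=100, rng=None):
    """Sample an OOV embedding uniformly from [-sqrt(3/emb), +sqrt(3/emb)].
    Each component then has variance 1/emb, so E[||v||^2] = 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    bound = np.sqrt(3.0 / emb)
    return rng.uniform(-bound, bound, size=emb)

v = init_unknown_embedding()
```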
{
"text": "Network Training We use SGD with momentum to train the NER model for 30 epochs, and select the best model on the target language development set. We choose the initial learning rate to be \u03b7 0 = 0.015, and update it using a learning decay mechanism after each epoch, \u03b7 t = \u03b7 0 1+\u03c1t , where t is the number of completed epoch and \u03c1 = 0.05 is the decay rate. We use a batch size of 10 and evaluate the model per 150 batches within each epoch. We apply dropout on the inputs to the word-level Bi-LSTM, the outputs of the word-level Bi-LSTM, and the outputs of the self-attention layer to prevent overfitting. The selfattention dropout rate is set to 0.5 when using our translated data, and 0.2 when using cheaptranslation data. We use 0.5 for all other dropouts. The word embeddings are not fine-tuned during training. Table 1 presents our results on transferring from English to three other languages, alongside results from previous studies. Here \"BWET\" (bilingual word embedding translation) denotes using the hierarchical neural CRF model trained on data translated from English. As can be seen from the table, our methods outperform previous state-of-theart results on Spanish and Dutch by a large margin and perform competitively on German even without using any parallel resources. We achieve similar results using different seed dictionaries, and produce the best results when adding the selfattention mechanism to our model. Despite the good performance on Spanish and Dutch, our model does not outperform the previous best result on German, and we speculate that there are a few reasons. First, German has rich morphology and contains many compound words, making the word embeddings less reliable. Our supervised result on German indicates the same problem, as it is about 8 F 1 points worse than Spanish and Dutch. 
Second, these difficulties become more pronounced in the cross-lingual setting, leading to a noisier embedding space alignment, which lowers the quality of BWE-based translation. We believe that this is a problem for all methods that rely on word embeddings; in such cases, more resource-intensive methods may be necessary. [Table 1 caption: NER F1 scores. * Approaches that use more resources than ours (\"Wikipedia\" means Wikipedia is used not as a monolingual corpus, but to provide external knowledge). \u2020 Approaches that use multiple languages for transfer. \"Only Eng. data\" is the model used in Mayhew et al. (2017) trained on their data translated from English without using Wikipedia and other languages. The \"data from Mayhew et al. (2017)\" is the same data translated from only English that they used. \"Id.c.\" indicates using identical character strings between the two languages as the seed dictionary. \"Adv.\" indicates using adversarial training and mutual nearest neighbors to induce a seed dictionary. Our supervised results are obtained using models trained on the annotated corpora from CoNLL.] Table 1 also presents a comparison between our proposed BWE translation method and the \"cheap translation\" baseline of (Mayhew et al., 2017) . The size of the dictionaries used by both",
"cite_spans": [
{
"start": 2270,
"end": 2291,
"text": "(Mayhew et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 2604,
"end": 2624,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 2731,
"end": 2751,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 815,
"end": 822,
"text": "Table 1",
"ref_id": null
},
{
"start": 2140,
"end": 2147,
"text": "Table 1",
"ref_id": null
},
{
"start": 2336,
"end": 2343,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.1"
},
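The learning-rate schedule above is a simple inverse-time decay; a sketch with the stated hyperparameters (eta_0 = 0.015, rho = 0.05):

```python
def lr_schedule(t, eta0=0.015, rho=0.05):
    """Inverse-time decay: eta_t = eta_0 / (1 + rho * t), where t is the
    number of completed epochs."""
    return eta0 / (1.0 + rho * t)

# Learning rate used at the start of each of the 30 epochs (t = 0..29).
rates = [lr_schedule(t) for t in range(30)]
```

Over the 30 epochs the rate falls monotonically from 0.015 to 0.015 / (1 + 0.05 * 29), about 0.0061.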
{
"text": "approaches are given in the right-most column.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Dictionary-Based Translation",
"sec_num": "4.2.1"
},
{
"text": "Using our model on their translated data from English outperforms the baseline scores produced by their models over all languages, a testament to the strength of our neural CRF baseline. The results produced by our model on their data indicate that our approach is effective, as we manage to outperform their approaches on all three languages using much smaller dictionaries and even without dictionaries. Also, we see that self-attention is effective when applied on their data, which also does not carry the correct word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Dictionary-Based Translation",
"sec_num": "4.2.1"
},
{
"text": "In this section, we study the effects of different ways of using bilingual word embeddings and the resulting induced translations. As we pointed out previously, finding translations has two advantages:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Translation Work Better?",
"sec_num": "4.2.2"
},
{
"text": "(1) the model can be trained on the exact points from the target embedding space, and (2) the model has access to the target language's original character sequences. Here, we conduct ablation studies over these two variables. Specifically, we consider the following three variants. 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Translation Work Better?",
"sec_num": "4.2.2"
},
{
"text": "\u2022 Common space This is the most common setting for using bilingual word embeddings, and has recently been applied in NER (Ni et al., 2017) . In short, the source and target word embeddings are cast into a common space, namely X = XV and Y = Y U , and the model is trained with the source side embedding and the source character sequence, and directly applied on the target side.",
"cite_spans": [
{
"start": 121,
"end": 138,
"text": "(Ni et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Translation Work Better?",
"sec_num": "4.2.2"
},
{
"text": "\u2022 Replace In this setting, we replace each original word embedding x i with its nearest neighbor y i in the common space but do not perform translation. This way, the model will be trained with target word embeddings and source-side character sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Translation Work Better?",
"sec_num": "4.2.2"
},
{
"text": "\u2022 Translation This is our proposed approach, where the model is trained on both exact points in the target space and target language character sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Why Does Translation Work Better?",
"sec_num": "4.2.2"
},
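The three variants differ in which vectors the model sees at training time. For the "common space" variant, both embedding matrices are simply projected with the learned linear maps and the tagger is transferred directly; a sketch with random orthogonal matrices standing in for the learned maps V and U (all names and shapes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))    # 5 source words, embedding dim 4
Y = rng.standard_normal((6, 4))    # 6 target words
# Random orthogonal matrices standing in for the learned maps V and U.
V = np.linalg.qr(rng.standard_normal((4, 4)))[0]
U = np.linalg.qr(rng.standard_normal((4, 4)))[0]
X_common = X @ V                   # train the tagger on these rows ...
Y_common = Y @ U                   # ... and apply it directly to these
```

Because the maps are orthogonal, projection preserves norms and pairwise distances within each space; the "replace" and "translation" variants instead swap in actual target-space rows (and, for "translation", the target character sequences as well).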
{
"text": "The three variants are compared in Table 2 . The \"common space\" variant performs the worst by a large margin, confirming our hypothesis that discrepancy between the two embedding spaces harms the model's ability to generalize. From the comparison between the \"replace\" and \"translation,\" we observe that having access to the target language's character sequence helps performance, especially for German, perhaps due in part to its capitalization patterns, which differ from English. In this case, we have to lower-case all the words for character inputs in order to prevent the model from overfitting the English capitalization pattern. ",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Why Does Translation Work Better?",
"sec_num": "4.2.2"
},
{
"text": "In this section, we directly apply our approach to Uyghur, a truly low-resource language with very limited monolingual and parallel resources. We test our model on 199 annotated evaluation documents from the DARPA LORELEI program (the \"unsequestered set\") and compare with previously reported results in the cross-lingual setting by Mayhew et al. (2017) . Similar to our previous experiments, we transfer from English, use fast-Text embeddings trained on Common Crawl and Wikipedia 6 and a provided dictionary to perform translation, and use GloVe trained on a monolingual corpus that has 30 million tokens to perform NER. Results are presented in Table 3 . Our method performs competitively, considering that we use a much smaller dictionary than Mayhew et al. (2017) and no knowledge from Wikipedia in Uyghur. Our best results come from a combined approach: using word embeddings to translate words that are not covered by Mayhew et al. (2017) 's dictionary (last line of Table 3 ). Note that for the CoNLL languages, Mayhew et al. (2017) used Wikipedia for the Wikifier features (Tsai et al., 2016) , while for Uyghur they used it for translating named entities, which is crucial for low-resource languages when some named entities are not covered by the dictionary or the translation is not reliable. We suspect that the unreliable translation of named entities is the ma-6 https://github.com/facebookresearch/ fastText/blob/master/docs/crawl-vectors. md jor reason why our method alone performs worse but performs better when combined with their data that has access to higher quality translations of named entities.",
"cite_spans": [
{
"start": 333,
"end": 353,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 748,
"end": 768,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 925,
"end": 945,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 1020,
"end": 1040,
"text": "Mayhew et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 1082,
"end": 1101,
"text": "(Tsai et al., 2016)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [
{
"start": 648,
"end": 655,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 974,
"end": 981,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Case Study: Uyghur",
"sec_num": "4.3"
},
{
"text": "The table omits results using adversarial learning and identical character strings, as both failed (F 1 scores around 10). We attribute these failures to the low quality of Uyghur word embeddings and the fact that the two languages are distant. Also, Uyghur is mainly written in Arabic script, making the identical character method inappropriate. Overall, this reveals a practical challenge for multilingual embedding methods, where the underlying distributions of the text in the two languages are divergent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Case Study: Uyghur",
"sec_num": "4.3"
},
{
"text": "Cross-Lingual Learning Cross-lingual learning approaches can be loosely classified into two categories: annotation projection and languageindependent transfer. Annotation projection methods create training data by using parallel corpora to project annotations from the source to the target language. Such approaches have been applied to many tasks under the cross-lingual setting, such as POS tagging (Yarowsky et al., 2001; Das and Petrov, 2011; T\u00e4ckstr\u00f6m et al., 2013; Fang and Cohn, 2016) , mention detection (Zitouni and Florian, 2008) and parsing (Hwa et al., 2005; McDonald et al., 2011) .",
"cite_spans": [
{
"start": 401,
"end": 424,
"text": "(Yarowsky et al., 2001;",
"ref_id": "BIBREF48"
},
{
"start": 425,
"end": 446,
"text": "Das and Petrov, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 447,
"end": 470,
"text": "T\u00e4ckstr\u00f6m et al., 2013;",
"ref_id": "BIBREF39"
},
{
"start": 471,
"end": 491,
"text": "Fang and Cohn, 2016)",
"ref_id": "BIBREF13"
},
{
"start": 512,
"end": 539,
"text": "(Zitouni and Florian, 2008)",
"ref_id": "BIBREF52"
},
{
"start": 552,
"end": 570,
"text": "(Hwa et al., 2005;",
"ref_id": "BIBREF20"
},
{
"start": 571,
"end": 593,
"text": "McDonald et al., 2011)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Language independent transfer-based approaches build models using language independent and delexicalized features. For instance, Zirikly and Hagiwara (2015) transfers word cluster and gazetteer features through the use of comparable copora. Tsai et al. (2016) links words to Wikipedia entries and uses the entry category as features to train language independent NER models. Recently, Ni et al. (2017) propose to project word embeddings into a common space as language independent features. These approaches utilize such features by training a model on the source language and directly applying it to the target language.",
"cite_spans": [
{
"start": 129,
"end": 156,
"text": "Zirikly and Hagiwara (2015)",
"ref_id": "BIBREF51"
},
{
"start": 241,
"end": 259,
"text": "Tsai et al. (2016)",
"ref_id": "BIBREF43"
},
{
"start": 385,
"end": 401,
"text": "Ni et al. (2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Another way of performing language independent transfer resorts to multi-task learning, where a model is trained jointly across different languages by sharing parameters to allow for knowledge transfer (Ammar et al., 2016a; Cotterell and Duh, 2017; Lin et al., 2018) . However, such approaches usually require some amounts of training data in the target language for bootstrapping, which is different from our unsupervised approach that requires no labeled resources in the target language.",
"cite_spans": [
{
"start": 202,
"end": 223,
"text": "(Ammar et al., 2016a;",
"ref_id": "BIBREF0"
},
{
"start": 224,
"end": 248,
"text": "Cotterell and Duh, 2017;",
"ref_id": "BIBREF9"
},
{
"start": 249,
"end": 266,
"text": "Lin et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Bilingual Word Embeddings There have been two general paradigms in obtaining bilingual word vectors besides using dictionaries: through parallel corpora and through joint training. Approaches based on parallel corpora usually learn bilingual word embeddings that can produce similar representations for aligned sentences (Hermann and Blunsom, 2014; Chandar et al., 2014) . Jointlytrained models combine the common monolingual training objective with a cross-lingual training objective that often comes from parallel corpus (Zou et al., 2013; Gouws et al., 2015) . Recently, unsupervised approaches also have been used to align two sets of word embeddings by learning a mapping through adversarial learning or selflearning (Zhang et al., 2017; Artetxe et al., 2017; Lample et al., 2018) .",
"cite_spans": [
{
"start": 321,
"end": 348,
"text": "(Hermann and Blunsom, 2014;",
"ref_id": null
},
{
"start": 349,
"end": 370,
"text": "Chandar et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 523,
"end": 541,
"text": "(Zou et al., 2013;",
"ref_id": "BIBREF53"
},
{
"start": 542,
"end": 561,
"text": "Gouws et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 722,
"end": 742,
"text": "(Zhang et al., 2017;",
"ref_id": "BIBREF49"
},
{
"start": 743,
"end": 764,
"text": "Artetxe et al., 2017;",
"ref_id": "BIBREF3"
},
{
"start": 765,
"end": 785,
"text": "Lample et al., 2018)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we propose two methods to tackle the cross-lingual NER problem under the unsupervised transfer setting. To address the challenge of lexical mapping, we find translations of words in a shared embedding space built from a seed lexicon. To alleviate word order divergence across languages, we add a self-attention mechanism to our neural architecture. With these methods combined, we are able to achieve state-of-the-art or competitive results on commonly tested languages under a cross-lingual setting, with lower resource requirements than past approaches. We also evaluate the challenges of applying these methods to an extremely low-resource language, Uyghur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The source code is available at https://github. com/thespectrewithin/cross-lingual_NER",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use k = 3 in this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We thank the authors ofMayhew et al. (2017) for sharing their data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/facebookresearch/ MUSE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this study, we use GloVe for learning bilingual embeddings and word translations instead of fastText.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Stephen Mayhew for sharing the data, and Zihang Dai for meaningful discussion. This research was sponsored by Defense Advanced Research Projects Agency Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program, issued by DARPA/I2O under Contract No. HR0011-15-C0114. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. government. The U.S. government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright notation here on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Many languages, one parser",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "431--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2016a. Many lan- guages, one parser. Transactions of the Association for Computational Linguistics, 4:431-444.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Massively multilingual word embeddings",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. 2016b. Massively multilingual word embeddings. https://arxiv.org/pdf/1602.01925.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning principled bilingual mappings of word embeddings while preserving monolingual invariance",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2289--2294",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2016. Learning principled bilingual mappings of word em- beddings while preserving monolingual invariance. In EMNLP, pages 2289-2294.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning bilingual word embeddings with (almost) no bilingual data",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "451--462",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In ACL, pages 451-462.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Phonologically aware neural model for named entity recognition in low resource transfer settings",
"authors": [
{
"first": "Akash",
"middle": [],
"last": "Bharadwaj",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mortensen",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1462--1472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akash Bharadwaj, David Mortensen, Chris Dyer, and Jaime Carbonell. 2016. Phonologically aware neu- ral model for named entity recognition in low re- source transfer settings. In EMNLP, pages 1462- 1472.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An autoencoder approach to learning bilingual word representations",
"authors": [
{
"first": "Sarath",
"middle": [],
"last": "Chandar",
"suffix": ""
},
{
"first": "Stanislas",
"middle": [],
"last": "Lauly",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Mitesh",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "Balaraman",
"middle": [],
"last": "Ravindran",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Vikas",
"suffix": ""
},
{
"first": "Amrita",
"middle": [],
"last": "Raykar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saha",
"suffix": ""
}
],
"year": 2014,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "1853--1861",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In NIPS, pages 1853-1861.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Named entity recognition with bidirectional lstm-cnns",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Chiu",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
}
],
"year": 2016,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "4",
"issue": "",
"pages": "357--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional lstm-cnns. Transac- tions of the Association for Computational Linguis- tics, 4:357-370.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lowresource named entity recognition with crosslingual, character-level neural conditional random fields",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2017,
"venue": "IJCNLP",
"volume": "",
"issue": "",
"pages": "91--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Kevin Duh. 2017. Low- resource named entity recognition with cross- lingual, character-level neural conditional random fields. In IJCNLP, pages 91-96.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised part-of-speech tagging with bilingual graph-based projections",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2011,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "600--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In ACL, pages 600-609.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Improving zero-shot learning by mitigating the hubness problem",
"authors": [
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgiana Dinu and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness prob- lem. CoRR, abs/1412.6568.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building a multilingual named entityannotated corpus using annotation projection",
"authors": [
{
"first": "Maud",
"middle": [],
"last": "Ehrmann",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Ralf",
"middle": [],
"last": "Steinberger",
"suffix": ""
}
],
"year": 2011,
"venue": "RANLP",
"volume": "",
"issue": "",
"pages": "118--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maud Ehrmann, Marco Turchi, and Ralf Steinberger. 2011. Building a multilingual named entity- annotated corpus using annotation projection. In RANLP, pages 118-124.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning when to trust distant supervision: An application to lowresource POS tagging using cross-lingual projection",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2016,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Fang and Trevor Cohn. 2016. Learning when to trust distant supervision: An application to low- resource POS tagging using cross-lingual projection. In CoNLL, pages 178-186.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Model transfer for tagging low-resource languages using a bilingual dictionary",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "587--593",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Fang and Trevor Cohn. 2017. Model transfer for tagging low-resource languages using a bilingual dictionary. In ACL, pages 587-593.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Improving vector space word representations using multilingual correlation",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "462--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In ACL, pages 462-471.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Bilbowa: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "748--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed represen- tations without word alignments. In ICML, pages 748-756.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Cross-lingual dependency parsing based on distributed representations",
"authors": [
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL",
"volume": "1",
"issue": "",
"pages": "1234--1244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2015. Cross-lingual depen- dency parsing based on distributed representations. In ACL, volume 1, pages 1234-1244.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multilingual models for compositional distributed semantics",
"authors": [],
"year": 2014,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "58--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann and Phil Blunsom. 2014. Multi- lingual models for compositional distributed seman- tics. In ACL, pages 58-68.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Bidirectional LSTM-CRF models for sequence tagging",
"authors": [
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bootstrapping parsers via syntactic projection across parallel texts",
"authors": [
{
"first": "Rebecca",
"middle": [],
"last": "Hwa",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
},
{
"first": "Amy",
"middle": [],
"last": "Weinberg",
"suffix": ""
},
{
"first": "Clara",
"middle": [],
"last": "Cabezas",
"suffix": ""
},
{
"first": "Okan",
"middle": [],
"last": "Kolak",
"suffix": ""
}
],
"year": 2005,
"venue": "Natural language engineering",
"volume": "11",
"issue": "3",
"pages": "311--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural language engineering, 11(3):311-325.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Multilingual named entity recognition using parallel data and metadata from wikipedia",
"authors": [
{
"first": "Sungchul",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Hwanjo",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2012,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "694--702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungchul Kim, Kristina Toutanova, and Hwanjo Yu. 2012. Multilingual named entity recognition using parallel data and metadata from wikipedia. In ACL, pages 694-702.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL, pages 260-270.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv",
"middle": [],
"last": "Jgou",
"suffix": ""
}
],
"year": 2018,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2018. Word translation without parallel data. In ICLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A multi-lingual multi-task architecture for low-resource sequence labeling",
"authors": [
{
"first": "Ying",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Shengqi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2018,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "799--809",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ying Lin, Shengqi Yang, Veselin Stoyanov, and Heng Ji. 2018. A multi-lingual multi-task architecture for low-resource sequence labeling. In ACL, pages 799-809.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A structured self-attentive sentence embedding",
"authors": [
{
"first": "Zhouhan",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Minwei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Cicero",
"middle": [
"Nogueira"
],
"last": "Dos Santos",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhouhan Lin, Minwei Feng, Cicero Nogueira dos San- tos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. In ICLR.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Empower sequence labeling with taskaware neural language model",
"authors": [
{
"first": "L",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Gui",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2018,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Liu, J. Shang, F. Xu, X. Ren, H. Gui, J. Peng, and J. Han. 2018. Empower sequence labeling with task- aware neural language model. In AAAI.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "End-to-end sequence labeling via bi-directional lstm-cnns-crf",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuezhe Ma and Eduard Hovy. 2016. End-to-end se- quence labeling via bi-directional lstm-cnns-crf. In ACL, pages 1064-1074.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Cheap translation for cross-lingual named entity recognition",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "2526--2535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In EMNLP, pages 2526-2535.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Multi-source transfer of delexicalized dependency parsers",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Hall",
"suffix": ""
}
],
"year": 2011,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "62--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In EMNLP, pages 62-72.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploiting similarities among languages for machine translation",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation. CoRR, abs/1309.4168.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111-3119.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Weakly supervised cross-lingual named entity recognition via effective annotation and representation projection",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Georgiana",
"middle": [],
"last": "Dinu",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1470--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Ni, Georgiana Dinu, and Radu Florian. 2017. Weakly supervised cross-lingual named entity recognition via effective annotation and representa- tion projection. In ACL, pages 1470-1480.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning multilingual named entity recognition from wikipedia",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Nothman",
"suffix": ""
},
{
"first": "Nicky",
"middle": [],
"last": "Ringland",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Tara",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "James R",
"middle": [],
"last": "Curran",
"suffix": ""
}
],
"year": 2013,
"venue": "Artificial Intelligence",
"volume": "194",
"issue": "",
"pages": "151--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning mul- tilingual named entity recognition from wikipedia. Artificial Intelligence, 194:151-175.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP, pages 1532-1543.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Semi-supervised sequence tagging with bidirectional language models",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Russell",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "",
"issue": "",
"pages": "1756--1765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Waleed Ammar, Chandra Bhagavat- ula, and Russell Power. 2017. Semi-supervised se- quence tagging with bidirectional language models. In ACL, pages 1756-1765.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "2227--2237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In NAACL, pages 2227-2237.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2017,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "338--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of lstm-networks for sequence tagging. In EMNLP, pages 338-348.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"H",
"P"
],
"last": "Turban",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Hamblin",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In ICLR.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Token and type constraints for cross-lingual part-of-speech tagging",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"T"
],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2013,
"venue": "TACL",
"volume": "1",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Dipanjan Das, Slav Petrov, Ryan T. McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. TACL, 1:1-12.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Cross-lingual word clusters for direct transfer of linguistic structure",
"authors": [
{
"first": "Oscar",
"middle": [],
"last": "T\u00e4ckstr\u00f6m",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2012,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "477--487",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oscar T\u00e4ckstr\u00f6m, Ryan McDonald, and Jakob Uszko- reit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure. In NAACL, pages 477-487.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
}
],
"year": 2002,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In CoNLL, pages 1-4.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In CoNLL, pages 142-147.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Cross-lingual named entity recognition via wikification",
"authors": [
{
"first": "Chen-Tse",
"middle": [],
"last": "Tsai",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mayhew",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "219--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen-Tse Tsai, Stephen Mayhew, and Dan Roth. 2016. Cross-lingual named entity recognition via wikifica- tion. In CoNLL, pages 219-228.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000-6010.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Cross-lingual projected expectation regularization for weakly supervised learning",
"authors": [
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "2",
"issue": "",
"pages": "55--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengqiu Wang and Christopher D. Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Transactions of the Association for Computational Linguistics (TACL), 2(5):55-66.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Multi-task cross-lingual sequence tagging from scratch",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2016. Multi-task cross-lingual sequence tag- ging from scratch. CoRR, abs/1603.06270.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Transfer learning for sequence tagging with hierarchical recurrent networks",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Ruslan Salakhutdinov, and William W. Cohen. 2017. Transfer learning for sequence tag- ging with hierarchical recurrent networks.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Inducing multilingual text analysis tools via robust projection across aligned corpora",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Ngai",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Wicentowski",
"suffix": ""
}
],
"year": 2001,
"venue": "HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky, G. Ngai, and R. Wicentowski. 2001. In- ducing multilingual text analysis tools via robust projection across aligned corpora. In HLT.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Adversarial training for unsupervised bilingual lexicon induction",
"authors": [
{
"first": "Meng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Huanbo",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2017,
"venue": "ACL",
"volume": "1",
"issue": "",
"pages": "1959--1970",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In ACL, volume 1, pages 1959-1970.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Ten pairs to tag -multilingual POS tagging via coarse mapping between embeddings",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Gaddy",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "Tommi",
"middle": [
"S"
],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2016,
"venue": "NAACL",
"volume": "",
"issue": "",
"pages": "1307--1317",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan Zhang, David Gaddy, Regina Barzilay, and Tommi S. Jaakkola. 2016. Ten pairs to tag -mul- tilingual POS tagging via coarse mapping between embeddings. In NAACL, pages 1307-1317.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Crosslingual transfer of named entity recognizers without parallel corpora",
"authors": [
{
"first": "Ayah",
"middle": [],
"last": "Zirikly",
"suffix": ""
},
{
"first": "Masato",
"middle": [],
"last": "Hagiwara",
"suffix": ""
}
],
"year": 2015,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "390--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ayah Zirikly and Masato Hagiwara. 2015. Cross- lingual transfer of named entity recognizers without parallel corpora. In ACL, pages 390-396. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Mention detection crossing the language barrier",
"authors": [
{
"first": "Imed",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
}
],
"year": 2008,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "600--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imed Zitouni and Radu Florian. 2008. Mention detec- tion crossing the language barrier. In EMNLP, pages 600-609.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Bilingual word embeddings for phrase-based machine translation",
"authors": [
{
"first": "Will",
"middle": [
"Y"
],
"last": "Zou",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "1393--1398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Will Y Zou, Richard Socher, Daniel Cer, and Christo- pher D Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In EMNLP, pages 1393-1398.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Self-attentive Bi-LSTM-CRF Model embeddings may not generalize well to the slightly different embeddings of the target language.",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "are fed into a word level Bi-LSTM, which models the contextual dependency within each sentence and outputs a sequence of context sensitive hidden representations(h 1 , h 2 , \u2022 \u2022 \u2022 , h n ).A CRF layer is then applied on top of the word level LSTM and takes in as its input the sequence of hidden representations(h 1 , h 2 , \u2022 \u2022 \u2022 , h n ), and defines the joint distribution of all possible output label sequences. The Viterbi algorithm is used during decoding.",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Model</td><td>Spanish</td><td>Dutch</td><td>German</td><td>Extra Resources</td></tr><tr><td>*</td><td>T\u00e4ckstr\u00f6m et al. (2012)</td><td>59.30</td><td>58.40</td><td>40.40</td><td>parallel corpus</td></tr><tr><td>*</td><td>Nothman et al. (2013)</td><td>61.0</td><td>64.00</td><td>55.80</td><td>Wikipedia</td></tr><tr><td>*</td><td>Tsai et al. (2016)</td><td>60.55</td><td>61.60</td><td>48.10</td><td>Wikipedia</td></tr><tr><td>*</td><td>Ni et al. (2017)</td><td>65.10</td><td>65.40</td><td>58.50</td><td>Wikipedia, parallel corpus, 5K dict.</td></tr><tr><td>* \u2020</td><td>Mayhew et al. (2017)</td><td>65.95</td><td>66.50</td><td>59.11</td><td>Wikipedia, 1M dict.</td></tr><tr><td colspan=\"2\">Mayhew et al. (2017) (only Eng. data)</td><td>51.82</td><td>53.94</td><td>50.96</td><td>1M dict.</td></tr><tr><td colspan=\"2\">Our methods:</td><td/><td/><td/><td/></tr><tr><td/><td>BWET (id.c.)</td><td colspan=\"4\">71.14 \u00b1 0.60 70.24 \u00b1 1.18 57.03 \u00b1 0.25 -</td></tr><tr><td/><td>BWET (id.c.) + self-att.</td><td colspan=\"4\">72.37 \u00b1 0.65 70.40 \u00b1 1.16 57.76 \u00b1 0.12 -</td></tr><tr><td/><td>BWET (adv.)</td><td colspan=\"4\">70.54 \u00b1 0.85 70.13 \u00b1 1.04 55.71 \u00b1 0.47 -</td></tr><tr><td/><td colspan=\"5\">BWET (adv.) + self-att. 71.03 Our supervised results 86.26 \u00b1 0.40 86.40 \u00b1 0.17 78.16 \u00b1 0.45 annotated corpus</td></tr></table>",
"type_str": "table",
"text": "\u00b1 0.44 71.25 \u00b1 0.79 56.90 \u00b1 0.76 -BWET 71.33 \u00b1 1.26 69.39 \u00b1 0.53 56.95 \u00b1 1.20 10K dict. BWET + self-att. 71.67 \u00b1 0.86 70.90 \u00b1 1.09 57.43 \u00b1 0.95 10K dict. BWET on data from Mayhew et al. (2017) 66.53 \u00b1 1.12 69.24 \u00b1 0.66 55.39 \u00b1 0.98 1M dict. BWET + self-att. on data from Mayhew et al. (2017) 66.90 \u00b1 0.65 69.31 \u00b1 0.49 55.98 \u00b1 0.65 1M dict.",
"num": null
},
"TABREF1": {
"html": null,
"content": "<table><tr><td>Model</td><td>Spanish</td><td>Dutch</td><td>German</td></tr><tr><td colspan=\"2\">Common space 65</td><td/><td/></tr></table>",
"type_str": "table",
"text": ".40 \u00b1 1.22 66.15 \u00b1 1.62 43.73 \u00b1 0.94 Replace 68.21 \u00b1 1.22 69.37 \u00b1 1.33 48.59 \u00b1 1.21 Translation 69.21 \u00b1 0.95 69.39 \u00b1 1.21 53.94 \u00b1 0.66",
"num": null
},
"TABREF2": {
"html": null,
"content": "<table><tr><td colspan=\"2\">Model</td><td colspan=\"2\">Uyghur Unsequestered Set Extra Resources</td></tr><tr><td>* \u2020</td><td>Mayhew et al. (2017)</td><td>51.32</td><td>Wikipedia, 100K dict.</td></tr><tr><td colspan=\"2\">Mayhew et al. (2017) (only Eng. data)</td><td>27.20</td><td>Wikipedia, 100K dict.</td></tr><tr><td/><td>BWET</td><td>25.73 \u00b1 0.89</td><td>5K dict.</td></tr><tr><td/><td>BWET + self-att.</td><td>26.38 \u00b1 0.34</td><td>5K dict.</td></tr><tr><td>*</td><td>BWET on data from Mayhew et al. (2017)</td><td>30.20 \u00b1 0.98</td><td>Wikipedia, 100K dict.</td></tr><tr><td>*</td><td colspan=\"2\">BWET + self-att. on data from Mayhew et al. (2017) 30.68 \u00b1 0.45</td><td>Wikipedia, 100K dict.</td></tr><tr><td>*</td><td>Combined (see text)</td><td>31.61 \u00b1 0.46</td><td>Wikipedia, 100K dict., 5K dict.</td></tr><tr><td>*</td><td>Combined + self-att.</td><td>32.09 \u00b1 0.61</td><td>Wikipedia, 100K dict., 5K dict.</td></tr></table>",
"type_str": "table",
"text": "Comparison of different ways of using bilingual word embeddings, within our method (NER F 1 ).",
"num": null
},
"TABREF3": {
"html": null,
"content": "<table/>",
"type_str": "table",
"text": "NER F 1 scores on Uyghur. * Approaches using language-specific features and resources (\"Wikipedia\" means Wikipedia is used not as a monolingual corpus, but to provide external knowledge). \u2020 Approaches that transfer from multiple languages and use language-specific techniques.",
"num": null
}
}
}
}