{
"paper_id": "S19-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:47:16.864340Z"
},
"title": "Scalable Cross-Lingual Transfer of Neural Sentence Embeddings",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": "aldarmaki@gwu.edu"
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The George Washington University",
"location": {}
},
"email": "diabmona@amazon.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We develop and investigate several crosslingual alignment approaches for neural sentence embedding models, such as the supervised inference classifier, InferSent, and sequential encoder-decoder models. We evaluate three alignment frameworks applied to these models: joint modeling, representation transfer learning, and sentence mapping, using parallel text to guide the alignment. Our results support representation transfer as a scalable approach for modular cross-lingual alignment of neural sentence embeddings, where we observe better performance compared to joint models in intrinsic and extrinsic evaluations, particularly with smaller sets of parallel data.",
"pdf_parse": {
"paper_id": "S19-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "We develop and investigate several crosslingual alignment approaches for neural sentence embedding models, such as the supervised inference classifier, InferSent, and sequential encoder-decoder models. We evaluate three alignment frameworks applied to these models: joint modeling, representation transfer learning, and sentence mapping, using parallel text to guide the alignment. Our results support representation transfer as a scalable approach for modular cross-lingual alignment of neural sentence embeddings, where we observe better performance compared to joint models in intrinsic and extrinsic evaluations, particularly with smaller sets of parallel data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Probabilistic sentence representation models generally fall into two categories: bottom-up compositional models, where sentence embeddings are composed from word embeddings via a linear function like averaging, and top-down compositional models that are trained with a sentencelevel objective, typically within a neural architecture. Sequential data like sentences can be modeled using recurrent, recursive, or convolutional networks, which can implicitly learn intermediate sentence representations suitable for each learning task. Depending on the training objective, these intermediate representations sometimes encode enough semantic and syntactic features to be suitable as general-purpose sentence embeddings. For examples, it was shown in Conneau et al. (2017a) that a model trained to maximize inference classification accuracy can yield generic representations that perform well across a wide set of extrinsic classification benchmarks. Other training objectives, like denoising auto-encoders or neural sequence to sequence models (Hill et al., 2016) , can also yield general-purpose representations with different characteristics. While bottomup models can achieve superior performance in tasks that are independent of syntax, such as topic categorization, neural models often yield representations that encode syntactic and positional features, which results in superior performance in tasks that rely on sentence structure .",
"cite_spans": [
{
"start": 746,
"end": 768,
"text": "Conneau et al. (2017a)",
"ref_id": "BIBREF11"
},
{
"start": 1040,
"end": 1059,
"text": "(Hill et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "General-purpose sentence embeddings can be used as features in various classification tasks, or to directly assess the similarity of a pair of sentences using the cosine measure. It is often desired to generalize word and sentence embeddings across several languages to facilitate cross-lingual transfer learning (Zhou et al., 2016) and mining of parallel sentences . For word embeddings, cross-lingual learning can be achieved in various ways (Upadhyay et al., 2016) , such as learning directly with a cross-lingual objective (Shi et al., 2015) or post-hoc alignment of monolingual word embeddings using dictionaries (Ammar et al., 2016) , parallel corpora (Gouws et al., 2015; Klementiev et al., 2012) , or even with no bilingual supervision (Conneau et al., 2017b; . For bottom-up composition like vector averaging, word-level alignment is sufficient to yield cross-lingual sentence embeddings. For top-down sentence embeddings, the efforts in cross-lingual learning are more limited. Typically, a multi-faceted cross-lingual learning objective is used to align the sentence models while training them, as in Soyer et al. (2014) . Cross-lingual sentence embeddings can also be learned via a neural machine translation framework trained jointly for multiple languages (Schwenk and Douze, 2017) .",
"cite_spans": [
{
"start": 313,
"end": 332,
"text": "(Zhou et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 444,
"end": 467,
"text": "(Upadhyay et al., 2016)",
"ref_id": "BIBREF31"
},
{
"start": 527,
"end": 545,
"text": "(Shi et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 618,
"end": 638,
"text": "(Ammar et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 658,
"end": 678,
"text": "(Gouws et al., 2015;",
"ref_id": "BIBREF15"
},
{
"start": 679,
"end": 703,
"text": "Klementiev et al., 2012)",
"ref_id": "BIBREF21"
},
{
"start": 744,
"end": 767,
"text": "(Conneau et al., 2017b;",
"ref_id": "BIBREF12"
},
{
"start": 1112,
"end": 1131,
"text": "Soyer et al. (2014)",
"ref_id": "BIBREF29"
},
{
"start": 1270,
"end": 1295,
"text": "(Schwenk and Douze, 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While they indeed yield cross-lingual embeddings, the joint training models in existing literature pose some practical limitations: simultaneous training requires massive computational re-sources, particularly for sequential models like the bi-directional LSTM networks typically used to encode sentences. In addition, the joint framework does not allow post-hoc or modular training, where new languages can be added and aligned to existing pre-trained encoders. More recently, proposed an approach for crosslingual sentence embeddings by aligning encoders of new languages to a pre-trained English encoder using parallel corpora. Such approach promises to be more suitable for modular training of general sentence encoders, although so far it has only been evaluated in natural language inference classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we develop and evaluate three alignment frameworks: joint modeling, representation transfer learning, and sentence mapping, applied on two modern general-purpose sentence embedding models: the inference-based encoder, InferSent (Conneau et al., 2017a) , and the sequential denoising auto-encoder, SDAE (Hill et al., 2016) . For most approaches, we rely on parallel sentences as sentence-level dictionaries for cross-lingual supervision. We report the performance on sentence translation retrieval and crosslingual document classification. Our results support representation transfer as a scalable approach for modular cross-lingual alignment that works well across different neural models and evaluation benchmarks.",
"cite_spans": [
{
"start": 243,
"end": 266,
"text": "(Conneau et al., 2017a)",
"ref_id": "BIBREF11"
},
{
"start": 317,
"end": 336,
"text": "(Hill et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Learning bilingual compositional representations can be achieved by optimizing a bilingual objective on parallel corpora. In Pham et al. (2015) , distributed representations for bilingual phrases and sentences are learned using an extended version of the paragraph vector model (Le and Mikolov, 2014) by forcing parallel sentences to share one vector. In Soyer et al. (2014) , cross-lingual compositional embeddings are learned by optimizing a joint bilingual objective that aligns parallel source and target representations by minimizing the Euclidean distances between them, and a monolingual objective that maximizes the similarity between similar phrases. The monolingual objective was implemented by maximizing the similarity between random phrases and subphrases within the same sentence. Cross-lingual representations can also be induced implicitly within a machine learning framework that is trained jointly for multiple language pairs. In Schwenk and Douze (2017) , encoders and decoders for the given languages are trained jointly using a neural sequence to sequence model (Sutskever et al., 2014) using parallel corpora that are partially aligned; that is, each language within a pair is also part of at least one other parallel corpus. Neural machine translation can also be achieved with a single encoder and decoder that handles several input languages (Johnson et al., 2017) , but the latter has not been evaluated as a general-purpose sentence representation model. According to Hill et al. (2016) , the quality of the representations induced using a machine translation objective is lower than other neural models trained with different compositional objectives, such as Denoising Auto-Encoders and Skip-Thought (Kiros et al., 2015) . Mono-lingual evaluation of sentence representation models can be found in Hill et al. (2016) , , and Conneau and Kiela (2018) . 
In Aldarmaki and Diab (2016), a modular training objective has been proposed for cross-lingual sentence embedding. However, their application was limited to the specific matrix factorization model they discussed. More recently, proposed a modular transfer learning objective and evaluated it on neural sentence encoders using cross-lingual natural language inference classification. Our representation transfer framework is very similar to their approach, although we use a simpler loss function. In addition, we evaluate the framework as a general-purpose sentence encoder and compare it to other frameworks.",
"cite_spans": [
{
"start": 125,
"end": 143,
"text": "Pham et al. (2015)",
"ref_id": "BIBREF23"
},
{
"start": 278,
"end": 300,
"text": "(Le and Mikolov, 2014)",
"ref_id": "BIBREF22"
},
{
"start": 355,
"end": 374,
"text": "Soyer et al. (2014)",
"ref_id": "BIBREF29"
},
{
"start": 948,
"end": 972,
"text": "Schwenk and Douze (2017)",
"ref_id": "BIBREF25"
},
{
"start": 1083,
"end": 1107,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF30"
},
{
"start": 1367,
"end": 1389,
"text": "(Johnson et al., 2017)",
"ref_id": "BIBREF18"
},
{
"start": 1495,
"end": 1513,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 1729,
"end": 1749,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF20"
},
{
"start": 1826,
"end": 1844,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF17"
},
{
"start": 1853,
"end": 1877,
"text": "Conneau and Kiela (2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We selected two modern general-purpose sentence embedding models, the Inference-based classification model (InferSent) described in Conneau et al. (2017a) , and the Sequential Denoising Auto-Encoder (SDAE) described in Hill et al. (2016) . Both are implemented using a bidirectional LSTM network as an encoder followed by a classification or decoding network. We describe three possible cross-lingual alignment frameworks:",
"cite_spans": [
{
"start": 132,
"end": 154,
"text": "Conneau et al. (2017a)",
"ref_id": "BIBREF11"
},
{
"start": 219,
"end": 237,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Joint cross-lingual modeling: We extend the monolingual objective of each model to multiple languages to be trained simultaneously via direct cross-lingual interactions in the objective function. This is in line with most existing cross-lingual extensions for top-down compositional models Representation transfer learning: We directly optimize the sentence embeddings of new languages to match their translations in a parallel language (i.e. English). A similar approach was independently developed in .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Sentence mapping: Following the modular alignment framework for word embeddings (Smith et al., 2017), we fit an orthogonal transformation matrix on monolingual embeddings using a parallel corpus as a dictionary. Sentence mapping has been evaluated for word averaging models in Aldarmaki and Diab (2019) .",
"cite_spans": [
{
"start": 277,
"end": 302,
"text": "Aldarmaki and Diab (2019)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approach",
"sec_num": "3"
},
{
"text": "Most neural sentence embedding models are based on a sequential encoder-typically a bi-directional Long Short-Term Memory (Schuster and Paliwal, 1997) -followed by either a sequential decoder or a classifier. These models can be categorized according to their training objective:",
"cite_spans": [
{
"start": 122,
"end": 150,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
{
"text": "Classification Accuracy: Sentence encoders can be trained by maximizing the accuracy in an extrinsic evaluation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
{
"text": "For example, InferSent (Conneau et al., 2017a) is trained on the Stanford Natural Language Inference (SNLI) dataset for inference classification (Bowman et al., 2015sss) . This type of model requires labeled training data, which can make it challenging to expand across different languages.",
"cite_spans": [
{
"start": 23,
"end": 46,
"text": "(Conneau et al., 2017a)",
"ref_id": "BIBREF11"
},
{
"start": 145,
"end": 169,
"text": "(Bowman et al., 2015sss)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
{
"text": "Reconstruction: Using raw monolingual data, sentence encoders can be trained by minimizing the reconstruction loss, where a decoder is trained simultaneously to reconstruct the input sentence from the intermediate representation-e.g. Sequential Auto-Encoder (SAE) and Sequential Denoising Auto-Encoder (SDAE) (Hill et al., 2016) . The latter introduces textual noise on the input sentence to make the embeddings more robust.",
"cite_spans": [
{
"start": 309,
"end": 328,
"text": "(Hill et al., 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
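The denoising setup above corrupts the input and asks the decoder to reproduce the clean sentence. A minimal sketch of such a corruption function, an illustrative stand-in rather than the exact noise model of Hill et al. (2016); the probabilities and the adjacent-swap scheme are assumptions:

```python
import random

def add_noise(tokens, p_drop=0.1, p_swap=0.1, rng=None):
    """Corrupt a token list in the style of a sequential denoising
    auto-encoder: drop each word with probability p_drop, then swap
    adjacent pairs with probability p_swap. Parameter values here are
    illustrative, not taken from the paper."""
    rng = rng or random.Random(0)
    # word deletion
    kept = [t for t in tokens if rng.random() >= p_drop]
    if not kept:                      # never return an empty sentence
        kept = list(tokens)
    # adjacent swaps (local reordering)
    i = 0
    while i < len(kept) - 1:
        if rng.random() < p_swap:
            kept[i], kept[i + 1] = kept[i + 1], kept[i]
            i += 2                    # don't move the same token twice
        else:
            i += 1
    return kept
```

The encoder then embeds `add_noise(sentence)` while the reconstruction loss is computed against the original `sentence`.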
{
"text": "Translation: In Neural Machine Translation (NMT), a model is trained to maximizes the accuracy of generating a translation from the intermediate representation of the source sentence. Unlike modern NMT systems that rely on attention mechanisms, this model is trained for the purpose of sentence embedding, so only the intermediate representations are used as input to the decoder. This model requires parallel corpora for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
{
"text": "The three objectives above are illustrated in Figures 1 and 2. We use the single-layer bidirectional LSTM encoder architecture with max-pooling described in Conneau et al. (2017a) for all encoders, and an LSTM decoders for SDAE and NMT. ",
"cite_spans": [
{
"start": 157,
"end": 179,
"text": "Conneau et al. (2017a)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 46,
"end": 53,
"text": "Figures",
"ref_id": null
}
],
"eq_spans": [],
"section": "Architectures",
"sec_num": "3.1"
},
{
"text": "We first discuss our joint cross-lingual neural models based on the above architectures. Note that joint modeling requires modifying the architecture and objective function of each model in a way that includes simultaneous interactions of cross-lingual sentence embeddings. This can be achieved in various ways with any degree of complexity, but we specifically aim to evaluate a direct extension of each loss function without extraneous objectives or constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Cross-Lingual Modeling",
"sec_num": "3.2"
},
{
"text": "The Sequential Denoising Auto-Encoder (SDAE) is trained to reconstruct the original input sentence from the intermediate sentence representation, where the input is corrupted with linguistic noise, such as word substitutions and reordering (Hill et al., 2016) . This allows the model to robustly learn sentence representations from raw monolingual data. The Neural Machine Translation model, as depicted in Figure 2 , has an identical architecture, with the only difference being the language of the input sentence. A cross-lingual extension of SDAE naturally leads to the NMT objective. We combine the SDAE and NMT objectives in a joint architecture, where multiple encoders are trained simultaneously with a single shared decoder. We alternate the input language (and the encoder) in each training batch, and the intermediate sentence embeddings are used as input to the shared decoder. Since the decoder is trained to predict the target sentence from the intermediate sentence representation regardless of input language identity, the encoders are expected to be updated in a way that results in consistent crosslingual embeddings. Joint multi-lingual NMT has been previously shown to yield cross-lingual representations, as in Schwenk and Douze (2017) .",
"cite_spans": [
{
"start": 240,
"end": 259,
"text": "(Hill et al., 2016)",
"ref_id": "BIBREF17"
},
{
"start": 1231,
"end": 1255,
"text": "Schwenk and Douze (2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 407,
"end": 415,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Joint Cross-Lingual Encoder-Decoder",
"sec_num": "3.2.1"
},
{
"text": "Since InferSent is trained with an extrinsic classification objective, bilingual or multilingual optimization requires annotated data in each language. At the time of development, the SNLI dataset was only available in English 1 , so we translated the training and evaluation datasets to Spanish and German using Amazon Translate. Note that in practice, machine translation might not be a viable option, especially if we try to extend the model to low-resource languages. Modern NMT systems require millions of parallel sentences to achieve good translation performance. For our purposes, the translated data allow us to assess the performance in different settings. Similar to the joint SDAE/NMT model, we train encoders for all languages simultaneously. Since the input to the classifier consists of an ordered pair of sentences, we randomly pick a language for the premise and a language for the hypothesis in each training batch and use their respective encoders. A single classifier is shared regardless of the input languages. Similar to the monolingual case, the model is trained to maximize the performance in the inference classification task, which is cross-lingual in this case. An illustration of a training example is shown in Figure 3 , where the premise is in German and the hypothesis in English.",
"cite_spans": [],
"ref_spans": [
{
"start": 1240,
"end": 1248,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Joint Cross-Lingual InferSent",
"sec_num": "3.3"
},
{
"text": "In the representation transfer framework, we use a monolingual pre-trained model to guide the training of additional encoders without the original supervised training objective. Using a parallel corpus that has source sentences aligned with English translations, we first generate the representations for the English sentences using a pre-trained SDAE or InferSent model. Then, we use these representations as a target to train an encoder for the other language in a supervised manner. The pivot encoder remains unchanged and only the new encoder is updated during training to ensure that independently trained encoders will still be aligned. Several functions can be used to achieve this, such as the L1 or L2 loss to minimize the distances be- tween the source and target representations, or to maximize the cosine of the angle between them. Empirically, we observed no notable difference between these alternatives. 2 The transfer learning approach is illustrated in Figure 4 .",
"cite_spans": [
{
"start": 919,
"end": 920,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 970,
"end": 978,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Representation Transfer Learning",
"sec_num": "3.4"
},
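The transfer objective reduces to supervised regression against the frozen pivot representations. A minimal numpy sketch under stated assumptions: a linear map stands in for the trainable bi-LSTM encoder, and synthetic data replaces real sentence features; all sizes and the learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: X holds features of new-language sentences,
# Y holds the frozen English-side embeddings of their translations
# (produced, in the real setup, by a pre-trained SDAE or InferSent).
n, d_in, d_out = 512, 32, 16
X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_in, d_out))
Y = X @ W_true

# Train the "new encoder" (here just W) with an L2 loss against the
# fixed targets; only W is updated, mirroring the frozen pivot encoder.
W = np.zeros((d_in, d_out))
lr = 0.05
for _ in range(500):
    grad = X.T @ (X @ W - Y) / n   # gradient of the mean squared error
    W -= lr * grad

final_loss = float(np.mean((X @ W - Y) ** 2))
```

Per the paragraph above, an L1 loss or a cosine objective could be substituted with, empirically, little difference.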
{
"text": "We follow the approach used for word-level transformation, where a dictionary is used to fit an orthogonal transformation matrix from the source to the target vector space (Smith et al., 2017). To extend this to sentences, we use a parallel corpus as a dictionary, and fit a transformation matrix between their sentence embeddings. After training, we apply the learned transformation post-hoc on newly generated sentence embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Mapping",
"sec_num": "3.5"
},
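Fitting the orthogonal sentence-mapping matrix is the classic Procrustes problem, solvable in closed form via SVD, following the word-level recipe of Smith et al. (2017) with parallel sentences as the dictionary. A minimal sketch with synthetic embeddings (the function name and toy dimensions are ours):

```python
import numpy as np

def fit_orthogonal_map(src, tgt):
    """Return the orthogonal matrix Q minimizing ||src @ Q - tgt||_F,
    where rows of src and tgt are embeddings of parallel sentences
    (the sentence-level 'dictionary'). Closed-form Procrustes via SVD."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

# Toy check: recover a known orthogonal map from paired embeddings.
rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 8))
q_true, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal map
tgt = src @ q_true
Q = fit_orthogonal_map(src, tgt)
```

After fitting, the map is applied post-hoc to newly generated source-side sentence embeddings (`new_emb @ Q`), matching the post-hoc usage described above.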
{
"text": "In a well-aligned cross-lingual vector space, sentences should be clustered with their translations across various languages. As discussed in Schwenk and Douze (2017) this can be measured with sentence translation retrieval: the accuracy of retrieving the correct translation for each source sentence from the target side of a test parallel corpus. This is done using nearest neighbor search with the cosine as a similarity measure. While not exactly an intrinsic evaluation metric, this scheme is the closest measure of alignment quality at the sentence level across all features in the vector space.",
"cite_spans": [
{
"start": 142,
"end": 166,
"text": "Schwenk and Douze (2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
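The retrieval metric just described, cosine nearest-neighbor search over an aligned test corpus, can be sketched as follows (the function name and toy data are ours):

```python
import numpy as np

def translation_retrieval_accuracy(src_emb, tgt_emb):
    """Accuracy of retrieving, for each source sentence, its translation
    as the cosine nearest neighbor on the target side. Row i of src_emb
    and row i of tgt_emb are assumed to embed a parallel sentence pair."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    nearest = (s @ t.T).argmax(axis=1)  # cosine = dot of unit vectors
    return float(np.mean(nearest == np.arange(len(src_emb))))
```

In a perfectly aligned space the score approaches 1.0, while unaligned embeddings score near chance (1/n for n target sentences).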
{
"text": "We used bottom-up embeddings composed using weighted averaging with smooth inverse frequency (Arora et al., 2017; , which has been shown to work well as monolingual sentence embeddings compared to other bottom-up approaches. We use skipgram with subword information (Bojanowski et al., 2017) , i.e. FastText, for the word embeddings, which are also used as input to the neural models. We applied static dictionary alignment using the approach and dictionaries in Smith et al. 2017, in addition to sentence mapping using the parallel corpora. We trained the monolingual FastText word embeddings and SDAE models using the 1 Billion Word benchmark (Chelba et al., 2014) for English, and WMT'12 News Crawl data for Spanish and German (Callison-Burch et al., 2012) . We used WMT'12 Common Crawl data for cross-lingual alignment, and WMT'12 test sets for evaluations. We used the augmented SNLI data described in (Dasgupta et al., 2018) and their translations for training the mono-lingual and joint InferSent models. For all datasets and languages, the only preprocessing performed was tokenization.",
"cite_spans": [
{
"start": 93,
"end": 113,
"text": "(Arora et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 266,
"end": 291,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 645,
"end": 666,
"text": "(Chelba et al., 2014)",
"ref_id": "BIBREF9"
},
{
"start": 730,
"end": 759,
"text": "(Callison-Burch et al., 2012)",
"ref_id": "BIBREF8"
},
{
"start": 907,
"end": 930,
"text": "(Dasgupta et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
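The smooth-inverse-frequency averaging referenced above weights each word vector by a/(a + p(w)) before averaging (Arora et al., 2017). A minimal sketch, omitting the original method's common-component removal step; the parameter value and helper names are illustrative:

```python
import numpy as np

def sif_embedding(sentences, word_vecs, word_freq, a=1e-3):
    """Smooth-inverse-frequency sentence embeddings: each sentence is
    the average of its word vectors weighted by a / (a + p(w)), where
    word_freq maps words to unigram probabilities. The common-component
    removal of the full method is omitted here for brevity."""
    dim = len(next(iter(word_vecs.values())))
    out = np.zeros((len(sentences), dim))
    for i, sent in enumerate(sentences):
        words = [w for w in sent if w in word_vecs]
        if not words:          # no known words: leave a zero vector
            continue
        weights = np.array([a / (a + word_freq.get(w, 0.0)) for w in words])
        vecs = np.array([word_vecs[w] for w in words])
        out[i] = weights @ vecs / len(words)
    return out
```

Frequent words (high p(w)) are down-weighted toward zero, while rare words keep weights near 1, which is what makes the scheme competitive with plain averaging.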
{
"text": "One of our evaluation objective is to assess the minimal bilingual data requirements for each framework, so we split the training parallel corpora into subsets of increasing size from 1,000 to 1 million sentences, where we double the size in each step. We report sentence translation retrieval accuracies in all language directions, using en for English, es for Spanish, and de for German 3 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "The results of the various SDAE models compared with the baselines are shown in Figure 5 . With less than 100K parallel sentences, the joint SDAE/NMT model yielded poor performance compared to all models, but with 100K and more Figure 5 : Nearest neighbor translation accuracy as a function of (log) parallel corpus size. (sent) to sentence-level mapping, and (dict) refers to the baseline (using a static dictionary for mapping). The legend shows the average accuracies of each model using 1M parallel sentences. data, the model quickly exceeded the performance of all others by a large margin. Transfer learning achieved the second best performance, although it lagged behind the joint model with large parallel sets. With small amounts of parallel text, all models outperformed the joint SDAE/NMT, particularly the word based FastText models. Sentence mapping performed on average better than the static dictionary baseline, but FastText sentence mapping was generally better. Figure 6 shows the results of the InferSent alignment models. Note that the joint InferSent model was trained with supervision using the translated SNLI data instead of the variable-size parallel corpora, so the performance is constant with respect to the number of parallel sentences. The joint model did not learn to align the crosslingual sentences. Figure 6 : Nearest neighbor translation accuracy as a function of (log) parallel corpus size. (sent) to sentence-level mapping, and (dict) refers to the baseline (using a static dictionary for mapping). The legend shows the average accuracies of each model using 1M parallel sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 80,
"end": 89,
"text": "Figure 5",
"ref_id": null
},
{
"start": 229,
"end": 237,
"text": "Figure 5",
"ref_id": null
},
{
"start": 981,
"end": 989,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1334,
"end": 1342,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "tion retrieval accuracies even with relatively small amounts of parallel text (\u223c 5K sentences). Sentence mapping also performed better than the word-based baselines with additional parallel data (> 20K).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.1"
},
{
"text": "In this section, we compare the overall performance of different types of models on sentence translation retrieval. We plotted the average crosslingual accuracy (averaged over all language directions) by the best performing variant of each model in Figure 7 . With small amounts of parallel text, around 5K sentences, the best performance was achieved using InferSent transfer model. The model continued to yield the highest performance until it was exceeded by the joint SDAE/NMT model at 500K sentences. The representation transfer models for SDAE exceeded the FastText model at around 20K sentences, and achieved comparable performance to InferSent sentence mapping. ",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 257,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Overall Evaluation",
"sec_num": "4.2"
},
{
"text": "The people are taking photos of the statue. A group of people looking at a statue. People are gathered by the water. Query: A vehicle is crossing a river Spanish A sedan is stuck in the middle of a river. People are crossing a river. A taxi cab is driving down a path of snow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "A person is near a river. People are crossing a river. A Land Rover is splashing water as it crosses a river. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "The joint InferSent model was trained to maximize the cross-lingual classification accuracy on cross-lingual inference data. The cross-lingual inference classification performance was comparable to the monolingual case for each language. The monolingual accuracies were around 83%, 79%, and 79% for English, German, and Spanish, respectively. The cross-lingual accuracy was around 79%. Given this relatively high performance in NLI classification and the poor performance in cross-lingual translation retrieval, we surmise that the 3-way classification objective is not demanding enough to learn general-purpose semantic representations. In addition, high per-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Joint InferSent Performance",
"sec_num": "4.3"
},
{
"text": "Cross-lingual Nearest Neighbors Query: Tons of people are gathered around the statue Spanish Food and wine are on the table that has many people surrounding it. Some people enjoying their brunch together in the outdoor seating area of a restaurant... The group of people are game developers creating a new video game in their office.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language",
"sec_num": null
},
{
"text": "The group of people are flying in the air on their unicorns . A group of people are standing around with smiles on their faces... A group of people dressed as clowns stroll into the Bigtop Circus holding signs. Query: A vehicle is crossing a river Spanish People and a baby are crossing the street at a crosswalk to get home. The person in the picture is riding a bike slowing up hill , pumping the pedals as hard as they can. The man , wearing scuba gear , jumps off the side of the boat into the ocean below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "A person in a coat with a briefcase walks down a street next to the bus lane. A man waterskiing in a river with a large wall in the background. A person waterskiing in a river with a wall in the background. formance in a specific extrinsic evaluation task is not necessarily an indication of general embedding quality. Tables 1 and 2 show examples of monolingual and cross-lingual nearest neighbors (or their English translations) from the hypotheses in SNLI test sets. The cross-lingual nearest neighbors did share several semantic aspects with the query sentence; subjects or verbs or combinations of these were observed in nearest neighbors. However, the exact translations were not the nearest neighbors in most cases, and the nearest neighbors often included several extraneous pieces of content not present in the query sentence. The mono-lingual nearest neighbors, on the other hand, were more semantically similar to each other, not only in the semantic features that are present, but also in their exclusions of dissimilar details.",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 333,
"text": "Tables 1 and 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "We surmise that only a subset of semantic features were learned by the InferSent objective given the specific characteristics of the SNLI training sets. In other words, the model was not pushed to preserve the full semantic content since only a small subset of features were useful for entailment relationships. The higher similarity among monolingual nearest neighbors is likely an artifact of the underlying word embeddings passing through the same encoder network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "English",
"sec_num": null
},
{
"text": "Relying on a single measure is never sufficient to probe all characteristics of a vector space. Extrinsic evaluation can be another useful tool to measure the effectiveness of various cross-lingual models, although extrinsic tasks typically measure specific and narrow aspects of semantics. Nevertheless, we can still gain some insights about certain characteristics of these models and their applicability. One of the most widely used tasks for cross-lingual evaluation is the Cross-Lingual Document Classification benchmark (CLDC), where a model is trained in one language and tested on another (Schwenk and Li, 2018; Klementiev et al., 2012) .",
"cite_spans": [
{
"start": 597,
"end": 619,
"text": "(Schwenk and Li, 2018;",
"ref_id": "BIBREF26"
},
{
"start": 620,
"end": 644,
"text": "Klementiev et al., 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.4"
},
{
"text": "We report the average classification accuracies in CLDC across all language directions (a total of six directions) using the datasets in Schwenk and Li (2018) ; the multi-layer perceptron was used as a classifier trained for each source language, then tested in the remaining two.",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "Schwenk and Li (2018)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.4"
},
{
"text": "The highest accuracy was achieved using FastText vectors, followed by InferSent transfer and sentence mapping models. With large enough parallel corpora, the performance of SDAE/NMT exceeded the transfer model, but with smaller data, SDAE transfer model achieved consistently higher performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.4"
},
{
"text": "These results are consistent with the trend of these models in mono-lingual topic categorization , where word averaging achieved consistently higher performance than all neural models. This indicates that crosslingual models share the same semantic characteristics as their underlying mono-lingual counterparts. We should underscore that CLDC is a rather coarse categorization task where documents are classified into four categories. Note also that the FastText model achieved relatively high performance even when it was aligned with only 1K parallel sentences, a condition in which sentence translation retrieval accuracy was less that 40%. This poor correlation with sentence translation retrieval accuracies indicates that neither evaluation framework is reliable on its own. Our intuition is that sentence translation retrieval is a more com- prehensive measure since all features in the vector space weigh equally in calculating the cosine similarity; on the other hand, a supervised classifier weighs features according to their correlations with the target classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "4.4"
},
{
"text": "We explored different approaches for cross-lingual alignment of top-down sentence embedding models: joint modeling, representation transfer, and sentence mapping. With sufficient amounts of parallel text, joint modeling yielded superior performance in the joint SDAE and NMT model, while joint InferSent failed to yield good alignments. Our results underscore the difficulty of joint modeling itself in addition to its relatively high data and memory requirements. With smaller amounts of parallel text, representation transfer worked reasonably well across all models, whereas sentence mapping was generally worse. Moreover, the transfer and sentence mapping frameworks enable modular training where additional languages can be added without retraining existing models and without labeled training data (as in InferSent), which allows scaling neural models to more languages with less resources. In extrinsic evaluation using cross-lingual document classification, transfer models achieved consistently better performance than joint models. Between the two sentence embedding models we evaluated, InferSent yielded better performance than SDAE and NMT, except in the joint framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "In practice, joint and transfer learning can be combined in various ways according to data availability and modeling choices. A multi-task framework can be used to optimize both objectives at once. Given the lower data cost of representation transfer models, a joint model can be trained first for a set of resource-rich languages, followed by transfer learning for low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Other cross-lingual natural language inference corpora are now publicly available, but our experiments were conducted before their release.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We settled on using Adam optimization (Kingma and Ba, 2014) with L1 loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This evaluation scheme was recently introduced in Aldarmaki and Diab (2019) with data splits that are now available for download. Note that we used slightly older datasets in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning cross-lingual representations with matrix factorization",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Workshop on Multilingual and Cross-lingual Methods in NLP",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanan Aldarmaki and Mona Diab. 2016. Learning cross-lingual representations with matrix factoriza- tion. In Proceedings of the Workshop on Multilin- gual and Cross-lingual Methods in NLP, pages 1-9.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Evaluation of unsupervised compositional representations",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanan Aldarmaki and Mona Diab. 2018. Evaluation of unsupervised compositional representations. Pro- ceedings of the 27th International Conference on Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Contextaware crosslingual mapping",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1903.03243"
]
},
"num": null,
"urls": [],
"raw_text": "Hanan Aldarmaki and Mona Diab. 2019. Context- aware crosslingual mapping. arXiv preprint arXiv:1903.03243.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised word mapping using structural similarities in monolingual embeddings",
"authors": [
{
"first": "Hanan",
"middle": [],
"last": "Aldarmaki",
"suffix": ""
},
{
"first": "Mahesh",
"middle": [],
"last": "Mohan",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hanan Aldarmaki, Mahesh Mohan, and Mona Diab. 2018. Unsupervised word mapping using structural similarities in monolingual embeddings. Transac- tions of the Association of Computational Linguis- tics, 6.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Massively multilingual word embeddings",
"authors": [
{
"first": "Waleed",
"middle": [],
"last": "Ammar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Mulcaire",
"suffix": ""
},
{
"first": "Yulia",
"middle": [],
"last": "Tsvetkov",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1602.01925"
]
},
"num": null,
"urls": [],
"raw_text": "Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A large annotated corpus for learning natural language inference",
"authors": [
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015sss. A large anno- tated corpus for learning natural language inference. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Findings of the 2012 workshop on statistical machine translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Seventh Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "10--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 workshop on statistical ma- chine translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, pages 10-51.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "One billion word benchmark for measuring progress in statistical language modeling",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Ge",
"suffix": ""
}
],
"year": 2014,
"venue": "Fifteenth Annual Conference of the International Speech Communication Association",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2014. One billion word benchmark for mea- suring progress in statistical language modeling. In Fifteenth Annual Conference of the International Speech Communication Association.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Senteval: An evaluation toolkit for universal sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Supervised learning of universal sentence representations from natural language inference data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017a. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Word translation without parallel data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv\u00e9 J\u00e9gou. 2017b. Word translation without parallel data. arXiv preprint arXiv:1710.04087.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Xnli: Evaluating crosslingual sentence representations",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Adina",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Samuel",
"middle": [
"R"
],
"last": "Bowman",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Nat- ural Language Processing. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Evaluating compositionality in sentence embeddings",
"authors": [
{
"first": "Ishita",
"middle": [],
"last": "Dasgupta",
"suffix": ""
},
{
"first": "Demi",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Stuhlm\u00fcller",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Noah D",
"middle": [],
"last": "Gershman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.04302"
]
},
"num": null,
"urls": [],
"raw_text": "Ishita Dasgupta, Demi Guo, Andreas Stuhlm\u00fcller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embed- dings. arXiv preprint arXiv:1802.04302.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bilbowa: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 32nd International Conference on Machine Learning (ICML-15)",
"volume": "",
"issue": "",
"pages": "748--756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed represen- tations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 748-756.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Effective parallel corpus mining using bilingual sentence embeddings",
"authors": [
{
"first": "Mandy",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Qinlan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Heming",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"Hernandez"
],
"last": "Abrego",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Yun-Hsuan",
"middle": [],
"last": "Sung",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Strope",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "165--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mandy Guo, Qinlan Shen, Yinfei Yang, Heming Ge, Daniel Cer, Gustavo Hernandez Abrego, Keith Stevens, Noah Constant, Yun-hsuan Sung, Brian Strope, et al. 2018. Effective parallel corpus mining using bilingual sentence embeddings. In Proceed- ings of the Third Conference on Machine Transla- tion: Research Papers, pages 165-176.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning distributed representations of sentences from unlabelled data",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "1367--1377",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of NAACL- HLT, pages 1367-1377.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association of Computational Linguistics",
"volume": "5",
"issue": "1",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, et al. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association of Computational Linguistics, 5(1):339-351.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Skip-thought vectors",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
},
{
"first": "Raquel",
"middle": [],
"last": "Urtasun",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
},
{
"first": "Sanja",
"middle": [],
"last": "Fidler",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294-3302.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inducing crosslingual distributed representations of words",
"authors": [
{
"first": "Alexandre",
"middle": [],
"last": "Klementiev",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Titov",
"suffix": ""
},
{
"first": "Binod",
"middle": [],
"last": "Bhattarai",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012",
"volume": "",
"issue": "",
"pages": "1459--1474",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexandre Klementiev, Ivan Titov, and Binod Bhat- tarai. 2012. Inducing crosslingual distributed repre- sentations of words. Proceedings of COLING 2012, pages 1459-1474.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Inter- national Conference on Machine Learning, pages 1188-1196.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning distributed representations for multilingual text sequences",
"authors": [
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "88--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hieu Pham, Thang Luong, and Christopher Manning. 2015. Learning distributed representations for mul- tilingual text sequences. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 88-94.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning joint multilingual sentence representations with neural machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "157--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk and Matthijs Douze. 2017. Learn- ing joint multilingual sentence representations with neural machine translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157-167.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A corpus for multilingual document classification in eight languages",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight lan- guages. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC-2018).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Learning cross-lingual word embeddings via matrix co-factorization",
"authors": [
{
"first": "Tianze",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "567--572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianze Shi, Zhiyuan Liu, Yang Liu, and Maosong Sun. 2015. Learning cross-lingual word embeddings via matrix co-factorization. In Proceedings of the 53rd Annual Meeting of the Association for Computa- tional Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 2: Short Papers), volume 2, pages 567-572.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"authors": [
{
"first": "Samuel",
"middle": [
"L"
],
"last": "Smith",
"suffix": ""
},
{
"first": "David",
"middle": [
"HP"
],
"last": "Turban",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Hamblin",
"suffix": ""
},
{
"first": "Nils",
"middle": [
"Y"
],
"last": "Hammerla",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1702.03859"
]
},
"num": null,
"urls": [],
"raw_text": "Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Leveraging monolingual data for crosslingual compositional word representations",
"authors": [
{
"first": "Hubert",
"middle": [],
"last": "Soyer",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Akiko",
"middle": [],
"last": "Aizawa",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6334"
]
},
"num": null,
"urls": [],
"raw_text": "Hubert Soyer, Pontus Stenetorp, and Akiko Aizawa. 2014. Leveraging monolingual data for crosslingual compositional word representations. arXiv preprint arXiv:1412.6334.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cross-lingual models of word embeddings: An empirical comparison",
"authors": [
{
"first": "Shyam",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shyam Upadhyay, Manaal Faruqui, Chris Dyer, and Dan Roth. 2016. Cross-lingual models of word em- beddings: An empirical comparison. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Cross-lingual sentiment classification with bilingual document representation learning",
"authors": [
{
"first": "Xinjie",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1403--1412",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinjie Zhou, Xiaojun Wan, and Jianguo Xiao. 2016. Cross-lingual sentiment classification with bilingual document representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1403-1412.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Illustrations of neural sentence embedding architectures based on LSTM encoders. (a) shows an unrolled LSTM encoder with word embeddings. (b) shows InferSent architecture with a softmax classification network on top of the encoder. Illustrations of LSTM encoder-decoder architectures for sentence embeddings. (a) Sequential Auto-Encoder objective, where the input and output are the same sentence. (b) Neural Machine Translation objective, where the output is a translation of the input sentence from a parallel corpus."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Illustrations of a joint training step, where different languages are used for the premise and hypothesis."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Representation transfer model, with pre-trained English encoder and L1 loss."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Nearest neighbor translation accuracy as a function of (log) parallel corpus size. The legend shows the average accuracies of each model using 1M parallel sentences."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Average cross-lingual document classification accuracy as a function of (log) parallel corpus size. The legend shows the average accuracies of each model using 1M parallel sentences."
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"text": "Monolingual nearest neighbors (or their translations) of a sample of query sentences from SNLI test set using joint InferSent encoders. Phrases similar to the query sentences are shown in bold.",
"num": null,
"html": null
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"text": "Cross-lingual nearest neighbors (or their translations) of a sample of query sentences from SNLI test set using joint InferSent encoders. Phrases similar to the query sentences are shown in bold.",
"num": null,
"html": null
}
}
}
}