{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:48.481381Z"
},
"title": "SEARCHER: Shared Embedding Architecture for Effective Retrieval",
"authors": [
{
"first": "Joel",
"middle": [],
"last": "Barry",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {}
},
"email": "joelb@isi.edu"
},
{
"first": "Elizabeth",
"middle": [],
"last": "Boschee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {}
},
"email": "boschee@isi.edu"
},
{
"first": "Marjorie",
"middle": [],
"last": "Freedman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {}
},
"email": ""
},
{
"first": "Scott",
"middle": [],
"last": "Miller",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Southern California",
"location": {}
},
"email": "smiller@isi.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe an approach to cross lingual information retrieval that does not rely on explicit translation of either document or query terms. Instead, both queries and documents are mapped into a shared embedding space where retrieval is performed. We discuss potential advantages of the approach in handling polysemy and synonymy. We present a method for training the model, and give details of the model implementation. We present experimental results for two cases: Somali-English and Bulgarian-English CLIR.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe an approach to cross lingual information retrieval that does not rely on explicit translation of either document or query terms. Instead, both queries and documents are mapped into a shared embedding space where retrieval is performed. We discuss potential advantages of the approach in handling polysemy and synonymy. We present a method for training the model, and give details of the model implementation. We present experimental results for two cases: Somali-English and Bulgarian-English CLIR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A fundamental design decision in cross-lingual information retrieval is whether to translate the queries, the documents, or both. In this paper, we discuss a substantially different alternative where neither the query nor the document is translated. Instead, both the queries and documents are projected into a shared embedding space and retrieval is performed there. The approach offers potential advantages in handling synonymy, i.e. where synonymous query terms can match a single document term (or vice-versa), as well as for document-language polysemy, i.e. where a particular document term can have one of several meanings depending on context. In tests on two languages, Somali and Bulgarian, we observed a level of performance that is competitive with the \"document translation\" approach, including when translation is performed using a state-ofthe-art tensor-to-tensor model. For one of the languages, Somali, the shared embedding approach was also able to outperform a hybrid strategy involving both query and document translation. All experimental results were from IARPA's MATERIAL evaluation task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Methods for constructing cross-lingual (and multilingual) word embeddings have been extensively investigated for the past several years (Hermann and Blunsom, 2014; Luong, Pham, and Manning, 2015; Gouws, Bengio, and Corrado, 2015) and several pre-trained resources are publicly available. To begin exploring the possibility of applying shared embeddings for CLIR, we constructed a baseline system and tested a few state-of-the-art publiclyavailable variants, including MUSE (Conneau et al., 2017 ). The baseline system architecture is shown in Figure 1 .",
"cite_spans": [
{
"start": 136,
"end": 163,
"text": "(Hermann and Blunsom, 2014;",
"ref_id": null
},
{
"start": 164,
"end": 195,
"text": "Luong, Pham, and Manning, 2015;",
"ref_id": null
},
{
"start": 196,
"end": 229,
"text": "Gouws, Bengio, and Corrado, 2015)",
"ref_id": "BIBREF3"
},
{
"start": 473,
"end": 494,
"text": "(Conneau et al., 2017",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 543,
"end": 551,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Initial Experiments",
"sec_num": "2."
},
{
"text": "In this system, document relevancy is determined based on cosine distance between query and document terms. More specifically, a document is considered responsive to a query if at least one of the document words is within a fixed threshold (in embedding space) of the query. Despite basing our experiments on state-of-the-art embeddings, initial performance was low. The AQWV score (Actual Query Weighted Value) for MATERIAL's Swahili-English analysis set was 0.03; for Tagalog-English it was 0.07.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Initial Experiments",
"sec_num": "2."
},
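The baseline decision rule above (a document is responsive if any document word is close enough to the query in the shared embedding space) can be sketched as follows. This is a minimal illustration, not the evaluated system: the function name and the threshold value are assumptions, and a cosine-similarity threshold is used here as the equivalent of the paper's cosine-distance threshold.

```python
import numpy as np

def baseline_relevant(query_vec, doc_vecs, threshold=0.4):
    """Return True if any document word embedding is within a fixed
    cosine-similarity threshold of the query embedding.
    query_vec: (dim,) embedding; doc_vecs: (n_words, dim) embeddings."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q  # cosine similarity of each document word to the query
    return bool(sims.max() >= threshold)
```

As the text notes, tuning this single scalar threshold is the only degree of freedom in the baseline, which partly explains its weak performance.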
{
"text": "Three factors seemed to account for the low AQWV scores. First, embedding spaces are not uniform; some regions are densely packed with words while other regions are only sparsely populated. Thus, no consistent interpretation of distance exists, making the selection of a single matching threshold problematic. Second, although simple linear transformations are capable of aligning semanticallyrelated words across languages, the alignments are not sufficiently precise to identify exact term translationsparticularly for MATERIAL's lexical queries. Finally, our retrieval mechanism was massively under-parameterized; initial experiments attempted to optimize a complex CLIR task by adjusting only a single scalar threshold parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of the Baseline Approach",
"sec_num": "2.1"
},
{
"text": "Overcoming these limitations would require a sufficiently parameterized model that could be trained for the CLIR task. Implicit in this approach is the need for training data and for a well-defined training objective. In principle, data provided by the MATERIAL program could provide the training examples and AQWV could serve as the objective function. However, MATERIAL's rules explicitly prohibit directly training on this data and, in any case, the relatively small number of queries and relevance judgements is insufficient to train an adequate model (e.g., embedding parameters alone require estimating millions of floatingpoint values).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
{
"text": "Instead, we defined a simplified sentence-retrieval task for which training data is readily available. Specifically, given an English query term (q) and a foreign language sentence (S):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
{
"text": "\u2022 Sentence S is relevant to query q if there exists at least one plausible translation of S containing q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
{
"text": "For this proxy task, large numbers of training examples can be extracted from a parallel corpus such as used to train machine translation systems. Specifically, any English term that occurs anywhere in a bitext sentence can be treated as a query and its corresponding foreign-language sentence treated as a positive example. Negative examples can be randomly drawn from foreign-language bitext sentences (any randomly selected sentence is probably not relevant, but we can additionally verify that its corresponding English sentence does not contain the query term). Figure 2 shows examples of training instances from a Swahili/English parallel corpus. The sentence in the first row translates to \"The fine for passing another vehicle improperly is 400 shillings.\" Similarly, the sentence in the third row translates to \"Think about people with phones since in Tanzania so many people are using phones.\" The sentences in rows 2 and 4 are randomly selected Swahili sentences that do not contain the query term.",
"cite_spans": [],
"ref_spans": [
{
"start": 567,
"end": 575,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
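The extraction procedure above can be sketched as follows, assuming the bitext is represented as (English, foreign) sentence pairs. Whitespace tokenization, the function name, and the rejection-sampling loop for verified negatives are simplifying assumptions.

```python
import random

def make_training_examples(bitext, n_negatives=1, seed=0):
    """Derive (query, foreign_sentence, label) triples from a parallel corpus.

    bitext: list of (english_sentence, foreign_sentence) string pairs.
    Every English word is a candidate query; its aligned foreign sentence
    is a positive example. Negatives are random foreign sentences whose
    English side is verified not to contain the query term."""
    rng = random.Random(seed)
    examples = []
    for eng, frn in bitext:
        for query in set(eng.lower().split()):
            examples.append((query, frn, 1))  # positive example
            for _ in range(n_negatives):
                # resample until the English side lacks the query term
                for _ in range(100):
                    neg_eng, neg_frn = rng.choice(bitext)
                    if query not in neg_eng.lower().split():
                        examples.append((query, neg_frn, 0))
                        break
    return examples
```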
{
"text": "Given a training corpus of such examples, the probability that a sentence S is relevant to a query q, i.e. ( | , ), can be optimized using the standard cross-entropy objective function H",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
{
"text": "( ) = . * \u2212log ( ( | , ) + (1 \u2212 ) ! * \u2212 log91 \u2212 ( | , ):)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
{
"text": "where X is the set of training examples and z are the true labels (1 for relevant, 0 for irrelevant).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
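The objective above is ordinary binary cross-entropy summed over the training set. A minimal sketch, where the hypothetical callable prob stands in for the model's P(R|q,S):

```python
import math

def cross_entropy(examples, prob):
    """Binary cross-entropy H(X) over training examples.

    examples: iterable of (query, sentence, z) with true label z in {0, 1};
    prob(query, sentence) returns the model's estimate of P(R | q, S)."""
    total = 0.0
    for q, s, z in examples:
        p = prob(q, s)
        total += -z * math.log(p) - (1 - z) * math.log(1 - p)
    return total
```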
{
"text": "For the actual MATERIAL task, the relevance of a document to a query phrase is taken as the maximum relevance over sentences in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Data and Objective Function",
"sec_num": "3."
},
{
"text": "Now that we have identified suitable training data and an objective function, we next consider the challenge of model design.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4."
},
{
"text": "Here we introduce the following elements:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4."
},
{
"text": "\u2022 Query encoder: maps English terms into the shared embedding space \u2022 Sentence encoder: maps foreign-language terms into the shared embedding space",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4."
},
{
"text": "\u2022 Attention mechanism: selects regions of the sentence based on the query \u2022 Matching mechanism: determines how closely the selected region matches the query \u2022 Activation function: maps matching scores to probability values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4."
},
{
"text": "An overview of the generic SEARCHER architecture is shown in Figure 3 . The retrieval process proceeds as follows. First, each foreign-language word is mapped into the shared embedding space. These embeddings are contextualized, as described in Sections 5 and 6. Next, the English query term is mapped into the common embedding space. An attention mechanism then selects the region of the foreign-language sentence that appears most relevant to the query and outputs its embedding. The selected region's embedding is compared to the query by a matching function which outputs a matching score. Finally, the matching score is passed through an activation function that produces the probability of relevance. Importantly, this activation function also receives a separate query-specific bias value. This bias value helps overcome non-uniformity in the embedding space by requiring some terms to match more closely than others depending on the density of their surrounding neighborhoods. In all of our experiments, we use a sigmoidal activation function.",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "4."
},
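The retrieval steps above can be sketched end to end. The softmax form of the attention, the dot-product matcher, and the function names are illustrative assumptions here (the paper only commits to these particular choices in the simplified variant of Section 7); contextualization of the sentence embeddings is assumed to have happened upstream.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relevance_probability(query_vec, sent_vecs, query_bias):
    """One forward pass of the generic architecture (a sketch).

    sent_vecs: contextualized embeddings of the foreign sentence, one row
    per word, in the shared space. Attention selects the region most
    relevant to the query, a dot-product matcher scores it, and a sigmoid
    with a query-specific bias maps the score to P(R | q, S)."""
    scores = sent_vecs @ query_vec             # e_i = q . s_i
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                       # attention weights
    selected = alpha @ sent_vecs               # embedding of attended region
    match = selected @ query_vec               # dot-product matching score
    return sigmoid(match + query_bias)         # query-specific bias term
```

The per-query bias is what lets dense and sparse neighborhoods of the embedding space demand different effective matching thresholds.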
{
"text": "Beginning with models such as BERT (Devlin et al., 2018) and ELMO (Peters et al., 2018) , contextualized embeddings have proven useful for a wide range of tasks. While MATERIAL's queries typically contain only one or a few words, and therefore offer little opportunity for query contextualization, our proxy CLIR task evaluates relevance over complete sentences, offering the possibility of contextualizing document embeddings. A potential advantage of such contextualization is the resolution of polysemous terms. Specifically, a contextualized model can learn to situate polysemous terms in different regions of the embedding space depending on context. For example the Swahili term \"nyanya\" can be translated alternatively as \"grandmother\" or \"tomatoes,\" as shown in Figure 4 .",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF1"
},
{
"start": 66,
"end": 87,
"text": "(Peters et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 770,
"end": 778,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contextualized Embedding Spaces",
"sec_num": "5."
},
{
"text": "Ideally, a contextualized model will place the different senses of a polysemous term in different locations in the embedding space, thereby reducing the possibility of spurious matches (e.g. retrieving grandmothers when searching for tomatoes).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Contextualized Embedding Spaces",
"sec_num": "5."
},
{
"text": "We note that in SEARCHER, contextualized embeddings are used only for document terms; non-contextual embeddings are used for query terms. Performing retrieval in a shared embedding space is also potentially useful for resolving synonymous terms. For example, the Swahili term 'gari' can be translated equivalently as \"car\" or \"vehicle,\" as shown in Figure 5 . Ideally, the model will place synonymous terms in similar positions in the embedding space, thereby increasing the possibility of matching any of the alternatives (e.g. retrieving a document containing \"gari\" whether the query term is \"car\" or \"vehicle\").",
"cite_spans": [],
"ref_spans": [
{
"start": 349,
"end": 357,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Contextualized Embedding Spaces",
"sec_num": "5."
},
{
"text": "In this section, we consider details of the sentence encoder mentioned in Section 4. Specifically, SEARCHER's sentence encoder produces contextualized embeddings using a deep convolutional model consisting of 15 convolution layers, each of diameter 3. This architecture yields a receptive field of 31 words, providing 15 words of context on each side of a term. The encoder is similar to that described in (Gehring et al., 2018) .",
"cite_spans": [
{
"start": 406,
"end": 428,
"text": "(Gehring et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Encoder",
"sec_num": "6."
},
{
"text": "In detail, each convolution block consists of a dropout layer, a convolution layer, a GLU layer (gated linear units), and residual connections. A fixed embedding size of 512 is maintained throughout the network.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Encoder",
"sec_num": "6."
},
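A single encoder block with the stated hyperparameters (width-3 convolution, GLU gating, residual connection, 512-dimensional embeddings) might look like the following sketch. The weight shapes, zero padding, and the omission of the dropout layer are simplifying assumptions.

```python
import numpy as np

def glu_conv_block(x, w, b):
    """One encoder block on a (seq_len, dim) input: a width-3 convolution
    producing 2*dim channels, a GLU gate halving them back to dim, and a
    residual connection. w: (3, dim, 2*dim) kernel, b: (2*dim,) bias
    (assumed shapes); dropout is omitted for clarity."""
    seq_len, dim = x.shape
    padded = np.vstack([np.zeros((1, dim)), x, np.zeros((1, dim))])
    # width-3 convolution: sum of shifted inputs times per-offset weights
    conv = sum(padded[k:k + seq_len] @ w[k] for k in range(3)) + b
    a, g = conv[:, :dim], conv[:, dim:]
    glu = a * (1.0 / (1.0 + np.exp(-g)))  # gated linear unit
    return x + glu                        # residual connection

def receptive_field(layers=15, width=3):
    # each width-3 layer adds one word of context on each side
    return layers * (width - 1) + 1       # 15 layers of width 3 -> 31
```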
{
"text": "We use an identical encoder in our convolutional machine translation system. In fact, we have found that pretraining the encoder in an MT setting, then transferring the encoder to SEARCHER, and continuing to train the remaining CLIR elements is an effective method for speeding convergence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convolutional Encoder",
"sec_num": "6."
},
{
"text": "Our generic SEARCHER architecture leaves room for various alternatives at the level of individual components. For instance, while we use a convolutional sentence encoder, it would be perfectly reasonable to substitute a transformer architecture.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifications",
"sec_num": "7."
},
{
"text": "One alternative involving the attention and matching mechanisms leads to a particularly attractive simplification. Specifically, if the attention mechanism is the commonly used form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifications",
"sec_num": "7."
},
{
"text": "( , ) = . \" \" \"\u2208|%| \" = exp ( \" ) \u2211 exp ( \" ) \"\u2208|%| \" = \u2022 \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifications",
"sec_num": "7."
},
{
"text": "and the matcher is a simple dot product, then the resulting architecture (after some algebra) reduces to that shown in Figure 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Simplifications",
"sec_num": "7."
},
{
"text": "We have found this simplified architecture to be effective, producing results at least as good as more complex variations. A further simplification is obtainable by replacing the softmax pooling layer with a hard maxpooling layer. Both simplified variations produce similar results. The softmax variation requires fewer training cycles (because max-pooling updates just the single bestmatching term on each training cycle, whereas softmax pooling updates all words in proportion to their distance from the query). On the other hand, max pooling appears to yield slightly sharper probability distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simplifications",
"sec_num": "7."
},
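The two pooling variants can be compared directly. After the algebraic reduction, softmax pooling of the per-word dot products is just their attention-weighted average (a soft OR), while the hard variant takes their maximum. A sketch under those assumptions, with illustrative names:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simplified_relevance(query_vec, sent_vecs, query_bias, pooling="softmax"):
    """Simplified model: per-word dot-product scores, pooled by either
    softmax pooling or hard max pooling, then a sigmoid activation with
    a query-specific bias producing P(R | q, S)."""
    scores = sent_vecs @ query_vec          # per-word dot products e_i
    if pooling == "max":
        pooled = scores.max()               # hard max pooling
    else:
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()
        pooled = float(alpha @ scores)      # softmax pooling: sum_i alpha_i e_i
    return sigmoid(pooled + query_bias)
```

Since the softmax-pooled score is never larger than the hard maximum, the softmax variant yields the softer (less peaked) probabilities the text describes.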
{
"text": "The SEARCHER model shown in Figure 6 bears a striking resemblance to the baseline model described in Section 2. The most important difference is that the SEARCHER model is specifically trained to perform CLIR whereas the baseline model relies on pretrained embeddings. Other differences are:",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 6",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Relation to the Baseline Model",
"sec_num": "8."
},
{
"text": "\u2022 Contextualized embeddings replace individual word embeddings \u2022 Dot products replace cosine distances (which are simply normalized dot products) \u2022 Softmax pooling (essentially, a soft OR function) replaces the logical OR \u2022 A sigmoidal activation function (essentially, a soft threshold) replaces hard thresholding \u2022 The positions of the combining function (softmax/logical OR) and the activation function (sigmoid/hard threshold) are exchanged \u2022 A bias term is introduced for each query term",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation to the Baseline Model",
"sec_num": "8."
},
{
"text": "We tested SEARCHER in two MATERIAL languages, Somali and Bulgarian. For each language, we also evaluated traditional translation-based CLIR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "9."
},
{
"text": "For the Somali case, we compare performance with several different machine translation models. These include syntax-based statistical machine translation and two types of neural machine translation: tensor-to-tensor (Vaswani et al., 2017) and convolutional (Gehring et al., 2018) . For the neural models, we follow best practices in training, including the use of substantial back-translated data. In all cases, the MT system is applied to translate the foreign language documents into English. We also evaluate alternatives where, in addition to translating the documents, we translate the English queries into the foreign language using translation tables obtained by a statistical alignment process. This strategy improves the probability of matching queries to documents by translating in both directions.",
"cite_spans": [
{
"start": 216,
"end": 238,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 257,
"end": 279,
"text": "(Gehring et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "9."
},
{
"text": "Results for Somali, as shown in Table 1 , are encouraging. Entries in the table that are designated (+source) indicate the combined strategy where queries are also translated. Evaluating on two different MATERIAL data sets (designated analysis and dev), SEARCHER outperformed the \"document translation\" strategy for all translation models as well as the combined strategy where both the documents and the queries are translated. For the Bulgarian case, we compare SEARCHER with only our best machine translation model, a tensor-to-tensor model, and evaluate only on MATERIAL analysis documents. Once again, the MT system is applied to translate the foreign-language documents into English. As before, we also evaluate the combined strategy, translating both documents and queries.",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "9."
},
{
"text": "Results for Bulgarian are shown in Table 2 . In this case, results are somewhat different. In general, performance is much better. SEARCHER's performance matches the \"document translation\" strategy alone. However, when query translation is added, the combined translation strategy noticeably outperforms SEARCHER. We suspect that part of the explanation for the differences in relative performance is the amount of training data available. Specifically, large quantities of paracrawl data for Bulgarian provide a significant boost in MT accuracy. ",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 42,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "9."
},
{
"text": "We have conducted numerous experiments with SEARCHER models. We have identified an effective general architecture and derived simplified variations that perform well. We found that training for a proxy task (sentence retrieval) is a useful strategy and that adequate training examples can be derived from bitexts. While much work remains to be done, we have demonstrated that shared embedding space models can be an effective method for CLIR, providing a competitive alternative to document translation models, including those based on state-of-theart neural MT. In one language, Somali, we found that SEARCHER outperformed all the translation-based alternatives that we evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary",
"sec_num": "10."
},
{
"text": "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "11."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word Translation Without Parallel Data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Herv\u00e9",
"middle": [],
"last": "J\u00e9gou",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1710.04087"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, Herv\u00e9 J\u00e9gou, (2017). Word Translation Without Parallel Data, arXiv:1710.04087",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, (2018). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, arXiv:1810.04805",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Convolutional Sequence to Sequence Learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.03122"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin, (2017). Convolutional Sequence to Sequence Learning, arXiv:1705.03122",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bilbowa: Fast bilingual distributed representations without word alignments",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
}
],
"year": 2015,
"venue": "Proc. of the Workshop on Vector Space Modeling for NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Yoshua Bengio, and Greg Corrado, (2015). Bilbowa: Fast bilingual distributed representations without word alignments. In Proc. of ICML Thang Luong, Hieu Pham, and Christopher D. Manning, (2015). Bilingual word representations with monolingual quality in mind. In Proc. of the Workshop on Vector Space Modeling for NLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Multilingual Models for Compositional Distributional Semantics",
"authors": [],
"year": null,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Multilingual Models for Compositional Distributional Semantics. In Proc. of ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer, (2018). Deep contextualized word representations, arXiv:1802.05365",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Attention Is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin, (2017). Attention Is All You Need, arXiv:1706.03762",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Baseline architecture"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Polysemy in shared embedding space Examples of training instances Figure 3, Generic SEARCHER Architecture"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Simplified SEARCHER Architecture Figure 5: Synonymy in shared embedding space"
},
"TABREF0": {
"text": "AQWV of various systems on Somali",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
},
"TABREF1": {
"text": "",
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null
}
}
}
}