{
"paper_id": "D17-1026",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:16:47.516881Z"
},
"title": "Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"postCode": "60637",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": "jwieting@ttic.edu"
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"settlement": "Edinburgh",
"country": "UK"
}
},
"email": "j.mallinson@ed.ac.uk"
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Toyota Technological Institute at Chicago",
"location": {
"postCode": "60637",
"settlement": "Chicago",
"region": "IL",
"country": "USA"
}
},
"email": "kgimpel@ttic.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We consider the problem of learning general-purpose, paraphrastic sentence embeddings in the setting of Wieting et al. (2016b). We use neural machine translation to generate sentential paraphrases via back-translation of bilingual sentence pairs. We evaluate the paraphrase pairs by their ability to serve as training data for learning paraphrastic sentence embeddings. We find that the data quality is stronger than prior work based on bitext and on par with manually-written English paraphrase pairs, with the advantage that our approach can scale up to generate large training sets for many languages and domains. We experiment with several language pairs and data sources, and develop a variety of data filtering techniques. In the process, we explore how neural machine translation output differs from human-written sentences, finding clear differences in length, the amount of repetition, and the use of rare words. 1",
"pdf_parse": {
"paper_id": "D17-1026",
"_pdf_hash": "",
"abstract": [
{
"text": "We consider the problem of learning general-purpose, paraphrastic sentence embeddings in the setting of Wieting et al. (2016b). We use neural machine translation to generate sentential paraphrases via back-translation of bilingual sentence pairs. We evaluate the paraphrase pairs by their ability to serve as training data for learning paraphrastic sentence embeddings. We find that the data quality is stronger than prior work based on bitext and on par with manually-written English paraphrase pairs, with the advantage that our approach can scale up to generate large training sets for many languages and domains. We experiment with several language pairs and data sources, and develop a variety of data filtering techniques. In the process, we explore how neural machine translation output differs from human-written sentences, finding clear differences in length, the amount of repetition, and the use of rare words. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Pretrained word embeddings have received a great deal of attention from the research community, but there is much less work on developing pretrained embeddings for sentences. Here we target sentence embeddings that are \"paraphrastic\" in the sense that two sentences with similar meanings are close in the embedding space. Wieting et al. (2016b) developed paraphrastic sentence embeddings that are useful for semantic textual similarity tasks and can also be used as initialization for supervised semantic tasks.",
"cite_spans": [
{
"start": 322,
"end": 344,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "[Table 1 examples (R = reference, T = back-translation): R: We understand that has already commenced, but there is a long way to go. T: This situation has already commenced, but much still needs to be done. | R: The restaurant is closed on Sundays. No breakfast is available on Sunday mornings. T: The restaurant stays closed Sundays so no breakfast is served these days. | R: Improved central bank policy is another huge factor. T: Another crucial factor is the improved policy of the central banks.] To learn their sentence embeddings, Wieting et al. used the Paraphrase Database (PPDB) (Ganitkevitch et al., 2013). PPDB contains a large set of paraphrastic textual fragments extracted automatically from bilingual text (\"bitext\"), which is readily available for many languages and domains. Versions of PPDB have been released for several languages (Ganitkevitch and Callison-Burch, 2014).",
"cite_spans": [
{
"start": 476,
"end": 495,
"text": "Wieting et al. used",
"ref_id": null
},
{
"start": 527,
"end": 554,
"text": "(Ganitkevitch et al., 2013)",
"ref_id": "BIBREF16"
},
{
"start": 785,
"end": 824,
"text": "(Ganitkevitch and Callison-Burch, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, more recent work has shown that the fragmentary nature of PPDB's pairs can be problematic, especially for recurrent networks (Wieting and Gimpel, 2017). Better performance can be achieved with a smaller set of sentence pairs derived from aligning Simple English and standard English Wikipedia (Coster and Kauchak, 2011). While effective, this type of data is inherently limited in size and scope, and is not available for languages other than English.",
"cite_spans": [
{
"start": 133,
"end": 159,
"text": "(Wieting and Gimpel, 2017)",
"ref_id": "BIBREF41"
},
{
"start": 302,
"end": 328,
"text": "(Coster and Kauchak, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "PPDB is appealing in that it only requires bitext. We would like to retain this property but develop a data resource with sentence pairs rather than phrase pairs. We turn to neural machine translation (NMT) (Sutskever et al., 2014; Bahdanau et al., 2014; Sennrich et al., 2016a), which has matured recently to yield strong performance, especially in terms of producing grammatical outputs.",
"cite_spans": [
{
"start": 207,
"end": 231,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 232,
"end": 254,
"text": "Bahdanau et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 255,
"end": 278,
"text": "Sennrich et al., 2016a)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we build NMT systems for three language pairs, then use them to back-translate the non-English side of the training bitext. The resulting data consists of sentence pairs containing an English reference and the output of an X-to-English NMT system. Table 1 shows examples. We use this data for training paraphrastic sentence embeddings, yielding results that are much stronger than when using PPDB and competitive with the Simple English Wikipedia data. Since bitext is abundant and available for many language pairs and domains, 2 we also develop several methods of filtering the data, based on sentence length, quality measures, and measures of difference between the reference and its back-translation. We find length to be an effective filtering criterion, showing that very short length ranges, where the translation is 1 to 10 words, are best for learning.",
"cite_spans": [],
"ref_spans": [
{
"start": 263,
"end": 270,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In studying quality measures for filtering, we train a classifier to predict if a sentence is a reference or a back-translation, then score sentences by the classifier score. This investigation allows us to examine the kinds of phenomena that best distinguish NMT output from references in this controlled setting of translating the bitext training data. NMT output has more repetitions of both words and longer n-grams, and uses fewer rare words than the references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
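The reference-vs-back-translation classifier is not specified in this excerpt; as an illustration only, here is a minimal Python sketch of the kind of surface signals the paragraph names (repetition of words and longer n-grams, plus length). The feature names and whitespace tokenization are assumptions, not the paper's actual feature set.

```python
from collections import Counter

def repetition_features(tokens):
    """Illustrative features for telling NMT output from references:
    counts of repeated unigrams/bigrams and sentence length (hypothetical
    choices; the paper's classifier features are not given here)."""
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    return {
        "len": len(tokens),
        "repeated_unigrams": sum(c - 1 for c in uni.values() if c > 1),
        "repeated_bigrams": sum(c - 1 for c in bi.values() if c > 1),
    }

feats = repetition_features("the cat sat on the mat on the mat".split())
# "the" x3, "on" x2, "mat" x2 repeat; bigrams "on the" and "the mat" repeat
```

Any off-the-shelf binary classifier could then be trained on such feature vectors to score sentences.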
{
"text": "We release our generated sentence pairs to the research community with the hope that the data can inspire others to develop additional filtering methods, to experiment with richer architectures for sentence embeddings, and to further analyze the differences between neural machine translations and references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We describe related work in learning general-purpose sentence embeddings, work in automatically generating or discovering paraphrases, and finally prior work in leveraging neural machine translation for embedding learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Paraphrastic sentence embeddings. Our learning and evaluation setting is the same as that considered by Wieting et al. (2016b) and Wieting et al. (2016a), in which the goal is to learn paraphrastic sentence embeddings that can be used for downstream tasks. They trained models on PPDB and evaluated them using a suite of semantic textual similarity (STS) tasks and supervised semantic tasks. Others have begun to consider this setting as well (Arora et al., 2017).",
"cite_spans": [
{
"start": 104,
"end": 126,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
},
{
"start": 131,
"end": 153,
"text": "Wieting et al. (2016a)",
"ref_id": "BIBREF38"
},
{
"start": 444,
"end": 464,
"text": "(Arora et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Other work in learning general-purpose sentence embeddings has used autoencoders (Socher et al., 2011; Hill et al., 2016), encoder-decoder architectures (Kiros et al., 2015), or other learning frameworks (Le and Mikolov, 2014; Pham et al., 2015). Wieting et al. (2016b) and Hill et al. (2016) provide many empirical comparisons to this prior work. For conciseness, we compare only to the strongest configurations from their results.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Socher et al., 2011;",
"ref_id": "BIBREF35"
},
{
"start": 103,
"end": 121,
"text": "Hill et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 154,
"end": 174,
"text": "(Kiros et al., 2015)",
"ref_id": "BIBREF24"
},
{
"start": 206,
"end": 228,
"text": "(Le and Mikolov, 2014;",
"ref_id": "BIBREF27"
},
{
"start": 229,
"end": 247,
"text": "Pham et al., 2015)",
"ref_id": "BIBREF31"
},
{
"start": 250,
"end": 272,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
},
{
"start": 277,
"end": 295,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Paraphrase generation and discovery. There is a rich history of research in generating or finding naturally-occurring sentential paraphrases (Barzilay and McKeown, 2001; Dolan and Brockett, 2005; Zhao et al., 2010; Coster and Kauchak, 2011; Xu et al., 2014, 2015).",
"cite_spans": [
{
"start": 141,
"end": 169,
"text": "(Barzilay and McKeown, 2001;",
"ref_id": "BIBREF8"
},
{
"start": 170,
"end": 195,
"text": "Dolan and Brockett, 2005;",
"ref_id": "BIBREF12"
},
{
"start": 196,
"end": 214,
"text": "Zhao et al., 2010;",
"ref_id": "BIBREF45"
},
{
"start": 215,
"end": 240,
"text": "Coster and Kauchak, 2011;",
"ref_id": "BIBREF10"
},
{
"start": 241,
"end": 256,
"text": "Xu et al., 2014",
"ref_id": "BIBREF43"
},
{
"start": 257,
"end": 274,
"text": "Xu et al., , 2015",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The most relevant work uses bilingual corpora, e.g., Zhao et al. (2008) and Bannard and Callison-Burch (2005), the latter leading to PPDB. Our goals are highly similar to those of the PPDB project, which has also been produced for many languages (Ganitkevitch and Callison-Burch, 2014), since it only relies on the availability of bilingual text.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "Zhao et al. (2008)",
"ref_id": "BIBREF46"
},
{
"start": 76,
"end": 109,
"text": "Bannard and Callison-Burch (2005)",
"ref_id": "BIBREF7"
},
{
"start": 247,
"end": 285,
"text": "(Ganitkevitch and Callison-Burch, 2014",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Prior work has shown that PPDB can be used for learning embeddings for words and phrases (Faruqui et al., 2015; Wieting et al., 2015). However, when learning sentence embeddings, Wieting and Gimpel (2017) showed that PPDB is not as effective as sentential paraphrases, especially for recurrent networks. These results are intuitive because the phrases in PPDB are short and often cut across constituent boundaries. For sentential paraphrases, Wieting and Gimpel (2017) used a dataset developed for text simplification by Coster and Kauchak (2011). It was created by aligning sentences from Simple English and standard English Wikipedia. We compare our data to both PPDB and this Wikipedia dataset.",
"cite_spans": [
{
"start": 89,
"end": 111,
"text": "(Faruqui et al., 2015;",
"ref_id": "BIBREF14"
},
{
"start": 112,
"end": 133,
"text": "Wieting et al., 2015)",
"ref_id": "BIBREF40"
},
{
"start": 180,
"end": 205,
"text": "Wieting and Gimpel (2017)",
"ref_id": "BIBREF41"
},
{
"start": 444,
"end": 469,
"text": "Wieting and Gimpel (2017)",
"ref_id": "BIBREF41"
},
{
"start": 522,
"end": 547,
"text": "Coster and Kauchak (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Neural machine translation for paraphrastic embedding learning. Sutskever et al. (2014) trained NMT systems and visualized part of the space of the source language encoder for their English\u2192French system. Hill et al. (2016) evaluated the encoders of English-to-X NMT systems as sentence representations, finding them to perform poorly compared to several other methods based on unlabeled data. Mallinson et al. (2017) adapted trained NMT models to produce sentence similarity scores in semantic evaluations. They used pairs of NMT systems, one to translate an English sentence into multiple foreign translations and the other to then translate back to English. Other work has used neural MT architectures and training settings to obtain better word embeddings (Hill et al., 2014a,b).",
"cite_spans": [
{
"start": 64,
"end": 87,
"text": "Sutskever et al. (2014)",
"ref_id": "BIBREF36"
},
{
"start": 205,
"end": 223,
"text": "Hill et al. (2016)",
"ref_id": "BIBREF20"
},
{
"start": 394,
"end": 417,
"text": "Mallinson et al. (2017)",
"ref_id": "BIBREF28"
},
{
"start": 760,
"end": 782,
"text": "(Hill et al., 2014a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our approach differs in that we only use the NMT system to generate training data for training sentence embeddings, rather than use it as the source of the model. This permits us to decouple decisions made in designing the NMT architecture from decisions about which models we will use for learning sentence embeddings. Thus we can benefit from orthogonal work in designing neural architectures to embed sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We now describe the NMT systems we use for generating data for learning sentence embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "In our experiments, we use three encoder-decoder NMT models: Czech\u2192English, French\u2192English, and German\u2192English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "We used Groundhog 3 as the implementation of the NMT systems for all experiments. We generally followed the settings and training procedure from previous work (Bahdanau et al., 2014; Sennrich et al., 2016a). As such, all networks have a hidden layer size of 1000 and an embedding layer size of 620. During training, we used Adadelta (Zeiler, 2012), a minibatch size of 80, and the training set was reshuffled between epochs. We trained each network for approximately 7 days on a single GPU (TITAN X); then the embedding layer was fixed and training continued, as suggested by Jean et al. (2015), for 12 hours. Additionally, the softmax was calculated over a filtered list of candidate translations. Following Jean et al. (2015), during decoding we restrict the softmax layer's output vocabulary to include: the 10,000 most common words, the top 25 unigram translations, and the gold translations' unigrams.",
"cite_spans": [
{
"start": 159,
"end": 182,
"text": "(Bahdanau et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 183,
"end": 206,
"text": "Sennrich et al., 2016a)",
"ref_id": "BIBREF33"
},
{
"start": 574,
"end": 592,
"text": "Jean et al. (2015)",
"ref_id": "BIBREF22"
},
{
"start": 708,
"end": 726,
"text": "Jean et al. (2015)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "All systems were trained on the available training data from the WMT15 shared translation task (15.7 million, 39.2 million, and 4.2 million sentence pairs for CS\u2192EN, FR\u2192EN, and DE\u2192EN,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural Machine Translation",
"sec_num": "3"
},
{
"text": "respectively). [Flattened table (corpus sizes; columns Czech / French / German): Europarl 650,000 / 2,000,000 / 2,000,000; Common Crawl 160,000 / 3,000,000 / 2,000,000; News Commentary 150,000 / 200,000 / 200,000; UN - / 12,000,000 / -; 10^9 French-English - / 22,000,000 / -; CzEng 14,700,000 / - / -.] The training data included: Europarl v7 (Koehn, 2005), the Common Crawl corpus, the UN corpus (Eisele and Chen, 2010), News Commentary v10, the 10^9 French-English corpus, and CzEng 1.0 (Bojar et al., 2016). A breakdown of the sizes of these corpora can be found in Table 3. The data was pre-processed using standard pre-processing scripts found in Moses (Koehn et al., 2007). Rare words were split into sub-word units, following Sennrich et al. (2016b). BLEU scores on the WMT2015 test set for each NMT system can be seen in Table 3. To produce paraphrases we use \"back-translation\", i.e., we use our X\u2192English NMT systems to translate the non-English sentence in each training sentence pair into English. We directly use the bitext on which the models were trained. This could potentially lead to pairs in which the reference and translation match exactly, if the model has learned to memorize the reference translations seen during training. However, in practice, since we have so much bitext to draw from, we can easily find data in which they do not match exactly.",
"cite_spans": [
{
"start": 256,
"end": 269,
"text": "(Koehn, 2005)",
"ref_id": "BIBREF25"
},
{
"start": 311,
"end": 334,
"text": "(Eisele and Chen, 2010)",
"ref_id": "BIBREF13"
},
{
"start": 404,
"end": 424,
"text": "(Bojar et al., 2016)",
"ref_id": "BIBREF9"
},
{
"start": 574,
"end": 594,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF26"
},
{
"start": 650,
"end": 673,
"text": "Sennrich et al. (2016b)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 747,
"end": 754,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Czech",
"sec_num": null
},
{
"text": "Thus our generated data consists of pairs of English references from the bitext along with the NMT-produced English back-translations. We use beam search with a width of 50 to generate multiple translations for each non-English sentence, each of which is a candidate paraphrase for the English reference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Czech",
"sec_num": null
},
{
"text": "Example outputs of this process are in Table 1, showing some rich paraphrase phenomena in the data. These examples show non-trivial phrase substitutions (\"there is a long way to go\" and \"much still needs to be done\"), sentences being merged and simplified, and sentences being rearranged.",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Czech",
"sec_num": null
},
{
"text": "For examples of erroneous paraphrases that can be generated by this process, see Table 11.",
"cite_spans": [],
"ref_spans": [
{
"start": 81,
"end": 89,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Czech",
"sec_num": null
},
{
"text": "Our goal is to compare our paraphrase dataset to other datasets by using each to train sentence embeddings, keeping the models and learning procedure fixed. We therefore select models and a loss function from prior work (Wieting et al., 2016b; Wieting and Gimpel, 2017).",
"cite_spans": [
{
"start": 213,
"end": 236,
"text": "(Wieting et al., 2016b;",
"ref_id": "BIBREF39"
},
{
"start": 237,
"end": 262,
"text": "Wieting and Gimpel, 2017)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models and Training",
"sec_num": "4"
},
{
"text": "We wish to embed a word sequence s into a fixed-length vector. We denote the t-th word in s as s_t, and we denote its word embedding by x_t. We focus on two models in this paper. The first model, which we call AVG, simply averages the embeddings x_t of all words in s. The only parameters learned in this model are those in the word embeddings themselves, which are stored in the word embedding matrix W_w. This model was found by Wieting et al. (2016b) to perform very strongly for semantic similarity tasks.",
"cite_spans": [
{
"start": 432,
"end": 454,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "The second model, the GATED RECURRENT AVERAGING NETWORK (GRAN) (Wieting and Gimpel, 2017), combines the benefits of AVG and long short-term memory (LSTM) recurrent neural networks (Hochreiter and Schmidhuber, 1997). It first uses an LSTM to generate a hidden vector, h_t, for each word s_t in s. Then h_t is used to compute a gate that is elementwise-multiplied with x_t, resulting in a new hidden vector a_t for each step t:",
"cite_spans": [
{
"start": 64,
"end": 90,
"text": "(Wieting and Gimpel, 2017)",
"ref_id": "BIBREF41"
},
{
"start": 182,
"end": 216,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "a_t = x_t \\odot \\sigma(W_x x_t + W_h h_t + b)",
"eq_num": "(1)"
}
],
"section": "Models",
"sec_num": "4.1"
},
{
"text": "where W_x and W_h are parameter matrices, b is a parameter vector, and \u03c3 is the elementwise logistic sigmoid function. After all a_t have been generated for a sentence, they are averaged to produce the embedding for that sentence. The GRAN reduces to AVG if the output of the gate is always 1. This model includes as learnable parameters those of the LSTM, the word embeddings, and the additional parameters in Eq. (1). We use W_c to denote the \"compositional\" parameters, i.e., all parameters other than the word embeddings. Our motivation for choosing these two models is that they both work well in this transfer learning setting (Wieting et al., 2016b) and they are architecturally similar, with one crucial difference: only the GRAN takes into account word order. This difference plays an important role in the effectiveness of the different filtering methods, as explored in Section 5.",
"cite_spans": [
{
"start": 633,
"end": 656,
"text": "(Wieting et al., 2016b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4.1"
},
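To make the two compositional models concrete, here is a minimal numpy sketch (an illustration, not the paper's implementation): AVG averages word vectors, while GRAN gates each word vector with its LSTM hidden state before averaging, per Eq. (1). The LSTM itself is omitted, its hidden states are passed in as a stub, and all shapes are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def avg_embed(X):
    """AVG: the sentence embedding is the mean of the word embeddings X (n x d)."""
    return X.mean(axis=0)

def gran_embed(X, H, W_x, W_h, b):
    """GRAN (Eq. 1): a_t = x_t * sigmoid(W_x x_t + W_h h_t + b), then average.
    H holds LSTM hidden states h_t (n x d_h), assumed precomputed elsewhere."""
    A = X * sigmoid(X @ W_x.T + H @ W_h.T + b)  # gate each word vector elementwise
    return A.mean(axis=0)

# sanity check: when the gate saturates at 1, GRAN reduces to AVG, as the text states
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))            # 5 words, embedding dim 4
H = np.zeros((5, 3))                       # stub hidden states, dim 3
W_x = np.zeros((4, 4)); W_h = np.zeros((4, 3)); b = np.full(4, 50.0)  # sigmoid -> ~1
assert np.allclose(gran_embed(X, H, W_x, W_h, b), avg_embed(X))
```

The final assertion mirrors the reduction property noted in the paragraph above.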
{
"text": "We follow the training procedure of Wieting et al. (2015) and Wieting et al. (2016b). The training data is a set S of paraphrastic pairs \u27e8s_1, s_2\u27e9, and we optimize a margin-based loss:",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "Wieting et al. (2015)",
"ref_id": "BIBREF40"
},
{
"start": 62,
"end": 84,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "\\min_{W_c, W_w} \\frac{1}{|S|} \\sum_{\\langle s_1, s_2 \\rangle \\in S} \\Big[ \\max(0, \\delta - \\cos(g(s_1), g(s_2)) + \\cos(g(s_1), g(t_1))) + \\max(0, \\delta - \\cos(g(s_1), g(s_2)) + \\cos(g(s_2), g(t_2))) \\Big] + \\lambda_c \\lVert W_c \\rVert^2 + \\lambda_w \\lVert W_{w,\\mathrm{initial}} - W_w \\rVert^2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "where g is the model (AVG or GRAN), \u03b4 is the margin, \u03bb_c and \u03bb_w are regularization parameters, W_{w,initial} is the initial word embedding matrix, and t_1 and t_2 are \"negative examples\" taken from a mini-batch during optimization. The intuition is that we want the two texts to be more similar to each other (cos(g(s_1), g(s_2))) than either is to their respective negative examples t_1 and t_2, by a margin of at least \u03b4. To select t_1 and t_2, we choose the most similar sentence in some set (other than those in the given pair). For simplicity we use the mini-batch for this set, i.e., we choose t_1 for a given \u27e8s_1, s_2\u27e9 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "t_1 = \\mathrm{argmax}_{t : \\langle t, \\cdot \\rangle \\in S_b \\setminus \\{\\langle s_1, s_2 \\rangle\\}} \\cos(g(s_1), g(t))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
{
"text": "where S_b \u2286 S is the current mini-batch. That is, we want to choose a negative example t_i that is similar to s_i according to the current model. The downside is that we may occasionally choose a phrase t_i that is actually a true paraphrase of s_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training",
"sec_num": "4.2"
},
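The loss and the in-batch negative mining above can be sketched as follows. This is an illustrative numpy version with the regularization terms dropped and embeddings taken as given (the actual training optimizes them); it is not the paper's Theano implementation.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def batch_margin_loss(G1, G2, delta=0.4):
    """Margin loss over a mini-batch of paraphrase pairs (s1_i, s2_i), with
    negatives mined from the other pairs in the same batch.
    G1[i], G2[i] are the embeddings g(s1_i), g(s2_i)."""
    n, total = len(G1), 0.0
    for i in range(n):
        # candidate negatives: sentences from all other pairs in the batch
        cands = [G1[j] for j in range(n) if j != i] + [G2[j] for j in range(n) if j != i]
        t1 = max(cands, key=lambda t: cos(G1[i], t))   # hardest negative for s1_i
        t2 = max(cands, key=lambda t: cos(G2[i], t))   # hardest negative for s2_i
        pos = cos(G1[i], G2[i])
        total += max(0.0, delta - pos + cos(G1[i], t1))
        total += max(0.0, delta - pos + cos(G2[i], t2))
    return total / n

# toy batch: two orthogonal, perfectly-paraphrastic pairs incur zero loss,
# since the positive cosine (1.0) beats the negatives (0.0) by more than delta
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert batch_margin_loss([e1, e2], [e1, e2]) == 0.0
```

The toy check illustrates the intuition stated above: the hinge is zero once each pair is more similar to itself than to its mined negative by at least the margin.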
{
"text": "We now investigate how best to use our generated paraphrase data for training universal paraphrastic sentence embeddings. We consider 10 data sources: Common Crawl (CC), Europarl (EP), and News Commentary (News) from all 3 language pairs, as well as the 10^9 French-English data (Giga). We extract 150,000 reference/back-translation pairs from each data source. We use 100,000 of these to mine for training data for our sentence embedding models, and the remaining 50,000 are used as train/validation/test data for the reference classification and language models described below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We evaluate the quality of a paraphrase dataset by using the experimental setting of Wieting et al. (2016b). We use the paraphrases as training data to create paraphrastic sentence embeddings, using the cosine of the embeddings as the measure of semantic relatedness, then evaluate the embeddings on the SemEval semantic textual similarity (STS) tasks from 2012 to 2015 (Agirre et al., 2012, 2013, 2014, 2015), the SemEval 2015 Twitter task (Xu et al., 2015), and the SemEval 2014 SICK Semantic Relatedness task (Marelli et al., 2014). Given two sentences, the aim of the STS tasks is to predict their similarity on a 0-5 scale, where 0 indicates the sentences are on different topics and 5 indicates that they are completely equivalent. As our test set, we report the average Pearson's r over these 22 sentence similarity tasks. 4 As development data, we use the 2016 STS tasks (Agirre et al., 2016), where the tuning criterion is the average Pearson's r over its 5 datasets.",
"cite_spans": [
{
"start": 85,
"end": 107,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
},
{
"start": 371,
"end": 391,
"text": "(Agirre et al., 2012",
"ref_id": "BIBREF4"
},
{
"start": 392,
"end": 414,
"text": "(Agirre et al., , 2013",
"ref_id": "BIBREF3"
},
{
"start": 415,
"end": 437,
"text": "(Agirre et al., , 2014",
"ref_id": "BIBREF1"
},
{
"start": 438,
"end": 460,
"text": "(Agirre et al., , 2015",
"ref_id": "BIBREF0"
},
{
"start": 493,
"end": 510,
"text": "(Xu et al., 2015)",
"ref_id": "BIBREF42"
},
{
"start": 565,
"end": 587,
"text": "(Marelli et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 933,
"end": 954,
"text": "(Agirre et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.1"
},
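The evaluation protocol just described (cosine of the two sentence embeddings as the predicted relatedness, scored by Pearson's r against gold ratings) can be sketched in a few lines of numpy. This is an illustration only; the official SemEval scoring scripts are not reproduced here.

```python
import numpy as np

def sts_predict(emb_a, emb_b):
    """Predicted relatedness of two sentence embeddings: their cosine."""
    return float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))

def pearson_eval(pairs, gold):
    """Pearson's r between cosine predictions and gold 0-5 similarity ratings."""
    preds = [sts_predict(a, b) for a, b in pairs]
    return float(np.corrcoef(preds, gold)[0, 1])

# toy check: predictions perfectly linear in the gold ratings give r = 1
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
pairs = [(e1, e1), (e1, e2), (e1, -e1)]   # cosines 1, 0, -1
r = pearson_eval(pairs, [5.0, 2.5, 0.0])
```

Pearson's r is scale- and shift-invariant, which is why raw cosines can be correlated directly against the 0-5 gold scale without calibration.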
{
"text": "For fair comparison among different datasets and dataset filtering methods described below, we use only 24,000 training examples for nearly all experiments. Different filtering methods produce different amounts of training data, and using 24,000 examples allows us to keep the amount of training data constant across filtering methods. It also allows us to complete these several thousand experiments in a reasonable amount of time. In Section 5.8 below, we discuss experiments that scale up to larger amounts of training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "We use PARAGRAM-SL999 embeddings (Wieting et al., 2015) to initialize the word embedding matrix (W_w) for both models. For all experiments, we fix the mini-batch size to 100, \u03bb_w to 0, \u03bb_c to 0, and the margin \u03b4 to 0.4. We train AVG for 20 epochs, and the GRAN for 3, since it converges much faster. For optimization we use Adam (Kingma and Ba, 2014) with a learning rate of 0.001.",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "(Wieting et al., 2015)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "We compare to two data resources used in previous work to learn paraphrastic sentence embeddings. The first is phrase pairs from PPDB, used by Wieting et al. (2016b) and Wieting et al. (2016a). PPDB comes in different sizes (S, M, L, XL, XXL, and XXXL), where each larger size subsumes all smaller ones. The pairs in PPDB are sorted by a confidence measure and so the smaller sets contain higher precision paraphrases. We use PPDB XL in this paper, which consists of fairly high precision paraphrases. The other data source is the aligned Simple English / standard English Wikipedia data developed by Coster and Kauchak (2011) and used for learning paraphrastic sentence embeddings by Wieting and Gimpel (2017). We refer to this data source as \"SimpWiki\". We refer to our back-translated data as \"NMT\".",
"cite_spans": [
{
"start": 143,
"end": 165,
"text": "Wieting et al. (2016b)",
"ref_id": "BIBREF39"
},
{
"start": 170,
"end": 192,
"text": "Wieting et al. (2016a)",
"ref_id": "BIBREF38"
},
{
"start": 602,
"end": 627,
"text": "Coster and Kauchak (2011)",
"ref_id": "BIBREF10"
},
{
"start": 686,
"end": 711,
"text": "Wieting and Gimpel (2017)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.2"
},
{
"text": "We first compare datasets, randomly sampling 24,000 sentence pairs from each of PPDB, SimpWiki, and each of our NMT datasets. The only hyperparameter to tune for this experiment is the stopping epoch, which we tune based on our development set. The results are shown in Table 4. We find that the NMT datasets are all effective as training data, outperforming PPDB in all cases when using the GRAN. There are exceptions when using AVG, for which PPDB is quite strong. This is sensible because AVG is not sensitive to word order, so the fragments in PPDB do not cause problems. However, when using the GRAN, which is sensitive to word order, the NMT data is consistently better than PPDB. It often exceeds the performance of training on the SimpWiki data, which consists entirely of human-written sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 271,
"end": 278,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Dataset Comparison",
"sec_num": "5.3"
},
{
"text": "Above we showed that the NMT data is better than PPDB when using a GRAN and often as good as SimpWiki. Since we have access to so much more NMT data than SimpWiki (which is limited to fewer than 200k sentence pairs), we next experiment with several approaches for filtering the NMT data. We first consider filtering based on length, described in Section 5.5. We then consider filtering based on several quality measures designed to find more natural and higher-quality translations, described in Section 5.6. Finally, we consider several measures of diversity. By diversity we mean here a measure of the lexical and syntactic difference between the reference and its paraphrase. We describe these experiments in Section 5.7. We note that these filtering methods are not all mutually exclusive and could be combined, though in this paper we experiment with each individually and leave combination to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering Methods",
"sec_num": "5.4"
},
{
"text": "We first consider filtering candidate sentence pairs by length, i.e., the number of tokens in the translation. The tunable parameters are the upper and lower bounds of the translation lengths.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Length Filtering",
"sec_num": "5.5"
},
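As a concrete illustration, the length filter described here amounts to a bound check on the translation's token count. The sketch below uses our own function names and toy data, not code released with the paper:

```python
def filter_by_length(pairs, lower, upper):
    """Keep (reference, translation) pairs whose translation length,
    in tokens, lies in the inclusive range [lower, upper]."""
    kept = []
    for reference, translation in pairs:
        n_tokens = len(translation.split())
        if lower <= n_tokens <= upper:
            kept.append((reference, translation))
    return kept

# Toy data: one short pair and one longer pair.
pairs = [
    ("a short reference .", "a short translation ."),
    ("a much longer reference sentence with many more tokens in it .",
     "a much longer translation sentence with many more tokens in it ."),
]
short_pairs = filter_by_length(pairs, 0, 10)  # keeps only the 4-token translation
```

Tuning the filter then reduces to a grid search over (lower, upper) pairs on development data.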
{
"text": "We experiment with a partition of length ranges, showing the results in Table 5. These results are averages across all language pairs and data sources of training data for each length range shown. We find it best to select NMT data where the translations have between 0 and 10 tokens, with performance dropping as sentence length increases. This is true for both the GRAN and AVG models. We perform the same filtering on the SimpWiki data, though the trend there is not nearly as strong. Therefore the trend is unlikely to be due to the nature of the evaluation data, and may instead reflect machine translation quality dropping as sentence length increases. This trend appears even though the datasets with higher ranges contain more tokens of training data, since only the number of training sentence pairs is kept constant across configurations.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 79,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Length Filtering",
"sec_num": "5.5"
},
{
"text": "We then tune the length range using our development data, considering the following length ranges: [0, 10], [10, 20], [10, 30], [10, 100], [15, 25], [15, 30], [15, 100], [20, 30], [20, 100], [30, 100]. We tune over ranges as well as language, data source, and stopping epoch, each time training on 24,000 sentence pairs. We report the average test results over all languages and datasets in Table 6. We compare to a baseline that draws a random set of data, showing that length-based filtering leads to gains of nearly half a point on average across our test sets. The tuned length ranges are short for both NMT and SimpWiki. The distribution of lengths in the NMT and SimpWiki data is fairly similar. The 10 NMT datasets all have mean translation lengths between 22 and 28 tokens. The data has fairly large standard deviations (11-25 tokens), indicating that there are some very long translations in the data. SimpWiki has a mean length of 24.2 and a standard deviation of 13.1.",
"cite_spans": [],
"ref_spans": [
{
"start": 391,
"end": 398,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Length Filtering",
"sec_num": "5.5"
},
{
"text": "We also consider filtering based on several measures of the \"quality\" of the back-translation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Filtering",
"sec_num": "5.6"
},
{
"text": "\u2022 Translation Cost: We use the cost (negative log likelihood) of the translation from the NMT system, divided by the number of tokens in the translation. \u2022 Language Model: We train a separate language model for each language/data pair on 40,000 references that are separate from the 100,000 used for mining data. Due to the small data size, we train a 3-gram language model and use the KenLM toolkit (Heafield, 2011) . \u2022 Reference/Translation Classification: We train binary classifiers to predict whether a given sentence is a reference or translation (described in Section 5.6.1). We use the probability of being a reference as the score for filtering.",
"cite_spans": [
{
"start": 400,
"end": 416,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Filtering",
"sec_num": "5.6"
},
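The translation-cost criterion can be sketched as follows; the function names and the toy log-probabilities are our own illustration (in practice the NMT system supplies the log-likelihoods):

```python
import math

def per_token_cost(log_prob, translation):
    """Translation cost: negative log-likelihood from the NMT system,
    normalized by the number of tokens in the translation."""
    n_tokens = len(translation.split())
    return -log_prob / n_tokens

def filter_by_cost(scored_pairs, max_cost):
    """Keep (reference, translation) pairs whose per-token cost is at
    most max_cost (the tuned upper bound)."""
    return [(r, t) for r, t, lp in scored_pairs
            if per_token_cost(lp, t) <= max_cost]

# Toy triples of (reference, translation, translation log-probability).
scored = [
    ("ref a", "a fluent translation .", math.log(0.5) * 4),    # ~0.69 per token
    ("ref b", "an unlikely translation .", math.log(0.05) * 4),  # ~3.0 per token
]
kept = filter_by_cost(scored, max_cost=1.0)
```

The language-model filter is analogous, with sentence perplexity in place of the per-token cost.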
{
"text": "For translation cost, we tune the upper bound of the cost over the range [0.2, 1] using increments of 0.1. For the language model, we tune an upper bound on the perplexity of the translations among the set {25, 50, 75, 100, 150, 200, \u221e}. For the classifier, we tune the minimum probability of being a reference over the range [0, 0.9] using increments of 0.1. Table 7 shows average test results over all languages and datasets after tuning hyperparameters on our development data for each. The translation cost and language model are not helpful for filtering, as random selection outperforms them. Both methods are outperformed by the reference classifier, which slightly outperforms random selection when using the stronger GRAN model. We now discuss further how we trained the reference classifier and the data characteristics that it reveals. We did not experiment with quality filtering for SimpWiki since it is human-written text.",
"cite_spans": [],
"ref_spans": [
{
"start": 360,
"end": 367,
"text": "Table 7",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Quality Filtering",
"sec_num": "5.6"
},
{
"text": "We experiment with predicting whether a given sentence is a reference or a back-translation, hypothesizing that generated sentences with high probabilities of being references are of higher quality. We train two kinds of binary classifiers, one using an LSTM and the other using word averaging, each followed by a softmax layer. We select 40,000 reference/translation pairs for training and 5,000 each for validation and testing. A single example is a sentence, labeled 1 if it is a reference and 0 if it is a translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
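A rough sketch of the word-averaging variant is below. The toy vocabulary, random embeddings, and training data are our own, and a binary logistic layer stands in for the softmax over two classes; this is an illustration of the technique, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and random word vectors (illustrative only).
vocab = {"the": 0, "staff": 1, "were": 2, "very": 3, "nice": 4,
         "room": 5, "was": 6, "comfortable": 7, "and": 8, "helpful": 9}
emb = rng.normal(size=(len(vocab), 8))

def encode(sentence):
    """Word-averaging encoder: the mean of the word vectors."""
    idx = [vocab[w] for w in sentence.split() if w in vocab]
    return emb[idx].mean(axis=0)

def predict_proba(sentence, w, b):
    """Probability that the sentence is a reference (label 1)."""
    z = encode(sentence) @ w + b
    return 1.0 / (1.0 + np.exp(-z))

# One reference-like sentence and one repetitive translation-like sentence.
data = [("the room was comfortable and the staff were helpful", 1),
        ("the staff were very nice and the staff were very nice", 0)]

# Minimize cross entropy with plain gradient descent (the paper uses Adam).
w, b = np.zeros(8), 0.0
for _ in range(500):
    for sent, label in data:
        p = predict_proba(sent, w, b)
        grad = p - label          # d(cross-entropy)/dz for a logistic output
        w -= 0.5 * grad * encode(sent)
        b -= 0.5 * grad
```

The LSTM variant replaces `encode` with the final hidden state of a recurrent encoder; the filtering score is `predict_proba` applied to each back-translation.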
{
"text": "In training, we consider the entire k-best list as examples of translations, selecting one translation to be the 0-labeled example. We either do this randomly or we score each sentence in the k-best list using our model and select the one with the highest probability of being a reference as the 0-labeled example. We tune this choice as well as an L2 regularizer on the word embeddings (tuned over {10^-5, 10^-6, 10^-7, 10^-8, 0}). We use PARAGRAM-SL999 embeddings (Wieting et al., 2015) to initialize the word embeddings for both models. Models were trained by minimizing cross entropy for 10 epochs using Adam with learning rate 0.001. We performed this procedure separately for each of the 10 language/data pairs. The results are shown in Table 8, which reports reference/translation classification accuracies (\u00d7100); the highest score in each column is in boldface, and the final two columns show the accuracies of the positive (reference) and negative classes, respectively. While performance varies greatly across data sources, the LSTM always outperforms the word averaging model. We note that these classification results can be further improved: we also trained models on 90,000 examples, essentially doubling the amount of data, and the results improved by about 2% absolute on each dataset on both the validation and testing data.",
"cite_spans": [
{
"start": 470,
"end": 492,
"text": "(Wieting et al., 2015)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 747,
"end": 754,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
{
"text": "Analyzing Reference Classification. We inspected the output of our reference classifier and noted a few qualitative trends which we then verified empirically. First, neural MT systems tend to use a smaller vocabulary and exhibit more restricted use of phrases. They correspondingly tend to show more repetition in terms of both words and longer n-grams. This hypothesis can be verified empirically in several ways. We do so by calculating the entropy of the unigrams and trigrams for both the references and the translations from our 150,000 reference-translation pairs. 5 We also calculate the repetition percentage of unigrams and trigrams in both the references and translations. This is defined as the percentage of words that are repetitions (i.e., have already appeared in the sentence). For unigrams, we only consider words consisting of at least 3 characters.",
"cite_spans": [
{
"start": 571,
"end": 572,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
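The two diagnostics described above can be computed as follows. This is a minimal sketch: tokenization here is whitespace splitting and the toy sentence is our own, so the exact numbers will differ from the paper's:

```python
import math
from collections import Counter

def ngram_entropy(sentences, n):
    """Shannon entropy (in bits) of the n-gram distribution over a corpus."""
    counts = Counter()
    for sent in sentences:
        toks = sent.split()
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def repetition_pct(sentence, n=1, min_chars=3):
    """Percentage of n-grams that repeat an earlier n-gram in the sentence.
    For unigrams, only words of at least min_chars characters are counted."""
    toks = sentence.split()
    if n == 1:
        toks = [t for t in toks if len(t) >= min_chars]
    ngrams = [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    if not ngrams:
        return 0.0
    seen, repeats = set(), 0
    for g in ngrams:
        if g in seen:
            repeats += 1
        seen.add(g)
    return 100.0 * repeats / len(ngrams)

# A repetitive translation-like toy sentence.
translation = ("the staff were very nice and the room was very nice "
               "and the staff were very nice")
```

Lower entropy and higher repetition percentages for translations than for references are what the comparison in the text measures.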
{
"text": "The results are shown in Table 9, in which we subtract the translation value from the reference value for each measure. The translated text has lower n-gram entropies and higher rates of repetition. This pattern appears for all datasets, but is strongest for the Common Crawl and French-English 10^9 data.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 9",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
{
"text": "We also noticed that translations are less likely to use rare words, instead using a longer sequence of short words to convey the same meaning. We found that translations were sometimes more vague and, unsurprisingly, more likely to be ungrammatical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
{
"text": "We check whether our classifier is learning these patterns by computing the reference probabilities P (R) of 100,000 randomly sampled translation-reference pairs from each dataset (the same used to train models). We then compute the correlation between our classification score and different metrics: the repetition rate of the sentence, the average inverse-document frequency (IDF) of the sentence, 6 and the translation length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
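The correlations reported next use Spearman's rho, which can be computed from scratch as below; this is a standard implementation (average ranks for ties, then Pearson correlation of the ranks), included only to make the measure concrete:

```python
def _ranks(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the two rank vectors."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

For example, a classifier probability that falls monotonically as the repetition rate rises yields a rho near -1, regardless of the shape of the relationship.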
{
"text": "The results are shown in Table 10, which reports Spearman's \u03c1 between our reference classifier probability and various measures. The negative correlations with repetition indicate that fewer repetitions lead to higher P (R). The positive correlation with average IDF indicates that P (R) rewards the use of rare words (Wikipedia was used to calculate the frequencies of the tokens; all tokens were lowercased). Interestingly, the negative correlation with length suggests that the classifier prefers more concise sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
{
"text": "R: Room was comfortable and the staff at the front desk were very helpful.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
{
"text": "1.0 T: The staff were very nice and the room was very nice and the staff were very nice. Table 11: Illustrative examples of references (R) and back-translations (T), along with probabilities from the reference classifier. See text for details. We show examples of these phenomena in Table 11. The first two examples show the tendency of NMT systems to repeat words and phrases. The second two show how they tend to use sequences of common words (\"put at risk\") rather than rare words (\"endangering\").",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 97,
"text": "Table 11",
"ref_id": "TABREF0"
},
{
"start": 283,
"end": 291,
"text": "Table 11",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Reference/Translation Classification",
"sec_num": "5.6.1"
},
{
"text": "We consider several filtering criteria based on measures that encourage particular amounts of disparity between the reference and its backtranslation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity Filtering",
"sec_num": "5.7"
},
{
"text": "\u2022 n-gram Overlap: Our n-gram overlap measures are calculated by counting n-grams of a given order in both the reference and translation, then dividing the number of shared n-grams by the total number of n-grams in the reference or translation, whichever has fewer. We use three n-gram overlap scores (n \u2208 {1, 2, 3}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity Filtering",
"sec_num": "5.7"
},
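The overlap measure above can be sketched as follows (our own function names and toy sentences; multiset intersection handles repeated n-grams):

```python
from collections import Counter

def ngram_overlap(reference, translation, n):
    """Shared n-gram count divided by the total n-gram count of whichever
    side (reference or translation) has fewer n-grams."""
    def ngrams(sentence):
        toks = sentence.split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    ref, trans = ngrams(reference), ngrams(translation)
    shared = sum((ref & trans).values())   # multiset intersection
    fewer = min(sum(ref.values()), sum(trans.values()))
    return shared / fewer if fewer else 0.0
```

Filtering then keeps pairs whose overlap falls between the tuned lower and upper bounds.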
{
"text": "\u2022 BLEU Score: We use a smoothed sentencelevel BLEU variant from Nakov et al. (2012) that uses smoothing for all n-gram lengths and also smooths the brevity penalty. For both methods, the tunable hyperparameters are the upper and lower bounds for the above scores. We tune over the cross product of lower bounds {0, 0.1, 0.2, 0.3} and upper bounds {0.6, 0.7, 0.8, 0.9, 1.0}. Our intuition is that the best data will have some amount of n-gram overlap, but not too much. Too much n-gram overlap will lead to pairs that are not useful for learning.",
"cite_spans": [
{
"start": 64,
"end": 83,
"text": "Nakov et al. (2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity Filtering",
"sec_num": "5.7"
},
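A simplified smoothed sentence-level BLEU is sketched below. Note this is only an approximation for illustration: it applies add-one smoothing to every n-gram precision, whereas the variant of Nakov et al. (2012) used in the paper additionally smooths the brevity penalty:

```python
import math
from collections import Counter

def smoothed_bleu(reference, translation, max_n=4):
    """Sentence-level BLEU with add-one smoothing on each n-gram precision
    (an approximation of the smoothing of Nakov et al. (2012))."""
    ref_toks, trans_toks = reference.split(), translation.split()
    log_precisions = 0.0
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref_toks[i:i + n])
                             for i in range(len(ref_toks) - n + 1))
        trans_ngrams = Counter(tuple(trans_toks[i:i + n])
                               for i in range(len(trans_toks) - n + 1))
        matched = sum((ref_ngrams & trans_ngrams).values())  # clipped matches
        total = sum(trans_ngrams.values())
        log_precisions += math.log((matched + 1) / (total + 1))
    # Standard (unsmoothed) brevity penalty.
    bp = min(1.0, math.exp(1 - len(ref_toks) / max(len(trans_toks), 1)))
    return bp * math.exp(log_precisions / max_n)
```

As with n-gram overlap, the filter keeps pairs whose score lies between tuned lower and upper bounds.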
{
"text": "The results are shown in Table 12 , for both models and for both NMT and SimpWiki. We find that the diversity filtering methods lead to consistent improvements when training on SimpWiki. We believe this is because many of the sentence pairs in SimpWiki are near-duplicates and these filtering methods favor data with more differences.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 33,
"text": "Table 12",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Diversity Filtering",
"sec_num": "5.7"
},
{
"text": "Diversity filtering can also help when selecting NMT data, though the differences are smaller. We do note that unigram overlap is the strongest filtering strategy for AVG. When looking at the threshold tuning, the best lower bounds are often 0 or 0.1 and the best upper bounds are typically 0.6-0.7, indicating that sentence pairs with a high degree of word overlap are not useful for training. We also find that the GRAN benefits more from filtering based on higher-order n-gram overlap than AVG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Diversity Filtering",
"sec_num": "5.7"
},
{
"text": "Unlike the SimpWiki data, which is naturally limited and only available for English, we can scale our approach. Since we use data on which the NMT systems were trained and perform backtranslation, we can easily produce large training sets of paraphrastic sentence pairs for many languages and data domains, limited only by the availability of bitext.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scaling Up",
"sec_num": "5.8"
},
{
"text": "To test this, we took the tuned filtering methods and language/data pairs (according to our development dataset only), and trained them on more data. These were CC-CS for GRAN and CC-DE for AVG. We also trained each model on the same number of sentence pairs from SimpWiki. 8 We also compare to PPDB XL, and since PPDB has fewer tokens per example, we use enough PPDB data so that it has at least as many tokens as the SimpWiki data used in the experiment. 9 Table 13 shows clear improvements when using more training data, providing evidence that our approach can scale to larger datasets. The NMT data surpasses SimpWiki for the GRAN, while the SimpWiki and NMT data perform similarly for AVG. PPDB is outperformed by both data sources for both models. Even when we train on all 52M tokens in PPDB XXL, AVG only reaches 66.5.",
"cite_spans": [
{
"start": 274,
"end": 275,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 459,
"end": 467,
"text": "Table 13",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Scaling Up",
"sec_num": "5.8"
},
{
"text": "We showed how back-translation can be used to generate effective training data for paraphrastic sentence embeddings. We explored filtering strategies that improve the generated data; in doing so, we identified characteristics that distinguish NMT output from references. Our hope is that these results can enable learning paraphrastic sentence embeddings with powerful neural architectures across many languages and domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Generated paraphrases and code are available at http://ttic.uchicago.edu/~wieting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For example, CzEng 1.6 (Bojar et al., 2016) contains a billion words across its 8 domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available at https://github.com/sebastienj/LV_groundhog.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Statistical significance testing is nontrivial due to averaging Pearson's r so we leave it to future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We randomly selected translations from the beam search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is noteworthy because the average sentence length of translations and references is not significantly different.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since the CC-CS data was the smallest dataset used to train the CS NMT system (see Table 3), we only used 100,000 pairs for the GRAN experiment. For AVG, we used the full 167,689. 9 We used 800,011 pairs for GRAN and 1,341,188 for AVG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357. We thank the developers of Theano (Theano Development Team, 2016) and NVIDIA Corporation for donating GPUs used in this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Inigo",
"middle": [],
"last": "Lopez-Gazpio",
"suffix": ""
},
{
"first": "Montse",
"middle": [],
"last": "Maritxalar",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Larraitz",
"middle": [],
"last": "Uria",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic tex- tual similarity, English, Spanish and pilot on inter- pretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SemEval-2014 task 10: Multilingual semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Carmen",
"middle": [],
"last": "Banea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Rigau",
"suffix": ""
},
{
"first": "Janyce",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of SemEval",
"volume": "",
"issue": "",
"pages": "497--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. Proceedings of SemEval, pages 497-511.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "*SEM 2013 shared task: Semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
},
{
"first": "Weiwei",
"middle": [],
"last": "Guo",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *SEM 2013 shared task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Seman- tics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "SemEval-2012 task 6: A pilot on semantic textual similarity",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Gonzalez-Agirre",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the First Joint Conference on Lexical and Computational Semantics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the First Joint Conference on Lexical and Com- putational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A simple but tough-to-beat baseline for sentence embeddings",
"authors": [
{
"first": "Sanjeev",
"middle": [],
"last": "Arora",
"suffix": ""
},
{
"first": "Yingyu",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Tengyu",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence em- beddings. In Proceedings of the International Con- ference on Learning Representations.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Paraphrasing with bilingual parallel corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Pro- ceedings of the 43rd Annual Meeting on Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Extracting paraphrases from a parallel corpus",
"authors": [
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kathleen R Mckeown",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Regina Barzilay and Kathleen R McKeown. 2001. Ex- tracting paraphrases from a parallel corpus. In Pro- ceedings of the 39th annual meeting on Association for Computational Linguistics, pages 50-57.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "CzEng 1.6: Enlarged Czech-English Parallel Corpus with Processing Tools Dockered",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Du\u0161ek",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Libovick\u00fd",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Nov\u00e1k",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Sudarikov",
"suffix": ""
},
{
"first": "Du\u0161an",
"middle": [],
"last": "Vari\u0161",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of 19th International Conference on Text, Speech, and Dialogue (TSD)",
"volume": "",
"issue": "",
"pages": "231--238",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Ond\u0159ej Du\u0161ek, Tom Kocmi, Jind\u0159ich Li- bovick\u00fd, Michal Nov\u00e1k, Martin Popel, Roman Su- darikov, and Du\u0161an Vari\u0161. 2016. CzEng 1.6: En- larged Czech-English Parallel Corpus with Process- ing Tools Dockered. In Proceedings of 19th Inter- national Conference on Text, Speech, and Dialogue (TSD), pages 231-238.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Simple english wikipedia: a new text simplification task",
"authors": [
{
"first": "William",
"middle": [],
"last": "Coster",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Kauchak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "665--669",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William Coster and David Kauchak. 2011. Simple en- glish wikipedia: a new text simplification task. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies: short papers-Volume 2, pages 665-669.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase cor- pora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, page 350.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of IWP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proc. of IWP.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "MultiUN: A multilingual corpus from united nation documents",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC'10)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from united nation documents. In Proceedings of the Seventh conference on In- ternational Language Resources and Evaluation (LREC'10).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Retrofitting word vectors to semantic lexicons",
"authors": [
{
"first": "Manaal",
"middle": [],
"last": "Faruqui",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Sujay",
"middle": [],
"last": "Kumar Jauhar",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The multilingual paraphrase database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch and Chris Callison-Burch. 2014. The multilingual paraphrase database. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC-2014).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "PPDB: The Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of HLT-NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In Proceedings of HLT-NAACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "KenLM: faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the EMNLP 2011 Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: faster and smaller language model queries. In Proceedings of the EMNLP 2011 Sixth Workshop on Statistical Ma- chine Translation, pages 187-197.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Embedding word similarity with neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Sebastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Coline",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6448"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Kyunghyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio. 2014a. Embedding word similarity with neural machine translation. arXiv preprint arXiv:1412.6448.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Not all neural embeddings are born equal",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Sebastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Coline",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1410.0718"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Hill, KyungHyun Cho, Sebastien Jean, Coline Devin, and Yoshua Bengio. 2014b. Not all neu- ral embeddings are born equal. arXiv preprint arXiv:1410.0718.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning distributed representations of sentences from unlabelled data",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "On using very large target vocabulary for neural machine translation",
"authors": [
{
"first": "S\u00e9bastien",
"middle": [],
"last": "Jean",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Roland",
"middle": [],
"last": "Memisevic",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1-10.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Raquel Urtasun, and Sanja Fidler",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Yukun",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"S"
],
"last": "Zemel",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3294--3302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urta- sun, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in Neural Information Processing Sys- tems, pages 3294-3302.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Europarl: A parallel corpus for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 10th Machine Translation Summit",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the 10th Machine Translation Summit, pages 79-86.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the As- sociation for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Ses- sions, pages 177-180.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1405.4053"
]
},
"num": null,
"urls": [],
"raw_text": "Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Paraphrasing revisited with neural machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "1",
"issue": "",
"pages": "881--893",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lap- ata. 2017. Paraphrasing revisited with neural ma- chine translation. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 881-893.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Marelli",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Menini",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Zamparelli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Marelli, Luisa Bentivogli, Marco Baroni, Raf- faella Bernardi, Stefano Menini, and Roberto Zam- parelli. 2014. SemEval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014).",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Optimizing for sentence-level BLEU+1 yields short translations",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vo",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COL-ING 2012",
"volume": "",
"issue": "",
"pages": "1979--1994",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Francisco Guzman, and Stephan Vo- gel. 2012. Optimizing for sentence-level BLEU+1 yields short translations. In Proceedings of COL- ING 2012, pages 1979-1994.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Jointly optimizing word representations for lexical and sentential tasks with the c-phrase model",
"authors": [
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Nghia The Pham",
"suffix": ""
},
{
"first": "Angeliki",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nghia The Pham, Germ\u00e1n Kruszewski, Angeliki Lazaridou, and Marco Baroni. 2015. Jointly opti- mizing word representations for lexical and senten- tial tasks with the c-phrase model. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Monolingual machine translation for paraphrase generation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for para- phrase generation. In Proceedings of the 2004 Con- ference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86-96.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Dynamic pooling and unfolding recursive autoencoders for paraphrase detection",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"H"
],
"last": "Huang",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2011,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoen- coders for paraphrase detection. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Advances in Neural Information Process- ing Systems, pages 3104-3112.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Theano: A Python framework for fast computation of mathematical expressions",
"authors": [],
"year": 2016,
"venue": "Theano Development Team",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theano Development Team. 2016. Theano: A Python framework for fast computation of mathematical ex- pressions. arXiv e-prints, abs/1605.02688.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Charagram: Embedding words and sentences via character n-grams",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016a. Charagram: Embedding words and sentences via character n-grams. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Towards universal paraphrastic sentence embeddings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016b. Towards universal paraphrastic sentence embeddings. In Proceedings of Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "From paraphrase database to compositional paraphrase model and back",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the ACL (TACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Dan Roth. 2015. From paraphrase database to compositional paraphrase model and back. Transactions of the ACL (TACL).",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Revisiting recurrent networks for paraphrastic sentence embeddings",
"authors": [
{
"first": "John",
"middle": [],
"last": "Wieting",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Wieting and Kevin Gimpel. 2017. Revisiting re- current networks for paraphrastic sentence embed- dings. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "SemEval-2015 task 1: Paraphrase and semantic similarity in Twitter (PIT)",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "William B",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Chris Callison-Burch, and William B Dolan. 2015. SemEval-2015 task 1: Paraphrase and seman- tic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval).",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Extracting lexically divergent paraphrases from Twitter",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "William",
"middle": [
"B"
],
"last": "Dolan",
"suffix": ""
},
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "435--448",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Xu, Alan Ritter, Chris Callison-Burch, William B. Dolan, and Yangfeng Ji. 2014. Extracting lexically divergent paraphrases from Twitter. Transactions of the Association for Computational Linguistics, 2:435-448.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "ADADELTA: an adaptive learning rate method",
"authors": [
{
"first": "D",
"middle": [],
"last": "Matthew",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zeiler",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Leveraging multiple MT engines for paraphrase generation",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1326--1334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Zhao, Haifeng Wang, Xiang Lan, and Ting Liu. 2010. Leveraging multiple MT engines for para- phrase generation. In Proceedings of the 23rd Inter- national Conference on Computational Linguistics, pages 1326-1334.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Pivot approach for extracting paraphrase patterns from bilingual corpora",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "780--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Zhao, Haifeng Wang, Ting Liu, and Sheng Li. 2008. Pivot approach for extracting paraphrase pat- terns from bilingual corpora. In Proceedings of the Annual Meeting of the Association for Computa- tional Linguistics, pages 780-788.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "and Sweden have been supporters of CTBT for a long time now. 0.06 R: We thought Mr Haider ' s Austria was endangering our freedom. 1.0 T: We thought that our freedom was put at risk by Austria by Mr Haider. 0.09",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Illustrative examples of references (R) paired with back-translations (T)."
},
"TABREF1": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>Language</td><td>% BLEU</td></tr><tr><td>Czech\u2192English</td><td>19.7</td></tr><tr><td>French\u2192English</td><td>20.1</td></tr><tr><td>German\u2192English</td><td>28.2</td></tr></table>",
"text": "Dataset sizes (numbers of sentence pairs) for data domains used for training NMT systems."
},
"TABREF2": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "BLEU scores on the WMT2015 test set."
},
"TABREF4": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Test results (average Pearson's r \u00d7 100 over 22 STS datasets) using a random selection of 24,000 examples from each data source."
},
"TABREF7": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">NMT</td><td colspan=\"2\">SimpWiki</td></tr><tr><td>Filtering Method</td><td colspan=\"4\">GRAN AVG GRAN AVG</td></tr><tr><td>None (Random)</td><td>66.9</td><td>65.5</td><td>67.2</td><td>65.8</td></tr><tr><td>Length</td><td>67.3</td><td>66.0</td><td>67.4</td><td>66.2</td></tr><tr><td colspan=\"5\">Tuned Len. Range [0,10] [0,10] [0,10] [0,15]</td></tr></table>",
"text": "Test correlations for our models when trained on sentences with particular length ranges (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson's r \u00d7 100)."
},
"TABREF8": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Length filtering test results after tuning length ranges on development data (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson's r \u00d7 100)."
},
"TABREF10": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Quality filtering test results after tuning quality hyperparameters on development data (averaged over languages and data sources for the NMT rows). Results are on STS datasets (Pearson's r \u00d7 100)."
},
"TABREF13": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Differences in entropy and repetition of</td></tr><tr><td>unigrams/trigrams in references and translations.</td></tr><tr><td>Negative values indicate translations have a higher</td></tr><tr><td>value, so references show consistently higher en-</td></tr><tr><td>tropies and lower repetition rates.</td></tr></table>",
"text": ""
},
"TABREF16": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table><tr><td>: Diversity filtering test results after tun-</td></tr><tr><td>ing filtering hyperparameters on development data</td></tr><tr><td>(averaged over languages and data sources for the</td></tr><tr><td>NMT rows). Results are on STS datasets (Pear-</td></tr><tr><td>son's r \u00d7 100).</td></tr></table>",
"text": ""
},
"TABREF18": {
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>",
"text": "Test results with more training data. More data helps both AVG and GRAN to match or surpass training on SimpWiki. Both comfortably surpass PPDB. The number of training examples used is in parentheses."
}
}
}
}