| { |
| "paper_id": "Y16-2024", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:47:08.037662Z" |
| }, |
| "title": "Dealing with Out-Of-Vocabulary Problem in Sentence Alignment Using Word Similarity", |
| "authors": [ |
| { |
| "first": "Hai-Long", |
| "middle": [], |
| "last": "Trieu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "trieulh@jaist.ac.jp" |
| }, |
| { |
| "first": "Le-Minh", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "nguyenml@jaist.ac.jp" |
| }, |
| { |
| "first": "Phuong-Thai", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "National University", |
| "location": { |
| "settlement": "Hanoi", |
| "country": "Vietnam, Vietnam" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Sentence alignment plays an essential role in building bilingual corpora which are valuable resources for many applications like statistical machine translation. In various approaches of sentence alignment, length-and-word-based methods which are based on sentence length and word correspondences have been shown to be the most effective. Nevertheless a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other lengthand-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with OOV problem in sentence alignment.", |
| "pdf_parse": { |
| "paper_id": "Y16-2024", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Sentence alignment plays an essential role in building bilingual corpora which are valuable resources for many applications like statistical machine translation. In various approaches of sentence alignment, length-and-word-based methods which are based on sentence length and word correspondences have been shown to be the most effective. Nevertheless a drawback of using bilingual dictionaries trained by IBM Models in length-and-word-based methods is the problem of out-of-vocabulary (OOV). We propose using word similarity learned from monolingual corpora to overcome the problem. Experimental results showed that our method can reduce the OOV ratio and achieve a better performance than some other lengthand-word-based methods. This implies that using word similarity learned from monolingual data may help to deal with OOV problem in sentence alignment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Sentence alignment plays an important role in building bilingual corpora for statistical machine translation and many other tasks. Given documents from two languages, the task is to align sentences which are translations of each other. There are three main methods in sentence alignment including lengthbased, word-based, and the combination of the first two methods. Length-based methods were proposed in (Brown et al., 1991; Gale and Church, 1993) . (Wu, 1994) and (Melamed, 1996) introduced methods based on word correspondences. Length-based and word-based methods were also combined to make hybrid methods (Moore, 2002; Varga et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 406, |
| "end": 426, |
| "text": "(Brown et al., 1991;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 427, |
| "end": 449, |
| "text": "Gale and Church, 1993)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 452, |
| "end": 462, |
| "text": "(Wu, 1994)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 467, |
| "end": 482, |
| "text": "(Melamed, 1996)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 611, |
| "end": 624, |
| "text": "(Moore, 2002;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 625, |
| "end": 644, |
| "text": "Varga et al., 2007)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Length-based methods which are only based on the number of words or characters in sentence pairs can run very fast but show a low accuracy. Meanwhile, word-based methods which use bilingual lexicon gain high accuracy, but heavily depend on available lexical resources. The length-and-word-based methods which combine length-based and wordbased methods (Moore, 2002; Varga et al., 2007) do not depend on lexical resources and overcome the problem of low accuracy in length-based methods. Nonetheless, a drawback of these length-and-wordbased methods which trained a bilingual dictionary using IBM models is the OOV problem.", |
| "cite_spans": [ |
| { |
| "start": 352, |
| "end": 365, |
| "text": "(Moore, 2002;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 366, |
| "end": 385, |
| "text": "Varga et al., 2007)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we propose an approach to deal with the OOV problem in sentence alignment based on word similarity learned from monolingual corpora. Words that were not contained in the bilingual dictionaries were replaced by their similar words from the monolingual corpora. Experiments conducted on English-Vietnamese sentence alignment showed that using word similarity learned from monolingual corpora can help to reduce the OOV ratio and lead to an improvement in comparison with some other lengthand-word-based methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We describe phases used in our method in Section 2. Experimental results and discussions are analysed in Section 3. An overview of related researches is discussed in Section 4, and conclusions are drawn in Section 5. : Phases in our model; S: the text of source language, T: the text of target language; S 1 , T 1 : sentences aligned by the length-based phase; S 2 , T 2 : sentences aligned by the length-and-word-based phase; S', T': monolingual corpora of the source and target languages, respectively. The components of the length-and-word-based method (Moore, 2002) are bounded by the dashed frame.", |
| "cite_spans": [ |
| { |
| "start": 556, |
| "end": 569, |
| "text": "(Moore, 2002)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we describe phases used in our method, which include four phases: the length-based phase, the training bilingual dictionaries, using word similarity to deal with the OOV problem, and the combination of length-based and word-based methods. The model is illustrated in Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 284, |
| "end": 292, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Let l e and l v be the lengths of English and Vietnamese sentences, respectively. Then, l e and l v varies according to Poisson distribution as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (l v |l e ) = exp \u2212lvr (l e r) lv l v !", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Where r is the ratio of the mean length of Vietnamese sentences to the mean length of English sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "As shown in (Moore, 2002) , the length-based phase based on the Poisson distribution was slightly better than the Gaussian distribution proposed by (Brown et al., 1991) .", |
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 25, |
| "text": "(Moore, 2002)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 148, |
| "end": 168, |
| "text": "(Brown et al., 1991)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (l v |l e ) = \u03b1exp \u2212 log( l v l e ) \u2212 \u03bc) 2 2\u03c3 2", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Where \u03bc and \u03c3 2 are the mean and variance of the Gaussian distribution, respectively. The lengthbased model based on the Poisson distribution was shown to be simpler to estimate than the model based on the Gaussian distribution which has to iteratively estimate the variance \u03c3 2 using the expectation maximization (EM) algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Our model was based on the length-based model using the Poisson distribution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based Phase", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Sentence pairs extracted from the length-based phase are then used to train IBM Model 1 (Brown et al., 1993) to build a bilingual dictionary. Let e and v be English and Vietnamese sentences, respectively. The procedure of generating sentence v from a sentence e with the length of l e is as follows: ", |
| "cite_spans": [ |
| { |
| "start": 88, |
| "end": 108, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training IBM Model 1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "1. Selecting a length l v for the sentence v 2. For each word position j in { 1..l v } of v:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training IBM Model 1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "tr(v j |e i )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Training IBM Model 1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Where is the uniform probability for all possible lengths of v.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training IBM Model 1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In the sentence alignment task based on word correspondences, bilingual dictionaries trained on IBM models can help to produce highly accurate sentence pairs when they contain reliable word pairs with a high percentage of vocabulary coverage. The OOV problem appears when the bilingual dictionary does not contain word pairs which are necessary to produce a correct alignment of sentences. The higher the OOV ratio, the lower the performance. The bilingual dictionary can also be expanded by training IBM models on available bilingual data. However, such resources are very rare especially for lowresource language pairs like English-Vietnamese. Meanwhile, monolingual data is easy to acquire in an abundant amount. We propose using word similarity learned from monolingual corpora to overcome the OOV problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Monolingual corpora of English and Vietnamese were used to train two word similarity models separately using a continuous bag-of-word model. In continuous bag-of-words models, words are predicted based on their context, and words that appear in the same context tend to be clustered together as similar words. We used word2vec (Mikolov et al., 2013) , a powerful continuous bag-of-words model to train word similarity. The word2vec model can run very fast and enables to train continuous vector representations of words on large data sets.", |
| "cite_spans": [ |
| { |
| "start": 327, |
| "end": 349, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The word similarity models were then used to enrich the bilingual dictionary. 1. Let (e i \u2212 v j ) be a word pair in the dictionary in which e i is the English word, and v j is the Vietnamese word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "2. Let (a) sim(e i ) = {e i 1 , ..., e im } (b) sim(v j ) = {v j 1 , ..., v jn }", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "be sets of similar words of e i and v j , respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "3. The dictionary can be expanded as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(a) For e in sim(e i ):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "add pairs (e \u2212 v j ) to the dictionary (b) For v in sim(v j ): add pairs (e i \u2212v ) to the dictionary (c) score(e \u2212 v j ) = score(e i \u2212 v j ) * cosine(e i \u2212 e ) (d) score(e i \u2212 v ) = score(e i \u2212 v j ) * cosine(v j \u2212 v )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Where score(a, b) is the word translation probability of the word pair (a, b) by training IBM Model 1. cosine (a, b) is the cosine similarity between a and b from word similarity models.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 116, |
| "text": "(a, b)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The expanded dictionary can help to cover a higher ratio of vocabulary, which reduces the OOV ratio and improves overall performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Using Word Similarity to Deal with OOV", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The expanded dictionary was then combined with the length-based phase described in Section 2.1 to produce final alignments, which are described as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based and Word-based", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (e, v) = P 1\u22121 (l e , l v ) (l e + 1) lv ( lv j=1 le i=0 tr(v j |e i ))( le i=1 f u (e i ))", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Length-based and Word-based", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "Where f u is the observed relative unigram frequency of the word in the text in the corresponding language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Length-based and Word-based", |
| "sec_num": "2.4" |
| }, |
| { |
| "text": "We conducted experiments on the sentence alignment task for English-Vietnamese, a low-resource language pair. We evaluated our method on the test set collected from the website. 1 After preprocessing the collected data, we conducted sentence alignment manually to achieve the reference data. We publish these data sets on the website. 2 The statistics of these data sets are shown in Table 1 . In order to produce a more reliable bilingual dictionary, we added an available bilingual corpus to train IBM Model 1, which was collected from the IWSLT2015 workshop. 3 The dataset contains subtitles of TED talks (Cettolo et al., 2012) . The IWSLT2015 training data is shown in In the preprocessing steps, we tokenized these datasets using the tokenizer of Moses script 4 for English and JVnTextpro 5 for Vietnamese. The datasets were then lowercased. For Vietnamese, we conducted word segmentation using JVnTextpro.", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 336, |
| "text": "2", |
| "ref_id": null |
| }, |
| { |
| "start": 562, |
| "end": 563, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 608, |
| "end": 630, |
| "text": "(Cettolo et al., 2012)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 384, |
| "end": 391, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Setup", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For the sentence alignment algorithm, we reimplemented phases in the model (Moore, 2002) using Java.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistics", |
| "sec_num": null |
| }, |
| { |
| "text": "To evaluate performance we used common metrics: Precision, Recall, and F-measure (V\u00e9ronis and Langlais, 2000) .", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 109, |
| "text": "(V\u00e9ronis and Langlais, 2000)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistics", |
| "sec_num": null |
| }, |
| { |
| "text": "In order to train word similarity models, we used English and Vietnamese monolingual corpora. For English we used the one-billion-words 6 dataset which contains almost 1B words. To build a huge monolingual corpus of Vietnamese, we extracted articles from the web (www.baomoi.com) 7 . The data set was then preprocessed to achieve 22 million Vietnamese sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We used word2vec from gensim python 8 to train two word-similarity models on the monolingual corpora. We set the cbow model with configurations: window size=5, vector size=100, min count = 10. The word2vec trained model of Vietnamese is also available on the website. 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Word Similarity", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We compared our model with the two other lengthand-word-based methods: M-align 9 (Moore, 2002) and Hun-align 10 (Varga et al., 2007) . We showed how our method can deal with the OOV problem.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 132, |
| "text": "(Varga et al., 2007)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Result and Discussion", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We setup the length-based phase's threshold to 0.99 to extract highest sentence pairs. Then in the length-and-word-based phase, we setup the threshold to 0.9 to ensure a high confidence. Experimental results are shown in Overall, the performance of our model slightly improved the M-align in all scores of precision, recall, and f-measure. Our model also gained higher performance than Hun-align. Although Hun-align can achieve the highest recall of 73.60% due to the approach that Hun-align constructs dictionaries, the method produced a number of error results, so this caused the lowest precision.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Result and Discussion", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "A problem of using the IBM Model 1 as in Moore's method was the OOV. When the dictionary cannot cover a high ratio of vocabulary, it decreases the contribution of the word-based phase. The average OOV ratio is shown in Table 4 . In comparison with M-align, using word similarity in our model reduced the OOV ratio from 7.37% to 4.33% in English and from 7.74% to 6.80% in Vietnamese vocabulary. By using word similarity models we overcame the problem of OOV. The following discussion will show how the word similarity models helped to reduce the OOV ratio. We describe word similarity models using word2vec with examples. Tables 5 and 6 show examples of OOV words and their most similar words extracted from the word similarity models. The word similarity models can explore not only helpful similar words in terms of variants in morphology but also words that share the same meaning but different morphemes. There are useful similar words that can have the same meaning as the OOV words like word pairs (\"intends\" and \"aims\") or (\"honours\" and \"awards\"), (\"qu\u00e1t\", \"m\u1eafng\"), (\"ghe\", \"\u0111\u00f2\"). However, because in the word2vec model words are predicted based on their context in terms of windows, some word pairs may contain different meanings like (\"bangkok\", \"jakarta\"), or (\"pagoda\", \"citadel\"), (\"ph\u1edf\", \"c\u01a1m\"). Therefore extracting suitable similar words is also needed to be further investigated.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 219, |
| "end": 226, |
| "text": "Table 4", |
| "ref_id": "TABREF7" |
| }, |
| { |
| "start": 622, |
| "end": 636, |
| "text": "Tables 5 and 6", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Result and Discussion", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We show an example of how our method deals with the OOV problem in Table 7 . The word pairs (reunification-th\u1ed1ng_nh\u1ea5t) and (impressively-m\u1ea1nh_m\u1ebd) were not covered by the dictionary using IBM Model 1, and this became an example of OOV. Examples of similar word pairs are shown in Table 8 , and translation word pairs trained by IBM Model 1 are shown in Table 9 . Because (reunification-unification) was a similar word pair, and the translation word pair (unification-th\u1ed1ng_nh\u1ea5t) was contained in the dictionary, the new translation word pair (reunification-th\u1ed1ng_nh\u1ea5t) was then created. Similarly, the new translation word pair (impressively-m\u1ea1nh_m\u1ebd) was created via the similar word pair (impressively-impressive) and the translation word pair (impressive-m\u1ea1nh_m\u1ebd). Table 10 shows induced translation word pairs. By using word similarity learned from monolingual corpora, a number of OOV words can be replaced by their similar words, which helped to reduce the OOV ratio and improve performance in overall.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 67, |
| "end": 74, |
| "text": "Table 7", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 279, |
| "end": 286, |
| "text": "Table 8", |
| "ref_id": "TABREF12" |
| }, |
| { |
| "start": 352, |
| "end": 359, |
| "text": "Table 9", |
| "ref_id": "TABREF14" |
| }, |
| { |
| "start": 766, |
| "end": 774, |
| "text": "Table 10", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Result and Discussion", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Sentence alignment is an essential task in natural language processing, which builds bilingual corpora, a valuable resource in many applications like statistical machine translation, word sense disambiguation, information retrieval, etc. The task can be solved based on the number of words or : An example of bilingual dictionary trained by IBM Model 1 (Score: translation probability); the translations to English (italic) were conducted by the authors.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Score English Vietnamese 0.215471 reunification th\u1ed1ng_nh\u1ea5t (to unify) 0.369082 impressively m\u1ea1nh_m\u1ebd (impressive) characters (Brown et al., 1991; Gale and Church, 1993) . These methods are fast and effective in some closed language pairs like English-French but achieve low performance in language pairs like English-Chinese. Word-based methods were proposed in (Kay and R\u00f6scheisen, 1993; Chen, 1993; Wu, 1994; Melamed, 1996; Ma, 2006) , based on lexical resources. These methods showed better performance than length-based methods, but they depend on available linguistic resources, which are rare and expensive to achieve in almost all language pairs, especially in low-resource languages like English-Vietnamese. Hybrid methods which combine lengthbased and word-based methods as shown in (Moore, 2002; Varga et al., 2007) can overcome the low accuracy of length-based methods, and these methods also do not depend on lexical resources. (Varga et al., 2007) proposed building bilingual corpora for medium-density languages. This can overcome the problem of the unavailability of bilingual resources of low-resource languages by building dictionaries and merge them to make a huge dictionary to cover a high ratio of vocabulary. However, because the method does not compute the score of word pairs in dictionaries, this leads to a low precision. Moore's method (Moore, 2002) can gain high accuracy, but the method has to deal with the OOV problem. Our model is similar to Moore's method, but we can overcome the OOV problem based on word similarity learned from monolingual corpora using a continuous bag-of-words model.", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 144, |
| "text": "(Brown et al., 1991;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 145, |
| "end": 167, |
| "text": "Gale and Church, 1993)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 361, |
| "end": 387, |
| "text": "(Kay and R\u00f6scheisen, 1993;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 388, |
| "end": 399, |
| "text": "Chen, 1993;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 400, |
| "end": 409, |
| "text": "Wu, 1994;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 410, |
| "end": 424, |
| "text": "Melamed, 1996;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 425, |
| "end": 434, |
| "text": "Ma, 2006)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 791, |
| "end": 804, |
| "text": "(Moore, 2002;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 805, |
| "end": 824, |
| "text": "Varga et al., 2007)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 939, |
| "end": 959, |
| "text": "(Varga et al., 2007)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 1362, |
| "end": 1375, |
| "text": "(Moore, 2002)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Continuous bag-of-words models were proposed in (Mikolov et al., 2013) , which can learn word similarity on very monolingual data. The model also has been applied to learn phrase similarity on monolingual data to improve statistical machine translation (Zhao et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 48, |
| "end": 70, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 253, |
| "end": 272, |
| "text": "(Zhao et al., 2015)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In using monolingual data for alignment tasks, (Trieu et al., 2014) proposed using word clustering trained on monolingual data to improve the Moore's method (Moore, 2002) . In our model, we also based on word similarity learned from monolingual data, but we used a strong technique of word vector representation, word2vec, to learn word similarity. (Songyot and Chiang, 2014) proposed a method using word similarity from monolingual corpora to improve machine translation. In the work of (Songyot and Chiang, 2014) , the word similarity is trained based on a word context model using a feedforward neural network and then applied to improve statistical machine translation.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 67, |
| "text": "(Trieu et al., 2014)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 157, |
| "end": 170, |
| "text": "(Moore, 2002)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 349, |
| "end": 375, |
| "text": "(Songyot and Chiang, 2014)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 488, |
| "end": 514, |
| "text": "(Songyot and Chiang, 2014)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The idea of using the word similarity model learned from monolingual data based on word2vec in our work is closed to the research of (Li et al., 2016) . In (Li et al., 2016) , the word similarity model is used to substitute rare words in neural machine translation. In our work, we adopted the word similarity model to overcome the out-of-vocabulary problem in sentence alignment.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 150, |
| "text": "(Li et al., 2016)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 156, |
| "end": 173, |
| "text": "(Li et al., 2016)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this work, we propose using word similarity to overcome the problem of OOV in sentence alignment. The word2vec model was trained on monolingual corpora to produce word-similarity models. These models were then combined with the bilingual word dictionary trained on IBM Model 1, which were integrated to length-and-word-based phase in a sentence alignment algorithm. Our method can reduce the OOV ratio with similar words learned from monolingual corpora, which leads to an improvement in comparison with some other length-andword-based methods. Using word similarity trained on monolingual corpora based on a distributed word representation model like word2vec may help to reduce the OOV in sentence alignment. Some aspects of this work need to be more investigated in future work like: applying word similarity in sentence alignment in a large scale data; exploring the contribution of word2vec in this task like using both the cbow and skip-gram models. We also plan to further leverage monolingual corpora to sentence alignment and then apply to statistical machine translation, especially for low-resource languages.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "30th Pacific Asia Conference on Language, Information and Computation (PACLIC 30)Seoul, Republic of Korea, October 28-30, 2016", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.vietnamtourism.com/ PACLIC 30 Proceedings", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://github.com/nguyenlab/SentAlign-Similarity 3 https://sites.google.com/site/iwsltevaluation2015/mt-track 4 http://www.statmt.org/moses/?n=moses.baseline 5 http://jvntextpro.sourceforge.net/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.statmt.org/lm-benchmark/ 7 http://www.baomoi.com/ 8 https://radimrehurek.com/gensim/models/word2vec.html 9 http://research.microsoft.com/en-us/downloads/aafd5dcf-4dcc-49b2-8a22-f7055113e656/ 10 http://mokk.bme.hu/en/resources/hunalign/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Aligning sentences in parallel corpora", |
| "authors": [ |
| { |
| "first": "Peter F", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer C", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert L", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Proceedings of the 29th annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "169--176", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F Brown, Jennifer C Lai, and Robert L Mercer. 1991. Aligning sentences in parallel corpora. In Pro- ceedings of the 29th annual meeting on Association for Computational Linguistics, pages 169-176. Associa- tion for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The mathematics of statistical machine translation: Parameter estimation", |
| "authors": [ |
| { |
| "first": "Peter F", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent J", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen A", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert L", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F Brown, Vincent J Della Pietra, Stephen A Della Pietra, and Robert L Mercer. 1993. The mathemat- ics of statistical machine translation: Parameter esti- mation. Computational linguistics, 19(2):263-311.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Wit3: Web inventory of transcribed and translated talks", |
| "authors": [ |
| { |
| "first": "Mauro", |
| "middle": [], |
| "last": "Cettolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Girardi", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT)", |
| "volume": "", |
| "issue": "", |
| "pages": "261--268", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit3: Web inventory of transcribed and trans- lated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261-268.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Aligning sentences in bilingual corpora using lexical information", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Stanley F Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31st annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "9--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stanley F Chen. 1993. Aligning sentences in bilingual corpora using lexical information. In Proceedings of the 31st annual meeting on Association for Computa- tional Linguistics, pages 9-16. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A program for aligning sentences in bilingual corpora", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "A" |
| ], |
| "last": "Gale", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "W" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational linguistics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "75--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "William A Gale and Kenneth W Church. 1993. A program for aligning sentences in bilingual corpora. Computational linguistics, 19(1):75-102.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Texttranslation alignment", |
| "authors": [ |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Kay", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "R\u00f6scheisen", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational linguistics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "121--142", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Martin Kay and Martin R\u00f6scheisen. 1993. Text- translation alignment. Computational linguistics, 19(1):121-142.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Towards zero unknown word in neural machine translation", |
| "authors": [ |
| { |
| "first": "Xiaoqing", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiajun", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chengqing", |
| "middle": [], |
| "last": "Zong", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 25th International Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoqing Li, Jiajun Zhang, and Chengqing Zong. 2016. Towards zero unknown word in neural machine trans- lation. In Proceedings of the 25th International Con- ference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Champollion: A robust parallel text sentence aligner", |
| "authors": [ |
| { |
| "first": "Xiaoyi", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "LREC 2006: Fifth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "489--492", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoyi Ma. 2006. Champollion: A robust parallel text sentence aligner. In LREC 2006: Fifth International Conference on Language Resources and Evaluation, pages 489-492.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A geometric approach to mapping bitext correspondence", |
| "authors": [ |
| { |
| "first": "Melamed", |
| "middle": [], |
| "last": "I Dan", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I Dan Melamed. 1996. A geometric approach to mapping bitext correspondence. arXiv preprint cmp- lg/9609009.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Efficient estimation of word representations in vector space", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1301.3781" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representa- tions in vector space. arXiv preprint arXiv:1301.3781.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Fast and accurate sentence alignment of bilingual corpora", |
| "authors": [ |
| { |
| "first": "Robert C", |
| "middle": [], |
| "last": "Moore", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robert C Moore. 2002. Fast and accurate sentence alignment of bilingual corpora. Springer.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Improving word alignment using word similarity", |
| "authors": [ |
| { |
| "first": "Theerawat", |
| "middle": [], |
| "last": "Songyot", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1840--1845", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Theerawat Songyot and David Chiang. 2014. Improving word alignment using word similarity. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 1840-1845.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Improving moore's sentence alignment method using bilingual word clustering", |
| "authors": [ |
| { |
| "first": "Hai-Long", |
| "middle": [], |
| "last": "Trieu", |
| "suffix": "" |
| }, |
| { |
| "first": "Phuong-Thai", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kim-Anh", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Knowledge and Systems Engineering", |
| "volume": "", |
| "issue": "", |
| "pages": "149--160", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hai-Long Trieu, Phuong-Thai Nguyen, and Kim-Anh Nguyen. 2014. Improving moore's sentence align- ment method using bilingual word clustering. In Knowledge and Systems Engineering, pages 149-160. Springer.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Parallel corpora for medium density languages. Amsterdam studies in the theory and history of linguistic science series 4", |
| "authors": [ |
| { |
| "first": "D\u00e1niel", |
| "middle": [], |
| "last": "Varga", |
| "suffix": "" |
| }, |
| { |
| "first": "P\u00e9ter", |
| "middle": [], |
| "last": "Hal\u00e1csy", |
| "suffix": "" |
| }, |
| { |
| "first": "Andr\u00e1s", |
| "middle": [], |
| "last": "Kornai", |
| "suffix": "" |
| }, |
| { |
| "first": "Viktor", |
| "middle": [], |
| "last": "Nagy", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e1szl\u00f3", |
| "middle": [], |
| "last": "N\u00e9meth", |
| "suffix": "" |
| }, |
| { |
| "first": "Viktor", |
| "middle": [], |
| "last": "Tr\u00f3n", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "292", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D\u00e1niel Varga, P\u00e9ter Hal\u00e1csy, Andr\u00e1s Kornai, Viktor Nagy, L\u00e1szl\u00f3 N\u00e9meth, and Viktor Tr\u00f3n. 2007. Paral- lel corpora for medium density languages. Amsterdam studies in the theory and history of linguistic science series 4, 292:247.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Evaluation of parallel text alignment systems", |
| "authors": [ |
| { |
| "first": "Jean", |
| "middle": [], |
| "last": "V\u00e9ronis", |
| "suffix": "" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Parallel text processing", |
| "volume": "", |
| "issue": "", |
| "pages": "369--388", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean V\u00e9ronis and Philippe Langlais. 2000. Evaluation of parallel text alignment systems. In Parallel text pro- cessing, pages 369-388. Springer.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Aligning a parallel english-chinese corpus statistically with lexical criteria", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "80--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. 1994. Aligning a parallel english-chinese cor- pus statistically with lexical criteria. In Proceedings of the 32nd annual meeting on Association for Computa- tional Linguistics, pages 80-87. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Learning translation models from monolingual continuous representations", |
| "authors": [ |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Hany", |
| "middle": [], |
| "last": "Hassan", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kai Zhao, Hany Hassan, and Michael Auli. 2015. Learn- ing translation models from monolingual continuous representations. In Proceedings of the 2015 Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Figure 1: Phases in our model; S: the text of source language, T: the text of target language; S 1 , T 1 : sentences aligned by the length-based phase; S 2 , T 2 : sentences aligned by the length-and-word-based phase; S', T': monolingual corpora of the source and target languages, respectively. The components of the length-and-word-based method (Moore, 2002) are bounded by the dashed frame." |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "(a) Selecting a word e i in e (b) For each pair (j, e i ): choosing a word v j to fill the position j P (v|e) = (l e + 1)" |
| }, |
| "TABREF1": { |
| "text": "Statistics of Test Corpus", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Statistics</td><td>iwslt15</td></tr><tr><td>Sentences (English)</td><td>129,327</td></tr><tr><td>Sentences (Vietnamese)</td><td>129,327</td></tr><tr><td>Average length (English)</td><td>19</td></tr><tr><td>Average length (Vietnamese)</td><td>18</td></tr><tr><td>Vocabulary Size (English)</td><td>46,669</td></tr><tr><td>Vocabulary Size (Vietnamese)</td><td>50,667</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Statistics of the IWSLT15 Corpus", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Setup</td><td colspan=\"3\">M-align Hun-align OurMethod</td></tr><tr><td>Reference</td><td>837</td><td>837</td><td>837</td></tr><tr><td>Results</td><td>580</td><td>1373</td><td>609</td></tr><tr><td>Correct</td><td>412</td><td>616</td><td>433</td></tr><tr><td>Precision</td><td>71.03%</td><td>44.87%</td><td>71.10%</td></tr><tr><td>Recall</td><td>49.22%</td><td>73.60%</td><td>51.73%</td></tr><tr><td colspan=\"2\">F-measure 58.15%</td><td>55.75%</td><td>59.89%</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "text": "Experimental results. (Reference, Results, Correct: number of sentence pairs in reference set, results from systems, and correct sentences, respectively.)", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "text": "Average OOV ratio.", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF9": { |
| "text": "Examples of English Word Similarity Model", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>OOV</td><td>Similar</td><td>Cosine</td></tr><tr><td>Words</td><td>Words</td><td>Similarity</td></tr><tr><td>qu\u00e1t (to shout)</td><td>m\u1eafng (to scold)</td><td>0.35</td></tr><tr><td>qu\u00e1t (to shout)</td><td>n\u1ea1t (to bully)</td><td>0.32</td></tr><tr><td>h\u1ee7y (to destroy)</td><td>ho\u1ea1i (to ruin)</td><td>0.50</td></tr><tr><td>h\u1ee7y (to destroy)</td><td>d\u1ee1 (to unload)</td><td>0.42</td></tr><tr><td>h\u1ee7y (to destroy)</td><td>ph\u00e1 (to demolish)</td><td>0.36</td></tr><tr><td>ghe (junk)</td><td>thuy\u1ec1n (boat)</td><td>0.64</td></tr><tr><td>ghe (junk)</td><td>xu\u1ed3ng (whaleboat)</td><td>0.61</td></tr><tr><td>ghe (junk)</td><td>\u0111\u00f2 (ferry)</td><td>0.56</td></tr><tr><td colspan=\"2\">ph\u1edf (noodle soup) ch\u00e1o (rice gruel)</td><td>0.67</td></tr><tr><td colspan=\"2\">ph\u1edf (noodle soup) c\u01a1m (rice)</td><td>0.65</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF10": { |
| "text": "Examples of Vietnamese Word Similarity Model. The italic words in brackets are corresponding English meaning which were translated by the authors.", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>Language</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF11": { |
| "text": "An example of English-Vietnamese OOV. The translations to English (italic) were conducted by the authors.", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>OOV</td><td>Similar</td><td>Cosine</td></tr><tr><td>Words</td><td>Words</td><td>Similarity</td></tr><tr><td colspan=\"2\">reunification independence</td><td>0.71</td></tr><tr><td colspan=\"2\">reunification unification</td><td>0.67</td></tr><tr><td colspan=\"2\">reunification peace</td><td>0.62</td></tr><tr><td>impressively</td><td>amazingly</td><td>0.74</td></tr><tr><td colspan=\"2\">impressively impressive</td><td>0.74</td></tr><tr><td>impressively</td><td>exquisitely</td><td>0.72</td></tr><tr><td>impressively</td><td>brilliantly</td><td>0.71</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF12": { |
| "text": "An example of similar word pairs trained on monolingual corpus", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF14": { |
| "text": "", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF15": { |
| "text": "Induced translation word pairs; the translations to English (italic) were conducted by the authors.", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |