| { |
| "paper_id": "Y11-1013", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:39:11.836854Z" |
| }, |
| "title": "A Graph-based Bilingual Corpus Selection Approach for SMT *", |
| "authors": [ |
| { |
| "first": "Wenhan", |
| "middle": [], |
| "last": "Chao", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "BeiHang University", |
| "location": { |
| "addrLine": "37# Xueyuan Rd" |
| } |
| }, |
| "email": "chaowenhan@buaa.edu.cn" |
| }, |
| { |
| "first": "Zhoujun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In statistical machine translation, the number of sentence pairs in the bilingual corpus is very important to the quality of translation. However, when the quantity reaches some extent, enlarging the corpus has less effect on the translation quality; whereas increasing greatly the time and space complexity to train the translation model, which hinders the development of statistical machine translation. In this paper, we propose a graph-based bilingual corpus selection approach, which makes use of the structural information of corpus to measure and update the importance of each sentence pair, and then selects a sentence pair with the highest importance each time. Our experiments in a Chinese-English translation task show that, selecting only 50% of the whole corpus by the graph-based selection approach as training set, we can obtain the near translation result with the one using the whole corpus.", |
| "pdf_parse": { |
| "paper_id": "Y11-1013", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In statistical machine translation, the number of sentence pairs in the bilingual corpus is very important to the quality of translation. However, when the quantity reaches some extent, enlarging the corpus has less effect on the translation quality; whereas increasing greatly the time and space complexity to train the translation model, which hinders the development of statistical machine translation. In this paper, we propose a graph-based bilingual corpus selection approach, which makes use of the structural information of corpus to measure and update the importance of each sentence pair, and then selects a sentence pair with the highest importance each time. Our experiments in a Chinese-English translation task show that, selecting only 50% of the whole corpus by the graph-based selection approach as training set, we can obtain the near translation result with the one using the whole corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In statistical machine translation, large scale of bilingual corpus is very important. In order to improve the quality of translation, there are two viewpoints about the use of corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "One way is to collect more and more bilingual corpus to improve the quality of translation model, such as extracting the sentence pairs from the comparable corpus (Smith et al., 2010; Uszkoreit et al., 2010) . However, some researchers found that, after the quantity of the sentence pairs (Han et al., 2009) in the corpus reaches some extent, adding more sentence pairs will not improve the quality of translation significantly. On the other hand, larger and larger corpus will consume more and more resources, which hinders the research progress of machine translation in some degree.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 183, |
| "text": "(Smith et al., 2010;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 184, |
| "end": 207, |
| "text": "Uszkoreit et al., 2010)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 289, |
| "end": 307, |
| "text": "(Han et al., 2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This type of approaches assumes the sentence pairs in the corpus are independent each other, not considering the relationship between sentence pairs and their effect on the translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The other view is to mine the potential of training corpus through corpus selection and optimization to improve the quality of the translation model. And it also includes three ways: the first one is to select and optimize the training corpus to adapt to the test set (Lu et al., 2007) or the domain (Yasuda et al,. 2008) ; the second one is to select the sentence pairs with high quality as training corpus (Chen et al., 2006; Han et al., 2009) , in which the quality is measured through the features of the sentence pair itself, such as the number of words that can be translated each other in the sentence pair; the third one is to measure and sort the sentence pairs based on the number of unknown n-grams in the sentences, and then select the sentence pair with the highest scores each time (Eck et al. 2005) .", |
| "cite_spans": [ |
| { |
| "start": 268, |
| "end": 285, |
| "text": "(Lu et al., 2007)", |
| "ref_id": null |
| }, |
| { |
| "start": 300, |
| "end": 321, |
| "text": "(Yasuda et al,. 2008)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 408, |
| "end": 427, |
| "text": "(Chen et al., 2006;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 428, |
| "end": 445, |
| "text": "Han et al., 2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 796, |
| "end": 813, |
| "text": "(Eck et al. 2005)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This type of approaches considers the quality difference between the sentence pairs in the corpus. However, it still views the sentence pairs as independent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we assume the quality of the translation model is related to the coverage and quality of the selected corpus, and expect to select the sentence pairs with high quality as possible when maximizing the coverage of the selected corpus. And we propose a graph-based bilingual corpus selection approach, which makes use of the structural information of corpus to measure and update the importance of each sentence pair, and then selects a sentence pair with the highest importance each time. The underlying principle is that we should select a sentence pair each time to maximize the coverage and quality of the selected sentence pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the rest of this paper, we first introduce how to measure the importance of each sentence pair based on the bilingual graph in Section 2, and then describe the framework of graph-based bilingual corpus selection approach in Section 3, emphasizing on corpus selection algorithm. Section 4 shows the results of the experiments, and we conclude in Section 5 and 6.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Assume that the bilingual corpus BC is composed of a collection of sentence pairs <f,c>, which consists of two sentences that come from two languages F and C respectively and can be translated each other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Terminology and Notation Monolingual Sub-Graph", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The monolingual sentences of each language in the collection of sentence pairs will construct an undirected graph, called Monolingual Sub-Graph. The two graphs are represented as", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Terminology and Notation Monolingual Sub-Graph", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "G f =<V f ,E f > and G c =<V c ,E c > respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Terminology and Notation Monolingual Sub-Graph", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where G_f = <V_f, E_f> is the undirected graph constructed from the sentences of language F in the corpus: a node f ∈ V_f represents a sentence of language F, and an edge (f_1, f_2) ∈ E_f indicates that the similarity between the two sentences f_1 and f_2 exceeds the threshold σ_f, i.e. sim(f_1, f_2) ≥ σ_f. Similarly, G_c = <V_c, E_c> is the undirected graph constructed from the sentences of language C: a node c ∈ V_c represents a sentence of language C, and an edge (c_1, c_2) ∈ E_c indicates that sim(c_1, c_2) ≥ σ_c.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Terminology and Notation Monolingual Sub-Graph", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The bilingual graph is an undirected graph constructed from the sentence pairs in the bilingual corpus BC, represented as G = <V_{f,c}, E_{f,c}>, where a node v ∈ V_{f,c} represents a sentence pair in the corpus. For each sentence pair <f,c> ∈ V_{f,c}, we have f ∈ V_f and c ∈ V_c. An edge (v_1, v_2) ∈ E_{f,c} indicates that the similarity between the two sentence pairs v_1 and v_2 exceeds the threshold σ_{f,c}, i.e. sim(v_1, v_2) ≥ σ_{f,c}; the similarity between two sentence pairs is calculated from the similarities between the monolingual sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilingual Graph", |
| "sec_num": null |
| }, |
| { |
| "text": "If the node v does not connected to any other node, then we call the node v as an isolated sentence pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bilingual Graph", |
| "sec_num": null |
| }, |
| { |
| "text": "Given the set of selected sentence pairs S, the quantity of information of a sentence pair is the quantity of the novel information it can provide, i.e. it represents the novelty of the sentence pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantity of Information (QI)", |
| "sec_num": null |
| }, |
| { |
| "text": "In the beginning, S = ∅, and the quantity of information of each sentence pair takes its largest value. As S grows, the quantity of information of each unselected sentence pair is updated dynamically by removing the information it shares with S.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantity of Information (QI)", |
| "sec_num": null |
| }, |
| { |
| "text": "The quantity of information for the whole corpus will be the sum of quantity of information for all sentences pairs in the corpus, and it represents the coverage of the selected corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Quantity of Information (QI)", |
| "sec_num": null |
| }, |
| { |
| "text": "For each unselected sentence pair, the coverage is the quantity of redundancy information between the sentence pair and all of the other unselected sentence pairs in the bilingual corpus. And it is the sum of redundancy information between the sentence pair and each unselected sentence pair in the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage of Sentence Pair (CSP)", |
| "sec_num": null |
| }, |
| { |
| "text": "In the bilingual graph, the coverage for each sentence pair only considers the sentence pairs that connect to it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage of Sentence Pair (CSP)", |
| "sec_num": null |
| }, |
| { |
| "text": "The underlying principle is that the more the number of the similar sentence pairs with high quality is, the better of the quality of the sentence pair is. Thus, CSP represents the quality of the sentence pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coverage of Sentence Pair (CSP)", |
| "sec_num": null |
| }, |
| { |
| "text": "The importance of sentence pair consists of two parts: the quantity of information (QI) and the coverage (CSP) of the sentence pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance of Sentence Pair (ISP)", |
| "sec_num": null |
| }, |
| { |
| "text": "The underlying assumption is that if the quantity of information and the coverage for the sentence pair is high, then importance of the sentence pair is high.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance of Sentence Pair (ISP)", |
| "sec_num": null |
| }, |
| { |
| "text": "After the bilingual graph has been constructed, our goal is to compute the importance for each sentence pair via the structural information of the corpus, and then make the sentence selection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The importance of sentence pair (ISP) consists of two parts: the quantity of information QI and the coverage CSP. Given the bilingual corpus BC and the set of selected sentence pair S, the QI for a sentence pair is equal to the initial quantity of information of , represented as , subtracting the redundancy information contained in the S according to :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "a v a v 0 QI a v ) , ( ) ( ) , ( 0 a a a v S RI v QI S v QI \u2212 =", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where QI(v_a, S) is the quantity of information of v_a given S; QI_0(v_a) is the initial QI of v_a, i.e. its QI when S = ∅; and RI(S, v_a) is the redundant information contained in S with respect to v_a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The importance of sentence pair ISP is the sum of the quantity of information QI and the coverage CSP.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ") , , ( ) , ( ) , ( BC S v SRI S v QI S v ISP a a a + = (2) \u2211 \u2211 \u2260 \u2208 \u2260 \u2208 \u22c5 = = b a b b a b v v BC v b b a v v BC v b a a S v QI v v sim S v v RI BC S v SRI ) , ( ) , ( ) , , ( ) , , (", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where ISP(v_a, S) is the importance of v_a given S; SRI(v_a, S, BC) is the coverage of v_a in the corpus BC given S and BC; RI(v_a, v_b, S) is the redundant information contained in v_a with respect to v_b given S, and it equals the product of the similarity sim(v_a, v_b) and the QI of v_b.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Given the bilingual graph G, we assume that the redundant information contained in S with respect to v_a is relevant only to the sentence pairs in S that are connected to v_a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "So we rewrite RI(S, v_a) as follows: RI_G(S, v_a) = sim(∪_{v_b ∈ S} v_b, v_a) · QI_0(v_a) (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where sim(∪_{v_b ∈ S} v_b, v_a) is the similarity between v_a and the union of all sentence pairs in S that are similar to v_a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly, we can rewrite and as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") , (", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ") , , ( S v v RI b a ) , , ( BC S v SRI a \u2211 \u2211 \u2208 \u2208 \u22c5 = = G b a G b a E v v b G a b E v v b a G a G S v QI v v sim S v v RI BC S v SRI ) , ( ) , ( ) , ( ) , ( ) , , ( ) , , (", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": ") , (", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus, our importance equation will be:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": ") , (", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ")] , ( 1 )[ ( ) , ( 0 a b S v a a G v v sim v QI S v QI b \u2208 \u222a \u2212 = (6) \u2211 \u2208 \u22c5 + = G b a E v v b G a b a G a G S v QI v v sim S v QI S v ISP ) , ( ) , ( ) , ( ) , ( ) , (", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": ") , (", |
| "sec_num": null |
| }, |
| { |
| "text": "The underlying principle is: given the set of selected sentence pairs S, the importance of a sentence pair is the sum of its own quantity of information and the redundant information between it and all the unselected sentence pairs it connects to.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Importance Equation", |
| "sec_num": null |
| }, |
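Equations (6) and (7) can be sketched in code. The following is a hypothetical Python rendering, not the authors' implementation: it assumes QI_0 = 1 for every pair and approximates the similarity between v_a and the union of its selected neighbours by the maximum pairwise similarity, since the text leaves the union similarity unspecified.

```python
# Illustrative sketch of Eqs. (6)-(7); QI_0 = 1 and the max-over-selected
# approximation of the union similarity are our assumptions.

def qi(node, selected, sim):
    """Eq. (6): quantity of information of `node` given the selected set S."""
    overlap = max((sim.get(frozenset((node, s)), 0.0) for s in selected),
                  default=0.0)
    return 1.0 * (1.0 - overlap)          # QI_0 = 1

def isp(node, selected, graph, sim):
    """Eq. (7): QI plus similarity-weighted QI of unselected neighbours."""
    score = qi(node, selected, sim)
    for nb in graph.get(node, ()):
        if nb not in selected:
            score += sim.get(frozenset((node, nb)), 0.0) * qi(nb, selected, sim)
    return score

# Toy bilingual graph: three sentence pairs, v1 similar to v2 and v3.
graph = {"v1": {"v2", "v3"}, "v2": {"v1"}, "v3": {"v1"}}
sim = {frozenset(("v1", "v2")): 0.6, frozenset(("v1", "v3")): 0.4}

print(round(isp("v1", set(), graph, sim), 6))   # 2.0 = 1 + 0.6 + 0.4
print(round(isp("v2", {"v1"}, graph, sim), 6))  # 0.4 = 1 - 0.6; v1 is selected
```

Note how selecting v1 immediately deflates v2's importance: exactly the redundancy-removal behaviour the equations describe.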
| { |
| "text": "In order to implement the graph-based bilingual corpus selection, the graph-based bilingual corpus selection consists of three steps as shown in ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Framework", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In order to construct the monolingual graph, we need to measure the similarities between the sentences. Considering the corpus is very large, we will take a very simple approach, which counts the number of co-occur words and obtain the similarity as follows: The similarity threshold will affect the structure of the monolingual graph, and then affect the computing of the importance of the sentence pair. If f \u03c3 or c \u03c3 is too large, it will decrease the edges in the graph, and make the isolated sentence pair increasing; on the other hand, if f \u03c3 or c \u03c3 is too small, there will be too many edges, weakening the ability of distinguishing the importance of sentence pair using the structural information. Thus, we should try to find a balance between avoiding too many isolated sentence pairs and avoiding too many connected sentence pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measure the Similarity between Sentences", |
| "sec_num": null |
| }, |
| { |
| "text": "| | | | | | 2 ) , (", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Measure the Similarity between Sentences", |
| "sec_num": null |
| }, |
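A minimal sketch of the co-occurrence similarity of Eq. (8), read as a Dice coefficient (twice the shared-word count over the total length); whitespace tokenisation and the function name are our assumptions:

```python
# Hedged sketch of Eq. (8): 2 * |shared words| / (|s1| + |s2|).
# Whitespace tokenisation is an assumption; the paper only says it
# "counts the number of co-occurring words".

def sentence_sim(s1: str, s2: str) -> float:
    w1, w2 = s1.split(), s2.split()
    shared = len(set(w1) & set(w2))
    return 2.0 * shared / (len(w1) + len(w2)) if w1 or w2 else 0.0

print(round(sentence_sim("the cat sat", "the cat ran"), 3))  # 0.667
```

Two shared words out of six total give 2·2/6 ≈ 0.667; identical sentences score 1.0 and disjoint ones 0.0, matching the threshold semantics above.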
| { |
| "text": "After constructing the two monolingual graphs respectively, we can construct the bilingual graph. We obtain the connection between two sentence pairs in the following way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construct Bilingual Graph", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "For any two sentence pairs v_a = <f_a, c_a> and v_b = <f_b, c_b>, if f_a connects to f_b and c_a connects to c_b in the two monolingual graphs respectively, then we say that v_a connects to v_b, i.e. (v_a, v_b) ∈ E_{f,c}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construct Bilingual Graph", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The similarity between two sentence pairs is the average of the two similarities between the monolingual sentences:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Construct Bilingual Graph", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": ")] , ( ) , ( [ 2 1 ) , ( b a b a b a c c sim f f sim v v sim + \u00d7 =", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Construct Bilingual Graph", |
| "sec_num": "3.3" |
| }, |
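The edge rule of Section 3.3 and the pair similarity of Eq. (9) can be sketched as follows; this is an illustrative reading, not the authors' code, and the data layout (pairs as (f, c) tuples, edges as frozensets) is assumed:

```python
# Hedged sketch of bilingual-graph construction: an edge (v_a, v_b) exists
# iff BOTH monolingual edges exist, and the pair similarity is the average
# of the two monolingual similarities (Eq. 9). Names are illustrative.

def pair_sim(sim_f: float, sim_c: float) -> float:
    """Eq. (9): average of source-side and target-side similarities."""
    return 0.5 * (sim_f + sim_c)

def connected(a, b, edges_f, edges_c):
    """Edge rule: v_a and v_b connect iff f_a-f_b and c_a-c_b both connect."""
    return frozenset((a[0], b[0])) in edges_f and frozenset((a[1], b[1])) in edges_c

va, vb = ("f1", "c1"), ("f2", "c2")
edges_f = {frozenset(("f1", "f2"))}
edges_c = {frozenset(("c1", "c2"))}
print(connected(va, vb, edges_f, edges_c))  # True
print(pair_sim(0.5, 0.75))                  # 0.625
```

The conjunctive rule means a pair joins the bilingual graph only when both its sides are similar, which keeps the bilingual graph sparser than either monolingual sub-graph.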
| { |
| "text": "After constructing the bilingual graph, we can now make the corpus selection as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "1. Initialize QI_0(v_a) = 1 for each sentence pair; 2. Update the importance of each unselected sentence pair by Eqs. (6) and (7); 3. Sort the sentence pairs by importance; 4. Select the sentence pair with the highest importance and add it to the list S; 5. Repeat steps 2-4 until all sentence pairs in the corpus have been selected; 6. Output the sentence pairs in the list S in order.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "When selecting a sentence pair each time, it need updated all the other sentence pairs that the selected sentence pair can reach through a connected path, and then sort all of the sentence pairs again. So the complexity of the selecting algorithm is very high.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We note when selecting a sentence pair, only the sentence pairs that the selected sentence connects will change their quantities of information, so we divide the updating of the importance into two steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 1: update the quantity of information using the following iterative way:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "When t = k = 0, QI_G(v_a, ∅) = 1, i.e. QI_0(v_a) = 1 for all v_a;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "When t = k+1, assuming the newly selected sentence pair is s_n, only the quantity of information of each sentence pair v_a that connects to s_n is updated: QI_G(v_a, S_k ∪ {s_n}) = QI_G(v_a, S_k) × [1 − sim(v_a, s_n)] (10). This equation computes the quantity of information of v_a approximately.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Note that we set the initial quantity of information of each sentence pair as 1 here, that is, all of the sentence pairs have the same QI in the beginning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph-based Bilingual Corpus Selection", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Step 2: in the selecting step, re-calculate the importance of the sentence pair with the highest importance by equation 7. If the importance has been changed, then sort it and test the next sentence pair with the highest importance; otherwise select the sentence pair, execute Step 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QI", |
| "sec_num": null |
| }, |
| { |
| "text": "Taking the above two steps, it avoids updating and sorting the importance of each sentence pair that the selected sentence pair can reach through a connected path.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QI", |
| "sec_num": null |
| }, |
| { |
| "text": "Thus the pseudo code of the final algorithm is shown as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QI", |
| "sec_num": null |
| }, |
| { |
| "text": "ALGORITHM: Graph_Based_BiCorpus_Selection. INPUT: bilingual graph G of corpus BC. OUTPUT: selected sentence list S. 1: S = <>; 2: FOR each Va in G DO 3: calculate the initial ISP_Va by equations (6) and (7); 4: insert Va into the list L in descending order; 5: WHILE L is not empty DO 6: Va = L.RemoveHead(); 7: ISP_new = CalcISP(Va); // by equation (7)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QI", |
| "sec_num": null |
| }, |
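The pseudo code above is truncated, so the following is a hypothetical Python rendering of Graph_Based_BiCorpus_Selection combining Step 2's lazy head re-check with Step 1's Eq. (10) neighbour update; the max-heap standing in for the sorted list L and QI_0 = 1 are our assumptions:

```python
# Hedged sketch, not the authors' implementation. A stale head score is
# recomputed and pushed back (Step 2); a committed selection discounts only
# the QI of its direct neighbours via Eq. (10) (Step 1).
import heapq

def select(graph, sim):
    qi = {v: 1.0 for v in graph}                  # QI_0(v_a) = 1
    selected, order = set(), []

    def isp(v):                                   # Eq. (7)
        return qi[v] + sum(sim[frozenset((v, u))] * qi[u]
                           for u in graph[v] if u not in selected)

    heap = [(-isp(v), v) for v in graph]          # max-heap via negated scores
    heapq.heapify(heap)
    while heap:
        neg, v = heapq.heappop(heap)
        if v in selected:
            continue
        cur = isp(v)                              # Step 2: recompute the head
        if cur < -neg - 1e-12:                    # stale score: re-insert
            heapq.heappush(heap, (-cur, v))
            continue
        selected.add(v)                           # commit the selection
        order.append(v)
        for u in graph[v]:                        # Step 1: Eq. (10) update
            if u not in selected:
                qi[u] *= 1.0 - sim[frozenset((v, u))]
    return order

graph = {"v1": {"v2", "v3"}, "v2": {"v1"}, "v3": {"v1"}}
sim = {frozenset(("v1", "v2")): 0.6, frozenset(("v1", "v3")): 0.4}
print(select(graph, sim))  # ['v1', 'v3', 'v2']
```

On the toy graph, v1 (the hub) is picked first; its neighbours' QIs then shrink by their similarity to v1, so the less redundant v3 (similarity 0.4) outranks v2 (similarity 0.6), which is the intended redundancy-avoiding behaviour.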
| { |
| "text": "We choose the training set in the Chinese-English news translation task in CWMT2009 as our bilingual corpus. After removing the sentence pairs in which one of the lengths of the monolingual sentences is greater than 50, we obtain a bilingual corpus containing about 2M sentence pairs, represented as BC.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In order to tune the translation model, we take one of the development sets in CWMT2009 as our development set, which is the test set in the Chinese-English news translation task in SSMT2007.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We also choose another develop set in CWMT2009 as our test set, the statistics of our data sets are shown in Table 1 . ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 109, |
| "end": 116, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In order to analyze the efficiency of our graph-based corpus selection algorithm, we implement several different corpus selection algorithms and compare them with our selection algorithm. In each experiment, we select a subset of the whole bilingual corpus by specifying the ratio via each selection algorithm, and take it as training set to train the translation model, and then compare the translation quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We take the state-of-the-art statistical translation system Moses 1 as the decoder, and BLEU as the evaluation metric.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Our experiments are designed as follows: Method 1: Baseline I (Random Selection)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Take the whole of the BC as training corpus to train the translation model;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Random select specific ratios (10%, 30%, 50%, 60%, 70%, 80%) of the sentence pairs in the BC as training corpora to train the translation models respectively; Method 2: Baseline II ( Unseen gram-based Selection)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Select the sentence pair with the highest weight each time, and we calculate the weight of the sentence using the weight 1,2 (Eck et al. 2005) , which considered the length of the sentence and bi-grams, and generated the best results as reported in (Eck et al. 2005 ). \u2022 Select specific ratios (10%, 30%, 50%, 60%, 70%, 80%) of the sentence pairs in the BC as training corpora to train the translation models respectively; Method 3: Graph-based Corpus Selection (Considering the QI Only)", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 144, |
| "text": "(Eck et al. 2005)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 251, |
| "end": 267, |
| "text": "(Eck et al. 2005", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
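The unseen-gram baseline (Method 2) can be sketched as a greedy loop: repeatedly pick the sentence whose source side contributes the most not-yet-seen uni/bi-grams per word. This is an illustrative sketch in the spirit of Eck et al. (2005), not their exact weight 1,2 formula; the function name and scoring details are assumptions.

```python
def unseen_ngram_select(sentences, ratio):
    """Greedy sketch of unseen n-gram-based selection (assumed scoring,
    not Eck et al.'s exact weight 1,2): a sentence's weight is the
    number of its uni/bi-grams not yet covered, divided by its length."""
    seen = set()
    remaining = list(range(len(sentences)))
    selected = []
    target = int(len(sentences) * ratio)
    while len(selected) < target:
        def weight(i):
            toks = sentences[i].split()
            grams = set(toks) | {(a, b) for a, b in zip(toks, toks[1:])}
            return len(grams - seen) / max(len(toks), 1)
        best = max(remaining, key=weight)          # highest weight first
        toks = sentences[best].split()
        seen |= set(toks) | {(a, b) for a, b in zip(toks, toks[1:])}
        selected.append(best)
        remaining.remove(best)
    return selected
```

Once a sentence is selected, its n-grams stop counting as unseen, so near-duplicates of already-selected sentences drop to the bottom of the ranking, which is exactly how this baseline drives down the OOV counts reported later in Table 3.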
| { |
| "text": "\u2022 Take the graph-based Corpus Selection algorithm, but it only consider the quantity of information of the sentence pair, i.e. the importance is equal to the quantity of information. So, the algorithm need not update the coverage. \u2022 Select specific ratios (10%, 30%, 50%, 60%, 70%, 80%) of the sentence pairs in the BC as training corpora to train the translation models respectively; Method 4: Graph-based Corpus Selection (Considering the QI and CSP)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "\u2022 Take the graph-based Corpus Selection algorithm, here the importance is the sum of the quantity of information and the coverage; \u2022 Select specific ratios (10%, 30%, 50%, 60%, 70%, 80%) of the sentence pairs in the BC as training corpora to train the translation models respectively;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
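Method 4's greedy loop can be sketched as follows. The paper's exact QI and coverage formulas are not reproduced in this chunk, so this sketch assumes precomputed QI scores and an adjacency map of similar sentence pairs, and models the coverage update as a multiplicative penalty on a selected pair's unselected neighbors (an assumption).

```python
def graph_select(neighbors, qi, ratio, penalty=0.5):
    """Illustrative sketch of the graph-based greedy selection:
    importance(i) = qi[i] + coverage[i]; after selecting i, the
    coverage of its unselected graph neighbors (similar sentence
    pairs) is reduced, so redundant pairs become less attractive.
    The additive combination and the penalty factor are assumptions."""
    n = len(qi)
    coverage = [1.0] * n                     # assumed initial coverage
    selected, remaining = [], set(range(n))
    target = int(n * ratio)
    while len(selected) < target:
        best = max(remaining, key=lambda i: qi[i] + coverage[i])
        selected.append(best)
        remaining.discard(best)
        for j in neighbors.get(best, ()):    # similar sentence pairs
            if j in remaining:
                coverage[j] *= penalty
    return selected
```

The dynamic coverage update is what distinguishes Method 4 from Method 3: in Method 3 the second term is dropped, so a pair's score never changes once the graph is built.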
| { |
| "text": "Methods 3 and 4 need to construct the monolingual graphs and bilingual graph. We set the similarity threshold as 0.4, and the statistics of the graphs are shown in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 171, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
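A minimal sketch of the thresholded graph construction, assuming a Dice-style word-overlap similarity (one plausible instantiation of the co-occurring-words measure the paper defines from shared words and sentence lengths; the exact formula is not reproduced here):

```python
from itertools import combinations

def build_similarity_graph(sentences, threshold=0.4):
    """Build an undirected similarity graph over sentences.
    Similarity is assumed Dice-style: 2 * |shared words| divided by
    the sum of the two sentence lengths; an edge is added when it
    meets the threshold. Also returns the isolated nodes."""
    toks = [s.split() for s in sentences]
    edges = {i: set() for i in range(len(sentences))}
    for i, j in combinations(range(len(sentences)), 2):
        a, b = set(toks[i]), set(toks[j])
        sim = 2 * len(a & b) / (len(toks[i]) + len(toks[j]))
        if sim >= threshold:
            edges[i].add(j)
            edges[j].add(i)
    isolated = [i for i, nb in edges.items() if not nb]
    return edges, isolated
```

Note this naive all-pairs loop is quadratic; at the paper's scale (about 2.4M sentence pairs) a real implementation would need an inverted index or locality-sensitive hashing to find candidate neighbors.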
| { |
| "text": "In table 2, the column 1 represents the three graphs (two monolingual graphs and a bilingual graph), the column 2 is the amount of edges in each graph, the column 3 is the average edge for each sentence or sentence pair in each graph, and the column 4 is the amount of the isolated nodes, which have no similar sentences or sentence pairs in the graph. Note the amount of the points is 2378944. From the table we can see that about 36.3% of sentence pairs in the bilingual graph are isolated sentence pairs. So we should adjust the thresholds to avoid so many isolated sentence pairs in the future.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "After selecting the subsets of the corpus with specific ratios, the statistics of them are shown as Table 3 . 14.4 148", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 100, |
| "end": 107, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In table 3, the column 1 represents the specific ratios, the columns 2 to 5 represents the four selection methods, and each of them consists of two sub-columns, average length of the source sentences (Avg.) and the number of out of vocabulary words (OOV).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "From the table, we can see that when using the random selection, the average lengths of the sentences are all near to the average sentence lengths of the whole corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Both in method 2, 3 and 4, the numbers of OOV words decrease very quickly, and it can reflect the coverage of the selected corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Finally, we obtained the translation results shown in Table 4 . In table 4, the column 1 represents the specific ratios, the columns 2 to 5 represents the BLEU% scores for four selection methods.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 54, |
| "end": 61, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The results from the table 4 show, given the specific ratios of the training corpus, using unseen gram-based selection (Method 2) and graph-based corpus selection methods (Method 3 and 4) will obtain better results than using the random selection method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The results obtained by Method 2 and Method 3 are similar, since both of them consider the QI when given the set of selected sentence pairs S only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "However, when comparing the Method 2 and 3 with Method 4, we find Method 2 and 3 obtained better results when selecting only 10% and 30% of the corpus, and after increasing the ratios, method 4 obtains the better results, especially it obtains the best results when selecting only 80% of the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We conclude that the quality of the translation model depends on both of the quality and coverage of the bilingual corpus. In the beginning, Method 2 and 3 obtain better coverage than Method 4 (see the number of the OOV in table 3); however, as enlarging the corpus, Method 4 can get similar coverage with Method 2 and 3, but it can obtain better sentence pairs, since it selects the sentence pair with best importance each time. Thus, it generates better results later.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "And we can also see from the table 4 that, when using the method 4 to make corpus selection, selecting only 50% of the bilingual corpus will generate near result with selecting 60%~100% of the whole corpus. That is, when using more than 50% of the whole corpus, it does not obtain significant improvement. This shows the efficiency of our graph-based corpus selection approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Especially, when selecting 80% of the corpus, it will get the best results, overcoming the result using the whole corpus. This shows that there may be noisy data in the corpus, which decreases the quality of the translation, and Method 4 could filter the noisy sentence pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In statistical machine translation, there are three ways to make effective use of the bilingual corpus. One is assuming the sentence pair has different effects on the translation, so it need estimate the quality of each sentence pair, and sort them. Chen et al. (2006) provided a quality sorting model for the sentence pair, which estimated the quality of each sentence pair via many features, such as language model, sentence length, word alignment etc. Their experiments showed that, when using the same number of sentence pairs as training corpus, selecting the sentence pairs with high quality would improve the quality of translation. Han et al. (2009) provided another approach. They divided the sentence pairs in the corpus into two types: literal translation and free translation, the first was low-level word-word translation, and the latter was high-level translation. They assumed that SMT could be viewed as low-level translation system, which should be supervised by the sentence pairs with literal translation. So they provided word-match metric and grammar-match metric to find the sentence pairs with literal translation, and selected them as training corpus. Their experiments showed that, when taking the sentence pairs with literal translation as baseline, adding the sentence pairs with free translation would not improve the translation quality all the while.", |
| "cite_spans": [ |
| { |
| "start": 250, |
| "end": 268, |
| "text": "Chen et al. (2006)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 640, |
| "end": 657, |
| "text": "Han et al. (2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": null |
| }, |
| { |
| "text": "These approaches considered the difference between the qualities of sentence pairs. However, they only used the features of each sentence pair itself, and the quality would not be updated as the selection process.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": null |
| }, |
| { |
| "text": "Our approach measures the importance of each sentence pair, which only uses the structural information, i.e. the relationship between sentence pairs, and the importance will be updated dynamically during the selection.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": null |
| }, |
| { |
| "text": "The other way is to select and optimize the corpus according to the test set. Lu et al. (2007) proposed the corpus selection and optimization approaches based on the information retrieval methods. The first one retrieved the similar sentence pairs in the corpus according to the test set, and took them as the training corpus; the latter increased the occur number for each similar sentence pair, so that the importance of the similar sentence pair in the translation model will be enlarged. These approaches made the translation model more adaptive to the test set. Their experiments showed the improvement in the translation quality.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 94, |
| "text": "Lu et al. (2007)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": null |
| }, |
| { |
| "text": "The graph-based selection approach in this paper does not consider the test set at all; however, it just considers the corpus itself, especially the structural information within the corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": null |
| }, |
| { |
| "text": "Eck et al. (2005) provided a simple way to sort and select the sentence pairs based on the number of unseen n-grams in the selected data set. The approach considered only the quantity of information between the selected data set and the unselected sentence pair.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, we proposed a graph-based bilingual corpus selection framework, which measures and updates the importance of each sentence pair based on the structural information of the bilingual corpus, and then selects the sentence pair with the highest importance each time, until it obtains the subset of the corpus with specific ratio. Experiments showed that, through selecting only 50% of the corpus, we can obtain near translation quality with the whole corpus using the graph-based selection approach. We can even obtain better results than the whole corpus when selecting 80% of the corpus, which suggests that the corpus may contain noisy data and decrease the quality of the translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Thus, through the graph-based corpus selection approach, we can select only a part of corpus to train the translation model, which will reduce the time and space complexity largely when building machine translation system, while not decreasing the translation quality significantly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "However, since our approach is just a basic framework, we will improve the following issues in the future:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Considering more effective approaches to build bilingual graph; Improving the graph-based selection algorithm, such as considering the difference between the sentence pairs' ; 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Combining the other features with the structural feature to measure the importance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QI", |
| "sec_num": null |
| }, |
| { |
| "text": "25th Pacific Asia Conference on Language, Information and Computation, pages 120-129", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.statmt.org", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Low cost portability for statistical machine translation based on n-gram coverage", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "D" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [ |
| "D" |
| ], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "L" |
| ], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Eck", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Conference Proceedings: the tenth Machine Translation", |
| "volume": "20", |
| "issue": "", |
| "pages": "227--234", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, Y.D., X.D. Shi and C.L. Zhou. 2006. Research on Filtering Parallel Corpus: A Ranking Model. Journal of Chinese Information Processing, Vol.20 Supplement, pp.66-70, 2006. Eck, M., S. Vogel and A. Waibel. 2005. Low cost portability for statistical machine translation based on n-gram coverage. Conference Proceedings: the tenth Machine Translation Summit (MT Summit X) pp.227-234, Phuket, Thailand, September 13-15,2005.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Train the machine with what it can learn -corpus selection for SMT", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [ |
| "W" |
| ], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "Z" |
| ], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "J" |
| ], |
| "last": "Zhao", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2nd Workshop on Building and Using Comparable Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "27--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Han, X.W., H.Z. Li and T.J. Zhao. 2009. Train the machine with what it can learn -corpus selection for SMT. Proceedings of the 2nd Workshop on Building and Using Comparable Corpora, Suntec, Singapore, 6 August 2009; pp.27-33, 2009.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Improving statistical machine translation performance by training data selection and optimization", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "J" |
| ], |
| "last": "L\u00fc", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "343--350", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L\u00fc, Y.J., J. Huang and Q. Liu. 2007. Improving statistical machine translation performance by training data selection and optimization. EMNLP-CoNLL-2007: Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, June 28-30, 2007, Prague, Czech Republic; pp. 343-350, 2007.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Intelligent Selection of Language Model Training Data", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "C" |
| ], |
| "last": "Moore", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the ACL 2010 Conference Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "11--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Moore, R.C. and W. Lewis. 2010. Intelligent Selection of Language Model Training Data. Proceedings of the ACL 2010 Conference Short Papers, pp. 220-224, Uppsala, Sweden, 11- 16 July 2010.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment. Human Language Technologies: The", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Quirk", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Smith, J.R., C. Quirk and K. Toutanova. 2010. Extracting Parallel Sentences from Comparable Corpora using Document Level Alignment. Human Language Technologies: The 2010", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Annual Conference of the North American Chapter of the ACL", |
| "authors": [], |
| "year": 2010, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "403--411", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Annual Conference of the North American Chapter of the ACL, pp. 403-411, Los Angeles, California, June 2010.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Method of selecting training data to build a compact and efficient translation model", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "K J Q" |
| ], |
| "last": "Yasuda", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Third International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "655--660", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yasuda. K.J., R.Q. Zhang, H. Yamamoto and E. Sumita. 2008. Method of selecting training data to build a compact and efficient translation model. IJCNLP 2008: Third International Joint Conference on Natural Language Processing, January 7-12, 2008, Hyderabad, India; pp.655-660.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Method of selecting training sets to build compact and efficient language model. MT Summit XI Workshop: Using corpora for natural language generation: language generation and machine translation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "K J" |
| ], |
| "last": "Yasuda", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "31--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yasuda. K.J., H. Yamamoto and E. Sumita. 2007. Method of selecting training sets to build compact and efficient language model. MT Summit XI Workshop: Using corpora for natural language generation: language generation and machine translation, 11 September 2007, Copenhagen, Denmark; pp.31-37, 2007.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Large Scale Parallel Document Mining for Machine Translation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "M" |
| ], |
| "last": "Ponte", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "C" |
| ], |
| "last": "Popat", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dubiner", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1101--1109", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Uszkoreit, J., J.M. Ponte, A.C. Popat, and M. Dubiner. 2010. Large Scale Parallel Document Mining for Machine Translation. Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pp. 1101-1109, Beijing, August 2010.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "text": "The graph-based corpus selection framework.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "represents the number of the co-occur words between the sentences and , | | and represent the number of words in sentences and respectively.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "text": "The pseudo code of graph-based corpus selection algorithm.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td>Chinese</td><td>English</td></tr><tr><td>Train.</td><td>Sentences</td><td colspan=\"2\">2,378,944</td></tr><tr><td>corpus</td><td>Words</td><td>34,362,755</td><td>34,921,267</td></tr><tr><td/><td>Vocabulary</td><td>193309</td><td>307095</td></tr><tr><td>Dev.</td><td>Sentences</td><td/><td>1002</td></tr><tr><td>Set</td><td>Words</td><td>26,285</td><td/></tr><tr><td>Test</td><td>Sentences</td><td/><td>1006</td></tr><tr><td>Set</td><td>Words</td><td>27,477</td><td/></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td/><td>Amount of</td><td>Avg. Edge</td><td>Amount of Isolated Nodes</td></tr><tr><td/><td>Edges</td><td/><td/></tr><tr><td>Mono. Graph (Chinese)</td><td>77,135,825</td><td>64.8</td><td>445,684 (18.7%)</td></tr><tr><td>Mono. Graph (English)</td><td>208,614,318</td><td>175.4</td><td>366,690 (15.4%)</td></tr><tr><td>Bilingual Graph</td><td>19,731,976</td><td>16.6</td><td>864,281 (36.3%)</td></tr></table>", |
| "text": "The statistics of the graphs ( N=2378944).", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Ratios</td><td colspan=\"2\">Method 1</td><td colspan=\"2\">Method 2</td><td colspan=\"2\">Method 3</td><td colspan=\"2\">Method 4</td></tr><tr><td/><td>Avg.</td><td>OOV</td><td>Avg.</td><td>OOV</td><td>Avg.</td><td>OOV</td><td>Avg.</td><td>OOV</td></tr><tr><td>10%</td><td>14.4</td><td>389</td><td>11.9</td><td>191</td><td>15.2</td><td>361</td><td>13.5</td><td>359</td></tr><tr><td>30%</td><td>14.4</td><td>228</td><td>14.0</td><td>150</td><td>15.6</td><td>189</td><td>16.2</td><td>261</td></tr><tr><td>50%</td><td>14.4</td><td>186</td><td>14.8</td><td>148</td><td>15.8</td><td>158</td><td>16.3</td><td>156</td></tr><tr><td>60%</td><td>14.4</td><td>176</td><td>15.4</td><td>148</td><td>15.7</td><td>151</td><td>15.6</td><td>151</td></tr><tr><td>70%</td><td>14.4</td><td>165</td><td>14.6</td><td>148</td><td>15.1</td><td>150</td><td>15.2</td><td>149</td></tr><tr><td>80%</td><td>14.4</td><td>165</td><td>14.2</td><td>148</td><td>14.6</td><td>148</td><td>14.8</td><td>148</td></tr><tr><td>100%</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Ratios</td><td>Method 1</td><td>Method 2</td><td>Method 3</td><td>Method 4</td></tr><tr><td>10%</td><td>18.84</td><td>20.03</td><td>19.32</td><td>19.51</td></tr><tr><td>30%</td><td>19.91</td><td>20.68</td><td>20.78</td><td>20.30</td></tr><tr><td>50%</td><td>20.76</td><td>21.08</td><td>21.10</td><td>21.25</td></tr><tr><td>60%</td><td>20.96</td><td>21.00</td><td>21.00</td><td>21.34</td></tr><tr><td>70%</td><td>21.14</td><td>21.26</td><td>21.54</td><td>21.27</td></tr><tr><td>80%</td><td>21.25</td><td>21.26</td><td>21.30</td><td>21.58</td></tr><tr><td>100%</td><td/><td>21.51</td><td/><td/></tr></table>", |
| "text": "", |
| "html": null |
| } |
| } |
| } |
| } |