| { |
| "paper_id": "E12-1025", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:35:56.925555Z" |
| }, |
| "title": "Active learning for interactive machine translation", |
| "authors": [ |
| { |
| "first": "Jes\u00fas", |
| "middle": [], |
| "last": "Gonz\u00e1lez-Rubio", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Sistemas Inform\u00e1ticos y Computaci\u00f3n U. Polit\u00e8cnica de Val\u00e8ncia C. de Vera s/n", |
| "location": { |
| "postCode": "46022", |
| "settlement": "Valencia", |
| "country": "Spain" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Ortiz-Mart\u00ednez", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Sistemas Inform\u00e1ticos y Computaci\u00f3n U. Polit\u00e8cnica de Val\u00e8ncia C. de Vera s/n", |
| "location": { |
| "postCode": "46022", |
| "settlement": "Valencia", |
| "country": "Spain" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Sistemas Inform\u00e1ticos y Computaci\u00f3n U. Polit\u00e8cnica de Val\u00e8ncia C. de Vera s/n", |
| "location": { |
| "postCode": "46022", |
| "settlement": "Valencia", |
| "country": "Spain" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Translation needs have greatly increased during the last years. In many situations, text to be translated constitutes an unbounded stream of data that grows continually with time. An effective approach to translate text documents is to follow an interactive-predictive paradigm in which both the system is guided by the user and the user is assisted by the system to generate error-free translations. Unfortunately, when processing such unbounded data streams even this approach requires an overwhelming amount of manpower. Is in this scenario where the use of active learning techniques is compelling. In this work, we propose different active learning techniques for interactive machine translation. Results show that for a given translation quality the use of active learning allows us to greatly reduce the human effort required to translate the sentences in the stream.", |
| "pdf_parse": { |
| "paper_id": "E12-1025", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Translation needs have greatly increased during the last years. In many situations, text to be translated constitutes an unbounded stream of data that grows continually with time. An effective approach to translate text documents is to follow an interactive-predictive paradigm in which both the system is guided by the user and the user is assisted by the system to generate error-free translations. Unfortunately, when processing such unbounded data streams even this approach requires an overwhelming amount of manpower. Is in this scenario where the use of active learning techniques is compelling. In this work, we propose different active learning techniques for interactive machine translation. Results show that for a given translation quality the use of active learning allows us to greatly reduce the human effort required to translate the sentences in the stream.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Translation needs have greatly increased during the last years due to phenomena such as globalization and technologic development. For example, the European Parliament 1 translates its proceedings to 22 languages in a regular basis or Project Syndicate 2 that translates editorials into different languages. In these and many other examples, data can be viewed as an incoming unbounded stream since it grows continually with time (Levenberg et al., 2010) . Manual translation of such streams of data is extremely expensive given the huge volume of translation required, 1 http://www.europarl.europa.eu 2 http://project-syndicate.org therefore various automatic machine translation methods have been proposed.", |
| "cite_spans": [ |
| { |
| "start": 430, |
| "end": 454, |
| "text": "(Levenberg et al., 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 570, |
| "end": 571, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "However, automatic statistical machine translation (SMT) systems are far from generating error-free translations and their outputs usually require human post-editing in order to achieve high-quality translations. One way of taking advantage of SMT systems is to combine them with the knowledge of a human translator in the interactive-predictive machine translation (IMT) framework (Foster et al., 1998; Langlais and Lapalme, 2002 ; Barrachina et al., 2009) , which is a particular case of the computer-assisted translation paradigm (Isabelle and Church, 1997) . In the IMT framework, a state-of-the-art SMT model and a human translator collaborate to obtain highquality translations while minimizing required human effort.", |
| "cite_spans": [ |
| { |
| "start": 382, |
| "end": 403, |
| "text": "(Foster et al., 1998;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 404, |
| "end": 430, |
| "text": "Langlais and Lapalme, 2002", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 433, |
| "end": 457, |
| "text": "Barrachina et al., 2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 533, |
| "end": 560, |
| "text": "(Isabelle and Church, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Unfortunately, the application of either postediting or IMT to data streams with massive data volumes is still too expensive, simply because manual supervision of all instances requires huge amounts of manpower. For such massive data streams the need of employing active learning (AL) is compelling. AL techniques for IMT selectively ask an oracle (e.g. a human translator) to supervise a small portion of the incoming sentences. Sentences are selected so that SMT models estimated from them translate new sentences as accurately as possible. There are three challenges when applying AL to unbounded data streams (Zhu et al., 2010) . These challenges can be instantiated to IMT as follows:", |
| "cite_spans": [ |
| { |
| "start": 613, |
| "end": 631, |
| "text": "(Zhu et al., 2010)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. The pool of candidate sentences is dynamically changing, whereas existing AL algorithms are dealing with static datasets only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2. Concepts such as optimum translation and translation probability distribution are continually evolving whereas existing AL algorithms only deal with constant concepts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "3. Data volume is unbounded which makes impractical to batch-learn one single system from all previously translated sentences. Therefore, model training must be done in an incremental fashion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we present a proposal of AL for IMT specifically designed to work with stream data. In short, our proposal divides the data stream into blocks where AL techniques for static datasets are applied. Additionally, we implement an incremental learning technique to efficiently train the base SMT models as new data is available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A body of work has recently been proposed to apply AL techniques to SMT (Haffari et al., 2009; Ambati et al., 2010; Bloodgood and Callison-Burch, 2010) . The aim of these works is to build one single optimal SMT model from manually translated data extracted from static datasets. None of them fit in the setting of data streams. Some of the above described challenges of AL from unbounded streams have been previously addressed in the MT literature. In order to deal with the evolutionary nature of the problem, Nepveu et al. (2004) propose an IMT system with dynamic adaptation via cache-based model extensions for language and translation models. Pursuing the same goal for SMT, Levenberg et al., (2010) study how to bound the space when processing (potentially) unbounded streams of parallel data and propose a method to incrementally retrain SMT models. Another method to efficiently retrain a SMT model with new data was presented in (Ortiz-Mart\u00ednez et al., 2010) . In this work, the authors describe an application of the online learning paradigm to the IMT framework.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 94, |
| "text": "(Haffari et al., 2009;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 95, |
| "end": 115, |
| "text": "Ambati et al., 2010;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 116, |
| "end": 151, |
| "text": "Bloodgood and Callison-Burch, 2010)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 512, |
| "end": 532, |
| "text": "Nepveu et al. (2004)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 681, |
| "end": 705, |
| "text": "Levenberg et al., (2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 939, |
| "end": 968, |
| "text": "(Ortiz-Mart\u00ednez et al., 2010)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To the best of our knowledge, the only previous work on AL for IMT is (Gonz\u00e1lez-Rubio et al., 2011) . There, the authors present a na\u00efve application of the AL paradigm for IMT that do not take into account the dynamic change in probability distribution of the stream. Nevertheless, results show that even that simple AL framework halves the required human effort to obtain a certain translation quality.", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 99, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this work, the AL framework presented in (Gonz\u00e1lez-Rubio et al., 2011) is extended in an effort to address all the above described challenges. In short, we propose an AL framework for IMT that splits the data stream into blocks. This approach allows us to have more context to model the changing probability distribution of the stream (challenge 2) and results in a more accurate sampling of the changing pool of sentences (challenge 1). In contrast to the proposal described in (Gonz\u00e1lez-Rubio et al., 2011) , we define sentence sampling strategies whose underlying models can be updated with the newly available data. This way, the sentences to be supervised by the user are chosen taking into account previously supervised sentences. To efficiently retrain the underlying SMT models of the IMT system (challenge 3), we follow the online learning technique described in (Ortiz-Mart\u00ednez et al., 2010) . Finally, we integrate all these elements to define an AL framework for IMT with an objective of obtaining an optimum balance between translation quality and human user effort.", |
| "cite_spans": [ |
| { |
| "start": 44, |
| "end": 73, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2011)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 482, |
| "end": 511, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2011)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 875, |
| "end": 904, |
| "text": "(Ortiz-Mart\u00ednez et al., 2010)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "IMT can be seen as an evolution of the SMT framework. Given a sentence f from a source language to be translated into a sentence e of a target language, the fundamental equation of SMT (Brown et al., 1993) is defined as follows:", |
| "cite_spans": [ |
| { |
| "start": 181, |
| "end": 205, |
| "text": "SMT (Brown et al., 1993)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e = arg max e P r(e | f )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where P r(e | f ) is usually approximated by a log linear translation model (Koehn et al., 2003) . In this case, the decision rule is given by the expression:\u00ea", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 96, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "= arg max e M m=1 \u03bb m h m (e, f )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "where each h m (e, f ) is a feature function representing a statistical model and \u03bb m its weight. In the IMT framework, a human translator is introduced in the translation process to collaborate with an SMT model. For a given source sentence, the SMT model fully automatically generates an initial translation. The human user checks this translation, from left to right, correcting the first Figure 1 : IMT session to translate a Spanish sentence into English. The desired translation is the translation the human user have in mind. At interaction-0, the system suggests a translation (e s ). At interaction-1, the user moves the mouse to accept the first eight characters \"To view \" and presses the a key (k), then the system suggests completing the sentence with \"list of resources\" (a new e s ). Interactions 2 and 3 are similar. In the final interaction, the user accepts the current translation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 392, |
| "end": 400, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "error. Then, the SMT model proposes a new extension taking the correct prefix, e p , into account. These steps are repeated until the user accepts the translation. Figure 1 illustrates a typical IMT session. In the resulting decision rule, we have to find an extension e s for a given prefix e p . To do this we reformulate equation 1as follows, where the term P r(e p | f ) has been dropped since it does not depend on e s :", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 172, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "e s = arg max es P r(e p , e s | f ) (3) \u2248 arg max es p(e s | f , e p )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The search is restricted to those sentences e which contain e p as prefix. Since e \u2261 e p e s , we can use the same log-linear SMT model, equation (2), whenever the search procedures are adequately modified (Barrachina et al., 2009) .", |
| "cite_spans": [ |
| { |
| "start": 206, |
| "end": 231, |
| "text": "(Barrachina et al., 2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Interactive machine translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The aim of the IMT framework is to obtain highquality translations while minimizing the required human effort. Despite the fact that IMT may reduce the required effort with respect to postediting, it still requires the user to supervise all the translations. To address this problem, we propose to use AL techniques to select only a small number of sentences whose translations are worth to be supervised by the human expert.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "This approach implies a modification of the user-machine interaction protocol. For a given source sentence, the SMT model generates an initial translation. Then, if this initial translation is classified as incorrect or \"worth of supervision\", we perform a conventional IMT procedure as in Figure 1 . If not, we directly return the initial automatic translation and no effort is required from the user. At the end of the process, we use the new sentence pair (f , e) available to refine the SMT models used by the IMT system.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 290, |
| "end": 298, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this scenario, the user only checks a small number of sentences, thus, final translations are not error-free as in conventional IMT. However, results in previous works (Gonz\u00e1lez-Rubio et al., 2011) show that this approach yields important reduction in human effort. Moreover, depending on the definition of the sampling strategy, we can modify the ratio of sentences that are interactively translated to adapt our system to the requirements of a specific translation task. For example, if the main priority is to minimize human effort, our system can be configured to translate all the sentences without user intervention.", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 200, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Algorithm 1 describes the basic algorithm to implement AL for IMT. The algorithm receives as input an initial SMT model, M , a sampling strategy, S, a stream of source sentences, F, and the block size, B. First, a block of B sentences, X, is extracted from the data stream (line 3). From this block, we sample those sentences, Y , that are worth to be supervised by the human expert (line 4). For each of the sentences in X, the current SMT model generates an initial translation, e, (line 6). If the sentence has been sampled as worthy of supervision, f \u2208 Y , the user is required to interactively translate it (lines 8-13) as exemplified in Figure 1 . The source sentence f and its human-supervised translation, e, are then used to retrain the SMT model (line 14). Otherwise, we directly output the automatic translation\u00ea as our final translation (line 17).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 643, |
| "end": 651, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Most of the functions in the algorithm denote different steps in the interaction between the human user and the machine: \u2022 genSuffix(M, f , e p ): returns the suffix of maximum probability that extends prefix e p .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u2022 validTranslation(e): returns True if the user considers the current translation to be correct and False otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Apart from these, the two elements that define the performance of our algorithm are the sampling strategy S(X, M ) and the retrain(M, (f , e)) function. On the one hand, the sampling strategy decides which sentences should be supervised by the user, which defines the human effort required by the algorithm. Section 5 describes our implementation of the sentence sampling to deal with the dynamic nature of data streams. On the other hand, the retrain(\u2022) function incrementally trains the SMT model with each new training pair (f , e). Section 6 describes the implementation of this function.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Active learning for IMT", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A good sentence sampling strategy must be able to select those sentences that along with their correct translations improve most the performance of the SMT model. To do that, the sampling strategy have to correctly discriminate \"informative\" sentences from those that are not. We can make different approximations to measure the informativeness of a given sentence. In the following sections, we describe the three different sampling strategies tested in our experimentation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentence sampling strategies", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Arguably, the simplest sampling approach is random sampling, where the sentences are randomly selected to be interactively translated. Although simple, it turns out that random sampling perform surprisingly well in practice. The success of random sampling stem from the fact that in data stream environments the translation probability distributions may vary significantly through time. While general AL algorithms ask the user to translate informative sentences, they may significantly change probability distributions by favoring certain translations, consequently, the previously human-translated sentences may no longer reveal the genuine translation distribution in the current point of the data stream (Zhu et al., 2007) . This problem is less severe for static data where the candidate pool is fixed and AL algorithms are able to survey all instances. Random sampling avoids this problem by randomly selecting sentences for human supervision. As a result, it always selects those sentences with the most similar distribution to the current sentence distribution in the data stream.", |
| "cite_spans": [ |
| { |
| "start": 708, |
| "end": 726, |
| "text": "(Zhu et al., 2007)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Random sampling", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "One technique to measure the informativeness of a sentence is to directly measure the amount of new information that it will add to the SMT model. This sampling strategy considers that sentences with rare n-grams are more informative. The intuition for this approach is that rare n-grams need to be seen several times in order to accurately estimate their probability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "n-gram coverage sampling", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To do that, we store the counts for each n-gram present in the sentences used to train the SMT model. We assume that an n-gram is accurately represented when it appears A or more times in the training samples. Therefore, the score for a given sentence f is computed as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "n-gram coverage sampling", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "C(f ) = N n=1 |N <A n (f )| N n=1 |N n (f )| (5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "n-gram coverage sampling", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "where N n (f ) is the set of n-grams of size n in f , N <A n (f ) is the set of n-grams of size n in f that are inaccurately represented in the training data and N is the maximum n-gram order. In the experimentation, we assume N = 4 as the maximum n-gram order and a value of 10 for the threshold A. This sampling strategy works by selecting a given percentage of the highest scoring sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "n-gram coverage sampling", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We update the counts of the n-grams seen by the SMT model with each new sentence pair. Hence, the sampling strategy is always up-to-date with the last training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "n-gram coverage sampling", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Another technique is to consider that the most informative sentence is the one the current SMT model translates worst. The intuition behind this approach is that an SMT model can not generate good translations unless it has enough information to translate the sentence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The usual approach to compute the quality of a translation hypothesis is to compare it to a reference translation, but, in this case, it is not a valid option since reference translations are not available. Hence, we use confidence estimation (Gandrabur and Foster, 2003; Blatz et al., 2004; Ueffing and Ney, 2007) to estimate the probability of correctness of the translations. Specifically, we estimate the quality of a translation from the confidence scores of their individual words.", |
| "cite_spans": [ |
| { |
| "start": 254, |
| "end": 271, |
| "text": "and Foster, 2003;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 272, |
| "end": 291, |
| "text": "Blatz et al., 2004;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 292, |
| "end": 314, |
| "text": "Ueffing and Ney, 2007)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The confidence score of a word e i of the translation e = e 1 . . . e i . . . e I generated from the source sentence f = f 1 . . . f j . . . f J is computed as described in (Ueffing and Ney, 2005) :", |
| "cite_spans": [ |
| { |
| "start": 173, |
| "end": 196, |
| "text": "(Ueffing and Ney, 2005)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "C w (e i , f ) = max 0\u2264j\u2264| f | p(e i |f j )", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "where p(e i |f j ) is an IBM model 1 (Brown et al., 1993) bilingual lexicon probability and f 0 is the empty source word. The confidence score for the full translation e is computed as the ratio of its words classified as correct by the word confidence measure. Therefore, we define the confidencebased informativeness score as:", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 57, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "C(e, f ) = 1 \u2212 |{e i | C w (e i , f ) > \u03c4 w }| | e |", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Finally, this sampling strategy works by selecting a given percentage of the highest scoring sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We dynamically update the confidence sampler each time a new sentence pair is added to the SMT model. The incremental version of the EM algorithm (Neal and Hinton, 1999 ) is used to incrementally train the IBM model 1.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 168, |
| "text": "(Neal and Hinton, 1999", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic confidence sampling", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To retrain the SMT model, we implement the online learning techniques proposed in (Ortiz-Mart\u00ednez et al., 2010) . In that work, a stateof-the-art log-linear model (Och and Ney, 2002) and a set of techniques to incrementally train this model were defined. The log-linear model is composed of a set of feature functions governing different aspects of the translation process, including a language model, a source sentence-length model, inverse and direct translation models, a target phrase-length model, a source phraselength model and a distortion model.", |
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 111, |
| "text": "(Ortiz-Mart\u00ednez et al., 2010)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 163, |
| "end": 182, |
| "text": "(Och and Ney, 2002)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retraining of the SMT model", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The incremental learning algorithm allows us to process each new training sample in constant time (i.e. the computational complexity of training a new sample does not depend on the number of previously seen training samples). To do that, a set of sufficient statistics is maintained for each feature function. If the estimation of the feature function does not require the use of the well-known expectation-maximization (EM) algorithm (Dempster et al., 1977 ) (e.g. n-gram language models), then it is generally easy to incrementally extend the model given a new training sample. By contrast, if the EM algorithm is required (e.g. word alignment models), the estimation procedure has to be modified, since the conventional EM algorithm is designed for its use in batch learning scenarios. For such models, the incremental version of the EM algorithm (Neal and Hinton, 1999) is applied. A detailed description of the update algorithm for each of the models in the log-linear combination is presented in (Ortiz-Mart\u00ednez et al., 2010) .", |
| "cite_spans": [ |
| { |
| "start": 435, |
| "end": 457, |
| "text": "(Dempster et al., 1977", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 850, |
| "end": 873, |
| "text": "(Neal and Hinton, 1999)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1002, |
| "end": 1031, |
| "text": "(Ortiz-Mart\u00ednez et al., 2010)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retraining of the SMT model", |
| "sec_num": "6" |
| }, |
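For feature functions that do not need EM, "incrementally extending the model" amounts to updating per-feature sufficient statistics. The sketch below illustrates this for a count-based bigram language model; the class name, add-alpha smoothing, and fixed vocabulary size are illustrative assumptions, not the authors' implementation:

```python
from collections import defaultdict

class IncrementalBigramLM:
    """Count-based bigram LM whose sufficient statistics (n-gram counts)
    can be extended with each new sentence in time proportional to the
    sentence length only, independent of previously seen data."""

    def __init__(self, alpha=1.0, vocab_size=10000):
        self.alpha = alpha              # add-alpha smoothing constant (assumed)
        self.vocab_size = vocab_size    # assumed fixed vocabulary size
        self.bigram = defaultdict(int)  # sufficient statistic: c(u, v)
        self.unigram = defaultdict(int) # sufficient statistic: c(u)

    def update(self, sentence):
        """Incorporate one training sentence; cost depends only on its length."""
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for u, v in zip(tokens, tokens[1:]):
            self.bigram[(u, v)] += 1
            self.unigram[u] += 1

    def prob(self, u, v):
        """Smoothed conditional probability p(v | u)."""
        return (self.bigram[(u, v)] + self.alpha) / (
            self.unigram[u] + self.alpha * self.vocab_size)

lm = IncrementalBigramLM()
lm.update("the cat sat")
lm.update("the dog sat")
```

An EM-based feature (e.g. a word alignment model) cannot be updated this way, which is why the incremental EM variant is needed there.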
| { |
| "text": "We carried out experiments to assess the performance of the proposed AL implementation for IMT. In each experiment, we started with an initial SMT model that was incrementally updated with the sentences selected by the current sampling strategy. Due to the unavailability of public benchmark data streams, we selected a relatively large corpus and treated it as a data stream for AL. To simulate the interaction with the user, we used the reference translations in the data stream corpus as the translations the human user would like to obtain. Since each experiment is carried out under the same conditions, if one sampling strategy outperforms its peers, we can safely conclude that this is because the sentences it selects to be translated are more informative.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The training data comes from the Europarl corpus as distributed for the shared task of the NAACL 2006 workshop on statistical machine translation (Koehn and Monz, 2006). We used this data to estimate the initial log-linear model used by our IMT system (see Section 6). The weights of the different feature functions were tuned by means of minimum error-rate training (Och, 2003) executed on the Europarl development corpus. Once the SMT model was trained, we used the News Commentary corpus (Callison-Burch et al., 2007) to simulate the data stream. The sizes of these corpora are shown in Table 1. The reasons to choose the News Commentary corpus for our experiments are threefold: first, it is large enough to simulate a data stream and test our AL techniques in the long term; second, it is out-of-domain data, which allows us to simulate a real-world situation that may occur in a translation company; and, finally, it consists of editorials from an eclectic range of domains (general politics, economics, and science), which effectively reproduces the variations in the sentence distribution of the simulated data stream.", |
| "cite_spans": [ |
| { |
| "start": 146, |
| "end": 168, |
| "text": "(Koehn and Monz, 2006)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 368, |
| "end": 379, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 491, |
| "end": 520, |
| "text": "(Callison-Burch et al., 2007)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 588, |
| "end": 595, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training corpus and data stream", |
| "sec_num": "7.1" |
| }, |
| { |
| "text": "We want to measure both the quality of the generated translations and the human effort required to obtain them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assessment criteria", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "We measure translation quality with the well-known BLEU (Papineni et al., 2002) score.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 78, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assessment criteria", |
| "sec_num": "7.2" |
| }, |
| { |
| "text": "To estimate human user effort, we simulate the actions taken by a human user in their interaction with the IMT system. The first translation hypothesis for each given source sentence is compared with a single reference translation and the longest common character prefix (LCP) is obtained. The first non-matching character is replaced by the corresponding reference character, and then a new translation hypothesis is produced (see Figure 1). This process is iterated until a full match with the reference is obtained. Each computation of the LCP would correspond to the user looking for the next error and moving the pointer to the corresponding position of the translation hypothesis. Each character replacement, on the other hand, would correspond to a keystroke of the user.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 429, |
| "end": 437, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Assessment criteria", |
| "sec_num": "7.2" |
| }, |
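The simulation loop above can be sketched as follows. Here `translate` is a hypothetical stand-in for the IMT system's prefix-constrained decoder, and (as a simplification of this sketch) a full prefix match with the reference is treated as completion:

```python
def simulate_interaction(translate, reference):
    """Simulate the IMT protocol: compute the longest common character
    prefix (LCP) of hypothesis and reference, then 'type' the first
    non-matching reference character, until the reference is reproduced.
    Returns (keystrokes, LCP computations)."""
    keystrokes, lcp_computations = 0, 0
    prefix = ""
    while True:
        hyp = translate(prefix)        # new hypothesis extending the prefix
        lcp_computations += 1          # user locates the next error
        lcp = 0
        while lcp < min(len(hyp), len(reference)) and hyp[lcp] == reference[lcp]:
            lcp += 1
        if lcp == len(reference):      # the reference has been reproduced
            return keystrokes, lcp_computations
        prefix = reference[:lcp + 1]   # keystroke: correct one character
        keystrokes += 1

# Mock system that always completes the prefix with a fixed (wrong) guess:
guess = "abxa"
mock = lambda prefix: prefix + guess[len(prefix):]
print(simulate_interaction(mock, "abca"))  # → (1, 2)
```

With a perfect system (one that immediately outputs the reference), the simulation returns zero keystrokes and a single LCP computation.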
| { |
| "text": "Bearing this in mind, we measure user effort by means of the keystroke and mouse-action ratio (KSMR) (Barrachina et al., 2009), which has been extensively used to report results in the IMT literature. KSMR is calculated as the number of keystrokes plus the number of mouse actions divided by the total number of reference characters. From a user's point of view the two types of action are different and require different amounts of effort (Macklovitch, 2006); as an approximation, however, KSMR assumes that both actions require similar effort.", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 130, |
| "text": "(Barrachina et al., 2009)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 451, |
| "end": 470, |
| "text": "(Macklovitch, 2006)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assessment criteria", |
| "sec_num": "7.2" |
| }, |
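The KSMR figure itself is straightforward to compute; a minimal sketch, assuming each LCP computation of the simulation counts as one mouse action:

```python
def ksmr(keystrokes, mouse_actions, reference_chars):
    """Keystroke and mouse-action ratio (Barrachina et al., 2009):
    total user actions divided by the number of reference characters."""
    return (keystrokes + mouse_actions) / reference_chars

# e.g. 10 keystrokes and 5 mouse actions for a 100-character reference
print(ksmr(10, 5, 100))  # → 0.15
```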
| { |
| "text": "In this section, we report results for three different experiments. First, we studied the performance of the sampling strategies when dealing with the sampling bias problem. In the second experiment, we carried out a typical AL experiment measuring the performance of the sampling strategies as a function of the percentage of the corpus used to retrain the SMT model. Finally, we tested our AL implementation for IMT in order to study the tradeoff between required human effort and final translation quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental results", |
| "sec_num": "7.3" |
| }, |
| { |
| "text": "In this experiment, we study the performance of the different sampling strategies when dealing with the sampling bias problem. Figure 2 shows the evolution of translation quality, in terms of BLEU, across data blocks for the three sampling strategies described in section 5, namely dynamic confidence sampling (DCS), n-gram coverage sampling (NS) and random sampling (RS). The x-axis represents the block numbers in temporal order, while the y-axis represents the BLEU score obtained when automatically translating each block. These translations are generated by the SMT model trained on the translations supervised by the user up to that point in the data stream. To fairly compare the different methods, we fixed the percentage of words supervised by the human user at 10%, and used a block size of 500 sentences. Similar results were obtained for other block sizes.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 135, |
| "end": 143, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dealing with the sampling bias", |
| "sec_num": "7.3.1" |
| }, |
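The block-wise protocol with a fixed supervision budget can be sketched as below. `score` stands for any of the sampling strategies (DCS, NS, or RS); the function name, the greedy selection, and the word-based budget accounting are illustrative assumptions rather than the exact procedure of the paper:

```python
import random

def select_for_supervision(block, score, word_budget_ratio=0.10):
    """Given a block of source sentences, pick the highest-scoring ones
    until roughly `word_budget_ratio` of the block's words are covered.
    `score(sentence)` returns the informativeness of a sentence
    (higher = more worth supervising); it is a placeholder for DCS/NS/RS."""
    total_words = sum(len(s.split()) for s in block)
    budget = word_budget_ratio * total_words
    selected, used = [], 0
    # greedily take the most informative sentences first
    for sent in sorted(block, key=score, reverse=True):
        if used + len(sent.split()) > budget:
            continue  # too long for the remaining budget; try shorter ones
        selected.append(sent)
        used += len(sent.split())
    return selected

# Random sampling (RS) baseline: an informativeness-free random score.
rs_score = lambda s: random.random()
```

Sentences not selected are translated automatically and left unsupervised, which is where the effort saving comes from.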
| { |
| "text": "Results in Figure 2 indicate that performance fluctuates considerably across data blocks. This phenomenon is due to the eclectic domains of the sentences in the data stream. The steady increase in performance, in turn, is caused by the growing amount of data used to retrain the SMT model. Regarding the different sampling strategies, DCS consistently outperformed RS and NS. This observation confirms that, for concept-drifting data streams with constantly changing translation distributions, DCS can adaptively ask the user to translate the sentences that build a superior SMT model. On the other hand, NS obtains worse results than RS. This can be explained by the fact that NS is independent of the target language and only considers the source language, while DCS takes into account both the source sentence and its automatic translation. A similar phenomenon has been reported in previous work on AL for SMT (Haffari et al., 2009).", |
| "cite_spans": [ |
| { |
| "start": 964, |
| "end": 986, |
| "text": "(Haffari et al., 2009)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 19, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dealing with the sampling bias", |
| "sec_num": "7.3.1" |
| }, |
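To make concrete why NS is independent of the target language: an n-gram coverage score of this kind can be computed from the source sentence and previously seen source n-grams alone. A hypothetical sketch, not the exact formulation of section 5:

```python
def ngram_coverage_score(sentence, seen_ngrams, n=2):
    """Fraction of the sentence's source n-grams NOT yet covered by the
    training data: sentences with many unseen n-grams are deemed more
    informative. No target-side information is used at all."""
    tokens = sentence.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    unseen = sum(1 for g in ngrams if g not in seen_ngrams)
    return unseen / len(ngrams)

seen = {("the", "cat"), ("cat", "sat")}
print(ngram_coverage_score("the cat sat down", seen))  # 1 of 3 bigrams unseen
```

A confidence-based sampler, by contrast, would score the automatic translation itself, which is the distinction the results above highlight.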
| { |
| "text": "We carried out experiments to study the performance of the different sampling strategies. To this end, we compared the quality of the initial automatic translations generated in our AL implementation for IMT (line 6 in Algorithm 1). Figure 3 shows the BLEU score of these initial translations as a function of the percentage of the corpus used to retrain the SMT model, measured in running words.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 232, |
| "end": 240, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "AL performance", |
| "sec_num": "7.3.2" |
| }, |
| { |
| "text": "In Figure 3, we present results for the three sampling strategies described in section 5. Additionally, we compare our techniques with the AL technique for IMT proposed in (Gonz\u00e1lez-Rubio et al., 2011). This technique is similar to DCS, but it does not update the IBM model 1 used by the confidence sampler with the newly available human-translated sentences. We refer to it as the static confidence sampler (SCS).", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 207, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "AL performance", |
| "sec_num": "7.3.2" |
| }, |
| { |
| "text": "Results in Figure 3 indicate that the performance of the retrained SMT models increased as more data was incorporated. Regarding the sampling strategies, DCS improved on the results obtained by the other strategies. NS obtained by far the worst results, which confirms the findings of the previous experiment. Finally, SCS obtained slightly worse results than DCS, which shows the importance of dynamically adapting the underlying model used by the sampling strategy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 19, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "AL performance", |
| "sec_num": "7.3.2" |
| }, |
| { |
| "text": "Finally, we studied the balance between required human effort and final translation quality. This can be useful in a real-world scenario where a translation company is hired to translate a stream of sentences. Under these circumstances, it would be important to be able to predict the effort required from the human translators to obtain a certain translation quality.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balancing human effort and translation quality", |
| "sec_num": "7.3.3" |
| }, |
| { |
| "text": "This experiment simulates such a situation using our proposed IMT system with AL to translate the stream of sentences. To obtain a broad view of the behavior of our system, we repeated the translation process multiple times, requiring an increasing amount of human effort each time. Experiments range from a fully-automatic translation system with no human intervention to a system where the human is required to supervise all the sentences. Figure 4 presents results for SCS (see section 7.3.2) and for the sentence selection strategies presented in section 5. In addition, we also present results for a static system without AL (w/o AL). This system is identical to SCS except that it does not perform any SMT retraining.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 434, |
| "end": 442, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Balancing human effort and translation quality", |
| "sec_num": "7.3.3" |
| }, |
| { |
| "text": "Results in Figure 4 show a consistent reduction in required user effort when using AL. For a given human effort, the AL methods obtained twice the translation quality. Regarding the different AL sampling strategies, DCS obtained the best results, although the differences with respect to the other methods are slight.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 11, |
| "end": 19, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Balancing human effort and translation quality", |
| "sec_num": "7.3.3" |
| }, |
| { |
| "text": "By varying the sentence classifier, we can strike a balance between final translation quality and required human effort. This allows us to adapt the system to the requirements of the particular translation task or to the available economic or human resources. For example, if a translation quality of 60 BLEU points is satisfactory, the human translators would need to modify only 20% of the characters of the automatically generated translations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balancing human effort and translation quality", |
| "sec_num": "7.3.3" |
| }, |
| { |
| "text": "Finally, it should be noted that our IMT systems with AL are able to generate new suffixes and retrain with new sentence pairs in tenths of a second. Thus, they can be applied in real-time scenarios.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Balancing human effort and translation quality", |
| "sec_num": "7.3.3" |
| }, |
| { |
| "text": "In this work, we have presented an AL framework for IMT specially designed to process data streams with massive volumes of data. Our proposal splits the data stream into blocks of sentences of a certain size and applies AL techniques individually to each block. For this purpose, we implemented different sampling strategies that measure the informativeness of a sentence according to different criteria.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "To evaluate our proposed sampling strategies, we carried out experiments comparing them with random sampling and with the only previously proposed AL technique for IMT, described in (Gonz\u00e1lez-Rubio et al., 2011). According to the results, one of the proposed strategies, the dynamic confidence sampling strategy, consistently outperformed all the others.", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 224, |
| "text": "(Gonz\u00e1lez-Rubio et al., 2011)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "The experimental results show that the use of AL techniques allows us to trade off required human effort against final translation quality. In other words, we can adapt our system to meet the quality requirements of the translation task or the available human resources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| }, |
| { |
| "text": "As future work, we plan to investigate more sophisticated sampling strategies, such as those based on information density or query-by-committee. Additionally, we will conduct experiments with real users to confirm the results obtained with our user simulation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions and future work", |
| "sec_num": "8" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287576. Work also supported by the EC (FEDER/FSE) and the Spanish MEC under the MIPRCV Consolider Ingenio 2010 program (CSD2007-00018) and the iTrans2 (TIN2009-14511) project, and by the Generalitat Valenciana under grant ALMPR (Prometeo/2009/01).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Active learning and crowd-sourcing for machine translation", |
| "authors": [ |
| { |
| "first": "Vamshi", |
| "middle": [], |
| "last": "Ambati", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of the conference on International Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "2169--2174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010. Active learning and crowd-sourcing for ma- chine translation. In Proc. of the conference on International Language Resources and Evaluation, pages 2169-2174.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Statistical approaches to computer-assisted translation", |
| "authors": [ |
| { |
| "first": "Sergio", |
| "middle": [], |
| "last": "Barrachina", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Bender", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorge", |
| "middle": [], |
| "last": "Civera", |
| "suffix": "" |
| }, |
| { |
| "first": "Elsa", |
| "middle": [], |
| "last": "Cubel", |
| "suffix": "" |
| }, |
| { |
| "first": "Shahram", |
| "middle": [], |
| "last": "Khadivi", |
| "suffix": "" |
| }, |
| { |
| "first": "Antonio", |
| "middle": [], |
| "last": "Lagarda", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "Jes\u00fas", |
| "middle": [], |
| "last": "Tom\u00e1s", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Vidal", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan-Miguel", |
| "middle": [], |
| "last": "Vilar", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Computational Linguistics", |
| "volume": "35", |
| "issue": "", |
| "pages": "3--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sergio Barrachina, Oliver Bender, Francisco Casacu- berta, Jorge Civera, Elsa Cubel, Shahram Khadivi, Antonio Lagarda, Hermann Ney, Jes\u00fas Tom\u00e1s, En- rique Vidal, and Juan-Miguel Vilar. 2009. Sta- tistical approaches to computer-assisted translation. Computational Linguistics, 35:3-28.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Confidence estimation for machine translation", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blatz", |
| "suffix": "" |
| }, |
| { |
| "first": "Erin", |
| "middle": [], |
| "last": "Fitzgerald", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Simona", |
| "middle": [], |
| "last": "Gandrabur", |
| "suffix": "" |
| }, |
| { |
| "first": "Cyril", |
| "middle": [], |
| "last": "Goutte", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Kulesza", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "Sanchis", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Ueffing", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of the international conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "315--321", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence es- timation for machine translation. In Proc. of the in- ternational conference on Computational Linguis- tics, pages 315-321.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Bucking the trend: large-scale cost-focused active learning for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Bloodgood", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "854--864", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: large-scale cost-focused active learning for statistical machine translation. In Proc. of the Association for Computational Linguistics, pages 854-864.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "The mathematics of statistical machine translation: parameter estimation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "Della" |
| ], |
| "last": "Vincent", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19:263-311.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Meta-) evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison", |
| "suffix": "" |
| }, |
| { |
| "first": "-", |
| "middle": [], |
| "last": "Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Cameron", |
| "middle": [], |
| "last": "Fordyce", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Schroeder", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of the Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "136--158", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Proc. of the Workshop on Statistical Machine Translation, pages 136-158.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Maximum likelihood from incomplete data via the EM algorithm", |
| "authors": [ |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Dempster", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Laird", |
| "suffix": "" |
| }, |
| { |
| "first": "Donald", |
| "middle": [], |
| "last": "Rubin", |
| "suffix": "" |
| } |
| ], |
| "year": 1977, |
| "venue": "Journal of the Royal Statistical Society", |
| "volume": "39", |
| "issue": "1", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statis- tical Society., 39(1):1-38.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Target-text mediated interactive machine translation", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Isabelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Plamondon", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Machine Translation", |
| "volume": "12", |
| "issue": "", |
| "pages": "175--194", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Foster, Pierre Isabelle, and Pierre Plamon- don. 1998. Target-text mediated interactive ma- chine translation. Machine Translation, 12:175- 194.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Confidence estimation for text prediction", |
| "authors": [ |
| { |
| "first": "Simona", |
| "middle": [], |
| "last": "Gandrabur", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of the Conference on Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "315--321", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simona Gandrabur and George Foster. 2003. Confi- dence estimation for text prediction. In Proc. of the Conference on Computational Natural Language Learning, pages 315-321.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "An active learning scenario for interactive machine translation", |
| "authors": [ |
| { |
| "first": "Jes\u00fas", |
| "middle": [], |
| "last": "Gonz\u00e1lez-Rubio", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Ortiz-Mart\u00ednez", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proc. of the 13thInternational Conference on Multimodal Interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jes\u00fas Gonz\u00e1lez-Rubio, Daniel Ortiz-Mart\u00ednez, and Francisco Casacuberta. 2011. An active learn- ing scenario for interactive machine translation. In Proc. of the 13th International Conference on Mul- timodal Interaction. ACM.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Active learning for statistical phrase-based machine translation", |
| "authors": [ |
| { |
| "first": "Gholamreza", |
| "middle": [], |
| "last": "Haffari", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxim", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| }, |
| { |
| "first": "Anoop", |
| "middle": [], |
| "last": "Sarkar", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proc. of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "415--423", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proc. of the North Ameri- can Chapter of the Association for Computational Linguistics, pages 415-423.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Special issue on new tools for human translators", |
| "authors": [ |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Isabelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenneth", |
| "middle": [ |
| "Ward" |
| ], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Machine Translation", |
| "volume": "12", |
| "issue": "1-2", |
| "pages": "1--2", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pierre Isabelle and Kenneth Ward Church. 1997. Spe- cial issue on new tools for human translators. Ma- chine Translation, 12(1-2):1-2.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Manual and automatic evaluation of machine translation between european languages", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of the Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "102--121", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn and Christof Monz. 2006. Man- ual and automatic evaluation of machine transla- tion between european languages. In Proc. of the Workshop on Statistical Machine Translation, pages 102-121.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Statistical phrase-based translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology", |
| "volume": "1", |
| "issue": "", |
| "pages": "48--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Pro- ceedings of the 2003 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics on Human Language Technology -Vol- ume 1, pages 48-54.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Trans Type: development-evaluation cycles to boost translator's productivity. Machine Translation", |
| "authors": [ |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Lapalme", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "17", |
| "issue": "", |
| "pages": "77--98", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philippe Langlais and Guy Lapalme. 2002. Trans Type: development-evaluation cycles to boost trans- lator's productivity. Machine Translation, 17:77-98.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Stream-based translation models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Abby", |
| "middle": [], |
| "last": "Levenberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Miles", |
| "middle": [], |
| "last": "Osborne", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "394--402", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abby Levenberg, Chris Callison-Burch, and Miles Os- borne. 2010. Stream-based translation models for statistical machine translation. In Proc. of the North American Chapter of the Association for Compu- tational Linguistics, pages 394-402, Los Angeles, California, June.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "TransType2: the last word", |
| "authors": [ |
| { |
| "first": "Elliott", |
| "middle": [], |
| "last": "Macklovitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proc. of the conference on International Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "167--184", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elliott Macklovitch. 2006. TransType2: the last word. In Proc. of the conference on International Language Resources and Evaluation, pages 167-184.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A view of the EM algorithm that justifies incremental, sparse, and other variants", |
| "authors": [ |
| { |
| "first": "Radford", |
| "middle": [], |
| "last": "Neal", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Learning in graphical models", |
| "volume": "", |
| "issue": "", |
| "pages": "355--368", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Radford Neal and Geoffrey Hinton. 1999. A view of the EM algorithm that justifies incremental, sparse, and other variants. Learning in graphical models, pages 355-368.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Adaptive language and translation models for interactive machine translation", |
| "authors": [ |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Nepveu", |
| "suffix": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Lapalme", |
| "suffix": "" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "190--197", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Laurent Nepveu, Guy Lapalme, Philippe Langlais, and George Foster. 2004. Adaptive language and translation models for interactive machine translation. In Proc. of EMNLP, pages 190-197, Barcelona, Spain, July.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Discriminative training and maximum entropy models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "295--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of the Association for Computational Linguistics, pages 295-302.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Minimum error rate training in statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "160--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of the Association for Computational Linguistics, pages 160-167.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Online learning for interactive statistical machine translation", |
| "authors": [ |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Ortiz-Mart\u00ednez", |
| "suffix": "" |
| }, |
| { |
| "first": "Ismael", |
| "middle": [], |
| "last": "Garc\u00eda-Varea", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Casacuberta", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proc. of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "546--554", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Daniel Ortiz-Mart\u00ednez, Ismael Garc\u00eda-Varea, and Francisco Casacuberta. 2010. Online learning for interactive statistical machine translation. In Proc. of the North American Chapter of the Association for Computational Linguistics, pages 546-554.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of the Association for Computational Linguistics, pages 311-318.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Application of word-level confidence measures in interactive statistical machine translation", |
| "authors": [ |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Ueffing", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proc. of the European Association for Machine Translation conference", |
| "volume": "", |
| "issue": "", |
| "pages": "262--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicola Ueffing and Hermann Ney. 2005. Application of word-level confidence measures in interactive statistical machine translation. In Proc. of the European Association for Machine Translation conference, pages 262-270.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Word-level confidence estimation for machine translation", |
| "authors": [ |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Ueffing", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "", |
| "pages": "9--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicola Ueffing and Hermann Ney. 2007. Word-level confidence estimation for machine translation. Computational Linguistics, 33:9-40.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Active learning from data streams", |
| "authors": [ |
| { |
| "first": "Xingquan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yong", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of the 7th IEEE International Conference on Data Mining", |
| "volume": "", |
| "issue": "", |
| "pages": "757--762", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xingquan Zhu, Peng Zhang, Xiaodong Lin, and Yong Shi. 2007. Active learning from data streams. In Proc. of the 7th IEEE International Conference on Data Mining, pages 757-762. IEEE Computer Society.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Active learning from stream data using optimal weight classifier ensemble", |
| "authors": [ |
| { |
| "first": "Xingquan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Yong", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Transactions on Systems, Man and Cybernetics Part B", |
| "volume": "40", |
| "issue": "", |
| "pages": "1607--1621", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xingquan Zhu, Peng Zhang, Xiaodong Lin, and Yong Shi. 2010. Active learning from stream data using optimal weight classifier ensemble. Transactions on Systems, Man and Cybernetics Part B, 40:1607-1621, December.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Pseudo-code of the proposed algorithm to implement AL for IMT from unbounded data streams. Input: M (initial SMT model), S (sampling strategy), F (stream of source sentences), B (block size); auxiliary: X (block of sentences), Y (sentences worth of supervision). \u2022 translate(M, f): returns the most probable automatic translation of f given by M. \u2022 validPrefix(e): returns the prefix of e validated by the user as correct; this prefix includes the correction k. Visible pseudo-code fragments: e_p = validPrefix(e); \u00ea_s = genSuffix(M, f, e_p); e = e_p \u00ea_s; until validTranslation(e); M = retrain(M, (f, e)).", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF1": { |
| "text": "Performance of the AL methods across different data blocks. Block size 500. Human supervision 10% of the corpus.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "BLEU of the initial automatic translations as a function of the percentage of the corpus used to retrain the model.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF4": { |
| "text": "Quality of the data stream translation (BLEU) as a function of the required human effort (KSMR). w/o AL denotes a system with no retraining.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td>corpus</td><td>use</td><td>sentences</td><td>words (Spa/Eng)</td></tr><tr><td rowspan=\"2\">Europarl</td><td>train</td><td>731K</td><td>15M/15M</td></tr><tr><td>devel.</td><td>2K</td><td>60K/58K</td></tr><tr><td>News Commentary</td><td>test</td><td>51K</td><td>1.5M/1.2M</td></tr></table>", |
| "type_str": "table", |
| "text": "Size of the Spanish-English corpora used in the experiments. K and M stand for thousands and millions of elements respectively." |
| } |
| } |
| } |
| } |