| { |
| "paper_id": "A00-1019", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T01:12:51.717016Z" |
| }, |
| "title": "Unit Completion for a Computer-aided Translation System", |
| "authors": [ |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
"institution": "succursale Centre-ville, Montr\u00e9al (Qu\u00e9bec)",
| "location": { |
| "postCode": "H3C 3J7", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
"institution": "succursale Centre-ville, Montr\u00e9al (Qu\u00e9bec)",
| "location": { |
| "postCode": "H3C 3J7", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Lapalme", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
"institution": "succursale Centre-ville, Montr\u00e9al (Qu\u00e9bec)",
| "location": { |
| "postCode": "H3C 3J7", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Rali", |
| "middle": [ |
| "/" |
| ], |
| "last": "Diro", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "succursale Centre-ville Montral (Qubec)", |
| "location": { |
| "postCode": "H3C 3J7", |
| "country": "Canada" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This work is in the context of TRANSTYPE, a system that observes its user as he or she types a translation and repeatedly suggests completions for the text already entered. The user may either accept, modify, or ignore these suggestions. We describe the design, implementation, and performance of a prototype which suggests completions of units of texts that are longer than one word.", |
| "pdf_parse": { |
| "paper_id": "A00-1019", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This work is in the context of TRANSTYPE, a system that observes its user as he or she types a translation and repeatedly suggests completions for the text already entered. The user may either accept, modify, or ignore these suggestions. We describe the design, implementation, and performance of a prototype which suggests completions of units of texts that are longer than one word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "TRANSTYPE is part of a project set up to explore an appealing solution to Interactive Machine Translation (IMT). In contrast to classical IMT systems, where the user's role consists mainly of assisting the computer to analyse the source text (by answering questions about word sense, ellipses, phrasal attachments, etc.), in TRANSTYPE the interaction is directly concerned with establishing the target text.",
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 111, |
| "text": "(IMT)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "Our interactive translation system works as follows: a translator selects a sentence and begins typing its translation. After each character typed by the translator, the system displays a proposed completion, which may either be accepted using a special key or rejected by continuing to type. Thus the translator remains in control of the translation process and the machine must continually adapt its suggestions in response to his or her input. We are currently undertaking a study to measure the extent to which our word-completion prototype can improve translator productivity. The conclusions of this study will be presented elsewhere.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
"text": "The first version of TRANSTYPE (Foster et al., 1997) only proposed completions for the current word. This paper deals with predictions which extend to the next several words in the text. The potential gain from multiple-word predictions can be appreciated in the one-sentence translation task reported in table 1, where a hypothetical user saves over 60% of the keystrokes needed to produce a translation in a word-completion scenario, and about 85% in a \"unit\" completion scenario.",
"cite_spans": [
{
"start": 31,
"end": 52,
"text": "(Foster et al., 1997)",
"ref_id": "BIBREF4"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
"text": "In all the figures that follow, we use different fonts to differentiate the various input and output: italics are used for the source text, sans-serif for characters typed by the user and typewriter-like for characters completed by the system. The first few lines of table 1 give an idea of how TRANSTYPE functions. Let us assume the unit scenario (see column 2 of the table) and suppose that the user wants to produce the sentence \"Ce projet de loi est examin\u00e9 \u00e0 la chambre des communes\" as a translation for the source sentence \"This bill is examined in the house of commons\". The first hypothesis that the system produces before the user enters a character is loi (law). As this is not a good guess from TRANSTYPE, the user types the first character (c) of the words he or she wants as a translation. Taking this new input into account, TRANSTYPE then modifies its proposal so that it is compatible with what the translator has typed. It suggests the desired sequence ce projet de loi, which the user can simply validate by typing a dedicated key. Continuing in this way, the user and TRANSTYPE alternately contribute to the final translation. A screen copy of this prototype is provided in figure 1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": null |
| }, |
| { |
| "text": "The Core Engine", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
| "text": "The core of TRANSTYPE is a completion engine which comprises two main parts: an evaluator which assigns probabilistic scores to completion hypotheses and a generator which uses the evaluation function to select the best candidate for completion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2", |
| "sec_num": null |
| }, |
| { |
"text": "The evaluator is a function p(t | t', s) which assigns to each target-text unit t an estimate of its probability given a source text s and the tokens t' which precede t in the current translation of s. 1 Our approach to modeling this distribution is based to a large extent on that of the IBM group (Brown et al., 1993), but it differs in one significant aspect: whereas the IBM model involves a \"noisy channel\" decomposition, we use a linear combination of separate predictions for the language and translation components. Table 1: A one-sentence session illustrating the word- and unit-completion tasks. The first column indicates the target words the user is expected to produce. The next two columns indicate respectively the prefixes typed by the user and the completions proposed by the system in the word-completion task. The last two columns provide the same information for the unit-completion task. The total number of keystrokes for both tasks is reported in the last line. + indicates the acceptance key typed by the user. A completion is denoted by \u03b1/\u03b2 where \u03b1 is the typed prefix and \u03b2 the completed part. Completions for different prefixes are separated by \u2022.",
"cite_spans": [
{
"start": 299,
"end": 319,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Although the noisy channel decomposition is powerful, it has the disadvantage that p(s|t', t) is more expensive to compute than p(t|s) when using IBM-style translation models. Since speed is crucial for our application, we chose to forego the noisy channel approach in the work described here. Our linear combination model is described as follows:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "p(t|t', s) = p(t|t') \u03b1(t', s) + p(t|s) [1 \u2212 \u03b1(t', s)] (1), where the first term is the language-model contribution and the second the translation-model contribution",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "\u2022 ~ \u2022 \u2022 v J language translation", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "where \u03b1(t', s) \u2208 [0, 1] are context-dependent interpolation coefficients. For example, the translation model could have a higher weight at the start of a sentence, while the contribution of the language model might become more important in the middle or at the end of the sentence. A study of the weightings for these two models is described elsewhere. In the work described here we did not use the contribution of the language model (that is,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "\u03b1(t', s) = 0, \u2200 t', s).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Techniques for weakening the independence assumptions made by the IBM models 1 and 2 have been proposed in recent work (Brown et al., 1993; Berger et al., 1996; Och and Weber, 98; Wang and Waibel, 98; Wu and Wong, 98) . These studies report improvements on some specific tasks (task-oriented limited vocabulary) which by nature are very different from the task TRANSTYPE is devoted to. Furthermore, the underlying decoding strategies are too time-consuming for our application. We therefore use a translation model based on the simple linear interpolation given in equation 2 which combines predictions of two translation models, Ms and Mu, both based on IBM-like model 2 (Brown et al., 1993) . Ms was trained on single words and Mu, described in section 3, was trained on both words and units.",
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 139, |
| "text": "(Brown et al., 1993;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 140, |
| "end": 160, |
| "text": "Berger et al., 1996;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 161, |
| "end": 179, |
| "text": "Och and Weber, 98;", |
| "ref_id": null |
| }, |
| { |
| "start": 180, |
| "end": 200, |
| "text": "Wang and Waibel, 98;", |
| "ref_id": null |
| }, |
| { |
| "start": 201, |
| "end": 217, |
| "text": "Wu and Wong, 98)", |
| "ref_id": null |
| }, |
| { |
| "start": 673, |
| "end": 693, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "p(t|s) = \u03b2 ps(t|s) + (1 \u2212 \u03b2) pu(t|G(s))",
| "eq_num": "(2)" |
| } |
| ], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "word unit", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "where ps and pu stand for the probabilities given respectively by Ms and Mu. G(s) represents the new sequence of tokens obtained after grouping the tokens of s into units. The grouping operator G is illustrated in table 2 and is described in section 3.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Evaluator", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "The task of the generator is to identify units that match the current prefix typed by the user, and pick the best candidate according to the evaluator. Due to time considerations, the generator introduces a division of the target vocabulary into two parts: a small active component whose contents are always searched for a match to the current prefix, and a much larger passive part (over 380,000 word forms) which comes into play only when no candidates are found in the active vocabulary. The active part is computed dynamically when a new sentence is selected by the translator. It is composed of a few entities (tokens and units) that are likely to appear in the translation. It is a union of the best candidates provided by each model Ms and Mu over the set of all possible target tokens (resp. units) that have a non-null translation probability of being translated by any of the current source tokens (resp. units). Table 2 : Role of the generator for a sample pair of sentences (t is the translation of s in our corpus). G(s) is the sequence of source tokens recast by the grouping operator G. As indicates the 10 best tokens according to the word model, Au the 10 best units according to the unit model.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 923, |
| "end": 930, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Generator", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "Automatically identifying which source words or groups of words will give rise to which target words or groups of words is a fundamental problem which remains open. In this work, we decided to proceed in two steps: a) monolingually identifying groups of words that would be better handled as units in a given context, and b) mapping the resulting source and target units. To train our unit models, we used a segment of the Hansard corpus consisting of 15,377 pairs of sentences, totaling 278,127 English tokens (13,543 forms) and 292,865 French tokens (16,399 forms).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Unit Associations", |
| "sec_num": "3" |
| }, |
| { |
"text": "Finding relevant units in a text has been explored in many areas of natural language processing. Our approach relies on distributional and frequency statistics computed on each sequence of words found in a training corpus. For the sake of efficiency, we used the suffix-array technique to get a compact representation of our training corpus. This method allows the efficient retrieval of arbitrary-length n-grams (Nagao and Mori, 94; Haruno et al., 96; Ikehara et al., 96; Shimohata et al., 1997; Russell, 1998) . The literature abounds in measures that can help to decide whether words that co-occur are linguistically significant or not. In this work, the strength of association of a sequence of words w_1^n = w_1, ..., w_n is computed by two measures: a likelihood-based one p(w_1^n) (where \u2113 is the likelihood ratio given in (Dunning, 93)) and an entropy-based one e(w_1^n) (Shimohata et al., 1997) . Letting T stand for the training text and m a token:",
| "cite_spans": [ |
| { |
| "start": 409, |
| "end": 429, |
| "text": "(Nagao and Mori, 94;", |
| "ref_id": null |
| }, |
| { |
| "start": 430, |
| "end": 448, |
| "text": "Haruno et al., 96;", |
| "ref_id": null |
| }, |
| { |
| "start": 449, |
| "end": 468, |
| "text": "Ikehara et al., 96;", |
| "ref_id": null |
| }, |
| { |
| "start": 469, |
| "end": 492, |
| "text": "Shimohata et al., 1997;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 493, |
| "end": 507, |
| "text": "Russell, 1998)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 865, |
| "end": 889, |
| "text": "(Shimohata et al., 1997)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Finding Monolingual Units", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "p(w_1^n) = argmin_{i \u2208 ]1,n[} \u2113(w_1^i, w_{i+1}^n) (3), e(w_1^n) = 0.5 \u00d7 [ \u03a3_{m : w_1^n m \u2208 T} h(freq(w_1^n m) / freq(w_1^n)) + \u03a3_{m : m w_1^n \u2208 T} h(freq(m w_1^n) / freq(w_1^n)) ], where h(x) = \u2212x log x",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Finding Monolingual Units", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Intuitively, the first measurement accounts for the fact that parts of a sequence of words that should be considered as a whole should not appear often by themselves. The second one reflects the fact that a salient unit should appear in various contexts (i.e. should have a high entropy score).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Finding Monolingual Units", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "We implemented a cascade filtering strategy based on the likelihood score p, the frequency f, the length l and the entropy value e of the sequences. A first filter (F1(lmin, fmin, pmin, emin)) removes any sequence s for which l(s) < lmin or p(s) < pmin or e(s) < emin or f(s) < fmin. A second filter (F2) removes sequences that are included in preferred ones. In terms of sequence reduction, applying F1(2, 2, 5.0, 0.2) to the 81,974 English sequences of at least two tokens seen at least twice in our training corpus filtered out just over half of them, leaving 39,093: 17,063 (21%) were removed because of their low entropy value and 25,818 (31%) because of their low likelihood value.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Finding Monolingual Units", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Mapping the identified units (tokens or sequences) to their equivalents in the other language was achieved by training a new translation model (IBM 2) using the EM algorithm as described in (Brown et al., 1993). This required grouping the tokens in our training corpus into sequences, on the basis of the unit lexicons identified in the previous step (we will refer to the result of this grouping as the sequence-based corpus). To deal with overlapping possibilities, we used a dynamic programming scheme which optimized a criterion C given by equation 4 over a set S of all units collected for a given language plus all single words. G(w_1^n) is obtained by returning the path that maximized B(n). We investigated several C-criteria and found Cl, a length-based measure, to be the most satisfactory. Table 2 shows an output of the grouping function. Table 3: Bilingual associations. The first column indicates a source unit, the second one its frequency in the training corpus. The third column reports its 3 best-ranked target associations (a being a token or a unit, p being the translation probability). The second half of the table reports NP-associations obtained after the filter described in the text.",
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 210, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 800, |
| "end": 807, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 850, |
| "end": 857, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mapping", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "We investigated three ways of estimating the parameters of the unit model. In the first one, E1, the translation parameters are estimated by applying the EM algorithm in a straightforward fashion over all entities (tokens and units) present at least twice in the sequence-based corpus. The next two methods filter the probabilities obtained with the E1 method. In E2, all probabilities p(t|s) are set to 0 whenever s is a token (not a unit), thus forcing the model to contain only associations between source units and target entities (tokens or units). In E3, any parameter of the model that involves a token is removed (that is, p(t|s) = 0 if t or s is a token).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "The resulting model will thus contain only unit associations. In both cases, the final probabilities are renormalized. Table 3 shows a few entries from a unit model (Mu) obtained after 15 iterations of the EM algorithm on a sequence corpus resulting from the application of the length-grouping criterion (Cl) over a lexicon of units whose likelihood score is above 5.0. The probabilities have been obtained by application of the method E2.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 119, |
| "end": 126, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mapping", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We found many partially correct associations", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "(over the years/au fil des, we have/nous, etc.) that illustrate the weakness of decoupling the unit identification from the mapping problem. In most cases, however, these associations have a lower probability than the good ones. (Entities seen only once are mapped to a special \"unknown\" word.) We also found a few erratic associations (the first time/c'\u00e9tait, some hon. members/t, etc.) due to distributional artifacts. It is also interesting to note that the good associations we found are not necessarily compositional in nature (we must/il faut, people of canada/les canadiens, of course/\u00e9videmment, etc.).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mapping", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "One way to increase the precision of the mapping process is to impose some linguistic constraints on the sequences such as simple noun-phrase constraints (Gaussier, 1995; Kupiec, 1993; Chen and Chen, 94; Fung, 1995; Evans and Zhai, 1996) . It is also possible to focus on non-compositional compounds, a key point in bilingual applications (Su et al., 1994; Melamed, 1997; Lin, 99) . Another interesting approach is to restrict sequences to those that do not cross constituent-boundary patterns (Wu, 1995; Furuse and Iida, 96) . In this study, we filtered for potential sequences that are likely to be noun phrases, using simple regular expressions over the associated part-of-speech tags. An excerpt of the association probabilities of a unit model trained considering only the NP-sequences is given in table 3. Applying this filter (referred to as FNP in the following) to the 39,093 English sequences still surviving after the previous filters F1 and F2 removes 35,939 of them (92%). Table 4 : Completion results of several translation models. spared: theoretical proportion of characters saved; ok: number of target units accepted by the user; good: number of target units that matched the expected target, whether they were proposed or not; nu: number of sentences for which no target unit was found by the translation model; u: number of sentences for which at least one helpful unit has been found by the model, but not necessarily proposed.",
| "cite_spans": [ |
| { |
| "start": 153, |
| "end": 169, |
| "text": "(Ganssier, 1995;", |
| "ref_id": null |
| }, |
| { |
| "start": 170, |
| "end": 183, |
| "text": "Kupiec, 1993;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 184, |
| "end": 206, |
| "text": "hua Chen and Chen, 94;", |
| "ref_id": null |
| }, |
| { |
| "start": 207, |
| "end": 218, |
| "text": "Fung, 1995;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 219, |
| "end": 240, |
| "text": "Evans and Zhai, 1996)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 342, |
| "end": 359, |
| "text": "(Su et al., 1994;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 360, |
| "end": 374, |
| "text": "Melamed, 1997;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 375, |
| "end": 383, |
| "text": "Lin, 99)", |
| "ref_id": null |
| }, |
| { |
| "start": 497, |
| "end": 507, |
| "text": "(Wu, 1995;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 508, |
| "end": 528, |
| "text": "Furuse and Iida, 96)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 988, |
| "end": 995, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "More than half of the 3,154 remaining NP-sequences contain only two words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Filtering", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "We collected completion results on a test corpus of 747 sentences (13,386 English tokens and 14,506 French ones) taken from the Hansard corpus. These sentences were selected randomly among sentences that had not been used for training. Around 18% of the source and target words are not known by the translation model. The baseline models (lines 1 and 2) are obtained without any unit model (i.e. \u03b2 = 1 in equation 2). The first one is obtained with an IBM-like model 1 while the second is an IBM-like model 2. We observe that for the pair of languages we considered, model 2 improves the proportion of saved keystrokes by almost 3% compared to model 1. We therefore made use of alignment probabilities for the other models.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "The next three blocks in table 4 show how the parameter estimation method affects performance. Training models under the E1 method gives the worst results. This stems from the fact that the word-to-word probabilities trained on the sequence-based corpus (predicted by Mu in equation 2) are less accurate than the ones learned from the token-based corpus. The reason is simply that there are fewer occurrences of each token, especially if many units are identified by the grouping operator.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "In methods E2 and E3, the unit model of equation 2 only makes predictions pu(t|s) when s is a source unit, thus lowering the noise compared to method E1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We also observe in these three blocks the influence of sequence filtering: the more we filter, the better the results. This holds true for all estimation methods tried. In the fifth block of table 4 we observe the positive influence of the NP-filtering, especially when using the third estimation method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "The best combination we found is reported in line 15. It outperforms the baseline by around 1.5%. This model has been obtained by retaining all sequences seen at least twice in the training corpus for which the likelihood test value was above 5 and the entropy score above 0.2 (F1(2, 2, 5, 0.2)). In terms of the coverage of this unit model, it is interesting to note that among the 747 sentences of the test session, there were 228 for which the model did not propose any units at all. For 425 of the remaining sentences, the model proposed at least one helpful (good or partially good) unit. The active vocabulary for these sentences contained an average of around 2.5 good units per sentence, of which only half (495) were proposed during the session. The fact that this model outperforms the others despite its relatively poor coverage may be explained by the fact that it also removes part of the noise introduced by decoupling the identification of the salient units from the training procedure. Furthermore, as we mentioned earlier, the more we filter, the less necessary the grouping scheme presented in equation 4 becomes, thus reducing a possible source of noise.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The fact that this model outperforms others, despite its relatively poor coverage, is due to the fact that it also removes part of the noise that is introduced by dissociating the identification of the salient units from the training procedure. ~rthermore, as we mentioned earlier, the more we filter, the less the grouping scheme presented in equation 4 remains necessary, thus further reducing an other possible source of noise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We have described a prototype system called TRANSTYPE which embodies an innovative approach to interactive machine translation in which the interaction is directly concerned with establishing the target text. We proposed and tested a mechanism to enhance TRANSTYPE by having it predict sequences of words rather than just completions for the current word. The results show a modest improvement in prediction performance which will serve as a baseline for our future investigations. One obvious direction for future research is to revise our current strategy of decoupling the selection of units from their bilingual context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "1We assume the existence of a deterministic procedure for tokenizing the target text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
"text": "TRANSTYPE is a project funded by the Natural Sciences and Engineering Research Council of Canada. We are indebted to Elliott Macklovitch and Pierre Isabelle for the fruitful orientations they gave to this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A maximum entropy approach to natural language processing", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [ |
| "L" |
| ], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "J Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "1", |
| "pages": "39--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vin- cent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Compu- tational Linguistics, 22(1):39-71.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
"title": "The mathematics of statistical machine translation: Parameter estimation",
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peter", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Della", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--312", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-312, June.",
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Accurate methods for the statistics of surprise and coincidence", |
| "authors": [ |
| { |
| "first": "Ted", |
| "middle": [], |
| "last": "Dunning", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "1", |
| "pages": "61--74", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Noun-phrase analysis in unrestricted text for information retrieval", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "A" |
| ], |
| "last": "Evans", |
| "suffix": "" |
| }, |
| { |
| "first": "Chengxiang", |
| "middle": [], |
| "last": "Zhai", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "17--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David A. Evans and Chengxiang Zhai. 1996. Noun-phrase analysis in unrestricted text for information retrieval. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 17-24, Santa Cruz, California.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Target-text Mediated Interactive Machine Translation", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Isabelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierre", |
| "middle": [], |
| "last": "Plamondon", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Machine Translation", |
| "volume": "12", |
| "issue": "", |
| "pages": "175--194", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George Foster, Pierre Isabelle, and Pierre Plamondon. 1997. Target-text Mediated Interactive Machine Translation. Machine Translation, 12:175-194.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A pattern matching method for finding noun and proper noun translations from noisy parallel corpora", |
| "authors": [ |
| { |
| "first": "Pascale", |
| "middle": [], |
| "last": "Fung", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "236--243", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascale Fung. 1995. A pattern matching method for finding noun and proper noun translations from noisy parallel corpora. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 236-243, Cambridge, Massachusetts.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Incremental translation utilizing constituent boundary patterns", |
| "authors": [ |
| { |
| "first": "Osamu", |
| "middle": [], |
| "last": "Furuse", |
| "suffix": "" |
| }, |
| { |
| "first": "Hitoshi", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th International Conference On Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "412--417", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Osamu Furuse and Hitoshi Iida. 1996. Incremental translation utilizing constituent boundary patterns. In Proceedings of the 16th International Conference On Computational Linguistics, pages 412-417, Copenhagen, Denmark. Eric Gaussier. 1995. Modèles statistiques et patrons morphosyntaxiques pour l'extraction de lexiques bilingues. Ph.D. thesis, Université de Paris 7, janvier.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Learning bilingual collocations by word-level sorting", |
| "authors": [ |
| { |
| "first": "Masahiko", |
| "middle": [], |
| "last": "Haruno", |
| "suffix": "" |
| }, |
| { |
| "first": "Satoru", |
| "middle": [], |
| "last": "Ikehara", |
| "suffix": "" |
| }, |
| { |
| "first": "Takefumi", |
| "middle": [], |
| "last": "Yamazaki", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th International Conference On Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "525--530", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Masahiko Haruno, Satoru Ikehara, and Takefumi Yamazaki. 1996. Learning bilingual collocations by word-level sorting. In Proceedings of the 16th International Conference On Computational Linguistics, pages 525-530, Copenhagen, Denmark. Kuang-hua Chen and Hsin-Hsi Chen. 1994. Extracting noun phrases from large-scale texts: A hybrid approach and its automatic evaluation. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 234-241, Las Cruces, New Mexico.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A statistical method for extracting uninterrupted and interrupted collocations from very large corpora", |
| "authors": [ |
| { |
| "first": "Satoru", |
| "middle": [], |
| "last": "Ikehara", |
| "suffix": "" |
| }, |
| { |
| "first": "Satoshi", |
| "middle": [], |
| "last": "Shirai", |
| "suffix": "" |
| }, |
| { |
| "first": "Hajime", |
| "middle": [], |
| "last": "Uchino", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th International Conference On Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "574--579", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satoru Ikehara, Satoshi Shirai, and Hajime Uchino. 1996. A statistical method for extracting uninterrupted and interrupted collocations from very large corpora. In Proceedings of the 16th International Conference On Computational Linguistics, pages 574-579, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "An algorithm for finding noun phrase correspondences in bilingual corpora", |
| "authors": [ |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Kupiec", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "17--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 17-22, Columbus, Ohio.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Automatic identification of noncompositional phrases", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "317--324", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekang Lin. 1999. Automatic identification of non-compositional phrases. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 317-324, College Park, Maryland.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Automatic discovery of noncompositional compounds in parallel data", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Melamed", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "97--108", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Melamed. 1997. Automatic discovery of non-compositional compounds in parallel data. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing, pages 97-108, Providence, RI, August, 1st-2nd.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A new method of n-gram statistics for large number of n and automatic extraction of words and phrases from large text data of japanese", |
| "authors": [ |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Nagao", |
| "suffix": "" |
| }, |
| { |
| "first": "Shinsuke", |
| "middle": [], |
| "last": "Mori", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 16th International Conference On Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "611--615", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Makoto Nagao and Shinsuke Mori. 1994. A new method of n-gram statistics for large number of n and automatic extraction of words and phrases from large text data of Japanese. In Proceedings of the 16th International Conference On Computational Linguistics, volume 1, pages 611-615, Copenhagen, Denmark.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Improving statistical natural language translation with categories and rules", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [ |
| "Josef" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "985--989", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Franz Josef Och and Hans Weber. 1998. Improving statistical natural language translation with categories and rules. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 985-989, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Identification of salient token sequences", |
| "authors": [ |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Russell", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graham Russell. 1998. Identification of salient token sequences. Internal report, RALI, University of Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Retrieving collocations by cooccurrences and word order constraints", |
| "authors": [ |
| { |
| "first": "Sayori", |
| "middle": [], |
| "last": "Shimohata", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshiyuki", |
| "middle": [], |
| "last": "Sugio", |
| "suffix": "" |
| }, |
| { |
| "first": "Junji", |
| "middle": [], |
| "last": "Nagata", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "476--481", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sayori Shimohata, Toshiyuki Sugio, and Junji Nagata. 1997. Retrieving collocations by co-occurrences and word order constraints. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, pages 476-481, Madrid, Spain.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A corpus-based approach to automatic compound extraction", |
| "authors": [ |
| { |
| "first": "Keh-Yih", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wen", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jing-Shin", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "242--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Keh-Yih Su, Ming-Wen Wu, and Jing-Shin Chang. 1994. A corpus-based approach to automatic compound extraction. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 242-247, Las Cruces, New Mexico.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Modeling with structures in statistical machine translation", |
| "authors": [ |
| { |
| "first": "Ye-Yi", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "1357--1363", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ye-Yi Wang and Alex Waibel. 1998. Modeling with structures in statistical machine translation. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 1357-1363, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Machine translation with a stochastic grammatical channel", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongsing", |
| "middle": [], |
| "last": "Wong", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1408--1414", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu and Hongsing Wong. 1998. Machine translation with a stochastic grammatical channel. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 1408-1414, Montreal, Canada.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora", |
| "authors": [ |
| { |
| "first": "Dekai", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the International Joint Conference on Artificial Intelligence", |
| "volume": "2", |
| "issue": "", |
| "pages": "1328--1335", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dekai Wu. 1995. Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora. In Proceedings of the International Joint Conference on Artificial Intelligence, volume 2, pages 1328-1335, Montreal, Canada.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF2": { |
| "text": "Example of an interaction in TRANSTYPE with the source text in the top half of the screen. The target text is typed in the bottom half with suggestions given by the menu at the insertion point.", |
| "uris": null, |
| "type_str": "figure", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "", |
| "num": null, |
| "content": "<table><tr><td/><td/><td/><td colspan=\"3\">This bill is examined in the house of commons</td></tr><tr><td/><td/><td colspan=\"3\">word-completion task</td><td>unit-completion task</td></tr><tr><td/><td colspan=\"3\">pref. completions</td><td colspan=\"2\">pref. completions</td></tr><tr><td>ce</td><td>ce+</td><td colspan=\"2\">/loi \u2022 C/'</td><td>c-l-</td><td>/loi \u2022 c</td></tr><tr><td>projet</td><td>p+</td><td>/est\u2022</td><td>p/rojet</td><td/></tr><tr><td>de</td><td>d+</td><td colspan=\"2\">/tr\u00e8s \u2022 d/e</td><td/></tr><tr><td>loi</td><td>l+</td><td colspan=\"2\">/tr\u00e8s \u2022 l/oi</td><td/></tr><tr><td>est</td><td>e+</td><td colspan=\"2\">/de \u2022 e/st</td><td/></tr><tr><td>examin\u00e9</td><td>e+</td><td colspan=\"2\">/en \u2022 e/xamin\u00e9</td><td/></tr><tr><td/><td>\u00e0+</td><td colspan=\"2\">/par \u2022 \u00e0/ la</td><td/></tr><tr><td>chambre</td><td>+</td><td colspan=\"2\">/chambre</td><td/></tr><tr><td>des</td><td>de+</td><td colspan=\"2\">/communes \u2022 d/e</td><td>\u2022 de/s</td></tr><tr><td>communes</td><td>+</td><td colspan=\"2\">/communes</td><td/></tr></table>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF1": { |
| "text": "", |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "text": "shows the 10 most likely tokens and units in the active vocabulary for an example source sentence. that \u2022 is \u2022 what \u2022 the \u2022 prime \u2022 minister \u2022 said \u2022 and \u2022 i \u2022 have \u2022 outlined \u2022 what \u2022 has \u2022 happened \u2022 since \u2022 then \u2022 . c' \u2022 est \u2022 ce \u2022 que \u2022 le \u2022 premier \u2022 ministre \u2022 a \u2022 dit \u2022 , \u2022 et \u2022 j' \u2022", |
| "num": null, |
| "content": "<table><tr><td/><td colspan=\"3\">ai \u2022 r\u00e9sum\u00e9 \u2022 ce \u2022 qui \u2022 s' \u2022 est \u2022</td></tr><tr><td/><td>produit \u2022 depuis \u2022 .</td><td/><td/></tr><tr><td colspan=\"4\">g(s) that is what \u2022 the prime minister said \u2022 , and i</td></tr><tr><td/><td colspan=\"3\">\u2022 have \u2022 outlined \u2022 what has happened \u2022 since</td></tr><tr><td/><td>then \u2022 .</td><td/><td/></tr><tr><td>As</td><td>\u2022 \u2022 \u2022 est \u2022 ce \u2022 ministre</td><td>\u2022 que \u2022</td><td>et \u2022 a \u2022</td></tr><tr><td>A~</td><td/><td/><td/></tr></table>", |
| "html": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |