| { |
| "paper_id": "E03-1007", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T10:25:04.668840Z" |
| }, |
| "title": "Using POS Information for Statistical Machine Translation into Morphologically Rich Languages", |
| "authors": [ |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Ueffing", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "RWTH Aachen -University of Technology", |
| "location": {} |
| }, |
| "email": "tueffing@cs.rwth-aachen.de" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "RWTH Aachen -University of Technology", |
| "location": {} |
| }, |
| "email": "neyl@cs.rwth-aachen.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "When translating from languages with hardly any inflectional morphology like English into morphologically rich languages, the English word forms often do not contain enough information for producing the correct fullform in the target language. We investigate methods for improving the quality of such translations by making use of part-ofspeech information and maximum entropy modeling. Results for translations from English into Spanish and Catalan are presented on the LC-STAR corpus which consists of spontaneously spoken dialogues in the domain of appointment scheduling and travel planning.", |
| "pdf_parse": { |
| "paper_id": "E03-1007", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "When translating from languages with hardly any inflectional morphology like English into morphologically rich languages, the English word forms often do not contain enough information for producing the correct fullform in the target language. We investigate methods for improving the quality of such translations by making use of part-ofspeech information and maximum entropy modeling. Results for translations from English into Spanish and Catalan are presented on the LC-STAR corpus which consists of spontaneously spoken dialogues in the domain of appointment scheduling and travel planning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In this paper, we address the question of how partof-speech (POS) information can help improving the quality of Statistical Machine Translation (SMT). One of the main problems when translating from a language with hardly any inflectional morphology (which is English in our experiments) into one with richer morphology (here: Spanish and Catalan) is the production of the correct inflected form in the target language. We introduce transformations to the English string that are based on the part-of-speech information and show how this knowledge source can help SMT. Systematic evaluations will show that the quality of the gen-erated translations is improved. The transformations we apply are the following:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Treatment of verbs In Catalan and Spanish, the pronoun before a verb is often omitted and instead, the person is expressed via the ending of the verb. The same holds for future tense and for the modes expressed through 'would' and 'should' in English. Since this makes it hard to generate the correct translation of a given English verb, we propose a method resulting in English word forms containing sufficient information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In English, interrogative phrases have a word order that is different from declarative sentences: Either an auxiliary 'do' is inserted or the order of verb and pronoun is inverted. Since this is different in Spanish and Catalan, we modify the word order in English to make it more similar to the Spanish/Catalan one and to help the verb treatment mentioned above.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question inversion", |
| "sec_num": null |
| }, |
| { |
| "text": "The paper is organized as follows: Related work is treated in Section 2. In Section 3, we shortly review the statistical approach to machine translation. Then, we introduce the transformations that we apply to the less inflected language of the two under consideration (namely English) in Section 4. After describing the maximum entropy approach and the training procedure we use for the statistical lexicon in Section 5, we present results on the trilingual LC-STAR corpus in Section 6. Then, we conclude and present ideas about future work in Section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question inversion", |
| "sec_num": null |
| }, |
| { |
| "text": "Publications dealing with the integration of linguistic information into the process of statistical machine translation are rather few although this had already been suggested in (Brown et al., 1992) . (NieBen and Ney, 2001b) introduce hierarchical lexicon models including baseform and POS information for translation from German into English. Information contained in the German entries that are not relevant for the generation of the English translation are omitted. Unlike this, we investigate methods for enriching English with knowledge to help selecting the correct fullform in a morphologically richer language. (Niefien and Ney, 2001a) propose reordering operations for the language pair German-English that help SMT by harmonizing word order between source and target. The question inversion we apply was inspired by this; nevertheless, we do not perform a full morpho-syntactic analysis, but make use only of POS information which can be obtained from freely available tools. (Garcia-Varea et al., 2001 ) apply a maximum entropy approach for training the statistical lexicon, but do not take any linguistic information into account. The use of POS information for improving statistical alignment quality is described in (Toutanova et al., 2002) , but no translation results are presented.", |
| "cite_spans": [ |
| { |
| "start": 179, |
| "end": 199, |
| "text": "(Brown et al., 1992)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 202, |
| "end": 225, |
| "text": "(NieBen and Ney, 2001b)", |
| "ref_id": null |
| }, |
| { |
| "start": 620, |
| "end": 644, |
| "text": "(Niefien and Ney, 2001a)", |
| "ref_id": null |
| }, |
| { |
| "start": 987, |
| "end": 1013, |
| "text": "(Garcia-Varea et al., 2001", |
| "ref_id": null |
| }, |
| { |
| "start": 1231, |
| "end": 1255, |
| "text": "(Toutanova et al., 2002)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The goal of machine translation is the translation of an input string Si,. . . , s j in the source language into a target language string ti tI. We choose the string that has maximal probability given the source string, Pr(tils1). Applying Bayes' decision rule yields the following criterion:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "arg max Pr(t i s i ) tf = arg max{Pr(t1) \u2022 Pr(s1 tf 4)1 (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Through this decomposition of the probability, we obtain two knowledge sources: the translation and the language model. Those two can be modelled independently of each other. The correspondence between the words in the source and the target string is described by alignments that assign target word positions to each source word position. The probability of a certain target language word to occur in the target string is assumed to depend basically only on the source words aligned to it. The search is denoted by the arg max operation in Eq. 1, i.e. it explores the space of all possible target language strings and all possible alignments between the source and the target language string to find the one with maximal probability. The input string can be preprocessed before being passed to the search algorithm. If necessary, the inverse of these transformations will be applied to the generated output string. In the work presented here, we restrict ourselves to transforming only one language of the two: the source, which has the less inflected morphology. For descriptions of SMT systems see for example (Germann et al., 2001; Och et al., 1999; Tillmann and Ney, 2002; Vogel et al., 2000; Wang and Waibel, 1997) .", |
| "cite_spans": [ |
| { |
| "start": 1112, |
| "end": 1134, |
| "text": "(Germann et al., 2001;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1135, |
| "end": 1152, |
| "text": "Och et al., 1999;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1153, |
| "end": 1176, |
| "text": "Tillmann and Ney, 2002;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1177, |
| "end": 1196, |
| "text": "Vogel et al., 2000;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 1197, |
| "end": 1219, |
| "text": "Wang and Waibel, 1997)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Machine Translation", |
| "sec_num": "3" |
| }, |
| { |
| "text": "When translating from English into languages with a highly inflected morphology, the production of the correct fullform often causes problems. Our experience on several corpora shows that the error rate of a translation from English into morphologically richer languages decreases by 10% relative if we aim at producing only the correct baseform instead of the fully inflected word. The transfer of the meaning expressed in the baseform is easier than deciding on the correct inflected form.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformations in the Less Inflected Language", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Especially the translation of verbs is difficult since there are many different inflections in Spanish and Catalan whereas there are only few in English. Moreover, the pronouns and modals are often omitted in Spanish and Catalan and this information is expressed through the suffix. This makes it very hard for word-based systems to generate the correct inflection from the English verb which does not contain sufficient information. Thus, several English words will have to be aligned to the Spanish or Catalan verbs. This process is rela-tively difficult for the algorithm and causes noise in the statistical lexicon if English pronouns are regarded as translations of Spanish or Catalan verbs. In order to enrich the English verb with the needed information, we combine pronouns and/or modals with following verbs and treat those combinations as 'new' fullform words in English. Thus we can obtain the information needed to select the correct verb form in the target language from one single English word. The identification of English pronouns, modals and verbs was done by POS tagging applied to the English part of the corpus. We decided to transform the source language instead of the target language, because in this case we need only the POS tags of the source language as additional knowledge source and nothing else. Another possible approach would have been to split the suffix in the target language (e.g. 'esta' into 'estar P3S'). This would require postprocessing tools that are able to generate the correct verb form from the baseform and the person and tense information. Table 1 gives examples of words that have been spliced to form new entries of the English lexicon. For example, we splice the phrase 'you think' to form the single entry 'you_think' which contains sufficient information for producing the correct Spanish verb form 'crees' or the Catalan 'creus' . 
Similarly, the modal auxiliaries can be added as well, like in the entry 'you_will_have' which is much better suited for being translated into 'tendras' (Spanish) or 'tindras' (Catalan) than the verb 'have' alone. Moreover, in a single word based lexicon, three single entries would have to be added for the translation of 'you will have' into 'tendras': (you,tendras), (will,tendras) and (have,tendras), which spreads the translation probability over far too many entries and makes the probability distribution unfocused. As the last example in Table 1 shows, 'you can go' is spliced only into two words instead of one in order to better match the Spanish/Catalan form.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 1589, |
| "end": 1596, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 2432, |
| "end": 2439, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Treatment of Verbs", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In English interrogative phrases, either an auxiliary 'do' is inserted or the order of verb and pronoun is inverted. The auxiliary 'do' does not carry information that is relevant when translating into Spanish or Catalan. Thus, we can remove it from the sentence without harming the translation process (as described in (NieBen and Ney, 2001a) for the language pair German-English). However, we do not remove a question supporting 'do' in past tense, i. e. 'did' is kept in the phrase, because this is the only word containing the tense information. Afterwards, we can merge the pronoun and verb as depicted in Table 2 : 'did you go' is transformed into 'you_did go'. We do not splice 'you_did' and 'go', because the English simple past is translated into present perfect in Catalan; and it is very likely to be translated into present perfect in Spanish, especially in colloquial language as it is present in this task. The form 'you_did go' is well suited to be translated into the Spanish 'has ido' or the Catalan 'has anat'. If there is no question supporting 'do' and the order of pronoun and verb is inverted -see the example 'how are you?' in Table 3 -we first swap the two words and then perform the splicing step. This is done in order to avoid having two lexical entries with the same translation: for example, ' you_are' and the interrogative 'are_you' both have the same translation in Spanish or Catalan, respectively. Table 3 presents examples of transformed English questions. Comparing them to the Spanish and Catalan reference, we see that it is easier to find a word-to-word mapping for the modified English sentences.", |
| "cite_spans": [ |
| { |
| "start": 320, |
| "end": 343, |
| "text": "(NieBen and Ney, 2001a)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 611, |
| "end": 618, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 1150, |
| "end": 1157, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1432, |
| "end": 1439, |
| "text": "Table 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Question Treatment", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "If we merge the pronouns/modals and verbs as described above, it might happen that the verb itself (or one of its inflections) has never been seen in training except from its appearance in the new entries in the lexicon which result from the splic- ing operation. This makes it impossible to translate the verb itself, because it is then unknown to the system. The same holds for combinations of pronouns and verbs that are unseen in training, e. g. the training corpus contains the bigram 'I went', but not the one 'she went'. In order to overcome this problem, we train our lexicon model using maximum entropy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Training", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The maximum entropy approach (Berger et al., 1996) presents a powerful framework for the combination of several knowledge sources. This principle recommends to choose the distribution which preserves as much uncertainty as possible in terms of maximizing the entropy. The distribution is required to satisfy constraints, which represent facts known from the data. These constraints are expressed on the basis of feature functions hu,(s,t),", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 50, |
| "text": "(Berger et al., 1996)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where (s, t) is a pair of source and target word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The lexicon probability of a source word given the target word has the following functional form 1 t) Z(t) exP Y.' L_, Am h\",(s,t) with the normalization factor", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 130, |
| "text": "Am h\",(s,t)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Z(t) = E exp [E X\",h,\",(s' ,t)]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where A = {Am } is the set of model parameters with one weight A, for each feature function hm . ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "hs, ,v (s ,t) = S(s. s') \u2022 V erb(t, v) where 1, if t contains the verb v V erb(t, v) = 0, otherwise", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "This enables us to translate the verb alone even if it occurs in the training corpus only as a spliced entry.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For an introduction to maximum entropy modeling and training procedures, the reader is referred to the corresponding literature, for instance (Berger et al., 1996) or (Ratnaparkhi, 1997) .", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 163, |
| "text": "(Berger et al., 1996)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 167, |
| "end": 186, |
| "text": "(Ratnaparkhi, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Maximum Entropy Approach", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We performed the following training steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 transform the English (= source language) part of the corpus as described in Sections 4.1 and 4.2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 train the statistical translation system using this modified source language corpus 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "\u2022 with the resulting alignment, train the lexicon model using maximum entropy with the features described in Section 5.1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "This training can be performed using converging iterative training procedures like described by (Darroch and Ratcliff, 1972) or (Della Pietra et al., 1997) 2 . The basic training procedures for the translation system and the language model need not be changed.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 124, |
| "text": "(Darroch and Ratcliff, 1972)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For translation, we can use an SMT system where the search algorithm does not have to be modified. Before the translation process, we transform the input in the same way as the training corpus before training the alignment (see Section 5.2). We simply have to exclude those words from splicing where the splicing operation yields an unknown word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation process", |
| "sec_num": "5.3" |
| }, |
| { |
"text": "1 This training was done using the GIZA++ toolkit which can be downloaded from http://www-i6.informatik.rwth-aachen.de/~och/software/GIZA++.html 2 We made use of the toolkit YASMET which can be downloaded from http://www-i6.informatik.rwth-aachen.de/~och/software/YASMET.html",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation process", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "We performed experiments on the trilingual corpus which is successively built within the LC-STAR project. It comprises the languages English, Spanish and Catalan, whereof we used English as source and Spanish and Catalan as target languages. At the time of our experiments, we had about 13k sentences per language available; the statistics are given in Table 4 . The corpus consists of transcriptions of spontaneously spoken dialogues. Thus, the sentences often lack correct syntactic structure. The domain of this task is appointment scheduling and travel arrangements. The POS information for the English part of the corpus was generated using the Brill tagger3 . As Table 4 shows, the splicing operation increases the cardinality of the English vocabulary as well as the number of singletons significantly. Nevertheless, they are still below those numbers for Spanish and Catalan.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 353, |
| "end": 360, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 669, |
| "end": 676, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Corpora", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "The quality of the output of our machine translation system is measured automatically by comparing the generated translation to a given reference translation. The two following criteria are used:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2022 WER (word error rate):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The word error rate is based on the Levenshtein distance. It is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated string into the reference string. Since some sentences in the develop and test set occur several times with different reference translations (which holds especially for short sentences like 'okay, good-bye'), we calculate the minimal distance to this set of references as proposed in .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "\u2022 BLEU (bilingual evaluation understudy): (Papineni et al., 2002) have proposed a method of automatic machine translation evaluation, which they call \"BLEU\". It is based on the notion of modified n-gram precision, for which all candidate n-gram counts in the translation are collected and clipped against their corresponding maximum reference counts. These clipped candidate counts are summed and normalized by the total number of candidate n-grams. Since BLEU expresses quality, we determine 100-BLEU to transform it into an error measure.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 65, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Although these measures are only approximations, they seem to be sufficient at the present level of performance of machine translation systems.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "We compared the two statistical lexica obtained from the baseline system and from the maximum entropy training on the transformed corpus. For the baseline lexicon, we observed an average of 5.82 Catalan translation candidates per English word and 6.16 Spanish translation candidates. These numbers are significantly reduced in the lexicon which was trained on the transformed corpus using maximum entropy: there, we have an average of 4.20 for Catalan and 4.46 for Spanish. Especially for (nominative) English pronouns (which have many verbs as translation candidates in the baseline lexicon), the number of translation candidates was substantially scaled down by a factor around 4. This shows that our method was successful in producing a more focused lexicon probability distribution. We performed translation experiments with an implementation of the IBM-4 translation model (Brown et al., 1993) . A description of the system can be found in (Tillmann and Ney, 2002) . Table 5 presents an assessment of translation quality for both the language pairs English-Catalan and English-Spanish. We see that there is a significant decrease in error rate for the translation into Catalan. This change is consistent across both error rates, the WER and 100-BLEU. For translations from English into Spanish, the improvement is less substantial. A reason for this might be that the Spanish vocabulary contains more entries and the ratio between fullforms and baseforms is higher: 1.57 for Spanish versus 1.53 for Catalan4 . This makes it more difficult for the system to choose the correct inflection when generating a Spanish sentence. We assume that the extension of our approach to other word classes than verbs will yield a quality gain for translations into Spanish. Table 6 shows several sentences from the English LC-STAR develop and test corpus that were trans- lated into Catalan. 
We see that it is easier for the system to generate the correct verb inflection in Catalan if the verb is enriched with the pronoun. In the baseline system, it happens that words are inserted -like 'far' as translation of 'will' in the second example which is incorrect. This can be avoided by the splicing of words.", |
| "cite_spans": [ |
| { |
| "start": 878, |
| "end": 898, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 945, |
| "end": 969, |
| "text": "(Tillmann and Ney, 2002)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 972, |
| "end": 979, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 1763, |
| "end": 1770, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "In the last example, we see that the baseline system generates one word each for the English 'I prefer' and does not find the correct translation, whereas transformations yield an accurate translation of this expression, because the spliced word contains sufficient information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We presented a method for improving quality of statistical machine translation from English into morphologically richer languages like Spanish and Catalan. Using POS tags as additional knowledge source, we enrich the English verbs such that they contain more information relevant for selecting the correct inflected form in the target language. The lexicon model was then trained using the maximum entropy approach, taking the verbs as additional features. Results were given for translation from English into Spanish and Catalan on the LC-STAR corpus which consists of spontaneously spoken dialogues in the domain of appointment scheduling and travel arrangement. Our experiments show that translation quality can be significantly increased through the use of our approach: the word error rate on the Catalan development set for example decreased by 2.5% absolute. We plan to investigate other methods of enriching the English words with information. It will be interesting to see how other word classes, e. g. nouns, can be handled in order to improve quality of translations into languages with a highly inflected morphology.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "I. Garcia-Varea, F.J. Och, H. Ney, and F. Casacuberta. 2001 . Refined lexicon models for statistical machine translation using a maximum entropy approach. In Proc. 39th Annual Meeting of the Assoc. for Computational Linguistics -joint with EACL, pages 204-211, Toulouse, France, July. ", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 59, |
| "text": "Och, H. Ney, and F. Casacuberta. 2001", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The Brill tagger can be downloaded from http://www.research.microsoft.com/users/brill/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The lemmatization of Spanish and Catalan was produced using the analyser from UPC Barcelona: MACO+ and RE-LAX.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was partly supported by the LC-STAR project by the European Community (1ST project ref. no. 2001-32216).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": "8" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A maximum entropy approach to natural language processing", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "L" |
| ], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "1", |
| "pages": "39--72", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A.L. Berger, S.A. Della Pietra, and V.J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-72, March.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Analysis, statistical transfer, and synthesis in machine translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proc. TMI 1992: 4th Int. Conf. on Theoretical and Methodological Issues in MT", |
| "volume": "", |
| "issue": "", |
| "pages": "83--100", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, J.D. Lafferty, and R.L. Mercer. 1992. Analysis, statis- tical transfer, and synthesis in machine translation. In Proc. TMI 1992: 4th Int. Conf. on Theoretical and Methodological Issues in MT, pages 83-100, Montreal, P.O., Canada, June.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The mathematics of statistical machine translation: Parameter estimation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Compu- tational Linguistics, 19(2):263-311 .", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Generalized iterative scaling for log-linear models", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "N" |
| ], |
| "last": "Darroch", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ratcliff", |
| "suffix": "" |
| } |
| ], |
| "year": 1972, |
| "venue": "Annals of Mathematical Statistics", |
| "volume": "43", |
| "issue": "", |
| "pages": "1470--1480", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.N. Darroch and D. Ratcliff. 1972. Generalized itera- tive scaling for log-linear models. Annals of Mathe- matical Statistics, 43:1470-1480.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Inducing features in random fields", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "IEEE Trans. on Pattern Analysis and Machine Inteligence", |
| "volume": "19", |
| "issue": "4", |
| "pages": "380--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S.A. Della Pietra, V.J. Della Pietra, and J. Lafferty. 1997. Inducing features in random fields. IEEE Trans. on Pattern Analysis and Machine Inteligence, 19(4):380-393, July.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Fast decoding and optimal decoding for machine translation", |
| "authors": [ |
| { |
| "first": "U", |
| "middle": [], |
| "last": "Germann", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Jahr", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. 39th Annual Meeting of the Assoc. for Computational Linguisticsjoint with EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "228--235", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "U. Germann, M. Jahr, K. Knight, D. Marcu, and K. Ya- mada. 2001. Fast decoding and optimal decoding for machine translation. In Proc. 39th Annual Meet- ing of the Assoc. for Computational Linguistics - joint with EACL, pages 228-235, Toulouse, France, July.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Morpho-syntactic analysis for reordering in statistical machine translation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Nieben", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. MT Summit VIII", |
| "volume": "", |
| "issue": "", |
| "pages": "247--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. NieBen and H. Ney. 2001a. Morpho-syntactic anal- ysis for reordering in statistical machine translation. In Proc. MT Summit VIII, pages 247-252, Santiago de Compostela, Galicia, Spain, September.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Toward hierarchical models for statistical machine translation of inflected languages", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Niel3en", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "39th Annual Meeting of the Assoc. for Computational Linguistics -joint with EACL 2001: Proc. Workshop on Data-Driven Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "47--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Niel3en and H. Ney. 2001b. Toward hierarchi- cal models for statistical machine translation of in- flected languages. In 39th Annual Meeting of the Assoc. for Computational Linguistics -joint with EACL 2001: Proc. Workshop on Data-Driven Ma- chine Translation, pages 47-54, Toulouse, France, July.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An evaluation tool for machine translation: Fast evaluation for mt research", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Nieben", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Leusch", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of the Second Int. Conf on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "39--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. NieBen, F.J. Och, G. Leusch, and H. Ney. 2000. An evaluation tool for machine translation: Fast evalu- ation for mt research. In Proc. of the Second Int. Conf on Language Resources and Evaluation, pages 39-45, Athens, Greece, May.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Improved alignment models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. Joint SIGDAT Conf on Empirical Methods in Natural Language Processing and Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "20--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F.J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine transla- tion. In Proc. Joint SIGDAT Conf on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20-28, University of Mary- land, College Park, MD, June.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "J" |
| ], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. 40th Annual Meeting of the Assoc. for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Papineni, S. Roukos, T. Ward, and W.J. Zhu. 2002. BLEU: a method for automatic evaluation of ma- chine translation. In Proc. 40th Annual Meeting of the Assoc. for Computational Linguistics, pages 311-318, Philadelphia, PA, July.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A simple introduction to maximum entropy models for natural language processing", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Ratnaparkhi. 1997. A simple introduction to max- imum entropy models for natural language process- ing. Technical Report 97-08, Institute for Research in Cognitive Science, University of Pennsylvania, Philadelphia, PA, May.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Word re-ordering and DP beam search for statistical machine translation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Tillmann and H. Ney. 2002. Word re-ordering and DP beam search for statistical machine translation. to appear in Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Extensions to HMM-based statistical word alignment models", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "T" |
| ], |
| "last": "Ilhan", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. Conf on Empirical Methods for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "87--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Toutanova, H.T. Ilhan, and C.D. Manning. 2002. Extensions to HMM-based statistical word align- ment models. In Proc. Conf on Empirical Meth- ods for Natural Language Processing, pages 87-94, Philadelphia, PA, July.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Statistical methods for machine translation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Nieben", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sawaf", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Verbmobil: Foundations of Speech-to-Speech Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "377--393", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Vogel, F.J. Och, C. Tillmann, S. NieBen, H. Sawaf, and H. Ney. 2000. Statistical methods for ma- chine translation. In W. Wahlster, editor, Verbmobil: Foundations of Speech-to-Speech Translation, pages 377-393. Springer Verlag: Berlin, Heidelberg, New York.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Decoding algorithm in statistical translation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proc. 35th Annual Meeting of the Assoc. for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "366--372", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y.Y. Wang and A. Waibel. 1997. Decoding algo- rithm in statistical translation. In Proc. 35th Annual Meeting of the Assoc. for Computational Linguistics, pages 366-372, Madrid, Spain, July.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "content": "<table><tr><td>vocabulary</td><td/><td/></tr><tr><td>original</td><td>POS tags</td><td>spliced words</td></tr><tr><td>you go</td><td>PRP VBP</td><td>you_go</td></tr><tr><td>you went</td><td>PRP VBD</td><td>you_went</td></tr><tr><td>you think</td><td>PRP VBP</td><td>you_think</td></tr><tr><td colspan=\"3\">you will have PRP MD VB you_will_have</td></tr><tr><td>you can go</td><td colspan=\"2\">PRP MD VB you_can go</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Examples of spliced words in the English" |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>original</td><td>POS tags</td><td>spliced words</td></tr><tr><td>do you go</td><td>VBP PRP VB</td><td>you_go</td></tr><tr><td>did you go</td><td>VBD PRP VB</td><td>you_did go</td></tr><tr><td colspan=\"3\">have you gone VBP PRP VBN you_have gone</td></tr><tr><td>will you go</td><td>MD PRP VB</td><td>you_will_go</td></tr><tr><td>can you go</td><td>PRP MD VB</td><td>you_can go</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Examples of spliced words in the English vocabulary after question inversion" |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td colspan=\"2\">: Examples of transformed English sentences</td></tr><tr><td>Original</td><td>how are you ?</td></tr><tr><td colspan=\"2\">Question Inversion how you are ?</td></tr><tr><td>Verb Treatment</td><td>how you_are ?</td></tr><tr><td>Catalan Sentence</td><td>coin esta ?</td></tr><tr><td>Spanish Sentence</td><td>i, c6mo estas ?</td></tr><tr><td>Original</td><td>or do you think we want to stay [... 1 ?</td></tr><tr><td colspan=\"2\">Question Inversion or you think we want to stay [... 1 ?</td></tr><tr><td>Verb Treatment</td><td>or you_think we_want to stay [... ] ?</td></tr><tr><td>Catalan Sentence</td><td>o creu que voldrem quedar-nos [... ] ?</td></tr><tr><td>Spanish Sentence</td><td>i, o cree que querremos quedamos I'...] ?</td></tr><tr><td>Original</td><td>did you say the eighteenth ?</td></tr><tr><td colspan=\"2\">Question Inversion you did say the eighteenth ?</td></tr><tr><td>Verb Treatment</td><td>you_did say the eighteenth ?</td></tr><tr><td>Catalan Sentence</td><td>has dit el divuit ?</td></tr><tr><td>Spanish Sentence</td><td>i, has dicho el dieciocho ?</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "" |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td/><td/><td colspan=\"2\">English</td><td>Spanish</td><td>Catalan</td></tr><tr><td/><td/><td>Original</td><td>Transformed</td><td/></tr><tr><td>Training</td><td>Sentences</td><td/><td colspan=\"2\">13 352</td></tr><tr><td/><td>Words</td><td>123 454</td><td>114 099</td><td>118 534</td><td>118 137</td></tr><tr><td/><td>Words\"</td><td>101 738</td><td>92 383</td><td>96 997</td><td>96 503</td></tr><tr><td colspan=\"2\">Vocabulary Size</td><td>2 154</td><td>2 776</td><td>3 933</td><td>3 572</td></tr><tr><td/><td>Singletons</td><td colspan=\"2\">790 (37%) 1 165 (42%)</td><td>1 844 (47%)</td><td>1 658 (47%)</td></tr><tr><td>Develop</td><td>Sentences</td><td/><td/><td>272</td></tr><tr><td/><td>Words</td><td>2 267</td><td>2 096</td><td>2217</td><td>2211</td></tr><tr><td/><td>Unknown Words</td><td>21</td><td>22</td><td>34</td><td>34</td></tr><tr><td>Test</td><td>Sentences</td><td/><td/><td>262</td></tr><tr><td/><td>Words</td><td>2 626</td><td>2 460</td><td>2 451</td><td>2 470</td></tr><tr><td/><td>Unknown Words</td><td>17</td><td>18</td><td>30</td><td>35</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Statistics of the training, develop and test set of the English-Spanish-Catalan LC-STAR corpus (*number of words without punctuation marks)" |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td/><td>Develop</td><td/><td>Test</td><td/></tr><tr><td/><td colspan=\"4\">WER 100-BLEU WER 100-BLEU</td></tr><tr><td>Catalan Baseline</td><td>37.6</td><td>58.2</td><td>33.0</td><td>49.2</td></tr><tr><td>+ Transformations</td><td>35.0</td><td>55.1</td><td>30.8</td><td>46.6</td></tr><tr><td>Spanish Baseline</td><td>35.4</td><td>57.6</td><td>32.1</td><td>48.9</td></tr><tr><td>+ Transformations</td><td>35.0</td><td>55.8</td><td>31.5</td><td>47.6</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Translation error rates [%] for English-Catalan and for English-Spanish" |
| }, |
| "TABREF6": { |
| "content": "<table><tr><td>Source</td><td>Lbelieve, the flight is every day?</td></tr><tr><td>Reference</td><td>crec, que el vol Cs cada dia?</td></tr><tr><td>Baseline</td><td>suposo, el vol es cada dia?</td></tr><tr><td colspan=\"2\">Verb Treatment crec, que el vol es cada dia?</td></tr><tr><td>Source</td><td>Lprefer single.</td></tr><tr><td>Reference</td><td>prefereixo individual.</td></tr><tr><td>Baseline</td><td>jo preferiria una individual.</td></tr><tr><td colspan=\"2\">Verb Treatment prefereixo una individual.</td></tr></table>", |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "text": "Examples of English-Catalan translations with and without transformation Source we_exchange them and, that would be good. Reference les canviem i, aix6 estaria be. Baseline ens canviem i, aixO estaria be. Verb Treatment les canviem i, aixO estaria be. Source okay, and Lwill, speak to you soon then. Reference d' acord, i jo, parlare amb tu aviat doncs. Baseline d' acord, i jo far, parlare amb tu aviat doncs. Verb Treatment d' acord, i jo, parlare amb tu aviat doncs." |
| } |
| } |
| } |
| } |