{
"paper_id": "2012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:40:21.400382Z"
},
"title": "Deep evaluation of hybrid architectures: Use of different metrics in MERT weight optimization",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Arantza",
"middle": [],
"last": "D\u00edaz De Ilarraza",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Sarasola",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The process of developing hybrid MT systems is usually guided by an evaluation method used to compare different combinations of basic subsystems. This work presents a deep evaluation experiment of a hybrid architecture, which combines rule-based and statistical translation approaches. Differences between the results obtained from automatic and human evaluations corroborate the inappropriateness of pure lexical automatic evaluation metrics to compare the outputs of systems that use very different translation approaches. An examination of sentences with controversial results suggested that linguistic well-formedness should be considered in the evaluation of output translations. Following this idea, we have experimented with a new simple automatic evaluation metric, which combines lexical and PoS information. This measure showed higher agreement with human assessments than BLEU in a previous study (Labaka et al., 2011). In this paper we have extended its usage throughout the system development cycle, focusing on its ability to improve parameter optimization. Results are not totally conclusive. Manual evaluation reflects a slight improvement, compared to BLEU, when using the proposed measure in system optimization. However, the improvement is too small to draw any clear conclusion. We believe that we should first focus on integrating more linguistically representative features in the development of the hybrid system, and then go deeper into the development of automatic evaluation metrics.",
"pdf_parse": {
"paper_id": "2012",
"_pdf_hash": "",
"abstract": [
{
"text": "The process of developing hybrid MT systems is usually guided by an evaluation method used to compare different combinations of basic subsystems. This work presents a deep evaluation experiment of a hybrid architecture, which combines rule-based and statistical translation approaches. Differences between the results obtained from automatic and human evaluations corroborate the inappropriateness of pure lexical automatic evaluation metrics to compare the outputs of systems that use very different translation approaches. An examination of sentences with controversial results suggested that linguistic well-formedness should be considered in the evaluation of output translations. Following this idea, we have experimented with a new simple automatic evaluation metric, which combines lexical and PoS information. This measure showed higher agreement with human assessments than BLEU in a previous study (Labaka et al., 2011). In this paper we have extended its usage throughout the system development cycle, focusing on its ability to improve parameter optimization. Results are not totally conclusive. Manual evaluation reflects a slight improvement, compared to BLEU, when using the proposed measure in system optimization. However, the improvement is too small to draw any clear conclusion. We believe that we should first focus on integrating more linguistically representative features in the development of the hybrid system, and then go deeper into the development of automatic evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The process of developing hybrid MT systems is guided by the evaluation method used to compare the outputs of different combinations of basic subsystems. Direct human evaluation is more accurate, but it is unfortunately extremely expensive, so automatic metrics have to be used during prototype development. However, the method should evaluate the outputs of different systems with the same criteria, and these criteria should be as close as possible to human judgment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "6.1"
},
{
"text": "It is well known that rule-based and phrase-based statistical machine translation paradigms (RBMT and SMT, respectively) have complementary strengths and weaknesses. First, RBMT systems tend to produce syntactically better translations and deal better with long distance dependencies, agreement and constituent reordering, since they perform the analysis, transfer and generation steps based on syntactic principles. On the downside, they usually have problems with lexical selection due to a poor handling of word ambiguity. Also, when the input sentence has an unexpected syntactic structure, the parser may fail and the quality of the translation decreases dramatically. On the other hand, phrase-based SMT models usually do a better job with lexical selection and general fluency, since they model lexical choice with distributional criteria and explicit probabilistic language models. However, phrase-based SMT systems usually generate structurally worse translations, since they model translation more locally and have problems with long distance reordering. They also tend to produce very obvious errors, which are annoying for regular users, e.g., lack of gender and number agreement, bad punctuation, etc. Moreover, SMT systems can experience a severe degradation of performance when applied to corpora different from those used for training (out-of-domain evaluation).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "6.1"
},
{
"text": "Because of these complementary virtues and drawbacks, several works have been devoted to building hybrid systems with components of both approaches. A classification and a summary of hybrid architectures can be seen in Thurmair (2009) . The case we present here is within the philosophy of those systems where the RBMT system leads the translation and the SMT system provides complementary information. Following this line, Habash et al. (2009) enrich the dictionary of an RBMT system with phrases from an SMT system. Federmann et al. (2010) use the translations obtained with an RBMT system and substitute selected noun phrases with their SMT counterparts. Globally, their results improve on the individual systems when the hybrid system is applied to translate into languages with a richer morphology than the source.",
"cite_spans": [
{
"start": 215,
"end": 230,
"text": "Thurmair (2009)",
"ref_id": "BIBREF18"
},
{
"start": 420,
"end": 440,
"text": "Habash et al. (2009)",
"ref_id": "BIBREF6"
},
{
"start": 513,
"end": 536,
"text": "Federmann et al. (2010)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "6.1"
},
{
"text": "Regarding the evaluation of the final system and its components, still nowadays the BLEU metric (Papineni et al., 2002) is the most used metric in MT, but several doubts have arisen around it (Melamed et al., 2003; Callison-Burch et al., 2006; Koehn and Monz, 2006). In addition to the fact that it is extremely difficult to interpret what is being expressed in BLEU (Melamed et al., 2003), improving its value neither guarantees an improvement in translation quality (Callison-Burch et al., 2006) nor offers as high a correlation with human judgment as was believed (Koehn and Monz, 2006).",
"cite_spans": [
{
"start": 97,
"end": 120,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 193,
"end": 214,
"text": "(Melamed et al., 2003",
"ref_id": "BIBREF13"
},
{
"start": 215,
"end": 244,
"text": ", Callison-Burch et al., 2006",
"ref_id": "BIBREF0"
},
{
"start": 245,
"end": 267,
"text": ", Koehn and Monz, 2006",
"ref_id": "BIBREF8"
},
{
"start": 370,
"end": 392,
"text": "(Melamed et al., 2003)",
"ref_id": "BIBREF13"
},
{
"start": 476,
"end": 505,
"text": "(Callison-Burch et al., 2006)",
"ref_id": "BIBREF0"
},
{
"start": 575,
"end": 597,
"text": "(Koehn and Monz, 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "6.1"
},
{
"text": "In the last few years, several new evaluation metrics have been suggested that consider higher levels of linguistic information (Liu and Gildea, 2005; Popovi\u0107 and Ney, 2007; Chan and Ng, 2008), and different methods of metric combination have been tested. Due to its simplicity, we decided to use the idea presented by Gim\u00e9nez and M\u00e0rquez (2008), where a set of simple metrics is combined by means of the arithmetic mean. This work presents a deep evaluation experiment on a hybrid architecture that tries to get the best of both worlds, rule-based and statistical. The results obtained corroborate the known doubts about BLEU and suggest that the further development of the hybrid system should be guided by a linguistically more informed metric, one able to capture the syntactic correctness of the rule-based translation, which is preferred by human assessors.",
"cite_spans": [
{
"start": 127,
"end": 148,
"text": "(Liu and Gildea, 2005",
"ref_id": "BIBREF11"
},
{
"start": 149,
"end": 172,
"text": ", Popovi\u0107 and Ney, 2007",
"ref_id": "BIBREF16"
},
{
"start": 173,
"end": 192,
"text": ", Chan and Ng, 2008",
"ref_id": "BIBREF1"
},
{
"start": 320,
"end": 346,
"text": "Gim\u00e9nez and M\u00e0rquez (2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "6.1"
},
{
"text": "In the next section of this paper we describe the hybrid system. Section 6.3 presents the evaluation experiments: the corpora used in them, and the results of the automatic and manual evaluations. Finally, the last section is devoted to conclusions and future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "6.1"
},
{
"text": "'Statistical Matxin Translator', SMatxinT in short, is a hybrid system controlled by the RBMT translator and enriched with a wide variety of SMT translation options (Espa\u00f1a-Bonet et al., 2011).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The hybrid system, SMatxinT",
"sec_num": "6.2"
},
{
"text": "The two individual systems combined in SMatxinT are a rule-based Spanish-Basque system called Matxin (Mayor et al., 2011) and a standard phrase-based statistical MT system based on Moses, which works at the morpheme level, allowing it to deal with the rich morphology of Basque (Labaka, 2010).",
"cite_spans": [
{
"start": 101,
"end": 120,
"text": "(Mayor et al., 2011",
"ref_id": "BIBREF12"
},
{
"start": 274,
"end": 288,
"text": "(Labaka, 2010)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Individual systems",
"sec_num": "6.2.1"
},
{
"text": "Matxin is an open-source RBMT engine, whose main goal is to translate from Spanish into Basque using the traditional transfer model. Matxin consists of three main components: (i) analysis of the source language into a dependency tree structure; (ii) transfer from the source language dependency tree to a target language dependency structure; and (iii) generation of the output translation from the target dependency structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual systems",
"sec_num": "6.2.1"
},
{
"text": "The engine reuses several open tools and is based on a unique XML format for the data flow between the different modules, which eases the interaction among different developers of tools and resources. The result is open-source software that can be downloaded from matxin.sourceforge.net, and it has had an on-line demo 1 available since 2006. For the statistical system, words are split into several morphemes by using a Basque morphological analyzer/lemmatizer, aiming to reduce the sparseness produced by the agglutinative nature of Basque and the small amount of parallel corpora. Adapting the baseline system to work at the morpheme level mainly consists of training the decoder on the segmented text. The SMT system trained on segmented words generates a sequence of morphemes, so, in order to obtain the final Basque text from the segmented output, a word-generation post-process is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Individual systems",
"sec_num": "6.2.1"
},
{
"text": "State-of-the-art tools are used in this case. The GIZA++ toolkit (Och, 2003) is used for the alignments, the SRILM toolkit (Stolcke, 2002) for the language model, and the Moses decoder (Koehn et al., 2007) for decoding. We used a log-linear model with the following feature functions: phrase translation probabilities (in both directions), word-based translation probabilities (lexicon model, in both directions), a phrase length penalty and the target language model. The language model is a simple 3-gram language model with modified Kneser-Ney smoothing. We also used a lexical reordering model ('msd-bidirectional-fe' training option). Parameter optimization was done following the usual practice, i.e., Minimum Error Rate Training (Och, 2003); however, the metric used for optimization is not always BLEU, but depends on the system, as will be seen. Figure 6.1: General architecture of SMatxinT. The RBMT modules which guide the MT process are the grey boxes.",
"cite_spans": [
{
"start": 61,
"end": 72,
"text": "(Och, 2003)",
"ref_id": "BIBREF14"
},
{
"start": 115,
"end": 130,
"text": "(Stolcke, 2002)",
"ref_id": "BIBREF17"
},
{
"start": 176,
"end": 196,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 681,
"end": 692,
"text": "(Och, 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 808,
"end": 816,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Individual systems",
"sec_num": "6.2.1"
},
{
"text": "The initial analysis of the source sentence is done by Matxin. It produces a dependency parse tree, where the boundaries of each syntactic phrase are marked. In order to add hybrid functionality, two new modules are introduced into the RBMT architecture (Figure 6.1): the tree enrichment module, which incorporates additional SMT translations into each phrase of the syntactic tree; and a monotone decoding module, which is responsible for generating the final translation by selecting among RBMT and SMT partial translation candidates from the enriched tree.",
"cite_spans": [],
"ref_spans": [
{
"start": 253,
"end": 261,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hybridisation",
"sec_num": "6.2.2"
},
{
"text": "The tree enrichment module introduces two types of translations for the syntactic constituents given by Matxin: 1) the SMT translation(s) of every phrase, and 2) the SMT translation(s) of the entire subtree containing that phrase. For example, the analysis of the text fragment \"afirm\u00f3 el consejero de interior\" (said the Secretary of the Interior) gives two phrases: the head \"afirm\u00f3\" (said) and its child \"el consejero de interior\" (the Secretary of the Interior). The full rule-based translation is \"Barne Sailburua baieztatu zuen\" and the full SMT translation is \"esan zuen herrizaingo sailburuak\". SMatxinT considers these two phrases for the translation of the full sentence, but also the SMT translations of their constituents (\"esan zuen\" and \"herrizaingo sailburuak\"). However, short phrases may get a wrong SMT translation because of the lack of context, so, to overcome this problem, SMatxinT also uses the translation of a phrase extracted from a longer SMT translation (\"herrizaingo sailburuak\" in the previous example). Thus, in order to translate \"afirm\u00f3 el consejero de interior\", the system produces 5 distinct phrases, a number that can be increased by considering the n-best lists.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybridisation",
"sec_num": "6.2.2"
},
{
"text": "After tree enrichment, the transfer and generation steps of the RBMT system are carried out in the usual way, and a final monotone decoder chooses among the options. A key aspect for the performance of the system is the selection of the features for this decoding. The results we present here are obtained with a set of eleven features. Three of them are standard SMT features (language model, word penalty and phrase penalty). We also include four features to capture the origin of the phrase and the consensus among systems (a counter indicating how many different systems generated the phrase, two binary features indicating whether or not the phrase comes from the SMT/RBMT system, and the number of source words covered by the phrase generated by both individual systems simultaneously). Finally, we use the lexical probabilities in both directions in two forms: an approach similar to IBM-1 probabilities, modified to take unknown alignments into account, and a lexical probability inferred from the RBMT dictionary. We refer the reader to Espa\u00f1a-Bonet et al. (2011) for further details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hybridisation",
"sec_num": "6.2.2"
},
{
"text": "The language pair used in the evaluation is dictated by the rule-based system; in this case, Matxin translates from Spanish to Basque. Basque and Spanish are two languages with very different morphology and syntax.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.3"
},
{
"text": "In previous experiments we evaluated all systems by means of both automatic and manual evaluations (Labaka et al., 2011) . Those results corroborated the already known inadequacy of metrics that measure only lexical matching for comparing systems that use such different translation paradigms. This kind of metric is biased in favor of SMT, as happens in our evaluation, where the statistical system achieved the best results in the in-domain evaluation, even though it generated the worst translations according to the manual assessment.",
"cite_spans": [
{
"start": 98,
"end": 119,
"text": "(Labaka et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.3"
},
{
"text": "To address these limitations of metrics based only on lexical matching, we defined a metric that seeks to check syntactic correctness, computing the same expressions at the PoS level and combining them with lexical BLEU through the arithmetic mean. This metric, which is able to assess syntactic correctness, has shown a higher level of agreement with human assessments at both document and sentence level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.3"
},
{
"text": "But evaluation metrics are not only used for comparing different systems; they are also used to guide the development of the systems. Thus, being aware of the problems BLEU has in identifying many of the good translations generated by the RBMT system, we used linguistically informed metrics not only in the evaluation, but also in the MERT optimization of the linear decoder. So, in addition to the individual systems, we will evaluate three different hybrid systems, depending on the metric used in optimization (BLEU, METEOR and BLEU c , a newly defined metric according to Eq. 6.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6.3"
},
{
"text": "The corpus built to train the SMT system consists of four subsets: (1) six reference books translated manually by the translation service of the University of the Basque Country (EHUBooks); (2) a collection of 1,036 articles published in Spanish and Basque by the Consumer Eroski magazine 2 (Consumer); (3) translation memories mostly using administrative language developed by Elhuyar 3 (ElhuyarTM); and (4) a translation memory including short descriptions of TV programmes (EuskaltelTB). Altogether, they make up a corpus of 8 million words in Spanish and 6 million words in Basque. Table 6.1 shows some statistics on the corpora, giving figures for the number of sentences and tokens. The training corpus is thus basically made up of administrative documents and descriptions of TV programs. For development and testing we extracted some administrative data for the in-domain evaluation and selected a collection of news for the out-of-domain study, totaling three sets:",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 595,
"text": "Table 6",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Bilingual and monolingual corpora",
"sec_num": "6.3.1"
},
{
"text": "Elhuyardevel and Elhuyartest: 1,500 segments each, extracted from the administrative documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual and monolingual corpora",
"sec_num": "6.3.1"
},
{
"text": "NEWStest: 1,000 sentences collected from Spanish newspapers with two references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual and monolingual corpora",
"sec_num": "6.3.1"
},
{
"text": "Additionally, we collected a 21 million word monolingual corpus, which, together with the Basque side of the parallel bilingual corpora, makes up a 28 million word corpus. This monolingual corpus is also heterogeneous, and includes text from two sources: the Basque Corpus of Science and Technology (ZT corpus 4 ) and articles published by the Berria newspaper (Berria corpus).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual and monolingual corpora",
"sec_num": "6.3.1"
},
{
"text": "In order to perform the automatic evaluation of the translations we use a subset of lexical metrics available in the Asiya evaluation package (Gim\u00e9nez and M\u00e0rquez, 2010). Tables 6.2 and 6.3 show the BLEU, TER and METEOR scores for the in-domain test set (Elhuyartest) and the out-of-domain one (NEWStest), respectively 5 . In addition, the tables include the score given by the combination of metrics for the two individual systems (Matxin and SMT) and the three hybrid SMatxinT systems that have been optimized against these different metrics. Results of Google Translate 6 are given as a control system.",
"cite_spans": [
{
"start": 142,
"end": 169,
"text": "(Gim\u00e9nez and M\u00e0rquez, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "6.3.2"
},
{
"text": "In Labaka et al. (2011) it was shown that a simple combination of n-gram matching metrics at different linguistic levels, such as words and PoS, correlates better with human assessments than lexical matching alone. Therefore, we use this new metric, BLEU c , not only to evaluate the translations but also to optimize the system. BLEU c = (BLEU + BLEU PoS )/2 (6.1) According to all the automatic metrics, Matxin is the worst system for both in-domain and out-of-domain data. The statistical system is worse than the hybrid models for out-of-domain data and shows a similar performance on the in-domain test set. In this case, the BLEU score achieved by SMatxinT is slightly worse than the scores obtained by the single SMT system, but better according to the rest of the metrics. The differing behavior across metrics and the small differences do not allow us to establish a clear preference between the statistical and hybrid systems. In contrast, on the out-of-domain corpus (NEWStest), SMatxinT consistently achieves better scores than any other system.",
"cite_spans": [
{
"start": 3,
"end": 23,
"text": "Labaka et al. (2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "6.3.2"
},
{
"text": "The use of different metrics in the MERT optimization does not significantly affect the final evaluation. The systems optimized with respect to different metrics obtained very similar results and, when differences exist, they are not consistent across evaluation test sets or metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "6.3.2"
},
{
"text": "In the in-domain evaluation, although the differences are small, the hybrid system optimized on BLEU gets the best results according to BLEU, METEOR and BLEUc. In contrast, the TER metric assigns the best score to the hybrid system optimized on METEOR. It is worth noting that the optimizations on BLEUc and METEOR do not improve the results on those same metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "6.3.2"
},
{
"text": "In the out-of-domain corpus, although the differences remain small, the results are more stable. In this test set, the hybrid system that achieves the best evaluation is the one optimized on BLEUc, improving on the results obtained by the BLEU optimization according to all evaluation metrics. In this corpus, as in the in-domain one, the system optimized on METEOR achieves particularly high results on the TER metric, which makes it the best system according to this metric.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "6.3.2"
},
{
"text": "Based on these results, one could state that the low in-domain performance of Matxin penalizes the hybrid system, preventing it from overcoming the single SMT system. But in the out-of-domain test set, where the scores of Matxin were not so far from those of the other systems, our hybridization technique was able to combine the best of both systems, obtaining the best translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "6.3.2"
},
{
"text": "As in previous works, we contrast those automatic results with a manual evaluation carried out on 100 sentences randomly chosen from the in-domain test set (Elhuyartest) and another 100 sentences chosen from the out-of-domain test set (NEWStest). The human evaluators are asked to rank the 5 translations provided (from both individual systems and the three different optimizations of SMatxinT). Evaluators are allowed to declare that several translations are equally good. Depending on how many draws there are, the ranking scope can vary from 1 to 5 (when there are no draws) to 1 to 1 (when all systems are considered equal), so we normalized all rankings to the 0-1 range (where 0 is the best system and 1 is the worst in all cases). Table 6.4 shows the original and normalized average rankings obtained by each system. According to those results, in the in-domain test set Matxin obtains the best ranking, but the differences with the three SMatxinT instances are not significant. The instances that use linguistically motivated metrics (METEOR and BLEU c ) in MERT obtain slightly better results than the instance optimized on BLEU. The SMT system, in turn, obtains the worst ranking. On the other hand, in the out-of-domain evaluation the differences are bigger: Matxin, the rule-based system, clearly outperforms the hybrid systems, and these in turn outperform the statistical system. The differences between the different optimizations of SMatxinT are not significant. Each sentence, 100 in each test set, has been assessed by two evaluators. Agreement between evaluators is difficult to check, as qualitatively small changes between them can produce multiple single changes in the precedence numbers in the ranking. For example, between the two rankings (Matxin 1, BLEU 2, BLEU c 2, METEOR 2, SMT 3) and (Matxin 1, BLEU 2, BLEU c 3, METEOR 3, SMT 4), three precedence numbers change, but there is only a single qualitative difference (in the second ranking the system tuned with BLEU is better than those tuned with BLEU c and METEOR).",
"cite_spans": [],
"ref_spans": [
{
"start": 737,
"end": 744,
"text": "Table 6",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "6.3.3"
},
{
"text": "To make the rankings more comparable, we discretized the assigned ranking into 4 possible values: best, intermediate, worst and all-draw. The best and worst values mean that the system has been asserted to be the best or the worst system; the intermediate value is assigned to the other systems. In cases where all systems are assigned the same rank, the all-draw value is used. Table 6.5 shows the number of times both evaluators assigned the same discrete ranking; in brackets, the number of times each individual evaluator assigned that ranking is shown. In some cases the agreement is high, as when Matxin is claimed to be the best out-of-domain system, 47(51+64), but generally the agreement is not very high. These results further demonstrate the closeness of the systems, compounded by the lack of agreement between evaluators. They also show some interesting results, such as the fact that even in-domain the RBMT system produces more sentences tagged as the best translation. But that system also generates a high number of sentences labeled as the worst translation, so in the overall assessment it fails to distance itself from the hybrid systems (which produce fewer 'best' translations, but also fewer 'worst' translations).",
"cite_spans": [],
"ref_spans": [
{
"start": 380,
"end": 387,
"text": "Table 6",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "6.3.3"
},
{
"text": "In this work we present an in-depth evaluation of SMatxinT, a hybrid system controlled by the RBMT translator and enriched with a wide variety of SMT translation options. The results of the human evaluation, where the translations of all the systems were ranked, established that Matxin, the RBMT system, achieved the best performance, followed by SMatxinT, while the SMT system generated the worst translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6.4"
},
{
"text": "Those results, very far from what the automatic metrics show, corroborate the already known inadequacy of metrics that measure only lexical matching for comparing systems that use such different translation paradigms. This kind of metric is biased in favor of SMT, as happens in our evaluation, where the statistical system achieves the best results in the in-domain evaluation, even though it generates the worst translations according to the manual assessment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6.4"
},
{
"text": "To address these limitations of metrics based only on lexical matching, we defined a metric that seeks to assess syntactic correctness, combining lexical BLEU with PoS matching information. When combining these metrics, we opted for simplicity and used the arithmetic mean of BLEU over words and over PoS tags. This method, despite its simplicity, had already shown its suitability. Our combined metric is simple and maintains a higher correlation with manual evaluation than the usual lexical metrics, while still ensuring lexical matching.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6.4"
},
{
"text": "But evaluation metrics are not only used for comparing different systems; they are also used to guide the optimization of the systems. In practical terms, in our hybrid architecture, we used those metrics to identify the features that are able to single out the best translation proposed by the different approaches. Thus, being aware of the problems BLEU has in identifying many of the good translations generated by the RBMT system, we used linguistically informed metrics not only in the evaluation, but also in the MERT optimization of the linear decoder. So, in addition to the individual systems, we evaluate three different hybrid systems, depending on the metric used in optimization. According to the results achieved, the use of different metrics in optimization has little impact on translation quality. Although the use of BLEU c in optimization slightly improves the results achieved in the manual evaluation, this improvement is too small to draw clear conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6.4"
},
{
"text": "We consider that the minimal differences that exist between different optimizations are due to the lack of linguistic features at monotonous decoding. Current 11 features are mainly devoted to characterize the origin system of a given phrase and the probabilities for the lexical translation. In MERT optimization, the evaluation metrics are only used to find out which of the features present in the decoding are the most useful at generating the final translation. So, if there are no features which depend on the PoS in our case, or on higher level information such as the type of chunk, they may not be informative enough to strengthen the metric. In this case, optimization has little room for improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6.4"
},
{
"text": "Given these results, the need to provide more in-depth linguistic information to the evaluation metrics is undeniable. But, since we carry out our research in translation into Basque, we have at our disposal few linguistic tools, much less than for languages like English. Future work should first focus on integrating more representative linguistic features in the hybrid system which allow a qualitative leap in the translations quality. Then the small improvements reported here could be confirmed or ruled out.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6.4"
},
{
"text": "http://www.opentrad.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://revista.consumer.es 3 http://www.elhuyar.org/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "www.ztcorpusa.net/ 5 Figures do not exactly match the ones presented in previous work, since we correct some capitalization errors.6 http://translate.google.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research has been partially funded by the Spanish Ministry of Education and Science (OpenMT-2, TIN2009-14675-C03) and the EC Seventh Framework Programme under grant agreement numbers 247914 (MOLTO project, FP7- ICT-2009-4-247914) and 247762 (FAUST project, FP7-ICT-2009-4-247762).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Re-evaluating the Role of BLEU in Machine Translation Research",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the International Conference of European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Callison-Burch, Chris, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the Role of BLEU in Machine Translation Research. In Proceedings of the International Conference of European Chapter of the Association for Computational Linguistics (EACL), pages 249-256.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "MAXSIM: A maximum similarity metric for machine translation evaluation",
"authors": [
{
"first": "Yee Seng",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chan, Yee Seng and Hwee Tou Ng. 2008. MAXSIM: A maximum similarity metric for machine translation evaluation. In Proceedings of ACL-08: HLT , pages 55-62. Columbus, Ohio: ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Arantza D\u00edaz de Ilarraza, Lluis M\u00e0rquez, and Kepa Sarasola",
"authors": [
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "D\u00edaz De Ilarraza",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Sarasola",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings MT Summit XIII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Espa\u00f1a-Bonet, Cristina, Gorka Labaka, Arantza D\u00edaz de Ilarraza, Lluis M\u00e0rquez, and Kepa Sarasola. 2011. Hybrid machine translation guided by a rule-based system. In Proceedings MT Summit XIII . Xiamen, China.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Further experiments with shallow hybrid mt systems",
"authors": [
{
"first": "C",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Eisele",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Hunsicker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR",
"volume": "",
"issue": "",
"pages": "77--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Federmann, C., A. Eisele, Y. Chen, S. Hunsicker, J. Xu, and H. Uszkoreit. 2010. Further experiments with shallow hybrid mt systems. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 77-81. Uppsala, Sweden: ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A smorgasbord of features for automatic MT evaluation",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "195--198",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gim\u00e9nez, Jes\u00fas and Llu\u00eds M\u00e0rquez. 2008. A smorgasbord of features for automatic MT evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 195-198. Columbus, Ohio: ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Asiya: An Open Toolkit for Automatic Machine Translation (Meta-)Evaluation",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gim\u00e9nez",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
}
],
"year": 2010,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "",
"issue": "94",
"pages": "77--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gim\u00e9nez, Jes\u00fas and Llu\u00eds M\u00e0rquez. 2010. Asiya: An Open Toolkit for Automatic Machine Translation (Meta-)Evaluation. The Prague Bulletin of Mathematical Linguistics (94):77- 86.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Symbolic-to-statistical hybridization: extending generation-heavy machine translation",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Translation",
"volume": "23",
"issue": "",
"pages": "23--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Habash, Nizar, Bonnie Dorr, and Christof Monz. 2009. Symbolic-to-statistical hybridization: extending generation-heavy machine translation. Machine Translation 23:23-63.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Prague, Czech Republic.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Manual and Automatic Evaluation of Machine Translation between European Languages",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 1st Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "102--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koehn, Philipp and Christof Monz. 2006. Manual and Automatic Evaluation of Machine Translation between European Languages. In Proceedings of the 1st Workshop on Statis- tical Machine Translation, pages 102-121.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "EUSMT: Incorporating Linguistic Information into SMT for a Morphologically Rich Language. Its use in SMT-RBMT-EBMT hybridation",
"authors": [
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Labaka, Gorka. 2010. EUSMT: Incorporating Linguistic Information into SMT for a Mor- phologically Rich Language. Its use in SMT-RBMT-EBMT hybridation. Ph.D. thesis, University of the Basque Country.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Deep evaluation of hybrid architectures: simple metrics correlated with human judgements",
"authors": [
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "D\u00edaz De Ilarraza",
"suffix": ""
},
{
"first": "Cristina",
"middle": [],
"last": "Espa\u00f1a-Bonet",
"suffix": ""
},
{
"first": "Lluis",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Sarasola",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of International Workshop on Using Linguistic Information for Hybrid Machine Translation LIHMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Labaka, Gorka, Arantza D\u00edaz de Ilarraza, Cristina Espa\u00f1a-Bonet, Lluis M\u00e0rquez, and Kepa Sarasola. 2011. Deep evaluation of hybrid architectures: simple metrics correlated with human judgements. In Proceedings of International Workshop on Using Linguistic Infor- mation for Hybrid Machine Translation LIHMT . Barcelona.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Syntactic features for evaluation of machine translation",
"authors": [
{
"first": "Ding",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Ding and Daniel Gildea. 2005. Syntactic features for evaluation of machine translation. In Proceedings of ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, pages 25-32.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Arantza D\u00edaz de Ilarraza, Gorka Labaka, Mikel Lersundi, and Kepa Sarasola",
"authors": [
{
"first": "Aingeru",
"middle": [],
"last": "Mayor",
"suffix": ""
},
{
"first": "I\u00f1aki",
"middle": [],
"last": "Alegria",
"suffix": ""
},
{
"first": "Arantza",
"middle": [],
"last": "D\u00edaz De Ilarraza",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Mikel",
"middle": [],
"last": "Lersundi",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Sarasola",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Translation",
"volume": "25",
"issue": "",
"pages": "53--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mayor, Aingeru, I\u00f1aki Alegria, Arantza D\u00edaz de Ilarraza, Gorka Labaka, Mikel Lersundi, and Kepa Sarasola. 2011. Matxin, an open-source rule-based machine translation system for basque. Machine Translation 25:53-82.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Precision and Recall of Machine Translation",
"authors": [
{
"first": "I.",
"middle": [
"Dan"
],
"last": "Melamed",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Joseph",
"middle": [
"P"
],
"last": "Turian",
"suffix": ""
}
],
"year": 2003,
"venue": "NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "",
"issue": "",
"pages": "61--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melamed, I. Dan, Ryan Green, and Joseph P. Turian. 2003. Precision and Recall of Machine Translation. In NAACL '03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technol- ogy, pages 61-63. Morristown, NJ, USA: ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, F. J. 2003. Minimum error rate training in statistical machine translation. In Proceed- ings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages 160-167.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: A method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Word error rates: decomposition over pos classes and applications for error analysis",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07",
"volume": "",
"issue": "",
"pages": "48--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Popovi\u0107, Maja and Hermann Ney. 2007. Word error rates: decomposition over pos classes and applications for error analysis. In Proceedings of the Second Workshop on Statistical Machine Translation, StatMT '07, pages 48-55. Stroudsburg, PA, USA: ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SRILM -An Extensible Language Modeling Toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference of Spoken Language Processing",
"volume": "2",
"issue": "",
"pages": "901--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A. 2002. SRILM -An Extensible Language Modeling Toolkit. In Proceedings of the International Conference of Spoken Language Processing, vol. 2, pages 901-904.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Comparing different architectures of hybrid machine translation systems",
"authors": [
{
"first": "G",
"middle": [],
"last": "Thurmair",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceddings of MT Summit XII",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thurmair, G. 2009. Comparing different architectures of hybrid machine translation systems. In Proceddings of MT Summit XII .",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"num": null,
"text": "1: Statistics on the bilingual collection of parallel corpora.",
"content": "<table><tr><td/><td/><td>sentences</td><td>tokens</td></tr><tr><td>EHUBooks</td><td>Spanish Basque</td><td>39,583</td><td>1,036,605 794,284</td></tr><tr><td>Consumer</td><td>Spanish Basque</td><td>61,104</td><td>1,347,831 1,060,695</td></tr><tr><td>ElhuyarTM</td><td>Spanish Basque</td><td>186,003</td><td>3,160,494 2,291,388</td></tr><tr><td>EuskaltelTB</td><td>Spanish Basque</td><td>222,070</td><td>3,078,079 2,405,287</td></tr><tr><td>Total</td><td>Spanish Basque</td><td>491,853</td><td>7,966,419 6,062,911</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"6\">.2: Automatic evaluation of the in-domain test set, Elhuyartest, for the individual</td></tr><tr><td colspan=\"2\">and hybrid systems.</td><td/><td/><td/><td/></tr><tr><td/><td/><td>BLEU</td><td>METEOR</td><td>TER</td><td>BLEUc</td></tr><tr><td>Ind. systems</td><td>Matxin SMT</td><td>6.07 16.50</td><td>27.20 37.49</td><td>83.49 70.39</td><td>19.65 27.64</td></tr><tr><td>Control</td><td>Google</td><td>8.19</td><td>28.02</td><td>78.43</td><td>20.73</td></tr><tr><td/><td>BLEU</td><td>16.09</td><td>38.24</td><td>69.92</td><td>27.95</td></tr><tr><td>SMatxinT</td><td>BLEUc METEOR</td><td>15.36 15.87</td><td>38.24 37.77</td><td>70.78 67.77</td><td>27.33 27.53</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": ".3: Automatic evaluation of the out-of-domain test set, NEWStest, for the individual and hybrid systems.",
"content": "<table><tr><td/><td/><td>BLEU</td><td>METEOR</td><td>TER</td><td>BLEUc</td></tr><tr><td>Ind. systems</td><td>Matxin SMT</td><td>12.67 15.84</td><td>36.10 37.70</td><td>69.16 66.52</td><td>31.98 31.01</td></tr><tr><td>Control</td><td>Google</td><td>12.36</td><td>32.57</td><td>70.44</td><td>29.08</td></tr><tr><td/><td>BLEU</td><td>16.61</td><td>39.24</td><td>64.50</td><td>32.77</td></tr><tr><td>SMatxinT</td><td>BLEUc METEOR</td><td>17.11 16.76</td><td>39.94 39.30</td><td>63.84 62.83</td><td>33.39 32.50</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "4: Real and normalized mean of the ranking manually assigned to each system.",
"content": "<table><tr><td/><td/><td colspan=\"2\">Elhuyartest</td><td colspan=\"2\">NEWStest</td></tr><tr><td/><td/><td>ranking</td><td>norm.</td><td>ranking</td><td>norm.</td></tr><tr><td>Ind. systems</td><td>Matxin SMT</td><td>2.070 2.510</td><td>0.396 0.532</td><td>1.705 2.605</td><td>0.275 0.625</td></tr><tr><td/><td>BLEU</td><td>2.165</td><td>0.423</td><td>2.210</td><td>0.485</td></tr><tr><td>SMatxinT</td><td>BLEUc</td><td>2.085</td><td>0.399</td><td>2.110</td><td>0.445</td></tr><tr><td/><td>METEOR</td><td>2.095</td><td>0.403</td><td>2.125</td><td>0.470</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "5: Discrete ranking results. Figures correspond to agreement of both evaluators, between brackets each evaluator's figures.",
"content": "<table><tr><td/><td/><td colspan=\"2\">Elhuyartest</td><td/></tr><tr><td/><td colspan=\"2\">best intermediate</td><td>worst</td><td>all-draw</td></tr><tr><td>Matxin</td><td>24 (34+42)</td><td>9 (26+19)</td><td>20 (38+32)</td><td>0 (2+7)</td></tr><tr><td>SMT</td><td>9 (22+23)</td><td>7 (31+23)</td><td>30 (45+47)</td><td>0 (2+7)</td></tr><tr><td>BLEU</td><td>8 (27+19)</td><td>22 (52+43)</td><td>8 (19+31)</td><td>0 (2+7)</td></tr><tr><td>BLEUc</td><td>12 (27+18)</td><td>29 (55+45)</td><td>7 (16+30)</td><td>0 (2+7)</td></tr><tr><td>METEOR</td><td>6 (28+19)</td><td>24 (54+47)</td><td>6 (16+27)</td><td>0 (2+7)</td></tr><tr><td/><td/><td colspan=\"2\">NEWStest</td><td/></tr><tr><td/><td colspan=\"2\">best intermediate</td><td>worst</td><td>all-draw</td></tr><tr><td>Matxin</td><td>47 (51+64)</td><td>4 (22+12)</td><td>10 (25+19)</td><td>0 (2+5)</td></tr><tr><td>SMT</td><td>7 (20+11)</td><td>6 (21+25)</td><td>41 (57+59)</td><td>0 (2+5)</td></tr><tr><td>BLEU</td><td>11 (28+15)</td><td>27 (44+43)</td><td>21 (26+37)</td><td>0 (2+5)</td></tr><tr><td>BLEUc</td><td>12 (27+17)</td><td>28 (50+44)</td><td>15 (21+34)</td><td>0 (2+5)</td></tr><tr><td colspan=\"2\">METEOR 11 (26+16)</td><td>26 (46+42)</td><td>18 (26+37)</td><td>0 (2+5)</td></tr></table>",
"html": null
}
}
}
}