{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:43:24.233888Z"
},
"title": "IST-Unbabel Participation in the WMT20 Quality Estimation Shared Task",
"authors": [
{
"first": "Jo\u00e3o",
"middle": [],
"last": "Moura",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Instituto Superior T\u00e9cnico",
"location": {
"settlement": "Lisbon"
}
},
"email": "joaopcmoura@tecnico.ulisboa.pt"
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": "",
"affiliation": {},
"email": "miguel.vera@unbabel.com"
},
{
"first": "Daan",
"middle": [],
"last": "Van Stigt",
"suffix": "",
"affiliation": {},
"email": "daan.stigt@unbabel.com"
},
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": "",
"affiliation": {},
"email": "kepler@unbabel.com"
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": "",
"affiliation": {
"laboratory": "Instituto de Telecomunica\u00e7\u00f5es Instituto Superior T\u00e9cnico Unbabel",
"institution": "",
"location": {
"settlement": "Lisbon"
}
},
"email": "andre.t.martins@tecnico.ulisboa.pt"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the joint contribution of IST and Unbabel to the WMT 2020 Shared Task on Quality Estimation. Our team participated in all tracks (Direct Assessment, Post-Editing Effort, Document-Level), encompassing a total of 14 submissions. Our submitted systems were developed by extending the OpenKiwi framework to a transformer-based predictor-estimator architecture, and to cope with glass-box, uncertainty-based features coming from neural machine translation systems.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the joint contribution of IST and Unbabel to the WMT 2020 Shared Task on Quality Estimation. Our team participated in all tracks (Direct Assessment, Post-Editing Effort, Document-Level), encompassing a total of 14 submissions. Our submitted systems were developed by extending the OpenKiwi framework to a transformer-based predictor-estimator architecture, and to cope with glass-box, uncertainty-based features coming from neural machine translation systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality estimation (QE) is the task of evaluating a translation system's quality without access to reference translations (Blatz et al., 2004; Specia et al., 2018). This paper describes the joint contribution of Instituto Superior T\u00e9cnico (IST) and Unbabel to the WMT20 Quality Estimation shared task, where systems were submitted to all three tasks: 1) sentence-level direct assessment; 2) word and sentence-level post-editing effort; and 3) document-level annotation and scoring.",
"cite_spans": [
{
"start": 122,
"end": 142,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF2"
},
{
"start": 143,
"end": 163,
"text": "Specia et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unbabel's participation in previous editions of the shared task (2016, 2017, 2019) used ensembles of strong individual systems, with varying architectures and hyper-parameters. While this strategy led to very strong results, large system ensembles are not a very practical solution, complicating model deployment and requiring expensive computation and memory usage. This year, in contrast, our focus was on simplicity: only single-model systems were submitted and, in a few cases, an additional simple ensemble of the same model. Transfer learning on top of pretrained multilingual models was also used to avoid manual pretraining for each language pair.",
"cite_spans": [
{
"start": 64,
"end": 70,
"text": "(2016,",
"ref_id": null
},
{
"start": 71,
"end": 76,
"text": "2017,",
"ref_id": null
},
{
"start": 77,
"end": 82,
"text": "2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Last year's winning submission (Kepler et al., 2019a) combined strong individual systems built on top of the OpenKiwi framework (Kepler et al., 2019b) and pretrained Transformer models. We consolidated those changes with support for newly released pretrained models and packages and published a new version 2.0 of the OpenKiwi framework. 1 We trained and submitted single-model systems in OpenKiwi for all tasks, beating all baselines by a large margin. Additionally, we also used OpenKiwi with small adaptations to handle specific sources of information in Tasks 1 and 3.",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "(Kepler et al., 2019a)",
"ref_id": "BIBREF7"
},
{
"start": 128,
"end": 150,
"text": "(Kepler et al., 2019b)",
"ref_id": "BIBREF8"
},
{
"start": 338,
"end": 339,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Task 1, in particular, was introduced this year with Direct Assessment scores as targets. Further, it introduced the novelty of providing the trained NMT models that were used for producing the translations. Previously, only black-box QE was considered in the WMT Shared Task, as it is one of the main use cases. With the availability of the NMT models, new glass-box approaches can be explored. Our best submitted systems drew inspiration from recent work on glass-box QE to leverage this information, improving performance and robustness over a black-box approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main contributions are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We release the second version of OpenKiwi along with our submission, with a variety of new features, including the ability to use pretrained Transformer-based Language Models;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show that transfer learning techniques still perform well, by fine-tuning XLM-Roberta in a Predictor-Estimator architecture;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We incorporate features extracted from the provided NMT models into our existing architectures and show that glass-box QE improves upon black-box approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This year's shared task edition comprised three tasks: 1) a newly introduced one for sentence-level direct assessment; 2) one for word and sentence-level post-editing effort; and 3) one for document-level annotation and scoring. Refer to the Findings paper for full descriptions. Notably, the NMT models for Tasks 1 and 2 were provided along with the data, which opened up the possibility of using glass-box approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality Estimation Tasks",
"sec_num": "2"
},
{
"text": "To avoid the complexity of ensembling several systems, all our submitted systems consisted of a single model type. In addition to standard OpenKiwi 2.0 systems submitted to Tasks 1 and 2 ( \u00a73.1), we implemented two types of extensions on top of OpenKiwi, one for exploring glass-box approaches for Tasks 1 and 2 ( \u00a73.2), and one for handling document-level QE for Task 3 ( \u00a73.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implemented Systems",
"sec_num": "3"
},
{
"text": "Given the success in doing transfer learning with pretrained Language Models in last year's shared task edition, we published support for them as part of the open source QE framework OpenKiwi in a new 2.0 version. BERT, XLM, and XLM-Roberta are currently supported via the Transformers 2 Python package (Wolf et al., 2019) , which means different models can be easily used. For this year's shared task, we based all systems on this version of OpenKiwi and used pretrained XLM-Roberta models (Conneau et al., 2020), either base or large versions. We chose XLM-Roberta (called XLM-R from here on) instead of XLM, used in last year's best individual model, due to its reported state-of-the-art performance on downstream cross-lingual tasks and based on preliminary experiments.",
"cite_spans": [
{
"start": 303,
"end": 322,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 491,
"end": 513,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Base OpenKiwi System",
"sec_num": "3.1"
},
{
"text": "The architecture follows the overall pattern introduced originally in the Predictor-Estimator model (Kim et al., 2017) , comprising a \"Feature Extractor\" module with a \"Quality Estimator\" module on top. Figure 1 depicts this general architecture.",
"cite_spans": [
{
"start": 100,
"end": 118,
"text": "(Kim et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 203,
"end": 211,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Base OpenKiwi System",
"sec_num": "3.1"
},
{
"text": "The Feature Extractor module consists of a pretrained XLM-R model and feature extraction methods on top, such that features for the target sentence, the target tokens, and the source tokens are returned separately. Source and target sentences are passed as inputs in the format <s> target </s> <s> source </s>. Output features for tokens in the target sentence are averaged and then concatenated with the classifier token embedding (first <s> in the input), and returned as sentence features. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base OpenKiwi System",
"sec_num": "3.1"
},
{
"text": "For the Quality Estimator module we used linear layers instead of a bi-LSTM (as used by Kim et al. (2017)), since initial experiments showed similar performance. Additional linear layers were stacked on top for each output type: target words, target gaps, source words, and sentence regression.",
"cite_spans": [
{
"start": 88,
"end": 105,
"text": "Kim et al. (2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Base OpenKiwi System",
"sec_num": "3.1"
},
{
"text": "For the plain OpenKiwi submissions we used the XLM-R base model and a Quality Estimator block with two linear layers. Hyper-parameter search was performed for each language pair and task, 4 and the resulting models were submitted as single-model systems to Tasks 1 and 2 and used as the basis for the submission to Task 3. These systems will be referred to as OPENKIWI-BASE throughout the rest of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base OpenKiwi System",
"sec_num": "3.1"
},
{
"text": "3.2 Glass-Box QE",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Base OpenKiwi System",
"sec_num": "3.1"
},
{
"text": "Recent work on MT confidence estimation showed that useful information coming from an MT system, obtained as a by-product of translation, can be competitive with supervised black-box QE models in terms of correlation with human judgements of translation quality, in settings where labeled data is scarce. This approach requires access to the MT system that produced the translations (unlike the black-box regime). This year's new Task 1, and the fact that it shares datasets with Task 2, allowed us to explore this approach on both tasks. In our work, we investigated how to combine the richness of this extra information coming from the provided Neural MT (NMT) system with the strength of state-of-the-art approaches to supervised QE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Glass-Box Features",
"sec_num": "3.2.1"
},
{
"text": "To this end, we extract features (referred to as glass-box features henceforth) using the output probability distribution obtained from (i) a standard deterministic NMT and (ii) using uncertainty quantification. For (ii) we use Monte Carlo Dropout (Gal and Ghahramani, 2015) as a way of circumventing the miscalibration problem of Deep Neural Networks (Guo et al., 2017) and obtaining measures indicative of the model's uncertainty.",
"cite_spans": [
{
"start": 248,
"end": 274,
"text": "(Gal and Ghahramani, 2015)",
"ref_id": "BIBREF5"
},
{
"start": 352,
"end": 370,
"text": "(Guo et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Glass-Box Features",
"sec_num": "3.2.1"
},
{
"text": "We obtain 7 different features for each sentence of each language-pair, the first 3 via (i) and the last 4 via (ii) (full details are in Fomicheva et al. (2020)):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Glass-Box Features",
"sec_num": "3.2.1"
},
{
"text": "\u2022 TP - sentence average of word translation probability",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Glass-Box Features",
"sec_num": "3.2.1"
},
{
"text": "\u2022 Softmax-Ent - sentence average of softmax output distribution entropy. Table 1 shows the correlation between each of these features and human DAs for every language pair in Task 1. As expected, features obtained using uncertainty quantification consistently display higher correlations across all language pairs, with D-TP being the most effective for high- and medium-resource languages and D-Lex-Sim for low-resource languages.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 80,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Glass-Box Features",
"sec_num": "3.2.1"
},
{
"text": "Different configurations were attempted in order to introduce the extracted glass-box features into the OpenKiwi system. The best empirical performance was observed with a simple method: we reduced the dimension of the pooled sentence features output from XLM-R by about fivefold (to bottleneck size), creating a dimensional bottleneck and forcing a more compact sentence representation, and then concatenated the seven extracted glass-box features to this hidden state, followed by an expansion back to a higher dimensional state of hidden size. The result is used as an input feature for regression on the sentence score, employing p progressively smaller feed-forward layers (halving in size). A visualization of this process can be seen in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 744,
"end": 752,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Glass-box + Black-box Model",
"sec_num": "3.2.2"
},
{
"text": "The glass-box features were individually normalized a priori, according to their mean and variance in the training dataset, allowing for their integration in the network's training in a scale-independent way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Glass-box + Black-box Model",
"sec_num": "3.2.2"
},
{
"text": "Systems were trained for all language pairs in Tasks 1 and 2. XLM-R large was used instead of the base version. We ran experiments with and without glass-box features. From here on we will call this system KIWI-GLASS-BOX; it was the one used for the official submissions. For comparison, we will refer to the same system without the glass-box features as KIWI-LARGE. Hyper-parameter search was performed over p, bottleneck size, hidden size, warmup steps (number of warm up steps for optimizer), freeze steps (number of steps for which XLM-R's weights are not updated) and lr (learning rate). The exact values can be found in Table 6 in Appendix A.",
"cite_spans": [],
"ref_spans": [
{
"start": 626,
"end": 633,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Glass-box + Black-box Model",
"sec_num": "3.2.2"
},
{
"text": "All submissions of KIWI-GLASS-BOX to Task 1 were created by simple linear ensembles, combining 5 of the models obtained through hyper-parameter search for each language pair. We used the validation set predictions of these 5 models to train a LASSO regression model. However, since we do not possess labels for the test set, these ensembles were trained using k-fold cross-validation (k = 10) on the validation set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Glass-box + Black-box Model",
"sec_num": "3.2.2"
},
{
"text": "For Task 3 we submitted two systems, both of which are based on the general OpenKiwi architecture described in Section 3.1. The two systems differ only in the type of tags they predict, and the subsequent post-processing that is applied to these tags to obtain annotations and document-level MQM (Multidimensional Quality Metrics) scores. We submitted single systems that predict both tasks of document-level annotation and scoring.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level QE",
"sec_num": "3.3"
},
{
"text": "The first system, henceforth referred to as KIWI-DOC, is OPENKIWI-BASE with additional data processing to convert between word-and sentencelevel predictions, and document-level predictions. The data approach is the exact same as Kepler et al. (2019a) . To obtain training data, annotations are converted to binary word-level tags (OK and BAD tags) and sentence-level MQM scores are computed from the annotations pertaining to the sentence. After training, document-level annotation predictions are obtained by the following heuristic: contiguous BAD tags in the word-level predictions are grouped into a single annotation span and are given the severity label major. Predicted document-level MQM scores are obtained by averaging predicted sentence-level MQM weighted by sentence-length (regression) or by direct computation from the predicted annotations using the MQM formula (direct).",
"cite_spans": [
{
"start": 229,
"end": 250,
"text": "Kepler et al. (2019a)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level QE",
"sec_num": "3.3"
},
{
"text": "The second system, KIWI-DOC-IOB, is a new contribution in which the task of annotating is approached as named-entity recognition by using severity tags in IOB (Inside-Outside-Beginning) format. 5 This richer tag scheme addresses two types of information loss that occur in the approach taken for KIWI-DOC: the severity information is kept, and adjacent but disjoint annotations are not collapsed into single annotations during prediction. 6 This approach has the advantage that the predicted tag sequences can be converted to annotations directly by converting the token spans into character spans and using the predicted label as severity. The architecture of KIWI-DOC-IOB is identical to that of KIWI-DOC except that it is trained with a linear-chain CRF 7 that enforces correctness of the IOB tag-sequence at prediction time 8 .",
"cite_spans": [
{
"start": 439,
"end": 440,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level QE",
"sec_num": "3.3"
},
{
"text": "For both systems we trained a final linear regression model that combines the two types of predicted MQM scores (regression and direct) with features derived from the tag-level predictions. We use the following additional features (when available 9 ) computed over the document: the fraction of predicted tags corresponding to an error tag; 10 and the mean, variance, minimum, and maximum of the probability of the BAD tag. For simplicity we train the linear regression on the same training data as the systems. For each system, we perform search over all combinations of features, and choose the subset that gives the highest Pearson score on the validation set for that particular system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Document-level QE",
"sec_num": "3.3"
},
{
"text": "The results achieved over the validation set on all language pairs for Task 1 are shown in Table 2 . We also include the best correlation achieved by any glass-box feature (denoted by BEST GB FEATURE), showing that indeed the proposed method allows this rich information to complement and enhance the model's training, resulting in a performance increase compared to the model or the glass-box feature independently. High-resource language pair models (En-De, En-Zh, Ru-En) benefit the most from the aid of NMT internal information, in particular English-German, where an increase of \u2248 4.5% occurs; this might indicate the usefulness of incorporating nuanced information when sentence scores have less variability.",
"cite_spans": [],
"ref_spans": [
{
"start": 91,
"end": 98,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Task 1: Sentence-Level Direct Assessment",
"sec_num": "4.1"
},
{
"text": "Scored test set predictions submitted during the development of this approach served as informative feedback, revealing that the drop from validation to test performance was smaller for KIWI-GLASS-BOX models than for KIWI-LARGE models, suggesting better generalization capabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 1: Sentence-Level Direct Assessment",
"sec_num": "4.1"
},
{
"text": "Post-editing Effort",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Word and Sentence-Level",
"sec_num": "4.2"
},
{
"text": "We trained OPENKIWI-BASE and KIWI-GLASS-BOX on all three subtasks at the same time: source tags, target tags, and sentence HTER. The best model was selected by the highest sum of the three metrics on the validation set. We used a single run of each of the two models to simultaneously predict the three outputs. The results can be seen in Table 3 . Using the glass-box features provided a significant boost to the Pearson score, showing that our strategy for sentence-level DA estimation also performed well when estimating sentence-level HTER.",
"cite_spans": [],
"ref_spans": [
{
"start": 339,
"end": 346,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Task 2: Word and Sentence-Level",
"sec_num": "4.2"
},
{
"text": "Even though we only have a single model for all subtasks, our models outperformed the baselines by a large margin and performed very competitively on the test leaderboard (see the Findings paper).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task 2: Word and Sentence-Level",
"sec_num": "4.2"
},
{
"text": "The results for the document-level scoring are shown in Table 4 . For both systems we observe a drop in performance from the validation set to the test set. The only exception is KIWI-DOC-IOB-direct, which performed equally poorly on both.",
"cite_spans": [],
"ref_spans": [
{
"start": 56,
"end": 63,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Task 3: Document-Level QE",
"sec_num": "4.3"
},
{
"text": "This suggests that our method of search over features for the linear regression is overly optimizing the performance to the validation data. It may also reflect our choice to train the linear model on system predictions on training data. Table 5 shows the results for the annotation task. The best results are obtained by KIWI-DOC. Surprisingly, the strong scoring results of KIWI-DOC-IOB with direct (derived from predicted annotations) do not translate to good results on the annotation F1. The difference between the models is caused by the different trade-off between precision and recall: KIWI-DOC-IOB produces fewer annotations that are more precise, but KIWI-DOC catches many more errors. 12 The most likely cause for this is the more complex tag-set and constrained decoding of KIWI-DOC-IOB.",
"cite_spans": [
{
"start": 696,
"end": 698,
"text": "12",
"ref_id": null
}
],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Task 3: Document-Level QE",
"sec_num": "4.3"
},
{
"text": "Our approach to this year's edition of the QE shared task was simplicity. Our submissions consisted of either single models, or simple ensembles of multiple runs of the same model. Moreover, we used multi-task models in Task 2, where a system was trained on all three possible outputs (target and source word level and sentence level). We implemented a new version of OpenKiwi and used it as our baseline. It significantly outperformed the official shared task baseline, which was based on the previous version of OpenKiwi, across the board. Finally, we showed that having access to NMT models enables using glass-box approaches to QE, which in turn improves performance when used in combination with a black-box QE system. Table 6: Hyper-parameters of the best models trained for each language pair in Task 1. 70 trials were performed for each search, using the OPTUNA framework (Akiba et al., 2019), and hyper-parameter values were sampled with the TPE (Tree-structured Parzen Estimator) algorithm. The criterion for trial selection was Pearson r correlation with validation set DAs.",
"cite_spans": [
{
"start": 880,
"end": 900,
"text": "(Akiba et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 724,
"end": 731,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "Even though XLM-R was not trained on the Next Sentence Prediction objective (therefore not using the classification token in its original pretraining), preliminary experiments showed that concatenating inputs, average pooling, and using the classification token resulted in better performance compared to feeding source and target separately and extracting sentence features with other strategies (only pooled target, only the classifier token, classifier token + pooled source, and others). 4 Hyper-parameters that were searched are: learning rate, dropout, number of warmup steps, and number of freeze steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The full label set is hence: B-minor, I-minor, B-major, I-major, B-critical, I-critical, and O. 6 The two other types of information loss that were noted by Kepler et al. (2019a) are left unaddressed: tags are still defined at the token level, and annotations consisting of multiple spans are still split into individual annotations. 7 Each edge score is a single learned parameter that is independent of the input. 8 During decoding, the edge scores corresponding to the impossible transitions are set manually to \u2212\u221e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Because of the non-binary tags and the CRF model, the probability-based features are not used for the KIWI-DOC-IOB model (posterior marginals could be used for this). 10 This corresponds to the BAD tag for KIWI-DOC and to all tags different from O for KIWI-DOC-IOB.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "On the validation set KIWI-DOC-IOB predicted 2555 annotations, whereas KIWI-DOC predicted 4028 (the gold set has 5626 annotations). Extending the output message of the annotation evaluation script allowed us to further validate this hypothesis on the validation set: for KIWI-DOC-IOB precision/recall is 0.6287/0.3322; for KIWI-DOC precision/recall is 0.4549/0.6092.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the P2020 programs MAIA (contract 045909) and Unbabel4EU (contract 042671), by the European Research Council (ERC StG DeepSPIN 758969), and by the Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia through contract UID/50008/2019.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Optuna: A next-generation hyperparameter optimization framework",
"authors": [
{
"first": "Takuya",
"middle": [],
"last": "Akiba",
"suffix": ""
},
{
"first": "Shotaro",
"middle": [],
"last": "Sano",
"suffix": ""
},
{
"first": "Toshihiko",
"middle": [],
"last": "Yanase",
"suffix": ""
},
{
"first": "Takeru",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Masanori",
"middle": [],
"last": "Koyama",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Confidence Estimation for Machine Translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Simona",
"middle": [],
"last": "Gandrabur",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Kulesza",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of the International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence Estimation for Machine Translation. In Proc. of the International Conference on Computational Linguistics, page 315.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised quality estimation for neural machine translation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Lisa",
"middle": [],
"last": "Yankovskaya",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9ric",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computation Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Fr\u00e9d\u00e9ric Blain, Francisco Guzm\u00e1n, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, and Lucia Specia. 2020. Unsupervised quality estimation for neural machine translation. Transactions of the As- sociation for Computation Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of The 33rd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2015. Dropout as a bayesian approximation: Representing model un- certainty in deep learning. Proceedings of The 33rd International Conference on Machine Learning.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On calibration of modern neural networks",
"authors": [
{
"first": "Chuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Geoff",
"middle": [],
"last": "Pleiss",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
}
],
"year": 2017,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Wein- berger. 2017. On calibration of modern neural net- works. ArXiv, abs/1706.04599.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unbabel's participation in the WMT19 translation quality estimation shared task",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [],
"last": "G\u00f3is",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Amin Farajian",
"suffix": ""
},
{
"first": "Ant\u00f3nio",
"middle": [
"V"
],
"last": "Lopes",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "78--84",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5406"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, Ant\u00f3nio G\u00f3is, M. Amin Farajian, Ant\u00f3nio V. Lopes, and Andr\u00e9 F. T. Martins. 2019a. Unba- bel's participation in the WMT19 translation qual- ity estimation shared task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 78-84, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "OpenKiwi: An open source framework for quality estimation",
"authors": [
{
"first": "Fabio",
"middle": [],
"last": "Kepler",
"suffix": ""
},
{
"first": "Jonay",
"middle": [],
"last": "Tr\u00e9nous",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Treviso",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Vera",
"suffix": ""
},
{
"first": "Andr\u00e9",
"middle": [
"F T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations",
"volume": "",
"issue": "",
"pages": "117--122",
"other_ids": {
"DOI": [
"10.18653/v1/P19-3020"
]
},
"num": null,
"urls": [],
"raw_text": "Fabio Kepler, Jonay Tr\u00e9nous, Marcos Treviso, Miguel Vera, and Andr\u00e9 F. T. Martins. 2019b. OpenKiwi: An open source framework for quality estimation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 117-122, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation",
"authors": [
{
"first": "Hyun",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Seung-Hoon",
"middle": [],
"last": "Na",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hyun Kim, Jong-Hyeok Lee, and Seung-Hoon Na. 2017. Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation. In Conference on Machine Translation (WMT).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Findings of the wmt 2020 shared task on quality estimation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Frederic",
"middle": [],
"last": "Blain",
"suffix": ""
},
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Erick",
"middle": [],
"last": "Fonseca",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzman",
"suffix": ""
},
{
"first": "Andre",
"middle": [
"F",
"T"
],
"last": "Martins",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Frederic Blain, Marina Fomicheva, Er- ick Fonseca, Vishrav Chaudhary, Francisco Guzman, and Andre FT Martins. 2020. Findings of the wmt 2020 shared task on quality estimation. In Proceed- ings of the Fifth Conference on Machine Translation: Shared Task Papers.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Quality Estimation for Machine Translation",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"Henrique"
],
"last": "Paetzold",
"suffix": ""
}
],
"year": 2018,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "11",
"issue": "1",
"pages": "1--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Carolina Scarton, and Gustavo Henrique Paetzold. 2018. Quality Estimation for Machine Translation. Synthesis Lectures on Human Lan- guage Technologies, 11(1):1-162.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "https://github.com/huggingface/ transformers General architecture of the implemented OpenKiwi-based systems.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Sent-Std -sentence standard deviation of word probabilities \u2022 D-TP -average TP across N (N = 30) stochastic forward-passes \u2022 D-Var -variance of TP across N stochastic forward-passes \u2022 D-Combo -combination of D-TP and D-Var defined by 1 \u2212 D-TP/D-Var\u2022 D-Lex-Sim -lexical similarity -measured by METEOR score(Banerjee and Lavie, 2005) -of MT output generated in different stochastic passes.",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Architecture of the \"Quality Estimator\" module modified to include glass-box features.",
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"text": "TP 0.0993 0.2808 0.5951 0.3992 0.3653 0.3658 0.3658 Softmax-Ent 0.0858 0.2919 0.5595 0.3546 0.4133 0.4077 0.3790 Sent-Std 0.0691 0.3252 0.5049 0.3985 0.3669 0.3912 0.3510",
"html": null,
"content": "<table><tr><td/><td>Feature</td><td>En-De En-Zh</td><td>Language Pair Ro-En Et-En Ne-En Si-En</td><td>Ru-En</td></tr><tr><td>(i)</td><td/><td/><td/></tr><tr><td/><td>D-TP</td><td colspan=\"3\">0.1078 0.3158 0.6404 0.4936 0.3905 0.3797 0.4441</td></tr><tr><td>(ii)</td><td>D-Var D-Combo</td><td colspan=\"3\">0.0782 0.1943 0.3550 0.2780 0.2336 0.2338 0.2329 0.0487 0.1259 0.2620 0.1335 0.2938 0.2244 0.2013</td></tr><tr><td/><td>D-Lex-Sim</td><td colspan=\"3\">0.0994 0.2903 0.6210 0.3940 0.4751 0.4318 0.4092</td></tr></table>"
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Pearson correlation (r) between the employed glass-box features and human DA's for every language pair in Task 1 (validation set) -best results are in bold.",
"html": null,
"content": "<table/>"
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "",
"html": null,
"content": "<table><tr><td>: Task 1 results on the validation and test sets</td></tr><tr><td>for all language pairs in terms of Pearson's r correla-</td></tr><tr><td>tion. Systems in bold were officially submitted. (*)</td></tr><tr><td>Lines with an asterisk use LASSO regression to tune</td></tr><tr><td>ensemble weights on the validation set, therefore their</td></tr><tr><td>numbers cannot be directly compared to the other mod-</td></tr><tr><td>els.</td></tr></table>"
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "Task 2 word and sentence-level results on the validation and test sets. Results for OPENKIWI-BASE and KIWI-GLASS-BOX were obtained from a single model trained by multi-tasking on the 3 different subtasks. (*) Baseline results on the validation set were not made available by the organizers.",
"html": null,
"content": "<table><tr><td>System</td><td>Validation</td><td>Test</td></tr><tr><td>KIWI-DOC-regression</td><td>0.5146</td><td>0.4127</td></tr><tr><td>KIWI-DOC-direct</td><td>0.3131</td><td>0.3156</td></tr><tr><td>KIWI-DOC-linear</td><td>0.5635</td><td>0.4014</td></tr><tr><td>KIWI-DOC-IOB-regression</td><td>0.5731</td><td>0.4746</td></tr><tr><td>KIWI-DOC-IOB-direct</td><td>0.5483</td><td>0.3363</td></tr><tr><td>KIWI-DOC-IOB-linear</td><td>0.6023</td><td>0.4493</td></tr></table>"
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "Results of document-level (task 3) submissions for MQM scoring (Pearson). The results of KIWI-DOC and KIWI-DOC-IOB are for the same single model. For model selection during training we used the summed validation set Pearson of direct and regression to obtain a model that performs well in both methods.",
"html": null,
"content": "<table><tr><td>System</td><td>Validation</td><td>Test</td></tr><tr><td>KIWI-DOC</td><td>0.4934</td><td>0.4716</td></tr><tr><td>KIWI-DOC-IOB</td><td>0.4016</td><td>0.4147</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"num": null,
"text": "",
"html": null,
"content": "<table><tr><td>: Results of document-level (task 3) submis-</td></tr><tr><td>sions for annotation (F1). For model selection during</td></tr><tr><td>training we used validation set MCC for KIWI-DOC</td></tr><tr><td>and validation set tagging F1 for KIWI-DOC-IOB.</td></tr><tr><td>a large drop in Pearson score from validation set</td></tr><tr><td>to test set, in the range of 0.1-0.2, 11 which sug-</td></tr><tr><td>gests that there is a difference in data distribution</td></tr><tr><td>between the two sets. On the validation set, KIWI-</td></tr><tr><td>DOC and KIWI-DOC-IOB obtain comparable Pear-</td></tr><tr><td>son correlation, albeit for different MQM methods.</td></tr><tr><td>While both models perform comparably in the sen-</td></tr><tr><td>tence score prediction (regression), the KIWI-</td></tr><tr><td>DOC-IOB system clearly outperforms KIWI-DOC</td></tr><tr><td>on the MQM scores that are computed directly</td></tr><tr><td>from the predicted annotations (direct). The</td></tr><tr><td>improvements made by linear regression on the val-</td></tr><tr><td>idation set do not consistently translate to the test</td></tr><tr><td>11</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"num": null,
"text": "shows the hyperparameters used in Task 1.",
"html": null,
"content": "<table><tr><td>Language</td><td/><td colspan=\"3\">Hyper-parameters</td><td/></tr><tr><td>Pair</td><td colspan=\"2\">hidden size bottleneck size</td><td>lr</td><td colspan=\"2\">warmup steps freeze steps</td></tr><tr><td>EN-DE</td><td>900</td><td>200</td><td>1.00E-05</td><td>6535</td><td>750</td></tr><tr><td>EN-ZH</td><td>700</td><td>300</td><td>7.00E-06</td><td>3280</td><td>4375</td></tr><tr><td>RO-EN</td><td>900</td><td>200</td><td>9.00E-06</td><td>2625</td><td>5687</td></tr><tr><td>ET-EN</td><td>500</td><td>200</td><td>7.00E-06</td><td>655</td><td>3935</td></tr><tr><td>NE-EN</td><td>900</td><td>200</td><td>1.20E-05</td><td>2625</td><td>3060</td></tr><tr><td>SI-EN</td><td>900</td><td>200</td><td>7.00E-06</td><td>5250</td><td>5250</td></tr><tr><td>RU-EN</td><td>700</td><td>200</td><td>1.70E-05</td><td>3800</td><td>6125</td></tr></table>"
}
}
}
}