{
"paper_id": "2020.wmt-1.107",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:42:08.862641Z"
},
"title": "Bicleaner at WMT 2020: Universitat d'Alacant-Prompsit's submission to the parallel corpus filtering shared task",
"authors": [
{
"first": "Miquel",
"middle": [],
"last": "Espl\u00e0-Gomis",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "V\u00edctor",
"middle": ["M."],
"last": "S\u00e1nchez-Cartagena",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jaume",
"middle": [],
"last": "Zaragoza-Bernabeu",
"suffix": "",
"affiliation": {
"laboratory": "Prompsit Language Engineering, Av. Universitat s/n, Edifici Quorum III",
"institution": "",
"location": {
"postCode": "03202",
"settlement": "Elx",
"country": "Spain"
}
},
"email": ""
},
{
"first": "Felipe",
"middle": [],
"last": "S\u00e1nchez-Mart\u00ednez",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "2020",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the joint submission of Universitat d'Alacant and Prompsit Language Engineering to the WMT 2020 shared task on parallel corpus filtering. Our submission, based on the free/open-source tool Bicleaner, enhances it with Extremely Randomised Trees and lexical similarity features that account for the frequency of the words in the parallel sentences to determine if two sentences are parallel. To train this classifier we used the clean corpora provided for the task and synthetic noisy parallel sentences. In addition, we rescore the output of Bicleaner using character-level language models and n-gram saturation.",
"pdf_parse": {
"paper_id": "2020.wmt-1.107",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the joint submission of Universitat d'Alacant and Prompsit Language Engineering to the WMT 2020 shared task on parallel corpus filtering. Our submission, based on the free/open-source tool Bicleaner, enhances it with Extremely Randomised Trees and lexical similarity features that account for the frequency of the words in the parallel sentences to determine if two sentences are parallel. To train this classifier we used the clean corpora provided for the task and synthetic noisy parallel sentences. In addition, we rescore the output of Bicleaner using character-level language models and n-gram saturation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "This paper describes the joint submission of Universitat d'Alacant and Prompsit Language Engineering to the parallel corpus filtering shared task at the Fifth Conference on Machine Translation (WMT 2020). Our submission is built upon Bicleaner (S\u00e1nchez-Cartagena et al., 2018), 1 a widely-used free/open-source tool for detecting noisy parallel sentences that participated in the 2018 edition of this shared task and ranked fourth out of 17 submissions on one of the sub-tasks. We provide quality scores for the sentence pairs provided by the organisers without re-aligning them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The 2020 edition of the parallel corpus filtering shared task focuses on two under-resourced Asian languages paired with English: Khmer and Pashto. Khmer (km) is the official language of Cambodia and is spoken by circa 16 million people in Cambodia, Vietnam and Thailand. 2 There are about 500k English-Khmer parallel sentences in OPUS, 3 mainly belonging to narrow domains like software products and religion. Pashto (ps) is spoken by around 40 million people in Pakistan and in Afghanistan, where it is official together with Persian. 4 There are around 100k English-Pashto parallel sentences in OPUS, most of which belong to the software domain.",
"cite_spans": [
{
"start": 534,
"end": 535,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Detecting noisy parallel sentences for under-resourced language pairs, like those addressed in this shared task, is challenging. Pashto is not directly supported by LASER (Schwenk and Douze, 2017), although it supports other Iranian languages, and there are few bilingual resources for building Bicleaner's models.",
"cite_spans": [
{
"start": 170,
"end": 195,
"text": "(Schwenk and Douze, 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bicleaner is based on a classifier that assesses whether a pair of sentences are mutual translations or not. It is trained on a parallel corpus (positive samples) and on an automatically corrupted version of the same corpus (negative samples). The most important features used by the classifier are lexical similarity scores obtained with the help of probabilistic bilingual dictionaries, which are also extracted from the parallel corpus. Our submission improves the performance of the version of Bicleaner that took part in the 2018 shared task in multiple ways: a new classification algorithm, new lexical features that account for the frequency of the words in the parallel sentences, and a novel way of generating corrupted pairs of sentences. In addition, we re-score the output of Bicleaner by combining character-level language models and an n-gram saturation scorer in a linear combination whose parameters are determined by fine-tuning the MBART model provided by the organisers of the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organised as follows. Section 2 describes the Bicleaner classifier whereas Section 3 explains how the score produced by the classifier is combined with the information provided by character-level language models and an n-gram saturation algorithm to produce the submitted score. Section 4 then describes the process followed to build the submission, and Section 5 lists related approaches. The paper ends with some concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bicleaner is based on an automatic classifier that produces a score for a pair of sentences representing the probability that they are mutual translations. Random Forests (Breiman, 2001), the classification algorithm used in the 2018 submission, has been replaced by Extremely-Randomised Trees (Geurts et al., 2006) because the latter performed best in preliminary experiments.",
"cite_spans": [
{
"start": 171,
"end": 186,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF3"
},
{
"start": 295,
"end": 316,
"text": "(Geurts et al., 2006)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bicleaner classifier",
"sec_num": "2"
},
{
"text": "The Extremely Randomised Trees classification algorithm works by selecting, at each internal node, the best feature from a sub-set of features chosen at random from the whole set of features F, and using a random cut-off point. The hyper-parameters controlling the training of these classifiers are therefore the method used to rank the features and select the best one, the size of the sub-set of features selected at random, and the number of trees to be used. To select the best hyper-parameters we performed a grid search over the following hyper-parameter values: for the ranking we tried Gini importance (Breiman et al., 1984, Ch. 4) and information gain; for the size of the sub-set of features we tried \u221a|F|, log2|F| and |F|; and for the number of trees we tried 100, 200, 300, 400 and 500.",
"cite_spans": [
{
"start": 614,
"end": 642,
"text": "(Breiman et al., 1984, Ch. 4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bicleaner classifier",
"sec_num": "2"
},
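A minimal sketch of such a grid search with scikit-learn's ExtraTreesClassifier; the data here is synthetic and the grid is deliberately reduced (the paper's actual feature matrix and the full 100-500 tree range are not reproduced):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the Bicleaner feature matrix.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Grid mirroring the paper's search space (reduced so the sketch runs quickly):
grid = {
    "criterion": ["gini", "entropy"],        # Gini importance vs. information gain
    "max_features": ["sqrt", "log2", None],  # sqrt(|F|), log2(|F|), |F|
    "n_estimators": [100, 200],              # the paper tried 100..500
}
search = GridSearchCV(ExtraTreesClassifier(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

`best_params_` then selects the configuration used to train the final classifier.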
{
"text": "The features we used can be split into two groups: those that account for the lexical similarity of the two sentences, and those based on shallow properties of the sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bicleaner classifier",
"sec_num": "2"
},
{
"text": "Bilingual lexical similarity is assessed by means of the lexical feature Qmax(S, \u0398, d), which was first described by S\u00e1nchez-Cartagena et al. (2018) and is inspired by the translation probabilities used in statistical machine translation (Koehn, 2009). It is defined as:",
"cite_spans": [
{
"start": 117,
"end": 148,
"text": "S\u00e1nchez-Cartagena et al. (2018)",
"ref_id": "BIBREF17"
},
{
"start": 238,
"end": 251,
"text": "(Koehn, 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "Qmax(S, \u0398, d) = (1/|\u0398|) \u2211_{t\u2208\u0398} max_{s\u2208S\u222a{NULL}} p(t|s; d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "where S is a source-language (SL) sentence, S is the set of tokens in S, \u0398 is the set of tokens in the target-language (TL) sentence T that appear at least once in the SL-to-TL probabilistic bilingual dictionary d, and p(t|s; d) stands for the translation probability of the target token t given the source token s according to the bilingual dictionary d. Smoothing is applied if, for a token t, max_{s\u2208S\u222a{NULL}} p(t|s; d) equals zero; in that case, this expression is set to the value of the smallest probability in d divided by 10. One can interpret that, in this case, the dictionary is providing evidence that t is unlikely to be the translation of any of the tokens in S. It is worth noting that this case differs from the case in which a token t \u2208 T does not appear in the dictionary at all; in that case, no evidence, either positive or negative, is available for it. This is why Qmax is only computed for the tokens in \u0398 instead of doing so for all the tokens in T.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
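Under the averaged-maximum reading of the definition above, Qmax and its smoothing rule can be sketched as follows (`qmax` and the `(source, target) -> probability` dictionary layout are hypothetical, not Bicleaner's actual API):

```python
def qmax(src_tokens, tgt_tokens, d):
    """Average, over tokens of T found in the dictionary vocabulary, of the
    best translation probability against the tokens of S plus NULL.
    Smoothing: a zero maximum becomes (smallest probability in d) / 10."""
    floor = min(d.values()) / 10.0
    tgt_vocab = {t for (_s, t) in d}
    theta = [t for t in tgt_tokens if t in tgt_vocab]  # tokens with evidence
    if not theta:
        return 0.0
    total = 0.0
    for t in theta:
        # None plays the role of the NULL token.
        best = max(d.get((s, t), 0.0) for s in set(src_tokens) | {None})
        total += best if best > 0.0 else floor
    return total / len(theta)
```

Tokens of T absent from the dictionary are skipped entirely, matching the distinction the paper draws between negative evidence and no evidence.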
{
"text": "The informativeness of Qmax strongly depends on the coverage of the probabilistic bilingual dictionary used. To measure the coverage of this dictionary, the feature Qmax is complemented with two additional features. Even though low-frequency words usually have more discriminatory power (Ramos, 2003), the original formulation of the Bicleaner lexical features did not take word frequency into account in any way. In order to allow the classifier to give different weights to words from different frequency ranks, we re-formulated the lexical features: Qmax now becomes a set of features",
"cite_spans": [
{
"start": 287,
"end": 300,
"text": "(Ramos, 2003)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "CoverT(T, d),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "{Qmax q (S, \u0398, d, R) | q \u2208 [1, 4]}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "While the summation in the original Qmax was computed for all the tokens in \u0398, in Qmax q it is only computed for those tokens in \u0398 that appear in the quartile q \u2208 [1, 4] of the ranking of tokens R. R sorts tokens by the logarithm of their relative frequency in a monolingual corpus; in this way, quartile q = 1 contains a large number of tokens with low frequency, while quartile q = 4 contains fewer tokens with high frequency. 5 The same adaptation is applied to obtain the set of features {CoverS",
"cite_spans": [
{
"start": 429,
"end": 430,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "q (T, d) | q \u2208 [1, 4]} and {CoverST q (S, T, d) | q \u2208 [1, 4]}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
{
"text": "As in the original Bicleaner, these features were also computed in the reverse direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical features",
"sec_num": "2.1"
},
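The ranking R can be sketched as follows: quartiles are taken over the range of log relative frequencies, so under a Zipfian distribution quartile 1 collects many rare tokens and quartile 4 only a few very frequent ones (`frequency_quartiles` is a simplified illustration, not Bicleaner's exact binning):

```python
import math
from collections import Counter

def frequency_quartiles(corpus_tokens):
    """Map each token to a quartile 1..4 of the log relative-frequency range."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    logf = {t: math.log(c / total) for t, c in counts.items()}
    lo, hi = min(logf.values()), max(logf.values())
    span = (hi - lo) or 1.0  # avoid division by zero for uniform corpora
    return {t: min(4, 1 + int(4 * (v - lo) / span)) for t, v in logf.items()}
```

Each Qmax q (and the corresponding coverage feature) would then restrict its summation to the tokens mapped to quartile q.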
{
"text": "Shallow features do not make use of bilingual lexical information and are aimed at complementing the lexical features, which may not be reliable enough in sentence pairs with poor dictionary coverage. The shallow features used can be further split into those that model sentence length and those that identify tokens and characters that give hints about the parallelness of a pair of sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "Features that model sentence length are based on the assumption that the ratio between the lengths of a pair of parallel sentences is fairly constant for a given language pair. Hence, sentence pairs that deviate too much from this ratio are not likely to be parallel. We measure how close the ratio of a given pair of sentences is to the expected one using the probability mass function of a Poisson distribution. We also provide the raw lengths to the classifier. The complete list of features based on sentence length is the following. Each of these features is computed independently for the SL sentence S and for the TL sentence T of the pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Likelihood of having a TL segment T with length (in tokens) l_T given l_S, the length of the SL segment S, and r_ts, the ratio between the lengths of TL and SL computed on a training parallel corpus; the likelihood is computed as Pr(X = l_T; \u03bb = l_S \u00b7 r_ts). This feature is also computed for S:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "Pr(X = l_S; \u03bb = l_T \u00b7 r_st). Note that Pr(X = k; \u03bb = L) = e^{\u2212L} \u00b7 L^k / k!.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Number of tokens in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Number of characters in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Average token length (in characters) in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
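The Poisson length feature follows directly from the pmf above (the length ratio would be estimated on training data; the value used below is illustrative):

```python
import math

def length_likelihood(len_t, len_s, ratio_ts):
    """Pr(X = len_t; lam = len_s * ratio_ts): the Poisson pmf used as a
    sentence-length feature."""
    lam = len_s * ratio_ts
    return math.exp(-lam) * lam ** len_t / math.factorial(len_t)
```

A pair whose lengths match the expected ratio scores markedly higher than one that deviates strongly, e.g. `length_likelihood(10, 10, 1.0)` versus `length_likelihood(30, 10, 1.0)`.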
{
"text": "Parallel pairs of sentences are also likely to share numerical expressions, punctuation marks and proper nouns. The following features aim at leveraging that information. Each of these features is computed independently for S and T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Number of punctuation marks of each type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Proportion of numerical expressions in the sentence that can be found in the other sentence of the pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Proportion of capitalised tokens in the sentence that can be found in the other sentence of the pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "Finally, character counts can also be considered hints for parallelness. They are taken into account by the following features, which are computed independently for S and T :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Number of characters in each of the main Unicode classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Number of different characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Number of occurrences of the three most frequent characters, normalised by sentence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Entropy of the string, considering each character as an event whose probability is proportional to the number of occurrences of the character in the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "\u2022 Maximum number of consecutive repetitions of the same character.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
{
"text": "Overall, 92 shallow features are used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow features",
"sec_num": "2.2"
},
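Two of the character-count features above, the string entropy and the longest run of a repeated character, can be sketched as:

```python
import math
from collections import Counter
from itertools import groupby

def char_entropy(s):
    """Entropy of the string, treating each character as an event whose
    probability is proportional to its count in the sentence."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def max_char_run(s):
    """Maximum number of consecutive repetitions of the same character."""
    return max((len(list(g)) for _, g in groupby(s)), default=0)
```

Degenerate strings (e.g. "aaaa...") get near-zero entropy and a long character run, both hints of non-linguistic content.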
{
"text": "For training the Bicleaner classifier, positive and negative samples are used. The positive samples are those found in the original parallel corpus. The negative samples are generated by corrupting the sentences in that corpus as explained next. Three types of synthetic noise are applied for corrupting the sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "\u2022 wrong alignment: parallel segments are randomly re-aligned to produce pairs of segments that are not parallel;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "\u2022 wrong segmentation: one of the sentences in the pair is truncated: a suffix starting from a random position is removed, therefore emulating an error in sentence segmentation; and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "\u2022 word replacement: a random number of words in one of the sentences of the pair is replaced by other words with similar frequency as computed on a monolingual corpus. 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "The number of corrupted sentence pairs we generated equals the size of the original parallel corpus, and the three types of synthetic noise were applied in the same proportion. The classifier is therefore trained on a set of sentences twice as large as the original parallel corpus. This strategy differs from the one followed in the 2018 submission (S\u00e1nchez-Cartagena et al., 2018) for generating corrupted sentences, where only the \"wrong alignment\" type of noise was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
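The three noise types can be sketched as below; `corrupt` and `vocab_by_freq` (a map from a word to same-frequency-band alternatives) are hypothetical helpers, and a real generator would sample the noise type and avoid re-pairing a segment with itself:

```python
import random

def corrupt(pairs, vocab_by_freq, seed=0):
    """Produce one negative sample per clean pair, cycling through the
    three noise types in equal proportion."""
    rng = random.Random(seed)
    noisy = []
    for i, (src, tgt) in enumerate(pairs):
        kind = i % 3
        if kind == 0:
            # Wrong alignment: re-pair with a randomly chosen target segment.
            _, other_tgt = rng.choice(pairs)
            noisy.append((src, other_tgt))
        elif kind == 1:
            # Wrong segmentation: remove a suffix from a random position.
            cut = rng.randrange(1, max(2, len(tgt)))
            noisy.append((src, tgt[:cut]))
        else:
            # Word replacement: swap a word for one of similar frequency.
            words = src.split()
            j = rng.randrange(len(words))
            words[j] = rng.choice(vocab_by_freq.get(words[j], [words[j]]))
            noisy.append((" ".join(words), tgt))
    return noisy
```

Appending the output of `corrupt` to the clean corpus yields the doubled training set described above.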
{
"text": "3 Re-scoring",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "Subsampling 5 million words from the raw corpus based on the score described in the previous section ensures that NMT systems are trained on parallel data. However, some of the selected parallel training samples may not bring useful information, and replacing them with other, more informative samples could improve the performance of the resulting NMT systems. We hypothesise that two main reasons could make a pair of sentences which are mutual translations non-informative: i) the sentences are not fluent enough and hence very different from those that will be translated with the resulting NMT systems (lists of keywords or website menus are examples of such non-fluent sentences); and ii) the pair of sentences is too similar to other training samples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "To take into account these additional factors, the final score assigned to each sentence pair was computed as follows. First, each sentence received a preliminary score, prescore, computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "prescore(S, T) = \u03bb \u00b7 bicleaner(S, T) + (1 \u2212 \u03bb) \u00b7 min(fluency_s(S), fluency_t(T))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
{
"text": "where S and T are respectively the SL and the TL sentences, bicleaner is the score described in Section 2, and fluency_s and fluency_t denote, respectively, fluency scores in the SL and in the TL provided by character-level language models. 7 Fluency scores were computed as the normalised perplexity of the sentence according to a 7-gram character language model estimated with KenLM (Heafield, 2011). Normalisation was aimed at placing the perplexities in the [0, 1] interval and consisted of a linear transformation that ensured that the values in the raw corpus had a mean of 0.5 and a standard deviation of 0.25. Assuming that the perplexities follow a normal distribution, 95% of the values fall into the desired range. Values lower than 0 or higher than 1 after the transformation were set to 0 and 1, respectively. 7 Values of \u03bb close to 1.0 make lists of keywords or website menus that are mutual translations receive the highest scores. Values of \u03bb around 0.5 make the top-scored segment pairs fluent, complete grammatical sentences. Values of \u03bb close to 0.0 make fluent but non-parallel sentences receive the highest scores.",
"cite_spans": [
{
"start": 230,
"end": 231,
"text": "7",
"ref_id": null
},
{
"start": 374,
"end": 390,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF9"
},
{
"start": 562,
"end": 563,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
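The normalisation and the linear combination can be sketched as follows (`normalise` and `prescore` are illustrative names; how raw KenLM perplexities are oriented into a fluency score is an assumption left to the caller):

```python
def normalise(perplexities):
    """Linear transform giving the raw-corpus values mean 0.5 and standard
    deviation 0.25, then clipping to [0, 1], as in the paper."""
    n = len(perplexities)
    mean = sum(perplexities) / n
    std = (sum((p - mean) ** 2 for p in perplexities) / n) ** 0.5
    out = [0.5 + 0.25 * (p - mean) / std for p in perplexities]
    return [min(1.0, max(0.0, x)) for x in out]

def prescore(bicleaner_score, fluency_src, fluency_tgt, lam):
    """prescore(S, T) = lam * bicleaner + (1 - lam) * min(fluency_s, fluency_t)."""
    return lam * bicleaner_score + (1 - lam) * min(fluency_src, fluency_tgt)
```

With this transform, roughly 95% of raw-corpus values land inside [0, 1] before clipping, matching the normality assumption in the text.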
{
"text": "After computing prescore, sentence pairs were sorted by that score in descending order, and the score of those pairs for which all their 3-grams could be found in sentences with a higher score was multiplied by a penalty \u03b2 to promote diversity in the subsampled corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
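The 3-gram saturation step can be sketched as a single pass over the pairs sorted by prescore (`saturation_penalty` is a hypothetical name; here n-grams are taken over the concatenated pair for brevity):

```python
def saturation_penalty(scored_pairs, beta=0.5, n=3):
    """Multiply by beta the score of any pair all of whose n-grams already
    appeared in higher-scored pairs, to promote diversity."""
    seen = set()
    out = []
    for score, src, tgt in sorted(scored_pairs, key=lambda x: -x[0]):
        toks = (src + " " + tgt).split()
        grams = {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
        if grams and grams <= seen:  # nothing novel in this pair
            score *= beta
        seen |= grams
        out.append((score, src, tgt))
    return out
```

A duplicate of a higher-scored pair thus drops by a factor of beta, while pairs contributing any unseen 3-gram keep their prescore.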
{
"text": "The values of the parameters \u03bb and \u03b2, which control the contribution of parallelness, fluency and novelty to the final score, were optimised so as to maximise the BLEU score obtained after fine-tuning the MBART model provided by the task organisers. The Nelder-Mead algorithm (Nelder and Mead, 1965), which does not require gradient computations, was used.",
"cite_spans": [
{
"start": 274,
"end": 297,
"text": "(Nelder and Mead, 1965)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modelling noise",
"sec_num": "2.3"
},
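This gradient-free search maps directly onto SciPy's Nelder-Mead implementation. In the sketch below the expensive objective (negative BLEU after fine-tuning mBART on the subsample) is replaced by a cheap surrogate with a known optimum, purely so the example runs:

```python
from scipy.optimize import minimize

def neg_bleu(params):
    """Hypothetical stand-in for '-BLEU after fine-tuning on the subsample
    selected with these (lambda, beta)'; quadratic bowl with optimum at
    lambda=0.6, beta=0.4 chosen arbitrarily for illustration."""
    lam, beta = params
    return (lam - 0.6) ** 2 + (beta - 0.4) ** 2

res = minimize(neg_bleu, x0=[0.5, 0.5], method="Nelder-Mead")
lam_opt, beta_opt = res.x
```

In the real setting each objective evaluation is one subsampling plus fine-tuning run, which is why a derivative-free method with few evaluations is attractive.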
{
"text": "This section describes the process followed to build our submission, which comprised selection of training data, corpora preprocessing, classifier training and evaluation of different alternatives for some of the steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building the submission",
"sec_num": "4"
},
{
"text": "For both language pairs, the classifier training data was built from the concatenation of all the clean parallel corpora provided by the shared-task organisers. The length ratios used in the shallow features, as well as the bilingual dictionaries, were computed on the same data. In order to build the dictionaries, the parallel sentences were word-aligned with MGIZA++. 8 Alignments were symmetrised with the grow-diag-final heuristic and the probabilities in the bilingual dictionaries were estimated afterwards by maximum likelihood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data used",
"sec_num": "4.1"
},
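The final estimation step reduces to relative-frequency counts over the symmetrised alignment links. A minimal sketch, assuming an already-aligned input (`ml_dictionary` and the link format are illustrative; MGIZA++ and the symmetrisation are external tools):

```python
from collections import Counter

def ml_dictionary(aligned_pairs):
    """Maximum-likelihood p(t|s) from word-aligned sentence pairs.
    Each item is (src_tokens, tgt_tokens, links) with links as (i, j)
    index pairs from the symmetrised alignment."""
    pair_counts = Counter()
    src_counts = Counter()
    for src_toks, tgt_toks, links in aligned_pairs:
        for i, j in links:
            pair_counts[(src_toks[i], tgt_toks[j])] += 1
            src_counts[src_toks[i]] += 1
    return {(s, t): c / src_counts[s] for (s, t), c in pair_counts.items()}
```

The resulting `(s, t) -> p(t|s)` table is the probabilistic dictionary consumed by the Qmax-style features.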
{
"text": "The Wikipedia monolingual corpus provided by the organisers was used to compute the word frequencies for word ranking R as described in Section 2.1. The same monolingual data was used to train character language models. Pashto and Khmer models were trained on the complete data. A different English language model was trained for each language pair on a random sample of the English Wikipedia corpus that matched the size of the Pashto/Khmer Wikipedia corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data used",
"sec_num": "4.1"
},
{
"text": "The clean parallel data provided by the organisers was filtered before use. Parallel sentences in which at least one side contains less than 20% of characters in the Unicode range of the corresponding language were discarded. The remaining parallel sentences were deduplicated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-filtering",
"sec_num": "4.2"
},
{
"text": "The raw sentence pairs to be scored were also pre-processed with a series of heuristic rules: the score was set to zero if any of the conditions was met. These rules were aimed at detecting segments with evident flaws and speeding up the subsequent steps. The rules were aimed at detecting the following defects in the parallel sentences:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-filtering",
"sec_num": "4.2"
},
{
"text": "\u2022 Wrong language: same Unicode filtering applied to the clean corpora (see above).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-filtering",
"sec_num": "4.2"
},
{
"text": "\u2022 Too long sentences: those with more than 1024 characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-filtering",
"sec_num": "4.2"
},
{
"text": "\u2022 Untranslated: SL and TL segments are identical after removing numerical expressions and punctuation marks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-filtering",
"sec_num": "4.2"
},
{
"text": "\u2022 Not fluent: the sentence contains elements such as URLs, arithmetic operators, too many parentheses, escaped Unicode characters, and other common defects that arise when crawling parallel corpora from the web. These elements were detected by means of regular expressions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-filtering",
"sec_num": "4.2"
},
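The four rules above can be sketched as a single predicate; the regular expressions and Unicode ranges below are illustrative, not the exact ones used in the submission:

```python
import re

def prefilter_ok(src, tgt, tgt_ranges, min_ratio=0.2, max_len=1024):
    """Return False if the pair trips any of the pre-filtering rules."""
    def letters_only(s):
        # Drop digits, punctuation and other non-word characters.
        return re.sub(r"[\d\W_]+", "", s)

    if len(src) > max_len or len(tgt) > max_len:
        return False  # too long
    in_range = sum(1 for ch in tgt for a, b in tgt_ranges if a <= ord(ch) <= b)
    if tgt and in_range / len(tgt) < min_ratio:
        return False  # wrong language: target script under 20%
    if letters_only(src) and letters_only(src) == letters_only(tgt):
        return False  # untranslated copy
    if re.search(r"https?://|www\.", src + " " + tgt):
        return False  # not fluent: URLs and similar web debris
    return True
```

Pairs rejected here are assigned a score of zero without ever reaching the classifier, which is what makes the later steps cheaper.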
{
"text": "Tokenisation and subword segmentation have been shown to improve the recall of the probabilistic dictionaries used to obtain the lexical features described in Section 2.1. We experimented with the following tokenisation and subword segmentation methods, which were applied to the clean data as well as to the raw sentences to be scored:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and word segmentation",
"sec_num": "4.3"
},
{
"text": "\u2022 Rule-based tokenisation (tok) for Pashto, Khmer and English, as provided by the tool Polyglot (Al-Rfou et al., 2013);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and word segmentation",
"sec_num": "4.3"
},
{
"text": "\u2022 Rule-based tokenisation plus word morphological segmentation with Morfessor (tok-morph). For this we used, after tokenisation, the pre-trained models for Morfessor (Virpioja et al., 2013) included in Polyglot.",
"cite_spans": [
{
"start": 166,
"end": 189,
"text": "(Virpioja et al., 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tokenisation and word segmentation",
"sec_num": "4.3"
},
{
"text": "As previously mentioned, the probabilistic bilingual dictionaries were obtained from the same parallel corpus used to train the classifier. This strategy has an important drawback: while almost all words would be found in the bilingual dictionaries when training the classifier, the coverage would be much smaller when classifying the raw sentences because of the small amount of parallel data available. In order to close the gap between training and classification, we removed some dictionary entries during training. Specifically, we removed the least frequent entries so as to ensure that the coverage of the truncated dictionaries on the training data matches the coverage of the full dictionaries on the raw sentences to be scored. Table 1 depicts the results obtained on the development data during the preparation of the submission. The system that produced the scores for our final submission is shown in bold. We first evaluated the different tokenisation alternatives described in Section 4.3, and applied the re-scoring scheme described in Section 3 on top of the best-performing one. The results show that tokenisation with Polyglot without any kind of subword segmentation (tok) leads to the best results. It is also worth mentioning the poor performance obtained with morphological segmentation, which needs to be studied more carefully. Moreover, re-scoring for increased fluency and diversity further improved the results. Table 1 also shows the results obtained by the baseline LASER model, 9 which was consistently outperformed by Bicleaner. Comparing the results of the version of Bicleaner used in this submission with that used in 2018 also shows that the changes introduced bring a positive impact.",
"cite_spans": [],
"ref_spans": [
{
"start": 738,
"end": 745,
"text": "Table 1",
"ref_id": null
},
{
"start": 1452,
"end": 1459,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Bicleaner",
"sec_num": "4.4"
},
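The coverage-matching truncation can be sketched as a greedy loop that drops the rarest entries until training-data coverage falls to the level the full dictionary reaches on the raw data (`truncate_dictionary` and its arguments are simplified, hypothetical names):

```python
def truncate_dictionary(entries, entry_freq, train_tokens, target_coverage):
    """Drop the least frequent dictionary entries until the dictionary's
    coverage of the training tokens falls to target_coverage."""
    kept = set(entries)
    # Visit entries from least to most frequent.
    for e in sorted(entries, key=lambda e: entry_freq[e]):
        covered = sum(1 for t in train_tokens if t in kept) / len(train_tokens)
        if covered <= target_coverage:
            break
        kept.discard(e)
    return kept
```

Training the classifier with the truncated dictionary exposes it to the same out-of-vocabulary rate it will face at scoring time.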
{
"text": "A shared task on parallel corpus filtering was part of the WMT conference programme for the first time in 2018 (Koehn et al., 2018). That year the task was targeted at a high-resource scenario. NMT models, which already provide the probability of a TL sentence given an SL sentence, emerged as the dominant approach (Junczys-Dowmunt, 2018). 9 These results do not exactly match those published at http://www.statmt.org/wmt20/parallel-corpus-filtering.html, probably because of differences in the GPU hardware or the random initialisation seed.",
"cite_spans": [
{
"start": 111,
"end": 131,
"text": "(Koehn et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 250,
"end": 251,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Last year's edition was focused on a low-resource scenario, where parallel data big enough to build NMT models that provide reliable TL probability distributions was not available. The best-performing model was LASER, a method based on multilingual sentence embeddings that takes advantage of the data available for multiple language pairs. In fact, a LASER model trained on 93 languages is the baseline model published by the organisers for this edition of the shared task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Unlike LASER, our submission is mainly based on lexical similarity scores analogous to those used in statistical machine translation. They are computed only on parallel data, without any kind of transfer learning from other language pairs. The approach we follow to detect sentences that are mutual translations is similar to the one by Munteanu and Marcu (2005) for detecting parallel sentences in comparable corpora. However, we use a larger set of shallow features not related to lexical similarity and follow a more sophisticated method for generating negative samples.",
"cite_spans": [
{
"start": 337,
"end": 362,
"text": "Munteanu and Marcu (2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "Concerning our re-scoring strategy for including information about fluency and diversity, participants from past editions also used these attributes to score sentences. For instance, Axelrod et al. (2019) and V\u00e1zquez et al. (2019) devised a scoring strategy under the assumption that parallel sentences should have similar monolingual language model perplexities, and many other submissions included a penalty for repetitive sentences (Gonz\u00e1lez-Rubio, 2019; Erdmann and Gwinnup, 2019; Bernier-Colborne and Lo, 2019). Nevertheless, to the best of our knowledge, our approach is the first one that directly optimises the weight of these attributes towards an automatic translation evaluation metric.",
"cite_spans": [
{
"start": 183,
"end": 204,
"text": "Axelrod et al. (2019)",
"ref_id": "BIBREF1"
},
{
"start": 209,
"end": 230,
"text": "V\u00e1zquez et al. (2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "5"
},
{
"text": "We described the joint submission of Universitat d'Alacant and Prompsit Language Engineering to the parallel corpus filtering shared task at the Fifth Conference on Machine Translation (WMT 2020). Our submission is based on Bicleaner, an open source tool based on a classifier that uses lexical similarity features inspired in the translation probabilities used in statistical machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "6"
},
{
"text": "We presented a series of improvements over the version of Bicleaner that participated in the 2018 edition of the shared task, namely a better classifier, more sophisticated generation of negative samples and a reformulation of the lexical similarity scores which takes into account word frequency. We showed that these improvements are effective and they allowed our submission to outperform LASER, a state-of-the-art method based on multilingual sentence embeddings. Moreover, combining Bicleaner scores with scores that account for fluency and diversity further improved the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "6"
},
{
"text": "We plan to keep exploring subword segmentation algorithms that help to fight data sparseness when computing lexical similarity scores with the help of bilingual dictionaries. We also aim at integrating word embeddings into lexical similarity scores, which would allow us to leverage monolingual data in a more effective way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Concluding remarks",
"sec_num": "6"
},
{
"text": "https://github.com/bitextor/bicleaner 2 Wikipedia: https://en.wikipedia.org/wiki/ Khmer_language 3 http://opus.nlpl.eu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Wikipedia: https://en.wikipedia.org/wiki/ Pashto",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Preliminary experiments showed that no gain is obtained by dividing word frequencies in more than four groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The ranking of token frequencies R described in Section 2.1 was used for this replacements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/moses-smt/mgiza. git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Work funded by the European Union through projects GoURMET and ParaCrawl. GoURMET -Global Under-Resourced Media Translation, grant agreement number 825299-is funded through the H2020 research and innovation programme. ParaCrawl -actions numbers 2017-EU-IA-0178 and 2018-EU-IA-0063-is funded under the Automated Translation CEF Telecom instrument managed by INEA at the European Commission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Polyglot: Distributed word representations for multilingual nlp",
"authors": [
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Perozzi",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Skiena",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "183--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. In Proceedings of the Seven- teenth Conference on Computational Natural Lan- guage Learning, pages 183-192, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dual monolingual cross-entropy delta filtering of noisy parallel data",
"authors": [
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Anish",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Sloto",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "245--251",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5433"
]
},
"num": null,
"urls": [],
"raw_text": "Amittai Axelrod, Anish Kumar, and Steve Sloto. 2019. Dual monolingual cross-entropy delta filtering of noisy parallel data. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 245-251, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "NRC parallel corpus filtering system for WMT 2019",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Bernier-Colborne",
"suffix": ""
},
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "252--260",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5434"
]
},
"num": null,
"urls": [],
"raw_text": "Gabriel Bernier-Colborne and Chi-kiu Lo. 2019. NRC parallel corpus filtering system for WMT 2019. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 252-260, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Random forests. Machine Learning",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "45",
"issue": "",
"pages": "5--32",
"other_ids": {
"DOI": [
"10.1023/A:1010933404324"
]
},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Machine Learn- ing, 45(1):5-32.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Classification and Regression Trees",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
},
{
"first": "Jerome",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"J"
],
"last": "Stone",
"suffix": ""
},
{
"first": "R",
"middle": [
"A"
],
"last": "Olshen",
"suffix": ""
}
],
"year": 1984,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leo Breiman, Jerome Friedman, Charles J. Stone, and R.A. Olshen. 1984. Classification and Regression Trees. Taylor & Francis.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Lowresource corpus filtering using multilingual sentence embeddings",
"authors": [
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "261--266",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5435"
]
},
"num": null,
"urls": [],
"raw_text": "Vishrav Chaudhary, Yuqing Tang, Francisco Guzm\u00e1n, Holger Schwenk, and Philipp Koehn. 2019. Low- resource corpus filtering using multilingual sentence embeddings. In Proceedings of the Fourth Confer- ence on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 261-266, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Quality and coverage: The AFRL submission to the WMT19 parallel corpus filtering for low-resource conditions task",
"authors": [
{
"first": "Grant",
"middle": [],
"last": "Erdmann",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Gwinnup",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "267--270",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5436"
]
},
"num": null,
"urls": [],
"raw_text": "Grant Erdmann and Jeremy Gwinnup. 2019. Quality and coverage: The AFRL submission to the WMT19 parallel corpus filtering for low-resource conditions task. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 267-270, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Extremely randomized trees",
"authors": [
{
"first": "Pierre",
"middle": [],
"last": "Geurts",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Ernst",
"suffix": ""
},
{
"first": "Louis",
"middle": [],
"last": "Wehenkel",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning",
"volume": "63",
"issue": "1",
"pages": "3--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pierre Geurts, Damien Ernst, and Louis Wehenkel. 2006. Extremely randomized trees. Machine Learn- ing, 63(1):3-42.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Webinterpret submission to the WMT2019 shared task on parallel corpus filtering",
"authors": [
{
"first": "Jes\u00fas",
"middle": [],
"last": "Gonz\u00e1lez-Rubio",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "271--276",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5437"
]
},
"num": null,
"urls": [],
"raw_text": "Jes\u00fas Gonz\u00e1lez-Rubio. 2019. Webinterpret submission to the WMT2019 shared task on parallel corpus fil- tering. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 271-276, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197, Edinburgh, Scotland. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Dual conditional cross-entropy filtering of noisy parallel corpora",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "888--895",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6478"
]
},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt. 2018. Dual conditional cross-entropy filtering of noisy parallel corpora. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 888-895, Belgium, Brussels. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "54--72",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5404"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Francisco Guzm\u00e1n, Vishrav Chaud- hary, and Juan Pino. 2019. Findings of the WMT 2019 shared task on parallel corpus filtering for low-resource conditions. In Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), pages 54-72, Flo- rence, Italy. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Findings of the WMT 2018 shared task on parallel corpus filtering",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Huda",
"middle": [],
"last": "Khayrallah",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Mikel",
"middle": [
"L"
],
"last": "Forcada",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "726--739",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6453"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L. Forcada. 2018. Findings of the WMT 2018 shared task on parallel corpus filtering. In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 726-739, Bel- gium, Brussels. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving machine translation performance by exploiting non-parallel corpora",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Dragos",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Munteanu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "4",
"pages": "477--504",
"other_ids": {
"DOI": [
"10.1162/089120105775299168"
]
},
"num": null,
"urls": [],
"raw_text": "Dragos Stefan Munteanu and Daniel Marcu. 2005. Im- proving machine translation performance by exploit- ing non-parallel corpora. Computational Linguis- tics, 31(4):477-504.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Simplex Method for Function Minimization",
"authors": [
{
"first": "John",
"middle": [
"A"
],
"last": "Nelder",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Mead",
"suffix": ""
}
],
"year": 1965,
"venue": "The Computer Journal",
"volume": "7",
"issue": "4",
"pages": "308--313",
"other_ids": {
"DOI": [
"10.1093/comjnl/7.4.308"
]
},
"num": null,
"urls": [],
"raw_text": "John A. Nelder and Roger Mead. 1965. A Simplex Method for Function Minimization. The Computer Journal, 7(4):308-313.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using TF-IDF to determine word relevance in document queries",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Ramos",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the first instructional conference on machine learning",
"volume": "242",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Ramos. 2003. Using TF-IDF to determine word relevance in document queries. In Proceedings of the first instructional conference on machine learn- ing, volume 242, pages 133-142. New Jersey, USA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Prompsit's submission to WMT 2018 parallel corpus filtering shared task",
"authors": [
{
"first": "M",
"middle": [],
"last": "V\u00edctor",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "S\u00e1nchez-Cartagena",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Ba\u00f1\u00f3n",
"suffix": ""
},
{
"first": "Gema",
"middle": [],
"last": "Ortiz-Rojas",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ram\u00edrez",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "955--962",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6488"
]
},
"num": null,
"urls": [],
"raw_text": "V\u00edctor M. S\u00e1nchez-Cartagena, Marta Ba\u00f1\u00f3n, Sergio Ortiz-Rojas, and Gema Ram\u00edrez. 2018. Prompsit's submission to WMT 2018 parallel corpus filtering shared task. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 955-962, Belgium, Brussels. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning joint multilingual sentence representations with neural machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Matthijs",
"middle": [],
"last": "Douze",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "157--167",
"other_ids": {
"DOI": [
"10.18653/v1/W17-2619"
]
},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk and Matthijs Douze. 2017. Learn- ing joint multilingual sentence representations with neural machine translation. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 157-167, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "The University of Helsinki submission to the WMT19 parallel corpus filtering task",
"authors": [
{
"first": "Ra\u00fal",
"middle": [],
"last": "V\u00e1zquez",
"suffix": ""
},
{
"first": "Umut",
"middle": [],
"last": "Sulubacak",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "3",
"issue": "",
"pages": "294--300",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5441"
]
},
"num": null,
"urls": [],
"raw_text": "Ra\u00fal V\u00e1zquez, Umut Sulubacak, and J\u00f6rg Tiedemann. 2019. The University of Helsinki submission to the WMT19 parallel corpus filtering task. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 3: Shared Task Papers, Day 2), pages 294-300, Florence, Italy. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Morfessor 2.0: Python implementation and extensions for morfessor baseline",
"authors": [
{
"first": "Sami",
"middle": [],
"last": "Virpioja",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Smit",
"suffix": ""
},
{
"first": "Stig-Arne",
"middle": [],
"last": "Gr\u00f6nroos",
"suffix": ""
},
{
"first": "Mikko",
"middle": [],
"last": "Kurimo",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sami Virpioja, Peter Smit, Stig-Arne Gr\u00f6nroos, and Mikko Kurimo. 2013. Morfessor 2.0: Python im- plementation and extensions for morfessor baseline. Technical report.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "which returns the percentage of unique tokens in T appearing in d, and CoverTS(S, T, d), which returns the percentage of unique tokens in T that appear in d associated with at least one token in S. All these features are also computed in the reverse direction: Qmax(\u0398, S, d ), CoverS(S, d ), and CoverST(T, S, d ), where d is a TL-to-SL probabilistic bilingual dictionary.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF0": {
"text": "BLEU scores obtained by the different configurations evaluated for Khmer-English and Pashto-English on the development environment provided by the organisers.",
"num": null,
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"2\">Khmer-English</td><td colspan=\"2\">Pashto-English</td></tr><tr><td>System</td><td colspan=\"4\">fairseq MBART fairseq MBART</td></tr><tr><td>LASER (baseline)</td><td>6.80</td><td>10.33</td><td>9.55</td><td>11.50</td></tr><tr><td>Bicleaner 2018 tok</td><td>7.45</td><td>10.16</td><td>10.11</td><td>11.85</td></tr><tr><td>Bicleaner 2020 tok</td><td>7.76</td><td>10.66</td><td>10.10</td><td>12.35</td></tr><tr><td>Bicleaner 2020 tok-morph</td><td>7.33</td><td>10.56</td><td>8.64</td><td>10.94</td></tr><tr><td>Bicleaner 2020 tok + re-score</td><td>8.25</td><td>11.18</td><td>10.53</td><td>12.80</td></tr><tr><td>Table 1:</td><td/><td/><td/><td/></tr></table>"
}
}
}
}