{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:34:05.723385Z"
},
"title": "The University of Tokyo's Submissions to the WAT 2020 Shared Task",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Ryokan",
"middle": [],
"last": "Ri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {
"addrLine": "7-3-1 Hongo, Bunkyo-ku",
"settlement": "Tokyo",
"country": "Japan"
}
},
"email": "nakazawa@logos.t.u-tokyo.ac.jp"
}
],
"year": "2020",
"venue": null,
"identifiers": {},
"abstract": "The paper describes the development process of The University of Tokyo's NMT systems that were submitted to the WAT 2020 Document-level Business Scene Dialogue Translation sub-task. We describe the data processing workflow, NMT system training architectures, and automatic evaluation results. For the WAT 2020 shared task, we submitted 26 systems (both with and without using other resources) for English-Japanese and Japanese-English translation directions. The submitted systems were trained using Transformer models and one was an SMT baseline.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper describes the development process of The University of Tokyo's NMT systems that were submitted to the WAT 2020 Document-level Business Scene Dialogue Translation sub-task. We describe the data processing workflow, NMT system training architectures, and automatic evaluation results. For the WAT 2020 shared task, we submitted 26 systems (both with and without using other resources) for English-Japanese and Japanese-English translation directions. The submitted systems were trained using Transformer models and one was an SMT baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We describe the machine translation (MT) systems submitted to the WAT 2020 Document-level Business Scene Dialogue Translation sub-task developed by the team of The University of Tokyo. We chose the identifier of our team to be ut-mrt, which specifies our affiliation (The University of Tokyo) and first names (Mat\u012bss, Ryokan, Toshiaki) . We participated in both EN\u2192JA and JA\u2192EN translation directions. We experimented with mixing and matching several data sets, data processing approaches and training methods.",
"cite_spans": [
{
"start": 309,
"end": 335,
"text": "(Mat\u012bss, Ryokan, Toshiaki)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main findings are: 1) using source side context mainly improves EN\u2192JA MT, but not always, and mainly degrades or leaves little impact on JA\u2192EN MT; 2) there are no better data than more data: we see the biggest improvements from using larger training data sets; and 3) optimiser delay (Bogoychev et al., 2018) can help a lot: setting the optimiser delay to 8 instead of the default of 1 increased BLEU scores by more than 1.5 in both translation directions.",
"cite_spans": [
{
"start": 288,
"end": 312,
"text": "(Bogoychev et al., 2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We used multiple dataset combinations to train our models for the shared task. We also filtered some of the larger automatically collected data sets, which are usually noisier and contain duplicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "Aside from using only the provided BSD training dataset (BSD 20 (Rikters et al., 2019) ), we had access to an extended version of the BSD four times the size (BSD 80), as well as two other similar corpora: AMI Meeting corpus (AMI) and a parallel version of OntoNotes 5.0 (ON) (Rikters et al., 2020) . We also experimented with using the jParaCrawl (Morishita et al., 2019) dataset, data from WMT 2020 1 (which includes jParaCrawl, TED Talks (Cettolo et al., 2012) , The Kyoto Free Translation Task Corpus (Neubig, 2011) , Japanese-English Subtitle Corpus (Pryzant et al., 2018) , WikiMatrix (Schwenk et al., 2019) and Wiki Titles v2), and a proprietary document-aligned news dataset gathered from several sources. The full training data statistics are shown in Table 1 . The AMI and jParaCrawl corpora contain many duplicates while the rest seem to be of higher quality. Filtered BSD 20 20,000 18,818 17,672 BSD 80 80,629 74,377 69,742 AMI 110,483 75,660 57,046 ON 28,429 24,335 18,348 WMT 17,880,587 16,501,296 13,035,839 jParaCrawl 10,105,351 8,790,618 7,087,631 News 1,104,549 1,101,751 956,654 Table 1 : Total, unique data amounts and after filtering for the noisiest corpora.",
"cite_spans": [
{
"start": 64,
"end": 86,
"text": "(Rikters et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 276,
"end": 298,
"text": "(Rikters et al., 2020)",
"ref_id": "BIBREF19"
},
{
"start": 348,
"end": 372,
"text": "(Morishita et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 441,
"end": 463,
"text": "(Cettolo et al., 2012)",
"ref_id": "BIBREF2"
},
{
"start": 505,
"end": 519,
"text": "(Neubig, 2011)",
"ref_id": "BIBREF13"
},
{
"start": 555,
"end": 577,
"text": "(Pryzant et al., 2018)",
"ref_id": "BIBREF16"
},
{
"start": 591,
"end": 613,
"text": "(Schwenk et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 761,
"end": 768,
"text": "Table 1",
"ref_id": null
},
{
"start": 871,
"end": 1132,
"text": "Filtered BSD 20 20,000 18,818 17,672 BSD 80 80,629 74,377 69,742 AMI 110,483 75,660 57,046 ON 28,429 24,335 18,348 WMT 17,880,587 16,501,296 13,035,839 jParaCrawl 10,105,351 8,790,618 7,087,631 News 1,104,549 1,101,751 956,654 Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "2"
},
{
"text": "We used data filtering methods described by Rikters (2018) to remove the noisiest parts of the corpora for experiments involving jParaCrawl. The filtering process consists of the following filters: 1) unique parallel sentence filter, which removes duplicate parallel sentences; 2) equal source-target filter, which removes parallel sentences that are identical in both languages; 3) multiple sources-one target and multiple targets-one source filters; 4) non-alphabetical filters: remove sentences having a majority of characters outside the scope of the specified language; 5) repeating token filter, which removes sentences that have several repeating tokens or phrases in a row; and 6) correct language filter, which uses language identification (Lui and Baldwin, 2012) to remove parallel sentences where the identified language does not match the expected one. Data amounts after filtering are shown in the final column of Table 1 . In line with their duplicate counts, AMI and jParaCrawl were filtered the most, along with the WMT data set.",
"cite_spans": [
{
"start": 44,
"end": 58,
"text": "Rikters (2018)",
"ref_id": "BIBREF17"
},
{
"start": 749,
"end": 772,
"text": "(Lui and Baldwin, 2012)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 927,
"end": 934,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Filtering",
"sec_num": "2.1"
},
{
"text": "For pre-processing we used only SentencePiece (Kudo and Richardson, 2018) to create a shared vocabulary with size depending on the total training data set size for the specific experiment. The vocabulary size was set to 3,000 tokens for experiments with only BSD 20 data, 8,000 for experiments with BSD 80 data, 16,000 when using BSD 80 / AMI / ON together and 32,000 tokens for experiments involving WMT, jParaCrawl or News data. We did not perform other tokenisation or truecasing for the training data. We used Mecab (Kudo, 2006) to tokenise the Japanese side of the evaluation data; the tokenised text was used only for scoring. The English side remained as-is.",
"cite_spans": [
{
"start": 46,
"end": 73,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF9"
},
{
"start": 520,
"end": 532,
"text": "(Kudo, 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Filtering",
"sec_num": "2.1"
},
{
"text": "We separate our submissions into three main categories by model configuration type: statistical MT (SMT) baseline models, NMT models, and NMT models with context. The latter category also includes models with tags specifying the domain (or rather, the training corpus used).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": "3"
},
{
"text": "We trained SMT baseline systems using the Moses (Koehn et al., 2007) toolkit in the Tilde MT platform (Vasi\u013cjevs et al., 2012) . The SMT systems consist of: word alignment performed using fastalign (Dyer et al., 2013) ; 7-gram translation models and the wbe-msd-bidirectional-fe-allff reordering models; a language model trained with KenLM (Heafield, 2011) ; models tuned using the improved MERT (Bertoldi et al., 2009) .",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF7"
},
{
"start": 102,
"end": 126,
"text": "(Vasi\u013cjevs et al., 2012)",
"ref_id": "BIBREF24"
},
{
"start": 198,
"end": 217,
"text": "(Dyer et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 340,
"end": 356,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF4"
},
{
"start": 396,
"end": 419,
"text": "(Bertoldi et al., 2009)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SMT",
"sec_num": "3.1"
},
{
"text": "For the sentence-level NMT systems, we used Sockeye (Hieber et al., 2017) or Marian (Junczys-Dowmunt et al., 2018) to train transformer architecture (Vaswani et al., 2017 ) models with several different parameter configurations until convergence on development data (no improvement on validation perplexity for 10 checkpoints). Each model was trained on a single Nvidia TITAN V (12GB) GPU, and training time was about 2-3 days for models with only BSD/ON/AMI data and about 5-6 days when using WMT/News/jParaCrawl data.",
"cite_spans": [
{
"start": 52,
"end": 73,
"text": "(Hieber et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 84,
"end": 114,
"text": "(Junczys-Dowmunt et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 149,
"end": 170,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT",
"sec_num": "3.2"
},
{
"text": "The main reason for using two different toolkits is that Marian currently does not support source side input factors, which help when training models with context. However, Sockeye does not support using optimiser delay, which enables training with larger batch sizes and significantly improves the final outcome. Differences in the model and data configurations are as follows: We experimented with two different approaches of domain adaptation. For models trained with Marian, the usual approach of resetting convergence parameters and swapping out the full training data set with a 1:1 mix of domain data (BSD corpus) and an equal-sized random subset of the remaining data worked fine. This, however, did not work as well for models trained with Sockeye when following the domain adaptation tutorial 2 . In this case we augmented the training data by adding a domain specifying tag (<AMI>, <BSD> or <ON>) (Tars and Fishel, 2018) at the beginning of each source sentence of training, development and evaluation data. The domain tag approach led to an increase in BLEU score similar to that of the usual domain adaptation approach.",
"cite_spans": [
{
"start": 908,
"end": 931,
"text": "(Tars and Fishel, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT",
"sec_num": "3.2"
},
{
"text": "To train our context-aware systems, we experimented with the approach of sentence concatenation (Tiedemann and Scherrer, 2017) with source side factors (Sennrich and Haddow, 2016) . We use the Sockeye toolkit and similar parameters as in our sentence-level systems. For the concatenation context-aware MT, we experimented with two approaches: 1) prepending the previous sentence from the same document, followed by a beginning of sentence tag <bos>, to the source sentence; 2) in addition, providing source side factors to specify if a token represents context or the source sentence.",
"cite_spans": [
{
"start": 96,
"end": 126,
"text": "(Tiedemann and Scherrer, 2017)",
"ref_id": "BIBREF23"
},
{
"start": 152,
"end": 179,
"text": "(Sennrich and Haddow, 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT with Context",
"sec_num": "3.3"
},
{
"text": "The source side factors that we used for training were either C or S, representing context and the actual source sentence respectively. Examples of source sentences with context and factors are shown in Table 2 . The first sentence in the table has no previous context, as it is the first one in the respective document. The second sentence has the first one as context, followed by a beginning of sentence tag <bos>, and so on. ",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 210,
"text": "Table 2",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "NMT with Context",
"sec_num": "3.3"
},
{
"text": "<bos> \u306f\u3044 \u3001 G \u793e \u304a\u5ba2\u69d8 \u76f8 \u8ac7 \u5ba4 \u306e \u30b1 \u30a4 \u30c8 \u3067\u3059 \u3002 \u306f\u3044 \u3001 G \u793e \u304a\u5ba2\u69d8 \u76f8 \u8ac7 \u5ba4 \u306e \u30b1 \u30a4 \u30c8 \u3067\u3059 \u3002<bos> \u3054 \u7528 \u4ef6 \u306f ? Source side factors C S S S S S S S S S S S S S S C C C C C C C C C C C C C C C S S S S S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source sentences",
"sec_num": null
},
{
"text": "We use the SacreBLEU 3 tool (Post, 2018) to evaluate automatic translations and calculate BLEU scores (Papineni et al., 2002) in Table 3 , which contains results from the intermediate models that were not submitted to the shared task evaluation site. This table shows the incremental BLEU score improvements of switching between the base and small configurations of the transformer model, model averaging, enabling optimiser delay and domain adaptation. It also shows that BLEU scores go both up and down when adding context sentences to the source side. We did not compare how data filtering impacts the final result, but filtering was only performed in experiment settings that involved the jParaCrawl corpus, which was the largest overall and contained the majority of the noisy data.",
"cite_spans": [
{
"start": 28,
"end": 40,
"text": "(Post, 2018)",
"ref_id": "BIBREF15"
},
{
"start": 102,
"end": 125,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 129,
"end": 136,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Both result tables show that adding training data improves BLEU scores. Ideally, we would have wanted the jParaCrawl and all WMT corpora to be document-aligned to be able to train the context-aware models using the complete data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Amount",
"sec_num": null
},
{
"text": "We first experimented with incrementally adding all of the document-level data available to us (BSD 80, AMI, ON, News) and compared how using context impacts the final translation. Then, we switched to sentence-level-only experiments and added jParaCrawl and the rest of the WMT20 corpora to the mix, which finally led to our highest-scoring models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Amount",
"sec_num": null
},
{
"text": "For experiments using only the provided training data from the shared task, it is clear that the transformer-base model was too big to efficiently utilise the small amount of data. It is interesting that for EN\u2192JA the SMT model outperformed all NMT models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": null
},
{
"text": "Rows 9-12 of Table 3 show incremental improvements while using the same training data and seemingly the same transformer-base configuration. We first switched from Sockeye to Marian and saw an immediate improvement of about 1 BLEU. Later, we found that this was due to some default parameters being different or not set in Sockeye, and after aligning the parameters 4 we were able to train comparable models. However, Sockeye does not support the optimiser delay feature that can be used in Marian to increase the effective training batch size and simulate training on larger GPUs, which in turn leads to higher final BLEU scores. Domain adaptation / tuning is another feature / strategy that supposedly works in both toolkits, but seems to lead to greater gains in Marian. Rows 11 and 12 show the improvements from optimiser delay and domain adaptation, which are about 1.6 BLEU and about 1.7 BLEU on average, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Model Configurations",
"sec_num": null
},
{
"text": "By prepending the previous sentence as context for each training, development, and test data content sentence we were expecting to see slight improvements in both translation directions. We did, however, find that this leads to a drop in scores for all of our JA\u2192EN experiments (rows in Tables 3 and 4, where the difference between adjacent configurations is Ctx). Of the 5 comparable EN\u2192JA experiments, adding context improved the scores in 3 cases.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 302,
"text": "Tables 3 and 4",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Context",
"sec_num": null
},
{
"text": "Automatic evaluation results from the submission website are shown in Table 4 . The abbreviations used in the table are explained in Section 3.2. Several of our models ranked in the top-5 in each translation direction according to the automatic evaluation. By looking at the results, it is clear that having the larger BSD corpus gave us a big and perhaps unfair advantage. It is also evident both here and in the human evaluation results that just adding larger amounts of any parallel data leads to improvements in BLEU scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.1"
},
{
"text": "Results of the human evaluation are summarised in Table 5 . For the human evaluation, we chose to submit our highest-scoring context-aware systems along with their otherwise identical context-agnostic alternatives in order to better understand the benefits or drawbacks of adding previous context. We added all human-evaluated results to the table and gathered the configurations of other teams' models from descriptions on the evaluation site 5 . Unlike the BLEU and RIBES scores, which were higher for the context-aware version in the EN\u2192JA direction, it seems that the evaluators preferred the context-agnostic model output in both translation directions.",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.2"
},
{
"text": "We were also fortunate enough to have our overall highest-scoring submissions evaluated by humans, confirming that they truly were in the top-2 for both translation directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.2"
},
{
"text": "The paper described the development process of The University of Tokyo's MT systems that were submitted for the WAT 2020 Document-level Business Scene Dialogue Translation sub-task. Among other things, we experimented with adding previous context to training data, larger batches and domain specifying tags. While we did find some slight BLEU score improvements when training context-aware models, document-aligned data required to train them are still rare and rather small in size. More substantial improvements were gained by simply adding all available sentence-aligned Table 5 : Human evaluation results ordered by the human adequacy score on a scale of 0.00 to 5.00; the higher, the better. All EN\u2192JA BLEU scores are an average of the 3 tokeniser versions (Juman, Kytea and Mecab). In addition to the previously introduced abbreviations, BT stands for back-translation, Doc-lvl means document level, the + signifies other unmentioned corpora that were used, and the remaining abbreviations are corpora that the other shared task participants used.",
"cite_spans": [],
"ref_spans": [
{
"start": 574,
"end": 581,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "corpora and training regular NMT models. In contrast to our expectation that the context-aware models would be superior at least for the EN\u2192JA translation direction, where we saw gains in BLEU scores, results from the human evaluation showed otherwise. We believe that a more sophisticated training method may be required to take full advantage of the document-aligned data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We did not perform any back-translation of monolingual business dialogue or similar corpora, nor did we train transformer-big models or perform model distillation. All of these are popular methods known to improve the final results in similar shared tasks. Our intuition is that they would further improve the final outcome by several BLEU points, but due to time constraints we chose not to pursue them. In total, 26 systems were submitted for the English\u2194Japanese language pair and four of them to the human evaluation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.statmt.org/wmt20/translation-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://awslabs.github.io/sockeye/tutorials/adapt.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Version string: BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.2.21",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Initial learning rate: 0.003; transformer activation type: swish; optimizer params: beta1:0.9, beta2:0.98, epsilon:0.000000001; transformer dropout: 0.1;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://lotus.kuee.kyoto-u.ac.jp/WAT/evaluation/index.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by \"Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation\", the Commissioned Research of National Institute of Information and Communications Technology (NICT), JAPAN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improved Minimum Error Rate Training in Moses",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Jean-Baptiste",
"middle": [],
"last": "Fouet",
"suffix": ""
}
],
"year": 2009,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "91",
"issue": "1",
"pages": "7--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Bertoldi, Barry Haddow, and Jean-Baptiste Fouet. 2009. Improved Minimum Error Rate Training in Moses. The Prague Bulletin of Mathematical Linguistics, 91(1):7--16.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Accelerating asynchronous stochastic gradient descent for neural machine translation",
"authors": [
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Alham Fikri",
"middle": [],
"last": "Aji",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2991--2996",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1332"
]
},
"num": null,
"urls": [],
"raw_text": "Nikolay Bogoychev, Kenneth Heafield, Alham Fikri Aji, and Marcin Junczys-Dowmunt. 2018. Accelerating asynchronous stochastic gradient descent for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2991-2996, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WIT3: Web inventory of transcribed and translated talks",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Girardi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT)",
"volume": "",
"issue": "",
"pages": "261--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pages 261-268, Trento, Italy.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A simple, fast, and effective reparameterization of IBM Model 2",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Chahuneau",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of NAACL-HLT 2013",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameterization of IBM Model 2. In Proceedings of NAACL-HLT 2013. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "KenLM: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sockeye: A toolkit for neural machine translation",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Vilar",
"suffix": ""
},
{
"first": "Artem",
"middle": [],
"last": "Sokolov",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Clifton",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2017. Sockeye: A toolkit for neural machine translation. ArXiv e-prints.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Marian: Fast Neural Machine Translation in C++",
"authors": [
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Tomasz",
"middle": [],
"last": "Dwojak",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Neckermann",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Seide",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Alham Fikri",
"middle": [],
"last": "Aji",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Andr\u00e9 F. T.",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of ACL 2018, System Demonstrations",
"volume": "",
"issue": "",
"pages": "116--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr\u00e9 F. T. Martins, and Alexandra Birch. 2018. Marian: Fast Neural Machine Translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116-121, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "MeCab: Yet another part-of-speech and morphological analyzer",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo. 2006. MeCab: Yet another part-of-speech and morphological analyzer. http://mecab.sourceforge.jp.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "2012. langid.py: An off-the-shelf language identification tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the ACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Pro- ceedings of the ACL 2012 System Demonstrations, pages 25-30. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Jparacrawl: A large scale web-based english-japanese parallel corpus",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.10668"
]
},
"num": null,
"urls": [],
"raw_text": "Makoto Morishita, Jun Suzuki, and Masaaki Na- gata. 2019. Jparacrawl: A large scale web-based english-japanese parallel corpus. arXiv preprint arXiv:1911.10668.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of the 7th workshop on Asian translation",
"authors": [
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Nakayama",
"suffix": ""
},
{
"first": "Chenchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Hideya",
"middle": [],
"last": "Mino",
"suffix": ""
},
{
"first": "Isao",
"middle": [],
"last": "Goto",
"suffix": ""
},
{
"first": "Win",
"middle": [
"Pa"
],
"last": "Pa",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 7th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ond\u0159ej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian transla- tion. In Proceedings of the 7th Workshop on Asian Translation, Suzhou, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The Kyoto free translation task",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "JESC: Japanese-English Subtitle Corpus. Language Resources and Evaluation Conference (LREC)",
"authors": [
{
"first": "R",
"middle": [],
"last": "Pryzant",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Britz",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Pryzant, Y. Chung, D. Jurafsky, and D. Britz. 2018. JESC: Japanese-English Subtitle Corpus. Language Resources and Evaluation Conference (LREC).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Impact of Corpora Quality on Neural Machine Translation",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 8th Conference Human Language Technologies -The Baltic Perspective",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mat\u012bss Rikters. 2018. Impact of Corpora Quality on Neural Machine Translation. In In Proceedings of the 8th Conference Human Language Technologies - The Baltic Perspective (Baltic HLT 2018), Tartu, Es- tonia.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Designing the business conversation corpus",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": ""
},
{
"first": "Ryokan",
"middle": [],
"last": "Ri",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 6th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "54--61",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5204"
]
},
"num": null,
"urls": [],
"raw_text": "Mat\u012bss Rikters, Ryokan Ri, Tong Li, and Toshiaki Nakazawa. 2019. Designing the business conversa- tion corpus. In Proceedings of the 6th Workshop on Asian Translation, pages 54-61, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Document-aligned japaneseenglish conversation parallel corpus",
"authors": [
{
"first": "Mat\u012bss",
"middle": [],
"last": "Rikters",
"suffix": ""
},
{
"first": "Ryokan",
"middle": [],
"last": "Ri",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mat\u012bss Rikters, Ryokan Ri, Tong Li, and Toshi- aki Nakazawa. 2020. Document-aligned japanese- english conversation parallel corpus. In Proceedings of the Fifth Conference on Machine Translation: Vol- ume 1, Research Papers, Punta Cana, Dominican Re- public. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Wikimatrix: Mining 135m parallel sentences in 1620 language pairs from wikipedia",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Hongyu",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm\u00e1n. 2019. Wiki- matrix: Mining 135m parallel sentences in 1620 lan- guage pairs from wikipedia.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Linguistic input features improve neural machine translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "83--91",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2209"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich and Barry Haddow. 2016. Linguistic input features improve neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Papers, pages 83- 91, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-domain neural machine translation",
"authors": [
{
"first": "Sander",
"middle": [],
"last": "Tars",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Twenty-first Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sander Tars and Mark Fishel. 2018. Multi-domain neural machine translation. In Proceedings of the Twenty-first Annual Conference of the European As- sociation for Machine Translation (EAMT 2018).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Neural machine translation with extended context",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Scherrer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Third Workshop on Discourse in Machine Translation",
"volume": "",
"issue": "",
"pages": "82--92",
"other_ids": {
"DOI": [
"10.18653/v1/W17-4811"
]
},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann and Yves Scherrer. 2017. Neural ma- chine translation with extended context. In Proceed- ings of the Third Workshop on Discourse in Machine Translation, pages 82-92, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "LetsMT!: A Cloud-Based Platform for Do-It-Yourself Machine Translation",
"authors": [
{
"first": "Andrejs",
"middle": [],
"last": "Vasi\u013cjevs",
"suffix": ""
},
{
"first": "Raivis",
"middle": [],
"last": "Skadi\u0146\u0161",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "43--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrejs Vasi\u013cjevs, Raivis Skadi\u0146\u0161, and J\u00f6rg Tiedemann. 2012. LetsMT!: A Cloud-Based Platform for Do- It-Yourself Machine Translation. In Proceedings of the ACL 2012 System Demonstrations, July, pages 43-48, Jeju Island, Korea. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Sockeye Transformer base (T.bas) -6 layers -Transformer small (T.sm) -4 layers -One previous context sentence (Ctx) -Domain tags (Dom) -Average of 4 best models (Avg) \u2022 Marian -Transformer base (T.bas) -6 layers -Optimiser delay of 8 -Domain adaptation (Tun) -Ensemble of 2 best models (Ens)",
"type_str": "figure",
"num": null
},
"TABREF0": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Examples of training data source sentences and the respective source side factors for the concatenated context-aware experiments."
},
"TABREF2": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Automatic evaluation results of models that were not submitted to the shared task evaluation site. All EN\u2192JA scores are calculated on references and outputs tokenised with Mecab. Configuration details are split by vertical lines, where the first part specifies the model type (Transformer -small or base), next are the corpora used for training, following by additional data/model details (domain tags, context, optimiser delay, domain adaptation, model averaging)."
},
"TABREF4": {
"content": "<table><tr><td>Configuration</td></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "Automatic evaluation results of the submitted systems in BLEU and RIBES. All EN\u2192JA scores are an average of the 3 tokeniser versions (Juman, Kytea and Mecab). The first two groups of rows were trained with Sockeye and the last group was trained with Marian. Configuration details are split by vertical lines, where the first part specifies the model type (SMT or Transformer -small or base), next are the corpora used for training, following by additional data/model details (domain tags, context, domain adaptation), and finally if either model averaging (only for Sockeye) or ensembling (only for Marian) was used. Configurations marked in a bold font were submitted for human evaluation."
}
}
}
}