| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T04:34:14.870023Z" |
| }, |
| "title": "Inference-only sub-character decomposition improves translation of unseen logographic characters", |
| "authors": [ |
| { |
| "first": "Danielle", |
| "middle": [], |
| "last": "Saunders", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Cambridge", |
| "location": { |
| "country": "UK" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Weston", |
| "middle": [], |
| "last": "Feely", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Cambridge", |
| "location": { |
| "country": "UK" |
| } |
| }, |
| "email": "wesfeely@gmail.com" |
| }, |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Cambridge", |
| "location": { |
| "country": "UK" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Neural Machine Translation (NMT) on logographic source languages struggles when translating 'unseen' characters, which never appear in the training data. One possible approach to this problem uses sub-character decomposition for training and test sentences. However, this approach involves complete retraining, and its effectiveness for unseen character translation to non-logographic languages has not been fully explored. We investigate existing ideograph-based sub-character decomposition approaches for Chinese-to-English and Japanese-to-English NMT, for both high-resource and low-resource domains. For each language pair and domain we construct a test set where all source sentences contain at least one unseen logographic character. We find that complete sub-character decomposition often harms unseen character translation, and gives inconsistent results generally. We offer a simple alternative based on decomposition before inference for unseen characters only. Our approach allows flexible application, achieving translation adequacy improvements and requiring no additional models or training.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Neural Machine Translation (NMT) on logographic source languages struggles when translating 'unseen' characters, which never appear in the training data. One possible approach to this problem uses sub-character decomposition for training and test sentences. However, this approach involves complete retraining, and its effectiveness for unseen character translation to non-logographic languages has not been fully explored. We investigate existing ideograph-based sub-character decomposition approaches for Chinese-to-English and Japanese-to-English NMT, for both high-resource and low-resource domains. For each language pair and domain we construct a test set where all source sentences contain at least one unseen logographic character. We find that complete sub-character decomposition often harms unseen character translation, and gives inconsistent results generally. We offer a simple alternative based on decomposition before inference for unseen characters only. Our approach allows flexible application, achieving translation adequacy improvements and requiring no additional models or training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "While Neural Machine Translation (NMT) has evolved rapidly in recent years, not all of its successful techniques are equally applicable to all language pairs. A particular example is the representation and translation of unseen tokens, which do not appear in the training data. With techniques like subword decomposition (Sennrich et al., 2016) , an unseen word in an alphabetic language can in the worst case be represented as a sequence of characters. Since alphabetic languages usually have few unique characters, it is reasonable to assume that all of these 'backoff' characters will be present in the limited model vocabulary. (* Now at Amazon; work performed while at SDL.)", |
| "cite_spans": [ |
| { |
| "start": 321, |
| "end": 344, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We focus instead on the translation of unseen Chinese and Japanese logographic characters into alphabetic languages, a task that remains a challenge for NMT. Logographic writing systems may have many thousands of logograms, each representing at least one word, morpheme or concept as well as conveying phonetic and prosodic information. Inevitably some characters will either not be present in the training data, or will be present but too rare to be included in the vocabulary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "If the model is required to translate a previously unseen character, it will usually be replaced with an UNK (unknown word) token. The most likely outcome is that it will be ignored by the translation model, which will instead rely on the context of the unseen character to produce the translation. In the worst case, the presence of a previously-unseen character at inference time may harm the translation quality. This is a particular concern for NMT in low-resource domains, when a model is less able to rely on lexical context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Many logographic characters share sub-character components 1 , which can carry semantic or phonetic meaning (Table 1 ). An intuitive approach to the logogram sparsity problem in NLP uses sub-character decompositions in place of characters. Sub-character work in NMT has focused on the use of shared sub-characters to improve Chinese-Japanese translation (Zhang and Komachi, 2018, 2019) . In this approach all logograms are decomposed, and subword vocabularies are learned over sub-character sequences.", |
| "cite_spans": [ |
| { |
| "start": 354, |
| "end": 385, |
| "text": "(Zhang and Komachi, 2018, 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 108, |
| "end": 116, |
| "text": "(Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We identify two motivations for using sub- characters in logographic NMT:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Char Meaning Sub-chars Semantic sub-char \u68ee Forest \u6728\u6728\u6728 \u6728 (Tree) \u9c2f Sardine \u9b5a\u5f31 \u9b5a (Fish) \u6821 School \u6728\u4ea4 \u6728 (Tree)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "1. Sharing vocabularies between languages with similar sub-character decompositions, as in Chinese-Japanese translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2. Representing unseen characters -those not appearing in the training data -in semantically meaningful ways.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our hypothesis is that, while complete sub-character decomposition for all characters might be useful in case 1, in case 2 only some characters benefit from decomposition, and then primarily from its semantic elements. The focus of this work is case 2. Our contributions are as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We compare ideograph-based sub-character schemes for Chinese-to-English and Japanese-to-English NMT with a strong BPE subword baseline, for both high- and low-resource domain translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We evaluate both on general test sets, and on challenge sets which we construct such that all sentences have at least one character that was not seen in the training data. To the best of our knowledge, this is the first attempt to analyze the impact of sub-character decomposition on unseen character translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We demonstrate that, counter-intuitively, training models with indiscriminate sub-character decomposition can harm unseen character translation, and also gives inconsistent performance on sentences with no unseen characters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We instead propose a set of extremely straightforward inference-time sub-character decomposition schemes requiring no additional models or training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In NMT, applying radical decomposition before learning a Byte Pair Encoding (BPE, Sennrich et al. (2016) ) vocabulary has been shown to improve Chinese-Japanese supervised and unsupervised translation over a standard character BPE representation (Zhang and Komachi, 2018, 2019) . However, Chinese-Japanese translation benefits from a high proportion of shared sub-characters. The impact of sub-character decomposition for translating unseen logographic characters to an alphabetic language that cannot share sub-characters has not been fully explored.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 104, |
| "text": "(BPE, Sennrich et al. (2016)", |
| "ref_id": null |
| }, |
| { |
| "start": 246, |
| "end": 277, |
| "text": "(Zhang and Komachi, 2018, 2019)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Zhang and Komachi (2018) also explore sub-character decomposition for translation from Chinese and Japanese into English. However, they translate to word-based English sentences instead of a stronger BPE representation, and they do not assess the effect of sub-character decomposition on unseen characters. Kuang and Han (2018) likewise train NMT models with factored sub-character information for Chinese-to-English translation but use words instead of BPE units as their baseline decomposition for both languages. do explore sub-character decomposition for BPE-based Chinese-to-English NMT. They find that training with sub-character decomposition alone does not give quality improvements in this case, although it has the practical advantage of a smaller vocabulary size. Our findings echo these, but we confirm them for Japanese-to-English translation and translation of unseen characters specifically.", |
| "cite_spans": [ |
| { |
| "start": 307, |
| "end": 327, |
| "text": "Kuang and Han (2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Outside of translation, use of sub-character decomposition has been shown to improve learning of character embeddings for Chinese (Sun et al., 2014) and language modelling for Japanese (Nguyen et al., 2017) . Sub-character decomposition has also been applied to sentiment classification (Ke and Hagiwara, 2017 ), text classification (Toyama et al., 2017) and word similarity tasks, the last with mixed results (Karpinska et al., 2018) .", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 148, |
| "text": "(Sun et al., 2014)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 185, |
| "end": 206, |
| "text": "(Nguyen et al., 2017)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 287, |
| "end": 309, |
| "text": "(Ke and Hagiwara, 2017", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 333, |
| "end": 354, |
| "text": "(Toyama et al., 2017)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 410, |
| "end": 434, |
| "text": "(Karpinska et al., 2018)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "We consider work on NMT with byte-level subwords (Costa-juss\u00e0 et al., 2017; as complementary to this work. Representing text at the level of bytes allows any logographic character with a Unicode representation to be included in the model vocabulary. However, inclusion in the vocabulary does not guarantee that the model learns a good character representation, and such schemes do not leverage the semantic or phonetic information available in sub-character decompositions of unseen characters.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 75, |
| "text": "(Costa-juss\u00e0 et al., 2017;", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related work", |
| "sec_num": "1.1" |
| }, |
| { |
| "text": "Over 80% of Chinese characters can be broken down into both a semantic and a phonetic component (Liu et al., 2010) . The semantic meaning of a Chinese character often corresponds to the sub-character occupying its top or left position (Hoosain, 1991 ). These may be -but are not always -radicals: sub-characters that cannot be broken down any further. However, radicals in these positions are not necessarily directly meaningful. For example, radical \u9b5a ('fish') has a clear semantic relationship with the character \u9c2f ('sardine'), but the semantic connection of radical \u6728 ('tree') to character \u6821 ('school') is more abstract. Example decompositions are given in Table 1 . The phonetic component is less likely to be helpful for translation to a non-logographic language, except in the case of transliterations.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 114, |
| "text": "(Liu et al., 2010)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 235, |
| "end": 249, |
| "text": "(Hoosain, 1991", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 660, |
| "end": 667, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sub-character decomposition", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We first explore the impact of two variations on ideograph-based sub-character decomposition applied to all characters in the source language. Following Zhang and Komachi (2019) we use decomposition information from the CHISE project 2 , which provides ideograph sequences for CJK (Chinese-Japanese-Korean) characters. As well as ideographs, the sequences include ideographic description characters (IDCs), which convey the structure of an ideograph. While Zhang and Komachi (2019) use IDC information for Chinese-Japanese translation, use of structural sub-character information has not yet been explored for NMT to an alphabetic language. IDCs may convey useful information about which sub-character component is likely to be the semantic or phonetic component, but they also make character representations significantly longer. We therefore compare training with sub-character decompositions with and without the IDCs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training with sub-character decomposition", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Applying sub-character decomposition to all characters for training decreases the vocabulary size, but significantly lengthens source sequences. Additionally, these schemes apply decomposition to all source characters, regardless of whether they benefit from decomposition. We propose an alternative approach which applies sub-character decomposition only to unseen characters at inference time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We apply decomposition if a test source sentence 1) contains an unseen character which 2) can be decomposed into at least one sub-character that is already present in the vocabulary. We do not include the entire decomposition, but keep only the sub-characters already in the model vocabulary. We experiment with both keeping all in-vocabulary sub-characters, and keeping only the leftmost in-vocabulary sub-character, which is frequently the semantic component. We consider the left-only approach to be a reasonable heuristic since in many cases other components do not contribute semantic meaning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
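The selection rule described above (decompose only unseen characters, keep only in-vocabulary sub-characters, optionally only the leftmost) can be sketched as follows. This is a minimal illustration, not the authors' code: `DECOMPOSITIONS` is a toy stand-in for the CHISE ideograph table and `VOCAB` for the model's source-side vocabulary.

```python
# Minimal sketch of inference-only sub-character decomposition.
# DECOMPOSITIONS stands in for the CHISE ideograph table and VOCAB for
# the model's source vocabulary; both are hypothetical toy data.
DECOMPOSITIONS = {
    "鰯": ["魚", "弱"],        # 'sardine' -> 'fish' + 'weak'
    "森": ["木", "木", "木"],  # 'forest' -> 'tree' x 3
}
VOCAB = {"魚", "木", "人"}  # characters seen in training

def decompose_unseen(sentence, vocab=VOCAB, left_only=True):
    """Replace unseen decomposable characters with their in-vocabulary
    sub-characters; all other characters pass through unchanged."""
    out = []
    for ch in sentence:
        if ch in vocab or ch not in DECOMPOSITIONS:
            out.append(ch)  # seen, or no decomposition available
            continue
        in_vocab = [s for s in DECOMPOSITIONS[ch] if s in vocab]
        if not in_vocab:
            out.append(ch)  # no usable sub-characters: will become UNK
        elif left_only:
            out.append(in_vocab[0])  # leftmost, often the semantic component
        else:
            out.extend(in_vocab)
    return "".join(out)
```

Because the scheme is a pure pre-processing step, it can be applied to test sentences without touching the trained model.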
| { |
| "text": "The inference-only decomposition approach has several advantages over training with sub-character decomposition. It is extremely fast, since decomposition is a pre-processing step before inference. It does not require training from scratch with very long sequences, which can harm overall performance. Sentences without unseen characters, which are unlikely to benefit from decomposition, are left completely unchanged by the scheme.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Finally, the scheme is very flexible: decomposition can be applied to individual unseen characters on a case-by-case basis if necessary. For example, the presence of the \u9b5a ('fish') radical on the left of a character very often indicates that the character is for a type of fish, so applying inference-only decomposition to such characters will improve adequacy. Characters can in principle be excluded from decomposition if they do not benefit from it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We convert some sub-character components to their base forms to improve character coverage. A small number of components change form in some cases. For example, \u6c34 ('water') can exist as its own character or as a radical, but often becomes \u6c35 when used on the left-hand side of a character (e.g. \u6c60, 'pond'). We manually define 30 such cases for inference-only decomposition, swapping the changed radical (unlikely to be in the vocabulary) for its base form (often in the vocabulary). This is unneeded when training with sub-character decomposition, as all forms can be included in the vocabulary.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
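The base-form conversion above amounts to a small lookup table applied to the sub-characters before vocabulary matching. The two entries below are a hypothetical excerpt; the paper defines about 30 such cases but does not list them.

```python
# Base-form conversion for radicals that change shape by position.
# A hypothetical two-entry excerpt of the ~30 manually defined cases.
RADICAL_BASE_FORMS = {
    "氵": "水",  # 'water' left-side radical form -> base character
    "亻": "人",  # 'person' left-side radical form -> base character
}

def to_base_forms(subchars):
    """Swap changed radical forms (unlikely to be in the vocabulary)
    for their base forms (often in the vocabulary)."""
    return [RADICAL_BASE_FORMS.get(s, s) for s in subchars]
```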
| { |
| "text": "Even when radicals are replaced with their base form, not all radicals will be present in the vocabulary of a non-sub-character model. To address this problem we propose replacing the out-of-vocabulary radical with an in-vocabulary, non-radical character that conveys a related semantic meaning. Experimentally, we attempt this with a single character for both Chinese and Japanese, replacing radical \u7592 ('illness'), which is not in the vocabulary, with character '\u75c5' ('illness').", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Finally, a very simple approach to unseen characters is to remove them from source sentences. This makes it unlikely that the character will be correctly translated, but saves the model from translating an UNK. We only apply this to characters which could be decomposed, so UNK may still occur.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Examples of real sub-character decompositions for all schemes in this work are shown in ) and \u7621 ('sores', semantic component \u7592 'illness'). Subcharacter \u7592 is not in the vocabulary, so does not appear in inference-only decomposition unless swapped with an in-vocabulary character e.g. \u75c5 ('illness').", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only sub-character decomposition", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For both Chinese-English and Japanese-English, we first train a baseline model on a larger corpus and then adapt the same model to a smaller corpus. This lets us evaluate unseen character translation in both higher- and lower-resource settings. In both cases we evaluate on a corresponding standard test set where available, as well as an unseen characters test set. The latter is constructed from training sentences containing at least one decomposable logographic character otherwise not appearing in the training set. These sentences are held out from the training data, so any logographic characters appearing only in an 'unseen chars' set are not seen at all during training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To construct the unseen character set for the higher-resource domain we hold out training sentences with logographic characters appearing infrequently 3 in the whole corpus, then filter for source/target sentence length ratio less than 3.5. We build the BPE vocabularies (Sennrich et al., 2016) on the high-resource domain training set. The baseline source and all target BPE vocabularies consist of character sequences, while the sub-character BPE vocabularies consist of sub-character sequences, following Zhang and Komachi (2018) . For the lower-resource domains the unseen sets are held-out sentences containing logographic characters not in the baseline source vocabulary, filtered as before.", |
| "cite_spans": [ |
| { |
| "start": 271, |
| "end": 294, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 508, |
| "end": 532, |
| "text": "Zhang and Komachi (2018)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3.1" |
| }, |
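The unseen-set construction described above (hold out sentence pairs containing a rare decomposable character, then filter by source/target length ratio) can be sketched as follows. The helper names, frequency threshold, and input structures are assumptions for illustration, not the authors' pipeline.

```python
# Sketch of unseen-character challenge set construction.
# `pairs` are (source, target) sentence pairs; `char_counts` maps each
# source character to its corpus frequency; `decomposable` is the set
# of characters with a known sub-character decomposition. The threshold
# value is an assumption, not taken from the paper.
def build_unseen_set(pairs, char_counts, decomposable, max_ratio=3.5, thresh=1):
    """Hold out sentence pairs with a rare, decomposable logographic
    character, filtered by source/target sentence length ratio."""
    held_out = []
    for src, tgt in pairs:
        rare = [c for c in src
                if char_counts.get(c, 0) <= thresh and c in decomposable]
        if rare and len(src) / max(len(tgt), 1) < max_ratio:
            held_out.append((src, tgt))
    return held_out
```

Removing the held-out pairs from the training data then guarantees the flagged characters are genuinely unseen at training time.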
| { |
| "text": "For Chinese-English our baseline model is trained on a proprietary parallel training data set containing web-crawled data from a mix of domains. We learn separate Chinese and English BPE vocabularies on this corpus with 50K merges. For the lower-resource-domain model we adapt to 3M sentence pairs from publicly available corpora made available by the Chinese Academy of Sciences (CAS) 4 . Since neither of these training sets has standard test set splits, we use the WMT news task test sets WMT19 and WMT18 zh-en for general evaluation of the higher- and lower-resource cases respectively (Barrault et al., 2019) . WMT19 contains only seen characters, as do all but 2 lines of WMT18.", |
| "cite_spans": [ |
| { |
| "start": 590, |
| "end": 613, |
| "text": "(Barrault et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For Japanese-English, we train the higher-resource model on 2M scientific domain sentence pairs from the ASPEC corpus (Nakazawa et al., 2016) . We learn separate Japanese and English BPE vocabularies on this corpus with 30K merges. Our smaller domain is the Kyoto Free Translation Task (KFTT) corpus (Neubig, 2011) . We use the standard test sets for general evaluation. In the ASPEC test set 36 (2%) sentences contain unseen decomposable characters, as well as 180 (15.5%) sentences in the KFTT test set.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 141, |
| "text": "(Nakazawa et al., 2016)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 300, |
| "end": 314, |
| "text": "(Neubig, 2011)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Our NMT models are all Transformer models (Vaswani et al., 2017) . We use 512 hidden units, 6 hidden layers, 8 heads, and a batch size of 4096 tokens in all cases. We train for 300K steps for the Chinese-English and for 240K steps for the Japanese-English higher-resource domain models. Table 4 : BLEU scores for training with different decomposition schemes for higher- and lower-resource test sets. Baseline has no sub-character decomposition. Sub-character decomposition during training fails to improve general translation, and only improves unseen set translation for ASPEC.", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 64, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 287, |
| "end": 294, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental setup and evaluation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For the lower-resource domains we fine-tune the trained models for 30K and 10K steps respectively. We conduct inference via beam search with beam size 4. For ASPEC evaluation we evaluate Moses tokenized English with the multi-bleu tool to correspond to the official WAT evaluation. For all other results we report detokenized English using the SacreBLEU tool 5 (Post, 2018) . All BLEU is for truecased English.", |
| "cite_spans": [ |
| { |
| "start": 361, |
| "end": 373, |
| "text": "(Post, 2018)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental setup and evaluation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We have two requirements when using sub-character decomposition for unseen character translation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 Sets with few unseen characters (all general test sets except KFTT) should not experience performance degradation in terms of BLEU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2022 Translation performance on unseen characters should improve.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Unseen character translation improvement may not be detectable by BLEU score, since the unseen character sets may only have one or two unseen characters per sentence. Moreover, generating a hypernym, such as 'fish' instead of 'sardine' for \u9c2f, would not improve BLEU, despite being a more adequate translation than UNK and a more correct translation than e.g. 'salmon'. Consequently, we also give examples for the most promising schemes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "5 BLEU+case.mixed+numrefs.1+smooth.exp+ tok.13a+version.1.4.8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In Table 4 we give results after training with sub-character decomposition schemes. We compare decomposition with and without structural information (IDCs) to a strong BPE baseline. On general test sets, we see BLEU degradation compared to the baseline, especially for Japanese-English. We note that our Japanese-English ASPEC decomposed-training score is similar to the result for the same set achieved by Zhang and Komachi (2018) with ideograph decomposition. However, our non-decomposed baseline is much stronger, and so we are not able to replicate their finding that training with sub-character decomposition is beneficial to NMT from logographic languages to English. We suggest this degradation may be the result of training and inference with much longer sequences, which are well-established as challenging for NMT (Koehn and Knowles, 2017) .", |
| "cite_spans": [ |
| { |
| "start": 407, |
| "end": 431, |
| "text": "Zhang and Komachi (2018)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 824, |
| "end": 849, |
| "text": "(Koehn and Knowles, 2017)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training with decomposition", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "Interestingly, we find that adding IDCs, which lengthen sequences, performs slightly better for the lower-resource than for the higher-resource cases, especially for Chinese-English. A possible explanation is that the longer sequences regularize adaptation in these cases, avoiding overfitting to the highly specific lower-resource domains. However, these cases still show degradation relative to the baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training with decomposition", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "On the unseen sets, training with sub-character decomposition outperforms the baseline in terms of BLEU for the ASPEC unseen set. However, this is not a consistent result, with the baseline performing best or joint best in all other cases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training with decomposition", |
| "sec_num": "3.3.1" |
| }, |
| { |
| "text": "In the air, the sate oil was most easily oxidized, followed by linseed oil and soybean oil.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline", |
| "sec_num": null |
| }, |
| { |
| "text": "In air, salmon oil was most susceptible to oxidation, followed by linseed oil and soybean oil.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training decompose", |
| "sec_num": null |
| }, |
| { |
| "text": "Fish oil was most oxidized in air, followed by linseed oil and soybean oil.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference decompose (left)", |
| "sec_num": null |
| }, |
| { |
| "text": "Japanese source (KFTT): \u5eb7\u5143\u5143\u5e74 ( In 1256, he died from a red spot disease. Table 6 : Examples of translation with different decomposition schemes from each of the three unseen sets extracted from publicly available corpora. We compare the most consistent training decomposition (no IDCs) and inference-only decomposition (left-only) to the baseline. In the final Japanese example, we additionally compare swapping the unseen radical with an in-vocabulary character. Unseen characters and (approximate) reference translations are marked in square brackets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 73, |
| "end": 80, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inference decompose (left)", |
| "sec_num": null |
| }, |
| { |
| "text": "Table 5 gives results for our inference-only unseen character decomposition schemes, compared to the baseline with no decomposition. Inference-time decomposition has no effect on the Chinese-English test sets with no unseen characters. This is as we expect, since these test sets are unchanged. For Japanese-English a slight decrease on the KFTT general set (about 15% sentences with unseen characters) is balanced by a small improvement on the ASPEC general set (2% sentences with unseen characters). These results are a strong advantage compared to training decomposition, which must be applied to all sentences whether they benefit or not, often degrading performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 7, |
| "text": "Table 5", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inference decompose (left)", |
| "sec_num": null |
| }, |
| { |
| "text": "Test sets with many unseen characters have a range of BLEU performance under inference-time decomposition. One consistent result is that leftonly decomposition gives better scores than us-ing all sub-characters. This may be explained by the fact that representing a character as multiple sub-characters may lead the model to generate a separate translation for each sub-character, harming performance. By contrast the leftmost subcharacter tends to be the semantic component so may give good translation performance alone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only decomposition", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "As a precision-based metric, BLEU is not an ideal measure of improving unseen character translation. Any such improvements under decomposition are more likely to improve adequacy than precision, since they often involve introducing synonyms or hypernyms. This difficulty is highlighted by the strong performance of the 'remove unseen' scheme which simply deletes unseen decomposable characters from source sentences. Clearly, such a scheme cannot improve the translation of these characters, although it may reduce the number of hypothesis tokens, inadvertently improving precision and therefore BLEU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only decomposition", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "The higher performance of the decompose (left) scheme is more promising, since this is likely to actually generate translations for unseen characters. On a similar note, replacing the unseen 'illness' radical with a character conveying the same semantic meaning as described at the end of Sec. 2.2 does not affect BLEU for any set, but we do see noticeable improvements in adequacy for the handful of affected sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference-only decomposition", |
| "sec_num": "3.3.2" |
| }, |
| { |
| "text": "We provide example translations under different training and inference decomposition schemes in Table 6 . We observe some interesting differences in adequacy between training decomposition and inference-only decomposition. In particular, both Japanese translations with training decomposition feature a plausible but incorrect translation. With inference-only decomposition the translation is less fluent, but more generic and consequently more correct.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 96, |
| "end": 103, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualitative evaluation", |
| "sec_num": "3.3.3" |
| }, |
| { |
| "text": "We note that training with sub-character decomposition has an unfortunate tendency to translate over-specific terms from spurious sub-character matches. For example, in the first (ASPEC) Japanese example, \u9b5a ('fish') is also the radical in \u9bad ('salmon'), and in the second (KFTT) Japanese example, \u5009 ('storehouse') is also a major component in \u69cd ('spear'). The model trained with sub-character decompositions therefore produces 'salmon' and 'spear' instead of 'sardine' and 'measles'. Meanwhile the inference-only leftradical heuristic produces 'fish' and 'disease', both of which are correct translations, if not referencematching.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative evaluation", |
| "sec_num": "3.3.3" |
| }, |
| { |
| "text": "We identify this pattern throughout the unseencharacter sets for certain characters in particular. Characters for concrete nouns, such as types of fish, illness, bird, tree, and so on tend to be wellhandled by inference-only decomposition with the left-sub-character heuristic and failed by the training decomposition scheme.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative evaluation", |
| "sec_num": "3.3.3" |
| }, |
| { |
| "text": "More abstract characters are more challenging for both schemes, such as those with radical \u5fc3 ('heart') which often refer to an emotion. However, a major benefit of our approach is its flexibility; such poorly-handled characters could simply be excluded from the decomposition scheme, or replaced with a more appropriate non-radical character as we do for the 'illness' radical \u7592. Future work on this problem could involve determining the most relevant sub-character component of an character, if any, rather than the simple left-only heuristic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative evaluation", |
| "sec_num": "3.3.3" |
| }, |
| { |
| "text": "We explore the effect of sub-character decomposition on NMT from logographic languages into English. During training decomposition may hurt general translation performance without necessarily helping unseen character translation. We propose a flexible inference-time sub-character decomposition procedure which targets unseen characters, and show that it aids adequacy and reduces misleading overtranslation in unseen character translation. The scheme is straightforward, requires no additional models or training, and has no negative impact on sentences without unseen characters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4" |
| }, |
| { |
| "text": "214 Kangxi Radicals are defined as a block in Unicode as of version 3.0(Consortium, 2000). In this paper we follow prior work in using shallower decompositions which can include non-radical sub-character units.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Accessed via https://github.com/cjkvi/cjkvi-ids", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Rumjahn Hoosain. 1991. Psycholinguistic implications for linguistic relativity: A case study of Chinese. Psychology Press.6 http://www.hpc.cam.ac.uk", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported by EPSRC grants EP/M508007/1 and EP/N509620/1 and has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service 6 funded by EPSRC Tier-2 capital grant EP/P020259/1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Findings of the 2019 conference on machine translation (WMT19)", |
| "authors": [ |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [ |
| "R" |
| ], |
| "last": "Costa-Juss\u00e0", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Federmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Fishel", |
| "suffix": "" |
| }, |
| { |
| "first": "Yvette", |
| "middle": [], |
| "last": "Graham", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Huck", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Shervin", |
| "middle": [], |
| "last": "Malmasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| }, |
| { |
| "first": "Mathias", |
| "middle": [], |
| "last": "M\u00fcller", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Fourth Conference on Machine Translation", |
| "volume": "2", |
| "issue": "", |
| "pages": "1--61", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W19-5301" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine transla- tion (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The Unicode Standard", |
| "authors": [ |
| { |
| "first": "Unicode", |
| "middle": [], |
| "last": "Consortium", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "1", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Unicode Consortium. 2000. The Unicode Standard, Version 3.0, volume 1. Addison-Wesley Profes- sional.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Byte-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Marta", |
| "middle": [ |
| "R" |
| ], |
| "last": "Costa-Juss\u00e0", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Escolano", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "A R" |
| ], |
| "last": "Fonollosa", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Subword and Character Level Models in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "154--158", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-4123" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marta R. Costa-juss\u00e0, Carlos Escolano, and Jos\u00e9 A. R. Fonollosa. 2017. Byte-based neural machine trans- lation. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 154-158, Copenhagen, Denmark. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Subcharacter information in Japanese embeddings: When is it worth it?", |
| "authors": [ |
| { |
| "first": "Marzena", |
| "middle": [], |
| "last": "Karpinska", |
| "suffix": "" |
| }, |
| { |
| "first": "Bofang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Rogers", |
| "suffix": "" |
| }, |
| { |
| "first": "Aleksandr", |
| "middle": [], |
| "last": "Drozd", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "28--37", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-2905" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marzena Karpinska, Bofang Li, Anna Rogers, and Aleksandr Drozd. 2018. Subcharacter information in Japanese embeddings: When is it worth it? In Proceedings of the Workshop on the Relevance of Linguistic Structure in Neural Architectures for NLP, pages 28-37, Melbourne, Australia. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Radicallevel Ideograph Encoder for RNN-based Sentiment Analysis of Chinese and Japanese", |
| "authors": [ |
| { |
| "first": "Yuanzhi", |
| "middle": [], |
| "last": "Ke", |
| "suffix": "" |
| }, |
| { |
| "first": "Masafumi", |
| "middle": [], |
| "last": "Hagiwara", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of Machine Learning Research", |
| "volume": "77", |
| "issue": "", |
| "pages": "561--573", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuanzhi Ke and Masafumi Hagiwara. 2017. Radical- level Ideograph Encoder for RNN-based Sentiment Analysis of Chinese and Japanese. Proceedings of Machine Learning Research, 77:561-573.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Six challenges for neural machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Knowles", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Neural Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "28--39", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-3204" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Apply chinese radicals into neural machine translation: Deeper than character level. 30Th European Summer School In Logic", |
| "authors": [ |
| { |
| "first": "Shaohui", |
| "middle": [], |
| "last": "Kuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lifeng", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Language And Information", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shaohui Kuang and Lifeng Han. 2018. Apply chinese radicals into neural machine translation: Deeper than character level. 30Th European Summer School In Logic, Language And Information.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Holistic versus analytic processing: Evidence for a different approach to processing of Chinese at the word and character levels in Chinese children", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Phil", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "H" |
| ], |
| "last": "Kevin", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiuhong", |
| "middle": [], |
| "last": "Mcbride-Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tong", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Experimental Child Psychology", |
| "volume": "107", |
| "issue": "4", |
| "pages": "466--478", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Phil D Liu, Kevin KH Chung, Catherine McBride- Chang, and Xiuhong Tong. 2010. Holistic versus an- alytic processing: Evidence for a different approach to processing of Chinese at the word and character levels in Chinese children. Journal of Experimental Child Psychology, 107(4):466-478.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "ASPEC: Asian Scientific Paper Excerpt Corpus", |
| "authors": [ |
| { |
| "first": "Toshiaki", |
| "middle": [], |
| "last": "Nakazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Manabu", |
| "middle": [], |
| "last": "Yaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyotaka", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Masao", |
| "middle": [], |
| "last": "Utiyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Eiichiro", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadao", |
| "middle": [], |
| "last": "Kurohashi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hitoshi", |
| "middle": [], |
| "last": "Isahara", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC)", |
| "volume": "", |
| "issue": "", |
| "pages": "2204--2208", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Toshiaki Nakazawa, Manabu Yaguchi, Kiyotaka Uchi- moto, Masao Utiyama, Eiichiro Sumita, Sadao Kuro- hashi, and Hitoshi Isahara. 2016. ASPEC: Asian Scientific Paper Excerpt Corpus. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC), pages 2204-2208.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The Kyoto free translation task", |
| "authors": [ |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Sub-character neural language modelling in Japanese", |
| "authors": [ |
| { |
| "first": "Viet", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Brooke", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Subword and Character Level Models in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "148--153", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W17-4122" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Viet Nguyen, Julian Brooke, and Timothy Baldwin. 2017. Sub-character neural language modelling in Japanese. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 148-153, Copenhagen, Denmark. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "A call for clarity in reporting BLEU scores", |
| "authors": [ |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "186--191", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-6319" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Neural machine translation of rare words with subword units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "1715--1725", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P16-1162" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Radical-enhanced Chinese character embedding", |
| "authors": [ |
| { |
| "first": "Yaming", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhenzhou", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaolong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "International Conference on Neural Information Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "279--286", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaming Sun, Lei Lin, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2014. Radical-enhanced Chinese character embedding. In International Conference on Neural Information Processing, pages 279-286. Springer.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Utilizing visual forms of Japanese characters for neural review classification", |
| "authors": [ |
| { |
| "first": "Yota", |
| "middle": [], |
| "last": "Toyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Miwa", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing", |
| "volume": "2", |
| "issue": "", |
| "pages": "378--382", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yota Toyama, Makoto Miwa, and Yutaka Sasaki. 2017. Utilizing visual forms of Japanese characters for neural review classification. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 378-382, Taipei, Taiwan. Asian Federation of Natural Language Processing.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "6000--6010", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000-6010.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Neural machine translation with byte-level subwords", |
| "authors": [ |
| { |
| "first": "Changhan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1909.03341" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Changhan Wang, Kyunghyun Cho, and Jiatao Gu. 2019. Neural machine translation with byte-level subwords. arXiv preprint arXiv:1909.03341.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Neural machine translation of logographic language using sub-character level information", |
| "authors": [ |
| { |
| "first": "Longtu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamoru", |
| "middle": [], |
| "last": "Komachi", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "17--25", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/W18-6303" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Longtu Zhang and Mamoru Komachi. 2018. Neural machine translation of logographic language using sub-character level information. In Proceedings of the Third Conference on Machine Translation: Re- search Papers, pages 17-25, Belgium, Brussels. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Chinese-Japanese unsupervised neural machine translation using sub-character level information", |
| "authors": [ |
| { |
| "first": "Longtu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mamoru", |
| "middle": [], |
| "last": "Komachi", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 33rd Pacific Asia Conference on Language, Information and Computation", |
| "volume": "", |
| "issue": "", |
| "pages": "309--315", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Longtu Zhang and Mamoru Komachi. 2019. Chinese- Japanese unsupervised neural machine translation us- ing sub-character level information. In Proceedings of the 33rd Pacific Asia Conference on Language, In- formation and Computation, pages 309-315.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Sub-Character Chinese-English Neural Machine Translation with Wubi encoding", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Feifei", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhenshuang", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhen", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1911.02737" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Zhang, Feifei Lin, Xiaodong Wang, Zhenshuang Liang, and Zhen Huang. 2019. Sub-Character Chinese-English Neural Machine Translation with Wubi encoding. arXiv preprint arXiv:1911.02737.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "text": "Some characters with sub-character decompositions given by CHISE. Not all decompositions or subcharacters convey the character's semantic meaning." |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Decomposition</td><td/><td>\u9c2f</td><td>\u7621</td></tr><tr><td>Baseline</td><td/><td>UNK</td><td>UNK</td></tr><tr><td>Training decompose</td><td/><td>\u9b5a\u5f31</td><td>\u7592\u5009</td></tr><tr><td colspan=\"2\">Training decompose (IDC)</td><td colspan=\"2\">\u2ff0\u9b5a\u5f31 \u2ff8\u7592\u5009</td></tr><tr><td colspan=\"2\">Inference-only remove</td><td/><td/></tr><tr><td colspan=\"2\">Inference-only decompose</td><td>\u9b5a\u5f31</td><td>\u5009</td></tr><tr><td colspan=\"2\">Inference-only decompose (left)</td><td>\u9b5a</td><td>\u5009</td></tr><tr><td>Inference-only</td><td>decompose</td><td>\u9b5a\u5f31</td><td>\u75c5\u5009</td></tr><tr><td colspan=\"2\">(replace unseen radical)</td><td/><td/></tr><tr><td colspan=\"2\">Inference-only decompose (left,</td><td>\u9b5a</td><td>\u75c5</td></tr><tr><td colspan=\"2\">replace unseen radical)</td><td/><td/></tr></table>", |
| "text": "" |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "text": "" |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Decomposition</td><td/><td colspan=\"2\">Chinese-English</td><td/><td/><td colspan=\"2\">Japanese-English</td><td/></tr><tr><td>(training)</td><td colspan=\"2\">Higher-resource</td><td colspan=\"2\">Lower-resource</td><td colspan=\"2\">Higher-resource</td><td colspan=\"2\">Lower-resource</td></tr><tr><td/><td colspan=\"8\">WMT19 Unseen WMT18 Unseen ASPEC Unseen KFTT Unseen</td></tr><tr><td>None (Baseline)</td><td>25.2</td><td>22.6</td><td>18.3</td><td>12.4</td><td>28.3</td><td>13.5</td><td>16.9</td><td>13.3</td></tr><tr><td>Decompose</td><td>24.9</td><td>22.6</td><td>17.5</td><td>11.4</td><td>26.9</td><td>14.8</td><td>16.2</td><td>12.5</td></tr><tr><td>Decompose IDC</td><td>24.8</td><td>22.5</td><td>18.2</td><td>12.4</td><td>26.4</td><td>14.7</td><td>16.2</td><td>12.4</td></tr></table>", |
| "text": "Sentence counts for Chinese-English and Japanese-English training and test sets. Chinese-English proprietary and CAS training corpora have no standard test sets, so we use the WMT news task WMT19 and WMT18 test sets respectively. The 'unseen chars' test sets are held out from the corresponding training sets such that every sentence has at least one unseen decomposable logographic character." |
| }, |
| "TABREF6": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Chinese source (CAS)</td><td>\u98de\u8089\u5207\u8584\u7247\uff0c\u7528\u86cb\u6e05\u7cca\u4e0a\u6d46\uff0c\u4e0b\u5f00\u6c34\u9505 [\u6c46] \u900f\u635e\u51fa\u3002</td></tr><tr><td>English reference</td><td>Cut the wild chicken meat into thin slices, smear with egg white, [scald] thoroughly.</td></tr><tr><td>Baseline</td><td>Fleshy slice, slurry with egg whites, and get out of the boiling water pan.</td></tr><tr><td>Training decompose</td><td>Fly to cut sliver pieces of meat, slurp them with purine paste, and pick them up from the</td></tr><tr><td/><td>open water pan.</td></tr><tr><td>Inference decompose</td><td>Cut thin slices of meat, slurry with egg whites, get out of the boiling water pan.</td></tr><tr><td>Japanese source (ASPEC)</td><td>\u7a7a\u6c17\u4e2d\u3067\u306f [\u9c2f] \u6cb9\u304c\u6700\u3082\u9178\u5316\u3055\u308c\u3084\u3059\u304f\uff0c\u3064\u3044\u3067\u4e9c\u9ebb\u4ec1\u6cb9\uff0c\u5927\u8c46\u6cb9\u306e\u9806\u3067\u3042\u3063\u305f\u3002</td></tr><tr><td>English reference</td><td>Due to its high contents of DHA and EPA, [sardine] oil FFA was most rapidly oxidized in</td></tr><tr><td/><td>air, followed by linseed and soybean oil FFAs.</td></tr><tr><td>Baseline</td><td/></tr></table>", |
| "text": "Higher-and lower-resource test set BLEU scores for the baseline models ofTable 4with different inferencetime decomposition methods. Line 1 is duplicated fromTable 4. Inference-time decomposition matches the baseline on general test sets, and some unseen sets see BLEU improvement." |
| } |
| } |
| } |
| } |