{ "paper_id": "P18-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:39:47.165228Z" }, "title": "Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "", "affiliation": {}, "email": "taku@google.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Subword units are an effective way to alleviate the open vocabulary problem in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous, and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness this segmentation ambiguity as noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements, especially in low-resource and out-of-domain settings.", "pdf_parse": { "paper_id": "P18-1007", "_pdf_hash": "", "abstract": [ { "text": "Subword units are an effective way to alleviate the open vocabulary problem in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous, and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness this segmentation ambiguity as noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. 
In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements, especially in low-resource and out-of-domain settings.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Neural Machine Translation (NMT) models (Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016; Vaswani et al., 2017) often operate with fixed word vocabularies, as their training and inference depend heavily on the vocabulary size. However, limiting the vocabulary size increases the number of unknown words, which makes translation inaccurate, especially in an open vocabulary setting.", "cite_spans": [ { "start": 40, "end": 63, "text": "(Bahdanau et al., 2014;", "ref_id": "BIBREF1" }, { "start": 64, "end": 83, "text": "Luong et al., 2015;", "ref_id": "BIBREF13" }, { "start": 84, "end": 100, "text": "Wu et al., 2016;", "ref_id": "BIBREF31" }, { "start": 101, "end": 122, "text": "Vaswani et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A common approach for dealing with the open vocabulary issue is to break up rare words into subword units (Schuster and Nakajima, 2012; Chitnis and DeNero, 2015; Sennrich et al., 2016; Wu et al., 2016). [Table 1: Multiple subword sequences encoding the same sentence \"Hello World\"] Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) is a de facto standard subword segmentation algorithm applied in many NMT systems, achieving top translation quality in several shared tasks (Denkowski and Neubig, 2017; Nakazawa et al., 2017). BPE segmentation gives a good balance between vocabulary size and decoding efficiency, and also sidesteps the need for special treatment of unknown words. BPE encodes a sentence into a unique subword sequence. 
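As a hedged sketch of the deterministic encoding described above (the merge rules below are invented for this example and are not the paper's or any real BPE model's): greedy BPE encoding starts from characters and applies learned merges in priority order, always yielding a single subword sequence. Real BPE re-scans for the best-priority pair after every merge; the simplified single pass per rule here is close enough to illustrate the determinism.

```python
# Illustrative BPE encoding sketch; MERGES is a made-up merge list,
# highest priority first, not taken from the paper.
MERGES = [("H", "e"), ("l", "l"), ("He", "ll"), ("Hell", "o")]

def bpe_encode(word):
    pieces = list(word)                    # start from single characters
    for a, b in MERGES:                    # apply merges in priority order
        i = 0
        while i < len(pieces) - 1:
            if pieces[i] == a and pieces[i + 1] == b:
                pieces[i:i + 2] = [a + b]  # merge the adjacent pair
            else:
                i += 1
    return pieces

print(bpe_encode("Hello"))  # the merge chain collapses to one piece
```

Because the merge order is fixed, the same input always produces the same output, which is exactly the property subword regularization relaxes.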
However, a sentence can be represented as multiple subword sequences even with the same vocabulary. Table 1 illustrates an example. While these sequences encode the same input \"Hello World\", NMT handles them as completely different inputs. This observation becomes more apparent when the subword sequences are converted into id sequences (right column in Table 1). These variants can be viewed as spurious ambiguity, which might not always be resolved in the decoding process. Exposing the model to multiple segmentation candidates during NMT training can make it robust to noise and segmentation errors, as the candidates indirectly help the model learn the compositionality of words, e.g., that \"books\" decomposes into \"book\" + \"s\".", "cite_spans": [ { "start": 106, "end": 135, "text": "(Schuster and Nakajima, 2012;", "ref_id": "BIBREF19" }, { "start": 136, "end": 161, "text": "Chitnis and DeNero, 2015;", "ref_id": "BIBREF4" }, { "start": 162, "end": 184, "text": "Sennrich et al., 2016;", "ref_id": "BIBREF21" }, { "start": 185, "end": 200, "text": "Wu et al., 2016", "ref_id": "BIBREF31" }, { "start": 285, "end": 307, "text": "(Sennrich et al., 2016", "ref_id": "BIBREF21" }, { "start": 454, "end": 482, "text": "(Denkowski and Neubig, 2017;", "ref_id": "BIBREF5" }, { "start": 483, "end": 505, "text": "Nakazawa et al., 2017)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 201, "end": 208, "text": "Table 1", "ref_id": null }, { "start": 828, "end": 835, "text": "Table 1", "ref_id": null }, { "start": 1076, "end": 1083, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this study, we propose a new regularization method for open-vocabulary NMT, called subword regularization, which employs multiple subword segmentations to make the NMT model accurate and robust. 
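The segmentation ambiguity discussed above (Table 1) can be reproduced with a short sketch. The toy vocabulary and id assignments below are invented for illustration, not the paper's actual vocabulary; "_" marks a word boundary. Even with one fixed vocabulary, recursive matching finds several subword sequences for the same string, and each maps to a different id sequence, which an NMT model would treat as a completely different input.

```python
# Toy subword vocabulary and id table -- invented for illustration.
VOCAB = ["Hello", "Hell", "He", "llo", "o", "_World", "_Wor", "ld", "_", "World"]
IDS = {piece: i for i, piece in enumerate(VOCAB)}

def segmentations(text):
    """Enumerate every way to cover `text` with vocabulary pieces."""
    if not text:
        return [[]]
    results = []
    for piece in VOCAB:
        if text.startswith(piece):
            for rest in segmentations(text[len(piece):]):
                results.append([piece] + rest)
    return results

segs = segmentations("Hello_World")
for seg in segs:
    print(seg, "->", [IDS[p] for p in seg])  # same string, different ids
```

Every enumerated sequence reconstructs the identical surface string, yet no two share the same id sequence, which is the spurious ambiguity the paper turns into training noise.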
Subword regularization consists of the following two sub-contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose a simple NMT training algorithm that integrates multiple segmentation candidates. Our approach is implemented as on-the-fly data sampling and is not specific to the NMT architecture; subword regularization can be applied to any NMT system without changing the model structure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We also propose a new subword segmentation algorithm based on a language model, which provides multiple segmentations with probabilities. The language model allows us to emulate the noise generated during the segmentation of actual data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Empirical experiments using multiple corpora of different sizes and languages show that subword regularization achieves significant improvements over the method using a single subword sequence. In addition, through experiments with out-of-domain corpora, we show that subword regularization improves the robustness of the NMT model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2.1 NMT training with on-the-fly subword sampling", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation with multiple subword segmentations", "sec_num": "2" }, { "text": "Given a source sentence X and a target sentence Y, let x = (x_1, ..., x_M) and y = (y_1, ..., y_N) be the corresponding subword sequences segmented with an underlying subword segmenter, e.g., BPE. NMT models the translation probability P(Y|X) = P(y|x) as a target language sequence model that generates the target subword y_n conditioned on the target history y_{<n}
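The on-the-fly sampling described above can be sketched as follows. The piece probabilities and the smoothing exponent `alpha` below are assumptions for illustration, not the paper's trained values or exact hyperparameters: under a unigram language model, a segmentation's probability is the product of its subword probabilities, and each training step draws one segmentation per sentence from that (smoothed) distribution instead of always using the single best one.

```python
import random

# Unigram piece probabilities -- invented numbers for illustration.
PIECE_PROB = {"Hello": 0.05, "Hell": 0.01, "He": 0.02, "llo": 0.01,
              "o": 0.03, "_World": 0.04, "_Wor": 0.01, "ld": 0.02}

def seg_prob(seg):
    """P(x) = product of the unigram probabilities of the pieces."""
    p = 1.0
    for piece in seg:
        p *= PIECE_PROB[piece]
    return p

def sample_segmentation(candidates, alpha=0.2, rng=random):
    """Sample one candidate; small alpha flattens the distribution, alpha=1 keeps it exact."""
    weights = [seg_prob(s) ** alpha for s in candidates]
    r = rng.random() * sum(weights)
    for seg, w in zip(candidates, weights):
        r -= w
        if r <= 0:
            return seg
    return candidates[-1]

CANDIDATES = [["Hello", "_World"],
              ["Hello", "_Wor", "ld"],
              ["He", "llo", "_World"],
              ["Hell", "o", "_World"]]

# Each training step would feed a freshly sampled segmentation to the model.
sampled = sample_segmentation(CANDIDATES)
```

Because the sample changes from step to step, the model sees many segmentations of the same sentence over the course of training, which is the regularization effect the paper exploits.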