# A LATENT MORPHOLOGY MODEL FOR OPEN-VOCABULARY NEURAL MACHINE TRANSLATION

Duygu Ataman*

University of Zurich

ataman@cl.uzh.ch

Wilker Aziz

University of Amsterdam

w.aziz@uva.nl

Alexandra Birch

University of Edinburgh

a.birch@ed.ac.uk

# ABSTRACT

Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low- to mid-resource settings.
# 1 INTRODUCTION

Neural machine translation (NMT) models are conventionally trained by maximizing the likelihood of generating the target side of a bilingual parallel corpus of observations one word at a time, conditioned on the full observed context. NMT models must therefore learn distributed representations that accurately predict word forms in very diverse contexts, a process that is highly demanding in terms of training data as well as network capacity. Under conditions of lexical sparsity, which include both the case of unknown words and the case of known words occurring in surprising contexts, the model is likely to struggle. Such adverse conditions are typical of translation involving morphologically-rich languages, where any single root may lead to exponentially many different surface realizations depending on its syntactic context. Such highly productive processes of word formation lead to many word forms being rarely or never observed with a particular set of morphosyntactic attributes. The standard approach to overcoming this limitation is to pre-process words into subword units that are shared among words and are, in principle, more reliable as they are observed more frequently in varying contexts (Sennrich et al., 2016; Wu et al., 2016). One drawback of this approach, however, is that the estimation of the subword vocabulary relies on word segmentation methods optimized using corpus-dependent statistics, disregarding any linguistic notion of morphology as well as the translation objective. This often produces subword units that are semantically ambiguous, as they might be used in far too many lexical and syntactic contexts (Ataman et al., 2017). Moreover, in this approach a word form is generated by predicting multiple subword units, which makes generalizing to unseen word forms more difficult, since a subword unit necessary to reconstruct a given word form may be unlikely in a given context.
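The corpus-statistics-driven segmentation criticized above can be illustrated with a minimal byte-pair-encoding-style merge procedure, in the spirit of Sennrich et al. (2016). This is a sketch, not the pipeline used in this paper; the toy corpus and the function name `learn_bpe` are invented for illustration:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE-style merges from a word-frequency dictionary.

    Each word is a tuple of symbols (characters plus an end-of-word
    marker); at every step the most frequent adjacent symbol pair in
    the corpus is merged into a single subword unit.
    """
    vocab = {tuple(w) + ("</w>",): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the merge to every word in the vocabulary.
        merged = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges, vocab

# Toy corpus: frequent stems and suffixes get merged into reusable
# subword units, purely from frequency, with no notion of morphology.
corpus = {"walked": 10, "walking": 8, "talked": 7, "talking": 6}
merges, vocab = learn_bpe(corpus, num_merges=6)
```

Note that the merges are chosen solely by corpus frequency: a high-frequency character pair is merged even when it straddles a morpheme boundary, which is precisely the source of the semantically ambiguous units discussed above.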
To alleviate the sub-optimal effects of using explicit segmentation and to generalize better to new morphological forms, recent studies explored the idea of extending NMT to model translation directly at the level of characters (Kreutzer & Sokolov, 2018; Cherry et al., 2018), which, in turn, has been shown to require notably deeper networks, as the network must then learn longer-distance grammatical dependencies (Sennrich, 2017).

In this paper, we explore the benefits of explicitly modeling variation in the surface forms of words using techniques from deep latent variable modeling in order to improve translation accuracy for low-resource and morphologically-rich languages. Latent variable models allow us to inject inductive biases relevant to the task, which, in our case, is word formation during translation. To formulate the process of morphological inflection, we design a hierarchical latent variable model which translates words one character at a time based on word representations learned compositionally from sub-lexical components. In particular, for each word, our model generates two latent representations: i) a continuous-space dense vector aimed at capturing the lexical semantics of the word in a given context, and ii) a set of (approximately) discrete features aimed at capturing that word's morphosyntactic role in the sentence. We then see inflection as decoding a word form, one character at a time, from a learned composition of these two representations. By forcing the model to encode each word representation in terms of a more compact set of latent features, we encourage these features to be shared across contexts and word forms, thus facilitating generalization under sparse settings.
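The general composition of the two latent representations can be sketched numerically: sample approximately discrete morphosyntactic features with a Gumbel-softmax relaxation, concatenate them with a continuous lexeme vector, and map the result to a distribution over characters. This is a hypothetical numpy illustration of the idea, not the architecture proposed in this paper; all dimensions, variable names, and the single-step character decoder are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, temperature=0.5):
    """Relaxed one-hot sample: approximately discrete as temperature -> 0."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / temperature
    y = y - y.max()                                       # numerical stability
    e = np.exp(y)
    return e / e.sum()

D_LEX, N_FEATS, N_CLASSES, N_CHARS = 8, 4, 3, 30          # toy dimensions

# i) continuous lexeme vector (in the model, produced in context).
lexeme = rng.normal(size=D_LEX)

# ii) approximately discrete features: one relaxed categorical sample
#     per feature slot, standing in for morphosyntactic attributes.
feature_logits = rng.normal(size=(N_FEATS, N_CLASSES))
features = np.concatenate([gumbel_softmax(l) for l in feature_logits])

# Compose both representations and map them to character logits,
# a stand-in for one step of the character-level inflection decoder.
z = np.concatenate([lexeme, features])
W = rng.normal(size=(N_CHARS, z.size))
char_logits = W @ z
char_probs = np.exp(char_logits - char_logits.max())
char_probs /= char_probs.sum()
```

The relaxation keeps the feature samples differentiable during training while pushing them toward one-hot vectors, which is what allows the same compact feature set to be reused across many surface forms.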
We evaluate our method in translating English into three morphologically-rich languages, each with a distinct morphological typology: Arabic, Czech and Turkish, and show that our model is able to obtain better translation accuracy and generalization capacity than conventional approaches to open-vocabulary NMT.

# 2 NEURAL MACHINE TRANSLATION

In this paper, we use recurrent NMT architectures based on the model developed by Bahdanau et al. (2014). The model estimates the conditional probability of translating a source sequence $x = \langle x_1, x_2, \ldots, x_m \rangle$ into a target sequence $y = \langle y_1, y_2, \ldots, y_l \rangle$ via an exact factorization:

$$
p(y \mid x, \theta) = \prod_{i=1}^{l} p\left(y_i \mid x, y_{<i}, \theta\right) \tag{1}
$$

where each target word $y_i$ is predicted from the word-level decoder hidden state $\mathbf{h}_i$ (which represents $x$ and $y_{<i}$)