## MASK-ALIGN: Self-Supervised Neural Word Alignment

**Chi Chen**^(1,3,4), **Maosong Sun**^(1,3,4,5), **Yang Liu**^(∗1,2,3,4,5)

1. Department of Computer Science and Technology, Tsinghua University, Beijing, China
2. Institute for AI Industry Research, Tsinghua University, Beijing, China
3. Institute for Artificial Intelligence, Tsinghua University, Beijing, China
4. Beijing National Research Center for Information Science and Technology
5. Beijing Academy of Artificial Intelligence

**Abstract**

Word alignment, which aims to align translationally equivalent words between source and target sentences, plays an important role in many natural language processing tasks. Current unsupervised neural alignment methods focus on inducing alignments from neural machine translation (NMT) models, which do not leverage the full context in the target sequence. In this paper, we propose MASK-ALIGN, a self-supervised word alignment model that takes advantage of the full context on the target side. Our model masks out each target token in parallel and predicts it conditioned on both the source and the remaining target tokens. This two-step process is based on the assumption that the source token contributing most to recovering the masked target token should be aligned to it. We also introduce an attention variant called _leaky attention_, which alleviates the problem of high cross-attention weights on specific tokens such as periods. Experiments on four language pairs show that our model outperforms previous unsupervised neural aligners and obtains new state-of-the-art results.

**1 Introduction**

Word alignment is the task of finding the correspondence between words in a sentence pair (Brown et al., 1993) and used to be a key component of statistical machine translation (SMT) (Koehn et al., 2003; Dyer et al., 2013).
Although word alignment is no longer explicitly modeled in neural machine translation (NMT) (Bahdanau et al., 2015; Vaswani et al., 2017), it is often leveraged to analyze NMT models (Tu et al., 2016; Ding et al., 2017). Word alignment is also used in many other scenarios, such as imposing lexical constraints on the decoding process (Arthur et al., 2016; Hasler et al., 2018), improving automatic post-editing (Pal et al., 2017), and providing guidance for translators in computer-aided translation (Dagan et al., 1993).

Figure 1: An example of inducing an alignment link for the target token "Tokyo" in MASK-ALIGN. First, we mask out "Tokyo" and predict it with the source and other target tokens. Then, the source token "Tokio" that contributes most to recovering the masked word (highlighted in red) is chosen to be aligned to "Tokyo". Induced alignment link: Tokio - Tokyo.

∗ Corresponding author.

Compared with statistical methods, neural methods can learn representations end-to-end from raw data and have been successfully applied to supervised word alignment (Yang et al., 2013; Tamura et al., 2014). For unsupervised word alignment, however, previous neural methods fail to significantly exceed their statistical counterparts such as FAST-ALIGN (Dyer et al., 2013) and GIZA++ (Och and Ney, 2003). Recently, there has been a surge of interest in NMT-based alignment methods, which take alignments as a by-product of NMT systems (Li et al., 2019; Garg et al., 2019; Zenkel et al., 2019, 2020; Chen et al., 2020). Using attention weights or feature importance measures to induce alignments for to-be-predicted target tokens, these methods outperform unsupervised statistical aligners like GIZA++ on a variety of language pairs. Although NMT-based unsupervised aligners have proven to be effective, they suffer from two major limitations.
First, due to the autoregressive property of NMT systems (Sutskever et al., 2014), they only leverage part of the target context. This inevitably introduces noisy alignments when the prediction is ambiguous. Consider the target sentence in Figure 1. When predicting "Tokyo", an NMT system may generate "1968" because the future context is not observed, leading to a wrong alignment link ("1968", "Tokyo"). Second, they have to incorporate an additional guided alignment loss (Chen et al., 2016) to outperform GIZA++. This loss requires pseudo alignments of the full training data to guide the training of the model. Although these pseudo alignments can be utilized to partially alleviate the problem of ignoring future context, they are computationally expensive to obtain.

Figure 2: The architecture of MASK-ALIGN.

In this paper, we propose a self-supervised model specifically designed for the word alignment task, namely MASK-ALIGN. Our model masks out each target token in parallel and recovers it conditioned on the source and other target tokens. Figure 1 shows an example where the target token "Tokyo" is masked out and re-predicted. Intuitively, as all source tokens except "Tokio" can find their counterparts on the target side, "Tokio" should be aligned to the masked token. Based on this intuition, we assume that the source token contributing most to recovering a masked target token should be aligned to that target token. Compared with NMT-based methods, MASK-ALIGN is able to take full advantage of the bidirectional context on the target side and can therefore achieve higher alignment quality.
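The intuition above, aligning each masked target token to the source token that contributes most to its re-prediction, can be sketched as an argmax over cross-attention weights. This is an illustrative sketch only, not the paper's exact extraction procedure; the `induce_alignments` helper, the toy `attn` matrix, and the threshold are our own hypothetical names and values.

```python
import numpy as np

def induce_alignments(attn, threshold=0.0):
    """Induce alignment links from a (target_len x source_len) matrix of
    cross-attention weights: each masked target token i is aligned to the
    source token j with the highest weight when re-predicting it.

    Returns a list of (source_index, target_index) links.
    """
    links = []
    for i, row in enumerate(attn):
        j = int(np.argmax(row))
        # Optionally drop low-confidence links (assumed heuristic).
        if row[j] > threshold:
            links.append((j, i))
    return links

# Toy 3-target x 3-source attention matrix; each row sums to 1.
attn = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.7],
])
print(induce_alignments(attn))  # [(0, 0), (1, 1), (2, 2)]
```

In this toy example each target token attends mostly to the source token at the same position, so the induced links form the diagonal.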
We also introduce an attention variant called _leaky attention_ to reduce the high attention weights on specific tokens such as periods. By encouraging agreement between two directional models during both training and inference, our method consistently outperforms the state of the art on four language pairs without using a guided alignment loss.

**2 Approach**

Figure 2 shows the architecture of our model. The model predicts each target token conditioned on the source and other target tokens, and generates alignments from the attention weights between source and target (Section 2.1). Specifically, our approach introduces two attention variants, _static-KV attention_ and _leaky attention_, to efficiently obtain attention weights for word alignment. To better utilize attention weights from two directions, we encourage agreement between the two unidirectional models during both training (Section 2.2) and inference (Section 2.3).

**2.1 Modeling**

Conventional unsupervised neural aligners are based on NMT models (Peter et al., 2017; Garg et al., 2019). Given a source sentence $\mathbf{x} = x_1, \ldots, x_J$ and a target sentence $\mathbf{y} = y_1, \ldots, y_I$, NMT models the probability of the target sentence conditioned on the source sentence:

$$P(\mathbf{y} \mid \mathbf{x}) = \prod_{i=1}^{I} P(y_i \mid \mathbf{y}_{<i}, \mathbf{x})$$

where $\mathbf{y}_{<i} = y_1, \ldots, y_{i-1}$ denotes the target-side prefix.
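One simple way to realize inference-time agreement between the two directional models is to combine their attention weights before extracting links, for example by an elementwise product (a soft intersection) of the forward and transposed backward matrices. The sketch below is an assumption for illustration: the `symmetrize` helper and the product-based combination are hypothetical, and the paper's exact agreement mechanism (Sections 2.2 and 2.3) may differ.

```python
import numpy as np

def symmetrize(fwd, bwd):
    """Combine attention weights from two directional models.

    fwd: (I x J) target-to-source weights from the x -> y model.
    bwd: (J x I) source-to-target weights from the y -> x model.
    The elementwise product rewards source-target pairs on which both
    models agree; each target token is then aligned by argmax.
    Returns sorted (source_index, target_index) links.
    """
    merged = fwd * bwd.T  # (I x J) agreement scores
    links = {(int(np.argmax(merged[i])), i) for i in range(merged.shape[0])}
    return sorted(links)

# Toy 2x2 example: both models prefer the diagonal.
fwd = np.array([[0.6, 0.4], [0.3, 0.7]])
bwd = np.array([[0.9, 0.2], [0.1, 0.8]])
print(symmetrize(fwd, bwd))  # [(0, 0), (1, 1)]
```

Because a link must score highly in both directions to survive the product, this combination tends to suppress one-directional spurious links, which is the usual motivation for agreement-based symmetrization.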