diff --git a/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_content_list.json b/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..11eb412417a816d3f32931262787021dcf555ae6 --- /dev/null +++ b/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ea0f615c5b3daa3308f7bc3c3d8505fd4d4e301355bf5dddba14224a34c2fa9 +size 112517 diff --git a/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_model.json b/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b2168300035dd45f54a13ef29afe0d01bf7ff6ac --- /dev/null +++ b/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2a135e0268a0623e2991143d5b7a4569e18ab9a4171f2b060d13c5107706589 +size 139359 diff --git a/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_origin.pdf b/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5e96c2bb95728783a2f16c2d66f89129aba94219 --- /dev/null +++ b/2kenizetyingsubwordsequencesforchinesescriptconversion/5e2eb333-d3dc-4384-b313-9fcdcdebb6e1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:afcefa6d41805c4e1827f8d3b1a9b274c5dbfdbc2cd29f98811fb225c537dd53 +size 633984 diff --git a/2kenizetyingsubwordsequencesforchinesescriptconversion/full.md b/2kenizetyingsubwordsequencesforchinesescriptconversion/full.md new file mode 100644 
index 0000000000000000000000000000000000000000..08a8c9af18ee17159b1e08e5ea2a12827fe21dab --- /dev/null +++ b/2kenizetyingsubwordsequencesforchinesescriptconversion/full.md @@ -0,0 +1,460 @@

# 2kenize: Tying Subword Sequences for Chinese Script Conversion

Pranav A$^{1}$ and Isabelle Augenstein$^{2}$

$^{1}$ Independent Researcher, Hong Kong

$^{2}$ Department of Computer Science, University of Copenhagen, Denmark

cs.pranav.a{at}gmail.com, augenstein{at}di.ku.dk

# Abstract

Simplified Chinese to Traditional Chinese character conversion is a common preprocessing step in Chinese NLP. Despite this, current approaches have insufficient performance because they do not take into account that a simplified Chinese character can correspond to multiple traditional characters. Here, we propose a model that can disambiguate between such mappings and convert between the two scripts. The model is based on subword segmentation, two language models, and a method for mapping between subword sequences. We further construct benchmark datasets for topic classification and script conversion. Our proposed method outperforms previous Chinese character conversion approaches by 6 points in accuracy. These results are further confirmed in a downstream application, where 2kenize is used to convert a pretraining dataset for topic classification. An error analysis reveals that our method's particular strengths are in dealing with code mixing and named entities. The code and datasets are available at https://github.com/pranav-ust/2kenize

# 1 Introduction

Chinese character (or script) conversion is a common preprocessing step for Chinese NLP practitioners (Zhang, 2014; Shi et al., 2011). Traditional Chinese (TC) and Simplified Chinese (SC) are the two standardized character sets (or scripts) for written Chinese.
TC is predominantly used in Taiwan, Hong Kong, and Macau, whereas SC is mainly adopted in mainland China. SC characters are simplified versions of TC characters in terms of strokes and parts. Therefore, Chinese NLP practitioners apply script converters$^{1}$ to translate the
| SC Sentence | 维护发展中国家共同利益 | Comments |
|---|---|---|
| Segmentation | 维护发展中国家共同利益 | 护发: haircare |
| Conversion | 維護髮展中國家共同利益 | ✗ conversion |
| Segmentation | 维护发展中国家共同利益 | 发展: develop |
| Conversion | 維護發展中國家共同利益 | ✓ conversion |
Table 1: Example sentence with two different segmentations, and the resulting different conversions. The sentence translates to *Safeguarding the common interests of developing countries*. This is a recurring example in this paper. Also refer to §F.5.

dataset into their desired language. This is especially useful for TC NLP practitioners, because TC is less widely used and under-resourced compared to SC.

Converting from TC to SC is generally straightforward, because there are one-to-one correspondences between most of the characters, so conversion can be performed using mapping tables (Denisowski, 2019; Chu et al., 2012). However, conversion from SC to TC is an arduous task, as some SC characters can map to more than one TC character depending on the context of the sentence. A detailed analysis by Halpern and Kerman (1999) shows that SC-to-TC conversion is a challenging and crucial problem, as $12\%$ of SC characters have one-to-many mappings to TC characters. Our experiments show that current script converters only achieve sentence accuracies of $55 - 85\%$ (§3).

Another issue is that varying tokenization leads to different results, as Chinese is an unsegmented language; see Table 1 for an example. Off-the-shelf script converters translate 维护发展中国家共同利益 into 維護髮展中國家共同利益,$^{2}$ whereas the correct conversion is 維護發展中國家共同利益. Here, the SC character 发 has two TC mappings, 髮 (hair) and 發 (to develop; to issue), depending on context and tokenization, which shows that this task is non-trivial.

Despite this being an important task, there is a lack of benchmarks,$^{3}$ which implies that this problem is understudied in NLP. In this study, we propose 2kenize, a subword segmentation model which jointly considers the Simplified Chinese sentence and its candidate Traditional Chinese conversions. We achieve this by constructing a Viterbi tokenizer based on joint Simplified Chinese and Traditional Chinese language models.
Performing mapping disambiguation based on this tokenization method improves sentence accuracy by 6 points compared to off-the-shelf converters and supervised models. Our qualitative error analysis reveals that our method's particular strengths are in dealing with code-mixing and named entities. Additionally, we address the lack of benchmark datasets by constructing datasets for script conversion and TC topic classification.

# 2 2kenize: Joint Segmentation and Conversion

We employ subword tokenization, as it addresses the issue of rare and unknown words (Mikolov et al., 2012) and has been shown to be advantageous for language modelling of morphologically-rich languages (Czapla et al., 2018; Mielke and Eisner, 2019). It achieves improvements in accuracy for neural machine translation (NMT) and has become prevailing practice (Denkowski and Neubig, 2017). The most widely utilized method is Byte Pair Encoding (BPE; Sennrich et al., 2016), a compression algorithm that merges frequent sequences of characters, so that rare strings are segmented into subwords. Unigram (Kudo, 2018) and BPE-Drop (Provilkov et al., 2019) use subword ambiguity as noise, stochastically corrupting the BPE segmentation to make it less deterministic. For NMT tasks, subword segmentation is generally treated as a monolingual task and applied independently to the source and target corpora. We hypothesize that translation tasks, and specifically conversion tasks as investigated here, perform better if segmentation is carried out jointly. Hence, in this section, we describe our proposed method 2kenize, which segments jointly by taking the source sentence and its approximate target sentences into account. This motivates the main idea of this paper: 2kenize jointly considers the source sentence and its corresponding target conversions by performing lookaheads with mappings.
# 2.1 Outline of the proposed approach

Given the possible SC character sequence $\mathbf{s} = s_1 s_2 \ldots s_n$ and TC character sequence $\mathbf{t} = t_1 t_2 \ldots t_n$, we want to find the most likely $\mathbf{t}$, which is given by the Bayes decision rule as follows:

$$
\mathbf {t} = \underset {\mathbf {t} ^ {\prime} \in T ^ {*}} {\arg \max } \; p (\mathbf {s}, \mathbf {t} ^ {\prime}) \tag {1}
$$

where $T^{*}$ denotes the set of all strings over the symbols $t_{i}$ in $T$ (Kleene star). We divide this problem into two parts: finding the mapping sequence, Eq. (2), and finding the TC sequence from the mappings, Eq. (7).

We define a mapping as $m_{i} = (\mathfrak{s}_{i},\mathfrak{t}_{i}) = (s_{j:k},\mathfrak{t}_{j:k})$. Here, $\mathfrak{t}_{j:k} = \{t_{j:k}^{1},\ldots,t_{j:k}^{n}\}$ is the set of TC characters that correspond to the SC characters in the mapping. A mapping sequence is then a concatenation of mappings, $\mathbf{m} = m_1 m_2 \dots m_l$. Let $\mathcal{M}$ be the superset of all possible mapping sequences, and $\mathcal{M}(\mathbf{s})$ the set of all mapping sequences resulting from $\mathbf{s}$.
Then, the best possible mapping sequence is given by

$$
\mathbf {m} = \underset {\mathbf {m} ^ {\prime} \in \mathcal {M} (\mathbf {s})} {\arg \max } \; p \left(\mathbf {m} ^ {\prime}\right) \tag {2}
$$

Moreover, $p(\mathbf{m})$ can be expanded as follows:

$$
\begin{aligned}
p(\mathbf{m}) &= p\left(m_1 m_2 \dots m_l\right) && (3)\\
&= p\begin{pmatrix} \mathfrak{s}_1 & \mathfrak{s}_2 & \dots & \mathfrak{s}_l \\ \mathfrak{t}_1 & \mathfrak{t}_2 & \dots & \mathfrak{t}_l \end{pmatrix} && (4)\\
&\approx p\left(\mathfrak{s}_1 \mathfrak{s}_2 \dots \mathfrak{s}_l\right) + p\left(\mathfrak{t}_1 \mathfrak{t}_2 \dots \mathfrak{t}_l\right) && (5)\\
&= p_{LM}\left(\mathfrak{s}_{1:l}\right) + \sum_{t \in \prod_i \mathfrak{t}_i} p_{LM}\left(t_{1:l}\right) && (6)
\end{aligned}
$$

After expanding the mapping sequence (4), we approximate its probability as the sum of the likelihoods of the two sequences formed by the co-segmentation (5). The set of possible TC sequences is given by the Cartesian product of the $\mathfrak{t}_i$. These likelihoods can then be estimated using language model (LM) probabilities, as shown in (6).

$$
\mathbf {t} = \underset {\mathbf {t} ^ {\prime} \in \mathbf {m} _ {\mathbf {t}}} {\arg \max } \; p \left(\mathbf {t} ^ {\prime}\right) \tag {7}
$$

![](images/8ac1134e60a57d9330d566c49868d0cef73b8611613a95446a773a013fe02fdb.jpg)
Figure 1: Language model architecture with subword and subsequence sampling. (Alt text: §F.1.)

Once the mapping sequence $\mathbf{m}$ has been found, all possible TC sequences are drawn from the set $\mathbf{m}_{\mathbf{t}}$, the Cartesian product over all $\mathfrak{t}_i$ in $\mathbf{m}$. From (7), we compute the approximate final sequence using beam search.
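To make Eqs. (5)-(7) concrete, the following sketch scores a mapping sequence and selects the best TC sequence from the Cartesian product of candidates. The toy unigram tables stand in for the SC and TC LSTM language models of §2.2, and the exhaustive enumeration stands in for beam search; all probability values here are illustrative assumptions, not the paper's trained models.

```python
import itertools

# Toy unigram stand-ins for the SC and TC LMs (illustrative values only).
SC_LM = {"维护": 0.4, "发展": 0.3, "中": 0.1, "国家": 0.1, "共同": 0.05, "利益": 0.05}
TC_LM = {"維護": 0.4, "發展": 0.3, "髮展": 0.001, "中": 0.1, "國家": 0.1,
         "共同": 0.05, "利益": 0.05}

def seq_p(lm, tokens):
    """Likelihood of a token sequence under a unigram LM (unseen -> tiny prob)."""
    p = 1.0
    for t in tokens:
        p *= lm.get(t, 1e-8)
    return p

def score_mapping(mapping):
    """Eq. (6): SC likelihood plus the summed likelihoods of all TC candidates."""
    sc = seq_p(SC_LM, [s for s, _ in mapping])
    tc = sum(seq_p(TC_LM, seq)
             for seq in itertools.product(*[t for _, t in mapping]))
    return sc + tc

def best_tc(mapping):
    """Eq. (7): pick the most likely TC sequence from the Cartesian product m_t."""
    cands = [t for _, t in mapping]
    return max(itertools.product(*cands), key=lambda seq: seq_p(TC_LM, seq))

# A mapping sequence for the running example; 发展 has two TC candidates.
mapping = [("维护", ["維護"]), ("发展", ["發展", "髮展"]), ("中", ["中"]),
           ("国家", ["國家"]), ("共同", ["共同"]), ("利益", ["利益"])]
print("".join(best_tc(mapping)))  # 維護發展中國家共同利益
```

The Cartesian product is exponential in the number of ambiguous mappings, which is why the paper resorts to beam search rather than full enumeration.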
# 2.2 Model Architecture

Viterbi, a dynamic programming (DP) algorithm, considers phrases (or subsequences) and performs segmentation in a bottom-up fashion (Nagata, 1994; Sproat et al., 1996). RNN-based language models are theoretically considered to be '$\infty$'-gram models (Khandelwal et al., 2018), which constitutes a challenge here. Consider the sentence 维护发展中国家共同利益. A potential challenge is to adequately estimate the probability of 共同利益: as this sequence occurs infrequently at the beginning of sentences in the corpus, an RNN would under-estimate its probability. Moreover, an RNN would likely lose useful context and perform worse without it (Kim et al., 2019). So, for Viterbi to perform well with an RNN, we train the language model on subsequences. We achieve this by training our model such that it samples subsequences randomly in each epoch: as shown in Fig. 1, we randomly split the sentence and use the resulting subsequences in separate epochs.

![](images/5ed6b4fd8e0445284138909508f082c5cbad015338acbc8ce174c50666d7242d.jpg)
Figure 2: From the given SC sentence, we create possible TC sequences using mappings. We input these to Viterbi, which recursively calls the LSTM. Using Eq. (6) as the scoring function, Viterbi outputs the mapping sequence. We perform beam search to find the best TC sequence from the mapping sequence. (Alt text: §F.2.)

Using the subword regularization method of Kudo (2018), we sample from the $n$-best segmentations in each epoch, so that the model learns different segmentations of a subsequence, following a similar motivation as above. Recent work has shown that varying subword segmentations leads to better downstream model performance (Provilkov et al., 2019; Kudo, 2018; Hiraoka et al., 2019); we therefore use this as a data augmentation strategy. Once we obtain the $n$-best segmentations with scores, we normalize them and use the normalized scores as sampling probabilities (see Fig. 1).
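A minimal sketch of the subsequence sampling in Fig. 1: each epoch draws a fresh random split of the sentence, so the LM also observes phrases such as 共同利益 in subsequence-initial position. The maximum split length and the uniform length choice are our own illustrative assumptions.

```python
import random

def sample_subsequences(tokens, max_len=4, rng=random):
    """Randomly split a token sequence into contiguous subsequences.

    A new split is drawn every epoch, so the same sentence yields
    different training subsequences over the course of training.
    """
    parts, i = [], 0
    while i < len(tokens):
        step = rng.randint(1, max_len)   # illustrative split-length choice
        parts.append(tokens[i:i + step])
        i += step
    return parts

tokens = list("维护发展中国家共同利益")
rng = random.Random(0)
for epoch in range(3):
    # each epoch trains on a different random split of the same sentence
    print(sample_subsequences(tokens, rng=rng))
```

Concatenating the parts always reproduces the original sentence; only the boundaries vary between epochs.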
As opposed to other subword tokenizers, where the vocabulary size is fixed, we do not limit the vocabulary in our model. Hence, there are numerous possible segment combinations, which raises the need to cache the most frequent tokens. Inspired by work on cache-based LMs (Kawakami et al., 2017) and ghost batches (Hoffer et al., 2017), we keep only the top-$k$ tokens in the main network memory and track the gradients of less recently used token embeddings (following an LRU, Least Recently Used, policy). This could be thought of as virtual embeddings, since delayed gradient accumulation allows accommodating a larger number of tokens.
| | HK Literature | HK News | TW Literature | TW News |
|---|---|---|---|---|
| Sources | Liu (1962), Lau Yee (1972), Foon (1988) | Singpao (2017-2018), Mingpao (2017-2018), CityU subset of Emerson (2005) | Jiubadao (2011), Ko (2010), Yao (1964) | AS subset of Emerson (2005), Liberty Times (2017-2018), United Daily News (2017-2018) |
| Average Length | 194.8 | 214.6 | 188.2 | 223.6 |
| IAA | 0.982 | 0.979 | 0.981 | 0.971 |
| Mapping Examples | 干-[幹,乾,干,榦], 须-[須,鬚] | 苏-[蘇,囌,甦], 暗-[暗,闇] | 复-[復,複,覆], 叹-[嘆,歎] | 胡-[胡,衚,鬍], 迹-[蹟,跡] |
Table 2: An overview of the datasets used for intrinsic evaluation. We report sources, average character lengths, sentence-level inter-annotator agreements (IAA, reported as $\kappa$), and some examples of ambiguous SC-TC mappings.

This virtual size embedding architecture is related to continuous cache implementations and stochastic tokenization architectures (Grave et al., 2016; Hiraoka et al., 2019).

# 2.3 Segmentation and Disambiguation

This optimal sequencing problem can be formulated as an overlapping-subsequence problem, which can be solved using LM-based Viterbi (Nagata, 1994; Sproat et al., 1996). Fig. 2 illustrates this process of joint subword modelling. We take Eq. (6) as the objective function for finding the mapping sequence; however, we use subword perplexities (Cotterell et al., 2018; Mielke et al., 2019; Mielke, 2019) in our implementation. For the TC LSTM, we add the probabilities of the beams of the possible sequences.

As discussed in §2.1 and Eq. (7), beam search is needed to select the best subword sequence for TC. Once the sentences are tokenized, the mapping table is used to convert each SC token to the corresponding TC token. We extract the final TC sentence by resolving ambiguities through beam search using the TC LSTM (Fig. 2).

# 3 Intrinsic Evaluation

# 3.1 Dataset for Intrinsic Evaluation

We construct a gold-standard corpus for both Chinese scripts covering 4 domains: HK literature and newswire, and Taiwanese literature and newswire (Table 2), with each domain containing 3000 sentences. SC-TC mapping tables are constructed from existing resources (Denisowski, 2019; Chu et al., 2012). We heuristically convert the selected TC sentences to SC using OpenCC and ask the annotators to manually correct any incorrect conversions.
$^{4}$

# 3.2 Language Model Training

We choose the SIGHAN-2005 Bakeoff dataset to train the segmentation-based language models (Emerson, 2005). For SC, we select the PKU and MSR partitions; for TC, we use the Academia Sinica and CityU partitions. We apply maximal matching (a heuristic dictionary-based word segmenter) to pre-process these datasets by segmenting words into subwords (Wong and Chan, 1996); here, 'dictionary' refers to the word list in the mapping table. We then train a 2-layer LSTM language model with tied weights and embedding and hidden sizes of 512 (Sundermeyer et al., 2012) on this segmented dataset, with subsequence sampling and stochastic tokenization as discussed in §2.2.

# 3.3 Baselines and Ablations

We implement the following baselines for our experiments:

Off-the-shelf Converters: Hanziconv$^{6}$ and Mafan$^{7}$ are dictionary-based script character converters. Evaluating these is useful for understanding the lower accuracy bound. OpenCC$^{8}$ uses a hybrid of characters and words (specifically, a trie-based tokenizer) for script conversion (Pranav A et al., 2019).

Language Model Disambiguation: A strong baseline for this problem is to build a language model that disambiguates between the characters, which is quite similar to STCP (Xu et al., 2017). We use a 2-layer LSTM language model trained on a Traditional Chinese corpus.

Neural Sequence Models: We heuristically convert Traditional Chinese Wikipedia to Simplified Chinese using OpenCC and use it to train a seq2seq model (Sutskever et al., 2014).
| Conversion System | HK Lit DED | HK Lit SA | HK News DED | HK News SA | TW Lit DED | TW Lit SA | TW News DED | TW News SA | Overall DED | Overall SA |
|---|---|---|---|---|---|---|---|---|---|---|
| Dictionary-based conversion, Hanziconv | 34.1 | 54.7 | 37.7 | 59.1 | 31.3 | 60.0 | 39.3 | 58.9 | 34.2 | 55.6 |
| Dictionary-based conversion, Mafan | 14.7 | 71.2 | 17.7 | 72.5 | 14.5 | 73.8 | 13.3 | 72.7 | 14.4 | 72.6 |
| Trie-dictionary-based conversion, OpenCC | 5.5 | 87.3 | 5.1 | 83.4 | 4.1 | 84.7 | 3.8 | 88.5 | 4.3 | 85.3 |
| Language Model Disambiguation, STCP | 6.3 | 85.6 | 5.4 | 79.9 | 4.7 | 84.1 | 5.2 | 83.9 | 5.3 | 84.0 |
| Convolutional Sequence Models | 6.7 | 85.8 | 5.3 | 79.3 | 4.8 | 84.5 | 5.2 | 83.9 | 5.4 | 84.4 |
| 2kenize with word tokenization | 11.2 | 84.3 | 12.1 | 81.3 | 11.3 | 82.1 | 10.0 | 81.1 | 11.5 | 82.7 |
| 2kenize with maximal matching | 5.2 | 88.7 | 3.3 | 93.1 | 4.0 | 88.6 | 4.8 | 87.7 | 4.5 | 88.9 |
| 2kenize with Unigram subwords | 3.4 | 91.9 | 3.8 | 90.9 | 4.3 | 88.1 | 3.9 | 87.8 | 3.7 | 89.3 |
| 2kenize with joint LSTM modelling | **2.8** | **94.9** | **3.1** | **93.7** | **3.8** | **91.3** | **2.9** | **91.9** | **3.0** | **92.4** |
Table 3: Results of the intrinsic evaluation experiments, reported as a mean across 10 different seeds. We use disambiguation error density (DED, lower is better) and sentence accuracy (SA, higher is better) as evaluation metrics. **Bold:** best, **Underlined:** second-best.

We construct a 20-layer neural convolutional sequence model (Gehring et al., 2017) (both in the encoder and the decoder) using fairseq (Ott et al., 2019).

We perform ablation tests by substituting the following segmentation models:

Word tokenization: We use Jieba, a commonly used hidden Markov model based word tokenizer for Chinese NLP.

Dictionary substrings: We apply maximal string matching, a dictionary-based greedy tokenizer (Pranav A et al., 2019; Wong and Chan, 1996).

Unigram from SentencePiece: Subword segmentation is performed by sampling from unigram language model perplexity values (Kudo, 2018).

Joint subwords: As discussed in §2.3, we use joint SC-TC subwords.

# 3.4 Results for Intrinsic Evaluation

We evaluate our models using the metrics of disambiguation error density (DED) and sentence accuracy (SA). DED is the total edit distance per 1000 ambiguous Simplified characters, i.e., $\frac{\sum\text{edit distances}}{\sum\text{ambiguous Simplified characters}} \times 1000$. SA is the percentage of sentences converted entirely correctly. Contrary to previous papers, we do not report character-based accuracy values, as most characters have straightforward mappings — a reason why we opt for a less forgiving metric like SA, where every character in a sentence has to be correctly converted.

Results are shown in Table 3, broken down by domain and overall. Our model attains an average DED of 3.0 and a SA of $92.4\%$ overall, whereas the best existing converter, OpenCC, only achieves a DED of 4.3 and a SA of $85.3\%$.
We find that seq2seq and LM-based disambiguation perform almost on par with OpenCC, due to the large number of false-positive errors made by these models. Jieba achieves an average DED of 11.2, as it does not handle OOV words well. Maximal matching on segmented words and Unigram subwords achieve overall DEDs of 4.5 and 3.7, respectively — showing that joint segmentation yields better results. We observe that accuracy values are slightly worse on news text, due to the relatively high number of unseen entities in those datasets. Heuristically converting TC to SC introduces conversion errors into the training dataset; additionally, seq2seq approaches tend to reword the target sentence, which shows that they are unsuitable for this task.

# 3.5 Qualitative Error Analysis

We manually inspect incorrect conversions in the intrinsic evaluation and find four recurring linguistic patterns that confuse the converters. We instructed the annotators to label each sentence in the intrinsic evaluation dataset (12000 sentences overall) according to whether it contains any of these patterns. Table 4 gives an overview of the statistics of these patterns and the converters' performance on them.

Code mixing: Vernacular Cantonese characters (zh-yue) are a subset of TC characters but do not follow the norms of standard written Chinese (Snow, 2004). We find that some of the sentences in our dataset are code-mixed with zh-yue (e.g. speech transcriptions) or English (e.g. named entities). Consider the snippet "...古惑架 BENZ 190E 撞埋支...", which is code-mixed with
| Case | Method | SA | Sentence |
|---|---|---|---|
| Code mixing with Cantonese (34 cases, 0.3%) | SC source | | 肯尼迪咾多囉做,掂唔掂呀? |
| | English gloss | | *With so much to do in Kennedy, can you handle it?* |
| | OpenCC | 20.5 | 肯尼迪咾多囉做,掂唔掂呀? |
| | STCP | 8.8 | 肯尼迪咾多囉做,掂唔掂呀? |
| | 2kenize | 91.1 | 甘迺迪咾多囉做,掂唔掂呀? |
| Code mixing with English (1532 cases, 12.8%) | SC source | | 自从我捲住大古惑架 BENZ 190E 撞埋支电灯柱嚱度之後, |
| | English gloss | | *After I drove Slick's Benz 190E into the telephone pole,* |
| | OpenCC | 95.6 | 自從我捲住大古惑架 BENZ 190E 撞埋支電燈柱嚱度之後, |
| | STCP | 86.5 | 自從我捲住大古惑架 BENZ 190E 撞埋支電燈柱嚱度之后, |
| | 2kenize | 98.7 | 自從我捲住大古惑架 BENZ 190E 撞埋支電燈柱嚱度之後, |
| Disguised Named Entities (378 cases, 3.15%) | SC source | | 维护发展中国家共同利益 |
| | English gloss | | *Safeguard the common interests of developing countries* |
| | OpenCC | 85.7 | 維護髮展中國家共同利益 |
| | STCP | 82.1 | 維護髮展中國家共同利益 |
| | 2kenize | 93.2 | 維護發展中國家共同利益 |
| Repeated Named Entities (428 cases, 3.57%) | SC source | | 乔治亞来到了乔治亞洲旅游 |
| | English gloss | | *Georgia came to Georgia for travelling.* |
| | OpenCC | 84.4 | 佐治亞來到了佐治亞洲旅遊 |
| | STCP | 17.9 | 佐治亞來到了喬治亞洲旅遊 |
| | 2kenize | 87.8 | 喬治亞來到了喬治亞洲旅遊 |
Table 4: Case-wise breakdown of common errors. For each case, the first sentence is SC, the second is the English translation, and the rest are TC outputs from the converters.

both zh-yue and English. The characters "BENZ 190E", 架, and 埋支 are not part of standard written Chinese. We find that OOV words are 2kenized into single-character tokens, which results in: "古惑 | 架 | B | E | N | Z | 1 | 9 | 0 | E | 撞 | 埋 | 支". Thus, 2kenize distributes the entropy over multiple tokens rather than a single token (where generally UNK would be used). This gives the language model more room for multiple guesses, a considerable advantage over word-level models or simply UNK-ing the input, and a reason why subword tokenizers outperform closed-vocabulary models (Merity, 2019).

Disguised Named Entities: Take the recurring sentence 维护发展中国家共同利益. Observe that the sentence contains the frequent word 中国 (China), yet the actual meaning and English translation do not involve "China" at all. This is an interesting linguistic trait of Chinese, where a word's characters can appear in a sentence without the word itself being intended. This can easily trip up a tokenizer, as the probability of 中国 being an independent token is high, and having 中国 as a separate token in the sentence can lead to an incorrect conversion (Table 1). We find in 2kenize's trellis$^{10}$ that "维护 | 发展 | 中" has a higher probability than other possible segmentations; substructure lookups and beam search in our setup considerably reduce the probability of a wrong tokenization. The sentence is 2kenized into "维护 | 发展 | 中 | 国家 | 共同 | 利益", which results in the correct conversion, 維護發展中國家共同利益.

Repetitions: We find that in $3.57\%$ of sentences, named entities are repeated. Interestingly, STCP, which uses a language model for disambiguation, often converts only one of the repeated tokens correctly, as shown in the table: STCP prefers 佐治亞 over 喬治亞 in the first occurrence, but then prefers 喬治亞$^{11}$ in the second occurrence as it accumulates more context. 2kenize converts both entities correctly, very likely due to substructure lookups.

Failure Cases: Dictionary-based converters (OpenCC, HanziConv and Mafan) only use the first conversion candidate$^{12}$ when multiple candidates are available. STCP often converts named entities wrongly, especially those with long-range dependencies and repetitions. Although 2kenize converts some of the unseen named entities perfectly, some of its errors are caused by infrequent characters. A few cases mainly involve variant characters$^{13}$ that are often used interchangeably.
| Formal Text Classification Dataset Overview | |
|---|---|
| Source | Singtao |
| Pretraining Corpus Size | 17500 |
| Training Size | 3000 |
| Validation Size | 450 |
| Testing Size | 450 |
| Categories | Financial, Educational, Local, International, Sports |
| Language | zh-hant-hk |
| Informal Text Classification Dataset Overview | |
|---|---|
| Source | LIHKG |
| Pretraining Corpus Size | 21000 |
| Training Size | 4000 |
| Validation Size | 450 |
| Testing Size | 450 |
| Categories | Sports, Opinions, IT, Financial, Leisure, Memes |
| Languages | zh-hant-hk, zh-yue, en-HK |
Table 5: Characteristics of the classification datasets (Traditional Chinese) for the extrinsic evaluation experiments.

# 4 Extrinsic Evaluation

An accurate script converter should produce a less erroneous dataset, which should in turn improve the accuracy of downstream tasks. In this section, we examine this assumption by studying the effect of script conversion on topic classification. We also study the impact of tokenization and pooling on topic classification accuracy. We apply each converter to the language modelling corpus (Chinese Wikipedia, §4.1.1), then train classifiers for informal and formal topic classification on the converted data. This allows us to compare the converters on a specific downstream task.

# 4.1 Dataset for Extrinsic Evaluation

This section describes the datasets used in the extrinsic evaluation experiments: a pretraining dataset consisting of Chinese Wikipedia, and the topic classification datasets.

# 4.1.1 Pretraining Dataset

We use Chinese Wikipedia articles for pretraining the language model. Script conversion is an issue in Chinese Wikipedia: it currently uses a server-side, dictionary-based mechanism to automatically convert scripts based on the location of the user. However, Wikipedia provides an option to view articles without conversion, which we use for our corpus.$^{14}$ We use the $zh-CN$, $zh-HK$ and $zh-yue$ wikis to retrieve articles originally written in SC, TC, and vernacular Cantonese + TC, respectively, with the help of wikiextractor.$^{15}$ We pretrain the formal text classification models on articles from $zh-HK$ and converted $zh-CN$, and the informal text classification models on articles from $zh-HK$, $zh-yue$, and converted $zh-CN$.

# 4.1.2 Classification Datasets

We choose two classification tasks: formal news and informal topic classification (Table 5).
For formal news, we scrape recent articles (2017-2019) from Singtao;$^{16}$ for informal topics, we scrape posts (2017-2018) from LIHKG.$^{17}$

# 4.2 Performance of various classifiers

As classification baselines, we use a character-based SVM (Support Vector Machine, Joachims (1998)), a CNN (convolutional network, Zhang et al. (2015)) and Chinese BERT (Devlin et al., 2019). We also employ a state-of-the-art text classifier, MultiFiT (Eisenschlos et al., 2019), a lightweight RNN-based language-model classifier which has been shown to achieve performance competitive with BERT (Devlin et al., 2019) and ULMFiT (Howard and Ruder, 2018). The base architecture of MultiFiT is a 4-layer QRNN (Bradbury et al., 2016) with a classifier head. We choose rectified Adam (Liu et al., 2019) with Lookahead (Zhang et al., 2019) as the optimizer. We employ a cosine cyclic learning rate scheduler (Smith, 2015), where the limits of the learning rate cycles are found by increasing the learning rate logarithmically and computing the evaluation loss for each learning rate (Smith, 2018). To choose the batch size, we compute the gradient noise scale for each batch size candidate and pick the one with the highest gradient noise scale (McCandlish et al., 2018). We apply label smoothing (Szegedy et al., 2015) and use mixed-precision training on an RTX 2080. We implement our experiments using PyTorch (Paszke et al., 2019) and FastAI (Howard and Gugger, 2020).

MultiFiT uses concat pooling after the last layer of the QRNN, which means that the last time step is concatenated with an average and maximum

![](images/0609885ce21c65a8be389c0f74e23f1fcf7f9833477794606436b2bf70128ccc.jpg)
Figure 3: Proposed architecture for topic classification, where we tweak MultiFiT to concatenate concat-pools from all layers. (Alt text: §F.3.)
| | Formal | Informal |
|---|---|---|
| Char-SVM | 73.2 | 63.7 |
| Char-CNN | 78.5 | 64.9 |
| Chinese BERT (base) | 84.5 | 66.3 |
| MultiFiT with no pooling | 87.5 | 68.5 |
| MultiFiT with concat pooling | 88.6 | 69.9 |
| MultiFiT with layer pooling | **89.0** | **70.3** |
pooled over previous time steps. Studies show that in LM-based classifiers, different layers capture different types of knowledge: the last layer is domain-specific, while the initial layers are more generalized (Yosinski et al., 2014; Peters et al., 2019). We speculate that concat pooling only over the last layer limits the information available to the classifier head, and we hypothesise that the classifier would perform better if both domain-specific and generalized knowledge were available to the head. For this reason, we augment the original MultiFiT architecture with layer pooling, which concat-pools all layers and passes the result to the dense layer in the classifier, as shown in Fig 3.

We fine-tune the BERT language model and pretrain the MultiFiT language model on the Chinese Wikipedia subsets (§4.1.1). All classifiers are then trained on the given training set (character-based models) and evaluated on the test set in terms of accuracy, as the number of items in each class is roughly equal. This experiment (and subsequent experiments in this section) is repeated across ten different seeds (Reimers and Gurevych, 2018) and data splits (Gorman and Bedrick, 2019); the results are shown in Table 6. Layer pooling shows an absolute improvement of $0.4\%$ over concat pooling on formal and informal topic classification, thus confirming our hypothesis.

Table 6: Performance of various architectures on topic classification in terms of accuracy. Results are reported as the mean across 10 different seeds and data splits. **Bold:** best, **underlined:** second best.
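The two pooling variants can be sketched in plain Python, with nested lists standing in for the QRNN's hidden-state tensors; the shapes and values below are illustrative only.

```python
def concat_pool(hidden_states):
    """Concat pooling: [last time step; mean over time; max over time]."""
    last = hidden_states[-1]
    mean = [sum(col) / len(hidden_states) for col in zip(*hidden_states)]
    mx = [max(col) for col in zip(*hidden_states)]
    return last + mean + mx

def layer_pool(all_layers):
    """Layer pooling (Fig. 3): concatenate the concat-pool of every layer,
    so the classifier head sees generalized and domain-specific features."""
    return [v for layer in all_layers for v in concat_pool(layer)]

# 2 layers x 3 time steps x hidden size 2 -> 2 layers * 3 pools * 2 dims = 12 features
layers = [[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
          [[0.5, 0.5], [0.2, 0.9], [0.1, 0.3]]]
features = layer_pool(layers)
print(len(features))  # 12
```

Plain concat pooling is `concat_pool` applied to the last layer only; layer pooling simply widens the classifier-head input by a factor equal to the number of layers.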
| Pretraining data of MultiFiT | Formal | Informal |
|---|---|---|
| No conversions | 89.0 | 70.3 |
| Including conversions with OpenCC | 91.7 | 75.6 |
| Including conversions with STCP | 92.3 | 73.4 |
| Including conversions with 2kenize | **93.2** | **77.9** |
Table 7: Ablation test of MultiFiT with different script converters. Results are reported as mean accuracy across 10 different seeds and data splits. **Bold:** best, **underlined:** second best.
| Corpus Tokenization | Formal | Informal |
|---|---|---|
| Char | 93.2 | 77.9 |
| Jieba | 92.4 | 78.3 |
| BPE | 92.7 | 81.0 |
| BPE-Drop | 93.7 | 82.7 |
| Unigram | **94.8** | 82.2 |
| 1kenize | **94.8** | **83.2** |
Table 8: Ablation test of MultiFiT with different tokenizers. Results are reported as mean accuracy across 10 different seeds and data splits. **Bold:** best, **underlined:** second best.

# 4.3 Effect of Conversion on Classification

For each converter (OpenCC, STCP, 2kenize), we translate the $zh-CN$ wiki dataset and augment it with the TC wiki dataset. We then pretrain on each resulting dataset, fine-tune on the domain data, and train MultiFiT with layer pooling on these three datasets. Test set accuracies are shown in Table 7. The dataset translated by 2kenize outperforms the other converters, giving an absolute improvement of $0.9\%$ on formal and $4.5\%$ on informal topic classification over the second-best converter. These results emphasise that better script conversion improves the quality of the pretraining dataset, which in turn boosts the performance of downstream tasks like topic classification.

# 4.4 Effect of Tokenization on Classification

Studies show that tokenization affects classification accuracy, with open-vocabulary methods generally performing best (Eisenschlos et al., 2019; Hiraoka et al., 2019). For this experiment, we perform further ablations on our previous best classifier setup (MultiFiT with layer pooling on 2kenize-converted data) to understand the effect of various subword tokenizers. As pretraining generally takes a long time (1-2 GPU days), we pretrain the classifier once for each tokenized corpus and do not perform subword sampling for this experiment. For closed-vocabulary methods, we use character and word segmentations (the latter with Jieba).

![](images/4af4c79c358b8d78558ac21cd7f8c8485c0fc037380230d9c2941a4f46bc3b5e.jpg)
Figure 4: Log-log plots of token frequency vs. rank for the first 10000 tokens, for different tokenizers. Negative slopes calculated by least squares are given in the legend (lower means less skewed). (Alt text: §F.4.)
Likewise, for open-vocabulary methods, we employ the BPE, BPE-Drop and Unigram subword tokenizers.

Subword tokenizers mostly rely on frequency and do not take the likelihood of the tokenized sentence (akin to an $n$-gram language model score) into consideration. Hence, we choose LM-based Viterbi segmentation (henceforth referred to as 1kenize), where the LM is the TC LSTM described in §2.2. We report results in Table 8. We find that for formal classification, 1kenize and Unigram perform best. 1kenize outperforms the other subword tokenizers on the noisier informal dataset, giving an absolute improvement of $0.5\%$ over the second-best method, BPE-Drop.

We plot log token frequency against log rank in Figure 4. This distribution is based on the LIHKG dataset, which is noisier than the other domains. We observe that the character and word distributions are steeper than those of language-model-based subword tokenizers, indicating that subword tokenizers produce a less skewed token distribution. Subword tokenizers like BPE and Unigram are deterministic and rely on frequency for segmentation. Since 1kenize is contextual, being LM-based, it produces the least skewed distribution (the lowest Zipf's law coefficient (Zipf, 1949)), which also reduces variance; this is one reason why this simple segmentation method outperforms the others for informal text classification.

# 5 Takeaways and Open Questions

The contributions of our work are:

- 2kenize, a subword segmentation model which jointly segments a source sentence and its corresponding approximate target conversions.
- An unsupervised script converter based on 2kenize, which shows a significant improvement over existing script converters and supervised models.
- 1kenize, a variant of 2kenize which tokenizes only Traditional Chinese sentences and improves accuracy on topic classification tasks.
- Character conversion evaluation datasets, spanning Hong Kong and Taiwanese literature and news genres.
- Traditional Chinese topic classification datasets in formal (scraped from Singtao) and informal (scraped from LIHKG) styles, spanning genres like news, social media discussions, and memes.

The key findings of our work are:

- Our script converter shows strong performance when dealing with code mixing and named entities. Supervised models are prone to errors related to anaphora and unseen entities.
- A simple LM-based Viterbi segmentation model outperforms other subword tokenizers on topic classification tasks and reduces the skewness of the token distribution on a noisy dataset.

We leave some open questions to explore:

- How can we exploit subword variations to reduce skewness in NLU tasks?
- Would subword-segmentation transfer be helpful for other NMT-NLU task pairs, as it was for 2kenize (script conversion) to 1kenize (classification)?

We anticipate that this study will be useful to TC NLP practitioners, as we address several research gaps, namely script conversion and a lack of benchmark datasets.

# Acknowledgements

The first author would like to thank Dayta AI Limited, S.F. Hui, I-Tsun Cheng, Ishaan Batra, Conrad Ho, Roy Fork, Abhishek Gupta, Ajay Singh, Eugene Ho, Patrick Tu, Alex Chu, and Leland So for making valuable additions to this work. The second author would like to acknowledge funding from the Swedish Research Council for the project under grant agreement 2019-04129, which partly funded this work.

# References

Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.
James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks.
Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi. 2012.
Chinese characters mapping table of Japanese, traditional Chinese and simplified Chinese. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2149-2152, Istanbul, Turkey. European Language Resources Association (ELRA).
Wikipedia Contributors. 2019. Chinese script conversion and word processing in Wikipedia. Page Version ID: 56925003.
Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Association for Computational Linguistics.
Piotr Czapla, Jeremy Howard, and Marcin Kardas. 2018. Universal language model fine-tuning with subword tokenization for Polish. ArXiv, abs/1810.10222.
Paul Denisowski. 2019. CC-CEDICT. https://cc-cedict.org/.
Michael Denkowski and Graham Neubig. 2017. Stronger baselines for trustable results in neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 18-27, Vancouver. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, and Jeremy Howard. 2019. MultiFiT: Efficient multi-lingual language model fine-tuning.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5706-5711, Hong Kong, China. Association for Computational Linguistics.
Thomas Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing.
A Foon. 1988. Diary of the Little Man. Book. Pp. 5-6.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML '17, pages 1243-1252. JMLR.org.
Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786-2791, Florence, Italy. Association for Computational Linguistics.
Edouard Grave, Armand Joulin, and Nicolas Usunier. 2016. Improving neural language models with a continuous cache. ArXiv, abs/1612.04426.
Jack Halpern and Jouni Kerman. 1999. Pitfalls and complexities of Chinese to Chinese conversion. In International Unicode Conference (14th) in Boston.
Tatsuya Hiraoka, Hiroyuki Shindo, and Yuji Matsumoto. 2019. Stochastic tokenization with a language model for neural text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1620-1629, Florence, Italy. Association for Computational Linguistics.
Elad Hoffer, Itay Hubara, and Daniel Soudry. 2017. Train longer, generalize better: closing the generalization gap in large batch training of neural networks. In NIPS.
Jeremy Howard and Sylvain Gugger. 2020. Fastai: A layered API for deep learning. Information, 11(2):108.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.
Jiubadao. 2011. You Are the Apple of My Eye. Chun Tian Chu Ban.
Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In ECML.
Kazuya Kawakami, Chris Dyer, and Phil Blunsom. 2017. Learning to create and reuse words in open-vocabulary neural language modeling. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1492-1502, Vancouver, Canada. Association for Computational Linguistics.
Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294, Melbourne, Australia. Association for Computational Linguistics.
Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gabor Melis. 2019. Unsupervised recurrent neural network grammars. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1105-1117, Minneapolis, Minnesota. Association for Computational Linguistics.
Giddens Ko. 2010. Cafe, Waiting, Love. Spring Press.
Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75, Melbourne, Australia. Association for Computational Linguistics.
Cheung Lau Yee. 1972. Intersection. Benefits Publishing Co., Ltd.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019.
On the variance of the adaptive learning rate and beyond. ArXiv, abs/1908.03265.
Yichang Liu. 1962. Drunkard. Benefits Publishing Co., Ltd.
Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. 2018. An empirical model of large-batch training. ArXiv, abs/1812.06162.
Stephen Merity. 2019. Single headed attention RNN: Stop thinking with your head. ArXiv, abs/1911.11423.
Sabrina J. Mielke. 2019. Can you compare perplexity across different segmentations?
Sabrina J. Mielke, Ryan Cotterell, Kyle Gorman, Brian Roark, and Jason Eisner. 2019. What kind of language is hard to language-model? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4975-4989, Florence, Italy. Association for Computational Linguistics.
Sabrina J. Mielke and Jason Eisner. 2019. Spell once, summon anywhere: A two-level open-vocabulary language model. AAAI.
Tomáš Mikolov, Ilya Sutskever, Anoop Deoras, Hai-Son Le, Stefan Kombrink, and Jan Cernocky. 2012. Subword language modeling with neural networks. Preprint (http://www.fit.vutbr.cz/imikolov/rnnlm/char.pdf), 8.
Masaaki Nagata. 1994. A stochastic Japanese morphological analyzer using a forward-DP backward-A* n-best search algorithm. In COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics.
Xue Nianwen, Zhang Xiuhong, Jiang Zixin, Palmer Martha, Xia Fei, Chiou Fu-Dong, and Meiyu Chang. 2016. Chinese Treebank 9.0. LDC2016T13. Web Download. Philadelphia: Linguistic Data Consortium.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In NeurIPS 2019.
Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. 2019. To tune or not to tune? Adapting pretrained representations to diverse tasks. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 7-14, Florence, Italy. Association for Computational Linguistics.
Pranav A, S.F. Hui, I-Tsun Cheng, Ishaan Batra, and Chiu Yik Hei. 2019. Learn languages first and then convert: Towards effective simplified to traditional Chinese conversion. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Student Research Workshop, non-archival), Minneapolis, Minnesota. Association for Computational Linguistics.
Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2019. BPE-dropout: Simple and effective subword regularization. ArXiv, abs/1910.13267.
Nils Reimers and Iryna Gurevych. 2018. Why comparing single performance scores does not allow to draw conclusions about machine learning approaches.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Xiaodong Shi, Yidong Chen, and Xiuping Huang. 2011. Key problems in conversion from simplified to traditional Chinese characters. In International Conference on Asian Language Processing.
Leslie N. Smith. 2015. Cyclical learning rates for training neural networks.
In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464-472.
Leslie N. Smith. 2018. A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay. ArXiv, abs/1803.09820.
Don Snow. 2004. Cantonese as Written Language: The Growth of a Written Chinese Vernacular, volume 1. Hong Kong University Press.
Richard W. Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377-404.
Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In INTERSPEECH.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2015. Rethinking the inception architecture for computer vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818-2826.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Pak-kwong Wong and Chorkin Chan. 1996. Chinese word segmentation based on maximum matching and word binding force. In COLING 1996 Volume 1: The 16th International Conference on Computational Linguistics.
Jiarui Xu, Xuezhe Ma, Chen-Tse Tsai, and Eduard Hovy. 2017. STCP: Simplified-traditional Chinese conversion and proofreading. In Proceedings of the IJCNLP 2017, System Demonstrations, pages 61-64.
Chiung Yao. 1964. Fire and Rain. Book. ISBN 0-330-36076-0.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson.
2014. How transferable are features in deep neural networks? In NIPS.
Michael Ruogu Zhang, James Lucas, Geoffrey E. Hinton, and Jimmy Ba. 2019. Lookahead optimizer: k steps forward, 1 step back. ArXiv, abs/1907.08610.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS.
Xiaoheng Zhang. 2014. A comparative study on simplified-traditional Chinese translation. In Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, pages 212-222. Springer.
George Kingsley Zipf. 1949. Human Behavior and the Principle of Least Effort.

# A Summary in Traditional Chinese: 簡體中文到繁體中文的文本轉換器

研究中文NLP時,將文本進行繁簡轉換是常見的數據預處理步驟。在簡繁轉換過程中,經常出現多個繁體字轉換成同一簡體字,反之亦然。藉此透過測試現行的繁簡轉換算法,發現只有 $55 - 85\%$ 準確度。進一步的調查發現,現代的神經網絡,譬如神經語言模型的字符歧義消除(neural language model disambiguation)和神經序列模型(neural sequence models),均只達到 $84 - 85\%$ 的句子準確性,都是由第一類錯誤(Type I error)所致。我們推斷上述問題,是由於模型未能有效釐清子詞(subword)的邊界所導致。

在此,我們提出了2kenize,一個子詞分割模型(subword segmentation model),同時利用先行式繁體中文以及簡體中文進行建構。我們將聯合簡體中文及繁體中文共同訓練Viterbi分詞器。即使利用較具挑戰性的數據集測試,本模型亦達到 $91 - 95\%$ 消歧準確度。透過定性誤差分析(qualitative error analysis),展示了本模型更擅長處理code-mixing以及命名個體(named entities)。除此以外,我們亦在主題分類領域中進行了外部評估,本模型在主題分類的字符及詞語模型(character and word-based models)的領域中表現出眾,更在子詞正則化(subword regularization)中,獲得比BPE更好的名次。然後針對繁體中文句子對2kenize進行調整,誕生了1kenize。1kenize在正式數據集上與其他子詞分詞器(subword tokenizers)名列前茅,在非正式數據集上更表現超群。由此,我們推斷子詞分詞器會嚴重地受token的分佈及偏度所影響。

是次研究的貢獻:

1. 2kenize:簡體中文到繁體中文的文本轉換器
2. 字符轉換評估數據集:跨越香港和台灣文獻及新聞等多個類型的數據集
3. 主題分類數據集:繁體中文的正式和非正式文本數據,涵蓋新聞、社交媒體討論、改圖、改歌、memes等二次創作文本。

# B Data Statement for Intrinsic Evaluation

# B.1 Corpus

In this subsection, we discuss the annotation procedure and the characteristics of the corpus used for the intrinsic evaluation. We follow the data statement design of Bender and Friedman (2018) for the description.

# B.1.1 Curation Rationale

The script conversion task is understudied in NLP, and we could not find good-quality parallel corpora to evaluate our approaches. The idea is to curate a diverse collection of TC works and convert them to SC, due to the (mostly) one-to-one TC-to-SC correspondence. However, we found that some of the conversions were wrong because:

1. dictionaries sometimes resulted in incorrect conversions,
2. there are stylistic differences between HK and TW characters and phrasing,
3. Cantonese is code-mixed with Traditional Chinese,
4. the text is code-mixed with non-Chinese characters,
5. some characters in TC-SC conversion have one-to-many mappings as well.

Hence, we need quality control with human annotators to validate our conversions.

# B.1.2 Annotation Process

Demographic: We opted for 4 trained annotators, 2 for annotating HK-style TC and 2 for annotating TW-style TC, thus double-annotating the corpus. They ranged in age from 18-20 years, included 2 men and 2 women, gave their ethnicity as Hong Kongers (2) and Taiwanese (2), and their native spoken languages were Cantonese (2) and Taiwanese Mandarin (2).

Workload: Annotators validated approximately 100 sentences per hour, for a total workload of 60 hours. They were given a month to annotate and were paid 5000 Hong Kong Dollars on completion.

Procedure: The annotators were shown TC and converted SC sentences (we used OpenCC to convert) and were asked to validate and correct any conversion mistakes. In case of disagreement, we used majority voting between the automatically converted output and the annotators' corrections.

We provide the raw agreement and Krippendorff's $\alpha$ in Table 9 for the pooled data and various sub-groups of the dataset. We also report inter-annotator agreements at the character, word, and sentence levels in Table 10. These agreement values are difficult to interpret, but generally $\alpha \geq 0.8$ is considered to be substantial.
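The agreement figures in Tables 9 and 10 can be computed for the two-annotator, nominal-label case as in the following sketch (the label sequences below are toy stand-ins, not our annotation data):

```python
from collections import Counter
from itertools import permutations

def raw_agreement(a, b):
    """Fraction of units on which the two annotators agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def krippendorff_alpha_nominal(a, b):
    """Krippendorff's alpha for two annotators and nominal labels, via the
    standard coincidence-matrix formulation (no missing-data handling)."""
    coincidence = Counter()
    for x, y in zip(a, b):
        coincidence[(x, y)] += 1   # each unit contributes both ordered
        coincidence[(y, x)] += 1   # pairs, with weight 1/(m-1) = 1
    n_c = Counter()
    for (c, _), v in coincidence.items():
        n_c[c] += v
    n = sum(n_c.values())          # = 2 * number of units
    d_o = sum(v for (c, k), v in coincidence.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c, k in permutations(n_c, 2)) / (n * (n - 1))
    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Toy labels (0 = conversion correct, 1 = needs correction), not our data.
ann1 = [0, 0, 1, 0, 1, 0]
ann2 = [0, 0, 1, 0, 0, 0]
print(raw_agreement(ann1, ann2), krippendorff_alpha_nominal(ann1, ann2))
```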
| | RA | α |
| --- | --- | --- |
| HK | 0.98 | 0.98 |
| HK Lit | 0.982 | 0.98 |
| HK News | 0.979 | 0.97 |
| TW | 0.98 | 0.98 |
| TW Lit | 0.981 | 0.98 |
| TW News | 0.971 | 0.97 |

Table 9: Inter-annotator agreements (RA: raw agreement; α: Krippendorff's α)

| | RA | α |
| --- | --- | --- |
| Character Level | 0.98 | 0.97 |
| Word Level | 0.95 | 0.94 |
| Sentence Level | 0.93 | 0.92 |

Table 10: Inter-annotator agreements at different levels

# B.1.3 Speech Situation

The publication dates and sources are listed in Table 2. HK and TW literature consists of popular books, many of which have movie and drama adaptations. Specifically, the HK literature contains code-mixed characters with Vernacular Cantonese, which is quite unusual in formal publishing practices, and these books are often cited as examples of the popularization of Cantonese in the 60s (Snow, 2004). We also found code-mixing with English and numerous transliterated named entities, which we have used for the qualitative error analysis in Table 4.

# B.1.4 Text Characteristics

Although Hong Kong and Taiwan both use Traditional Chinese, they are stylistically different, as the dominant spoken language in HK is Cantonese and in TW is Taiwanese Mandarin. Thus, it is quite essential to test the performance of our algorithms on these two styles. We collected two genres for each style: informal literature and formal news. We found more variation within informal HK-TW literature as compared to the formal news. We intentionally chose long sentences (average length of 200 characters), especially those containing more ambiguous characters, to make the dataset more challenging for testing.

# C Data Statement for Extrinsic Evaluation

This subsection describes the characteristics of the topic classification datasets in Traditional Chinese. For a short overview, please see Table 5.

# C.1 Curation Rationale

We choose two different styles for curating this dataset: formal and informal. The formal text consists of a news dataset scraped from Singtao, one of the popular newswires in Hong Kong. The classes in this dataset are the Financial, Educational, Local, International, and Sports subsections. There are 17500 unlabelled and 3900 labelled items in this section. The authors would like to credit I-Tsun Cheng for giving us helpful suggestions in curating this dataset.
The informal text consists of a dataset of social media posts scraped from LIHKG, a Twitter equivalent in Hong Kong. The classes in this dataset are Sports, Opinions, Memes, IT, Financial and Leisure. There are 21000 unlabelled and 4900 labelled items in this section. The authors would like to credit Leland So for giving us helpful suggestions in curating this dataset.

# C.2 Language Variety

The texts in the formal subsection are typically written in Hong Kong style Traditional Chinese (zh-hant-hk). The posts scraped from LIHKG are predominantly in Traditional Chinese (zh-hant-hk), and they are often code-mixed with Vernacular Cantonese (zh-yue) and English (en-HK).

# C.3 Speaker Demographic

Speakers were not directly approached for inclusion in this dataset and thus could not be asked for demographic information. Our best guess is that the LIHKG forum users are typically university students (19-23 years), and that the majority of them speak Cantonese as a native language.

# C.4 Text Characteristics

The news articles were scraped from 2017-2019 and the LIHKG posts from 2017-2018. Some of the posts on LIHKG are in transliterated Cantonese form, and some are not written in Standard Written Chinese. The news posts are generally quite long and often contain more than 5 sentences (average length of nearly 300 characters). On the other hand, the LIHKG posts are shorter, and forum titles are generally one sentence each (average length of nearly 50 characters). Please note that, due to the current situation in Hong Kong, we do not include political posts and news from mid-2019.

# D Description of Intrinsic Evaluation Experiments

# D.1 Heuristic Grid Search of Learning Rate and Batch Size Hyperparameters

We employ the cosine cyclic learning rate scheduler (Smith, 2015), where the limits of the learning rate cycles are found by increasing the learning rate logarithmically and computing the evaluation loss for each learning rate (Smith, 2018). To choose the batch size, we compute the gradient noise scale for each batch size candidate and pick the one which gives the highest gradient noise scale (McCandlish et al., 2018).

# D.2 Training of SC and TC Language Models

The datasets are described in §3.2. The model architecture is a 2-layer LSTM language model with tied weights. The embedding size is 512 and the hidden size is 512. We perform concat pooling in the last layer, where we concatenate the last output with the mean pool and max pool of all representations. We adopt comparable subword perplexity as suggested by Cotterell et al. (2018); Mielke et al. (2019); Mielke (2019), using a common denominator, namely the number of segments per word, in order to compare. On average, we achieve a perplexity of 168.6 on the Chinese Treebank test set (Nianwen et al., 2016). Also refer to the Chinese LM Benchmark: https://chinesenlp.xyz/#/docs/language_modeling. Training took 2 days on an RTX 2080 with FP16 training, with a batch size of 256 and 250 epochs.

# D.3 Training of Convolutional seq2seq

The training dataset is a Traditional Chinese Wikipedia heuristically converted with OpenCC. We use 20 layers in the encoder and decoder with an embedding size of 512, implemented in Fairseq (Ott et al., 2019). Dropout is 0.1, and we use an adaptive softmax to speed up the training. Training took 1 day on an RTX 2080 with FP16 training, with a batch size of 128 and 250 epochs.

# E Description of Extrinsic Evaluation Experiments

# E.1 Character CNN Training

The datasets are described in §4.1.2.
The model architecture is a 7-layer CNN with tied weights and residual blocks. The embedding size is 512 and the hidden size is 512. We perform concat pooling in the last layer, where we concatenate the last output with the mean pool and max pool of all representations. Training took 16 hours on an RTX 2080 with FP16 training, with a batch size of 256 and 350 epochs.

# E.2 Chinese BERT Training

The datasets are described in §4.1.2. We use Chinese BERT base (12-layer, 768-hidden, 12-heads, 110M parameters) via the Transformers library (Wolf et al., 2019). We use a sequence length of 384 and a batch size of 12. Finetuning the language model took 2 hours (learning rate of 3e-5) and finetuning the classifier took 1 hour on each of the two datasets, including a grid search over the learning rates 3e-4, 1e-4, 5e-5, and 3e-5, of which 3e-5 gives the best results (on an RTX 2080 with FP16 training).

# E.3 MultiFiT Training

We found MultiFiT to be highly reproducible compared to the other models, as it gives the least variance across seeds and data splits. Hyperparameters are chosen by the heuristic grid search on learning rate and batch size. The datasets are described in §4.1.2. Pretraining the language model takes 1 GPU day for each MultiFiT experiment. Finetuning the language model takes 3 hours, with a patience of 2 epochs. Finetuning the classifiers takes 3 hours, with a patience of 2 epochs. All MultiFiT experiments are implemented using fastai (Howard and Gugger, 2020).

# F Alternative Texts for Figures and Chinese Explanations

# F.1 Alternative Text for Figure 1

The recurring Chinese sentence is split, and we take one subsequence of it; the other subsequence is used in the next iteration. We perform Unigram Viterbi segmentation on this and get the probabilities. The probabilities are normalized, and we sample a segmentation using this probability distribution.
This segmentation goes into the model, which passes through cached embeddings, followed by stacked LSTM layers, followed by concat pooling (consisting of the last output, mean pooling and max pooling), which then goes through a linear layer. We cache the top-k embeddings in main memory; for the least frequent embeddings we track the gradients and do not keep them in the main network (we used gradient accumulation).

# F.2 Alternative Text for Figure 2

From the given SC sentence, we create possible TC sequences using the mappings. We input these to Viterbi, which recursively calls the LSTM. Using Eq. (6) as the scoring function, Viterbi outputs the mapping sequence. We then perform beam search to find the best TC sequence from the mapping sequence, reusing the same TC LSTM.

# F.3 Alternative Text for Figure 3

The architecture contains 4 stacked QRNN layers. Each layer has QRNN cells. After every layer we perform a concat pool (taking the last output, max pool and mean pool). We aggregate these pools in the final layer, which goes into a linear layer. We highly recommend this for making the training more stable.

# F.4 Alternative Text for Figure 4

We plot the log-log token distribution, with rank on the x-axis and frequency on the y-axis. Character-based tokenization gives a slope of 1.703, BPE-Drop gives 1.31, BPE gives 1.27, word tokenization (Jieba) gives 1.41, unigram sampling gives 1.28, and 1kenize gives the least skewed distribution with a slope of 1.1. Note that these are negative slopes; the lower the slope, the more efficiently the vocabulary is tokenized.

# F.5 Recurring Chinese Sentence

Here, we explain the recurring sentence in this paper. In Table 1 we had the SC sentence 维护发展中国家共同利益, which means "Safeguarding the common interests of developing countries". This is pronounced wéihù fāzhǎn zhōng guójiā gòngtóng lìyì in Mandarin.
Its correct TC translation is 維護發展中國家共同利益, which is pronounced wai4 wu6 faat3 zin2 zung1 gwok3 gaa1 gung6 tung4 lei6 jik1 in Cantonese (note that the numerals are the tones).
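As a toy illustration of the Viterbi segmentation underlying 1kenize (§4.4), run on a fragment of this recurring sentence: the unigram log-probabilities below are hand-picked stand-ins for the TC LSTM scorer used in the paper, so the numbers (but not the algorithm) are illustrative only.

```python
import math

# Hand-picked unigram log-probabilities standing in for the TC LSTM; any
# callable scoring a candidate subword would slot in the same way.
LOGP = {"維護": math.log(0.2), "發展": math.log(0.2), "中國": math.log(0.1),
        "國家": math.log(0.15), "中": math.log(0.05), "發展中": math.log(0.05)}
UNK = math.log(1e-6)  # fallback score for single characters not in the table

def viterbi_segment(sent, max_len=4):
    """Return the max-score segmentation of `sent` under the token scores."""
    best = [(0.0, [])] + [(-math.inf, []) for _ in sent]
    for j in range(1, len(sent) + 1):
        for i in range(max(0, j - max_len), j):
            piece = sent[i:j]
            score = best[i][0] + LOGP.get(
                piece, UNK if len(piece) == 1 else -math.inf)
            if score > best[j][0]:
                best[j] = (score, best[i][1] + [piece])
    return best[len(sent)][1]

print(viterbi_segment("維護發展中國家"))  # → ['維護', '發展中', '國家']
```

Note that under these scores the decoder prefers 發展中|國家 ("developing | countries") over 發展|中國|家, which is exactly the kind of boundary disambiguation the paper relies on.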
# A Batch Normalized Inference Network Keeps the KL Vanishing Away

Qile Zhu $^{1}$ , Wei Bi $^{2}$ , Xiaojiang Liu $^{2}$ , Xiyao Ma $^{1}$ , Xiaolin Li $^{3}$ and Dapeng Wu $^{1}$

$^{1}$ University of Florida, $^{2}$ Tencent AI Lab, $^{3}$ AI Institute, Tongdun Technology

{valder,maxiy,dpwu}@ufl.edu
{victoriabi,kieranliu}@tencent.com
xiaolin.li@tongdun.net

# Abstract

Variational Autoencoder (VAE) is widely used as a generative model to approximate a model's posterior on latent variables by combining the amortized variational inference and
deep neural networks. However, when paired with strong autoregressive decoders, VAE often converges to a degenerate local optimum known as "posterior collapse". Previous approaches consider the Kullback-Leibler divergence (KL) individually for each datapoint. We propose to let the KL follow a distribution across the whole dataset, and show that keeping the expectation of the KL's distribution positive is sufficient to prevent posterior collapse. We then propose Batch Normalized-VAE (BN-VAE), a simple but effective approach that sets a lower bound on this expectation by regularizing the distribution of the approximate posterior's parameters. Without introducing any new model component or modifying the objective, our approach avoids posterior collapse effectively and efficiently. We further show that the proposed BN-VAE can be extended to the conditional VAE (CVAE). Empirically, our approach surpasses strong autoregressive baselines on language modeling, text classification and dialogue generation, and rivals more complex approaches while keeping almost the same training time as VAE.

# 1 Introduction

Variational Autoencoder (VAE) (Kingma and Welling, 2014; Rezende et al., 2014) is one of the most popular generative frameworks for modeling complex distributions. Different from the Autoencoder (AE), VAE provides a distribution-based latent representation of the data: it encodes the input $\mathbf{x}$ into a probability distribution $\mathbf{z}$ and reconstructs the original input using samples from $\mathbf{z}$. At inference time, VAE first samples the latent variable from the prior distribution and then feeds it into the decoder to generate an instance. VAE has been successfully applied to many NLP tasks, including topic modeling (Srivastava and Sutton, 2017; Miao et al., 2016; Zhu et al., 2018), language modeling (Bowman et al., 2016), text generation (Zhao et al., 2017b) and text classification (Xu et al., 2017).
An autoregressive decoder (e.g., a recurrent neural network) is a common choice for modeling text data. However, when paired with strong autoregressive decoders such as LSTMs (Hochreiter and Schmidhuber, 1997) and trained under the conventional training strategy, VAE suffers from a well-known problem called posterior collapse, or KL vanishing: the decoder learns to reconstruct the data independently of the latent variable $\mathbf{z}$, and the KL vanishes to 0.

Many convincing solutions have been proposed to prevent posterior collapse. Among them, fixing the KL as a positive constant is an important direction (Davidson et al., 2018; Guu et al., 2018; van den Oord et al., 2017; Xu and Durrett, 2018; Tomczak and Welling, 2018; Kingma et al., 2016; Razavi et al., 2019). Some replace the Gaussian prior with other distributions, e.g., a uniform prior (van den Oord et al., 2017; Zhao et al., 2018) or a von Mises-Fisher (vMF) distribution (Davidson et al., 2018; Guu et al., 2018; Xu and Durrett, 2018). However, these approaches force the same constant KL on every example and lose the flexibility to allow various KLs for different data points (Razavi et al., 2019). Without changing the Gaussian prior, free-bits (Kingma et al., 2016) adds a threshold (the free bits) to the KL term in the ELBO objective and stops optimizing the KL part when its value is smaller than the threshold. Chen et al. (2017) point out that the free-bits objective is non-smooth and suffers from optimization challenges. $\delta$-VAE (Razavi et al., 2019) sets the parameters in a specific range to achieve a positive KL value for every latent dimension, which may limit the model performance.

Other work analyzes this problem from the perspective of optimization (Bowman et al., 2016; Zhao et al., 2017a; Chen et al., 2017; Alemi et al., 2018). Recently, He et al. (2019) observe that the inference network lags far behind the decoder during training.
They propose to add additional training loops for the inference network only. Li et al. (2019) further propose to initialize the inference network with an encoder pretrained with an AE objective, and then train the VAE with free-bits. However, both methods are much slower than the original VAE.

The limitation of a constant KL and the high cost of additional training motivate us to seek an approach that allows flexible modeling for different data points while remaining as fast as the original VAE. In this paper, instead of considering the KL individually for each data point, we let it follow a distribution across the whole dataset. We demonstrate that keeping a positive expectation of the KL's distribution is sufficient to prevent posterior collapse in practice. By regularizing the distribution of the approximate posterior's parameters, a positive lower bound on this expectation can be ensured. We then propose Batch Normalized-VAE (BN-VAE), a simple yet effective approach to achieve this goal, and discuss the connections between BN-VAE and previous enhanced VAE variants. We further extend BN-VAE to the conditional VAE (CVAE). Finally, experimental results demonstrate the effectiveness of our approach on real applications, including language modeling, text classification and dialogue generation. Empirically, our approach surpasses strong autoregressive baselines and is competitive with more sophisticated approaches while being far more efficient. Code and data are available at https://github.com/valdersoul/bn-vae.

# 2 Background and Related Work

In this section, we first introduce the basic background of VAE, then discuss the lagging problem (He et al., 2019), and finally present more related work.

# 2.1 VAE Background

VAE (Kingma and Welling, 2014; Rezende et al., 2014) aims to learn a generative model $p(\mathbf{x}, \mathbf{z})$ that maximizes the marginal likelihood $\log p(\mathbf{x})$ on a dataset.
The marginal likelihood cannot be calculated directly due to an intractable integral over the latent variable $\mathbf{z}$. To address this, VAE introduces a variational distribution $q_{\phi}(\mathbf{z}|\mathbf{x})$, parameterized by a neural network, to approximate the true posterior, and instead optimizes the ELBO of $\log p(\mathbf{x})$:

$$
\mathcal{L} = \mathrm{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}[\log p_{\theta}(\mathbf{x}|\mathbf{z})] - KL(q_{\phi}(\mathbf{z}|\mathbf{x})\,||\,p(\mathbf{z})), \tag{1}
$$

where $\phi$ represents the inference network and $\theta$ denotes the decoder. The first term is the reconstruction loss, while the second is the KL between the approximate posterior and the prior. The Gaussian distribution $\mathcal{N}(0, I)$ is the usual choice for the prior, and the KL between the approximate posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ and the prior $p(\mathbf{z})$ can be computed as:

$$
KL = \frac{1}{2}\sum_{i=1}^{n}\left(\mu_{i}^{2} + \sigma_{i}^{2} - \log \sigma_{i}^{2} - 1\right), \tag{2}
$$

where $\mu_{i}$ and $\sigma_{i}$ are the mean and standard deviation of the approximate posterior for the $i$-th latent dimension. When the decoder is autoregressive, it can recover the data independently of the latent $\mathbf{z}$ (Bowman et al., 2016). The optimization then encourages the approximate posterior to approach the prior, which drives the KL to zero.

# 2.2 The Lagging Problem

Recently, He et al. (2019) analyze posterior collapse under the Gaussian prior from the perspective of training dynamics. The collapse is a local optimum of VAE where $q_{\phi}(\mathbf{z}|\mathbf{x}) = p_{\theta}(\mathbf{z}|\mathbf{x}) = p(\mathbf{z})$ for all inputs.
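The closed-form KL of Eq. 2 is simple to compute in practice; a minimal NumPy sketch (the function name and array shapes are our own illustration, not code from the paper):

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) per sample, summed over the
    # n latent dimensions, following Eq. 2 with logvar = log(sigma^2).
    return 0.5 * np.sum(mu**2 + np.exp(logvar) - logvar - 1.0, axis=-1)

# A collapsed posterior (mu = 0, sigma = 1) yields KL = 0 for every sample.
mu = np.zeros((4, 8))
logvar = np.zeros((4, 8))
print(gaussian_kl(mu, logvar))  # -> [0. 0. 0. 0.]
```

Each latent dimension contributes $\frac{1}{2}(\mu_i^2 + \sigma_i^2 - \log\sigma_i^2 - 1)$, which is zero exactly when $\mu_i = 0$ and $\sigma_i = 1$, i.e., when that dimension has collapsed to the prior.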
They further define two partial collapse states: model collapse, when $p_{\theta}(\mathbf{z}|\mathbf{x}) = p(\mathbf{z})$, and inference collapse, when $q_{\phi}(\mathbf{z}|\mathbf{x}) = p(\mathbf{z})$. They observe that inference collapse always happens far before model collapse due to the existence of autoregressive decoders. Unlike the model posterior, the inference network lacks guidance and easily collapses to the prior at the initial stage of training, and thus posterior collapse happens. Based on this understanding, they propose to aggressively optimize the inference network. However, this approach costs considerably more time than the original VAE. In our work, we also employ the Gaussian prior and thus face the same lagging problem. Yet, our proposed approach does not involve additional training effort: it effectively avoids the lagging problem (Section 3.3) while keeping almost the same training efficiency as the original VAE (Section 5.1).

# 2.3 Related Work

Beyond the prior-changing approaches already mentioned in the introduction, some work modifies the original training objective directly. For example, Bowman et al. (2016) introduce an annealing strategy, which gradually increases the weight of the KL from 0 to 1 during a warm-up period. $\beta$-VAE (Higgins et al., 2017) treats the KL weight as a hyperparameter to constrain the minimum value of the KL. Alemi et al. (2017), on the other hand, set a fixed KL weight to control the mutual information between $\mathbf{z}$ and $\mathbf{x}$. Tolstikhin et al. (2018) leverage the Wasserstein distance to replace the KL. Zhao et al. (2017a) replace the KL with maximum mean discrepancy. Fang et al. (2019) introduce sample-based representations which lead to implicit latent features with an auxiliary network.

Some change the training strategy. Kim et al.
(2018) address the amortization gap (Cremer et al., 2018) in VAE and propose Semi-Amortized VAE, which composes the inference network with additional mean-field updates. Fu et al. (2019) propose a cyclical annealing schedule, which repeats the process of increasing $\beta$ multiple times.

There are various other approaches to address posterior collapse. For example, some researchers weaken the decoder by replacing the LSTM decoder with convolutional neural networks without autoregressive modeling (Semeniuta et al., 2017; Yang et al., 2017). Chen et al. (2017) input a lossy representation of the data to the autoregressive decoder and enforce $\mathbf{z}$ to capture the information about the original input. Inheriting this idea, subsequent work adds direct connections between $\mathbf{z}$ and $\mathbf{x}$ (Zhao et al., 2017b; Dieng et al., 2019). Ma et al. (2019) introduce an additional regularization to learn diverse latent representations. $\delta$-VAE (Razavi et al., 2019) and free-bits (Kingma et al., 2016) set a minimum KL for each latent dimension to prevent posterior collapse.

Srivastava and Sutton (2017, 2018) find that training VAE with ADAM (Kingma and Ba, 2014) at a high learning rate may cause the gradients to diverge early. Their explanation for the diverging behavior lies in the exponential curvature of the gradient from the inference network which produces the variance part of the approximate posterior. They then apply batch normalization to the variance part to solve this problem. We use plain SGD without momentum to train our model. Moreover, we apply batch normalization to the mean part of the inference network to keep the expectation of the KL's distribution positive, which is different from their work. We also note that Sønderby et al. (2016) utilize batch normalization in all fully connected layers with nonlinear activation functions to improve model performance.
In contrast, our approach directly applies batch normalization to the parameters of the approximate posterior, i.e., the output of the inference network.

# 3 Batch-Normalized VAE

In this section, we first derive the expectation of the KL's distribution and show that keeping this expectation positive suffices to avoid posterior collapse. We then propose our regularization method on the parameters of the approximate posterior to ensure a positive lower bound on this expectation. We further discuss the difference between our approach and previous work.

# 3.1 Expectation of the KL's Distribution

Given an $\pmb{x} \in \mathcal{X}$, the inference network parametrizes an $n$-dimensional Gaussian distribution with mean $\mu = f_{\mu}(\pmb{x})$ and diagonal covariance $\pmb{\Sigma} = \text{diag}(f_{\Sigma}(\pmb{x}))$, where $f_{\mu}$ and $f_{\Sigma}$ are two neural networks. In practice, the ELBO is computed through a Monte Carlo estimate over $b$ samples. The KL in Eq. 2 is then computed over $b$ samples from $\mathcal{X}$:

$$
\begin{aligned}
KL &= \frac{1}{2b}\sum_{j=1}^{b}\sum_{i=1}^{n}\left(\mu_{ij}^{2} + \sigma_{ij}^{2} - \log \sigma_{ij}^{2} - 1\right)\\
&= \frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sum_{j=1}^{b}\mu_{ij}^{2}}{b} + \frac{\sum_{j=1}^{b}\sigma_{ij}^{2}}{b} - \frac{\sum_{j=1}^{b}\log \sigma_{ij}^{2}}{b} - 1\right). \tag{3}
\end{aligned}
$$

As $b$ grows, this empirical value approaches the mean of the KL across the whole dataset.

To make use of this observation, we assume that $\mu_{i}$ and $\log \sigma_i^2$ for each latent dimension $i$ follow a certain distribution with a fixed mean and variance across the dataset. The distribution may vary between different latent dimensions. In this way, the KL becomes a distribution over the $\mu_{i}$'s and $\log \sigma_{i}^{2}$'s. From Eq.
3, we can see that $\sum_{j=1}^{b} \mu_{ij}^{2} / b$ is the sample mean of $\mu_{i}^{2}$, which converges to $\mathrm{E}[\mu_{i}^{2}] = \mathrm{Var}[\mu_{i}] + \mathrm{E}^{2}[\mu_{i}]$. Similarly, $\sum_{j=1}^{b} \sigma_{ij}^{2} / b$ converges to $\mathrm{E}[\sigma_{i}^{2}]$, and $\sum_{j=1}^{b} \log \sigma_{ij}^{2} / b$ to $\mathrm{E}[\log \sigma_{i}^{2}]$. Thus, we can derive the expectation of the KL's distribution as:

$$
\begin{aligned}
\mathrm{E}[KL] &= \frac{1}{2}\sum_{i=1}^{n}\left(\mathrm{Var}[\mu_{i}] + \mathrm{E}^{2}[\mu_{i}] + \mathrm{E}[\sigma_{i}^{2}] - \mathrm{E}[\log \sigma_{i}^{2}] - 1\right)\\
&\geq \frac{1}{2}\sum_{i=1}^{n}\left(\mathrm{Var}[\mu_{i}] + \mathrm{E}^{2}[\mu_{i}]\right), \tag{4}
\end{aligned}
$$

where $\mathrm{E}[\sigma_i^2 - \log \sigma_i^2] \geq 1$ since the minimum of $e^x - x$ is 1. If we can guarantee a positive lower bound on $\mathrm{E}[KL]$, we can effectively prevent posterior collapse.

Based on Eq. 4, the lower bound depends only on the number of latent dimensions $n$ and the mean and variance of the $\mu_{i}$'s. This motivates our idea: with proper regularization of the distributions of the $\mu_{i}$'s, we can ensure a positive lower bound on $\mathrm{E}[KL]$.

# 3.2 Normalizing Parameters of the Posterior

The remaining key problem is to construct distributions of the $\mu_{i}$'s that yield a positive lower bound on $\mathrm{E}[KL]$ in Eq. 4. Here, we propose a simple and efficient way to accomplish this: apply a fixed batch normalization to the output $\mu_{i}$ of the inference network. Batch Normalization (BN) (Ioffe and Szegedy, 2015) is a widely used regularization technique in deep learning. It normalizes the output of neurons and makes the optimization landscape significantly smoother (Santurkar et al., 2018). Different from other uses of BN, which apply it in the hidden layers to obtain fast and stable training, here we leverage BN as a tool to give $\mu_{i}$ a distribution with a fixed mean and variance. Mathematically, the regularized $\mu_{i}$ is given by:

$$
\hat{\mu}_{i} = \gamma \frac{\mu_{i} - \mu_{\mathcal{B}i}}{\sigma_{\mathcal{B}i}} + \beta, \tag{5}
$$

where $\mu_{i}$ and $\hat{\mu}_{i}$ are the means of the approximate posterior before and after BN, and $\mu_{\mathcal{B}i}$ and $\sigma_{\mathcal{B}i}$ denote the mean and standard deviation of $\mu_{i}$. They are estimated (with bias) within a batch of samples for each
Different from other tasks that apply BN in the hidden layers and seek fast and stable training, here we leverage BN as a tool to transform $\mu_{i}$ into a distribution with a fixed mean and variance. Mathematically, the regularized $\mu_{i}$ is written by: + +$$ +\hat {\mu} _ {i} = \gamma \frac {\mu_ {i} - \mu_ {\mathcal {B} i}}{\sigma_ {\mathcal {B} i}} + \beta , \tag {5} +$$ + +where $\mu_{i}$ and $\hat{\mu_i}$ are means of the approximate posterior before and after BN. $\mu_{\mathcal{B}i}$ and $\sigma_{\mathcal{B}i}$ denote the mean and standard deviations of $\mu_{i}$ . They are biased estimated within a batch of samples for each + +dimension indecently. $\gamma$ and $\beta$ are the scale and shift parameter. Instead of using a learnable $\gamma$ in Eq. 5, we use a fixed BN which freezes the scale $\gamma$ . In this way, the distribution of $\mu_{i}$ has the mean of $\beta$ and the variance of $\gamma^2$ . $\beta$ is a learnable parameter that makes the distribution more flexible. + +Now, we derive the lower bound of $\operatorname{E}[KL]$ by using the fixed BN. With the fixed mean $\beta$ and variance $\gamma^2$ for $\mu_i$ in hand, we get a new lower bound as below: + +$$ +\begin{array}{l} \operatorname {E} [ K L ] \geq \frac {1}{2} \sum_ {i} ^ {n} (\operatorname {V a r} [ \mu_ {i} ] + \operatorname {E} ^ {2} [ \mu_ {i} ]) \\ = \frac {n \cdot \left(\gamma^ {2} + \beta^ {2}\right)}{2}. \tag {6} \\ \end{array} +$$ + +To this end, we can easily control the lower bound of $\operatorname{E}[KL]$ by setting $\gamma$ . Algorithm 1 shows the training process. + +Algorithm 1 BN-VAE training. + +1: Initialize $\phi$ and $\theta$ . +2: for $i = 1,2,\dots$ Until Convergence do +3: Sample a mini-batch $\mathbf{x}$ +4: $\mu, \log \sigma^2 = f_{\phi}(\mathbf{x})$ +5: $\mu^{\prime} = BN_{\gamma ,\beta}(\mu)$ +6: Sample $\mathbf{z} \sim \mathcal{N}(\mu', \sigma^2)$ and reconstruct $\mathbf{x}$ from $f_{\theta}(\mathbf{z})$ . 
7: Compute gradients $\mathbf{g}_{\phi,\theta} \gets \nabla_{\phi,\theta}\mathcal{L}(\mathbf{x};\phi,\theta)$.
8: Update $\phi, \theta$ using $\mathbf{g}_{\phi, \theta}$.
9: end for

# 3.3 Connections with Previous Approaches

Constructing a positive KL: Both free-bits (Kingma et al., 2016) and $\delta$-VAE (Razavi et al., 2019) set a threshold on the KL value. Free-bits changes the KL term in the ELBO to a hinge loss term: $\sum_{i=1}^{n}\max(\lambda, KL(q_{\phi}(z_{i}|x)\,||\,p(z_{i})))$. Another version of free-bits applies the threshold to the entire sum directly instead of each individual value. When trained with the free-bits objective, the model stops driving down the KL once it is already below $\lambda$. However, Chen et al. (2017) point out that the free-bits objective is non-smooth and suffers from optimization challenges. Our approach does not face this optimization problem since we use the original ELBO objective.

$\delta$-VAE sets a target rate of $\delta$ for each latent dimension by constraining the mean and variance of the approximate posterior:

$$
\sigma_{q} = \sigma_{q}^{l} + \left(\sigma_{q}^{u} - \sigma_{q}^{l}\right)\frac{1}{1 + e^{-q_{\phi}(x)}}, \tag{7}
$$

$$
\mu = 2\delta + 1 + \ln\left(\sigma_{q}^{2}\right) - \sigma_{q}^{2} + \max(0, \mu_{\phi}(\mathbf{x})), \tag{8}
$$

where $[\sigma_{q}^{l}, \sigma_{q}^{u}]$ is the feasible interval for $\sigma_q$, obtained by solving $\ln(\sigma_q^2) - \sigma_q^2 + 2\delta + 1 \geq 0$. Although $\delta$-VAE can ensure a minimum value for the KL, it limits the model performance because the parameters are constrained to this interval. Our approach only constrains the distribution of $\mu$, which is more flexible than $\delta$-VAE. Experiments further show that our approach surpasses both free-bits and $\delta$-VAE.
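The fixed batch normalization of Eq. 5 and its lower bound (Eq. 6) can be checked numerically. A short NumPy sketch (our own illustration; the `fixed_bn` helper and the batch statistics below are assumptions, not code from the paper):

```python
import numpy as np

def fixed_bn(mu, gamma=0.7, beta=0.0, eps=1e-6):
    # Eq. 5: normalize each latent dimension over the batch, then rescale
    # with a frozen scale gamma and a shift beta (learnable in the paper;
    # fixed here for illustration).
    mean = mu.mean(axis=0)
    std = mu.std(axis=0)  # biased (population) estimate, as in batch norm
    return gamma * (mu - mean) / (std + eps) + beta

rng = np.random.default_rng(0)
b, n, gamma, beta = 512, 32, 0.7, 0.0
mu_hat = fixed_bn(rng.normal(size=(b, n)) * 5.0, gamma=gamma, beta=beta)

# The mu-dependent part of Eq. 4 now matches the bound of Eq. 6:
kl_mu_part = 0.5 * np.sum(mu_hat**2) / b   # ~ n * (gamma^2 + beta^2) / 2
bound = n * (gamma**2 + beta**2) / 2
print(round(kl_mu_part, 3), round(bound, 3))
```

After normalization each dimension of $\hat{\mu}$ has batch mean $\beta$ and batch variance $\gamma^2$ by construction, so the $\mu$-dependent part of the empirical KL equals $n(\gamma^2 + \beta^2)/2$ regardless of what the inference network outputs.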
Reducing inference lag: As we focus on the setting of the conventional Gaussian prior, the lagging problem mentioned in Section 2.2 is crucial. Here, it is instructive to analyze an alternate form of the ELBO:

$$
\mathcal{L} = \log p_{\theta}(\mathbf{x}) - KL\left(q_{\phi}(\mathbf{z}|\mathbf{x})\,||\,p_{\theta}(\mathbf{z}|\mathbf{x})\right). \tag{9}
$$

In this view, the only goal of the approximate posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ is to match the model posterior $p_{\theta}(\mathbf{z}|\mathbf{x})$. We examine the ability of our approach to reduce inference lag using the same synthetic experiment as He et al. (2019); details can be found in Section 1 of the Appendix. The synthetic experiment indicates that our regularization helps rebalance the optimization between inference and generation, and finally overcomes posterior collapse. We also prefer a large $\gamma$, since a small $\gamma$ pushes the approximate posterior toward the prior.

# 4 Extension to CVAE

Given an observation $\mathbf{x}$ and its output $\mathbf{y}$, CVAE (Sohn et al., 2015; Zhao et al., 2017b) models the conditional distribution $p(\mathbf{y}|\mathbf{x})$. The variational lower bound of the conditional log-likelihood is:

$$
\begin{aligned}
\mathcal{L} &= \mathrm{E}_{q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})}[\log p_{\kappa}(\mathbf{y}|\mathbf{x},\mathbf{z})] - KL\left(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\,||\,p_{\theta}(\mathbf{z}|\mathbf{x})\right)\\
&\leq \log p(\mathbf{y}|\mathbf{x}). \tag{10}
\end{aligned}
$$

Different from VAE, the prior $p_{\theta}(\mathbf{z}|\mathbf{x})$ in CVAE is not fixed; it is also parametrized by a neural network.
It is possible to apply another BN on the mean of the prior with a different $\gamma$ so that the expectation of the KL becomes a constant. However, this lower bound is uncontrollable because the density of $\mu_{1} + \mu_{2}$ is the convolution of their densities, which is intractable.

To overcome this issue, we propose to constrain the prior with a fixed distribution. We achieve this by adding another KL between the prior and a known Gaussian distribution $r(\mathbf{z})$, i.e., $KL(p_{\theta}(\mathbf{z}|\mathbf{x})\,||\,r(\mathbf{z}))$. Instead of optimizing the ELBO in Eq. 10, we optimize a lower bound of the ELBO for CVAE:

$$
\mathcal{L}^{\prime} = \mathcal{L} - KL(p_{\theta}(\mathbf{z}|\mathbf{x})\,||\,r(\mathbf{z})) \leq \mathcal{L}. \tag{11}
$$

The KL term in the new bound is the sum of $KL(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\,||\,p_{\theta}(\mathbf{z}|\mathbf{x}))$ and $KL(p_{\theta}(\mathbf{z}|\mathbf{x})\,||\,r(\mathbf{z}))$, which can be computed as:

$$
KL = \frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{qi}^{2} + (\mu_{qi} - \mu_{pi})^{2}}{\sigma_{pi}^{2}} + \sigma_{pi}^{2} + \mu_{pi}^{2} - \log \sigma_{qi}^{2} - 2\right), \tag{12}
$$

where $\sigma_q$, $\mu_q$ and $\sigma_p$, $\mu_p$ are the parameters of $q_{\phi}$ and $p_{\theta}$ respectively, and $n$ denotes the hidden size. This KL term vanishes to 0 if and only if both $q_{\phi}$ and $p_{\theta}$ collapse to $r(\mathbf{z})$, the standard normal distribution. As explained in Section 3.2, the KL will not be 0 when we apply BN in $q_{\phi}$. We further prove that when $q_{\phi}$ collapses to $p_{\theta}$, the KL term is not at its minimum (details in Section 2 of the Appendix), so $KL(q_{\phi}(\mathbf{z}|\mathbf{x},\mathbf{y})\,||\,p_{\theta}(\mathbf{z}|\mathbf{x}))$ will not be 0. In this way, we avoid posterior collapse in CVAE. Algorithm 2 shows the training details.

Algorithm 2 BN-CVAE training.
1: Initialize $\phi, \theta$ and $\kappa$.
2: for $i = 1,2,\dots$ until convergence do
3: Sample a mini-batch $(\mathbf{x}, \mathbf{y})$.
4: $\mu_q, \log \sigma_q^2 = f_\phi(\mathbf{x}, \mathbf{y})$ and $\mu_p, \log \sigma_p^2 = f_\theta(\mathbf{x})$.
5: $\mu_q' = BN_{\gamma, \beta}(\mu_q)$.
6: Sample $\mathbf{z} \sim \mathcal{N}(\mu_q', \sigma_q^2)$ and reconstruct $\mathbf{y}$ from $f_{\kappa}(\mathbf{z}, \mathbf{x})$.
7: Compute gradients $\mathbf{g}_{\phi,\theta,\kappa} \gets \nabla_{\phi,\theta,\kappa}\mathcal{L}^{\prime}$.
8: Update $\phi, \theta, \kappa$ using $\mathbf{g}_{\phi, \theta, \kappa}$.
9: end for
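The combined KL term of Eq. 11 is the sum of two diagonal-Gaussian KLs, and can be evaluated directly from the parameters of $q_{\phi}$ and $p_{\theta}$. A hedged NumPy sketch (helper names and shapes are our own, not from the paper's code):

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    # KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ), summed over dims.
    return 0.5 * np.sum(var_q / var_p + (mu_q - mu_p)**2 / var_p
                        + np.log(var_p) - np.log(var_q) - 1.0, axis=-1)

def bn_cvae_kl(mu_q, var_q, mu_p, var_p):
    # KL(q||p) + KL(p||r) with r = N(0, I), the KL term optimized in Eq. 11.
    kl_qp = kl_diag_gaussians(mu_q, var_q, mu_p, var_p)
    kl_pr = kl_diag_gaussians(mu_p, var_p,
                              np.zeros_like(mu_p), np.ones_like(var_p))
    return kl_qp + kl_pr

# The term is zero only if q and p both collapse to r = N(0, I):
z = np.zeros((1, 4)); o = np.ones((1, 4))
print(bn_cvae_kl(z, o, z, o))  # -> [0.]
```

Because the BN on $\mu_q$ keeps $q_{\phi}$ away from $\mathcal{N}(0, I)$, this sum cannot reach zero during training, which is the mechanism the section describes.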
| Model | Yahoo NLL | Yahoo KL | Yahoo MI | Yahoo AU | Yelp NLL | Yelp KL | Yelp MI | Yelp AU |
|---|---|---|---|---|---|---|---|---|
| *Without a pretrained AE encoder* | | | | | | | | |
| CNN-VAE | ≤332.1 | 10.0 | - | - | ≤359.1 | 7.6 | - | - |
| LSTM-LM | 328 | - | - | - | 351.1 | - | - | - |
| VAE | 328.6 | 0.0 | 0.0 | 0.0 | 357.9 | 0.0 | 0.0 | 0.0 |
| β-VAE (0.4) | 328.7 | 6.3 | 2.8 | 8.0 | 358.2 | 4.2 | 2.0 | 4.2 |
| cyclic* | 330.6 | 2.1 | 2.0 | 2.3 | 359.5 | 2.0 | 1.9 | 4.1 |
| Skip-VAE* | 328.5 | 2.3 | 1.3 | 8.1 | 357.6 | 1.9 | 1.0 | 7.4 |
| SA-VAE | 327.2 | 5.2 | 2.7 | 9.8 | 355.9 | 2.8 | 1.7 | 8.4 |
| Agg-VAE | 326.7 | 5.7 | 2.9 | 15.0 | 355.9 | 3.8 | 2.4 | 11.3 |
| FB (4) | 331.0 | 4.1 | 3.8 | 3.0 | 359.2 | 4.0 | 1.9 | 32.0 |
| FB (5) | 330.6 | 5.7 | 2.0 | 3.0 | 359.8 | 4.9 | 1.3 | 32.0 |
| δ-VAE (0.1)* | 330.7 | 3.2 | 0.0 | 0.0 | 359.8 | 3.2 | 0.0 | 0.0 |
| vMF-VAE (13)* | 327.4 | 2.0 | - | 32.0 | 357.5 | 2.0 | - | 32.0 |
| BN-VAE (0.6)* | 326.7 | 6.2 | 5.6 | 32.0 | 356.5 | 6.5 | 5.4 | 32.0 |
| BN-VAE (0.7)* | 327.4 | 8.8 | 7.4 | 32.0 | 355.9 | 9.1 | 7.4 | 32.0 |
| *With a pretrained AE encoder* | | | | | | | | |
| cyclic* | 333.1 | 25.8 | 9.1 | 32.0 | 361.5 | 20.5 | 9.3 | 32.0 |
| FB (4)* | 326.2 | 8.1 | 6.8 | 32.0 | 356.0 | 7.6 | 6.6 | 32.0 |
| δ-VAE (0.15)* | 331.0 | 5.6 | 1.1 | 11.2 | 359.4 | 5.2 | 0.5 | 5.9 |
| vMF-VAE (13)* | 328.4 | 2.0 | - | 32.0 | 357.0 | 2.0 | - | 32.0 |
| BN-VAE (0.6)* | 326.7 | 6.4 | 5.8 | 32.0 | 355.5 | 6.6 | 5.9 | 32.0 |
| BN-VAE (0.7)* | 326.5 | 9.1 | 7.6 | 32.0 | 355.7 | 9.1 | 7.5 | 32.0 |
Table 1: Results on Yahoo and Yelp datasets. We report mean values across 5 different random runs. * indicates results from our experiments; others are from He et al. (2019); Li et al. (2019). We only show the best performance of every model for each dataset. More results with various parameters can be found in the Appendix.

# 5 Experiments

# 5.1 VAE for Language Modeling

Setup: We test our approach on two benchmark datasets: the Yelp and Yahoo corpora (Yang et al., 2017). We use a Gaussian prior $\mathcal{N}(0, I)$, and the approximate posterior is a diagonal Gaussian. Following previous work (Burda et al., 2016; He et al., 2019), we report the negative log likelihood (NLL) estimated from 500 importance-weighted samples, which provides a tighter lower bound than the ELBO and carries the same information as the perplexity (PPL). Besides the NLL, we also report the KL, the mutual information (MI) $I_q$ (Alemi et al., 2017) and the number of active units (AU) (Burda et al., 2016) in the latent space. $I_q$ can be calculated as:

$$
I_{q} = \mathrm{E}_{p_{d}(\mathbf{x})}[KL(q_{\phi}(\mathbf{z}|\mathbf{x})\,||\,p(\mathbf{z}))] - KL\left(q_{\phi}(\mathbf{z})\,||\,p(\mathbf{z})\right), \tag{13}
$$

where $p_d(\mathbf{x})$ is the empirical distribution. The aggregated posterior $q_{\phi}(\mathbf{z}) = \mathrm{E}_{p_d(\mathbf{x})}[q_{\phi}(\mathbf{z}|\mathbf{x})]$ and $KL(q_{\phi}(\mathbf{z})\,||\,p(\mathbf{z}))$ can be approximated with Monte Carlo estimates. The AU is measured as $A_z = \mathrm{Cov}(\mathrm{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x})}[\mathbf{z}])$; we set a threshold of 0.01, so unit $i$ is considered active if $A_{z_i} > 0.01$.

Configurations: We use a 512-dimensional word embedding layer for both datasets. For the encoder and the decoder, a single-layer LSTM with 1024
We use $\mathbf{z}$ to generate the initial state of the encoder following Kim et al. (2018); He et al. (2019); Li et al. (2019). To optimize the objective, we use mini-batch SGD with 32 samples per batch. We use one NVIDIA Tesla v100 for the experiments. For all experiments, we use the linear annealing strategy that increases the KL weight from 0 to 1 in the first 10 epochs if possible. + +Compared methods: We compare our model with several strong baselines and methods that hold the previous state-of-the-art performance on text modeling benchmarks. + +- Baselines, including neural autoregressive models (the LSTM language model). +- Methods with weakening the decoder: CNN-VAE (Yang et al., 2017). +- Methods with a modified model structure: SkipVAE (Dieng et al., 2019). +Methods with a modified training objective: + +- VAE with annealing (Bowman et al., 2016). +- $\beta$ -VAE (Higgins et al., 2017). +- Cyclic annealing (Fu et al., 2019), we use the default cyclic schedule. + +Methods with a lower bound for KL values: + +- Free-bits (FB) (Kingma et al., 2016). +- $\delta$ -VAE (Razavi et al., 2019). +- vMF-VAE (Xu and Durrett, 2018) +- Methods with a modified training strategy. + +
| Model | Yahoo Hours | Yahoo Ratio | Yelp Hours | Yelp Ratio |
|---|---|---|---|---|
| VAE | 3.83 | 1.00 | 4.50 | 1.00 |
| SA-VAE | 52.99 | 12.80 | 59.37 | 12.64 |
| Agg-VAE | 11.76 | 2.84 | 21.44 | 4.56 |
| AE+FB | 7.70 | 2.01 | 9.22 | 2.05 |
| BN-VAE | 3.98 | 1.04 | 4.60 | 1.02 |
- Semi-amortized VAE (SA-VAE) (Kim et al., 2018).
- VAE with aggressive training (Agg-VAE) (He et al., 2019).
- FB with a pretrained inference network (AE+FB) (Fu et al., 2019).

Main results: Table 1 shows the results, split into two settings: with and without a pretrained inference network. Our approach achieves the best NLL in the setting without a pretrained inference network on both datasets and is competitive in the setting with a pretrained encoder. Moreover, we observe that:

- $\delta$-VAE does not perform well in either setting, which shows that constraining the parameters to a small interval is harmful to the model. In vMF-VAE, all data points share the same KL value. Our approach is more flexible and achieves better performance.
- Although Agg-VAE and SA-VAE both perform well, they require additional updates of the inference network and cost more training effort, as validated below.
- Cyclic annealing with a pretrained inference network achieves the highest KL, but it may not be a good generative model.
- Paired with a pretrained inference network, all methods except cyclic annealing gain in performance. This indicates that the lagging problem (He et al., 2019) is important in VAE training. When leveraging the pretrained inference network, our approach shows the smallest performance gap relative to the other methods; in other words, it alleviates the lagging problem efficiently.

Training time: Table 2 shows the training time (until convergence) and the relative ratio for the basic VAE, our approach and the other best three models in Table 1. SA-VAE is about 12 times slower than our approach due to the local update for each data point. Agg-VAE is 2-4 times slower

Table 2: Comparison of training time to convergence. We report both the absolute hours and the relative speed.
| #labels | 100 | 500 | 1k | 2k | 10k |
|---|---|---|---|---|---|
| AE | 81.1 | 86.2 | 90.3 | 89.4 | 94.1 |
| VAE | 66.1 | 82.6 | 88.4 | 89.6 | 94.5 |
| δ-VAE | 61.8 | 61.9 | 62.6 | 62.9 | 93.8 |
| Agg-VAE | 80.9 | 85.9 | 88.8 | 90.6 | 93.7 |
| cyclic | 62.4 | 75.5 | 80.3 | 88.7 | 94.2 |
| FB (9) | 79.8 | 84.4 | 88.8 | 91.12 | 94.7 |
| AE+FB (6) | 87.6 | 90.2 | 92.0 | 93.4 | 94.9 |
| BN-VAE (0.7) | 88.8 | 91.6 | 92.5 | 94.1 | 95.4 |
Table 3: Accuracy on Yelp.
| Model | CVAE | CVAE (BOW) | BN-VAE |
|---|---|---|---|
| PPL | 36.40 | 24.49 | 30.67 |
| KL | 0.15 | 9.30 | 5.18 |
| BLEU-4 | 10.23 | 8.56 | 8.64 |
| A-bow Prec | 95.87 | 96.89 | 96.64 |
| A-bow Recall | 90.93 | 93.95 | 94.43 |
| E-bow Prec | 86.26 | 83.55 | 84.69 |
| E-bow Recall | 77.91 | 81.13 | 81.75 |
Table 4: Comparison on dialogue generation.

than ours because it requires additional training for the inference network. AE+FB needs to train an autoencoder before the VAE. Our approach, in contrast, only adds a one-layer batch normalization, so its training cost is almost the same as the basic VAE. More results on the training behavior can be found in Section 3 of the Appendix.

Performance on a downstream task - Text classification: The goal of VAE is to learn a good representation of the data for downstream tasks. Here, we evaluate the quality of the latent representations by training a one-layer linear classifier on the mean of the posterior distribution. We use a downsampled version of the Yelp sentiment dataset (Shen et al., 2017). Li et al. (2019) further sampled various amounts of labeled data to train the classifier; for a fair comparison, we use the same samples as Li et al. (2019). Results are shown in Table 3. Our approach achieves the best accuracy in all settings. With 10k training samples, all methods obtain good results, but with only 100 training samples the methods vary widely in accuracy. This task shows that our approach learns a good latent representation even without a pretrained inference network.

# 5.2 CVAE for Dialogue Generation

Setup: For dialogue generation, we test our approach in the CVAE setting. Following previous work (Zhao et al., 2017b), we use the Switchboard (SW) Corpus (Godfrey and Holliman, 1997), which contains 2400 two-sided telephone conversations.
| Model | Fluency Avg | Fluency #Accept | Fluency #High | Relevance Avg | Relevance #Accept | Relevance #High | Informativeness Avg | Informativeness #Accept | Informativeness #High |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CVAE | 2.11 (0.58) | 87% | 23% | 1.90 (0.49) | 82% | 8% | 1.39 (0.59) | 34% | 5% |
| CVAE (BOW) | 2.08 (0.73) | 84% | 23% | 1.86 (0.58) | 75% | 11% | 1.54 (0.65) | 46% | 8% |
| BN-CVAE | 2.16 (0.71) | 88% | 27% | 1.92 (0.67) | 80% | 12% | 1.54 (0.67) | 43% | 10% |
Table 5: Human evaluation results. Numbers in parentheses are the corresponding variances over the 200 test samples.
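The #Accept and #High columns in Table 5 are simple threshold proportions over the 1-3 ratings (scores ≥ 2 and = 3, respectively). For illustration, with a hypothetical set of ratings:

```python
import numpy as np

ratings = np.array([3, 2, 1, 2, 3, 2, 1, 3, 2, 2])  # hypothetical 1-3 scores
avg = ratings.mean()                 # the "Avg" column
accept = np.mean(ratings >= 2)       # proportion of acceptable scores
high = np.mean(ratings == 3)         # proportion of high scores
print(avg, accept, high)
```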
Topic: ETHICS IN GOVERNMENT

Context: have trouble drawing lines as to what's illegal and what's not

Target (statement): well i mean the other problem is that they're always up for

| CVAE | CVAE (BOW) | BN-CVAE |
| --- | --- | --- |
| 1. yeah | 1. yeah | 1. it's not a country |
| 2. yeah | 2. oh yeah they're not | 2. it is the same thing that's what i think is about the state is a state |
| 3. yeah | 3. no it's not too bad | 3. yeah it's |
Table 6: Sampled generated responses. Only the last sentence in the context is shown here.

We use a bidirectional GRU with hidden size 300 to encode each utterance and a one-layer GRU with hidden size 600 to encode the previous $k - 1$ utterances as the context. The response decoder is a one-layer GRU with hidden size 400. The latent representation $\mathbf{z}$ has a size of 200. We use the evaluation metrics from Zhao et al. (2017b): (1) smoothed sentence-level BLEU (Chen and Cherry, 2014); (2) cosine distance of bag-of-word embeddings, a simple method for obtaining sentence embeddings. We use pretrained GloVe embeddings (Pennington et al., 2014) and denote the average method as A-bow and the extreme method as E-bow. Higher values indicate more plausible responses. We compare our approach with CVAE and CVAE with a bag-of-words (BOW) loss (Zhao et al., 2017b), which requires the decoder in the generation network to predict the bag of words in the response $\mathbf{y}$ based on $\mathbf{z}$.

Automatic evaluation: Table 4 shows the results of these three approaches. From the KL values, we find that CVAE suffers from posterior collapse, while CVAE (BOW) and our approach avoid it effectively. For BLEU-4, we observe the same phenomenon as previous work (Fu et al., 2019; Zhao et al., 2017b): CVAE is slightly better than the others. This is because CVAE, with its collapsed posterior, tends to repeatedly generate the most likely and safe responses. As for precision, the three models do not differ much. However, CVAE (BOW) and our BN-VAE outperform CVAE in recall by a large margin. This indicates that BN-VAE can also produce diverse responses of good quality, like CVAE (BOW).
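The embedding-based metrics above can be sketched in a few lines. This is our reading of the A-bow (average pooling) and E-bow (extreme pooling, i.e., per dimension keeping the entry with the largest magnitude) similarities, with toy vectors standing in for pretrained GloVe embeddings:

```python
import numpy as np

def a_bow(vecs):
    # Average pooling over the word vectors of a sentence.
    return np.mean(vecs, axis=0)

def e_bow(vecs):
    # Extreme pooling: per dimension, keep the entry with the largest magnitude.
    idx = np.argmax(np.abs(vecs), axis=0)
    return vecs[idx, np.arange(vecs.shape[1])]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-d word "embeddings" for a reference and a generated response.
ref = np.array([[0.2, -1.0, 0.3, 0.0], [0.5, 0.1, -0.4, 0.9]])
hyp = np.array([[0.3, -0.8, 0.2, 0.1], [0.4, 0.2, -0.5, 0.8]])

a_sim = cosine(a_bow(ref), a_bow(hyp))
e_sim = cosine(e_bow(ref), e_bow(hyp))
print("A-bow sim:", a_sim, "E-bow sim:", e_sim)
```

The precision/recall variants in Table 4 then aggregate these similarities over multiple sampled hypotheses per reference.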
Human evaluation: We conduct a human evaluation by asking five annotators from a commercial annotation company to grade 200 sampled conversations on fluency, relevance and informativeness on a scale of 1-3 (see Section 4 of the Appendix for more details on the criteria). We also report the proportion of acceptable/high scores ($\geq 2$ and $= 3$) for each metric. Table 5 shows the annotation results. Overall, our approach beats the other two methods in relevance and fluency while producing more informative responses. Our approach also has the largest proportion of responses scored High. This indicates that our model can produce more meaningful and relevant responses than the other two.

Case study: Table 6 shows sampled responses generated by the three methods (more can be found in the Appendix). By maintaining a reasonable KL, our approach generates responses that are more relevant to the query and more diverse than those of the other two. We test the three methods in the simplest setting of dialogue generation. Note that the focus of this work is to improve the CVAE itself by avoiding its KL vanishing problem, not to chase state-of-the-art dialogue generation performance. To further improve the quality of generated responses, our approach could be enhanced by incorporating knowledge such as dialogue acts (Zhao et al., 2017b), external facts (Ghazvininejad et al., 2018) and personal profiles (Zhang et al., 2018).

# 6 Conclusions and Future Work

In this paper, we tackle the posterior collapse problem that arises when a VAE is paired with an autoregressive decoder. Instead of considering the KL of each sample individually, we treat it as following a distribution $D_{KL}$ and show that keeping the expectation of $D_{KL}$ positive is sufficient to prevent posterior collapse. We propose Batch Normalized VAE (BN-VAE), a simple but effective approach that sets a lower bound on $D_{KL}$ by regularizing the approximate posterior's parameters.
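The effect of this regularization can be checked numerically: with a standard Gaussian prior, batch-normalizing the posterior means with a fixed scale $\gamma$ pins the batch average of $\mu_i^2$ at $\gamma^2$ in every dimension, and since $\sigma^2 - \log\sigma^2 - 1 \geq 0$, the batch-averaged KL is at least $d\gamma^2/2 > 0$. A toy numpy sketch (synthetic tensors, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim, gamma = 64, 16, 0.7

# Raw posterior parameters as an encoder might produce them.
mu_raw = rng.normal(size=(batch, dim))
logvar = rng.normal(scale=0.5, size=(batch, dim))

# Batch normalization of mu with fixed scale gamma and zero shift (beta = 0).
mu = gamma * (mu_raw - mu_raw.mean(0)) / mu_raw.std(0)

# KL(q || N(0, I)) per sample, averaged over the batch.
var = np.exp(logvar)
kl = 0.5 * np.sum(mu**2 + var - logvar - 1, axis=1)
expected_kl = kl.mean()

# After BN, the batch mean of mu_i^2 equals gamma^2 in every dimension,
# so the batch-averaged KL cannot fall below dim * gamma^2 / 2.
lower_bound = dim * gamma**2 / 2
print(expected_kl, ">=", lower_bound)
```

Because the bound holds for any value of the variance parameters, the decoder cannot drive the KL to zero no matter how the encoder variances evolve.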
Our approach also avoids the recently identified lagging problem efficiently, without additional training effort. We show that our approach can be easily extended to the CVAE. We test it on three real applications: language modeling, text classification and dialogue generation. Experiments show that our approach outperforms strong baselines and is competitive with more complex methods while remaining substantially faster.

We use the Gaussian prior as the running example to introduce our method. The key requirement for our approach to be applicable is a closed-form expression for the expectation of the KL. However, it is hard to obtain such an expression for some stronger or more sophisticated priors, e.g., the Dirichlet prior. For these distributions, we can approximate them by Gaussian distributions (as in Srivastava and Sutton (2017)) and batch-normalize the corresponding parameters. Further study in this direction may be interesting.

# References

Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A. Saurous, and Kevin Murphy. 2018. Fixing a broken ELBO. In ICML.
Alexander A. Alemi, Ian Fischer, Joshua V. Dillon, and Kevin Murphy. 2017. Deep variational information bottleneck. In ICLR.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In CoNLL.
Yuri Burda, Roger B. Grosse, and Ruslan R. Salakhutdinov. 2016. Importance weighted autoencoders. In ICLR.
Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation.
Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In ICLR.
Chris Cremer, Xuechen Li, and David Duvenaud. 2018. Inference suboptimality in variational autoencoders. In ICML.
Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. 2018. Hyperspherical variational auto-encoders. In UAI.
Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. 2019. Avoiding latent variable collapse with generative skip models. In AISTATS.
Le Fang, Chunyuan Li, Jianfeng Gao, Wen Dong, and Changyou Chen. 2019. Implicit deep latent variable models for text generation. In EMNLP-IJCNLP.
Hao Fu, Chunyuan Li, Xiaodong Liu, Jianfeng Gao, Asli Celikyilmaz, and Lawrence Carin. 2019. Cyclical annealing schedule: A simple approach to mitigating KL vanishing. In NAACL.
Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In AAAI.
J. Godfrey and E. Holliman. 1997. Switchboard-1 release 2: Linguistic Data Consortium. In SWITCHBOARD: A User's Manual.
Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. In Transactions of the Association for Computational Linguistics. MIT Press.
Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In ICLR.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. In Neural Computation. MIT Press.
Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML.
Yoon Kim, Sam Wiseman, Andrew C. Miller, David A. Sontag, and Alexander M. Rush. 2018. Semi-amortized variational autoencoders. In ICML.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In ICLR.
Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In ICLR.
Durk P. Kingma, Tim Salimans, Rafal Jozefowicz, Xi Chen, Ilya Sutskever, and Max Welling. 2016. Improved variational inference with inverse autoregressive flow. In NeurIPS.
Bohan Li, Junxian He, Graham Neubig, Taylor Berg-Kirkpatrick, and Yiming Yang. 2019. A surprisingly effective fix for deep latent variable modeling of text. In EMNLP-IJCNLP.
Xuezhe Ma, Chunting Zhou, and Eduard Hovy. 2019. MAE: Mutual posterior-divergence regularization for variational autoencoders. In ICLR.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In ICML.
Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. In NeurIPS.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP.
Ali Razavi, Aaron van den Oord, Ben Poole, and Oriol Vinyals. 2019. Preventing posterior collapse with delta-VAEs. In ICLR.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In ICML.
Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. 2018. How does batch normalization help optimization? In NeurIPS.
Stanislau Semeniuta, Aliaksei Severyn, and Erhardt Barth. 2017. A hybrid convolutional variational autoencoder for text generation. In EMNLP.
Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2017. Style transfer from non-parallel text by cross-alignment. In NeurIPS.
Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In NeurIPS.
Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. 2016. Ladder variational autoencoders. In NeurIPS.
Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In ICLR.
Akash Srivastava and Charles Sutton. 2018. Variational inference in pachinko allocation machines. arXiv preprint arXiv:1804.07944.
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. 2018. Wasserstein auto-encoders. In ICLR.
Jakub M. Tomczak and Max Welling. 2018. VAE with a VampPrior. In AISTATS.
Jiacheng Xu and Greg Durrett. 2018. Spherical latent spaces for stable variational autoencoders. In EMNLP.
Weidi Xu, Haoze Sun, Chao Deng, and Ying Tan. 2017. Variational autoencoder for semi-supervised text classification. In AAAI.
Zichao Yang, Zhiting Hu, Ruslan Salakhutdinov, and Taylor Berg-Kirkpatrick. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In ICML.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In ACL.
Shengjia Zhao, Jiaming Song, and Stefano Ermon. 2017a. InfoVAE: Information maximizing variational autoencoders. arXiv preprint arXiv:1706.02262.
Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In ACL.
Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017b. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In ACL.
Qile Zhu, Zheng Feng, and Xiaolin Li. 2018. GraphBTM: Graph enhanced autoencoded variational inference for biterm topic model. In EMNLP.

# A Appendix

# A.1 Experiments on Synthetic Data

We follow Agg-VAE and construct synthetic data to validate whether our approach can avoid the lagging problem. The VAE used in this synthetic task has an LSTM encoder and an LSTM decoder. We use a scalar latent variable because we need to compute $\mu_{x,\theta}$, which is approximated by discretizing $p_{\theta}(z|x)$.
To visualize the training progress, we sample 500 data points from the validation set and show them in the mean space.

We plot the mean values of the approximate posterior and the model posterior during training for the basic VAE and BN-VAE. As shown in the first column of Fig. 1, all points initially have a zero model-posterior mean (the x-axis), which indicates that $\mathbf{z}$ and $\mathbf{x}$ are independent at the beginning of training. For the basic VAE, points start to spread along the x-axis during training while sharing almost the same y value, since the model posterior $p_{\theta}(\mathbf{z}|\mathbf{x})$ is well learned with the help of the autoregressive decoder. However, the inference posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ lags behind $p_{\theta}(\mathbf{z}|\mathbf{x})$ and collapses to the prior in the end. Our regularization, approximated by BN, on the other hand, pushes the inference posterior $q_{\phi}(\mathbf{z}|\mathbf{x})$ away from the prior $p(\mathbf{z})$ at the initial training stage, and forces $q_{\phi}(\mathbf{z}|\mathbf{x})$ to catch up with $p_{\theta}(\mathbf{z}|\mathbf{x})$ to minimize $KL(q_{\phi}(\mathbf{z}|\mathbf{x})||p_{\theta}(\mathbf{z}|\mathbf{x}))$ in Eq. 9. As shown in the second row of Fig. 1, points spread in both directions and toward the diagonal.

We also report results for different $\gamma$'s with different batch sizes (32 in Fig. 1). Fig. 2 shows the training dynamics. Both settings of $\gamma$ avoid posterior collapse efficiently. A larger $\gamma$ produces more diverse $\mu$'s, which spread along the diagonal, whereas a small $\gamma$ results in a small variance for the distribution of $\mu$, so the $\mu$'s in the bottom row are closer to the origin (the mean of the distribution). When $\gamma$ is 0, posterior collapse happens. Different batch sizes do not differ much, so 32 is a decent choice.
An intuitive improvement of our method is to automatically learn a different $\gamma$ for each latent dimension, which we leave for future work.

# A.2 Proof in CVAE

The KL can be computed as:

$$
KL = \frac{1}{2}\sum_{i=1}^{n}\left(\frac{\sigma_{qi}^{2} + \left(\mu_{qi} - \mu_{pi}\right)^{2}}{\sigma_{pi}^{2}} + \sigma_{pi}^{2} + \mu_{pi}^{2} - \log \sigma_{qi}^{2} - 1\right). \tag{14}
$$

We need to prove that the KL does not attain its minimum when $\mu_{pi}$ equals $\mu_{qi}$ and $\sigma_{pi}$ equals $\sigma_{qi}$. We take hidden size 1 as an example. The function of the two variables $\mu_{pi}$ and $\sigma_{pi}$ is:

$$
f_{\mu_{pi},\sigma_{pi}} = \frac{\sigma_{qi}^{2} + \left(\mu_{qi} - \mu_{pi}\right)^{2}}{\sigma_{pi}^{2}} + \sigma_{pi}^{2} + \mu_{pi}^{2} - \log \sigma_{qi}^{2} - 1, \tag{15}
$$

and any maximum or minimum of $f_{\mu_{pi},\sigma_{pi}}$ must be a stationary point of $f_{\mu_{pi},\sigma_{pi}}$ due to its smoothness. The partial derivatives are:

$$
\frac{\partial f}{\partial \mu_{pi}} = \frac{2\left(\mu_{pi} - \mu_{qi}\right)}{\sigma_{pi}^{2}} + 2\mu_{pi}, \tag{16}
$$

$$
\frac{\partial f}{\partial \sigma_{pi}} = \frac{-2\left(\sigma_{qi}^{2} + \left(\mu_{qi} - \mu_{pi}\right)^{2}\right)}{\sigma_{pi}^{3}} + 2\sigma_{pi}. \tag{17}
$$

When $\mu_{pi} = \mu_{qi}$ and $\sigma_{pi} = \sigma_{qi}$, neither partial derivative is 0. So this is not a stationary point of $f$, and hence not the minimum.

# A.3 Language Modeling

We investigate the training procedure for the different models. We plot the MI $I_{q}$, $D_{KL}$ in the ELBO, and the distance between the approximate posterior and the prior, $D_{KL}(q_{\phi}(z)||p(z))$. As in Eq. 4 in the main paper, $D_{KL}$ in the ELBO is the sum of the other two. Fig.
3 shows these three values throughout training. Although $D_{KL}$ is an upper bound on the mutual information, we notice that the gap is usually large. In the initial training stage, $D_{KL}$ increases in the basic VAE with annealing, while its MI remains small. As the annealing weight decreases, the method eventually suffers from posterior collapse. In contrast, our approach obtains a high MI with a small $D_{KL}$ value, like aggressive VAE. The full results on language modeling are in Table 8.

# A.4 CVAE for Dialogue Generation

Human evaluation: We evaluate the generated responses on three aspects: relevance, fluency and informativeness. The evaluation criteria are given in Table 7. We sample 200 conversations from the test set. For each conversation, we sample three generated responses from each model, for a total of 600 responses.

Case study: We report four examples generated by the three models in Table 9. CVAE (BOW) and our approach can both generate diverse responses. However, responses from ours are more related to the context than those of the other two.

![](images/d4210eb55e845431a8375f1446de3e661107ce297579d7897e78ca451f062ff1.jpg)

![](images/fa4ebc4f657890e77da998962f8aa3423737cfb41c89cea565c43caf9ef7b122.jpg)

![](images/5e1f9de101d383d44302ff4c10b0a2b695acaff51fb04bc4e4ca4ac0ca62492a.jpg)

![](images/89e2ab0d06b326fe0e0ed5f2ed4e1a75363bf22c1e17029ddde679cfe805a715.jpg)

![](images/643b3040c4f0b8d6cfdafd801dbdde275f3de5a45919e93b01d316e31fa90637.jpg)
Figure 1: Visualization of 500 sampled data points from the synthetic dataset during training. The x-axis is $\mu_{x,\theta}$, the approximate model posterior mean. The y-axis is $\mu_{x,\phi}$, the inference posterior mean. $b$ is the batch size and $\gamma$ is 1 in BN.
![](images/fa952160034d089318babb1568dfe5b11b4981c3e20760e68edd67044bbf34cf.jpg)

![](images/8eebf42eb7521f5ca479a5cb48943269726369d61e192b1c6066b19e5e9c0c7b.jpg)

![](images/db944eaa3ecb598fb1c61083866f9ed27dc41cef9a6f52ace86f33ec6db91d14.jpg)

![](images/6a44af03c6c2700c441eb51cff8517b6f721f5517534a13fea6ba86877960320.jpg)

![](images/566f86da046a71d6e86c8bc0436a853da4da296c2417aa283abf272b3e27ef06.jpg)

![](images/c32f4382a747b1777323dec47d42ada87d0b58f834f302b7091d6d7ae906e8ca.jpg)

![](images/45c50d36c63c2c79ab8e8464b785db47b474ae518c40a9031f3f7cc82ac85d70.jpg)

![](images/6c8c4f70b7169cd67954eab18060b9106cd4d6b4aafe3e23e3c76d2a3567b90c.jpg)
Figure 2: Visualization of our BN-VAE on different $\gamma$ for synthetic data.

![](images/5623f2b617bbbe8dcc336e308fd93aeaa7cdb7c9c47c18b5f9aa6f4b19f71ee8.jpg)

![](images/c7a7c39c7bec794a41fd16e62edfb751784a4d6dd73e36dde59a97c6c1ac5239.jpg)

![](images/a04c09648faa1fc39a77253443a1425fdc52b632992215f8f801229e39500527.jpg)

![](images/9ed6ec79c74a66d25d7c318e2d2fa04ca9a8969d65fb872a22f0796afb69787a.jpg)
Figure 3: Training behavior on Yelp. Left/Middle/Right: VAE/Agg-VAE/BN-VAE (all models are with annealing).

![](images/dae6b7dcc4892ad0fc12c6da455cacf475d9999f361ee4c777b5a957dc66c08a.jpg)

![](images/0093f8d0f2f06b0c63118bbcc003d377679194ecc462ec587c86d55432fe2600.jpg)

Table 7: Human evaluation criteria.
| | Fluency | Relevance | Informativeness |
| --- | --- | --- | --- |
| 1 Point | 1. Hard to understand. 2. Too many syntax mistakes. | Not related to the query at all. | 1. Generic responses. 2. Repeated query. |
| 2 Points | 1. Several syntax mistakes but still understandable. 2. Short responses, e.g., generic responses. | 1. Response and query are in the same domain/topic but are not directly related. 2. Generic responses. | Between 1 and 3. |
| 3 Points | Only a few syntax mistakes, with a moderate length. | Closely related to the query. | 1. Creative responses. 2. Contain new information about the query. |
| Model | Yahoo NLL | Yahoo KL | Yahoo MI | Yahoo AU | Yelp NLL | Yelp KL | Yelp MI | Yelp AU |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CNN-VAE | ≤332.1 | 10.0 | - | - | ≤359.1 | 7.6 | - | - |
| LSTM-LM | 328 | - | - | - | 351.1 | - | - | - |
| VAE | 328.6 | 0.0 | 0.0 | 0.0 | 357.9 | 0.0 | 0.0 | 0.0 |
| β-VAE (0.2) | 332.2 | 19.1 | 3.3 | 20.4 | 360.7 | 11.7 | 3.0 | 10.0 |
| β-VAE (0.4) | 328.7 | 6.3 | 2.8 | 8.0 | 358.2 | 4.2 | 2.0 | 4.2 |
| β-VAE (0.6) | 328.5 | 0.3 | 0.0 | 1.0 | 357.9 | 0.2 | 0.1 | 3.8 |
| β-VAE (0.8) | 328.8 | 0.0 | 0.0 | 0.0 | 358.1 | 0.0 | 0.0 | 0.0 |
| cyclic* | 330.6 | 2.1 | 2.0 | 2.3 | 359.5 | 2.0 | 1.9 | 4.1 |
| Skip-VAE* | 328.5 | 2.3 | 1.3 | 8.1 | 357.6 | 1.9 | 1.0 | 7.4 |
| SA-VAE | 327.2 | 5.2 | 2.7 | 9.8 | 355.9 | 2.8 | 1.7 | 8.4 |
| Agg-VAE | 326.7 | 5.7 | 2.9 | 15.0 | 355.9 | 3.8 | 2.4 | 11.3 |
| FB (4) | 331.0 | 4.1 | 3.8 | 3.0 | 359.2 | 4.0 | 1.9 | 32.0 |
| FB (5) | 330.6 | 5.7 | 2.0 | 3.0 | 359.8 | 4.9 | 1.3 | 32.0 |
| δ-VAE (0.1)* | 330.7 | 3.2 | 0.0 | 0.0 | 359.8 | 3.2 | 0.0 | 0.0 |
| δ-VAE (0.15)* | 331.6 | 4.8 | 0.0 | 0.0 | 360.4 | 4.8 | 0.0 | 0.0 |
| δ-VAE (0.2)* | 332.2 | 6.4 | 0.0 | 0.0 | 361.5 | 6.4 | 0.0 | 0.0 |
| δ-VAE (0.25)* | 333.5 | 8.0 | 0.0 | 0.0 | 362.5 | 8.0 | 0.0 | 0.0 |
| vMF-VAE (13)* | 327.4 | 2.0 | - | 32.0 | 357.5 | 2.0 | - | 32.0 |
| vMF-VAE (16)* | 328.5 | 3.0 | - | 32.0 | 367.8 | 3.0 | - | 32.0 |
| vMF-VAE (20)* | 329.4 | 4.0 | - | 32.0 | 358.0 | 4.0 | - | 32.0 |
| vMF-VAE (23)* | 328.7 | 5.0 | - | 32.0 | 357.3 | 5.0 | - | 32.0 |
| vMF-VAE (25)* | 330.1 | 6.0 | - | 32.0 | 357.8 | 6.0 | - | 32.0 |
| vMF-VAE (30)* | 329.5 | 7.0 | - | 32.0 | 357.8 | 7.0 | - | 32.0 |
| BN-VAE (0.3)* | 328.1 | 1.6 | 1.4 | 32.0 | 356.7 | 1.7 | 1.4 | 32.0 |
| BN-VAE (0.4)* | 327.7 | 2.7 | 2.2 | 32.0 | 356.2 | 3.1 | 2.5 | 32.0 |
| BN-VAE (0.5)* | 327.4 | 4.2 | 3.3 | 32.0 | 356.4 | 4.4 | 3.8 | 32.0 |
| BN-VAE (0.6)* | 326.7 | 6.2 | 5.6 | 32.0 | 356.5 | 6.5 | 5.4 | 32.0 |
| BN-VAE (0.7)* | 327.4 | 8.8 | 7.4 | 32.0 | 355.9 | 9.1 | 7.4 | 32.0 |
| *Pretrained encoder* | | | | | | | | |
| +cyclic* | 333.1 | 25.8 | 9.1 | 32.0 | 361.5 | 20.5 | 9.3 | 32.0 |
| +FB (2)* | 327.2 | 4.3 | 3.8 | 32.0 | 356.6 | 4.6 | 4.2 | 32.0 |
| +FB (3)* | 327.1 | 4.5 | 3.9 | 32.0 | 356.3 | 5.8 | 5.2 | 32.0 |
| +FB (4)* | 326.2 | 8.1 | 6.8 | 32.0 | 356.0 | 7.6 | 6.6 | 32.0 |
| +FB (5)* | 326.6 | 8.9 | 7.3 | 32.0 | 356.5 | 9.0 | 7.4 | 32.0 |
| +FB (6)* | 326.6 | 10.8 | 8.1 | 32.0 | 356.5 | 12.0 | 8.6 | 32.0 |
| +FB (7)* | 326.6 | 12.1 | 8.5 | 32.0 | 356.8 | 13.4 | 8.9 | 32.0 |
| +FB (8)* | 326.7 | 13.6 | 8.9 | 32.0 | 357.5 | 15.8 | 9.2 | 32.0 |
| +δ-VAE (0.15)* | 331.0 | 5.6 | 1.1 | 11.2 | 359.4 | 5.2 | 0.5 | 5.9 |
| vMF-VAE (13)* | 328.4 | 2.0 | - | 32.0 | 357.0 | 2.0 | - | 32.0 |
| +BN-VAE (0.6)* | 326.7 | 6.4 | 5.8 | 32.0 | 355.5 | 6.6 | 5.9 | 32.0 |
| +BN-VAE (0.7)* | 326.5 | 9.1 | 7.6 | 32.0 | 355.7 | 9.1 | 7.5 | 32.0 |
Table 8: Results on the Yahoo and Yelp datasets. We report mean values across 5 different random runs. * indicates results from our experiments; the others are from previous reports.

Table 9: Sampled generated responses. Only the last sentence in the context is shown here.
Topic: ETHICS IN GOVERNMENT

Context: have trouble drawing lines as to what's illegal and what's not

Target (statement): well i mean the other problem is that they're always up for

| CVAE | CVAE (BOW) | BN-CVAE |
| --- | --- | --- |
| 1. yeah | 1. yeah | 1. it's not a country |
| 2. yeah | 2. oh yeah, they're not | 2. it is the same thing that's what i think is about the state is a state |
| 3. yeah | 3. no it's not too bad | 3. yeah it's |

Topic: VACATION SPOTS

Context: well i 'll talk to you later

Target (conventional-closing): okay now do you push the buttons now

| CVAE | CVAE (BOW) | BN-CVAE |
| --- | --- | --- |
| 1. okay bye - bye | 1. okay so we'll go ahead and start | 1. okay bye - bye |
| 2. bye | 2. so i guess it depends on how much you are you | 2. nice talking to you too |
| 3. okay bye - bye | 3. it's | 3. all right take care bye - bye |

Topic: RECYCLING

Context: are they doing a lot of recycling out in georgia

Target (statement-non-opinion): well at my workplace they are we have places for aluminum cans and we have everybody's been unk a separate trash can for

| CVAE | CVAE (BOW) | BN-CVAE |
| --- | --- | --- |
| 1. yeah | 1. well that's good for a while i'll tell you that you're not doing anything at ti and then you're in a small town | 1. well we do recycle newspapers |
| 2. yeah | 2. oh i know i've got a lot of trees and trees and stuff and | 2. yes i do too |
| 3. yeah | 3. yeah it's like you know people that want to be unk and they're not going to bother you to make a mess | 3. well we're at a point where we're going to be a landfill space |

Topic: UNIVERSAL HEALTH INS

Context: some of the good obviously that nobody has to worry about health care

Target (statement-non-opinion): and i guess i'll have to help with grandchildren one of these days i hope

| CVAE | CVAE (BOW) | BN-CVAE |
| --- | --- | --- |
| 1. um - hum | 1. okay well see we don't have any choice of any of those | 1. well i hope that we should have a balanced budget |
| 2. uh - huh | 2. um - hum | 2. uh - huh |
| 3. uh - huh | 3. yeah | 3. well that's a good idea |
# A Call for More Rigor in Unsupervised Cross-lingual Learning

Mikel Artetxe†*, Sebastian Ruder‡*, Dani Yogatama‡, Gorka Labaka†, Eneko Agirre†

†HiTZ Center, University of the Basque Country (UPV/EHU)

‡DeepMind

{mikel.artetxe,gorka.labaka,e.agirre}@ehu.eus

{ruder,dyogatama}@google.com

# Abstract

We review motivations, definition, approaches, and methodology for unsupervised cross-lingual learning and call for a more rigorous position in each of them. An existing rationale for such research is based on the lack of parallel data for many of the world's languages. However, we argue that a scenario without any parallel data and abundant monolingual data is unrealistic in practice.
We also discuss different training signals that have been used in previous work, which depart from the pure unsupervised setting. We then describe common methodological issues in the tuning and evaluation of unsupervised cross-lingual models and present best practices. Finally, we provide a unified outlook for different types of research in this area (i.e., cross-lingual word embeddings, deep multilingual pretraining, and unsupervised machine translation) and argue for comparable evaluation of these models.

# 1 Introduction

The study of the connections among human languages has contributed to major discoveries, including the evolution of languages, the reconstruction of proto-languages, and an understanding of language universals (Eco and Fentress, 1995). In natural language processing, the main promise of multilingual learning is to bridge the digital language divide, enabling access to information and technology for the world's 6,900 languages (Ruder et al., 2019). For the purpose of this paper, we define "multilingual learning" as learning a common model for two or more languages from raw text, without any downstream task labels. Common use cases include translation as well as pretraining multilingual representations. We will use the term interchangeably with "cross-lingual learning".

Recent work in this direction has increasingly focused on purely unsupervised cross-lingual learning (UCL), i.e., cross-lingual learning without any parallel signal across the languages. We provide an overview in §2. Such work has been motivated by the apparent dearth of parallel data for most of the world's languages. In particular, previous work has noted that "data encoding cross-lingual equivalence is often expensive to obtain" (Zhang et al., 2017a) whereas "monolingual data is much easier to find" (Lample et al., 2018a).
Overall, it has been argued that unsupervised cross-lingual learning "opens up opportunities for the processing of extremely low-resource languages and domains that lack parallel data completely" (Zhang et al., 2017a).

We challenge this narrative and argue that the scenario of no parallel data but sufficient monolingual data is unrealistic and not reflected in the real world (§3.1). Nevertheless, UCL is an important research direction, and we advocate for its study based on its inherent scientific interest (to better understand and make progress on general language understanding), its usefulness as a lab setting, and its simplicity (§3.2).

Unsupervised cross-lingual learning permits no supervisory signal by definition. However, previous work implicitly includes monolingual and cross-lingual signals that constitute a departure from the pure setting. We review existing training signals as well as other signals that may be of interest for future study (§4). We then discuss methodological issues in UCL (e.g., validation, hyperparameter tuning) and propose best evaluation practices (§5). Finally, we provide a unified outlook on established research areas (cross-lingual word embeddings, deep multilingual models and unsupervised machine translation) in UCL (§6), and conclude with a summary of our recommendations (§7).

# 2 Background

In this section, we briefly review existing work on UCL, covering cross-lingual word embeddings (§2.1), deep multilingual pretraining (§2.2), and unsupervised machine translation (§2.3).

# 2.1 Cross-lingual word embeddings

Cross-lingual word embedding methods traditionally relied on parallel corpora (Gouws et al., 2015; Luong et al., 2015). Nonetheless, the amount of supervision required was greatly reduced by cross-lingual word embedding mappings, which work by separately learning monolingual word embeddings in each language and mapping them into a shared space through a linear transformation.
Early work required a bilingual dictionary to learn such a transformation (Mikolov et al., 2013a; Faruqui and Dyer, 2014). This requirement was later reduced with self-learning (Artetxe et al., 2017), and ultimately removed via unsupervised initialization heuristics (Artetxe et al., 2018a; Hoshen and Wolf, 2018) and adversarial learning (Zhang et al., 2017a; Conneau et al., 2018a). Finally, several recent methods have formulated cross-lingual embedding alignment as an optimal transport problem (Zhang et al., 2017b; Grave et al., 2019; Alvarez-Melis and Jaakkola, 2018).

# 2.2 Deep multilingual pretraining

Following the success of learning shallow word embeddings (Mikolov et al., 2013b; Pennington et al., 2014), there has been increasing interest in learning contextual word representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018). Recent research has been dominated by BERT (Devlin et al., 2019), which uses a bidirectional transformer encoder trained on masked language modeling and next sentence prediction, and has led to impressive gains on various downstream tasks.

While the above approaches are limited to a single language, a multilingual extension of BERT (mBERT) has been shown to also be effective at learning cross-lingual representations in an unsupervised way. The main idea is to combine monolingual corpora in different languages, upsampling those with less data, and to train a regular BERT model on the combined data. Conneau and Lample (2019) follow a similar approach but perform a more thorough evaluation and report substantially stronger results,$^{2}$ which was further scaled up by Conneau et al. (2019). Several recent studies (Wu and Dredze, 2019; Pires et al., 2019; Artetxe et al., 2020b; Wu et al., 2019) analyze mBERT to better understand its capabilities.
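In its simplest supervised form, the mapping-based approach of §2.1 reduces to the orthogonal Procrustes problem over a seed dictionary. A minimal numpy sketch with synthetic embeddings (this is a generic illustration, not the implementation of any of the cited systems):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 200                       # embedding size, seed dictionary size

# Synthetic "source" embeddings and a ground-truth rotation producing
# the "target" embeddings of their translations, plus a little noise.
X = rng.normal(size=(n, d))
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal matrix
Y = X @ Q + 0.01 * rng.normal(size=(n, d))

# Orthogonal Procrustes: W = U V^T from the SVD of X^T Y minimizes
# ||XW - Y||_F subject to W being orthogonal.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

err = np.linalg.norm(X @ W - Y) / np.linalg.norm(Y)
print(f"relative alignment error: {err:.4f}")
```

The unsupervised methods discussed above replace the seed dictionary with initialization heuristics, adversarial training, or optimal transport, but the linear mapping being learned is of this same form.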
+ +# 2.3 Unsupervised machine translation + +Early attempts to build machine translation systems using monolingual data alone go back to statistical decipherment (Ravi and Knight, 2011; Dou and Knight, 2012, 2013). However, this approach was only shown to work in limited settings, and the first convincing results on standard benchmarks were achieved by Artetxe et al. (2018c) and Lample et al. (2018a) on unsupervised Neural Machine Translation (NMT). Both approaches rely on cross-lingual word embeddings to initialize a shared encoder, and train it in conjunction with the decoder using a combination of denoising autoencoding, backtranslation, and optionally adversarial learning. + +Subsequent work adapted these principles to unsupervised phrase-based Statistical Machine Translation (SMT), obtaining large improvements over the original NMT-based systems (Lample et al., 2018b; Artetxe et al., 2018b). This alternative approach uses cross-lingual $n$ -gram embeddings to build an initial phrase table, which is combined with an $n$ -gram language model and a distortion model, and further refined through iterative backtranslation. There have been several follow-up attempts to combine NMT and SMT based approaches (Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019b). More recently, Conneau and Lample (2019), Song et al. (2019) and Liu et al. (2020) obtain strong results using deep multilingual pretraining rather than cross-lingual word embeddings to initialize unsupervised NMT systems. + +# 3 Motivating fully unsupervised learning + +In this section, we challenge the narrative of motivating UCL based on a lack of parallel resources. We argue that the strict unsupervised scenario cannot be motivated from an immediate practical perspective, and elucidate what we believe should be the true goals of this research direction. + +# 3.1 How practical is the strict unsupervised scenario? + +Monolingual resources subsume parallel resources. 
For instance, each side of a parallel corpus effectively serves as a monolingual corpus. From this argument, it follows that monolingual data is cheaper to obtain than parallel data, so unsupervised cross-lingual learning should in principle be more generally applicable than supervised learning.

However, we argue that the common claim that the requirement for parallel data "may not be met for many language pairs in the real world" (Xu et al., 2018) is largely inaccurate. For instance, the JW300 parallel corpus covers 343 languages with around 100,000 parallel sentences per language pair on average (Agić and Vulić, 2019), and the multilingual Bible corpus collected by Mayer and Cysouw (2014) covers 837 language varieties (each with a unique ISO 639-3 code). Moreover, the PanLex project aims to collect multilingual lexica for all human languages in the world, and already covers 6,854 language varieties with at least 20 lexemes, 2,364 with at least 200 lexemes, and 369 with at least 2,000 lexemes (Kamholz et al., 2014). While 20 or 200 lexemes might seem insufficient, weakly supervised cross-lingual word embedding methods have already proved effective with as few as 25 word pairs (Artetxe et al., 2017). More recent methods have focused on completely removing this weak supervision (Conneau et al., 2018a; Artetxe et al., 2018a), which can hardly be justified from a practical perspective given the existence of such resources and additional training signals stemming from a (partially) shared script (§4.2). Finally, given the availability of sufficient monolingual data, noisy parallel data can often be obtained by mining bitext (Schwenk et al., 2019a,b).

In addition, large amounts of monolingual data are difficult to obtain for low-resource languages. For instance, recent work on cross-lingual word embeddings has mostly used Wikipedia as its source for monolingual corpora (Gouws et al., 2015; Vulić and Korhonen, 2016; Conneau et al., 2018a).
However, as of November 2019, Wikipedia exists in only 307 languages,$^{3}$ of which nearly half have fewer than 10,000 articles. While one could hope to overcome this by taking the entire web as a corpus, as facilitated by Common Crawl$^{4}$ and similar initiatives, this is not always feasible for low-resource languages. First, the presence of less resourced languages on the web is very limited, with only a few hundred languages recognized as being used in websites.$^{5}$ This situation is further complicated by the limited coverage of existing tools such as language detectors (Buck et al., 2014; Grave et al., 2018), which only cover a few hundred languages. Alternatively, speech could also serve as a source of monolingual data (e.g., by recording public radio stations). However, this is an unexplored direction within UCL, and collecting, processing and effectively capitalizing on speech data is far from trivial, particularly for low-resource languages.

All in all, we conclude that the alleged scenario involving no parallel data and sufficient monolingual data is not met in the real world in the terms explored by recent UCL research. Needless to say, effectively exploiting unlabeled data is important in any low-resource setting. However, refusing to use an informative training signal—which parallel data is—when it does indeed exist cannot be justified from a practical perspective if one's goal is to build the strongest possible model. For this reason, we believe that semi-supervised learning is a more suitable paradigm for truly low-resource languages, and UCL should not be motivated from an immediate practical perspective.

# 3.2 A scientific motivation

Despite not being an entirely realistic setup, we believe that UCL is an important research direction for the reasons we discuss below.

Inherent scientific interest.
The extent to which two languages can be aligned based on independent samples—without any cross-lingual signal—is an open and scientifically relevant problem per se. In fact, it is not entirely obvious that UCL should be possible at all, as humans would certainly struggle to align two unknown languages without any grounding. Exploring the limits of UCL could help us understand the limits of the principles that the corresponding methods are based on, such as the distributional hypothesis. Moreover, this research line could bring new insights into the properties and inner workings of both language acquisition and the underlying computational models that ultimately make UCL possible. Finally, such methods may be useful in areas where supervision is impossible to obtain, such as when dealing with unknown or even non-human languages.

Useful as a lab setting. The strict unsupervised scenario, although not practical, allows us to isolate and better study the use of monolingual corpora for cross-lingual learning. We believe lessons learned in this setting can be useful in the more practical semi-supervised scenario. In a similar vein, monolingual language models, although hardly useful on their own, have contributed to large improvements in other tasks. From a research methodology perspective, unsupervised systems also set a competitive baseline, which any semi-supervised method should improve upon.

Simplicity as a value. As we discussed previously, refusing to use an informative training signal when it does exist can hardly be beneficial, so we should not expect UCL to perform better than semi-supervised learning. However, simplicity is a value in its own right. Unsupervised approaches could be preferable to their semi-supervised counterparts if the performance gap between them is small enough.
For instance, unsupervised cross-lingual embedding methods have been reported to be competitive with their semi-supervised counterparts in certain settings (Glavaš et al., 2019), while being easier to use in the sense that they do not require a bilingual dictionary.

# 4 What does unsupervised mean?

In its most general sense, unsupervised cross-lingual learning can be seen as referring to any method relying exclusively on monolingual text data in two or more languages. However, there are different training signals—stemming from common assumptions and varying amounts of linguistic knowledge—that one can potentially exploit under such a regime. This has led to an inconsistent use of the term in the literature. In this section, we categorize the different training signals available from both a monolingual and a cross-lingual perspective, and discuss additional scenarios enabled by multiple languages.

# 4.1 Monolingual training signals

From a computational perspective, text is modeled as a sequence of discrete symbols. In UCL, the training data consists of a set of such sequences in each of the languages. In principle, without any knowledge about the languages, one would have no prior information on the nature of such sequences or the possible relations between them. In practice, however, the sets of sequences are assumed to be independent, and existing work differs on whether it assumes document-level sequences (Conneau and Lample, 2019) or sentence-level sequences (Artetxe et al., 2018c; Lample et al., 2018a).

Nature of atomic symbols. A more important consideration is the nature of the atomic symbols in such sequences. To the best of our knowledge, previous work assumes some form of word segmentation or tokenization (e.g., splitting by whitespace or punctuation marks). Early work on cross-lingual word embeddings considered such tokens as atomic units.
However, more recent work (Hoshen and Wolf, 2018; Glavaš et al., 2019) has primarily used fastText embeddings (Bojanowski et al., 2017) which incorporate subword information into the embedding learning, although the vocabulary is still defined at the token level. In addition, there have also been approaches that incorporate character-level information into the alignment learning itself (Heyman et al., 2017; Riley and Gildea, 2018). In contrast, most work on contextual word embeddings and unsupervised machine translation operates with a subword vocabulary (Devlin et al., 2019; Conneau and Lample, 2019).

While the above distinction might seem irrelevant from a practical perspective, we think that it is important from a more fundamental point of view (e.g. in relation to the distributional hypothesis as discussed in §3.2). Moreover, some of the underlying assumptions might not generalize to different writing systems (e.g. logographic instead of alphabetic). For instance, subword tokenization has been shown to perform poorly on reduplicated words (Vania and Lopez, 2017). In relation to that, one could also consider the text in each language as a stream of discrete character-like symbols without any notion of tokenization. Such a tabula rasa approach is potentially applicable to any arbitrary language, even when its writing system is not known, but has so far only been explored for a limited number of languages in a monolingual setting (Hahn and Baroni, 2019).

Linguistic information. Finally, one can exploit additional linguistic knowledge through linguistic analysis such as lemmatization, part-of-speech tagging, or syntactic parsing. For instance, before the advent of unsupervised NMT, statistical decipherment was already shown to benefit from incorporating syntactic dependency relations (Dou and Knight, 2013). For other tasks such as unsupervised POS tagging (Snyder et al., 2008), monolingual tag dictionaries have been used.
While such approaches could still be considered unsupervised from a cross-lingual perspective, we argue that the interest of this research direction is greatly limited by two factors: (i) from a theoretical perspective, it assumes some fundamental knowledge that is not directly inferred from the raw monolingual corpora; and (ii) from a more practical perspective, it is not reasonable to assume that such resources are available in the less resourced settings where this research direction has more potential for impact. + +# 4.2 Cross-lingual training signals + +Pure UCL should not use any cross-lingual signal by definition. When we view text as a sequence of discrete atomic symbols (either characters or tokens), a strict interpretation of this principle would consider the set of atomic symbols in different languages to be disjoint, without prior knowledge of the relationship between them. + +Needless to say, any form of learning requires making assumptions, as one needs some criterion to prefer one mapping over another. In the case of UCL, such assumptions stem from the structural similarity across languages (e.g. semantically equivalent words in different languages are assumed to occur in similar contexts). In practice, these assumptions weaken as the distribution of the datasets diverges, and some UCL models have been reported to break under a domain shift (Søgaard et al., 2018; Guzmán et al., 2019; Marchisio et al., 2020). Similarly, approaches that leverage linguistic features such as syntactic dependencies may assume that these are similar across languages. + +In addition, one can also assume that the sets of symbols that are used to represent different languages have some commonalities. This departs from the strict definition of UCL above, establishing some prior connections between the sets of symbols in different languages. Such an assumption is reasonable from a practical perspective, as there are a few scripts (e.g. 
Latin, Arabic or Cyrillic) that cover a large fraction of languages. Moreover, even when two languages use different writing systems or scripts, there are often certain elements that are still shared (e.g. Arabic numerals, named entities written in a foreign script, URLs, certain punctuation marks, etc.). In relation to that, several models have relied on identically spelled words (Artetxe et al., 2017; Smith et al., 2017; Søgaard et al., 2018) or string-level similarity across languages (Riley and Gildea, 2018; Artetxe et al., 2019b) as training signals. Other methods use a joint subword vocabulary for all languages, indirectly exploiting the commonalities in their writing system (Lample et al., 2018b; Conneau and Lample, 2019).

However, past work differs greatly on the nature and relevance attributed to such a training signal. The reliance on identically spelled words has been considered a weak form of supervision in the cross-lingual word embedding literature (Søgaard et al., 2018; Ruder et al., 2018), and significant effort has been put into developing strictly unsupervised methods that do not rely on such a signal (Conneau et al., 2018a). In contrast, the unsupervised machine translation literature has not paid much attention to this factor, and has often relied on identical words (Artetxe et al., 2018c), string-level similarity (Artetxe et al., 2019b), or a joint subword vocabulary (Lample et al., 2018b; Conneau and Lample, 2019) under the unsupervised umbrella. The same is true for unsupervised deep multilingual pretraining, where a shared subword vocabulary has been a common component (Pires et al., 2019; Conneau and Lample, 2019), although recent work shows that it is not important to share the vocabulary across languages (Artetxe et al., 2020b; Wu et al., 2019).

Our position is that making assumptions on linguistic universals is acceptable and ultimately necessary for UCL.
However, we believe that any connection stemming from a (partly) shared writing system belongs to a different category, and should be considered a separate cross-lingual signal. Our rationale is that a given writing system is a specific way to encode a language, but cannot be considered part of the language itself.$^{6}$

# 4.3 Multilinguality

While most work in unsupervised cross-lingual learning considers two languages at a time, there have recently been some attempts to extend these methods to multiple languages (Duong et al., 2017; Chen and Cardie, 2018; Heyman et al., 2019), and most work on unsupervised cross-lingual pretraining is multilingual (Pires et al., 2019; Conneau
and Lample, 2019). When considering parallel data across a subset of the language pairs, multilinguality gives rise to additional scenarios. For instance, the scenario where two languages have no parallel data between each other but are well connected through a third (pivot) language has been explored by several authors in the context of machine translation (Cheng et al., 2016; Chen et al., 2017). However, given that the languages in question are still indirectly connected through parallel data, this scenario does not fall within the unsupervised category, and is instead commonly known as zero-resource machine translation.

An alternative scenario explored in the contemporaneous work of Liu et al. (2020) is where a set of languages are connected through parallel data, and there is a separate language with monolingual data only. We argue that, when it comes to the isolated language, such a scenario should still be considered as UCL, as it does not rely on any parallel data for that particular language nor does it assume any previous knowledge of it. This scenario is easy to justify from a practical perspective given the abundance of parallel data for high-resource languages, and can also be interesting from a more theoretical point of view. This way, rather than considering two unknown languages, this alternative scenario would assume some knowledge of how one particular language is connected to other languages, and attempt to align it to a separate unknown language.

# 4.4 Discussion

As discussed throughout the section, there are different training signals that we can exploit depending on the available resources of the languages involved and the assumptions made regarding their writing system, which are summarized in Table 1.

| Monolingual signal | Cross-lingual signal |
| --- | --- |
| Sequence of symbols | Shared writing system |
| Sets of sentences/documents | Identical words |
| Tokens/subwords | String similarity |
| Linguistic analysis | |

Table 1: Different types of monolingual and cross-lingual signals that have been used for unsupervised cross-lingual learning, ordered roughly from least to most linguistic knowledge (top to bottom).
Many of these signals are not specific to work on UCL, but have been observed in the past in allegedly language-independent NLP approaches, as discussed by Bender (2011). Others, such as a reliance on subwords or shared symbols, are more recent phenomena.

While we do not aim to open a terminological debate on what UCL encompasses, we advocate for future work being more aware and explicit about the monolingual and cross-lingual signals it employs, what assumptions it makes (e.g. regarding the writing system), and the extent to which these generalize to other languages.

In particular, we argue that it is critical to consider the assumptions made by different methods when comparing their results. Otherwise, the blind chase for state-of-the-art performance may benefit models making stronger assumptions and exploiting all available training signals, which could ultimately conflict with the eminently scientific motivation of this research area (see §3.2).

# 5 Methodological issues

In this section, we describe methodological issues that are commonly encountered when training and evaluating unsupervised cross-lingual models, and propose measures to ameliorate them.

# 5.1 Validation and hyperparameter tuning

In conventional supervised or semi-supervised settings, we use a separate validation set for development and hyperparameter tuning. However, this becomes tricky in unsupervised cross-lingual learning, where we ideally should not use any parallel data other than for testing purposes.

Previous work has not paid much attention to this aspect, and different methods are evaluated with different validation schemes. For instance, Artetxe et al. (2018b,c) use a separate language pair with a parallel validation set to make all development and hyperparameter decisions. They test their final system on other language pairs without any parallel data.
This approach has the advantage of being strictly unsupervised with respect to the test language pairs, but the optimal hyperparameter choice might not necessarily transfer well across languages. In contrast, Conneau et al. (2018a) and Lample et al. (2018a) propose an unsupervised validation criterion that is defined over monolingual data and shown to correlate well with test performance. This enables systematic tuning on the language pair of interest, but still requires parallel data to guide the development of the unsupervised validation criterion itself. A parallel validation set has also been used for systematic tuning in the context of unsupervised machine translation (Marie and Fujita, 2018; Marie et al., 2019; Stojanovski et al., 2019). While this is motivated as a way to abstract away the issue of unsupervised tuning—which the authors consider to be an open problem—we argue that any systematic use of parallel data should not be considered UCL. Finally, previous work often does not report the validation scheme used. In particular, unsupervised cross-lingual word embedding methods have almost exclusively been evaluated on bilingual lexicons that do not have a validation set, and presumably use the test set to guide development to some extent.

Our position is that a completely blind development process without any parallel data is unrealistic: some cross-lingual signal to guide development is always needed. However, this factor should be carefully controlled and reported with the necessary rigor as part of the experimental design. We advocate for using one language pair for development and evaluating on others when possible. If parallel data in the target language pair is used, the test set should be kept blind to avoid overfitting, and a separate validation set should be used.
In any case, we argue that the use of parallel data in the target language pair should be minimized if not completely avoided, and it should under no circumstances be used for extensive tuning. Instead, we recommend using unsupervised validation criteria for systematic tuning in the target language.

# 5.2 Evaluation practices

We argue that there are also several issues with common evaluation practices in UCL.

Evaluation on favorable conditions. Most work on UCL has focused on relatively close languages with large amounts of high-quality parallel corpora from similar domains. Only recently have approaches considered more diverse languages as well as language pairs that do not involve English (Glavaš et al., 2019; Vulić et al., 2019), and some existing methods have been shown to completely break in less favorable conditions (Guzmán et al., 2019; Marchisio et al., 2020). In addition, most approaches have focused on learning from similar domains, often involving Wikipedia and news corpora, which are unlikely to be available for low-resource languages. We believe that future work should pay more attention to the effect of the typology and linguistic distance of the languages involved, as well as the size, noise and domain similarity of the training data used.

Over-reliance on translation tasks. Most work on UCL focuses on translation tasks, either at the word level (where the problem is known as bilingual lexicon induction) or at the sentence level (where the problem is known as unsupervised machine translation). While translation can be seen as the ultimate application of cross-lingual learning and has a strong practical interest on its own, it only evaluates a particular facet of a model's cross-lingual generalization ability. In relation to that, Glavaš et al. (2019) showed that bilingual lexicon induction performance does not always correlate well with downstream tasks.
In particular, they observe that some mapping methods that are specifically designed for bilingual lexicon induction perform poorly on other tasks, showing the risk of relying excessively on translation benchmarks for evaluating cross-lingual models. + +Moreover, existing translation benchmarks have been shown to have several issues on their own. In particular, bilingual lexicon induction datasets have been reported to misrepresent morphological variations, overly focus on named entities and frequent words, and have pervasive gaps in the gold-standard targets (Czarnowska et al., 2019; Kementchedjhieva et al., 2019). More generally, most of these datasets are limited to relatively close languages and comparable corpora. + +Lack of an established cross-lingual benchmark. At the same time, there is no de facto standard benchmark to evaluate cross-lingual models beyond translation. Existing approaches have been evaluated in a wide variety of tasks including dependency parsing (Schuster et al., 2019), named entity recognition (Rahimi et al., 2019), sentiment analysis (Barnes et al., 2018), natural language inference (Conneau et al., 2018b), and document classification (Schwenk and Li, 2018). XNLI (Conneau et al., 2018b) and MLDoc (Schwenk and Li, 2018) are common choices, but they have their own problems: MultiNLI, the dataset from which XNLI was derived, has been shown to contain superficial cues that can be exploited (Gururangan et al., 2018), while MLDoc can be solved by keyword matching (Artetxe et al., 2020b). There are non-English counterparts for more challenging tasks such as question answering (Cui et al., 2019; Hsu et al., 2019), but these only exist for a handful of languages. More recent datasets such as XQuAD + +
(Artetxe et al., 2020b), MLQA (Lewis et al., 2019) and TyDi QA (Clark et al., 2020) cover a wider set of languages, but a comprehensive benchmark that evaluates multilingual representations on a diverse set of tasks—in the style of GLUE (Wang et al., 2018)—and languages has been missing until very recently. The contemporaneous XTREME (Hu et al., 2020) and XGLUE (Liang et al., 2020) benchmarks try to close this gap, but they are still restricted to languages where existing labelled data is available. Finally, an additional issue is that a large part of these benchmarks were created through translation, which was recently shown to introduce artifacts (Artetxe et al., 2020a).

| Methodological issues | Examples |
| --- | --- |
| Validation and hyperparameter tuning | Systematic tuning with parallel data or on test data |
| Evaluation on favorable conditions | Typologically similar languages; always including English; training on the same domain |
| Over-reliance on translation tasks | Overfitting to bilingual lexicon induction; known issues with existing datasets |
| Lack of an established benchmark | Evaluation on many different tasks; problems with common tasks (MLDoc and XNLI) |

Table 2: Methodological issues pertaining to validation and hyperparameter tuning and evaluation practices in current work on unsupervised cross-lingual learning.

We present a summary of the methodological issues discussed in Table 2.

# 6 Bridging the gap between unsupervised cross-lingual learning flavors

The three categories of UCL (§2) have so far been treated as separate research topics by the community. In particular, cross-lingual word embeddings have a long history (Ruder et al., 2019), while deep multilingual pretraining has emerged as a separate line of research with its own best practices and evaluation standards. At the same time, unsupervised machine translation has been considered a separate problem in its own right, where cross-lingual word embeddings and deep multilingual pretraining have just served as initialization techniques.

While each of these families has its own defining features, we believe that they share a strong connection that should be considered from a more holistic perspective. In particular, both cross-lingual word embeddings and deep multilingual pretraining share the goal of learning (sub)word representations, and essentially differ on whether such representations are static or context-dependent.
Similarly, in addition to being a downstream application of the former, unsupervised machine translation can also be useful to develop other multilingual applications or to learn better cross-lingual representations. This has previously been shown for supervised machine translation (McCann et al., 2017; Siddhant et al., 2019) and recently for bilingual lexicon induction (Artetxe et al., 2019a). In light of these connections, we call for a more holistic view of UCL, both from an experimental and a theoretical perspective.

Evaluation. Most work on cross-lingual word embeddings focuses on bilingual lexicon induction. In contrast, deep multilingual pretraining has not been tested on this task, and is instead typically evaluated on zero-shot cross-lingual transfer. We think it is important to evaluate both families—cross-lingual word embeddings and deep multilingual representations—in the same conditions to better understand their strengths and weaknesses. In that regard, Artetxe et al. (2020b) recently showed that deep pretrained models are much stronger in some downstream tasks, while cross-lingual word embeddings are more efficient and sufficient for simpler tasks. However, this could partly be attributed to a particular integration strategy, and we advocate for using a common evaluation framework in future work to allow a direct comparison between the different families.

Theory. From a more theoretical perspective, it is still not well understood in what ways cross-lingual word embeddings and deep multilingual pretraining differ. While one could expect the latter to be learning higher-level multilingual abstractions, recent work suggests that deep multilingual models might mostly be learning a lexical-level alignment (Artetxe et al., 2020b). For that reason, we believe that further research is needed to understand the relation between the two families of models.
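As a concrete instance of the word-level evaluation discussed in this section, bilingual lexicon induction reduces to nearest-neighbor retrieval once the two embedding spaces are aligned. A toy sketch with hypothetical vectors and word pairs (plain cosine retrieval; CSLS is a common refinement that mitigates hubness):

```python
import numpy as np

def induce_lexicon(X, Y, src_words, tgt_words):
    """Translate each source word to its nearest target word by cosine
    similarity, assuming X and Y are already mapped to a shared space."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    nearest = (Xn @ Yn.T).argmax(axis=1)   # row-wise cosine similarities
    return {s: tgt_words[j] for s, j in zip(src_words, nearest)}

# Toy aligned spaces where the intended pairs are obvious by construction.
X = np.array([[1.0, 0.1], [0.1, 1.0]])
Y = np.array([[0.0, 1.0], [1.0, 0.0]])
lexicon = induce_lexicon(X, Y, ["cat", "dog"], ["perro", "gato"])
# → {"cat": "gato", "dog": "perro"}
```

The induced dictionary is then scored against a gold lexicon (precision@1), which is exactly the benchmark setup whose pitfalls are discussed in §5.2.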
# 7 Recommendations

To summarize, we make the following practical recommendations for future cross-lingual research:

- Be rigorous when motivating UCL. Do not present it as a practical scenario unless supported by a real use case.
- Be explicit about the monolingual and cross-lingual signals used by your approach and the assumptions it makes, and take them into consideration when comparing different models.
- Report the validation scheme used. Minimize the use of parallel data by preferring an unsupervised validation criterion and/or using only one language pair for development. Always keep the test set blind.
- Pay attention to the conditions in which you evaluate your model. Consider the impact of typology, linguistic distance, and the domain similarity, size and noise of the training data. Be aware of known issues with common benchmarks, and favor evaluation on a diverse set of tasks.
- Keep a holistic view of UCL, including cross-lingual word embeddings, deep multilingual pretraining and unsupervised machine translation. To the extent possible, favor a common evaluation framework for these different families.

# 8 Conclusions

In this position paper, we review the status quo of unsupervised cross-lingual learning—a relatively recent field. UCL is typically motivated by the lack of cross-lingual signal for many of the world's languages, but available resources indicate that a scenario with no parallel data and sufficient monolingual data is not realistic. Instead, we advocate for the importance of UCL for scientific reasons.

We also discuss the different monolingual and cross-lingual training signals that have been used in the past, and advocate for carefully reporting them to enable a meaningful comparison across different approaches. In addition, we describe methodological issues related to the unsupervised setting and propose measures to ameliorate them.
Finally, we discuss the connections between cross-lingual word embeddings, deep multilingual pretraining, and unsupervised machine translation, calling for an evaluation on an equal footing.

We hope that this position paper will serve to strengthen research in UCL, providing a more rigorous look at its motivation, definition, and methodology. In light of the unprecedented growth of our field in recent times, we believe that it is essential to establish a rigorous foundation connecting past and present research, and an evaluation protocol that carefully controls for the use of parallel data and assesses models in diverse, challenging settings.

# Acknowledgments

This research was partially funded by a Facebook Fellowship, the Basque Government excellence research group (IT1343-19), the Spanish MINECO (UnsupMT TIN2017-91692-EXP MCIU/AEI/FEDER, UE) and Project BigKnowledge (Ayudas Fundación BBVA a equipos de investigación científica 2018).

# References

Željko Agić and Ivan Vulić. 2019. JW300: A wide-coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204–3210, Florence, Italy. Association for Computational Linguistics.

David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1881–1890, Brussels, Belgium. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789-798, Melbourne, Australia. Association for Computational Linguistics.
+Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium. Association for Computational Linguistics.
+Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019a. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5002-5007, Florence, Italy. Association for Computational Linguistics.
+Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019b. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 194-203, Florence, Italy. Association for Computational Linguistics.
+Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020a. Translation artifacts in cross-lingual transfer learning. arXiv preprint arXiv:2004.04721.
+Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018c. Unsupervised neural machine translation. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018).
+Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020b. On the Cross-lingual Transferability of Monolingual Representations. In Proceedings of ACL 2020.
+Jeremy Barnes, Roman Klinger, and Sabine Schulte im Walde. 2018. Bilingual sentiment embeddings: Joint projection of sentiment across languages. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2483-2493, Melbourne, Australia. Association for Computational Linguistics.
+Emily M. Bender. 2011. On Achieving and Evaluating Language-Independence in NLP.
Linguistic Issues in Language Technology, 6(3):1-26.
+Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
+Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram counts and language models from the common crawl. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3579-3584, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Xilun Chen and Claire Cardie. 2018. Unsupervised multilingual word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 261-270, Brussels, Belgium. Association for Computational Linguistics.
+Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A teacher-student framework for zero-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1925-1935, Vancouver, Canada. Association for Computational Linguistics.
+Yong Cheng, Yang Liu, Qian Yang, Maosong Sun, and Wei Xu. 2016. Neural machine translation with pivot languages. arXiv preprint arXiv:1611.04928.
+Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics.
+Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.
+Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining.
In Advances in Neural Information Processing Systems 32, pages 7057-7067. +Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018a. Word translation without parallel data. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018). +Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics. +Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2019. Cross-lingual machine reading comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1586-1595, Hong Kong, China. Association for Computational Linguistics. +Paula Czarnowska, Sebastian Ruder, Edouard Grave, Ryan Cotterell, and Ann Copestake. 2019. Don't forget the long tail! A comprehensive analysis of morphological generalization in bilingual lexicon induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 973-982, Hong Kong, China. Association for Computational Linguistics. +Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems 28, pages 3079-3087. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. + +Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 266-275, Jeju Island, Korea. Association for Computational Linguistics. +Qing Dou and Kevin Knight. 2013. Dependency-based decipherment for resource-limited machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1668-1676, Seattle, Washington, USA. Association for Computational Linguistics. +Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. 2017. Multilingual training of crosslingual word embeddings. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 894-904, Valencia, Spain. Association for Computational Linguistics. +Umberto Eco and James Fentress. 1995. The search for the perfect language. Blackwell Oxford. +Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462-471, Gothenburg, Sweden. Association for Computational Linguistics. +Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 710–721, Florence, Italy. Association for Computational Linguistics. 
+Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. BilBOWA: Fast bilingual distributed representations without word alignments. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 748-756, Lille, France. PMLR.
+Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with Wasserstein Procrustes. In Proceedings of Machine Learning Research, volume 89, pages 1880-1890. PMLR.
+Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computational Linguistics.
+Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6097-6110, Hong Kong, China. Association for Computational Linguistics.
+Michael Hahn and Marco Baroni. 2019. Tabula nearly rasa: Probing the linguistic knowledge of character-level neural language models trained on unsegmented text. Transactions of the Association for Computational Linguistics, 7:467-484.
+Geert Heyman, Bregt Verreet, Ivan Vulic, and Marie-Francine Moens. 2019. Learning unsupervised multilingual word embeddings with incremental multilingual hubs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1890-1902, Minneapolis, Minnesota. Association for Computational Linguistics.
+Geert Heyman, Ivan Vulic, and Marie-Francine Moens. 2017. Bilingual lexicon induction by learning to combine word-level and character-level representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1085-1095, Valencia, Spain. Association for Computational Linguistics.
+Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 469-478, Brussels, Belgium. Association for Computational Linguistics.
+Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.
+Tsung-Yuan Hsu, Chi-Liang Liu, and Hung-yi Lee. 2019. Zero-shot reading comprehension by cross-lingual transfer learning with multi-lingual language representation model. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5933-5940, Hong Kong, China. Association for Computational Linguistics.
+Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization.
arXiv preprint arXiv:2003.11080.
+David Kamholz, Jonathan Pool, and Susan Colowick. 2014. PanLex: Building a resource for pan-lingual lexical translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3145-3150, Reykjavik, Iceland. European Language Resources Association (ELRA).
+Yova Kementchedjhieva, Mareike Hartmann, and Anders Søgaard. 2019. Lost in evaluation: Misleading benchmarks for bilingual dictionary induction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3327-3332, Hong Kong, China. Association for Computational Linguistics.
+Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In Proceedings of the 6th International Conference on Learning Representations (ICLR 2018).
+Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049, Brussels, Belgium. Association for Computational Linguistics.
+Patrick Lewis, Barlas Oğuz, Rudy Rinott, Sebastian Riedel, and Holger Schwenk. 2019. MLQA: Evaluating Cross-lingual Extractive Question Answering. arXiv preprint arXiv:1910.07475.
+Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Bruce Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. arXiv preprint arXiv:2004.01401.
+Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210. + +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159, Denver, Colorado. Association for Computational Linguistics. +Kelly Marchisio, Kevin Duh, and Philipp Koehn. 2020. When does unsupervised machine translation work? arXiv preprint arXiv:2004.05516. +Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703. +Benjamin Marie, Haipeng Sun, Rui Wang, Kehai Chen, Atsushi Fujita, Masao Utiyama, and Eiichiro Sumita. 2019. NICT's unsupervised neural and statistical machine translation systems for the WMT19 news translation task. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 294-301, Florence, Italy. Association for Computational Linguistics. +Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel Bible corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3158-3163, Reykjavik, Iceland. European Language Resources Association (ELRA). +Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems 30, pages 6294-6305. +Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. 
Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics. + +Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001, Florence, Italy. Association for Computational Linguistics. +Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151-164, Florence, Italy. Association for Computational Linguistics. +Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 12-21, Portland, Oregon, USA. Association for Computational Linguistics. +Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with SMT as posterior regularization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 241-248. +Parker Riley and Daniel Gildea. 2018. Orthographic features for bilingual lexicon induction. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 390-394, Melbourne, Australia. Association for Computational Linguistics.
+Sebastian Ruder, Ryan Cotterell, Yova Kementchedjhieva, and Anders Søgaard. 2018. A discriminative latent-variable model for bilingual lexicon induction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 458-468, Brussels, Belgium. Association for Computational Linguistics.
+Sebastian Ruder, Ivan Vulic, and Anders Søgaard. 2019. A Survey of Cross-lingual Word Embedding Models. Journal of Artificial Intelligence Research, 65:569-631.
+Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1599-1613, Minneapolis, Minnesota. Association for Computational Linguistics.
+Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzmán. 2019a. WikiMatrix: Mining 135M Parallel Sentences. arXiv preprint arXiv:1907.05791.
+Holger Schwenk and Xian Li. 2018. A corpus for multilingual document classification in eight languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
+Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave, and Armand Joulin. 2019b. CCMatrix: Mining Billions of High-Quality Parallel Sentences on the Web. arXiv preprint arXiv:1911.04944.
+Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, and Karthik Raman. 2019. Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation.
arXiv preprint arXiv:1909.00437.
+Samuel L. Smith, David H. P. Turban, Steven Hamblin, and Nils Y. Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In Proceedings of the 5th International Conference on Learning Representations (ICLR 2017).
+Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2008. Unsupervised multilingual learning for POS tagging. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1041-1050, Honolulu, Hawaii. Association for Computational Linguistics.
+Anders Søgaard, Sebastian Ruder, and Ivan Vulic. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778-788, Melbourne, Australia. Association for Computational Linguistics.
+Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language generation. In Proceedings of the 36th International Conference on Machine Learning, volume 97, pages 5926-5936, Long Beach, California, USA. PMLR.
+Dario Stojanovski, Viktor Hangya, Matthias Huck, and Alexander Fraser. 2019. The LMU Munich unsupervised machine translation system for WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 393-399, Florence, Italy. Association for Computational Linguistics.
+Clara Vania and Adam Lopez. 2017. From characters to words to in between: Do we capture morphology? In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2016-2027, Vancouver, Canada. Association for Computational Linguistics.
+Ivan Vulic, Goran Glavaš, Roi Reichart, and Anna Korhonen. 2019. Do we really need fully unsupervised cross-lingual embeddings?
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4407-4418, Hong Kong, China. Association for Computational Linguistics.
+Ivan Vulic and Anna Korhonen. 2016. On the role of seed lexicons in learning bilingual word embeddings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 247-257, Berlin, Germany. Association for Computational Linguistics.
+Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353-355, Brussels, Belgium. Association for Computational Linguistics.
+Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Emerging cross-lingual structure in pretrained language models. arXiv preprint arXiv:1911.01464.
+Shijie Wu and Mark Dredze. 2019. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 833-844, Hong Kong, China. Association for Computational Linguistics.
+Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2465-2474, Brussels, Belgium. Association for Computational Linguistics.
+Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017a. Adversarial training for unsupervised bilingual lexicon induction.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1959-1970, Vancouver, Canada. Association for Computational Linguistics. +Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017b. Earth mover's distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934-1945, Copenhagen, Denmark. Association for Computational Linguistics. \ No newline at end of file diff --git a/acallformorerigorinunsupervisedcrosslinguallearning/images.zip b/acallformorerigorinunsupervisedcrosslinguallearning/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..114cc4021d40e261e2bb643fdce1f0a126b004a1 --- /dev/null +++ b/acallformorerigorinunsupervisedcrosslinguallearning/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ed7c68e36e5c454dbbc4b73fb3eafe473605a7e08c8a3089ed21850d7f02880 +size 69399 diff --git a/acallformorerigorinunsupervisedcrosslinguallearning/layout.json b/acallformorerigorinunsupervisedcrosslinguallearning/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..2ed91987b0a110dc0ca615dee518250decb776a7 --- /dev/null +++ b/acallformorerigorinunsupervisedcrosslinguallearning/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39424e4e38815e028c10a30b9f4476d2413e3284850cb1fe0ad4369a933ab367 +size 349568 diff --git a/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_content_list.json b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..be7caddb9b372762573cbea3b724897c8f1e4f78 --- /dev/null +++ 
b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a36d30f1a7f8968b41d4b5d4d1fd962ae185e9da2a42b516b7eef754e55706ed +size 42104 diff --git a/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_model.json b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d39cc7cf55a25b66cd4625b79bfb148c5daffaa2 --- /dev/null +++ b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1bbd632df01a6949e305cd81ccf73d25ba185826bb82a24d25ae79e3a974d13a +size 49744 diff --git a/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_origin.pdf b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..bbd0b6ee0fe85253e0854b99b513b57c1b0fdfd8 --- /dev/null +++ b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/863d725e-89f9-475d-a34e-476d1deb53e1_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e42b5595dd9d6c8f6361161a76b6d3d676786282f9bb96369c949ff5092318d8 +size 427627 diff --git a/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/full.md b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d6d0acf614e1deda25f7f75000765beee6c8510e --- /dev/null +++ b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/full.md @@ -0,0 +1,181 @@ +# A Complete Shift-Reduce Chinese Discourse Parser with Robust 
Dynamic Oracle
+
+Shyh-Shiun Hung, $^{1}$ Hen-Hsen Huang, $^{2,3}$ and Hsin-Hsi Chen $^{1,3}$
+
+$^{1}$ Department of Computer Science and Information Engineering, National Taiwan University, Taiwan
+
+$^{2}$ Department of Computer Science, National Chengchi University, Taiwan
+
+$^{3}$ MOST Joint Research Center for AI Technology and All Vista Healthcare, Taiwan
+
+shhung@nlg.csie.ntu.edu.tw, hhhuang@nccu.edu.tw, hhchen@ntu.edu.tw
+
+# Abstract
+
+This work proposes a standalone, complete Chinese discourse parser for practical applications. We approach Chinese discourse parsing from a variety of aspects and improve the shift-reduce parser not only by integrating a pre-trained text encoder, but also by employing novel training strategies. We revise the dynamic-oracle procedure for training the shift-reduce parser, and apply unsupervised data augmentation to enhance rhetorical relation recognition. Experimental results show that our Chinese discourse parser achieves state-of-the-art performance.
+
+# 1 Introduction
+
+Discourse parsing is one of the fundamental tasks in natural language processing (NLP). Typical types of discourse parsing include hierarchical discourse parsing and shallow discourse parsing. The former aims at finding the relationships among a series of neighboring elementary discourse units (EDUs) and building up a hierarchical tree structure (Mann and Thompson, 1988). Instead of establishing a tree structure, the latter finds the relations between text units across a paragraph or a document. Based on the Rhetorical Structure Theory Discourse Treebank (RST-DT) (Carlson et al., 2001a), hierarchical discourse parsing in English has been well studied.
+
+This paper focuses on hierarchical discourse parsing in Chinese. Previous work on hierarchical Chinese discourse parsing is mostly based on the RST-style Chinese Discourse Treebank (Li et al., 2014).
To distinguish it from the other Chinese Discourse Treebank (Zhou and Xue, 2012), which is annotated in the PDTB style for shallow discourse parsing, we use the term CDTB-14 to refer to the RST-style treebank and the term CDTB-12 to refer to the PDTB-style one. Kong and Zhou (2017) propose a pipeline framework and generate the discourse parse tree in a bottom-up way. Lin et al. (2018) propose an end-to-end system based on a recursive neural network (RvNN) that constructs the parse tree with a CKY-like algorithm. Sun and Kong (2018) use a transition-based method with the stack-augmented parser-interpreter neural network (SPINN) (Bowman et al., 2016) as the backbone model, helping their model make better predictions using previous information.
+
+In this work, we construct a complete Chinese discourse parser, which supports all four sub-tasks in hierarchical discourse parsing: EDU segmentation, tree structure construction, nuclearity labeling, and rhetorical relation recognition. Given a paragraph, our parser extracts all EDUs, builds the tree structure, identifies the nuclei, and recognizes the rhetorical relations of all internal nodes. We propose a revised dynamic-oracle procedure (Yu et al., 2018) for training the shift-reduce parser. Because of the limited number of training instances in CDTB-14, we also address the data sparsity issue by introducing unsupervised data augmentation (Xie et al., 2019). Experimental results show that our methodology is effective, and our model outperforms all previous models. The contributions of this work are three-fold, as follows.
+
+1. We explore the task of Chinese discourse parsing with a variety of strategies, and our parser achieves state-of-the-art performance. Our robust dynamic-oracle procedure can be applied to other shift-reduce parsers.
+
+2. Our complete Chinese discourse parser handles a raw paragraph/document directly and performs all the subtasks in hierarchical discourse parsing.
No pre-processing procedures such as Chinese word segmentation, POS tagging, and syntactic parsing are required.
+
+3. We release the pre-trained, standalone, ready-to-use parser as a resource for the research community. $^{1}$
+
+# 2 Methodology
+
+Figure 1 gives an overview of our parser. Five stages are performed to transform a raw document into a parse tree: EDU segmentation, tree structure construction, rhetorical relation and nuclearity classification, binary tree conversion, and beam search.
+
+# 2.1 Elementary Discourse Unit Segmentation
+
+Typically, EDU segmentation is treated as a sequence labeling task (Wang et al., 2018; Peters et al., 2018). We propose a model that labels each Chinese character in a raw document. The Begin-Inside scheme is employed: the character that begins a new EDU is labeled $B$, and all other characters are labeled $I$. Our model is based on the pre-trained text encoder BERT (Devlin et al., 2018). More specifically, we adopt the BERT-base Chinese version, since this is the only pre-trained BERT dedicated to Chinese so far. As BERT for Chinese is character-based, we feed each Chinese character into a BERT layer to obtain its contextual embedding. Then, we fine-tune the representation with an additional dense layer and measure the probability of each label for each character with a softmax layer. The model is further trained with a conditional random field (CRF) (Lafferty et al., 2001) to find the globally optimal label sequence.
+
+# 2.2 Tree Construction
+
+We propose a shift-reduce parser for building the structure of the discourse parse tree. A shift-reduce parser maintains a stack and a queue to represent a state during parsing, and an action classifier is trained to predict the action (i.e., shift or reduce) that makes a transition from the given state to the next state. In the initial state, the stack is empty, and the queue contains all the EDUs in a raw document.
In the final state, the queue is empty, and the stack contains only one element, i.e., the discourse parse tree of the whole paragraph.

![](images/efd6b5a8b2d22752e154585b4eabe00f8bdfe56934e5c434a28ada2333a3c2a4.jpg)
Figure 1: Overview of our Chinese discourse parser.

To decide whether to shift or to reduce, we propose an action classifier that considers the top two elements of the stack, $s_1$ and $s_2$ (i.e., the two most recent discourse units), and the first element of the queue, $q$ (i.e., the next EDU). The textual form of each of these three discourse units is fed into the BERT encoder to obtain the representations $Enc(s_1)$, $Enc(s_2)$, and $Enc(q)$. Next, we concatenate the max pooling of $Enc(s_1)$, $Enc(s_2)$, and $Enc(q)$ and feed the resulting vector into a dense layer to predict the next action.

Since shift-reduce is a greedy algorithm, it can hardly recover from an error state. A shift-reduce parser is typically trained in teacher mode, where only correct states are given, so the resulting parser may perform poorly when it reaches unfamiliar states. For this reason, we propose a revised dynamic-oracle procedure (Yu et al., 2018) for training our discourse parser. One drawback of the original dynamic oracle is that some golden training examples may be neglected. Because CDTB-14 requires relatively few action steps to build a tree, the probability of falling into a wrong state is much smaller than in RST-DT. In our revision, we want to guarantee that all correct states have been trained on. As shown in Algorithm 1, each training document is processed twice. In the first pass we follow the golden actions; in the second pass we perform the action predicted by the model with probability $\alpha$. We refer to these as teacher mode and student mode, respectively. Note that we follow the suggestion of Yu et al. (2018) and set $\alpha$ to 0.7.
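Algorithm 1 below formalizes this two-pass teacher/student procedure. As an illustration only, here is a minimal, self-contained Python sketch of the idea; the shift/reduce executor is a toy, a fixed gold-action sequence stands in for the oracle, and `classify`/`update` are hypothetical callbacks, not the paper's implementation:

```python
import random

ALPHA = 0.7  # probability of following the model's own prediction in the student pass


def run_training_passes(edus, gold_actions, classify, update):
    """Process one document twice: a teacher pass that always performs the
    gold action, then a student pass that performs the model's prediction
    with probability ALPHA, so error states are also explored."""
    trace = []
    for student_mode in (False, True):
        stack, queue, step = [], list(edus), 0
        while queue or len(stack) > 1:
            # A full dynamic oracle would recompute the gold action from the
            # current (possibly wrong) state; here we index a fixed sequence.
            gold = gold_actions[min(step, len(gold_actions) - 1)]
            predicted = classify(stack, queue)
            update(predicted, gold)  # compute the loss and update parameters
            action = predicted if student_mode and random.random() <= ALPHA else gold
            # Forced states: only one legal action exists.
            if len(stack) < 2:
                action = "shift"
            elif not queue:
                action = "reduce"
            if action == "shift":
                stack.append(queue.pop(0))            # move the next EDU onto the stack
            else:
                right, left = stack.pop(), stack.pop()
                stack.append((left, right))           # merge the top two subtrees
            trace.append(action)
            step += 1
    return trace, stack[0]
```

The teacher pass is deterministic, while the student pass may wander into error states yet still receives a loss signal for every step, which is the point of the revised procedure.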
Algorithm 1 Training Procedure for Our Shift-Reduce Discourse Parser.
1: $S, Q \gets$ empty stack, elementary discourse units
2: while $Q$ is not empty $\vee$ $S$ has more than 1 unit do
3:   predicted, golden $\leftarrow$ ACTIONCLASSIFIER(S.top1(), S.top2(), Q.front()), GOLDENACTION()
4:   COMPUTELOSSANDUPDATE(predicted, golden)
5:   PERFORMACTION(golden)
6: $S, Q \gets$ empty stack, elementary discourse units
7: while $Q$ is not empty $\vee$ $S$ has more than 1 unit do
8:   predicted, golden $\leftarrow$ ACTIONCLASSIFIER(S.top1(), S.top2(), Q.front()), GOLDENACTION()
9:   COMPUTELOSSANDUPDATE(predicted, golden)
10:  if rand() > $\alpha$ then PERFORMACTION(golden) else PERFORMACTION(predicted)

# 2.3 Rhetorical Relation Recognition

If two discourse units are merged during the tree construction stage, a new internal node is generated, and the relationship between the two discourse units must be determined. Predicting the relation between two textual arguments is a typical classification task in NLP. We propose a BERT-based classifier, which predicts the relation of two arguments separated by the symbol [SEP], with additional dense layers as the output.

In CDTB-14, the "coordination" relation accounts for $59.6\%$ of the training data, while the minor relations suffer from data sparseness. To address this issue, we introduce unsupervised data augmentation (UDA) (Xie et al., 2019) to enhance the performance. We adopt the discourse pairs in CDTB-12 as the material for UDA. Note that other unlabeled text pairs could also be used; we chose those from CDTB-12 simply because the format is convenient for us to use.

The original loss is shown in Eq. 1. Given a span of text $x$, our main model $P(\cdot)$ predicts the rhetorical relation $y_{c}$. Eq. 2 shows the additional consistency loss that enforces the smoothness of our main model, where $\hat{x}$ stands for the augmented unlabeled sentence pair.
$L$ and $U$ stand for the labeled and unlabeled data, respectively. As shown in Eq. 3, we train both objectives at the same time, with a weight $\lambda$ adjusting the effect of UDA.

$$
H = - \frac{1}{N} \sum_{x \in L} \sum_{c = 1}^{M} y_{c} \log P(y_{c} \mid x) \tag{1}
$$

$$
D_{KL} = \frac{1}{N} \sum_{x \in U} \sum_{c = 1}^{M} P(y_{c} \mid x) \log \frac{P(y_{c} \mid x)}{P(y_{c} \mid \hat{x})} \tag{2}
$$

$$
\mathcal{L}(\theta) = H + \lambda D_{KL} \tag{3}
$$

The UDA procedure first generates the augmented unlabeled sentence pairs. Various approaches to paraphrasing can be employed. In this work, we utilize the back-translation strategy (Sennrich et al., 2016): we translate a Chinese sentence pair to English and then translate it back to Chinese, which is equivalent to adding noise to the original inputs. As the original and the back-translated sentence pairs express the same meaning, our model is expected to predict the same label for both. By minimizing the consistency loss, the model behaves consistently regardless of whether an original instance or its paraphrase is given, making it more general and robust. Moreover, when the model is able to predict the same label for both sentence pairs, it has effectively learned their label as well.

# 2.4 Nuclearity Labeling

Nuclearity labeling aims to determine the nucleus of a discourse unit pair. The nuclearity of two units is correlated with their rhetorical relation, so we jointly train the rhetorical relation and nuclearity classifiers, where the loss for back-propagation is the sum of the losses of both classifiers. Similar to the imbalance issue in rhetorical relation recognition, the "Equal" class accounts for $51\%$ of the training data, so we also employ UDA for performance enhancement.

# 2.5 Binary Tree Conversion

For simplicity, our shift-reduce parser constructs a binary tree.
However, the parse trees annotated in CDTB-14 are not always binary: in the training and test sets, $8.9\%$ and $10\%$ of the internal nodes have more than two children, respectively. Most previous works do not handle binary tree conversion, and some even convert the golden trees into binary trees before calculating their scores, resulting in a less accurate evaluation. In the training stage, we convert the multiway trees to their corresponding left-heavy binary trees (Morey et al., 2018). In the testing stage, we convert the binary tree constructed by our parser back to the corresponding multiway tree. For example, a three-way node $A \rightarrow XYZ$ is converted to $A \rightarrow A'Z$ and $A' \rightarrow XY$. The conversion is deterministic and bidirectional, so it is free from ambiguity.

# 2.6 Beam Search

To decode a transition sequence during the testing stage, the standard method is to choose the action with the maximum probability at the current time step as the input for the next time step. However, this greedy approach might fail to find the sequence with the maximum overall probability merely because one action probability in that sequence is small. Beam search (Wiseman and Rush, 2016) is a heuristic search algorithm that explores a graph by maintaining the top $k$ results at every time step, which keeps a number of potential candidates from being discarded prematurely. Note that the greedy approach is equivalent to beam search with a beam width of $k = 1$.

When performing shift-reduce parsing, two kinds of states have only one action to choose from: (1) fewer than two elements in the stack, and (2) no element in the queue. Under these two conditions, the probability of the selected action is 1, making the model overly biased toward sequences with many such forced steps. For this reason, we apply an alternative way to compute the sequence probability during beam search.
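As an illustration of this alternative scoring, here is a minimal Python sketch (a toy shift/reduce decoder, not the paper's implementation) in which forced actions contribute nothing to a sequence's score and each sequence is ranked by the average probability of its free choices; `action_probs` is a hypothetical callback returning `{"shift": p1, "reduce": p2}`:

```python
import heapq


def beam_search_parse(edus, action_probs, k=2):
    """Shift-reduce decoding with beam search. A sequence is scored by the
    AVERAGE probability of its freely chosen actions only; forced actions
    (fewer than two stack elements, or an empty queue) are not scored."""
    def score(state):
        chosen = state[2]
        return sum(chosen) / len(chosen) if chosen else 0.0

    beams = [([], list(edus), [])]  # (stack, queue, probs of free choices)
    while any(q or len(s) > 1 for s, q, _ in beams):
        candidates = []
        for stack, queue, chosen in beams:
            if not queue and len(stack) <= 1:
                candidates.append((stack, queue, chosen))  # already finished
                continue
            if len(stack) < 2:
                moves = [("shift", None)]                  # forced, excluded from score
            elif not queue:
                moves = [("reduce", None)]                 # forced, excluded from score
            else:
                probs = action_probs(stack, queue)         # e.g. {"shift": .9, "reduce": .1}
                moves = [(a, probs[a]) for a in ("shift", "reduce")]
            for action, p in moves:
                s, q = list(stack), list(queue)
                if action == "shift":
                    s.append(q.pop(0))                     # move next EDU onto the stack
                else:
                    right, left = s.pop(), s.pop()
                    s.append((left, right))                # merge the top two subtrees
                candidates.append((s, q, chosen if p is None else chosen + [p]))
        beams = heapq.nlargest(k, candidates, key=score)
    return max(beams, key=score)[0][0]  # the tree on top of the best final stack
```

Because every complete parse of $n$ EDUs uses the same number of actions ($2n-1$), averaging only over the non-forced ones removes the bias toward sequences dominated by probability-1 steps.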
Our modified beam search still maintains the top $k$ sequences, but the score of a sequence is calculated as the average probability of the selected actions that had more than one choice.

# 3 Experiments

# 3.1 Experimental Settings

Following the setting of Kong and Zhou (2017), we divide CDTB-14 into a training set of 450 articles (2,125 paragraphs) and a test set of 50 articles (217 paragraphs). We keep $10\%$ of the training data for validation. PARSEVAL (Carlson et al., 2001b) is used for evaluation.

# 3.2 Experimental Results

Table 1 shows the performances of our parser in micro-averaged F-score, compared with previous work Zhou (Kong and Zhou, 2017) and Lin (Lin
et al., 2018). We also implement BERT-CKY, a CKY parser that uses BERT for representation, as an additional baseline model. The evaluation is based on multiway trees.

| Model | EDU | +T | +R | +N | All |
| --- | --- | --- | --- | --- | --- |
| Zhou | Given | 52.3 | 33.8 | 23.9 | 23.2 |
| Lin | Given | 64.6 | 42.7 | 38.5 | 35.0 |
| BERT-CKY | Given | 76.5 | 50.8 | 48.5 | 43.1 |
| Ours | Given | 82.8 | 57.6 | 56.0 | 50.5 |
| Zhou | 93.8 | 46.4 | 28.8 | 23.1 | 22.0 |
| Lin | 87.2 | 49.5 | 32.6 | 28.8 | 26.8 |
| BERT-CKY | 92.4 | 68.9 | 43.3 | 42.0 | 37.0 |
| Normal | 97.4 | 78.8 | 54.6 | 52.0 | 47.1 |
| Dynamic | 97.4 | 78.9 | 54.5 | 51.8 | 47.1 |
| Ours | 97.4 | 80.0 | 55.9 | 53.6 | 48.9 |

Table 1: Performances of EDU segmentation (EDU), tree construction (T), rhetorical relation recognition (R), nuclearity labeling (N), and all subtasks, reported in micro-averaged F-score.

Both the performances with and without golden EDUs are measured. The results show that BERT is highly competitive and is able to capture the latent relations between discourse units, since Lin and BERT-CKY use essentially the same approach while the latter uses BERT as the text encoder. Our parser outperforms all the baseline models and achieves a significant improvement when golden EDUs are not given. Note that BERT-CKY is based on Lin et al. (2018), whose EDU segmentation module differs from ours, hence the different EDU scores.

We examine the performance of three training techniques for shift-reduce parsing. As mentioned in Section 2.2, Normal stands for the action classifier trained with gold-standard actions only, Dynamic stands for the dynamic oracle introduced by Yu et al. (2018), and Ours stands for our revised dynamic-oracle procedure, where the model is trained with both gold-standard and dynamic-oracle actions.

Compared to Normal, the original dynamic oracle yields no improvement, while our revised dynamic oracle outperforms the other two strategies. Our strategy never ignores the golden action in a correct state and still has the chance to explore error states.

In order to compare with SUN (Sun and Kong, 2018), we convert the gold-standard trees into binary trees and measure the performances on binary trees in macro-averaged F-score. The results are shown in Table 2. Sun and Kong (2018) do not address all subtasks in Chinese discourse parsing, and our model outperforms SUN in every subtask.

| Model | EDU | +T | +R | +N | All |
| --- | --- | --- | --- | --- | --- |
| Sun | 93.0 | 78.2 | – | 53.2 | – |
| Ours | 97.4 | 83.3 | 58.1 | 55.7 | 52.0 |

Table 2: Performances measured on binary trees, reported in macro-averaged F-score.
| Relation | UDA | P | R | F |
| --- | --- | --- | --- | --- |
| Coordination | −UDA | 84.3 | 77.8 | 80.9 |
|  | +UDA | 90.7 | 76.9 | 83.2 |
| Causality | −UDA | 38.7 | 43.2 | 40.8 |
|  | +UDA | 38.7 | 55.4 | 45.6 |
| Transition | −UDA | 80.0 | 80.0 | 80.0 |
|  | +UDA | 80.0 | 88.9 | 84.2 |
| Explanation | −UDA | 46.0 | 57.6 | 51.1 |
|  | +UDA | 45.2 | 70.9 | 55.2 |

Table 3: Performances of the four rhetorical relations $(\%)$ with and without UDA. Occurrences of these relations are $59.6\%$, $17.1\%$, $1.6\%$, and $21.7\%$, respectively.

# 3.3 Discussions

To examine the effectiveness of UDA, Table 3 shows the performances of rhetorical relation recognition with and without UDA. The results show that applying UDA successfully enhances the recall of the three minor classes with a small trade-off in the recall of the dominant class, Coordination. In addition, the F-scores of all four relations increase. In other words, applying UDA alleviates the data imbalance issue and improves the overall performance. Applying UDA to nuclearity classification yields a similar improvement.

Theoretically, beam search with a larger beam width should help find a better solution.
| Beam Size | EDU | +T | +R | +N | All |
| --- | --- | --- | --- | --- | --- |
| k=1 | Given | 82.8 | 57.6 | 56.0 | 50.5 |
| k=2 | Given | 81.8 | 56.8 | 55.1 | 49.7 |
| k=5 | Given | 81.7 | 56.7 | 54.9 | 49.6 |

Table 4: Performances of beam search with different beam widths.

As shown in Table 4, however, our parser performs worse when a larger beam width is used, which means that the sequence with the higher overall score does not guarantee a better decoding result. Our experiments only report beam widths up to five, because the scores of worse sequences are already higher than those of the correct sequence in some cases; larger beam widths therefore seem unnecessary.

The reason may be that beam search is not well suited to the shift-reduce paradigm. For example, a sequence might fall into a seriously bad state, yet the remaining actions can be determined easily, so the sequence still obtains a high overall probability. This also implies that, unlike beam search applied to sequence-to-sequence models, we cannot judge whether a transition sequence is good or bad solely by its overall score. In addition, for longer textual units such as paragraphs, human readers and writers may not follow the assumption of overall optimization; instead, humans read and write sequentially, similar to the greedy approach.

We also evaluate our approach on English discourse parsing with the widely used RST-DT dataset. Our model achieves F-scores of $85.0\%$, $58.8\%$, $69.9\%$, and $56.7\%$ in tree construction, rhetorical relation recognition, nuclearity labeling, and all subtasks, respectively. The overall performance is similar to that of the state-of-the-art model (Yu et al., 2018).

# 4 Conclusion

This work proposes a standalone, complete Chinese discourse parser. We integrate BERT, UDA, and a revised training procedure to construct a robust shift-reduce parser. Our model is compared with a number of previous models, and experimental results show that it achieves state-of-the-art performance and is highly competitive under different setups. In future work, we will explore cross-lingual transfer learning to support more languages.
# Acknowledgements

This research was partially supported by the Ministry of Science and Technology, Taiwan, under grants MOST-106-2923-E-002-012-MY3, MOST-109-2634-F-002-040-, MOST-109-2634-F-002-034-, MOST-108-2218-E-009-051-, and by Academia Sinica, Taiwan, under grant AS-TP-107-M05.

# References

Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1466-1477, Berlin, Germany. Association for Computational Linguistics.
Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001b. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the Second SIGdial Workshop on Discourse and Dialogue (SIGDIAL'01), pages 1-10.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Fang Kong and Guodong Zhou. 2017. A CDT-styled end-to-end Chinese discourse parser. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 16(4):26:1-26:17.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Yancui Li, Wenhe Feng, Jing Sun, Fang Kong, and Guodong Zhou. 2014. Building Chinese discourse corpus with connective-driven dependency tree structure.
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2105-2114, Doha, Qatar. Association for Computational Linguistics.
Chuan-An Lin, Hen-Hsen Huang, Zi-Yuan Chen, and Hsin-Hsi Chen. 2018. A unified RvNN framework for end-to-end Chinese discourse parsing. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 73-77, Santa Fe, New Mexico. Association for Computational Linguistics.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-Interdisciplinary Journal for the Study of Discourse, 8(3):243-281.
Mathieu Morey, Philippe Muller, and Nicholas Asher. 2018. A dependency perspective on RST discourse parsing and evaluation. Comput. Linguist., 44(2):197-235.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. CoRR, abs/1802.05365.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
Cheng Sun and Fang Kong. 2018. A transition-based framework for Chinese discourse structure parsing. Journal of Chinese Information Processing, 32(12):48.
Yizhong Wang, Sujian Li, and Jingfeng Yang. 2018. Toward fast and accurate neural discourse segmentation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 962-967, Brussels, Belgium. Association for Computational Linguistics.
Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search optimization. CoRR, abs/1606.02960.
Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. 2019.
Unsupervised data augmentation for consistency training. arXiv preprint arXiv:1904.12848. +Nan Yu, Meishan Zhang, and Guohong Fu. 2018. Transition-based neural RST parsing with implicit syntax features. In Proceedings of the 27th International Conference on Computational Linguistics, pages 559-570, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Yuping Zhou and Nianwen Xue. 2012. PDTB-style discourse annotation of Chinese text. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 69-77, Jeju Island, Korea. Association for Computational Linguistics. \ No newline at end of file diff --git a/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/images.zip b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7f0191988b4f29779798178e1600c1951e779d96 --- /dev/null +++ b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ecd56d962d1bca65963e90aa37ce7cb513ecc23c354b33f5a36cc4165cb4eac4 +size 188938 diff --git a/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/layout.json b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5b2b878e12945a3ca98fe5b845b99e679e59aa03 --- /dev/null +++ b/acompleteshiftreducechinesediscourseparserwithrobustdynamicoracle/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04fe14d0944d134d79eb9e1c25932db41a569ad25b2205c240585c6108090cd0 +size 201247 diff --git a/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_content_list.json 
b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..d3de1e1330ede00baf3fc711bb2b4cd4fbfde47a --- /dev/null +++ b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f45c616890a5c521336bab6bb036d8315bdf9e55980cba08b72ebf2abbbc264 +size 88396 diff --git a/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_model.json b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_model.json new file mode 100644 index 0000000000000000000000000000000000000000..966467cbcd36a9522176290d4e410cd54bd5b8cd --- /dev/null +++ b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b93ac1fa00c7b6ba955d69d12a97e4a42f8fe942db5db63246da48272e6e7ed6 +size 107254 diff --git a/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_origin.pdf b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5cfb66958384fdc5ce6e9d65fefca54533292d23 --- /dev/null +++ b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/bab6b959-5d76-4e05-a32d-ea7723b78eaa_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e03953c117d77f92249e459587c76200fc9efde458e3815856edcc3cd3abd4f +size 513390 diff --git 
a/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/full.md b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..15b3bf5d7c7a2de4e5a8c3fbda3a5a071e9d61a5 --- /dev/null +++ b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/full.md @@ -0,0 +1,280 @@ +# A Comprehensive Analysis of Preprocessing for Word Representation Learning in Affective Tasks

Nastaran Babanejad, Ameeta Agrawal, Aijun An, Manos Papagelis

Department of Electrical Engineering and Computer Science,

York University, Toronto, Canada

{nasba, ameeta, aan, papaggel}@eecs.yorku.ca

# Abstract

Affective tasks such as sentiment analysis, emotion classification and sarcasm detection have been popular in recent years due to the abundance of user-generated data, accurate computational linguistic models, and a broad range of relevant applications in various domains. At the same time, many studies have highlighted the importance of text preprocessing as an integral step in any natural language processing prediction model and downstream task. While preprocessing in affective systems is well studied, preprocessing in word vector based models applied to affective systems is not. To address this limitation, we conduct a comprehensive analysis of the role of preprocessing techniques in affective analysis based on word vector models. Our analysis is the first of its kind and provides useful insights into the importance of each preprocessing technique when applied at the training phase (commonly ignored in pretrained word vector models) and/or at the downstream task phase.

# 1 Introduction

Affective tasks such as sentiment analysis, emotion classification and sarcasm detection have enjoyed great popularity in recent years.
This success can be largely attributed to the fundamental and straightforward nature of the methods employed, the availability of vast amounts of user-generated natural language data, and the wide range of useful applications, spanning from hate speech detection to monitoring the sentiment of financial markets and news recommendation (Djuric et al., 2015; Babanejad et al., 2019). Most early models of affect analysis employed pretrained word embeddings that have been obtained under the assumption of the distributional hypothesis (Mikolov et al., 2013; Devlin et al., 2018). The distributional hypothesis suggests that two words occurring frequently in similar linguistic contexts tend to be more semantically similar, and therefore should be represented closer to one another in the embedding space. However, while such embeddings are useful for several natural language processing (NLP) downstream tasks, they are known to be less suitable for affective tasks in particular (Tang et al., 2014; Agrawal et al., 2018). Although some authors claim that there is a need for post-processing word embeddings for affective tasks, others find that off-the-shelf vectors are very powerful for affective lexicon learning (Lison and Kutuzov, 2017). For example, word2vec (Mikolov et al., 2013) estimates the pair of words 'happy' and 'sad' to be more similar than the pair of words 'happy' and 'joy', which is counterintuitive and might affect the accuracy of models that depend on it.

To address the limitations of traditional word embeddings, several techniques have been proposed, including task-specific fine-tuning (Devlin et al., 2018), retrofitting (Faruqui et al., 2014), representing emotion with vectors using a multi-task training framework (Xu et al., 2018) and generating affective word embeddings (Felbo et al., 2017), to name a few.
Other attempts to overcome the limitations of word vectors include the optimization of hyperparameters (Levy et al., 2015), as well as fine-tuned preprocessing strategies tailored to different NLP tasks. While these strategies have demonstrated evidence of improving accuracy in tasks such as word similarity, word analogy, and others (Lison and Kutuzov, 2017), their effect on affective tasks has received little attention and remains underexplored. Our work is motivated by the observation that preprocessing factors such as stemming, stopwords removal and many others make up an integral part of nearly every improved text classification model, and of affective systems in particular (Danisman and Alpkocak, 2008; Patil and Patil, 2013). However, little work has been done towards understanding the role of preprocessing techniques applied to word embeddings in different stages of affective systems. To address this limitation, the overarching goal of this research is to perform an extensive and systematic assessment of the effect of a range of linguistic preprocessing factors on three affective tasks: sentiment analysis, emotion classification and sarcasm detection. Towards that end, we systematically analyze the effectiveness of applying preprocessing to large training corpora before learning word embeddings, an approach that has largely been overlooked by the community. We investigate the following research questions: (i) what is the effect of integrating preprocessing techniques earlier into word embedding models, instead of later on in downstream classification models? (ii) which preprocessing techniques yield the most benefit in affective tasks?
(iii) does preprocessing of word embeddings provide any improvement over state-of-the-art pretrained word embeddings? and if yes, how much? + +Figure 1 illustrates the difference between a) preprocessing word embeddings pipeline (Pre) vs. b) preprocessing classification dataset pipeline (Post), where preprocessing techniques in (a) are applied to the training corpus of the model and in (b) only to the classification dataset. In brief, the main contributions of our work are as follows: + +- We conduct a comprehensive analysis of the role of preprocessing techniques in affective tasks (including sentiment analysis, emotion classification and sarcasm detection), employing different models, over nine datasets; +- We perform a comparative analysis of the accuracy performance of word vector models when preprocessing is applied at the training phase (training data) and/or at the downstream task phase (classification dataset). Interestingly, we obtain best results when preprocessing is applied only to the training corpus or when it is applied to both the training corpus + +and the classification dataset of interest. + +- We evaluate the performance of our best preprocessed word vector model against state-of-the-art pretrained word embedding models; +- We make source code and data publicly available to encourage reproducibility of results1. + +The rest of the paper is organized as follows: Section 2 presents an overview of the related work. Section 3 elaborates on the preprocessing techniques employed in the evaluation of models. Section 4 describes the experimental evaluation framework. In Section 5 a comprehensive analysis of the results is provided. Section 6 concludes the paper with key insights of the research. + +# 2 Related Work + +In this section, we present an overview of related work on preprocessing classification datasets and preprocessing word embeddings, and how our work aims to bridge the gap between those efforts. 
# 2.1 Preprocessing Classification Datasets

Preprocessing is a vital step in text mining, and therefore the evaluation of preprocessing techniques has long been a part of many affective systems. Saif et al. (2014) indicated that, despite its popular use in Twitter sentiment analysis, the use of precompiled stopwords has a negative impact on classification performance. Angiani et al. (2016) analyzed various preprocessing methods such as stopwords removal, stemming, negation, emoticons, and so on, and found stemming to be the most effective for the task of sentiment analysis. Similarly, Symeonidis et al. (2018) found that lemmatization increases accuracy. Jianqiang and Xiaolin (2017) observed that removing stopwords, numbers, and URLs can reduce noise but does not affect performance, whereas replacing negation and expanding acronyms can improve classification accuracy.

Preprocessing techniques such as punctuation and negation (Rose et al., 2018) or POS-tagging and negation (Seal et al., 2020) make up a common component of many emotion classification models (Kim et al., 2018; Patil and Patil, 2013). One of the earliest works (Danisman and Alpkocak, 2008) preserved emotion words and negative verbs during stopwords removal, replaced punctuation with descriptive new words, replaced negative short forms with long forms, and concatenated negative words with emotion words to create new words (e.g., not happy $\rightarrow$ NOThappy). Although stemming may remove the emotional meaning from some words, it has been shown to improve classification accuracy (Danisman and Alpkocak, 2008; Agrawal and An, 2012). Negations have also been found beneficial, whereas considering intensifiers and diminishers did not lead to any improvements (Strohm, 2017).

Pecar et al. (2018) also highlight the importance of preprocessing when using user-generated content, with emoticon processing being the most effective.
Along the same lines, while Gratian and Haid (2018) found POS tags to be useful, Boiy et al. (2007) ignored POS-tagging because of its effect of reducing classification accuracy.

The aforementioned works describe preprocessing techniques as applied directly to evaluation datasets in affective systems. In contrast, we examine the effectiveness of directly incorporating these known effective preprocessing techniques further "upstream", into the training corpus of word embeddings, which are widely used across a number of downstream tasks.

# 2.2 Preprocessing Word Embeddings

Through a series of extensive experiments, particularly those related to context window size and dimensionality, Levy et al. (2015) indicate that seemingly minor variations can have a large impact on the success of word representation methods in similarity and analogy tasks, stressing the need for more analysis of often ignored preprocessing settings. Lison and Kutuzov (2017) also present a systematic analysis of context windows based on a set of four hyperparameters, including window position and stopwords removal, where the right window was found to be better than the left for the English similarity task, and stopwords removal substantially benefited the analogy task but not similarity.

A general space of hyperparameters and preprocessing factors, such as context window size (Hershcovich et al., 2019; Melamud et al., 2016), dimensionality (Melamud et al., 2016) and syntactic dependencies (Levy and Goldberg, 2014; Vulić et al., 2020), and their effect on NLP tasks including word similarity (Hershcovich et al., 2019), tagging, parsing, relatedness, and entailment (Hashimoto et al., 2017) as well as biomedical tasks (Chiu et al., 2016), has been studied extensively in the literature. The main conclusion of these studies, however, is that these factors are heavily task-specific.
Therefore, in this work we explore preprocessing factors for generating word embeddings specifically tailored to affective tasks, which have received little attention.

A recent study investigated the role of tokenizing, lemmatizing, lowercasing and multiword grouping (Camacho-Collados and Pilehvar, 2018) as applied to sentiment analysis and found simple tokenization to be generally adequate. In the task of emotion classification, Mulki et al. (2018) examined the role of four preprocessing techniques as applied to a tf-idf-based vector space model trained on a small corpus of tweets, and found stemming, lemmatization and emoji tagging to be the most effective factors.

Distinct from prior works, we examine a much larger suite of preprocessing factors grounded in insights derived from numerous affective systems, trained over two different corpora, using three different word embedding models. We evaluate the effect of the preprocessed word embeddings on three distinct affective tasks: sentiment analysis, emotion classification and sarcasm detection.

# 3 Preprocessing in Affective Systems

This section describes the preprocessing factors applied to the training corpus that is then used to generate word representations, as well as the order in which these factors need to be applied to the corpus.

# 3.1 Preprocessing Factors

Basic: A group of common text preprocessing steps applied at the very beginning, such as removing HTML tags, removing numbers, and lowercasing. This step also removes all common punctuation from text, such as "@%*=(/+", using the NLTK RegexpTokenizer$^{2}$.

Spellcheck (spell): A case can be made for either correcting misspellings and typos or leaving them as is, assuming they represent natural language text and its associated complexities. In this step, we identify words that may have been misspelled and correct them$^{3}$.
As unambiguous spell corrections are not very common, and in most cases there are multiple options for correction, we built our own custom dictionary to suggest replacements by parsing the ukWaC corpus$^{4}$ to retrieve a word-frequency list. A misspelled word that has multiple candidate replacements is replaced with the suggested word that has the maximum frequency in the corpus.

Negation (neg): Negation is a mechanism that transforms a positive argument into its inverse rejection (Benamara et al., 2012). Specifically in the task of affective analysis, negation plays a critical role, as negation words can affect the word or sentence polarity, causing the polarity to invert in many cases. Our negation procedure is as follows:

(i) Compilation of an antonym dictionary: The first stage involves compiling an antonym dictionary using the WordNet corpus (Miller, 1995). For every synset, there are three possibilities: finding no antonym, one antonym, or multiple antonyms. The first two cases are trivial (unambiguous replacements). In the case of the third option (ambiguous replacement), which represents the most common case, amongst the many choices we consider the antonym with the maximum frequency in the ukWaC corpus, as described in the previous section, and finally the antonym of a word is picked at random from one of its senses in our antonym dictionary.

(ii) Negation handler: Next, we identify the negation words in the tokenized text$^{5}$. If a negation word is found, the token following it (i.e., the negated word) is extracted and its antonym looked up in the antonym dictionary. If an antonym is found, the negation word and the negated word are replaced with it.

For example, consider the sentence "I am not happy today" in its tokenized form ['I', 'am', 'not', 'happy', 'today']. First, we identify any negation words (i.e., 'not') and their corresponding negated words (i.e., 'happy').
Then, we look up the antonym of 'happy' in the antonym dictionary (i.e., 'sad') and replace the phrase 'not happy' with the word 'sad', resulting in the new sentence "I am sad today".

Parts-of-Speech (pos): Four parts-of-speech classes, namely nouns, verbs, adjectives and adverbs, have been shown to be more informative with regard to affect than the other classes. Thus, using the NLTK POS-tagger, for each sentence in the corpus we retain only the words belonging to one of these four classes, i.e., $\mathrm{NN^{*}}$, $\mathrm{JJ^{*}}$, $\mathrm{VB^{*}}$, and $\mathrm{RB^{*}}$.

Stopwords (stop): Stopwords are generally the most common words in a language and are typically filtered out before classification tasks. We therefore remove all stopwords using the NLTK library.

Stemming (stem): Stemming, which reduces a word to its root form, is an essential preprocessing technique in NLP tasks. We use the NLTK Snowball stemmer for stemming our training corpus.

# 3.2 Order of Preprocessing Factors

While some preprocessing techniques can be applied independently of each other (e.g., removing stopwords and removing punctuation), others need more careful consideration of the sequence in which they are applied in order to obtain a stable result. For instance, POS-tagging should be applied before stemming in order for the tagger to work well, and negation handling should be performed prior to removing stopwords. To this end, we consider the following ordering when combining all the aforementioned preprocessing factors: spellchecking, negation handling, POS classes, removing stopwords, and stemming.

# 4 Experimental Evaluation Framework

# 4.1 Training Corpora

Table 1 summarizes the details of our two training corpora with regard to their vocabulary and corpus sizes after applying the various preprocessing settings.
For some preprocessing factors, such as POS filtering (pos) and stopwords removal (stop), the corpus size reduces dramatically, in some cases by more than $50\%$, without any significant loss in vocabulary (as indicated by the $\%$ ratio of preprocessed to Basic), a nontrivial implication with regard to training time.

News: This corpus consists of 142,546 articles from 15 American publications, spanning from 2013 to early 2018$^{6}$.

Wikipedia: A much larger corpus than News, consisting of 23,046,187 articles from Wikipedia$^{7}$.
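The ordering constraint of Section 3.2, in particular that negation handling must run before stopword removal (otherwise 'not' is filtered away before it can be resolved), can be made concrete with a toy sketch; the stopword list and antonym dictionary below are tiny illustrative stand-ins, not the actual NLTK/WordNet/ukWaC-derived resources:

```python
# Toy sketch of the Section 3.2 ordering: negation handling runs
# before stopword removal. The resources below are illustrative
# stand-ins, not the actual NLTK/WordNet/ukWaC-derived ones.

ANTONYMS = {"happy": "sad", "good": "bad"}
STOPWORDS = {"i", "am", "not", "today", "the"}
NEGATIONS = {"not", "never", "no"}

def handle_negation(tokens):
    """Replace a negation word plus its negated word with an antonym."""
    out, i = [], 0
    while i < len(tokens):
        nxt = tokens[i + 1] if i + 1 < len(tokens) else None
        if tokens[i] in NEGATIONS and nxt in ANTONYMS:
            out.append(ANTONYMS[nxt])   # e.g. 'not happy' -> 'sad'
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def remove_stopwords(tokens):
    return [t for t in tokens if t not in STOPWORDS]

def preprocess(text):
    tokens = text.lower().split()
    tokens = handle_negation(tokens)    # must precede stopword removal
    return remove_stopwords(tokens)

print(preprocess("I am not happy today"))  # ['sad']
```

Reversing the two steps would drop 'not' as a stopword and leave 'happy' untouched, flipping the polarity of the training example.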
| Corpus | Processing | Vocab size | % | Corpus size | % |
|---|---|---|---|---|---|
| News | Basic | 155K | 100 | 123.2M | 100 |
| | spell | 149K | 96 | 123.2M | 100 |
| | stem | 137K | 88 | 123.2M | 100 |
| | punc | 147K | 95 | 111.0M | 90 |
| | neg | 152K | 98 | 90.7M | 73 |
| | stop | 150K | 97 | 75.6M | 61 |
| | pos | 154K | 99 | 70.7M | 57 |
| | All - punc | 151K | 97 | 93.7M | 76 |
| | All - pos | 140K | 90 | 90.5M | 73 |
| | All - stop | 150K | 97 | 75.3M | 61 |
| | All | 110K | 71 | 55.2M | 49 |
| | All - stem | 110K | 71 | 58.1M | 47 |
| | All - spell | 110K | 71 | 56.4M | 46 |
| | All - neg | 110K | 71 | 54.3M | 44 |
| Wikipedia | Basic | 5.1M | 100 | 8.1B | 100 |
| | All - punc | 4.9M | 96 | 7.2B | 89 |
| | All - pos | 4.8M | 94 | 7.0B | 86 |
| | All - stop | 4.9M | 96 | 6.8B | 84 |
| | All - stem | 4.3M | 84 | 6.4B | 79 |
| | All - spell | 4.6M | 90 | 6.1B | 75 |
| | All | 4.6M | 90 | 5.6B | 69 |
| | All - neg | 4.6M | 90 | 5.0B | 62 |

Table 1: Details of training corpora
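Vocabulary and corpus sizes of the kind reported in Table 1 reduce to two counts per preprocessing setting; a minimal sketch over a toy tokenized corpus (the numbers here are illustrative, not taken from the actual corpora):

```python
def corpus_stats(corpus):
    """Return (vocabulary size, corpus size in tokens) of a tokenized corpus."""
    vocab, n_tokens = set(), 0
    for doc in corpus:
        vocab.update(doc)
        n_tokens += len(doc)
    return len(vocab), n_tokens

basic   = [["the", "cat", "sat"], ["the", "dog", "ran"]]
stopped = [["cat", "sat"], ["dog", "ran"]]  # after stopword removal

v0, t0 = corpus_stats(basic)
v1, t1 = corpus_stats(stopped)
# Ratio of preprocessed to Basic, as in the % columns of Table 1:
print(f"vocab {100 * v1 // v0}%  corpus {round(100 * t1 / t0)}%")  # vocab 80%  corpus 67%
```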
| Dataset | Genre | Task | Total |
|---|---|---|---|
| IMDB | reviews | sentiment | 50,000 |
| SemEval | tweets | sentiment | 14,157 |
| Airline | tweets | sentiment | 11,541 |
| ISEAR | narratives | emotions | 5,477 |
| Alm | fairy tales | emotions | 1,206 |
| SSEC | tweets | emotions | 1,017 |
| Onion | headlines | sarcasm | 28,619 |
| IAC | response | sarcasm | 3,260 |
| Reddit | comments | sarcasm | 1,010,826 |
Table 2: Details of evaluation datasets

# 4.2 Word Embedding Models

We obtain our preprocessed word representations through three models: (i) CBOW (Continuous Bag-of-Words) and (ii) Skip-gram: While CBOW takes the context of each word as the input and tries to predict the word corresponding to the context, Skip-gram reverses the use of target and context words: the target word is fed to the input, and the output layer of the neural network is replicated multiple times to accommodate the chosen number of context words (Mikolov et al., 2013). We train both models on both training corpora using a minimum count of 5 for News and 100 for Wikipedia, with window sizes of 5 and 10, respectively, setting the dimensionality to 300.

(iii) BERT (Bidirectional Encoder Representations from Transformers): BERT is an unsupervised method of pretraining contextualized language representations (Devlin et al., 2018). We train the model using the BERT-large uncased architecture (24-layer, 1024-hidden, 16-heads, 340M parameters) with the same parameter settings as the original paper.

We train each of the three models (CBOW, Skip-gram and BERT) 8 times, using 16 TPUs (64 TPU chips), Tensorflow 1.15 and 1TB memory on Google Cloud, and two 32-GPU clusters of V100/RTX 2080 Ti with 1TB memory using the Microsoft CNTK parallelization algorithm$^{8}$ on an Amazon server. For a large model such as BERT, each training run takes up to 4-5 days.

# 4.3 Evaluation Datasets

We conduct our evaluation on three tasks, namely sentiment analysis, emotion classification and sarcasm detection. Table 2 presents the details of our evaluation datasets, and some illustrative examples of text are shown in Table 3.
Sentiment Analysis: This popular task involves classifying text as positive or negative, and we use the following three datasets for evaluation: (i) IMDB: This dataset$^{9}$ includes 50,000 movie reviews for sentiment analysis, consisting of 25,000 negative and 25,000 positive reviews (Maas et al., 2011). (ii) SemEval 2016: This sentiment analysis in Twitter dataset$^{10}$ consists of 14,157 tweets, of which 10,076 are positive and 4,081 negative (Nakov et al., 2016). (iii) Airlines: This sentiment analysis dataset$^{11}$ consists of 11,541 tweets about six U.S. airlines from February 2015, with 9,178 tweets labeled as positive and 2,363 as negative.

Emotion Classification: A multiclass classification task, this involves classifying text into a number of emotion categories such as happy, sad, and so on. The following datasets are used in our evaluation: (i) SSEC: The Stance Sentiment Emotion Corpus (Schuff et al., 2017) is a re-annotation of the SemEval 2016 Twitter stance and sentiment corpus (Mohammad et al., 2017) with emotion labels including anger, joy, sadness, fear and surprise$^{12}$. (ii) ISEAR: This dataset contains narratives of personal experiences evoking emotions (Wallbott and Scherer, 1986). We use a subset of the data consisting of five categories: sadness, anger, disgust, fear, joy. (iii) Alm: This dataset contains sentences
from fairy tales marked with one of five emotion categories: angry-disgusted, fearful, happy, sad and surprised (Cecilia and Ovesdotter, 2008).

| Text | Label | Dataset |
|---|---|---|
| I must admit that this is one of the worst movies I've ever seen. I thought Dennis Hopper had a little more taste than to appear in this kind of yeeecchh... [truncated] | negative | IMDB |
| everything was fine until you lost my bag. | negative | Airline |
| At work, when an elderly man complained unjustifiably about me and distrusted me. | anger | ISEAR |
| The ladies danced and clapped their hands for joy. | happy | Alm |
| if this heat is killing me i don't wanna know what the poor polar bears are going through | sadness | SSEC |
| ford develops new suv that runs purely on gasoline | sarcastic | Onion |
| Been saying that ever since the first time I heard about creationism | not-sarcastic | IAC |
| Remember, it's never a girl's fault, it's always the man's fault. | sarcastic | Reddit |

Table 3: Examples of text instances in the evaluation datasets

Sarcasm Detection: Detecting sarcasm from text, a challenging task due to the sophisticated nature of sarcasm, involves labeling text as sarcastic or not. We use the following three datasets: (i) Onion: This news headlines dataset$^{13}$ collected sarcastic versions of current events from The Onion and non-sarcastic news headlines from HuffPost (Misra and Arora, 2019), resulting in a total of 28,619 records. (ii) IAC: A subset of the Internet Argument Corpus (Oraby et al., 2016), this dataset contains response utterances annotated for sarcasm. We extract 3,260 instances of the general sarcasm type$^{14}$. (iii) Reddit: The Self-Annotated Reddit Corpus (SARC)$^{15}$ is a collection of Reddit posts where sarcasm is labeled by the author, in contrast to other datasets where the data is typically labeled by independent annotators (Khodak et al., 2017).

# 4.4 Classification Setup

For classification, we employ the LSTM model, as it works well with sequential data such as text. For binary classification, such as sentiment analysis and sarcasm detection, the loss function used is the binary cross-entropy with sigmoid activation:

$$
\xi = - \frac {1}{N} \sum_ {i = 1} ^ {N} \left[ y _ {i} \log (p (y _ {i})) + (1 - y _ {i}) \log (1 - p (y _ {i})) \right]
$$

where $y$ is the binary representation of the true label, $p(y)$ is the predicted probability, and $i$ denotes the $i^{\mathrm{th}}$ training sample.
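Written out in plain Python (a sketch of the loss itself, not of the LSTM training loop), the mean binary cross-entropy is:

```python
import math

def binary_cross_entropy(y_true, y_pred):
    """Mean binary cross-entropy over N samples.

    y_true: binary labels (0 or 1); y_pred: predicted probabilities in (0, 1).
    """
    n = len(y_true)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, y_pred)) / n

# Confident, correct predictions give a small loss:
print(round(binary_cross_entropy([1, 0], [0.8, 0.2]), 4))  # 0.2231
```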
For multiclass emotion classification, the loss function used is the categorical cross-entropy over a batch of $N$ instances and $k$ classes, with softmax activation:

$$
\xi = - \frac {1}{N} \sum_ {i = 1} ^ {N} \sum_ {j = 1} ^ {k} y _ {i j} \log (p (y _ {i j}))
$$

where $p(y)$ is the predicted probability distribution, with $p(y_{ij}) \in [0,1]$.

The optimizer is Adam (Kingma and Ba, 2014); all loss functions are computed sample-wise, and we take the mean over all samples (epochs = 5, 10; batch sizes = 64, 128). All sentiment and sarcasm datasets are split into training/testing using $80\% / 20\%$, with $10\%$ of the training data held out for validation. For the smaller and imbalanced emotion datasets, we use stratified 5-fold cross-validation. We use a dropout layer to prevent overfitting by ignoring randomly selected neurons during training, and early stopping when the validation loss stops improving, with patience $= 3$ and min-delta $= 0.0001$. The results are reported in terms of weighted F-score (as some emotion datasets are highly imbalanced), where F-score $= 2\frac{p \cdot r}{p + r}$, with $p$ denoting precision and $r$ recall.

# 5 Discussion and Analysis

We analyze the impact of preprocessing techniques in word representation learning on affect analysis.

# 5.1 Effect of Preprocessing Factors

A primary goal of this work is to identify the most effective preprocessing factors for training word embeddings for affective tasks. Table 4 details the results of our experiments comparing the performance of individual preprocessing factors as well as those of ablation studies (i.e., including all the factors but one).

Observing the performance of the individual factors on the News corpus, we first note that even a single simple preprocessing technique can bring improvements, thereby validating our intuition of incorporating preprocessing into the training corpora of word representations. Second, negation (neg) processing appears to be consistently the most
effective factor across all nine datasets, indicating its importance in affective classification, followed by parts-of-speech (pos) processing, where we retained only words belonging to one of four classes. On the other hand, removing stopwords (stop), spellchecking (spell) and stemming (stem) yield little improvement and mixed results. Interestingly, applying all the preprocessing factors is barely better, or in some cases even worse (Onion, Reddit and SSEC), than applying just negation. Finally, the best performance comes from combining all the preprocessing factors except stemming (All - stem).

| Models | Processing | IMDB | Semeval | Airline | IAC | Onion | Reddit | Alm | ISEAR | SSEC |
|---|---|---|---|---|---|---|---|---|---|---|
| CBOW | Basic | 83.99 | 55.69 | 60.73 | 65.74 | 68.23 | 59.42 | 36.81 | 55.43 | 51.76 |
| | stop | 84.43 | 55.72 | 61.37 | 66.03 | 68.17 | 59.27 | 36.81 | 56.01 | 52.33 |
| | spell | 86.20 | 55.93 | 61.96 | 66.00 | 69.57 | 60.00 | 36.88 | 56.41 | 52.14 |
| | stem | 86.92 | 55.72 | 61.86 | 65.89 | 68.49 | 59.72 | 36.94 | 55.84 | 51.89 |
| | punc | 86.99 | 56.41 | 62.08 | 65.93 | 69.85 | 60.28 | 36.94 | 56.89 | 52.03 |
| | pos | 85.66 | 56.83 | 62.75 | 66.32 | 70.25 | 60.63 | 37.02 | 57.04 | 53.19 |
| | neg | *88.98* | *57.29* | *63.81* | *66.87* | *71.12* | 60.91 | *37.22* | *57.39* | *54.15* |
| | All | 89.96 | 57.82 | 64.58 | 67.23 | 70.90 | 60.84 | 37.43 | 57.72 | 53.71 |
| | All - neg | 84.67 | 55.00 | 61.58 | 66.02 | 69.73 | 59.94 | 36.91 | 55.89 | 51.94 |
| | All - pos | 85.69 | 56.31 | 64.29 | 66.97 | 70.48 | 60.15 | 37.19 | 56.27 | 52.16 |
| | All - punc | 86.41 | 56.88 | 63.01 | 66.75 | 70.01 | 60.00 | 37.01 | 57.19 | 52.43 |
| | All - spell | 88.23 | 56.41 | 63.87 | 67.23 | 70.83 | 60.27 | 37.22 | 57.41 | 53.41 |
| | All - stop | **90.01** | 60.82 | 66.84 | 67.20 | **72.49** | 62.11 | **38.96** | 59.28 | 55.00 |
| | All - stem | 88.12 | 60.82 | 67.12 | **69.25** | 72.13 | 61.73 | 38.00 | 59.00 | **55.42** |
| Skip-gram | Basic | 83.07 | 54.23 | 61.47 | 65.51 | 68.01 | 59.75 | 35.87 | 55.64 | 51.49 |
| | stop | 83.23 | 55.47 | 62.00 | 65.62 | 68.00 | 59.84 | 35.94 | 55.76 | 51.62 |
| | spell | 85.90 | 55.48 | 62.00 | 65.61 | 69.76 | 60.28 | 36.10 | 55.93 | 52.30 |
| | stem | 86.00 | 55.33 | 61.89 | 65.60 | 68.72 | 59.50 | 36.00 | 55.69 | 51.40 |
| | punc | 86.68 | 55.79 | 62.38 | 65.89 | 70.00 | 60.44 | 36.41 | 56.81 | 52.71 |
| | pos | 85.91 | 56.28 | 63.25 | 66.24 | 69.81 | 60.85 | 36.44 | 56.23 | 52.94 |
| | neg | 87.28 | 56.89 | 63.72 | *66.87* | 70.59 | *61.27* | 36.87 | 57.34 | 53.10 |
| | All | 88.36 | 57.04 | 64.91 | 66.94 | 70.73 | 61.12 | 37.10 | 57.92 | 53.58 |
| | All - neg | 83.26 | 54.00 | 61.95 | 66.00 | 69.88 | 60.00 | 36.94 | 55.97 | 51.89 |
| | All - pos | 86.21 | 55.22 | 65.12 | 66.06 | 69.88 | 61.00 | 37.00 | 56.42 | 52.10 |
| | All - punc | 85.57 | 55.99 | 64.29 | 66.29 | 70.00 | 60.98 | 37.01 | 57.02 | 52.53 |
| | All - spell | 86.00 | 56.98 | 65.00 | 66.25 | 70.25 | 60.61 | 37.04 | 57.69 | 52.86 |
| | All - stop | 88.74 | **60.93** | 67.00 | 68.57 | 72.20 | 62.02 | 38.92 | 59.18 | 55.18 |
| | All - stem | 88.42 | 60.67 | **67.39** | 69.08 | 72.00 | **62.36** | 37.44 | **59.48** | 55.23 |

Table 4: F-score results of evaluating the effect of preprocessing factors using CBOW and Skip-gram on the News corpus. The overall best results are in bold. The best result using only any one preprocessing setting is in italics.

Moreover, Table 5 details the performance of the ablation studies on the Wikipedia corpus for all three models, where we note that the best performance for the CBOW model comes from combining all the preprocessing factors except stemming (All - stem), whereas for the Skip-gram and BERT models, the best results are obtained by applying all the preprocessing factors except stopwords removal (All - stop). Considering that the Wikipedia corpus is almost 160 times bigger than the News corpus, it is unsurprising that the word embeddings obtained from the former yield considerably better results, consistently across all nine datasets.

# 5.2 Evaluating Preprocessing Training Corpora for Word Vectors vs. Preprocessing Classification Data

We investigate the difference between applying preprocessing to the training corpora for generating word embeddings (Pre) and applying preprocessing to the classification datasets (Post).
As an example, during Pre, we first apply the preprocessing techniques (e.g., all but stemming) to the training corpus (e.g., Wikipedia), then generate word embeddings, then convert a classification dataset (e.g., IMDB) into its word embedding representation, and finally classify using the LSTM. Conversely, for Post, we first generate word embeddings from a training corpus (e.g., Wikipedia), then apply the preprocessing techniques (e.g., all but stemming) to the classification dataset (e.g., IMDB), which is then converted to its word vector representation and finally classified using the LSTM$^{16}$.

The results of this experiment are presented in Table 6, where we observe that incorporating preprocessing into the training corpora before generating word vectors (Pre) outperforms preprocessing the classification datasets (Post) across all nine datasets of the three affective tasks. Interestingly though, preprocessing both bodies of text (Both) appears to be of little benefit, suggesting the importance of preprocessing the training corpora used for obtaining word embeddings.

| Models | Processing | IMDB | Semeval | Airline | IAC | Onion | Reddit | Alm | ISEAR | SSEC |
|---|---|---|---|---|---|---|---|---|---|---|
| CBOW | Basic | 84.91 | 56.89 | 68.11 | 69.15 | 71.02 | 63.58 | 45.22 | 59.73 | 55.84 |
| | All | 88.41 | 60.25 | 71.39 | 71.57 | 73.61 | 65.27 | 48.81 | 62.48 | 57.42 |
| | All - neg | 83.02 | 56.03 | 69.28 | 69.55 | 70.25 | 64.18 | 46.00 | 60.42 | 55.93 |
| | All - pos | 85.69 | 57.21 | 71.00 | 70.08 | 72.29 | 64.82 | 47.53 | 62.28 | 56.25 |
| | All - punc | 84.00 | 57.36 | 70.46 | 70.01 | 72.02 | 65.00 | 47.68 | 61.84 | 56.64 |
| | All - spell | 86.19 | 58.26 | 70.98 | 70.59 | 72.85 | 65.00 | 47.29 | 61.63 | 57.00 |
| | All - stop | 91.10 | 61.00 | 73.00 | 72.31 | 74.50 | 68.20 | 52.39 | 64.29 | 58.46 |
| | All - stem | 88.76 | 62.19 | 73.25 | 72.36 | 75.69 | 68.53 | 50.28 | 65.33 | 59.28 |
| Skip-gram | Basic | 84.00 | 55.94 | 68.36 | 69.20 | 71.68 | 63.74 | 45.01 | 59.45 | 55.62 |
| | All | 87.00 | 59.99 | 71.29 | 71.25 | 73.82 | 65.67 | 48.51 | 65.02 | 57.13 |
| | All - neg | 84.97 | 56.11 | 69.00 | 70.17 | 70.04 | 64.55 | 46.28 | 60.54 | 55.86 |
| | All - pos | 86.21 | 57.62 | 70.25 | 70.85 | 73.22 | 65.47 | 47.49 | 63.44 | 56.00 |
| | All - punc | 85.00 | 57.20 | 70.00 | 70.77 | 72.00 | 65.00 | 47.10 | 61.72 | 56.49 |
| | All - spell | 85.75 | 58.49 | 70.26 | 70.89 | 72.63 | 65.18 | 47.14 | 61.25 | 56.84 |
| | All - stop | 89.76 | 61.74 | 72.19 | 72.00 | 75.69 | 68.29 | 52.01 | 64.00 | 58.14 |
| | All - stem | 89.66 | 60.28 | 73.66 | 71.98 | 75.24 | **68.72** | 51.39 | 63.44 | 59.01 |
| BERT | Basic | 90.11 | 70.82 | 90.23 | 71.19 | 76.30 | 59.74 | 57.81 | 65.70 | 65.39 |
| | All | 91.86 | 71.76 | 91.73 | 73.66 | 78.72 | 62.60 | 59.74 | 67.80 | 67.49 |
| | All - neg | 90.33 | 70.52 | 91.04 | 72.00 | 77.07 | 61.44 | 58.14 | 66.59 | 66.10 |
| | All - pos | 91.01 | 71.20 | 91.66 | 73.31 | 78.45 | 62.04 | 59.01 | 66.25 | 68.13 |
| | All - punc | 91.59 | 71.50 | 91.60 | 73.18 | 78.54 | 62.27 | 59.60 | 67.25 | 67.27 |
| | All - spell | 91.78 | 71.13 | 91.34 | 73.02 | 78.40 | 62.00 | 59.44 | 67.21 | 67.30 |
| | All - stop | **94.18** | **73.81** | **94.85** | **75.80** | **79.10** | 65.39 | **60.73** | **69.33** | **69.81** |
| | All - stem | 92.19 | 71.94 | 92.03 | 74.49 | 77.93 | 63.74 | 60.16 | 68.00 | 67.05 |

Table 5: F-score results of evaluating the effect of preprocessing factors using different models on the Wikipedia corpus. The overall best results are shown in bold.

| Models | Processing | IMDB | Semeval | Airline | IAC | Onion | Reddit | Alm | ISEAR | SSEC |
|---|---|---|---|---|---|---|---|---|---|---|
| CBOW | Post | 87.49 | 59.33 | 71.28 | 69.87 | 74.20 | 67.13 | 47.19 | 62.00 | 56.27 |
| | Pre | 88.76 | 62.19 | 73.25 | 72.36 | 75.69 | 68.53 | 50.28 | 65.33 | 59.28 |
| | Both | 88.10 | 62.41 | 73.00 | 71.86 | 75.00 | 70.10 | 50.39 | 64.52 | 58.20 |
| Skip-gram | Post | 88.14 | 60.41 | 71.85 | 70.22 | 75.07 | 67.00 | 50.44 | 62.08 | 56.00 |
| | Pre | 89.76 | 61.74 | 72.19 | 72.00 | 75.69 | 68.29 | 52.01 | 64.00 | 58.14 |
| | Both | 89.33 | 61.25 | 73.58 | 71.62 | 75.48 | 68.74 | 51.68 | 65.29 | 58.03 |
| BERT | Post | 94.58 | 70.25 | 92.35 | 74.69 | 77.10 | 63.38 | 58.40 | 68.20 | 67.17 |
| | Pre | 94.18 | 73.81 | 94.85 | 75.80 | 79.10 | 65.39 | 60.73 | 69.33 | 69.81 |
| | Both | 94.63 | 72.41 | 93.00 | 75.19 | 78.69 | 65.17 | 60.33 | 69.06 | 68.43 |

Table 6: F-score results of evaluating the effect of preprocessing the word embedding training corpus (Pre) vs. preprocessing the evaluation datasets (Post)

# 5.3 Evaluating Proposed Model against State-of-the-art Baselines

While not a primary focus of this paper, in this final experiment we compare the performance of our preprocessed word embeddings against those of five state-of-the-art pretrained word embeddings$^{17}$.

(i) GloVe: Global vectors for word representations (Pennington et al., 2014) were trained on aggregated global word co-occurrences. We use the vectors trained on 6 billion words (GloVe6B)$^{18}$, uncased, from Wikipedia and Gigaword. (ii) SSWE: Sentiment-Specific Word Embeddings (unified model)$^{19}$ were trained using a corpus of 10 million tweets to encode sentiment information into the continuous representation of words (Tang et al., 2014). (iii) FastText: These pretrained word vectors$^{20}$, based on sub-word character n-grams, were trained on Wikipedia using fastText (Bojanowski et al., 2017), an extension of the word2vec model.
| Models | IMDB | Semeval | Airline | IAC | Onion | Reddit | Alm | ISEAR | SSEC |
|---|---|---|---|---|---|---|---|---|---|
| GloVe | 85.64 | *70.29* | 70.21 | 70.19 | 71.39 | 63.57 | 56.21 | 65.30 | 58.40 |
| SSWE | 80.45 | 69.27 | *78.29* | 64.85 | 52.74 | 50.73 | 51.00 | 54.71 | 52.18 |
| FastText | 75.26 | 68.55 | 70.69 | 55.74 | 58.29 | 59.37 | 52.28 | 25.40 | 53.20 |
| DeepMoji | 69.79 | 62.10 | 71.03 | 65.67 | 70.90 | 53.08 | 46.33 | 58.20 | 58.90 |
| EWE | 71.28 | 60.27 | 67.81 | 67.43 | 70.06 | 55.02 | *58.33* | *66.09* | 58.94 |
| *Our best results:* | | | | | | | | | |
| CBOW | *91.10* | 62.19 | 73.25 | *72.36* | *75.69* | *68.53* | 52.39 | 65.33 | *59.28* |
| Skip-gram | 89.76 | 61.74 | 73.66 | 72.00 | *75.69* | **68.72** | 52.01 | 65.02 | 59.01 |
| BERT | **94.18** | **73.81** | **94.85** | **75.80** | **79.10** | 65.39 | **60.73** | **69.33** | **69.81** |
Table 7: F-score results of comparing against state-of-the-art word embeddings. The best score is highlighted in bold, and the second best result is in italics.

(iv) DeepMoji: These word embeddings$^{21}$ were trained using a BiLSTM on 1.2 billion tweets with emojis (Felbo et al., 2017). (v) EWE: Emotion-enriched Word Embeddings$^{22}$ were learned on a corpus of 200,000 Amazon product reviews using an LSTM model (Agrawal et al., 2018).

From the results in Table 7, we notice that BERT is the best on eight out of nine datasets, the exception being one sarcasm dataset (Reddit), while word2vec CBOW is the second best on four datasets. Overall, our analysis suggests that preprocessing at the word embedding stage (Pre) works well for all three affective tasks.

# 5.4 Analyzing the Three Affective Tasks

Figure 2 summarizes the results obtained for all three tasks in terms of (a) absolute F-scores and (b) relative improvement (best preprocessing over Basic preprocessing). The IMDB dataset achieves the highest F-score overall, most likely because it consists of movie reviews, which are much longer than the text from other genres. As expected, the binary classification tasks of sentiment analysis and sarcasm detection achieve comparable results, while multiclass emotion classification typically has much lower F-scores. The most interesting observation, however, is seen in Fig. 2(b), where the emotion datasets show the highest relative improvement, indicating that multiclass classification tasks may benefit the most from applying preprocessing at the word embedding stage (Pre).

![](images/d23b15a8836fd1efce6ac673fedca117f969b147fc420bdcef0c255b1935f324.jpg)
Figure 2: Absolute F-scores vs. relative improvement

# 6 Conclusions

We systematically examined the role of preprocessing the training corpora used to induce word representations for affect analysis. While all preprocessing techniques improved performance to a certain extent, our analysis suggests that the most noticeable increase is obtained through negation processing (neg). The overall best performance is achieved by applying all the preprocessing techniques except stopwords removal (All - stop). Interestingly, incorporating preprocessing into word representations appears to be far more beneficial than applying it downstream to the classification datasets. Moreover, while all three affective tasks (sentiment analysis, sarcasm detection and emotion classification) benefit from our proposed preprocessing framework, our analysis reveals that the multiclass emotion classification task benefits the most. Exploring the space of subsets of our preprocessing factors might yield more interesting combinations; we leave this for future work.

# Acknowledgements

We thank the anonymous reviewers for their insightful comments. This work is funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Big Data Research, Analytics, and Information Network (BRAIN) Alliance established by the Ontario Research Fund Research Excellence Program (ORF-RE). In particular, we thank Majid Taghdimi from Questrade for providing us with the computing resources and for his help with the parallelization algorithm. We would also like to thank Dr. Heidar Davoudi for the helpful discussions and insights in this project.

# References

Ameeta Agrawal and Aijun An. 2012. Unsupervised emotion detection from text using semantic and syntactic relations. In Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technology - Volume 01, pages 346-353. IEEE Computer Society.

Ameeta Agrawal, Aijun An, and Manos Papagelis. 2018. Learning emotion-enriched word representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 950-961.
Giulio Angiani, Laura Ferrari, Tomaso Fontanini, Paolo Fornacciari, Eleonora Iotti, Federico Magliani, and Stefano Manicardi. 2016. A comparison between preprocessing techniques for sentiment analysis in Twitter. In Proceedings of the 2nd International Workshop on Knowledge Discovery on the WEB, KDWeb.

Nastaran Babanejad, Ameeta Agrawal, Heidar Davoudi, Aijun An, and Manos Papagelis. 2019. Leveraging emotion features in news recommendations. In Proceedings of the 7th International Workshop on News Recommendation and Analytics (INRA'19) in conjunction with RecSys'19, Copenhagen, Denmark, September 16-20, 2019.

Farah Benamara, Baptiste Chardon, Yannick Mathieu, Vladimir Popescu, and Nicholas Asher. 2012. How do negation and modality impact on opinions? In Proceedings of the Workshop on Extra-Propositional Aspects of Meaning in Computational Linguistics, ExProM '12, pages 10-18, Stroudsburg, PA, USA. Association for Computational Linguistics.

Erik Boiy, Pieter Hens, Koen Deschacht, and Marie-Francine Moens. 2007. Automatic sentiment analysis in on-line text. In Proceedings of the 11th International Conference on Electronic Publishing, ELPUB 2007.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

Jose Camacho-Collados and Mohammad Taher Pilehvar. 2018. On the role of text preprocessing in neural network architectures: An evaluation study on text categorization and sentiment analysis. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Association for Computational Linguistics.

Ebba Cecilia and Alm Ovesdotter. 2008. Affect in text and speech. ProQuest, CiteSeer.

Billy Chiu, Gamal Crichton, Anna Korhonen, and Sampo Pyysalo. 2016. How to train good word embeddings for biomedical NLP. In Proceedings of the 15th Workshop on Biomedical Natural Language Processing, pages 166-174.

Taner Danisman and Adil Alpkocak. 2008. Feeler: Emotion classification of text using vector space model. In Proceedings of the AISB 2008 Symposium on Affective Language in Human and Machine, AISB 2008 Convention: Communication, Interaction and Social Intelligence, volume 1, page 53.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Nemanja Djuric, Jing Zhou, Robin Morris, Mihajlo Grbovic, Vladan Radosavljevic, and Narayan Bhamidipati. 2015. Hate speech detection with comment embeddings. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, pages 29-30, New York, NY, USA. Association for Computing Machinery.

Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. arXiv preprint arXiv:1411.4166.

Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Vachagan Gratian and Marina Haid. 2018. BrainT at IEST 2018: Fine-tuning multiclass perceptron for implicit emotion classification. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 243-247.

Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1923-1933, Copenhagen, Denmark. Association for Computational Linguistics.

Daniel Hershcovich, Assaf Toledo, Alon Halfon, and Noam Slonim. 2019. Syntactic interchangeability in word embedding models. arXiv preprint arXiv:1904.00669.

Zhao Jianqiang and Gui Xiaolin. 2017. Comparison research on text pre-processing methods on Twitter sentiment analysis. IEEE Access, 5:2870-2879.

Mikhail Khodak, Nikunj Saunshi, and Kiran Vodrahalli. 2017. A large self-annotated corpus for sarcasm. arXiv preprint arXiv:1704.05579.

Yanghoon Kim, Hwanhee Lee, and Kyomin Jung. 2018. AttnConvnet at SemEval-2018 task 1: Attention-based convolutional neural networks for multi-label emotion classification. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 141-145, New Orleans, Louisiana. Association for Computational Linguistics.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Omer Levy and Yoav Goldberg. 2014. Dependency-based word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 2, pages 302-308.

Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.

Pierre Lison and Andrey Kutuzov. 2017. Redefining context windows for word embedding models: An experimental study. In Proceedings of the 21st Nordic Conference on Computational Linguistics (NoDaLiDa), pages 284-288, Gothenburg, Sweden. Association for Computational Linguistics.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.

Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1030-1040, San Diego, California. Association for Computational Linguistics.

Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.

George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39-41.

Rishabh Misra and Prahal Arora. 2019. Sarcasm detection using hybrid neural network. arXiv preprint arXiv:1908.07414.

Saif M. Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):26.

Hala Mulki, Chedi Bechikh Ali, Hatem Haddad, and Ismail Babaoglu. 2018. Tw-StAR at SemEval-2018 task 1: Preprocessing impact on multi-label emotion classification. In Proceedings of the 12th International Workshop on Semantic Evaluation, pages 167-171.

Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Fabrizio Sebastiani. 2016. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16, San Diego, California. Association for Computational Linguistics.

Shereen Oraby, Vrindavan Harrison, Lena Reed, Ernesto Hernandez, Ellen Riloff, and Marilyn Walker. 2016. Creating and characterizing a diverse corpus of sarcasm in dialogue. In Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 31-41, Los Angeles. Association for Computational Linguistics.

Chaitali G. Patil and Sandip Patil. 2013. Use of Porter stemming algorithm and SVM for emotion extraction from news headlines. International Journal of Electronics, Communication and Soft Computing Science and Engineering.

Samuel Pecar, Michal Farkas, Marian Simko, Peter Lacko, and Maria Bielikova. 2018. NL-FIIT at IEST-2018: Emotion recognition utilizing neural networks and multi-level preprocessing. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 217-223.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.

S. Lovelyn Rose, R. Venkatesan, Girish Pasupathy, and P. Swaradh. 2018. A lexicon-based term weighting scheme for emotion identification of tweets. International Journal of Data Analysis Techniques and Strategies, 10(4):369-380.

Hassan Saif, Miriam Fernandez, Yulan He, and Harith Alani. 2014. On stopwords, filtering and data sparsity for sentiment analysis of Twitter. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 810-817, Reykjavik, Iceland. European Language Resources Association (ELRA).

Hendrik Schuff, Jeremy Barnes, Julian Mohme, Sebastian Padó, and Roman Klinger. 2017. Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 13-23.

Dibyendu Seal, Uttam K. Roy, and Rohini Basak. 2020. Sentence-level emotion detection from text based on semantic rules. In Information and Communication Technology for Sustainable Development, pages 423-430. Springer.

Florian Strohm. 2017. The impact of intensifiers, diminishers and negations on emotion expressions. B.S. thesis, University of Stuttgart.

Symeon Symeonidis, Dimitrios Effrosynidis, and Avi Arampatzis. 2018. A comparative evaluation of preprocessing techniques and their interactions for Twitter sentiment analysis.
Expert Systems with Applications, 110:298-310. +Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment-specific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1555-1565, Baltimore, Maryland. Association for Computational Linguistics. + +Ivan Vulic, Simon Baker, Edoardo Maria Ponti, Ulla Petti, Ira Leviant, Kelly Wing, Olga Majewska, Eden Bar, Matt Malone, Thierry Poibean, Roi Reichart, and Anna Korhonen. 2020. Multi-simlex: A large-scale evaluation of multilingual and cross-lingual lexical semantic similarity. +Harald G Wallbott and Klaus R Scherer. 1986. How universal and specific is emotional experience? evidence from 27 countries on five continents. Information (International Social Science Council), 25(4):763-795. +Peng Xu, Andrea Madotto, Chien-Sheng Wu, Ji Ho Park, and Pascale Fung. 2018. Emo2Vec: Learning generalized emotion representation by multitask training. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 292-298, Brussels, Belgium. Association for Computational Linguistics. 
\ No newline at end of file diff --git a/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/images.zip b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9c4bf9885b94bde34cf6f75f9fe33c6790bf88a5 --- /dev/null +++ b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4ff1fe8bc1cc56acc12b2441740bd52772c2d1606d82a15acfc5526d7075e88 +size 733379 diff --git a/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/layout.json b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a73602d2dd3f7a063597f687019eba488d90877c --- /dev/null +++ b/acomprehensiveanalysisofpreprocessingforwordrepresentationlearninginaffectivetasks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f1cd2804e1f495ce337b8bd03c3ce6130499f69eacc72bf197e2dfaaa97cad17 +size 341165 diff --git a/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_content_list.json b/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8af1018c704d2a9c39e508f10830a8eeb4b3ab1c --- /dev/null +++ b/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38029115df555983564f933a252de2542a8cd98a73bb3246f88c6624540ca054 +size 131512 diff --git a/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_model.json b/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..ce3fa476bdf793d7e12f380a7cc0952f72f73e91 --- /dev/null +++ b/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7cf6682be56063bceb055aa445d7c2279583ff8c89aea098b7edff089893a77 +size 154862 diff --git a/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_origin.pdf b/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7ee8f7c908f130f5771ef6eb6d8714a952f0790b --- /dev/null +++ b/acorpusforlargescalephonetictypology/214b2e1f-db8e-438f-b526-0b31ec7cc50b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a95eb2431178b15e2f4dafa23f76f5479e9d95b0aff595a848140825bf0f264c +size 6115725 diff --git a/acorpusforlargescalephonetictypology/full.md b/acorpusforlargescalephonetictypology/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0f2fe5364f4dd36277025740077020aa7329ee08 --- /dev/null +++ b/acorpusforlargescalephonetictypology/full.md @@ -0,0 +1,373 @@ +# A Corpus for Large-Scale Phonetic Typology + +Elizabeth Salesky $^{1}$ Eleanor Chodroff $^{2}$ Tiago Pimentel $^{3}$ Matthew Wiesner $^{1}$ Ryan Cotterell $^{3,4}$ Alan W Black $^{5}$ Jason Eisner $^{1}$ + +$^{1}$Johns Hopkins University $^{2}$University of York $^{3}$University of Cambridge $^{4}$ETH Zürich $^{5}$Carnegie Mellon University esalesky@jhu.edu eleanor.chodroff@york.ac.uk + +# Abstract + +A major hurdle in data-driven research on typology is having sufficient data in many languages to draw meaningful conclusions. We present VoxClamantis v1.0, the first large-scale corpus for phonetic typology, with aligned segments and estimated phoneme-level labels in 690 readings spanning 635 languages, along with acoustic-phonetic measures of vowels and sibilants.
Access to such data can greatly facilitate investigation of phonetic typology at a large scale and across many languages. However, it is nontrivial and computationally intensive to obtain such alignments for hundreds of languages, many of which have few to no resources presently available. We describe the methodology to create our corpus, discuss caveats with current methods and their impact on the utility of this data, and illustrate possible research directions through a series of case studies on the 48 highest-quality readings. Our corpus and scripts are publicly available for non-commercial use at https://voxclamantisproject.github.io. + +# 1 Introduction + +Understanding the range and limits of cross-linguistic variation is fundamental to the scientific study of language. In speech and particularly phonetic typology, this involves exploring potentially universal tendencies that shape sound systems and govern phonetic structure. Such investigation requires access to large amounts of cross-linguistic data. Previous cross-linguistic phonetic studies have been limited to a small number of languages with available data (Disner, 1983; Cho and Ladefoged, 1999), or have relied on previously reported measures from many studies (Whalen and Levitt, 1995; Becker-Kristal, 2010; Gordon and Roettger, 2017; Chodroff et al., 2019). + +![](images/7b93ef8fc433dd20485a3797575a6b2836404dd61e6e855b376a33537438e8e7.jpg) +Figure 1: The 635 languages of our corpus geo-located with mean Mel Cepstral Distortion (MCD) scores. + +Existing multilingual speech corpora have similar restrictions, with data too limited for many tasks (Engstrand and Cunningham-Andersson, 1988; Ladefoged and Maddieson, 2007) or approximately 20 to 30 recorded languages (Ardila et al., 2020; Harper, 2011; Schultz, 2002). + +The recently developed CMU Wilderness corpus (Black, 2019) constitutes an exception to this rule with over 600 languages.
This makes it the largest and most typologically diverse speech corpus to date. In addition to its coverage, the CMU Wilderness corpus is unique in two additional aspects: cleanly recorded, read speech exists for all languages in the corpus, and the same content (modulo translation) exists across all languages. + +However, this massively multilingual speech corpus is challenging to work with directly. Copyright, computational restrictions, and sheer size limit its accessibility. Due to copyright restrictions, the audio cannot be directly downloaded with the sentence and phoneme alignments. A researcher would need to download the original audio MP3s and text through links to bible.is, then segment these with the speech-to-text sentence alignments distributed in Black (2019). For phonetic research, subsequently identifying examples of specific phonetic segments in the audio is also a near-essential step for extracting relevant acoustic-phonetic measurements. Carrying out this derivative step has allowed us to release a stable-access collection of token-level acoustic-phonetic measures to enable further research. + +Obtaining such measurements requires several processing steps: estimating pronunciations, aligning them to the text, evaluating alignment quality, and finally, extracting phonetic measures. This work is further complicated by the fact that, for a sizable number of these languages, no linguistic resources currently exist (e.g., language-specific pronunciation lexicons). We adapt speech processing methods based on Black (2019) to accomplish these tasks, though not without noise: in §3.4, we identify three significant caveats when attempting to use our extended corpus for large-scale phonetic studies. + +We release a comprehensive set of standoff markup of over 400 million labeled segments of continuous speech.
For each segment, we provide an estimated phoneme-level label from the X-SAMPA alphabet, the preceding and following labels, and the start position and duration in the audio. Vowels are supplemented with formant measurements, and sibilants with standard measures of spectral shape. + +We present a series of targeted case studies illustrating the utility of our corpus for large-scale phonetic typology. These studies are motivated by potentially universal principles posited to govern phonetic variation: phonetic dispersion and phonetic uniformity. Our studies both replicate known results in the phonetics literature and also present novel findings. Importantly, these studies investigate current methodology as well as questions of interest to phonetic typology at a large scale. + +# 2 Original Speech + +The CMU Wilderness corpus (Black, 2019) consists of recorded readings of the New Testament of the Bible in many languages and dialects. Following the New Testament structure, these data are broken into 27 books, each with a variable number of chapters between 1 and 25. Bible chapters contain standardized verses (approximately sentence-level segments); however, the speech is originally split only by chapter. Each chapter + +![](images/599f7775f228a1be3f07c8fffea172c7a37382f49a1fedeead90f50747f42bd1.jpg) +Figure 2: The extraction process for the measurements released in VoxClamantis v1.0. + +has an average of 13 minutes of speech for a total of $\approx 20$ hours of speech and text per language. These recordings are clean, read speech with a sampling rate of $16\mathrm{kHz}$ . In most languages, they are non-dramatic readings with a single speaker; in some, they are dramatic multi-speaker readings with additive music. 
The release from Black (2019) includes several resources for processing the corpus: scripts to download the original source data from bible.is, 'lexicons' created using grapheme-to-phoneme (G2P) conversion, and scripts to apply their generated sentence alignments, which facilitate downstream language processing tasks, including phoneme alignment. + +# 3 The VoxClamantis v1.0 Corpus + +Our VoxClamantis v1.0 corpus is derived from 690 audio readings of the New Testament of the Bible in 635 languages. We mark estimated speech segments with phonemic labels, and provide phonetic measures for the tokens that are vowels or sibilants. The extraction process is diagrammed in Figure 2. In the sections below, we detail our procedures for extracting labeled audio segments and their phonetic measures, in both high- and low-resource languages. We then outline important caveats to keep in mind when using this corpus. + +# 3.1 Extracting Phoneme Alignments + +We use a multi-pronged forced alignment strategy to balance broad language coverage (§3.1.1) with utilization of existing high-quality resources (§3.1.2). We assess the quality of our approaches in §3.1.3. We release the stand-off markup for our final alignments as both text files and Praat TextGrids (Boersma and Weenink, 2019). + +Using scripts and estimated boundaries from Black (2019), we first download and convert the audio MP3s to waveforms, and cut the audio and text into 'sentences' (hereafter called 'utterances' as they are not necessarily sentences). This step creates shorter-length speech samples to facilitate forced alignment; utterance boundaries do not change through our processing. + +To extract labeled segments, we first require pronunciations for each utterance. A pronunciation is predicted from the text alone using some grapheme-to-phoneme (G2P) method.
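Conceptually, the simplest G2P method is a context-free lookup table that expands each grapheme to a fixed phone sequence. A minimal sketch of that idea (the mapping table below is invented for illustration and is not any tool's actual table):

```python
# Toy deterministic grapheme-to-phone expansion: each grapheme maps to a
# fixed phone sequence with no context- or language-sensitivity.
# The table is illustrative only.
G2P_TABLE = {
    "a": ["a"], "b": ["b"], "h": ["h"], "i": ["i"], "s": ["s"], "t": ["t"],
}

def naive_g2p(word):
    phones = []
    for grapheme in word.lower():
        phones.extend(G2P_TABLE.get(grapheme, []))  # unmapped graphemes dropped
    return phones

# The characteristic failure mode is immediate: a digraph such as <sh>,
# often a single phoneme, expands to the two phones s, h.
print(naive_g2p("bash"))  # -> ['b', 'a', 's', 'h']
```

More sophisticated systems replace the fixed table with language-specific rules or a trained model, which is exactly the distinction drawn in the following subsections.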
Each word's predicted pronunciation is a sequence of categorical labels, which are 'phoneme-level' in the sense that they are usually intended to distinguish the words of the language. We then align this predicted sequence of 'phonemes' to the corresponding audio. + +# 3.1.1 All Languages + +Most of our languages have neither existing pronunciation lexicons nor G2P resources. To provide coverage for all languages, we generate pronunciations using the simple 'universal' G2P system Unitran (Qian et al., 2010, as extended by Black, 2019), which deterministically expands each grapheme to a fixed sequence of phones in the Extended Speech Assessment Methods Phonetic Alphabet (X-SAMPA) (Wells, 1995/2000). This naive process is error-prone for languages with opaque orthographies, as we show in §3.1.3 below and discuss further in §3.4 (Caveat B). Even so, it provides a starting point for exploring low-resource languages: after some manual inspection, a linguist may be + +able to correct the labels in a given language by a combination of manual and automatic methods. + +For each reading, to align the pronunciation strings to the audio, we fit a generative acoustic model designed for this purpose: specifically, eHMM (Prahallad et al., 2006) as implemented in Festvox (Anumanchipalli et al., 2011) to run full Baum-Welch from a flat start for 15 to 30 iterations until the mean mel cepstral distortion score (see §3.1.3) converges. Baum-Welch does not change the predicted phoneme labels, but obtains a language-specific, reading-specific, contextual (triphone) acoustic model for each phoneme type in the language. We then use Viterbi alignment to identify an audio segment for each phoneme token. + +# 3.1.2 High-Resource Languages + +A subset of the languages in our corpus are supported by existing pronunciation resources. 
Two such resources are Epitran (Mortensen et al., 2018), a G2P tool based on language-specific rules, available in both IPA and X-SAMPA, and WikiPron (Lee et al., 2020), a collection of crowd-sourced pronunciations scraped from Wiktionary. These are mapped from IPA to X-SAMPA for label consistency across our corpus. Epitran covers 29 of our languages (39 readings), while WikiPron's 'phonemic' annotations provide partial coverage of 13 additional languages (18 readings). We use Epitran for languages with regular orthographies where it provides high-quality support, and WikiPron for other languages covered by WikiPron annotations. While Unitran and Epitran provide a single pronunciation for a word from the orthography, WikiPron may include multiple pronunciations. In such cases, Viterbi alignment (see below) chooses the pronunciation of each token that best fits the audio. + +For most languages covered by WikiPron, most of our corpus words are out-of-vocabulary, as they do not yet have user-submitted pronunciations on Wiktionary. We train G2P models on WikiPron annotations to provide pronunciations for these words. Specifically, we use the WFST-based tool Phonetisaurus (Novak et al., 2016). Model hyperparameters are tuned on 3 WikiPron languages from SIGMORPHON 2020 (Gorman et al., 2020) (see Appendix C for details). In general, for languages that are not easily supported by Epitran-style G2P rules, training a G2P model on sufficiently many + +
| ISO 639-3 | tpi | ron | azj | msa | ceb | tur | tgl | spa | ilo | rus | hau | ind | tgk | jav | kaz |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unitran PER | 18.4 | 21.3 | 26.9 | 30.1 | 30.1 | 31.2 | 34.4 | 34.4 | 35.0 | 37.4 | 37.6 | 38.8 | 39.8 | 49.9 | 46.8 |
| # Tokens | 291k | 169k | 125k | 157k | 190k | 125k | 185k | 168k | 169k | 130k | 201k | 170k | 159k | 177k | 142k |
| Weighted PER | 20.1 | 21.3 | 26.1 | 31.1 | 35.9 | 28.5 | 40.1 | 32.6 | 32.7 | 36.8 | 36.7 | 40.5 | 38.8 | 54.1 | 47.7 |

| ISO 639-3 | swe | kmr | som | tir | pol | hae | vie | tha | lao | ben | tel | hin | mar | tam |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Unitran PER | 46.9 | 54.3 | 54.6 | 57.8 | 67.1 | 67.3 | 73.8 | 80.3 | 89.1 | 90.0 | 90.3 | 95.7 | 97.8 | 100.5 |
| # Tokens | 165k | 176k | 156k | 121k | 141k | 164k | 211k | 26k | 36k | 173k | 124k | 191k | 159k | 139k |
| Weighted PER | 49.5 | 53.9 | 56.0 | 57.4 | 66.8 | 64.8 | 80.6 | 80.4 | 89.4 | 86.2 | 88.3 | 91.3 | 97.8 | 102.1 |
+ +Table 1: Phoneme Error Rate (PER) for Unitran, treating Epitran as ground truth. '# Tokens' gives the number of word tokens in each reading. We report PER calculated over unique word types for calibration with other work, as well as frequency-weighted PER reflecting token occurrences in our corpus. + +high-quality annotations may be more accurate. + +We align the speech with the high-quality labels using a multilingual ASR model (see Wiesner et al., 2019). The model is trained in Kaldi (Povey et al., 2011) on 300 hours of data from the IARPA BABEL corpora (21 languages), a subset of Wall Street Journal (English), the Hub4 Spanish Broadcast news (Spanish), and a subset of the Voxforge corpus (Russian and French). These languages use a shared X-SAMPA phoneme label set which has high coverage of the labels of our corpus. + +Our use of a pretrained multilingual model here contrasts with §3.1.1, where we had to train reading-specific acoustic models to deal with the fact that the same Unitran phoneme label may refer to quite different phonemes in different languages (see §3.4). We did not fine-tune our multilingual model to each language, as the cross-lingual ASR performance in previous work (Wiesner et al., 2019) suggests that this model is sufficient for producing phoneme-level alignments. + +# 3.1.3 Quality Measures + +Automatically generated phoneme-level labels and alignments inherently have some amount of noise, and this is particularly true for low-resource languages. The noise level is difficult to assess without gold-labeled corpora for either modeling or assessment. However, for the high-resource languages, we can evaluate Unitran against Epitran and WikiPron, pretending that the latter are ground truth. For example, Table 1 shows Unitran's phoneme error rates relative to Epitran. Appendix B gives several more detailed analyses with examples of individual phonemes.
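Phoneme error rate here is the standard edit-distance-based metric: total phone-level edits between hypothesis and reference pronunciations, divided by total reference length. A minimal sketch of the type-level and frequency-weighted variants (our own illustration, not the released scoring code):

```python
def levenshtein(ref, hyp):
    # Classic two-row Wagner-Fischer edit distance over phone sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (0 if match)
        prev = cur
    return prev[-1]

def per(refs, hyps):
    # Type-level PER (%): one reference/hypothesis pair per unique word type.
    edits = sum(levenshtein(r, h) for r, h in zip(refs, hyps))
    return 100 * edits / sum(len(r) for r in refs)

def weighted_per(refs, hyps, counts):
    # Frequency-weighted PER (%): each word type weighted by its token count.
    edits = sum(c * levenshtein(r, h) for r, h, c in zip(refs, hyps, counts))
    return 100 * edits / sum(c * len(r) for r, c in zip(refs, counts))
```

Given per-type phone sequences plus corpus token counts, these two functions correspond to the two PER variants reported in Table 1.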
+ +Unitran pronunciations may have acceptable phoneme error rates for languages with transparent orthographies and one-to-one grapheme-to-phoneme mappings. Alas, without these conditions they prove to be highly inaccurate. + +That said, evaluating Unitran labels against Epitran or WikiPron may be unfair to Unitran, since some discrepancies are arguably not errors but mere differences in annotation granularity. For example, the 'phonemic' annotations in WikiPron are sometimes surprisingly fine-grained: WikiPron frequently uses /t̪/ in Cebuano where Unitran only uses /t/, though these refer to the same phoneme. These tokens are scored as incorrect. Moreover, there can be simple systematic errors: Unitran always maps grapheme $\langle \mathrm{a} \rangle$ to label /a/, but in Tagalog, all such tokens should be /ɐ/. Such errors can often be fixed by remapping the Unitran labels, which in these cases would reduce PER from 30.1 to 6.8 (Cebuano) and from 34.4 to 7.8 (Tagalog). Such rules are not always this straightforward and should be created on a language-specific basis; we encourage rules created for languages outside of current Epitran support to be contributed back to the Epitran project. + +For those languages where we train a G2P system on WikiPron, we compute the PER of the G2P system on held-out WikiPron entries treated as ground truth. The results (Appendix C) range from excellent to mediocre. + +We care less about the pronunciations themselves than about the segments that we extract by aligning these pronunciations to the audio. For high-resource languages, we can again compare the segments extracted by Unitran to the higher-quality ones extracted with better pronunciations. For each Unitran token, we evaluate its label and temporal boundaries against the high-quality token that is closest in the audio, as measured by the temporal distance between their midpoints (Appendix B). + +Finally, the segmentation of speech and text into corresponding utterances is not perfect.
We use the utterance alignments generated by Black (2019), in which the text and audio versions of a putative utterance may have only partial overlap. Indeed, Black (2019) sometimes failed to align the Unitran pronunciation to the audio at all, and discarded these utterances. For each remaining utterance, he assessed the match quality using Mel Cepstral Distortion (MCD)—which is commonly used to evaluate synthesized spoken utterances (Kominek et al., 2008)—between the original audio and a resynthesized version of the audio based on the aligned pronunciation. Each segment's audio was resynthesized given the segment's phoneme label and the preceding and following phonemes, in a way that preserves its duration, using CLUSTERGEN (Black, 2006) with the same reading-specific eHMM model that we used for alignment. We distribute Black's per-utterance MCD scores with our corpus, and show the average score for each language in Appendix E. In some readings, the MCD scores are consistently poor. + +# 3.2 Phonetic measures + +Using the phoneme-level alignments described in §3.1, we automatically extract several standard acoustic-phonetic measures of vowels and sibilant fricatives that correlate with aspects of their articulation and abstract representation. + +# 3.2.1 Vowel measures + +Standard phonetic measurements of vowels include the formant frequencies and duration information. Formants are concentrations of acoustic energy at frequencies reflecting resonance points in the vocal tract during vowel production (Ladefoged and Johnson, 2014). The lowest two formants, F1 and F2, are considered diagnostic of vowel category identity and approximate tongue body height (F1) and backness (F2) during vowel production (Figure 3).
F3 correlates with finer-grained aspects of vowel production such as rhoticity (/r/-coloring), lip rounding, and nasality (House and Stevens, 1956; Lindblom and Sundberg, 1971; Ladefoged et al., 1978), and F4 with high front vowel distinctions and speaker voice quality (Eek and Meister, 1994). Vowel duration can also signal vowel quality, and denotes lexical differences in many languages. + +We extracted formant and duration information from each vowel using Praat (Boersma and Weenink, 2019). The first four formants (F1-F4) were measured at each quartile and decile of the vowel. Formant estimation was performed with the Burg algorithm in Praat with pre-emphasis from $50\mathrm{Hz}$ , a time window of $25~\mathrm{ms}$ , a time + +![](images/1cb024500087e858f8b270d80a99c5afb5e20e77803fe80b0acf0ab603af9ba5.jpg) +Figure 3: Vowel Chart + +step of $6.25\mathrm{ms}$ , a maximum of five formants permitted, and a formant ceiling of $5000\mathrm{Hz}$ which is the recommended value for a male vocal tract (Boersma and Weenink, 2019). Note that the speakers in this corpus are predominantly male. + +# 3.2.2 Sibilant measures + +Standard phonetic measurements of sibilant fricatives such as /s/, /z/, /ʃ/, and /ʒ/ include measures of spectral shape, and also segment duration. Measures of spectral shape frequently distinguish sibilant place of articulation: higher concentrations of energy generally reflect more anterior constriction locations (e.g., /s z/ are produced closer to the teeth than /ʃ ʒ/). Segment duration can also signal contrasts in voicing status (Jongman et al., 2000). + +Our release contains the segment duration, spectral peak, the spectral moments of the frequency distribution (center of gravity: COG, variance, skewness, and kurtosis), as well as two measures of the mid-frequency peak determined by sibilant quality. 
These are the mid-frequency peak between 3000 and $7000\mathrm{Hz}$ for alveolar sibilants, and between 2000 and $6000\mathrm{Hz}$ for post-alveolar sibilants (Koenig et al., 2013; Shadle et al., 2016). The spectral information was obtained via multitaper spectral analysis (Rahim and Burr, 2017), with a time-bandwidth parameter $(nw)$ of 4 and 8 tapers $(k)$ over the middle $50\%$ of the fricative (Blacklock, 2004). Measurements were made using the methods described in Forrest et al. (1988) for spectral moments and Koenig et al. (2013) for spectral peak varieties. + +# 3.3 Computation times + +Generating phoneme-level alignments and extracting subsequent phonetic measures takes significant time, computational resources, and domain knowledge. Our release enables the community to use this data directly without these prerequisites. Table 2 shows that the time to extract our resources, + +
| Resource | Per Language | Total Time |
| --- | --- | --- |
| Utterance Alignments | 30m | 14d 13h |
| Phoneme Alignments | 3d 3h 37m | 6y 12d 16h |
| Vowel Measures | 45m | 21d 20h |
| Sibilant Measures | 20m | 9d 17h |
| Total | 3d 5h 0m | 6y 58d 19h |
+ +Table 2: Computation time to generate the full corpus. + +once methods have been developed, was more than 6 CPU years, primarily for training eHMM models. + +# 3.4 General caveats + +We caution that our labeling and alignment of the corpus contain errors. In particular, it is difficult to responsibly draw firm linguistic conclusions from the Unitran-based segments (§3.1.1). In §5 we suggest future work to address these issues. + +A Quality of Utterance Pairs: For some utterances, the speech does not correspond completely to the text, due to incorrect co-segmentation. In our phonetic studies, we threshold using reading-level MCD as a heuristic for overall alignment quality, and further threshold remaining readings using utterance-level MCD. We recommend others do so as well. + +B Phoneme Label Consistency and Accuracy: Phoneme-level labels are predicted from text without the aid of audio using G2P methods. This may lead to systematic errors. In particular, Unitran relies on a 'universal' table that maps grapheme $\langle s \rangle$ (for example) to phoneme /s/ in every context and every language. This is problematic for languages that use $\langle s \rangle$ in some or all contexts to refer to other phonemes such as /ʃ/ or /z/, or use digraphs that contain $\langle s \rangle$, such as $\langle sh \rangle$ for /ʃ/. Thus, the predicted label /s/ may not consistently refer to the same phoneme within a language, nor to phonetically similar phonemes across languages. Even WikiPron annotations are user-submitted and may not be internally consistent (e.g., some words use /d͡ʒ/ or /t͡ʃ/ while others use /dʒ/ or /tʃ/), nor comparable across languages. + +'Phoneme' inventories for Unitran and WikiPron have been implicitly chosen by whoever designed the language's orthography or its WikiPron pages; while this may reflect a reasonable folk phonology, it may not correspond to the inventory of underlying or surface phonemes that any linguist would be likely to posit.
+ +C Label and Alignment Assessment: While alignment quality for languages with Epitran and WikiPron can be assessed and calibrated beyond this corpus, it cannot for those languages with only Unitran alignments; the error rate on languages without resources to evaluate PER is unknown to us. The Unitran alignments should be treated as a first-pass alignment which may still be useful for a researcher who is willing to perform quality control and correction of the alignments using automatic or manual procedures. Our automatically-generated alignment offers an initial label and placement of the boundaries that would hopefully facilitate downstream analysis. + +D Corpus Representation: It is difficult to draw conclusions about 'average behavior' across languages. Some language families are better represented in the corpus than others, with more languages, more Bible readings per language, more hours of speech per reading, or more examples of a given phoneme of interest. Additionally, the recordings by language are largely single-speaker (and predominantly male). This means that we can often draw conclusions only about a particular speaker's idiolect, rather than the population of speakers of the language. Metadata giving the exact number of different speakers per recording do not exist. + +# 4 Phonetic Case Studies + +We present two case studies to illustrate the utility of our resource for exploration of cross-linguistic typology. Phoneticians have posited several typological principles that may structure phonetic systems. Though previous research has provided some indication as to the direction and magnitude of expected effects, many instances of the principles have not yet been explored at scale. Our case studies investigate how well they account for cross-linguistic variation and systematicity for our phonetic measures from vowels and sibilants. 
Below we present the data filtering methods for our case studies, followed by an introduction to and evaluation of phonetic dispersion and uniformity.

# 4.1 Data filtering

For quality, we use only the tokens extracted using high-resource pronunciations (Epitran and WikiPron) and only in languages with mean MCD lower than 8.0. $^{9}$ Furthermore, we only use those utterances with MCD lower than 6.0. The vowel analyses focus on F1 and F2 in ERB taken at the vowel midpoint (Zwicker and Terhardt, 1980; Glasberg and Moore, 1990). $^{10}$ The sibilant analyses focus on mid-frequency peak of /s/ and /z/, also in ERB. Vowel tokens with F1 or F2 measures beyond two standard deviations from the label- and reading-specific mean were excluded, as were tokens for which Praat failed to find a measurable F1 or F2, or whose duration exceeded 300 ms. Sibilant tokens with mid-frequency peak or duration measures beyond two standard deviations from the label- and reading-specific mean were also excluded. When comparing realizations of two labels such as /i/–/u/ or /s/–/z/, we excluded readings that did not contain at least 50 tokens of each label. We show data representation with different filtering methods in Appendix D.

After filtering, the vowel analyses included 48 readings covering 38 languages and 11 language families. The distribution of language families was 21 Indo-European, 11 Austronesian, 3 Creole/Pidgin, 3 Turkic, 2 Afro-Asiatic, 2 Tai-Kadai, 2 Uto-Aztecan, 1 Austro-Asiatic, 1 Dravidian, 1 Hmong-Mien, and 1 Uralic. Approximately 8.2 million vowel tokens remained, with a minimum of $\approx 31,000$ vowel tokens per reading. The sibilant analysis included 22 readings covering 18 languages and 6 language families. The distribution of language families was 10 Indo-European, 6 Austronesian, 3 Turkic, 1 Afro-Asiatic, 1 Austro-Asiatic, and 1 Creole/Pidgin.
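The token filters above can be sketched as follows. The DataFrame and its column names are hypothetical stand-ins for the released token tables (reading-level MCD < 8.0 is assumed already applied); the ERB-rate conversion follows Glasberg and Moore (1990):

```python
import math

import pandas as pd

def hz_to_erb_rate(f_hz: float) -> float:
    """ERB-rate scale of Glasberg and Moore (1990)."""
    return 21.4 * math.log10(4.37e-3 * f_hz + 1.0)

def filter_vowel_tokens(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the Section 4.1 utterance- and token-level filters to a token
    table with (hypothetical) columns: utt_mcd, reading, label, f1_erb,
    f2_erb, duration_ms."""
    df = df[df["utt_mcd"] < 6.0]          # utterance-level MCD threshold
    df = df[df["duration_ms"] <= 300]     # drop overlong vowel tokens
    # Drop tokens beyond two standard deviations of the label- and
    # reading-specific mean, separately for F1 and F2.
    for col in ("f1_erb", "f2_erb"):
        grp = df.groupby(["reading", "label"])[col]
        z = (df[col] - grp.transform("mean")) / grp.transform("std")
        df = df[z.abs() <= 2]
    return df
```

The minimum-token requirement (50 tokens of each label per reading) is applied later, when a specific pair of labels is compared.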
The decrease in total number of readings relative to the vowel analysis primarily reflects the infrequency of /z/ cross-linguistically. Approximately 385,000 /s/ and 83,000 /z/ tokens remained, with a minimum of $\approx 5,200$ tokens per reading.

# 4.2 Phonetic dispersion

Phonetic dispersion refers to the principle that contrasting speech sounds should be distinct from one another in phonetic space (Martinet, 1955; Jakobson, 1968; Flemming, 1995, 2004). Most studies investigating this principle have focused on its validity within vowel systems, as we do here. While languages tend to have seemingly well-dispersed vowel inventories such as $\{/i/, /a/, /u/\}$ (Joos, 1948; Stevens and Keyser, 2010), the actual phonetic realization of each vowel can vary substantially (Lindau and Wood, 1977; Disner, 1983). One prediction of dispersion is that the number of vowel categories in a language should be inversely related to the degree of per-category acoustic variation (Lindblom, 1986). Subsequent findings have cast doubt on this (Livijn, 2000; Recasens and Espinosa, 2009; Vaux and Samuels, 2015), but these studies have been limited by the number and diversity of languages investigated.

To investigate this, we measured the correlation between the number of vowel categories in a language and the degree of per-category variation, as measured by the joint entropy of (F1, F2) conditioned on the vowel category. We model $p(\mathrm{F1}, \mathrm{F2} \mid V)$ using a bivariate Gaussian for each vowel type $v$ . We can then compute the joint conditional entropy under this model as $\mathrm{H(F1,F2 \mid V)} = \sum_v p(v) \mathrm{H(F1,F2 \mid V = v)} = \sum_v p(v)\frac{1}{2}\ln \det(2\pi e\Sigma_v)$ , where $\Sigma_v$ is the covariance matrix for the model of vowel $v$ .

Vowel inventory sizes per reading ranged from 4 to 20 vowels, with a median of 8.
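The entropy estimator above, and the correlation test against inventory size, can be sketched directly from token-level measurements (numpy and scipy assumed; per-reading inputs are hypothetical):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def joint_conditional_entropy(f1, f2, labels):
    """H(F1, F2 | V) in nats under a bivariate Gaussian per vowel category:
    sum_v p(v) * 0.5 * ln det(2 * pi * e * Sigma_v)."""
    f1, f2, labels = map(np.asarray, (f1, f2, labels))
    n = len(labels)
    h = 0.0
    for v in np.unique(labels):
        mask = labels == v
        cov = np.cov(np.vstack([f1[mask], f2[mask]]))   # 2x2 Sigma_v
        h += (mask.sum() / n) * 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * cov))
    return h

def entropy_inventory_correlation(entropies, inventory_sizes):
    """Spearman and Pearson tests between per-reading entropy estimates
    and vowel inventory sizes."""
    return {"spearman": spearmanr(inventory_sizes, entropies),
            "pearson": pearsonr(inventory_sizes, entropies)}
```

For a single category with identity covariance, the estimate converges to $\ln(2\pi e) \approx 2.84$ nats, which is a convenient sanity check on the implementation.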
Both Spearman and Pearson correlations between entropy estimate and vowel inventory size across analyzed languages were small and not significant (Spearman $\rho = 0.11$ , $p = 0.44$ ; Pearson $r = 0.11$ , $p = 0.46$ ), corroborating previous accounts of the relationship described in Livijn (2000) and Vaux and Samuels (2015) with a larger number of languages—a larger vowel inventory does not necessarily imply more precision in vowel category production.[11]

# 4.3 Phonetic uniformity

![](images/2f12773e56a19eb6fca76971c391a196e0d0ebd892e05056238c99da49a9f615.jpg)
(a) F1 of /i/–/u/ in ERB

![](images/fbe493d72033f489a907e771002794a25f14ba347bd63b97a0916408f76d1a5d.jpg)
(b) Mid-frequency peak of $/s/-/z/$ in ERB

Figure 4: Correlations of mean F1 (ERB) between /i/ and /u/ and of mean mid-frequency peak (ERB) between /s/ and /z/. The paired segments share a relevant phonological feature specification that is approximated by the acoustic-phonetic measurement: vowel height by F1 and sibilant place by mid-frequency peak. Each reading is represented by an ellipsoid, centered on the paired means and shaped by $\frac{1}{10}$ of their respective standard deviations. The solid line reflects the best-fit linear regression line with standard error in gray shading; the dashed line shows the line of equality. Marginal histograms show the range of variation in the segment-specific means.

Previous work suggests that F1 is fairly uniform with respect to phonological height. Within a single language, the mean F1s of /e/ and /o/—which share a height—have been found to be correlated across speakers (Yorkshire English: Watt, 2000; French: Ménard et al., 2008; Brazilian Portuguese: Oushiro, 2019; Dutch, English, French, Japanese, Portuguese, Spanish: Schwartz and Ménard, 2019). Though it is physically possible for these vowels
to differ in F1 realization, the correlations indicate a strong tendency for languages and individual speakers to yoke these two representations together.

Systematicity in the realization of sibilant place of articulation has also been observed across speakers of American English and Czech (Chodroff, 2017). Phonetic correlates of sibilant place strongly covary between /s/ and /z/, which share a [+anterior] place of articulation and are produced at the alveolar ridge, and between /ʃ/ and /ʒ/, which share a [-anterior] place of articulation and are produced behind the alveolar ridge.

A principle of uniformity may account for the findings above. Uniformity here refers to a principle in which a distinctive phonological feature should have a consistent phonetic realization, within a language or speaker, across different segments with that feature (Keating, 2003; Chodroff et al., 2019). Similar principles posited in the literature include Maximal Use of Available Controls, in which a control refers to an integrated perceptual and motor phonetic target (Ménard et al., 2008), as well as a principle of gestural economy (Maddieson, 1995). Phonetic realization refers to the mapping from the abstract distinctive feature to an abstract phonetic target. We approximate this phonetic target via an acoustic-phonetic measurement, but we emphasize that the acoustic measurement is not necessarily a direct reflection of an underlying phonetic target (which could be an articulatory gesture, auditory goal, or perceptuo-motor representation of the sound). We make the simplifying assumption that the acoustic-phonetic formants (F1, F2) directly correspond to phonetic targets linked to the vowel features of height and backness.
More precisely, uniformity of a phonetic measure with respect to a phonological feature means that any two segments sharing that feature will tend to have approximately equal measurements in a given language, even when that value varies across languages. We can observe whether this is true by plotting the measures of the two segments against each other by language (e.g., Figure 4).

Vowels. As shown in Figure 4 and Table 3, the strongest correlations in mean F1 frequently reflected uniformity of height (e.g., high vowels /i/–/u/ $r = 0.79$ , $p < 0.001$ ; mid vowels /e/–/o/ $r = 0.62$ , $p < 0.01$ ).[12] Nevertheless, some vowel pairs that differed in height were also moderately correlated in mean F1 (e.g., /o/–/a/ $r = 0.66$ , $p < 0.001$ ). Correlations of mean F1 were overall moderate in strength, regardless of the vowels' phonological specifications.

Correlations of mean F2 were also strongest among vowels with a uniform backness specification (e.g., back vowels /u/–/o/ $r = 0.69$ , $p < 0.001$ ; front vowels /i/–/ε/ $r = 0.69$ , $p < 0.05$ ; Table 4). The correlation between front tense vowels /i/ and /e/ was significant and in the expected direction, but also slightly weaker than the homologous back vowel pair ( $r = 0.41$ , $p < 0.05$ ). Vowels differing in backness frequently had negative correlations, which could reflect influences of category crowding or language-/speaker-specific differences in peripheralization. We leave further exploration of those relationships to future study.

The moderate to strong F1 correlations among vowels with a shared height specification are consistent with expectations based on previous studies, and also with predictions of uniformity. Similarly, we find an expected correlation of F2 means for vowels with a shared backness specification. The correlations of vowel pairs that were predicted to have significant correlations, but did not, tended to have small sample sizes (< 14 readings).
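The per-reading pairing behind these correlations can be sketched as follows (token-table column names are hypothetical; pandas and scipy assumed). Each reading contributes one mean per segment, readings lacking 50 tokens of either label are excluded per §4.1, and Pearson's r is computed over the paired means:

```python
import pandas as pd
from scipy.stats import pearsonr

def paired_segment_correlation(tokens: pd.DataFrame, seg_a: str, seg_b: str,
                               value: str = "f1_erb", min_tokens: int = 50):
    """Pearson correlation of per-reading mean `value` between two labels.
    Readings lacking `min_tokens` tokens of either label are excluded."""
    counts = tokens.groupby(["reading", "label"]).size().unstack(fill_value=0)
    ok = counts.index[(counts[seg_a] >= min_tokens) & (counts[seg_b] >= min_tokens)]
    means = (tokens[tokens["reading"].isin(ok)]
             .groupby(["reading", "label"])[value].mean().unstack())
    return pearsonr(means[seg_a], means[seg_b])
```

Applying this to /i/ and /u/ with F1 values reproduces the shape of the analysis summarized in Figure 4, one point (pair of means) per reading.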
+ +Nevertheless, the correlations are not perfect; nor are the patterns. For instance, the back vowel correlations of F2 are stronger than the front vowel correlations. While speculative, the apparent peripheralization of /i/ (as revealed in the negative F2 correlations) could have weakened the expected uniformity relation of /i/ with other front vowels. Future research should take into account additional influences of the vowel inventory composition, as well as articulatory or auditory factors for a more complete understanding of the structural forces in the phonetic realization of vowels. + +Sibilants. The mean mid-frequency peak values for $/s/$ and $/z/$ each varied substantially across readings, and were also strongly correlated with one another $(r = 0.87, p < 0.001$ ; Figure 4). This finding suggests a further influence of uniformity on the realization of place for $/s/$ and $/z/$ , and the magnitude is comparable to previous correlations observed across American English and Czech speakers, in which $r$ was $\approx 0.90$ (Chodroff, 2017). + +# 5 Directions for Future Work + +We hope our corpus may serve as a touchstone for further improvements in phonetic typology research and methodology. Here we suggest potential steps forward for known areas (§3.4) where this corpus could be improved: + +A Sentence alignments were generated using Unitran, and could be improved with higher-quality G2P and verse-level text segmentation to standardize utterances across languages. + +B Consistent and comparable phoneme labels are the ultimate goal. Concurrent work on universal phone recognition (Li et al., 2020) addresses this issue through a universal phone inventory constrained by language-specific PHOIBLE inventories (Moran and McCloy, 2019). However, free-decoding phones from speech alone is challenging. 
One exciting possibility is to use the orthography and audio jointly to guide semi-supervised learning of per-language pronunciation lexicons (Lu et al., 2013; Zhang et al., 2017). +C Reliable quality assessment for current methods remains an outstanding research question for many languages. For covered languages, using a universal label set to map additional high quality lexicons (e.g., hand-annotated lexicons) to the same label space as ours would enable direct label and alignment assessment through precision, recall, and PER. +D Curating additional resources beyond this corpus would improve coverage and balance, such as contributing additional Epitran modules. Additional readings exist for many languages on the original bible.is site and elsewhere. Annotations with speaker information are not available, but improved unsupervised speaker clustering may also support better analysis. + +# 6 Conclusion + +VoxClamantis v1.0 is the first large-scale corpus for phonetic typology, with extracted phonetic features for 635 typologically diverse languages. We present two case studies illustrating both the research potential and limitations of this corpus for investigation of phonetic typology at a large scale. We discuss several caveats for the use of this corpus and areas for substantial improvement. Nonetheless, we hope that directly releasing our alignments and token-level features enables greater research accessibility in this area. We hope this corpus will motivate and enable further developments in both phonetic typology and methodology for working with cross-linguistic speech corpora. + +# Acknowledgments + +The authors gratefully acknowledge Colin Wilson for his guidance and discussion on the topic, Florian Metze for resources, and Carlos Aguirre for helpful feedback. + +# References + +Gopala Krishna Anumanchipalli, Kishore Prahallad, and Alan W. Black. 2011. Festvox: Tools for creation and analyses of large speech corpora. 
In Workshop on Very Large Scale Phonetics Research, UPenn, Philadelphia. +Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2020. Common Voice: A massively-multilingual speech corpus. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020). +Roy Becker-Kristal. 2010. Acoustic typology of vowel inventories and Dispersion Theory: Insights from a large cross-linguistic corpus. Ph.D. thesis, University of California, Los Angeles. +Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289-300. +Alan W. Black. 2006. CLUSTERGEN: A statistical parametric synthesizer using trajectory modeling. In Proceedings of INTERSPEECH. +Alan W. Black. 2019. CMU Wilderness Multilingual Speech Dataset. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5971-5975, Brighton, UK. IEEE. +Oliver Blacklock. 2004. Characteristics of Variation in Production of Normal and Disordered Fricatives, Using Reduced-Variance Spectral Methods. Ph.D. thesis, University of Southampton. +Paul Boersma and David Weenink. 2019. Praat: Doing phonetics by computer [computer program]. version 6.0.45. +Taehong Cho and Peter Ladefoged. 1999. Variation and universals in VOT: Evidence from 18 languages. Journal of Phonetics, 27(2):207-229. +Eleanor Chodroff. 2017. Structured Variation in Obstruent Production and Perception. Ph.D. thesis, Johns Hopkins University. +Eleanor Chodroff, Alessandra Golden, and Colin Wilson. 2019. Covariation of stop voice onset time across languages: Evidence for a universal constraint on phonetic realization. The Journal of the Acoustical Society of America, 145(1):EL109-EL115. +Sandra Ferrari Disner. 1983. 
Vowel Quality: The Relation between Universal and Language-specific Factors. Ph.D. thesis, UCLA.

David M. Eberhard, Gary F. Simons, and Charles D. Fennig, editors. 2020. *Ethnologue: Languages of the World*, 23rd edition. SIL International. Online version: http://www.ethnologue.com.
Arvo Eek and Einar Meister. 1994. Acoustics and perception of Estonian vowel types. *Phonetic Experimental Research*, XVIII:146-158.
Olle Engstrand and Una Cunningham-Andersson. 1988. Iris - a data base for cross-linguistic phonetic research.
Edward S. Flemming. 1995. Auditory Representations in Phonology. Ph.D. thesis, UCLA.
Edward S. Flemming. 2004. Contrast and perceptual distinctiveness. In Bruce Hayes, R. Kirchner, and Donca Steriade, editors, The Phonetic Bases of Phonological Markedness, pages 232-276. Cambridge University Press.
Harvey Fletcher. 1923. Physical measurements of audition and their bearing on the theory of hearing. Journal of the Franklin Institute, 196(3):289-326.
Karen Forrest, Gary Weismer, Paul Milenkovic, and Ronald N. Dougall. 1988. Statistical analysis of word-initial voiceless obstruents: Preliminary data. The Journal of the Acoustical Society of America, 84(1):115-123.
Brian R. Glasberg and Brian C.J. Moore. 1990. Derivation of auditory filter shapes from notched-noise data. Hearing Research, 47(1-2):103-138.
Matthew Gordon and Timo Roettger. 2017. Acoustic correlates of word stress: A cross-linguistic survey. Linguistics Vanguard, 3(1).
Kyle Gorman, Lucas F.E. Ashby, Aaron Goyzueta, Arya D. McCarthy, Shijie Wu, and Daniel You. 2020. The SIGMORPHON 2020 shared task on multilingual grapheme-to-phoneme conversion. In Proceedings of the SIGMORPHON Workshop.
Mary Harper. 2011. The IARPA Babel multilingual speech database. Accessed: 2020-05-01.
Arthur S. House and Kenneth N. Stevens. 1956. Analog studies of the nasalization of vowels. The Journal of Speech and Hearing Disorders, 21(2):218-232.
Roman Jakobson. 1968.
Child Language, Aphasia and Phonological Universals. Mouton Publishers.
Allard Jongman, Ratree Wayland, and Serena Wong. 2000. Acoustic characteristics of English fricatives. The Journal of the Acoustical Society of America, 108(3):1252-1263.
Martin Joos. 1948. Acoustic phonetics. Language, 24(2):5-136.
Patricia A. Keating. 2003. Phonetic and other influences on voicing contrasts. In Proceedings of the 15th International Congress of Phonetic Sciences, pages 20-23, Barcelona, Spain.
Laura Koenig, Christine H. Shadle, Jonathan L. Preston, and Christine R. Mooshammer. 2013. Toward improved spectral measures of /s/: Results from adolescents. Journal of Speech, Language, and Hearing Research, 56(4):1175-1189.
John Kominek, Tanja Schultz, and Alan W. Black. 2008. Synthesizer voice quality of new languages calibrated with mean mel cepstral distortion. In Spoken Language Technologies for Under-Resourced Languages.
Peter Ladefoged, Richard Harshman, Louis Goldstein, and Lloyd Rice. 1978. Generating vocal tract shapes from formant frequencies. The Journal of the Acoustical Society of America, 64(4):1027-1035.
Peter Ladefoged and Keith Johnson. 2014. A Course in Phonetics. Nelson Education.
Peter Ladefoged and Ian Maddieson. 2007. The UCLA phonetics lab archive.
Jackson L. Lee, Lucas F.E. Ashby, M. Elizabeth Garza, Yeonju Lee-Sikka, Sean Miller, Alan Wong, Arya D. McCarthy, and Kyle Gorman. 2020. Massively multilingual pronunciation mining with WikiPron. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020). European Language Resources Association (ELRA). Resources downloadable from https://github.com/kylebgorman/wikipron.
Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R. Mortensen, Graham Neubig, Alan W. Black, et al. 2020. Universal phone recognition with a multilingual allophone system.
In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8249-8253. IEEE. +Mona Lindau and Patricia Wood. 1977. Acoustic vowel spaces. UCLA Working Papers in Phonetics, 38:41-48. +Björn Lindblom. 1986. Phonetic universals in vowel systems. In John J. Ohala and Jeri Jaeger, editors, Experimental Phonology, pages 13-44. Academic Press, Orlando. +Björn Lindblom and Johan Sundberg. 1971. Acoustical consequences of lip, tongue, jaw, and larynx movement. The Journal of the Acoustical Society of America, 50(4B):1166-1179. +Peder Livijn. 2000. Acoustic distribution of vowels in differently sized inventories-hot spots or adaptive dispersion. *Phonetic Experimental Research*, Institute of Linguistics, University of Stockholm (PER-ILUS), 11. + +Liang Lu, Arnab Ghoshal, and Steve Renals. 2013. Acoustic data-driven pronunciation lexicon for large vocabulary speech recognition. In 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pages 374-379. IEEE. +Ian Maddieson. 1995. Gestural economy. In Proceedings of the 13th International Congress of Phonetic Sciences, Stockholm, Sweden. +André Martinet. 1955. Économie Des Changements Phonétiques: Traité de Phonologie Diachronique, volume 10. Bibliotheca Romanica. +Lucie Ménard, Jean-Luc Schwartz, and Jérôme Aubin. 2008. Invariance and variability in the production of the height feature in French vowels. Speech Communication, 50:14-28. +Steven Moran and Daniel McCloy, editors. 2019. PHOIBLE 2.0. Max Planck Institute for the Science of Human History, Jena. +David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA). +Terrance M. Nearey. 1977. *Phonetic Feature Systems for Vowels*. Ph.D. thesis, University of Alberta. 
Reprinted 1978 by Indiana University Linguistics Club.
Josef Robert Novak, Nobuaki Minematsu, and Keikichi Hirose. 2016. Phonetisaurus: Exploring grapheme-to-phoneme conversion with joint n-gram models in the WFST framework. Natural Language Engineering, 22(6):907-938.
Livia Oushiro. 2019. Linguistic uniformity in the speech of Brazilian internal migrants in a dialect contact situation. In Proceedings of the 19th International Congress of Phonetic Sciences, pages 686-690, Melbourne, Australia. Australasian Speech Science and Technology Association Inc.
Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society. IEEE Catalog No.: CFP11SRW-USB.
Kishore Prahallad, Alan W. Black, and Ravishankar Mosur. 2006. Sub-phonetic modeling for capturing pronunciation variations for conversational speech synthesis. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), volume 1. IEEE.
Ting Qian, Kristy Hollingshead, Su-youn Yoon, Kyoung-young Kim, and Richard Sproat. 2010. A Python toolkit for universal transliteration. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
Karim Rahim and Wesley S. Burr. 2017. multitaper: Multitaper spectral analysis. R package version 1.0-14.
Daniel Recasens and Aina Espinosa. 2009. Dispersion and variability in Catalan five and six peripheral vowel systems. Speech Communication, 51(3):240-258.
Tanja Schultz. 2002. GlobalPhone: A multilingual speech and text database developed at Karlsruhe University.
In Seventh International Conference on Spoken Language Processing, pages 345-348, Denver, CO. +Jean-Luc Schwartz and Lucie Ménard. 2019. Structured idiosyncrasies in vowel systems. OSF Preprints. +Christine H. Shadle, Wei-rong Chen, and D. H. Whalen. 2016. Stability of the main resonance frequency of fricatives despite changes in the first spectral moment. The Journal of the Acoustical Society of America, 140(4):3219-3220. +Kenneth N. Stevens and Samuel J. Keyser. 2010. Quantal theory, enhancement and overlap. Journal of Phonetics, 38(1):10-19. +Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Seventh International Conference on Spoken Language Processing, pages 901-904. +Bert Vaux and Bridget Samuels. 2015. Explaining vowel systems: Dispersion theory vs natural selection. Linguistic Review, 32(3):573-599. +Dominic J. L. Watt. 2000. Phonetic parallels between the close-mid vowels of Tyneside English: Are they internally or externally motivated? Language Variation and Change, 12(1):69-101. +John C. Wells. 1995/2000. Computer-coding the IPA: A proposed extension of SAMPA. +D.H. Whalen and Andrea G. Levitt. 1995. The universality of intrinsic F0 of vowels. Journal of Phonetics, 23:349-366. +Matthew Wiesner, Oliver Adams, David Yarowsky, Jan Trmal, and Sanjeev Khudanpur. 2019. Zero-shot pronunciation lexicons for cross-language acoustic model transfer. In Proceedings of IEEE Association for Automatic Speech Recognition and Understanding (ASRU). + +Xiaohui Zhang, Vimal Manohar, Daniel Povey, and Sanjeev Khudanpur. 2017. Acoustic data-driven lexicon learning based on a greedy pronunciation selection framework. arXiv preprint arXiv:1706.03747. +Eberhard Zwicker and Ernst Terhardt. 1980. Analytical expressions for critical-band rate and critical bandwidth as a function of frequency. The Journal of the Acoustical Society of America, 68(5):1523-1525. 
+ +# A Pairwise Correlations between Vowel Formant Measures (§4 Case Studies) + +Table 3 and Table 4 respectively show Pearson correlations of mean F1 and mean F2 in ERB between vowels that appear in at least 10 readings. As formalized in the present analysis, phonetic uniformity predicts strong correlations of mean F1 among vowels with a shared height specification, and strong correlations of mean F2 among vowels with a shared backness specification. The respective “Height” and “Backness” columns in Table 3 and Table 4 indicate whether the vowels in each pair match in their respective specifications. $p$ -values are corrected for multiple comparisons using the Benjamini-Hochberg correction and a false discovery rate of 0.25 (Benjamini and Hochberg, 1995). Significance is assessed at $\alpha = 0.05$ following the correction for multiple comparisons; rows that appear in gray have correlations that are not significant according to this threshold. + +
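The Benjamini-Hochberg step-up adjustment used for these tables can be sketched as:

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg (FDR) adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Scale the i-th smallest p-value by m / i.
    ranked = p[order] * m / np.arange(1, m + 1)
    # Enforce monotonicity from the largest p-value downward.
    adjusted = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out
```

Rows whose adjusted p-value exceeds the chosen threshold are the ones reported as not significant.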
| V1 | V2 | Height | # Readings | r | p |
| --- | --- | --- | --- | --- | --- |
| /i/ | /i:/ | | 12 | 0.81 | 0.006 |
| /e:/ | /o:/ | | 10 | 0.81 | 0.015 |
| /i/ | /u/ | | 40 | 0.79 | 0.000 |
| /ε/ | /ɔ/ | | 11 | 0.68 | 0.053 |
| /o/ | /a/ | | 37 | 0.66 | 0.000 |
| /i:/ | /o:/ | | 11 | 0.65 | 0.070 |
| /i:/ | /u:/ | | 12 | 0.64 | 0.061 |
| /e/ | /o/ | | 35 | 0.62 | 0.001 |
| /e/ | /u/ | | 36 | 0.59 | 0.001 |
| /e/ | /a/ | | 34 | 0.58 | 0.002 |
| /u/ | /ə/ | | 12 | 0.58 | 0.105 |
| /i:/ | /e:/ | | 11 | 0.58 | 0.118 |
| /i/ | /e/ | | 38 | 0.54 | 0.002 |
| /ε/ | /a/ | | 12 | 0.54 | 0.127 |
| /u/ | /o/ | | 38 | 0.49 | 0.007 |
| /ε/ | /u/ | | 14 | 0.49 | 0.135 |
| /i/ | /o/ | | 39 | 0.46 | 0.011 |
| /e/ | /ε/ | | 12 | 0.46 | 0.204 |
| /u/ | /a/ | | 37 | 0.42 | 0.027 |
| /i:/ | /e/ | | 11 | 0.42 | 0.288 |
| /u/ | /u:/ | | 10 | 0.41 | 0.334 |
| /i:/ | /u/ | | 11 | 0.33 | 0.430 |
| /i:/ | /a/ | | 11 | 0.28 | 0.496 |
| /i/ | /a/ | | 39 | 0.27 | 0.173 |
| /i/ | /ε/ | | 14 | 0.24 | 0.496 |
| /i:/ | /o/ | | 13 | 0.19 | 0.624 |
| /i/ | /ə/ | | 13 | 0.10 | 0.785 |
| /u/ | /ɔ/ | | 12 | 0.09 | 0.785 |
| /ε/ | /o/ | | 13 | -0.09 | 0.785 |
| /e/ | /ɔ/ | | 10 | -0.12 | 0.785 |
| /u:/ | /o/ | | 10 | -0.12 | 0.785 |
| /i/ | /ɔ/ | | 11 | -0.42 | 0.288 |
| /o/ | /ə/ | | 11 | -0.51 | 0.173 |
| /ə/ | /a/ | | 11 | -0.90 | 0.001 |
+ +Table 3: Pearson correlations $(r)$ of mean F1 in ERB between vowel categories. + +
| V1 | V2 | Backness | # Readings | r | p |
| --- | --- | --- | --- | --- | --- |
| /e/ | /ε/ | | 12 | 0.77 | 0.019 |
| /u/ | /u:/ | | 10 | 0.77 | 0.037 |
| /i/ | /i:/ | | 12 | 0.70 | 0.038 |
| /u/ | /o/ | | 38 | 0.69 | 0.000 |
| /i/ | /ε/ | | 14 | 0.69 | 0.031 |
| /u:/ | /o/ | | 10 | 0.62 | 0.130 |
| /u/ | /ə/ | | 12 | 0.60 | 0.107 |
| /u/ | /ɔ/ | | 12 | 0.52 | 0.168 |
| /i/ | /e/ | | 38 | 0.41 | 0.038 |
| /ε/ | /a/ | | 12 | 0.32 | 0.519 |
| /o/ | /a/ | | 37 | 0.30 | 0.159 |
| /e:/ | /o:/ | | 10 | 0.27 | 0.666 |
| /e/ | /a/ | | 34 | 0.24 | 0.339 |
| /o/ | /ə/ | | 11 | 0.21 | 0.724 |
| /ə/ | /a/ | | 11 | 0.16 | 0.830 |
| /i:/ | /e/ | | 11 | 0.11 | 0.911 |
| /i/ | /a/ | | 39 | 0.06 | 0.911 |
| /i:/ | /e:/ | | 11 | 0.06 | 0.965 |
| /e/ | /o/ | | 35 | 0.01 | 0.965 |
| /u/ | /a/ | | 37 | 0.00 | 0.985 |
| /ε/ | /ɔ/ | | 11 | -0.03 | 0.965 |
| /i:/ | /a/ | | 11 | -0.04 | 0.965 |
| /ε/ | /o/ | | 13 | -0.04 | 0.965 |
| /e/ | /u/ | | 36 | -0.12 | 0.666 |
| /ε/ | /u/ | | 14 | -0.22 | 0.666 |
| /i/ | /ə/ | | 13 | -0.23 | 0.666 |
| /i:/ | /o:/ | | 11 | -0.42 | 0.345 |
| /i/ | /o/ | | 39 | -0.48 | 0.017 |
| /i:/ | /o/ | | 13 | -0.52 | 0.149 |
| /i/ | /u/ | | 40 | -0.55 | 0.003 |
| /i/ | /ɔ/ | | 11 | -0.63 | 0.107 |
| /e/ | /ɔ/ | | 10 | -0.65 | 0.107 |
| /i:/ | /u/ | | 11 | -0.80 | 0.019 |
| /i:/ | /u:/ | | 12 | -0.83 | 0.009 |
+ +Table 4: Pearson correlations $(r)$ of mean F2 in ERB between vowel categories. + +# B Distributions of Unitran Segment Accuracy (§3.1.3 Quality Measures) + +Here we evaluate the quality of the Unitran dataset in more detail. The goal is to explore the variation in the quality of the labeled Unitran segments across different languages and phoneme labels. This evaluation includes only readings in high-resource languages, where we have not only the aligned Unitran pronunciations but also aligned high-resource pronunciations (Epitran or WikiPron) against which to evaluate them. The per-token statistics used to calculate these plots are included in the corpus release to enable closer investigation of individual phonemes than is possible here. + +# B.1 Unitran Pronunciation Accuracy + +First, in Figures 5 and 6, we consider whether Unitran's utterance pronunciations are accurate without looking at the audio. For each utterance, we compute the unweighted Levenshtein alignment between the Unitran pronunciation of the utterance and the high-resource pronunciation. For each reading, we then score the percentage of Unitran 'phoneme' tokens that were aligned to high-resource 'phoneme' tokens with exactly the same label.[14] We can see in Figure 6 that many labels are highly accurate in many readings while being highly inaccurate in many others. Some labels are noisy in some readings.[15] + +![](images/3b573b4340dfcf151f53805dececdcec21c7d160e675794790a0c900deff200d.jpg) +Figure 5: Unitran pronunciation accuracy per language, evaluated by Levenshtein alignment to WikiPron pronunciations (hatched bars) or Epitran pronunciations (plain bars). Where a language has multiple readings, error bars show the min and max across those readings. + +![](images/3965c822bae7a40deb06317aebdb5aec5312624a5a9d52deeb2eecbc4544f417.jpg) +Figure 6: Unitran pronunciation accuracy per language, for selected phonemes. Accuracy is evaluated by Levenshtein alignment as in Figure 5. 
Each curve is a kernel density plot with integral 1. For the $/z/$ curve, the integral between $80\%$ and $100\%$ (for example) is the estimated probability that in a high-resource language drawn uniformly at random, the fraction of Unitran $/z/$ segments that align to high-resource $/z/$ segments falls in that range. The 'all' curve is the same, but now the uniform draw is from all pairs of (high-resource language, Unitran phoneme used in that language). + +# B.2 Unitran Segment Label Accuracy + +In Figures 7 and 8, we ask the same question again, but making use of the audio data. The match for each Unitran segment is now found not by Levenshtein alignment, but more usefully by choosing the high-resource segment with the closest midpoint. For each reading, we again score the percentage of Unitran 'phoneme' tokens whose aligned high-resource 'phoneme' tokens have exactly the same label. Notice that phonemes that typically had high accuracy in Figure 6, such as /p/ and /b/, now have far more variable accuracy in Figure 8, suggesting difficulty in aligning the Unitran pronunciations to the correct parts of the audio. + +![](images/d91d430ae1c65d2772c05435dc2e647ae7fbb8a29bb3474fc6e662bc32af3c9f.jpg) +Figure 7: Unitran pronunciation accuracy per language, as in Figure 5 but with audio midpoint alignment in place of Levenshtein alignment. + +![](images/2156bf71eefffe5433935bde8eb90003d52466584ba5bfbba3a918943fda7a88.jpg) +Figure 8: Unitran pronunciation accuracy per language, for selected phonemes, as in Figure 6 but with audio midpoint alignment in place of Levenshtein alignment. + +# B.3 Unitran Segment Boundary Accuracy + +Finally, in Figures 9 and 10, we measure whether Unitran segments with the "correct" label also have the "correct" time boundaries, where "correctness" is evaluated against the corresponding segments obtained using Epitran or WikiPron+G2P. 
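The midpoint matching used in B.2 and B.3 can be sketched as follows. Segments are represented here as (label, start, end) triples; this is an illustrative reconstruction, not the released tooling:

```python
def midpoint_label_accuracy(unitran_segs, reference_segs):
    """For each Unitran segment, find the reference segment whose temporal
    midpoint is closest, and check whether the labels match.
    Returns the fraction of Unitran segments with a matching label."""
    def midpoint(seg):
        _, start, end = seg
        return (start + end) / 2.0

    matches = 0
    for useg in unitran_segs:
        um = midpoint(useg)
        closest = min(reference_segs, key=lambda r: abs(midpoint(r) - um))
        matches += useg[0] == closest[0]
    return matches / len(unitran_segs)
```

The boundary-error measure of B.3 extends this matching by summing the absolute offsets of the left and right boundaries for label-matched pairs.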
![](images/55e698f8c4b6d7de5b0500ee0ade33b480c579efeb9d9bdea0bf6c4dc6494a90.jpg)
Figure 9: Mean error per language in the temporal boundaries of Unitran segments. Each Unitran segment is evaluated against the WikiPron segment (hatched bars) or Epitran segment (plain bars) with the closest midpoint, as if the latter were truth. The error of a segment is the absolute offset of the left boundary plus the absolute offset of the right boundary. Only segments where the Unitran label matches the Epitran/WikiPron label are included in the average. Where a language has multiple readings, error bars show the min and max across those readings.

![](images/074149670e10b282458f1e3d70bcdec2f59abab1bed9ed48bd6d2ffe33a778d7.jpg)
Figure 10: Mean error per language in the temporal boundaries of Unitran segments, for selected phonemes. Each curve is a kernel density plot with integral 1. For the /z/ curve, the integral between 50ms and 100ms (for example) is the estimated probability that in a high-resource language drawn uniformly at random, the Unitran /z/ segments whose corresponding Epitran or WikiPron segments are also labeled with /z/ have mean boundary error in that range. Small bumps toward the right correspond to individual languages where the mean error of /z/ is unusually high. The 'all' curve is the same, but now the uniform draw is from all pairs of (high-resource language, Unitran phoneme used in that language). The boundary error of a segment is evaluated as in Figure 9.

# C WikiPron Grapheme-to-Phoneme (G2P) Accuracy (§3.1.3 Quality Measures)

For each language where we used WikiPron, Table 5 shows the phoneme error rate (PER) of Phonetisaurus G2P models trained on WikiPron entries, as evaluated on held-out WikiPron entries. This is an estimate of how accurate our G2P-predicted pronunciations are on out-of-vocabulary words, insofar as those are distributed similarly to the in-vocabulary words.
(It is possible, however, that out-of-vocabulary words such as Biblical names are systematically easier or harder for the G2P system to pronounce, depending on how they were transliterated.) + +The same G2P configuration was used for all languages, with the hyperparameter settings shown in Table 6. (seq1_max and seq2_max describe how many tokens in the grapheme and phoneme sequences can align to each other.) These settings were tuned on SIGMORPHON 2020 Task 1 French, Hungarian, and Korean data (Gorman et al., 2020), using 20 random 80/20 splits. +
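The evaluation protocol above (20 random 80/20 splits, PER reported with a 95% confidence interval) can be sketched as below; `train_and_eval` stands in for training a Phonetisaurus model and decoding the held-out words, and is not a real API:

```python
import random
import statistics

def levenshtein(a, b):
    # standard DP edit distance over phoneme sequences
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def per(pred, gold):
    """Phoneme error rate: total edit distance / total gold length, in percent."""
    dist = sum(levenshtein(p, g) for p, g in zip(pred, gold))
    return 100.0 * dist / sum(len(g) for g in gold)

def per_with_ci(lexicon, train_and_eval, trials=20, seed=0):
    """Mean PER and 95% CI over `trials` random 80/20 splits of a lexicon.
    `lexicon` is a list of (word, phoneme_sequence) entries."""
    rng = random.Random(seed)
    scores = []
    for _ in range(trials):
        entries = lexicon[:]
        rng.shuffle(entries)
        cut = int(0.8 * len(entries))
        train, test = entries[:cut], entries[cut:]
        pred = train_and_eval(train, test)
        scores.append(per(pred, [phones for _, phones in test]))
    mean = statistics.mean(scores)
    ci = 1.96 * statistics.stdev(scores) / (len(scores) ** 0.5)
    return mean, ci
```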
Train size: 4174134181126157813963320322710077543003016114211172704

| ISO 639-3 | fin | lat | nhx | srn | mah | por-po | mfe | mww | por-bz | eng | khm | mlg | ori | ban | urd |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| PER | 0.8 | 2.4 | 4.1 | 4.6 | 9.6 | 10.1 | 10.7 | 10.8 | 11.4 | 14.5 | 15.5 | 15.8 | 16.1 | 19.5 | 26.7 |
| 95% CI | ±0.02 | ±0.04 | ±1.02 | ±0.76 | ±0.41 | ±0.11 | ±1.2 | ±1.29 | ±0.16 | ±0.06 | ±0.38 | ±1.44 | ±1.13 | ±1.35 | ±0.60 |
+ +Table 5: WikiPron G2P Phone Error Rate (PER), calculated by treating WikiPron annotations as ground truth. We perform 20 trials with random 80/20 splits per language, and report PER averaged across trials with $95\%$ confidence intervals for each language. + +
| Phonetisaurus Alignment Hyperparameters | |
|---|---|
| seq1_max | 1 |
| seq2_max | 3 |
| seq1_del | True |
| seq2_del | True |
| grow | True |
| max EM iterations | 11 |

| Graphone Language Model Hyperparameters | |
|---|---|
| n-gram order | 5 |
| LM type | max-ent |
| discounting | Kneser-Ney |
| gt2min | 2 |
| gt3min | 2 |
| gt4min | 3 |
| gt5min | 4 |
+ +Table 6: Final G2P hyperparameter settings. Alignment parameters not listed here for phonetisaurus-align use the default values. The language model was trained with SRILM (Stolcke, 2002) ngram-count, using default values except for those listed above. + +# D Retention Statistics (§4.1 Data Filtering) + +Table 7 shows what percentage of tokens would be retained after various methods are applied to filter out questionable tokens from the readings used in §4.1. In particular, the rightmost column shows the filtering that was actually used in §4.1. We compute statistics for each reading separately; in each column we report the minimum, median, mean, and maximum statistics over the readings. The top half of the table considers vowel tokens (for the vowels in Appendix A); the bottom half considers sibilant tokens (/s/ and /z/). + +On the left side of the table, we consider three filtering techniques for Unitran alignments. Midpoint retains only the segments whose labels are "correct" according to the midpoint-matching methods of Appendix B. MCD retains only those utterances with $\mathrm{MCD} < 6$ . Outlier removes tokens that are outliers according to the criteria described in §4.1. Finally, AGG. is the aggregate retention rate after all three methods are applied in order. + +On the right side of the table, we consider the same filtering techniques for the high-resource alignments that we actually use, with the exception of Midpoint, as here we have no higher-quality annotation to match against.
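The per-stage and aggregate (AGG.) retention percentages come from applying the filters in order; a minimal sketch, with illustrative predicates standing in for the real Midpoint/MCD/Outlier checks:

```python
def retention_stats(tokens, filters):
    """Apply quality filters in order; report per-filter and aggregate retention.

    `tokens` is a list of token records; `filters` is a list of (name, predicate)
    pairs applied sequentially. Each stage's percentage is relative to the tokens
    surviving the previous stage; 'AGG.' is relative to the original total.
    """
    stats = {}
    kept = tokens
    for name, keep in filters:
        survivors = [t for t in kept if keep(t)]
        stats[name] = 100.0 * len(survivors) / len(kept) if kept else 0.0
        kept = survivors
    stats["AGG."] = 100.0 * len(kept) / len(tokens) if tokens else 0.0
    return stats
```

For instance, filters like `("MCD", lambda t: t.mcd < 6)` and an outlier predicate would be chained here; the example names are assumptions, not the corpus code.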
| | | Unitran # Tokens | Midpoint | MCD | Outlier | AGG. | High-Res # Tokens | MCD | Outlier | AGG. |
|---|---|---|---|---|---|---|---|---|---|---|
| Vowels | Min | 50,132 | 2% | 42% | 83% | 1% | 61,727 | 42% | 84% | 37% |
| | Median | 215,162 | 23% | 88% | 90% | 16% | 232,059 | 88% | 90% | 79% |
| | Mean | 239,563 | 25% | 81% | 89% | 20% | 223,815 | 81% | 90% | 73% |
| | Max | 662,813 | 65% | 100% | 93% | 60% | 468,864 | 100% | 93% | 93% |
| | # Readings | 49 | 46 | 48 | 49 | 45 | 49 | 48 | 49 | 48 |
| Sibilants | Min | 7,198 | 10% | 42% | 89% | 13% | 7,184 | 44% | 91% | 43% |
| | Median | 28,690 | 70% | 87% | 97% | 59% | 27,569 | 87% | 97% | 85% |
| | Mean | 30,025 | 63% | 80% | 95% | 56% | 27,083 | 81% | 96% | 79% |
| | Max | 63,573 | 89% | 100% | 98% | 79% | 45,290 | 100% | 99% | 96% |
| | # Readings | 36 | 26 | 35 | 36 | 19 | 25 | 22 | 25 | 22 |
+ +Table 7: Summary of quality measure retention statistics for vowels and sibilants over unique readings with reading-level MCD $< 8$ for Unitran and high-resource alignments. + +# E All VoxClamantis v1.0 Languages + +All 635 languages from 690 readings are presented here with their language family, ISO 639-3 code, and mean utterance alignment quality in Mel Cepstral Distortion (MCD) from Black (2019). Languages for which we release Epitran and/or WikiPron alignments in addition to Unitran alignments are marked with $e$ and $w$ respectively. MCD ranges from purple (low), blue-green (mid), to yellow (high). Lower MCD typically corresponds to better audio-text utterance alignments and higher quality speech synthesis, but judgments regarding distinctions between languages may be subjective. ISO 639-3 is not intended to provide identifiers for dialects or other sub-language variations, which may be present here where there are multiple readings for one ISO 639-3 code. We report the most up-to-date language names from the ISO 639-3 schema (Eberhard and Fennig, 2020). Language names and codes in many schema could be pejorative and outdated, but where language codes cannot be easily updated, language names can and often are. + +
NIGER-CONGO: 159
Abidji abi6.3
Adele ade6.9
Adioukrou adj7.4
Akan aka7.8
Akebu keu7.0
Akoose bss7.2
Anufo cko6.9
Avatime avn6.3
Bafut bfd7.3
Bandial bqj7.0
Bekwarra bkv7.3
Bete-Bendi btt9.1
Biali beh7.6
Bimoba bim7.0
Bokobaru bus6.9
Bomu bmq7.0
Buamu box8.1
Buli (Ghana) bwu7.3
Bum bmv6.4
Cameroon Mambila mcu7.6
Central-Eastern Niger fuq7.1
Cerma cme8.5
Cerma cme6.1
Chopi cce6.3
Chumburung ncu7.3
Delo ntr8.0
Denya anv6.7
Ditammari tbz7.7
Djimini Senoufo dyi7.1
Duruma dug6.7
Eastern Karaboro xrb8.1
Ekajuk eka7.5
Ewe ewe6.3
Ewe ewe6.7
Farefare gur8.1
Farefare gur8.3
Fon fon8.7
Gikyode acd7.7
Giryama nyf6.8
Gitonga toh6.8
Gogo gog7.0
Gokana gkn8.0
Gourmanchégaux7.3
Gwere gwv6.1
Hanga hag7.2
Haya hay7.1
Ifé ife7.8
Ivbie North-Okpela-Ar atg7.7
Izere izr6.8
Jola-Fonyi dyo7.1
Jola-Kasa esk7.5
Jukun Takum jhu7.9
Kabiyè kbp7.4
Kagulu kki6.6
Kako kkj7.9
Kasem xsm7.7
Kasem xsm8.0
Kenyang ken7.4
Kim kia6.8
Kim kia6.3
Koma kmy7.3
Konkombaxon7.8
Kono (Sierra Leone) kno8.1
+ +
Koonzime ozm8.0
Kouya kyf8.2
Kukele kez7.8
Kunda kdn6.4
Kuo xuo6.7
Kusaal kus7.0
Kutep kub6.9
Kutu kdc5.7
Kuwataay cwt7.4
Kwere cwe7.5
Lama (Togo) las7.9
Lelemi lef7.3
Lobi lob7.0
Lokaa yaz6.6
Lukpa dop8.0
Lyélé lee8.1
Machame jmc6.8
Mada (Nigeria) mda6.6
Makaa mcp6.9
Makhwuva vmw6.8
Malawi Lomwe lon5.8
Malba Birifor bfo6.5
Mamara Senoufo myk8.0
Mampruli maw7.6
Mankanya knf6.6
Masaaba myx6.1
Meta'mgo6.4
Miyobe soy7.2
Moba mtq8.1
Moba mtq7.2
Mochi old6.9
Mossi mos7.2
Mossi mos7.5
Mumuye zmm7.7
Mundani mnf6.8
Mwan moa7.8
Mwani wmv6.5
Mündü muh8.4
Nafaanra nfr6.8
Nande nb7.2
Nateni ntm7.4
Nawdm nzm8.3
Ndogo ndz6.9
Ngangam gng8.0
Nigeria Mambila mzK6.9
Nilamba nim6.7
Ninzo nin5.9
Nkonya nko6.8
Noone nhu7.2
Northern Dagara dgi7.3
Ntcham bud8.8
Nyabwa nwB7.7
Nyakyusa-Ngonde nyy6.7
Nyankole nyn8.0
Nyaturu rim6.7
Nyole nuj5.9
Nyoro nyo7.1
Nzima nzi7.2
Obolo ann8.5
Oku oku8.3
Pasaal sig7.5
Plapo Krumen ktj7.0
Pokomo pkb6.5
Pular fuf7.6
+ +
Rigwe iri7.3
Rundi run8.3
Saamia lsm6.8
Sango sag6.7
Sekpele lip6.6
Selee snw6.5
Sena seh6.6
Shambala ksb6.4
Sissala sld7.6
Siwu akp6.3
Soga xog6.9
South Fali fal7.7
Southern Birifor biv7.6
Southern Bobo Madaré bwc7.6
Southern Dagaare dga6.5
Southern Nuni nww7.6
Southwest Gbaya gso7.6
Supyire Senoufo spp8.3
Talinga-Bwisi tij6.5
Tampulma tpm7.1
Tharaka thk7.8
Tikar tik7.8
Timne tem7.2
Toura (Côte d'Ivoire) neb6.8
Tsonga tso5.2
Tumulung Sisaala sil8.0
Tuwuli bov6.2
Tyap kcg7.5
Vengo bay6.7
Vunjo vun6.5
West-Central Limba lia7.3
Yocoboué Dida gud7.0
AUSTRONESIAN: 106
Achinese ace6.5
Agutaynen agn6.1
Alangan alj5.9
Alune alp6.3
Ambai amk5.4
Amganad Ifugao ifa5.9
Aralle-Tabulahan atq6.7
Arop-Lokep apr6.2
Arosi aia5.6
Bada (Indonesia) bhz5.4
Balantak blz6.1
Balinese ban6.4
Bambam ptu5.8
Batad Ifugao ifb6.2
Batak Dairi btd6.1
Batak Karo btx6.2
Batak Simalungun bts6.4
Besoap beq6.4
Brooke's Point Palawa plw6.2
Caribbean Javanese jyn6.8
Cebuano ceb6.9
Central Bikol bel6.5
Central Malay pse6.6
Central Mnong cmo6.0
Central Sama sml6.7
Da'a Kaili kzf6.5
Duri mvp6.9
Fataleka far6.3
Fijian fij7.6
Fordata frd5.3
Gilbertese gil7.0
+ +
Gorontalo gor6.2
Hanunoo hnn6.0
Hiligaynon hil6.7
Iban iba6.5
e Iloko ilo6.5
e Indonesian ind7.2
e Indonesian ind6.8
e Indonesian ind6.4
Itawit itv6.6
e Javanese jav6.3
Kadazan Dusun dtp8.5
Kagayanen cgc6.2
Kalagan kqe5.9
Kankanaey kne5.7
Keley-I Kallahan ify6.2
Khehek tlx9.1
Kilivila kij6.2
Kinaray-A krj6.3
Kisar kje6.3
Koronadal Blaan bpr6.4
Lampung Api ljp6.4
Lauje law6.4
Ledo Kaili lew7.0
Luang lex6.1
Lundayeh lnd6.5
Ma'anyan mhy6.4
Madurese mad7.4
Mag-antsi Aytasgb6.4
Makasar mak6.4
Malagasy mlg8.8
Malagasy mlg7.3
Malagasy mlg6.3
Malay (macrolanguage) msa6.3
e Malay (macrolanguage) msa6.0
Mamasam qmj6.3
Manado Malay xmm5.2
Mapos Buang bzh5.8
Marano mrw6.0
Marshallese mah7.9
Matigsalug Manobo mbt6.4
Mayoyao Ifugao ifu6.6
Mentawai mww6.6
Minangkabau min6.3
Misima-Panaeati mpx6.3
Mongondow mog6.7
Muna mnb6.3
Napun py6.7
Ngaju nij7.3
Nias nia6.8
Obo Manobo obo5.8
Owa stn6.3
Palauan pau6.7
Pamona pmf6.3
Pampanga pam6.6
Pangasinan pag6.7
Paranan prf6.2
Rejiang rej6.0
Roviana rug5.7
Sambal xsb6.0
Sambal xsb6.0
Samoan smo6.0
Sangir sxn7.7
Sarangani Blaan bps6.5
Sasak sas6.3
+ +
Sudest tgo6.1Huastec hus6.1eRomanian ron6.8Yue Chinese yue8.0
Sundanese sun6.9Ixil ixl5.8eRussian rus5.6Zyphé Chin zyp7.2
eTagalog tgl6.5Ixil ixl6.5Sinte Romani rmo6.6QUECHUAN: 22
Tangoa tgp7.1Ixil ixl7.6eSpanish spa6.2Ayacucho Quechua quy7.2
Termanu twu6.1K'iche' quc6.6eSpanish spa7.9Cajamarca Quechua qvc7.8
Tombonus tx7.2K'iche' quc6.6eSpanish spa7.8Cañar Highland Quichu qxr5.6
Toraja-Sa'dan sda6.3K'iche' quc6.4eSpanish spa7.9Cusco Quechua quz6.8
Tuwali Ifugao ifk6.7K'iche' quc6.3eSpanish spa6.7Huallaga Huánuco Queq bub7.1
Uma ppk6.7K'iche' quc6.4eSwedish swe6.9Huanalies-Dos de Mayo qvh6.2
Western Bukidnon Mano mbb6.6K'iche' quc7.1eSwedish swe6.1Huaylas Ancash Quechua qwh6.6
Western Tawbuid twb6.0Kaqchikel cak6.1eTajik tkg6.8Huaylla Wanca Quechua qvw6.7
AFRO-ASIATIC: 45Kaqchikel cakUrdu urd6.6Inga inb6.8
Bana bcw7.2Kaqchikel cak5.5Vlax Romani rmy6.8Lambayeque Quechua quf6.9
Daasanach dsh6.5Kaqchikel cak6.8OTO-MANGUEAN: 27Margos-Yarowilca-Laur qvm6.1
Daba dbqKaqchikel cak7.0Atatláhuca Mixtec mib6.2Napo Lowland Quechua qvo6.4
Dangaléat daa7.0Kaqchikel cak7.9Ayutla Mixtec mily6.1North Bolivian Quechua qul6.7
Dawro dwr8.3Kekchi kek6.5Central Mazahuá maz7.0North Junin Quechua qvn7.3
Eastern Oromo hae6.5Kekchi kek6.3Chihuaxtla Triqui trs6.0Northern Conchucos An qxn5.9
Egyptian Arabic arz7.4Mam mam6.3Dixui-Tilantongo Mix xtd6.5Northern Pastaza Quic qyz6.1
Gamo gmv7.2Mam mam6.7Jalapa De Diaz Mazate maj8.3Panao Huánco Quechua qxh8.2
Gen gej7.3Mam mam7.3Jamiltepec Mixtec mxt7.4San Martin Quechua qvs6.8
Gofa gof6.5Mam mam7.1Lalana Chinantec cnl7.4South Bolivian Quechua quh6.5
Gofa gof8.2Mopán Maya mop7.0Lealao Chinantec cle6.6South Bolivian Quechua quh7.0
Gude gde7.3Popti' jac7.1Magdalena Peñasco Mix xtm5.6Southern Pastaza Quec qup6.1
Hamer-Banna amf6.5Popti' jac6.3Mezquital Otomi ote6.8Tena Lowland Quichua quw6.2
Hausa hau5.7Poqomchi' poh6.5Nopal Chaatin cya8.8EASTERN SUDANIC: 19
Hdi xed7.5Poqomchi' poh5.3Ozumacin Chinantech chz7.7Acoli ach6.8
Iraqw irk8.4Q'anjob'al kjb6.8Peñoles Mixtec mil6.7Adhola adh6.5
Kabyle kab7.4Tektititeko ttc6.0Pinotepa Nacional Mix mio6.0Alur alz7.3
Kafa kbr7.3Tz'utujil tzj6.8San Jerónimo Tecóatl maa7.7Bari bfa5.2
Kambaata ktb6.9Tzeltal tzh6.0San Jerónimo Tecóatl maa7.9Datooga tcc6.9
Kamwe hig7.8Tzeltal tzh6.5San Juan Atzingo Popo poe6.5Kakwa keo6.7
Kera ker7.3Tzotzil tzo6.2San Marcos Tlacoyalco pls5.9Karamojong kdj6.5
Kimré kqp6.7Tzotzil tzo7.1San Pedro Amuzgos Amu agz7.2Kumam kdi6.2
Konso kxc6.6Western Kanjobal knj6.8Santa Maria Zacatepec mza6.3Kupsabiny Kpz6.7
Koorete kcy7.2Yucateco yua7.0Sochiapam Chinantec cso6.1Lango (Uganda) laj7.8
Lele (Chad) ln6.8INDO-EUROPEAN: 40Southern Puebla Mixte mit6.5Luwo lwo8.4
Male (Ethiopia) mdy6.6Albanian sqi7.0Tepetotutla Chinantec cnt7.3Mabaan mfz6.7
Marba mpg7.7Awadi awa7.4Tezoatlán Mixtec mxb6.0Markweeta enb7.3
Mbuk mq7.9Bengali ben8.1Usila Chinantec cue6.7Murle mur7.8
Merey meq8.1Bengali ben8.1Yosondúa Mixtec mpm6.7Nuer nus6.9
Mesopotamian Arabic acm8.3Bengali ben8.1SINO-TIBETAN: 24Sabaot spi8.1
Mofu-Gudur mif8.0Caribbean Hindustani hns7.0Achang acn6.1Shilluk shk6.9
Muyang muy6.6Chhattisgarhi hne6.6Akeu aeu6.9Southwestern Dinka dif7.5
Mwaghavul sur7.1Dari prs6.9Akha ahk7.0Teso teo7.1
North Mofu mfk7.0English eng6.9Bawn Chin bgr6.8TURKIC: 18
Parkwa pbi6.9Fiji Hindi hif6.9Eastern Tamang taj6.1Bashkir bak6.0
Péví lme7.7French fra8.2Falam Chin cfm6.7Chuvash chv7.3
Sebat Bet Gurage sgw6.6French fra8.5Hakka Chinese hak6.3Crimean Tatar crh5.4
eSomali som8.3Hindi hin6.5Kachin kac6.3Gagauz gag5.3
Standard Arabic arb7.9Iranian Persian pes7.3Khumi Chin cnc6.2Gagauz gag5.6
Sudanese Arabic apd8.0Latin lat5.8Kulung (Nepal) kle6.0Kara-Kalpak kaa6.7
Tachelhit shi5.0Magahi mag6.4Lahu lhu6.7Karachay-Balkar krc6.9
Tamasheq taq7.1Maithili mai7.2Lashi lsi7.6Kazakh kaz6.8
eTigrinya tir6.6Malvi mup6.5Lolopo ycl7.2Khakas kjh5.4
Tumak tmc6.6Marathi mar6.8Mandarin Chinese cmm7.7Kumyk kum6.5
Wandala mfi7.9Northern Kurdish kmr7.0Maru mhx7.6Nogai nog5.4
MAYAN: 42Oriya (macrolanguage) ori7.6Min Nan Chinese nan6.8eNorth Azerbaijani azj6.8
Achi acr6.2Ossetian oss6.3Mro-Khimi Chin cmr7.3Southern Altai alt7.2
Aguacateco agu5.8Polish pol7.7Newari new6.1Tatar tat7.4
Chol ctu7.0Portuguese por7.2Pwo Northern Karen pww5.5Turkish tur7.8
Chorti caa6.4Portuguese por7.6Sherpa xsr7.4Turkish tur8.6
Chuj cac7.5Portuguese por8.2Sunwar suz6.6Tuvinian tvv6.2
Chuj cac6.7Portuguese por7.9Tedim Chin ctd6.6Uighur uig6.2
+ +![](images/ca31dec5bb2ff921827c1de8b42107b49faa4c7d4c14675b3a7abc6e14db0e95.jpg) \ No newline at end of file diff --git a/acorpusforlargescalephonetictypology/images.zip b/acorpusforlargescalephonetictypology/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6ebfcd680028bdd60cf6c0c829ea82eddebb79de --- /dev/null +++ b/acorpusforlargescalephonetictypology/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac0a6fe5527d5c76fbfdc50900adbc9cda0d5b4f0a91fa34a2ddd95c14e5ad13 +size 1815426 diff --git a/acorpusforlargescalephonetictypology/layout.json b/acorpusforlargescalephonetictypology/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c47aa7a0b6d3660685cc97a445a141dc49d99dc5 --- /dev/null +++ b/acorpusforlargescalephonetictypology/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16e4501718b62fafce6a6676965bf211581fb3421f7b5d448effe80e521cae63 +size 502915 diff --git a/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_content_list.json b/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..b22923145d6532bb57e7bc2d4f0ef95cde109bea --- /dev/null +++ b/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:94166a3edd0ab82f61b8cee2ca77a0adffaaa4979295aa1cc9ac8e7815ec41bc +size 80811 diff --git a/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_model.json b/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9ee1da6a8ef0f539f1b21f3b37e2b28cddf4a9f3 --- /dev/null +++ b/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_model.json @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:62f0958d60a90e78ee4c55aa0743c5450b6116230dc41b68c6ba7c4f736914a9 +size 99133 diff --git a/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_origin.pdf b/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f432c32b93d19ea1e895a6c3c4aab6a31163042d --- /dev/null +++ b/activeimitationlearningwithnoisyguidance/1cd987d1-2f57-45e1-806b-b3fb3e73ee83_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28d58e7dd5a6203fe67bac90886c1cf483c1f7a8480250f7d448a32aab03f0e7 +size 1266317 diff --git a/activeimitationlearningwithnoisyguidance/full.md b/activeimitationlearningwithnoisyguidance/full.md new file mode 100644 index 0000000000000000000000000000000000000000..921ad4dfaeb15feab75763ec00818c870d9c6a35 --- /dev/null +++ b/activeimitationlearningwithnoisyguidance/full.md @@ -0,0 +1,404 @@ +# Active Imitation Learning with Noisy Guidance + +Kianté Brantley + +University of Maryland + +kdrbrant@cs.umd.edu + +Amr Sharaf + +University of Maryland + +amr@cs.umd.edu + +Hal Daumé III + +University of Maryland + +Microsoft Research + +me@hal3.name + +# Abstract + +Imitation learning algorithms provide state-of-the-art results on many structured prediction tasks by learning near-optimal search policies. Such algorithms assume training-time access to an expert that can provide the optimal action at any queried state; unfortunately, the number of such queries is often prohibitive, frequently rendering these approaches impractical. To combat this query complexity, we consider an active learning setting in which the learning algorithm has additional access to a much cheaper noisy heuristic that provides noisy guidance. 
Our algorithm, LEAQI, learns a difference classifier that predicts when the expert is likely to disagree with the heuristic, and queries the expert only when necessary. We apply LEAQI to three sequence labeling tasks, demonstrating significantly fewer queries to the expert and comparable (or better) accuracy relative to a passive approach. + +# 1 Introduction + +Structured prediction methods learn models to map inputs to complex outputs with internal dependencies, typically requiring a substantial amount of expert-labeled data. To minimize annotation cost, we focus on a setting in which an expert provides labels for pieces of the input, rather than the complete input (e.g., labeling at the level of words, not sentences). A natural starting point for this is imitation learning-based "learning to search" approaches to structured prediction (Daumé et al., 2009; Ross et al., 2011; Bengio et al., 2015; Leblond et al., 2018). In imitation learning, training proceeds by incrementally producing structured outputs one piece at a time and, at every step, asking the expert "what would you do here?" and learning to mimic that choice. This interactive model comes at a substantial cost: the expert demonstrator must be continuously available and must be able to answer a potentially large number of queries. + +We reduce this annotation cost by only asking an expert for labels that are truly needed; our algorithm, Learning to Query for Imitation (LEAQI, /ˈliːtʃiː/) achieves this by capitalizing on two factors. First, as is typical in active learning (see §2), LEAQI only asks the expert for a label when it is uncertain. Second, LEAQI assumes access to a noisy heuristic labeling function (for instance, a rule-based model, dictionary, or inexpert annotator) that can provide low-quality labels. LEAQI operates by always asking this heuristic for a label, and only querying the expert when it thinks the expert is likely to disagree with this label.
It trains, simultaneously, a difference classifier (Zhang and Chaudhuri, 2015) that predicts disagreements between the expert and the heuristic (see Figure 1). + +The challenge in learning the difference classifier is that it must learn based on one-sided feedback: if it predicts that the expert is likely to agree with the heuristic, the expert is not queried and the classifier cannot learn that it was wrong. We address this one-sided feedback problem using the Apple Tasting framework (Helmbold et al., 2000), in which errors (in predicting which apples are tasty) are only observed when a query is made (an apple is tasted). Learning in this way is particularly important in the general case where the heuristic is likely not just to have high variance with respect to the expert, but is also statistically biased. + +Experimentally (§4.5), we consider three structured prediction settings, each using a different type of heuristic feedback. We apply LEAQI to: English named entity recognition, where the heuristic is a rule-based recognizer using gazetteers (Khashabi et al., 2018); English scientific keyphrase extraction, where the heuristic is an unsupervised method (Florescu and Caragea, 2017); and Greek part-of-speech tagging, where the heuristic is a small dictionary compiled from the training data (Zesch et al., 2008; Haghighi and Klein, 2006). In all three settings, the expert is a simulated human annotator. We train LEAQI on all three tasks using fixed BERT (Devlin et al., 2019) features, training only the final layer (because we are in the regime of small labeled data). The goal in all three settings is to minimize the number of words the expert annotator must label. In all settings, we're able to establish the efficacy of LEAQI, showing that it can indeed provide significant label savings over using the expert alone and over several baselines and ablations that establish the importance of both the difference classifier and the Apple Tasting paradigm. + +![](images/3a70e2b269cb71bfa2bc8853b1a278912ec0c33546b0e82f0ef964fba0589bf0.jpg) +Figure 1: A named entity recognition example (from the Wikipedia page for Clarence Ellis). $\pmb{x}$ is the input sentence and $\pmb{y}$ is the (unobserved) ground truth. The predictor $\pi$ operates left-to-right and, in this example, is currently at state $s_{10}$ to tag the 10th word; the state $s_{10}$ (highlighted in purple) combines $\pmb{x}$ with $\hat{\pmb{y}}_{1:9}$ . The heuristic makes two errors at $t = 4$ and $t = 6$ . The heuristic label at $t = 10$ is $y_{10}^{h} = \mathsf{ORG}$ . Under Hamming loss, the cost at $t = 10$ is minimized for $a = \mathsf{ORG}$ , which is therefore the expert action (if it were queried). The label that would be provided for $s_{10}$ to the difference classifier is 0 because the two policies agree. + +# 2 Background and Related Work + +We review first the use of imitation learning for structured prediction, then online active learning, and finally applications of active learning to structured prediction and imitation learning problems. + +# 2.1 Learning to Search + +The learning to search approach to structured prediction casts the joint prediction problem of producing a complex output as a sequence of smaller classification problems (Ratnaparkhi, 1996; Collins and Roark, 2004; Daumé et al., 2009). For instance, in the named entity recognition example from Figure 1, an input sentence $x$ is labeled one word at a time, left-to-right. At the depicted state $(s_{10})$ , the model has labeled the first nine words and must next label the tenth word. Learning to search approaches assume access to an oracle policy $\pi^{\star}$ , which provides the optimal label at every position. + +In (interactive) imitation learning, we aim to imitate the behavior of the expert policy, $\pi^{\star}$ , which provides the true labels.
The learning to search view allows us to cast structured prediction as a (degenerate) imitation learning task, where states are (input, prefix) pairs, actions are operations on the output, and the horizon $T$ is the length of the sequence. States are denoted $s \in S$ , actions are denoted $a \in [K]$ , where $[K] = \{1, \ldots, K\}$ , and the policy class is denoted $\Pi \subseteq [K]^S$ . The goal in learning is to find a policy $\pi \in \Pi$ with small loss on the distribution of states that it, itself, visits. + +Algorithm 1 DAgger($\Pi$, $N$, $\langle \beta_i\rangle_{i = 0}^N$, $\pi^\star$) +1: initialize dataset $D = \{\}$ +2: initialize policy $\hat{\pi}_1$ to any policy in $\Pi$ +3: for $i = 1\ldots N$ do +4: $\triangleright$ stochastic mixture policy +5: Let $\pi_i = \beta_i\pi^\star + (1 - \beta_i)\hat{\pi}_i$ +6: Generate a $T$-step trajectory using $\pi_i$ +7: Accumulate data $D \gets D \cup \{(s, \pi^\star(s))\}$ for all $s$ in those trajectories +8: Train classifier $\hat{\pi}_{i+1} \in \Pi$ on $D$ +9: end for +10: return best (or random) $\hat{\pi}_i$ + +A popular imitation learning algorithm, DAgger (Ross et al., 2011), is summarized in Alg 1. In each iteration, DAgger executes a mixture policy and, at each visited state, queries the expert's action. This produces a classification example, where the input is the state and the label is the expert's action. At the end of each iteration, the learned policy is updated by training it on the accumulation of all generated data so far. DAgger is effective in practice and enjoys appealing theoretical properties; for instance, if the number of iterations $N$ is $\tilde{O}(T^2\log(1/\delta))$ then with probability at least $1 - \delta$ , the generalization error of the learned policy is $O(1/T)$ (Ross et al., 2011, Theorem 4.2). + +# 2.2 Active Learning + +Active learning has been considered since at least the 1980s, often under the name "selective sampling" (Rendell, 1986; Atlas et al., 1990). In agnostic online active learning for classification, a learner operates in rounds (e.g. Balcan et al., 2006; Beygelzimer et al., 2009, 2010). At each round, the learning algorithm is presented with an example $x$ and must predict a label; the learner must decide whether to query the true label. An effective margin-based approach for online active learning is provided by Cesa-Bianchi et al. (2006) for linear models. Their algorithm defines a sampling probability $\rho = b / (b + z)$ , where $z$ is the margin on the current example, and $b > 0$ is a hyperparameter that controls the aggressiveness of sampling. With probability $\rho$ , the algorithm requests the label and performs a perceptron-style update. + +Our approach is inspired by Zhang and Chaudhuri's (2015) setting, where two labelers are available: a free weak labeler and an expensive strong labeler. Their algorithm minimizes queries to the strong labeler by learning a difference classifier that predicts, for each example, whether the weak and strong labelers are likely to disagree. It trains this difference classifier using an example-weighting strategy that keeps its Type II error small, establishing statistical consistency and bounding its sample complexity. + +This type of learning from one-sided feedback falls in the general framework of partial-monitoring games, a framework for sequential decision making with imperfect feedback. Apple Tasting is a type of partial-monitoring game (Littlestone and Warmuth, 1989), where, at each round, a learner is presented with an example $\pmb{x}$ and must predict a label $\hat{y} \in \{-1, +1\}$ . After this prediction, the true label is revealed only if the learner predicts $+1$ . This framework has been applied in several settings, such as spam filtering and document classification with minority class distributions (Sculley, 2007).
Sculley (2007) also conducts a thorough comparison of two methods that can be used to address the one-sided feedback problem: label-efficient online learning (Cesa-Bianchi et al., 2006) and margin-based learning (Vapnik, 1982). + +# 2.3 Active Imitation & Structured Prediction + +In the context of structured prediction for natural language processing, active learning has been considered both for requesting full structured outputs (e.g. Thompson et al., 1999; Culotta and McCallum, 2005; Hachey et al., 2005) and for requesting only pieces of outputs (e.g. Ringger et al., 2007; Bloodgood and Callison-Burch, 2010). For sequence labeling tasks, Haertel et al. (2008) found that labeling effort depends on both the number of words labeled (which we model) and a fixed cost for reading (which we do not). + +In the context of imitation learning, active approaches have also been considered for at least three decades, often called "learning with an external critic" and "learning by watching" (Whitehead, 1991). More recently, Judah et al. (2012) describe RAIL, an active learning-for-imitation-learning algorithm akin to our ACTIVATEDAGGER baseline, but which in principle would operate with any underlying i.i.d. active learning algorithm (not just our specific choice of uncertainty sampling). + +# 3 Our Approach: LEAQI + +Our goal is to learn a structured prediction model with minimal human expert supervision, effectively by combining human annotation with a noisy heuristic. We present LEAQI to achieve this. As a concrete example, return to Figure 1: at $s_{10}$ , $\pi$ must predict the label of the tenth word. If $\pi$ is confident in its own prediction, LEAQI can avoid any query, similar to traditional active learning. If $\pi$ is not confident, then LEAQI considers the label suggested by a noisy heuristic (here: ORG). LEAQI predicts whether the true expert label is likely to disagree with the noisy heuristic. Here, it predicts no disagreement and avoids querying the expert.
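The margin-based certainty and the query probability $\rho = b/(b + z)$ from Cesa-Bianchi et al. (2006) that drive LEAQI's query decision can be sketched as follows; the function names are illustrative, not the paper's code:

```python
import random

def certainty(scores):
    """Margin between the best and second-best action scores."""
    top = sorted(scores, reverse=True)
    return top[0] - top[1]

def should_query(scores, b, rng=random):
    """Query with probability b / (b + z), where z is the margin.

    Uncertain states (small z) are queried with probability near 1; very
    confident states (large z) are rarely queried.
    """
    z = certainty(scores)
    rho = b / (b + z)
    return rng.random() < rho
```

The hyperparameter `b > 0` controls how aggressively labels are requested overall.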
+ +# 3.1 Learning to Query for Imitation + +Our algorithm, LEAQI, is specified in Alg 2. As input, LEAQI takes a policy class $\Pi$ , a hypothesis class $\mathcal{H}$ for the difference classifier (assumed to be symmetric and to contain the "constant one" function), a number of episodes $N$ , an expert policy $\pi^{\star}$ , a heuristic policy $\pi^{\mathrm{h}}$ , and a confidence parameter $b > 0$ . The general structure of LEAQI follows that of DAgger, but with three key differences: + +(a) roll-in (line 7) is according to the learned policy (not mixed with the expert, as that would require additional expert queries), +(b) actions are queried only if the current policy is uncertain at $s$ (line 12), and +(c) the expert $\pi^{\star}$ is only queried if it is predicted to disagree with the heuristic $\pi^{\mathrm{h}}$ at $s$ by the difference classifier, or if the apple tasting method switches the difference classifier label (line 15; see §3.2). + +Algorithm 2 LEAQI($\Pi$, $\mathcal{H}$, $N$, $\pi^{\star}$, $\pi^{\mathrm{h}}$, $b$) +1: initialize dataset $D = \{\}$ +2: initialize policy $\pi_1$ to any policy in $\Pi$ +3: initialize difference dataset $S = \{\}$ +4: initialize difference classifier $h_1(s) = 1 (\forall s)$ +5: for $i = 1 \dots N$ do +6: Receive input sentence $x$ +7: $\triangleright$ generate a $T$-step trajectory using $\pi_i$ +8: Generate output $\hat{y}$ using $\pi_i$ +9: for each $s$ in $\hat{y}$ do +10: $\triangleright$ draw Bernoulli random variable +11: $Z_i \sim \mathrm{Bern}\left(\frac{b}{b + \text{certainty}(\pi_i, s)}\right)$ ; see §3.3 +12: if $Z_i = 1$ then +13: $\triangleright$ set difference classifier prediction +14: $\hat{d}_i = h_i(s)$ +15: if AppleTaste $(s, \pi^{\mathrm{h}}(s), \hat{d}_i)$ then +16: $\triangleright$ predicted agree: query heuristic +17: $D \gets D \cup \{(s, \pi^{\mathrm{h}}(s))\}$ +18: else +19: $\triangleright$ predicted disagree: query expert +20: $D \gets D \cup \{(s, \pi^{\star}(s))\}$ +21: $d_i = \mathbb{1}[\pi^{\star}(s) \neq \pi^{\mathrm{h}}(s)]$ +22: $S \gets S \cup \{(s, \pi^{\mathrm{h}}(s), \hat{d}_i, d_i)\}$ +23: end if +24: end if +25: end for +26: Train policy $\pi_{i+1} \in \Pi$ on $D$ +27: Train difference classifier $h_{i+1} \in \mathcal{H}$ on $S$ to minimize Type II errors (see §3.2) +28: end for +29: return best (or random) $\pi_i$ + +In particular, at each state visited by $\pi_{i}$ , LEAQI estimates $z$ , the certainty of $\pi_{i}$ 's prediction at that state (see §3.3). A sampling probability $\rho$ is set to $b / (b + z)$ where $z$ is the certainty, and so if the model is very certain then $\rho$ tends to zero, following Cesa-Bianchi et al. (2006). With probability $\rho$ , LEAQI will collect some label. + +When a label is collected (line 12), the difference classifier $h_i$ is queried on state $s$ to predict if $\pi^{\star}$ and $\pi^{\mathrm{h}}$ are likely to disagree on the correct action. (Recall that $h_1$ always predicts disagreement per line 4.) The difference classifier's prediction, $\hat{d}_i$ , is passed to an apple tasting method in line 15. Intuitively, most apple tasting procedures (including the one we use, STAP; see §3.2) return $\hat{d}_i$ , unless the difference classifier is making many Type II errors, in which case it may return $\neg \hat{d}_i$ .
+ +A target action is set to $\pi^{\mathrm{h}}(s)$ if the apple tasting algorithm returns "agree" (line 17), and the expert $\pi^{\star}$ is only queried if disagreement is predicted (line 20). The state and target action (either heuristic or expert) are then added to the training data. Finally, if the expert was queried, a new item is added to the difference dataset, consisting of the state, the heuristic action on that state, the difference classifier's prediction, and the ground-truth difference label: whether the expert and heuristic actually disagree at $s$ . Lastly, $\pi_{i + 1}$ is trained on the accumulated action data, and $h_{i + 1}$ is trained on the difference dataset (details in §3.3). + +Algorithm 3 AppleTaste_STAP($S$, $a_i^{\mathrm{h}}$, $\hat{d}_i$) +1: $\triangleright$ count examples that are action $a_{i}^{\mathrm{h}}$ +2: let $t = \sum_{(-,a, - , - )\in S}\mathbb{1}[a_{i}^{\mathrm{h}} = a]$ +3: $\triangleright$ count mistakes made on action $a_{i}^{\mathrm{h}}$ +4: let $m = \sum_{(-,a,\hat{d},d)\in S}\mathbb{1}[\hat{d}\neq d\wedge a_{i}^{\mathrm{h}} = a]$ +5: $w = \frac{t}{|S|}$ $\triangleright$ fraction of the time $a_{i}^{\mathrm{h}}$ was seen +6: if $w < 1$ then +7: $\triangleright$ skew distribution +8: draw $r\sim \mathrm{Beta}(1 - w,1)$ +9: else +10: draw $r\sim$ Uniform(0,1) +11: end if +12: return $(d = 1)\wedge (r\leq \sqrt{(m + 1) / t})$ + +There are several things to note about LEAQI: + +$\diamond$ If the current policy is already very certain, an expert annotator is never queried. +$\diamond$ If a label is queried, the expert is queried only if the difference classifier predicts disagreement with the heuristic, or the apple tasting procedure flips the difference classifier prediction. +$\diamond$ Due to apple tasting, most errors the difference classifier makes will cause it to query the expert unnecessarily; this is the "safe" type of error (increasing sample complexity but not harming accuracy), versus a Type II error (which leads to biased labels).
$\diamond$ The difference classifier is only trained on states where the policy is uncertain, which is exactly the distribution on which it is run.

# 3.2 Apple Tasting for One-Sided Learning

The difference classifier $h\in \mathcal{H}$ must be trained (line 27) based on one-sided feedback (it only observes errors when it predicts "disagree") to minimize Type II errors (it should only very rarely predict "agree" when the truth is "disagree"). This helps keep the labeled data for the learned policies unbiased. The main challenge here is that the feedback to the difference classifier is one-sided: if it predicts "disagree", it gets to see the truth, but if it predicts "agree", it never finds out whether it was wrong. We use one of Helmbold et al. (2000)'s algorithms, STAP (see Alg 3), which works by randomly sampling apples that are predicted not to be tasted and tasting them anyway (line 12). Formally, STAP tastes apples that are predicted to be bad with probability $\sqrt{(m + 1) / t}$, where $m$ is the number of mistakes and $t$ is the number of apples tasted so far.

We adapt the apple tasting algorithm STAP to our setting, to control the number of Type II errors made by the difference classifier, as follows. First, because some heuristic actions are much more common than others, we run a separate apple tasting scheme per heuristic action (in the sense that we count the number of errors on this heuristic action rather than globally). Second, when there is significant action imbalance, we find it necessary to skew the distribution from STAP further in favor of querying. We achieve this by sampling from a Beta distribution (generalizing the uniform), whose mean is shifted toward zero for more frequent heuristic actions. This increases the chance that apple tasting catches bad apples (i.e., missed disagreements) for each action, thereby keeping the difference classifier's Type II error rate low.
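One plausible reading of this per-action scheme can be sketched as follows (not the authors' code; the bookkeeping granularity and the exact Beta skew are assumptions based on the description above):

```python
import random
from collections import defaultdict

class PerActionSTAP:
    """Per-heuristic-action apple tasting, as described in Sec. 3.2."""

    def __init__(self):
        self.seen = defaultdict(int)      # t: times each heuristic action was seen
        self.mistakes = defaultdict(int)  # m: observed mistakes per action
        self.total = 0                    # |S|: total examples seen

    def should_query(self, action, predicted_disagree):
        """Decide whether to query the expert for this state."""
        self.seen[action] += 1
        self.total += 1
        if predicted_disagree:
            return True                   # predicted "disagree": always query
        t = self.seen[action]
        m = self.mistakes[action]
        w = t / self.total                # fraction of time this action was seen
        # Beta(1 - w, 1) has mean shifted toward zero for frequent actions,
        # skewing the draw in favor of querying; Uniform(0, 1) otherwise.
        r = random.betavariate(1.0 - w, 1.0) if w < 1.0 else random.random()
        return r <= ((m + 1) / t) ** 0.5  # taste anyway with prob sqrt((m+1)/t)

    def record_mistake(self, action):
        """Call when a tasted 'agree' prediction turned out wrong (Type II)."""
        self.mistakes[action] += 1
```

The first time any action is seen, $t = 1$ gives a tasting probability of 1, so early "agree" predictions are always verified; querying then decays as the classifier accumulates mistake-free history on that action.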
# 3.3 Measuring Policy Certainty

In step 11, LEAQI must estimate the certainty of $\pi_{i}$ on $s$. Following Cesa-Bianchi et al. (2006), we implement this using a margin-based criterion. To achieve this, we consider $\pi$ as a function that maps actions to scores and then chooses the action with the largest score. The certainty measure is then the difference in scores between the highest and second highest scoring actions:

$$
\operatorname{certainty}(\pi, s) = \pi(s, a^{\star}) - \max_{a' \neq a^{\star}} \pi(s, a'), \qquad a^{\star} = \operatorname*{arg\,max}_{a} \pi(s, a)
$$

# 3.4 Analysis

Theoretically, the main result for LEAQI is an interpretation of the main DAgger result(s). Formally, let $d_{\pi}$ denote the distribution of states visited by $\pi$, let $C(s,a) \in [0,1]$ be the immediate cost of performing action $a$ in state $s$, let $C_{\pi}(s) = \mathbb{E}_{a \sim \pi(s)} C(s,a)$, and let the total expected cost of $\pi$ be $J(\pi) = T\,\mathbb{E}_{s \sim d_{\pi}} C_{\pi}(s)$, where $T$ is the length of trajectories. $C$ is not available to a learner in an imitation setting; instead, the algorithm observes an expert and minimizes a surrogate loss $\ell(s,\pi)$ (e.g., $\ell$ may be the zero/one loss between $\pi$ and $\pi^{\star}$). We assume $\ell$ is strongly convex and bounded in $[0,1]$ over $\Pi$.

Under these assumptions, let $\epsilon_{\text{pol-approx}} = \min_{\pi \in \Pi}\frac{1}{N}\sum_{i = 1}^{N}\mathbb{E}_{s\sim d_{\pi_i}}\ell (s,\pi)$ be the true loss of the best policy in hindsight, let $\epsilon_{\text{dc-approx}} = \min_{h\in \mathcal{H}}\frac{1}{N}\sum_{i = 1}^{N}\mathbb{E}_{s\sim d_{\pi_i}}\mathrm{err}(s,h,\pi^{\star}(s)\neq \pi^{\mathrm{h}}(s))$ be the true error of the best difference classifier in hindsight, and assume that the regret of the policy learner is bounded by $\mathrm{reg}_{\text{pol}}(N)$ after $N$ steps. Ross et al. (2011) show the following:

Theorem 1 (Thm 4.3 of Ross et al. (2011)).
After $N$ episodes, each of length $T$, under the assumptions above, with probability at least $1 - \delta$ there exists a policy $\pi \in \pi_{1:N}$ such that:

$$
\mathbb{E}_{s \sim d_{\pi}} \ell(s, \pi) \leq \epsilon_{\text{pol-approx}} + \mathrm{reg}_{\text{pol}}(N) + \sqrt{(2 / N) \log(1 / \delta)}
$$

This holds regardless of how $\pi_{1:N}$ are trained (line 26). The question of how well LEAQI performs becomes a question of how well the combination of uncertainty-based sampling and the difference classifier learns. So long as these do a good job on their individual classification tasks, DAgger guarantees that the policy will do a good job. This is formalized below, where $Q^{\star}(s,a)$ is the best possible cumulative cost (measured by $C$) starting in state $s$ and taking action $a$:

Theorem 2 (Theorem 2.2 of Ross et al. (2011)). Let $u$ be such that $Q^{\star}(s,a) - Q^{\star}(s,\pi^{\star}(s))\leq u$ for all $a$ and all $s$ with $d_{\pi}(s) > 0$; then for some $\pi \in \pi_{1:N}$, as $N\to \infty$:

$$
J(\pi) \leq J(\pi^{\star}) + u T \epsilon_{\text{pol-approx}}
$$

Here, $u$ captures the largest long-term impact a single decision can have; for example, for average Hamming loss, it is straightforward to see that $u = \frac{1}{T}$
| Task | Named Entity Recognition | Keyphrase Extraction | Part of Speech Tagging |
| --- | --- | --- | --- |
| Language | English (en) | English (en) | Modern Greek (el) |
| Dataset | CoNLL'03 (Tjong Kim Sang and De Meulder, 2003) | SemEval 2017 Task 10 (Augenstein et al., 2017) | Universal Dependencies (Nivre, 2018) |
| # Ex | 14,987 | 2,809 | 1,662 |
| Avg. Len | 14.5 | 26.3 | 25.5 |
| # Actions | 5 | 2 | 17 |
| Metric | Entity F-score | Keyphrase F-score | Per-tag accuracy |
| Features | English BERT (Devlin et al., 2019) | SciBERT (Beltagy et al., 2019) | M-BERT (Devlin et al., 2019) |
| Heuristic | String matching against an offline gazetteer of entities from Khashabi et al. (2018) | Output from an unsupervised keyphrase extraction model (Florescu and Caragea, 2017) | Dictionary from Wiktionary, similar to Zesch et al. (2008) and Haghighi and Klein (2006) |
| Heur. Quality | P 88%, R 27%, F 41% | P 20%, R 44%, F 27% | 10% coverage, 67% acc |
Table 1: An overview of the three tasks considered in the experiments.

because any single mistake can increase the number of mistakes by at most 1. For precision, recall, and F-score, $u$ can be as large as one in the (rare) case that a single decision switches from one true positive to no true positives.

# 4 Experiments

The primary research questions we aim to answer experimentally are:

Q1 Does uncertainty-based active learning achieve lower query complexity than passive learning in learning-to-search settings?
Q2 Does learning a difference classifier improve query efficiency over active learning alone?
Q3 Does apple tasting successfully handle the problem of learning from one-sided feedback?
Q4 Is the approach robust to cases where the noisy heuristic is uncorrelated with the expert?
Q5 Is casting the heuristic as a policy more effective than using its output as features?

To answer these questions, we conduct experiments on three tasks (see Table 1): English named entity recognition, English scientific keyphrase extraction, and part of speech tagging on Modern Greek (el), selected as a low-resource setting.

# 4.1 Algorithms and Baselines

In order to address the research questions above, we compare LEAQI to several baselines. The baselines below compare our approach to previous methods:

DAGGER. Passive DAgger (Alg 1).

ACTIVEDAGGER. An active variant of DAgger that asks for labels only when uncertain. (This is equivalent to LEAQI, but with neither the difference classifier nor apple tasting.)

DAGGER+FEAT. DAGGER with the heuristic policy's output appended as an input feature.

ACTIVEDAGGER+FEAT. ACTIVEDAGGER with the heuristic policy's output as a feature.

The next set of comparisons are explicit ablations:

LEAQI+NOAT. LEAQI with no apple tasting.

LEAQI+NOISYHEUR. LEAQI, but where the heuristic returns a label uniformly at random.

The baselines and LEAQI form a linear progression, each adding one component to the previous.
DAGGER is the baseline algorithm underlying all of the algorithms described above, but it is very query-inefficient with respect to an expert annotator. ACTIVEDAGGER introduces active learning to make DAGGER more query-efficient; the delta to the previous algorithm addresses Q1. LEAQI+NOAT introduces the difference classifier; the delta addresses Q2. LEAQI adds apple tasting to deal with one-sided learning; the delta addresses Q3. Finally, LEAQI+NOISYHEUR. (vs LEAQI) addresses Q4, and the +FEAT variants address Q5.

# 4.2 Data and Representation

For named entity recognition, we use training, validation, and test data from CoNLL'03 (Tjong Kim Sang and De Meulder, 2003), consisting of IO tags instead of BIO tags (the "B" tag is almost never used in this dataset, so we never attempt to predict it) over four entity types: Person, Organization, Location, and Miscellaneous. For part of speech tagging, we use training and test data from the Modern Greek portion of the Universal Dependencies (UD) treebanks (Nivre, 2018), consisting of 17 universal tags. For keyphrase extraction, we use training, validation, and test data from SemEval 2017 Task 10 (Augenstein et al., 2017), consisting of IO tags (we use one "I" tag for all three keyphrase types).

In all tasks, we implement both the policy and the difference classifier by fine-tuning the last layer of a BERT embedding representation (Devlin et al., 2019). More specifically, for a sentence of length $T$, $w_{1},\ldots ,w_{T}$, we first compute BERT embeddings for each word, $x_{1},\ldots ,x_{T}$, using the appropriate BERT model: English BERT and M-BERT for named entity recognition and part-of-speech tagging, respectively, and SciBERT (Beltagy et al., 2019) for keyphrase extraction. We then represent the state at position $t$ by concatenating the word embedding at that position with a one-hot representation of the previous action: $s_t = [x_t;\mathrm{onehot}(a_{t - 1})]$.
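A minimal sketch of this state representation (not the authors' code; the dimensions are illustrative, and a plain list stands in for a BERT vector):

```python
# Build s_t = [embedding of word t ; one-hot of previous action].

NUM_ACTIONS = 17   # e.g., the 17 universal POS tags
EMBED_DIM = 768    # BERT-base hidden size

def one_hot(action_id, num_actions=NUM_ACTIONS):
    return [1.0 if j == action_id else 0.0 for j in range(num_actions)]

def state_features(word_embedding, prev_action_id):
    """Concatenate the word's embedding with a one-hot of the last action."""
    return list(word_embedding) + one_hot(prev_action_id)

x_t = [0.0] * EMBED_DIM                      # stand-in for a BERT embedding
s_t = state_features(x_t, prev_action_id=3)
assert len(s_t) == EMBED_DIM + NUM_ACTIONS   # 785-dimensional state
assert s_t[EMBED_DIM + 3] == 1.0
```

Both the labeling policy and the difference classifier can then consume these fixed-length state vectors.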
This feature representation is used both for learning the labeling policy and for learning the difference classifier.

# 4.3 Expert Policy and Heuristics

In all experiments, the expert $\pi^{\star}$ is a simulated human annotator who annotates one word at a time. The expert returns the optimal action for the relevant evaluation metric (F-score for named entity recognition and keyphrase extraction, and accuracy for part-of-speech tagging). We take the annotation cost to be the total number of words labeled.

The heuristic we implement for named entity recognition is a high-precision gazetteer-based string matching approach. We construct this by taking a gazetteer from Wikipedia using the CogComp framework (Khashabi et al., 2018), and use FlashText (Singh, 2017) to label the dataset. This heuristic achieves a precision of 0.88, recall of 0.27, and F-score of 0.41 on the training data.

The keyphrase extraction heuristic is the output of an unsupervised keyphrase extraction approach (Florescu and Caragea, 2017). This system is a graph-based approach that constructs word-level graphs incorporating position information from all word occurrences, then uses PageRank to score the words and phrases. This heuristic achieves a precision of 0.20, recall of 0.44, and F-score of 0.27 on the training data.

The part of speech tagging heuristic is based on a small dictionary compiled from Wiktionary. Following Haghighi and Klein (2006) and Zesch et al. (2008), we extract this dictionary as follows: for each word $w$ in our training data, we find the part of speech $y$ by querying Wiktionary. If $w$ is in Wiktionary, we convert the Wiktionary part of speech tag to a Universal Dependencies tag (see §A.1); if $w$ is not in Wiktionary, we use a default label of "X". Furthermore, if word $w$ has multiple parts of speech, we select the first part of speech tag in the list. The label "X" is chosen 90% of the time.
For the remaining 10%, the heuristic achieves an accuracy of 0.67 on the training data. + +# 4.4 Experimental Setup + +Our experimental setup is online active learning. We make a single pass over a dataset, and the goal is to achieve an accurate system as quickly as possible. We measure performance (accuracy or F-score) after every 1000 words ( $\approx$ 50 sentences) on held-out test data, and produce error bars by averaging across three runs and reporting standard deviations. + +Hyperparameters for DAGGER are optimized using grid-search on the named entity recognition training data and evaluated on development data. We then fix DAGGER hyperparameters for all other experiments and models. The difference classifier hyperparameters are subsequently optimized in the same manner. We fix the difference classifier hyperparameters for all other experiments. + +# 4.5 Experimental Results + +The main results are shown in the top two rows of Figure 2; ablations of LEAQI are shown in Figure 3. + +![](images/2b5f554c895efdb3b9004152e504029fce644be4f62309403167c0d88f4a242a.jpg) +Figure 2: Empirical evaluation on three tasks: (left) named entity recognition, (middle) keyphrase extraction and (right) part of speech tagging. The top rows show performance (f-score or accuracy) with respect to the number of queries to the expert. The bottom row shows the number of queries as a function of the number of words seen. + +![](images/f86d185834306ec7a1e79a2b252a8eed4eb6333fed7bbcb1ae12a38e15bceefc.jpg) + +![](images/47173f5b9a81d00bdcb2fc32de2126d24434e83f431fd1491089051e1b95978d.jpg) + +In Figure 2, the top row shows traditional learning curves (performance vs number of queries), and the bottom row shows the number of queries made to the expert as a function of the total number of words seen. + +Active vs Passive (Q1). 
In all cases, we see that the active strategies improve on the passive strategies; this difference is largest for keyphrase extraction, middling for part of speech tagging, and small for NER. While not surprising given previous successes of active learning, this confirms that it is also a useful approach in our setting. As expected, the active algorithms query far less than the passive approaches, and LEAQI queries the least.

Heuristic as Features vs Policy (Q5). We see that while adding the heuristic's output as a feature can be modestly useful, it is not uniformly useful and, at least for keyphrase extraction and part of speech tagging, it is not as effective as LEAQI. For named entity recognition, it is not effective at all, but this is also a case where all algorithms perform essentially the same. Indeed, here, LEAQI learns quickly with few queries, but never quite reaches the performance of ActiveDAgger. This is likely due to the difference classifier becoming overly confident too quickly, especially on the "O" label, given the (relatively well-known) mismatch between development and test data on this dataset.

Difference Classifier Efficacy (Q2). Turning to the ablations (Figure 3), we can address Q2 by comparing the ActiveDAgger curve to the LeaQI+NoAT curve. Here, we see that on NER and keyphrase extraction, adding the difference classifier without adding apple tasting results in a far worse model: it learns very quickly but plateaus much lower than the best results. The exception is part of speech tagging, where apple tasting does not seem necessary (but also does not hurt). Overall, this shows that without controlling Type II errors, the difference classifier on its own does not fulfill its goals.

Apple Tasting Efficacy (Q3). Again considering the ablation study, we can compare LeaQI+NoAT with LeaQI.
In the case of part of speech tagging, there is little difference: using apple tasting to combat the issues of learning from one-sided feedback neither helps nor hurts performance. However, for both named entity recognition and keyphrase extraction, removing apple tasting leads to faster learning, but substantially lower final performance (accuracy or F-score). This is somewhat expected:

![](images/ffcab4c4776d0c5559c26e2b961f0bba8599d4e91f40367277f80177e86493bd.jpg)
Figure 3: Ablation results on (left) named entity recognition, (middle) keyphrase extraction and (right) part of speech tagging. In addition to LEAQI and DAgger (copied from Figure 2), these graphs also show LEAQI+NOAT (apple tasting disabled) and LEAQI+NOISYHEUR. (a heuristic that produces labels uniformly at random).

![](images/03e14bb767888d2e20947852ff3c507ec1c95b1e82c3633f146b0fd4f1fd2260.jpg)

![](images/b68df208b3892af6e5b498f73e0a7c145387c83eb73f3421058def437ebca843.jpg)

without apple tasting, the training data that the policy sees is likely to be highly biased, and so it gets stuck in a low-accuracy regime.

Robustness to Poor Heuristic (Q4). We compare LeaQI+NoisyHeur to ActiveDAgger. Because the heuristic here is useless, the main hope is that it does not degrade performance below ActiveDAgger. Indeed, that is what we see in all three cases: the difference classifier learns quite quickly to essentially ignore the heuristic and rely only on the expert.

# 5 Discussion and Limitations

In this paper, we considered the problem of reducing the number of queries to an expert labeler for structured prediction problems. We took an imitation learning approach and developed an algorithm, LEAQI, which leverages a source of low-quality labels: a heuristic policy that is suboptimal but free. To use this heuristic as a policy, we learn a difference classifier that effectively tells LEAQI when it is safe to treat the heuristic's action as if it were optimal.
We showed empirically—across Named Entity Recognition, Keyphrase Extraction, and Part of Speech Tagging tasks—that the active learning approach improves significantly on passive learning, and that leveraging a difference classifier improves on that. Nonetheless, the approach has limitations:

1. In some settings, learning a difference classifier may be as hard as or harder than learning the structured predictor (for instance, if the task is binary sequence labeling, e.g., word segmentation), limiting its usefulness.
2. The true labeling cost is likely more complicated than simply the number of individual actions queried to the expert.

Despite these limitations, we hope that LEAQI provides a useful (and relatively simple) bridge that can enable using rule-based systems, heuristics, and unsupervised models as building blocks for more complex supervised learning systems. This is particularly attractive in settings where we have very strong rule-based systems, ones which often outperform the best statistical systems, as in coreference resolution (Lee et al., 2011), information extraction (Riloff and Wiebe, 2003), and morphological segmentation and analysis (Smit et al., 2014).

# Acknowledgements

We thank Rob Schapire, Chicheng Zhang, and the anonymous ACL reviewers for very helpful comments and insights. This material is based upon work supported by the National Science Foundation under Grant No. 1618193 and an ACM SIGHPC/Intel Computational and Data Science Fellowship to KB. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the ACM.

# References

Les E. Atlas, David A. Cohn, and Richard E. Ladner. 1990. Training connectionist networks with queries and selective sampling. In NeurIPS.
Isabelle Augenstein, Mrinal Das, Sebastian Riedel, Lakshmi Vikraman, and Andrew McCallum. 2017.
SemEval 2017 Task 10: ScienceIE - extracting keyphrases and relations from scientific publications. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017).

Nina Balcan, Alina Beygelzimer, and John Langford. 2006. Agnostic active learning. In ICML.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In EMNLP.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. In NeurIPS.
Alina Beygelzimer, Sanjoy Dasgupta, and John Langford. 2009. Importance weighted active learning. In ICML.
Alina Beygelzimer, Daniel Hsu, John Langford, and Tong Zhang. 2010. Agnostic active learning without constraints. In NeurIPS.
Michael Bloodgood and Chris Callison-Burch. 2010. Bucking the trend: Large-scale cost-focused active learning for statistical machine translation. In ACL.
Nicolò Cesa-Bianchi, Claudio Gentile, and Luca Zaniboni. 2006. Worst-case analysis of selective sampling for linear classification. JMLR.
Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In ACL.
Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In AAAI.
Hal Daumé, III, John Langford, and Daniel Marcu. 2009. Search-based structured prediction. Machine Learning Journal.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Corina Florescu and Cornelia Caragea. 2017. PositionRank: An unsupervised approach to keyphrase extraction from scholarly documents. In ACL.
Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In CoNLL.
Robbie Haertel, Eric K. Ringger, Kevin D. Seppi, James L. Carroll, and Peter McClanahan. 2008.
Assessing the costs of sampling methods in active learning for annotation. In ACL.
Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models.
David P. Helmbold, Nicholas Littlestone, and Philip M. Long. 2000. Apple tasting. Information and Computation.
Kshitij Judah, Alan Paul Fern, and Thomas Glenn Dietterich. 2012. Active imitation learning via reduction to iid active learning. In AAAI.
Daniel Khashabi, Mark Sammons, Ben Zhou, Tom Redman, Christos Christodoulopoulos, Vivek Srikumar, Nicholas Rizzolo, Lev Ratinov, Guanheng Luo, Quang Do, Chen-Tse Tsai, Subhro Roy, Stephen Mayhew, Zhili Feng, John Wieting, Xiaodong Yu, Yangqiu Song, Shashank Gupta, Shyam Upadhyay, Naveen Arivazhagan, Qiang Ning, Shaoshi Ling, and Dan Roth. 2018. CogCompNLP: Your swiss army knife for NLP. In LREC.
Rémi Leblond, Jean-Baptiste Alayrac, Anton Osokin, and Simon Lacoste-Julien. 2018. SEARNN: Training RNNs with global-local losses. In ICLR.
Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task.
N. Littlestone and M. K. Warmuth. 1989. The weighted majority algorithm. In Proceedings of the 30th Annual Symposium on Foundations of Computer Science.
Joakim Nivre et al. 2018. Universal Dependencies v2.5. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics, Charles University.
Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In EMNLP.
Larry Rendell. 1986. A general framework for induction and a study of selective induction. Machine Learning Journal.
Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP.
Eric Ringger, Peter McClanahan, Robbie Haertel, George Busby, Marc Carmen, James Carroll, Kevin Seppi, and Deryle Lonsdale. 2007. Active learning for part-of-speech tagging: Accelerating corpus annotation. In Proceedings of the Linguistic Annotation Workshop.
Stéphane Ross, Geoff J. Gordon, and J. Andrew Bagnell. 2011. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS.
David Sculley. 2007. Practical learning from one-sided feedback. In KDD.
Vikash Singh. 2017. Replace or retrieve keywords in documents at scale. CoRR, abs/1711.00046.
Peter Smit, Sami Virpioja, Stig-Arne Grönroos, and Mikko Kurimo. 2014. Morfessor 2.0: Toolkit for statistical morphological segmentation. In EACL.
Cynthia A. Thompson, Mary Elaine Califf, and Raymond J. Mooney. 1999. Active learning for natural language parsing and information extraction. In ICML.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL/HLT.
Vladimir Vapnik. 1982. Estimation of Dependencies Based on Empirical Data. Springer Series in Statistics. Springer-Verlag, Berlin, Heidelberg.
Steven Whitehead. 1991. A study of cooperative mechanisms for faster reinforcement learning. Technical report, University of Rochester.
Torsten Zesch, Christof Müller, and Iryna Gurevych. 2008. Extracting lexical semantic knowledge from Wikipedia and Wiktionary. In LREC.
Chicheng Zhang and Kamalika Chaudhuri. 2015. Active learning from weak and strong labelers. In NeurIPS.

# Supplementary Material For: Active Imitation Learning with Noisy Guidance

# A Experimental Details

# A.1 Wiktionary to Universal Dependencies
| Greek, Modern (el) Wiktionary POS tag | Universal Dependencies POS tag |
| --- | --- |
| adjective | ADJ |
| adposition | ADP |
| preposition | ADP |
| adverb | ADV |
| auxiliary | AUX |
| coordinating conjunction | CCONJ |
| determiner | DET |
| interjection | INTJ |
| noun | NOUN |
| numeral | NUM |
| particle | PART |
| pronoun | PRON |
| proper noun | PROPN |
| punctuation | PUNCT |
| subordinating conjunction | SCONJ |
| symbol | SYM |
| verb | VERB |
| other | X |
| article | DET |
| conjunction | PART |
Table 2: Conversion between Greek, Modern (el) Wiktionary POS tags and Universal Dependencies POS tags.

# A.2 Hyperparameters

Here we provide a table of all the hyperparameters we considered for LEAQI and the baseline models (see Section 4.4).

Table 3: Hyperparameters
| Hyperparameter | Values Considered | Final Value |
| --- | --- | --- |
| Policy learning rate | $10^{-3}$, $10^{-4}$, $10^{-5}$, $10^{-6}$, $5.5 \cdot 10^{-6}$ | $10^{-6}$ |
| Difference classifier ($h$) learning rate | $10^{-1}$, $10^{-2}$, $10^{-3}$, $10^{-4}$ | $10^{-2}$ |
| Confidence parameter ($b$) | $5.0 \cdot 10^{-1}$, $10 \cdot 10^{-1}$, $15 \cdot 10^{-1}$ | $5.0 \cdot 10^{-1}$ |
# A.3 Ablation Study: Difference Classifier Learning Rate (see Figure 4)

# A.4 Ablation Study: Confidence Parameter $b$ (see Figure 5)

![](images/6f7cbadb190386e86c96e14044719e9591f9d816f787b634f0e6b45958403197.jpg)

![](images/05fc4d4ab749ab291ad5a0de13685feccc7a56e0fd722dd098ddbc9dd8ab8f0e.jpg)

![](images/0b39415a44d4a80f10abfa820e7f84c267e2d69ece84a23bb9956551c8e20986.jpg)

![](images/6a8badf81a22544a55b9097e5782debd1b3d0a7e790fadae2106cd34afa3a6de.jpg)
Figure 4: (top row) English keyphrase extraction and (bottom row) low-resource part of speech tagging on Greek, Modern (el). We show the performance of different learning rates for the difference classifier $h$. These plots indicate that there is little difference in performance depending on the difference classifier learning rate.

![](images/3ab2abf5edb0cc5c129b58b02f09e08ec0a91db5b12c5fc6b9447024109565f3.jpg)

![](images/91455704d4bb4f79f8822c2a201f127d2bb6142d6f393b499ce7cee5114ce546.jpg)

![](images/5662759b0712ba0c4911a4c0aa4461f53bfa8656a8c895140460a628dd58e77a.jpg)

![](images/e7063bc2f6124f2372dcc4ab6abbb70e19ae16af05998df2edeeb6dc722292cb.jpg)

![](images/d9625f54db75d49a263afab16762d5579a302d5d7142c0bd42bd3a7f21558566.jpg)
Figure 5: (top row) English keyphrase extraction and (bottom row) low-resource part of speech tagging on Greek, Modern (el). We show the performance of different confidence parameters $b$. These plots indicate that our model is robust to the choice of confidence parameter.
+ +![](images/ca420f11365320a9bb86dc21b249b351778e20a18fb238fe33eb141754289667.jpg) \ No newline at end of file diff --git a/activeimitationlearningwithnoisyguidance/images.zip b/activeimitationlearningwithnoisyguidance/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7bb3ae9f657654ac7109deea1da7d7cf1626eff8 --- /dev/null +++ b/activeimitationlearningwithnoisyguidance/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b0236bc25c8668ea76bfe416229edab8f9eee4cb90284350a6d7fe9404b9cd6 +size 939572 diff --git a/activeimitationlearningwithnoisyguidance/layout.json b/activeimitationlearningwithnoisyguidance/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3c826bf14b19add94aa338925fcd57798446135a --- /dev/null +++ b/activeimitationlearningwithnoisyguidance/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9d5b921f3737eabc66103ca5458c043365726adeea706a6512bafd41a26a85de +size 505696 diff --git a/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_content_list.json b/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..65ff0fc3c5372f1d9ca49f1db098196f3c19e218 --- /dev/null +++ b/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28a8212d394f9a369b80bcd2d95ec4a16ceaa2830b62329e4398ffbabec2adf6 +size 70603 diff --git a/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_model.json b/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..f8bd68a06b352a5706e0f592edbba7a6a3efe7ce --- /dev/null +++ b/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:25e8eb8d264408d84ddcaae63fda8a0578bdd910e1d0ff7e3ce334c1c604c6ac +size 81934 diff --git a/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_origin.pdf b/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dd73b0ec89175c3dff2c606a28612f8cf44cd125 --- /dev/null +++ b/activelearningforcoreferenceresolutionusingdiscreteannotation/97c9749f-2ce8-4b80-9725-60e8e57f6363_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ed6463d7b804195cd1e6f9b429af7b02a48c445810b7778c0293a6fb46c48189 +size 925801 diff --git a/activelearningforcoreferenceresolutionusingdiscreteannotation/full.md b/activelearningforcoreferenceresolutionusingdiscreteannotation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..5d96e865466adfa27aad5b1a04b001f837767659 --- /dev/null +++ b/activelearningforcoreferenceresolutionusingdiscreteannotation/full.md @@ -0,0 +1,362 @@ +# Active Learning for Coreference Resolution using Discrete Annotation + +Belinda Z. Li†* Gabriel Stanovsky\* Luke Zettlemoyer\* +\* University of Washington Allen Institute for AI Facebook belindali@fb.com {gabis,1sz}@cs.washington.edu + +# Abstract + +We improve upon pairwise annotation for active learning in coreference resolution, by asking annotators to identify mention antecedents if a presented mention pair is deemed not coreferent. 
This simple modification, when combined with a novel mention clustering algorithm for selecting which examples to label, is much more efficient in terms of the performance obtained per annotation budget. In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour. Future work can use our annotation protocol to effectively develop coreference models for new domains. Our code is publicly available. $^1$

A volcano in Mexico, known to locals as Po-po, just started spewing molten rock.
Q: Are the two mentions coreferent? A: No.
Q: What is the first appearance of the entity that the yellow-highlighted text refers to? A: A volcano in Mexico.

Figure 1: Discrete annotation. The annotator is shown the document, a span (yellow), and the span's predicted antecedent (blue). In case the answer to the coreference question is negative (i.e., the spans are not coreferring), we present a follow-up question ("what is the first appearance of the entity?"), providing additional cost-effective signal. Our annotation interface can be seen in Figure 5 in the Appendix.

# 1 Introduction

Coreference resolution is the task of resolving anaphoric expressions to their antecedents (see Figure 1). It is often required in downstream applications such as question answering (Dasigi et al., 2019) or machine translation (Stanovsky et al., 2019). Exhaustively annotating coreference is an expensive process, as it requires tracking coreference chains across long passages of text. In news stories, for example, important entities may be referenced many paragraphs after their introduction.

Active learning is a technique which aims to reduce costs by annotating samples which will be most beneficial for the learning process, rather than fully labeling a large fixed training set. Active learning consists of two components: (1) a task-specific learning algorithm, and (2) an iterative sample selection algorithm, which examines the performance of the model trained at the previous iteration and selects samples to add to the annotated training set. This method has proven successful for various tasks in low-resource domains (Garrette and Baldridge, 2013; Kholghi et al., 2015; Syed et al., 2016, 2017).

Sachan et al. (2015) showed that active learning can be employed for the coreference resolution task. They used gold data to simulate pairwise human annotations, where two entity mentions are annotated as either coreferring or not (see the first question in Figure 1).

In this paper, we propose two improvements to active learning for coreference resolution. First, we introduce the notion of discrete annotation (Section 3), which augments pairwise annotation by introducing a simple additional question: if the user deems the two mentions non-coreferring, they are asked to mark the first occurrence of one of the mentions (see the second question in Figure 1). We show that this simple addition has several positive implications. The feedback is relatively easy for annotators to give, and provides meaningful signal which dramatically reduces the number of annotations needed to fully label a document.

Second, we introduce mention clustering (Section 4). When selecting the next mention to label, we take into account aggregate model predictions for all antecedents which belong to the same cluster. This avoids the repeated labeling that would come with separately verifying every mention pair within the same cluster, as done in previous methods.

We conduct experiments across several sample selection algorithms, using existing gold data for user labels, and show that both of our contributions significantly improve performance on the CoNLL-2012 dataset (Pradhan et al., 2012).
Overall, our active learning method presents a superior alternative to pairwise annotation for coreference resolution, achieving better-performing models for a given annotation budget.

# 2 Background

Our work relies on two main components: a coreference resolution model and a sample selection algorithm.

**Coreference resolution model** We use the span ranking model introduced by Lee et al. (2017), later implemented in the AllenNLP framework (Gardner et al., 2018). This model computes span embeddings for all possible spans $i$ in a document, and uses them to compute a probability distribution $P(y = \mathrm{ant}(i))$ over the set of all candidate antecedents $\mathcal{Y}(i) = \{K \text{ previous mentions in the document}\} \cup \{\epsilon\}$, where $\epsilon$ is a dummy antecedent signifying that span $i$ has no antecedent. This model does not require additional resources, such as syntactic dependencies or named entity recognition, and is thus well suited for active learning scenarios in low-resource domains.

**Sample selection algorithm** Previous approaches to annotation for coreference resolution have mostly used pairwise selection, where pairs of mentions are shown to a human annotator who marks whether they are co-referring (Gasperin, 2009; Laws et al., 2012; Zhao and Ng, 2014; Sachan et al., 2015). To incorporate these binary annotations into their clustering coreference model, Sachan et al. (2015) introduced the notion of must-link and cannot-link penalties, which we describe and extend in Section 4.

# 3 Discrete Annotation

In discrete annotation, as exemplified in Figure 1, we present the annotator with a document where the least certain span $i$ ("Po-po", in the example) and $i$'s model-predicted antecedent, $A(i)$ ("locals"), are highlighted. Similarly to pairwise annotation, annotators are first asked whether $i$ and $A(i)$ are coreferent. If they answer positively, we move on to the next sample.
Otherwise, we deviate from pairwise sampling and, as a follow-up question, ask the annotator to mark the antecedent of $i$ ("A volcano in Mexico"). The annotator can abstain from answering the follow-up question in case $i$ is not a valid mention or does not have an antecedent in the document. See Figure 5 in the Appendix for more example annotations.

In Section 5, we show that discrete annotation is superior to classic pairwise annotation in several respects. First, it makes better use of human annotation time, as an annotator often needs to resolve the antecedent of the presented mention anyway in order to answer the first question (for example, identifying that "Po-po" refers to the volcano, and not the locals). Second, we find that discrete annotation is a better fit for mention ranking models (Lee et al., 2017), which assign the most likely antecedent to each mention, just as an annotator does in discrete annotation.

# 4 Mention Clustering

We experiment with three selection techniques, applying popular active learning selectors such as entropy and query-by-committee (Settles, 2010) to clusters of spans. Because our model outputs antecedent probabilities and predictions, we would like to aggregate these outputs so that we have only one probability per mention cluster rather than one per antecedent. We motivate this with an example: suppose span $i$'s top two most likely antecedents are $y_{1}$ and $y_{2}$. In scenario 1, $y_{1}$ and $y_{2}$ are predicted to be clustered together, and in scenario 2, they are predicted to be clustered apart. Span $i$ should have a "higher certainty" in scenario 1 (and thus be less likely to be picked by active learning), because its two most likely antecedents both imply the same clustering, whereas in scenario 2, picking $y_{1}$ vs. $y_{2}$ results in a different downstream clustering.
Thus, rather than simply using the raw probability that $i$ refers to a particular antecedent, we use the probability that $i$ belongs to a certain cluster. This implies modelling $y_{1}$ and $y_{2}$ jointly in scenario 1, and separately in scenario 2.

Formally, we compute the probability that a span $i$ belongs in a cluster $C$ by summing $P(\operatorname{ant}(i) = y)$ for all $y$ that belong in cluster $C$, since $i$ having an antecedent in a cluster necessarily implies that $i$ is also in that cluster. This allows us to convert the predicted antecedent probabilities to in-cluster probabilities:

$$
P(i \in C) = \sum_{y \in C \cap \mathcal{Y}(i)} P(\operatorname{ant}(i) = y) \tag{1}
$$

Similarly, for query-by-committee, we aggregate predictions such that we have one vote per cluster rather than one vote per antecedent:

$$
V(i \in C) = \sum_{y \in C \cap \mathcal{Y}(i)} V(A(i) = y) \tag{2}
$$

where $V(A(i) = y) \in \{0, 1, \dots, \mathcal{M}\}$ is the number of models that voted $y$ to be the antecedent of $i$.

The cluster information $(y \in C \cap \mathcal{Y}(i))$ we use in Equations 1 and 2 is computed from a combination of model-predicted labels and labels queried through active learning. Antecedents which were not predicted to be in clusters are treated as singleton clusters.

Additionally, to respect user annotations during the selection process, we must keep track of all prior annotations. To do this, we use the concept of must-link (ML; if two mentions are judged coreferent) and cannot-link (CL; if two mentions are judged non-coreferent) relations between mentions, introduced by Sachan et al. (2015), and adapt it for our purposes. Specifically, in our discrete setting, we build the links as follows: if the user deems the queried pair coreferent, it is added to ML. Otherwise, it is added to CL, and the user-corrected pair (from the second question) is added to ML.
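The link bookkeeping just described can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; all names are invented for the example.

```python
# Illustrative sketch of the ML/CL bookkeeping described above: each discrete
# judgment on a queried pair (predicted antecedent, mention) updates the
# must-link (ml) and cannot-link (cl) sets in place.

def record_judgment(ml, cl, mention, predicted, coreferent, corrected=None):
    """coreferent: the annotator's answer to the initial pairwise question.
    corrected: the annotator-selected antecedent when the answer is "no"
    (None if the annotator abstains)."""
    if coreferent:
        ml.add((predicted, mention))        # confirmed pair joins ML
    else:
        cl.add((predicted, mention))        # rejected pair joins CL
        if corrected is not None:
            ml.add((corrected, mention))    # user-corrected pair joins ML

ml, cl = set(), set()
record_judgment(ml, cl, mention="Po-po", predicted="locals",
                coreferent=False, corrected="A volcano in Mexico")
assert ("locals", "Po-po") in cl
assert ("A volcano in Mexico", "Po-po") in ml
```

Both sets then feed the transitivity constraints and selector adjustments described next.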
In addition, we use these links to guide how we select the next mention to query. For example, if a CL relation exists between spans $m_{1}$ and $m_{2}$, we will be less likely to query for $m_{1}$, since we are slightly more certain about what $m_{1}$'s antecedent should be (not $m_{2}$). Formally, we revise the probabilities $P(i \in C)$ and votes $V(i \in C)$ in accordance with our link relations, which affects the selector uncertainty scores.$^{3}$

Finally, following Sachan et al. (2015), we impose transitivity constraints, which allow us to model links beyond what has been explicitly pointed out during annotation:

$$
ML(m_{i}, m_{j}) \wedge ML(m_{j}, m_{k}) \rightarrow ML(m_{i}, m_{k}) \tag{3}
$$

$$
CL(m_{i}, m_{j}) \wedge ML(m_{i}, m_{k}) \rightarrow CL(m_{j}, m_{k}) \tag{4}
$$

However, recomputing these closures after each active learning iteration can be extremely inefficient. Instead, we build up the closure incrementally, adding only the minimum number of links necessary to maintain the closure every time a new link is added.

We experiment with the following clustered selection techniques:

**Clustered entropy** We compute entropy over cluster probabilities and select the mention with the highest clustered entropy:

$$
E(i) = -\sum_{C \in \text{all clusters}} P(i \in C) \cdot \log P(i \in C) \tag{5}
$$

where $P(i \in C)$ is defined as in Equation 1.

**Clustered query-by-committee** We train $\mathcal{M}$ models (with different random seeds) and select the mention with the highest cluster vote entropy:

$$
\mathrm{VE}(i) = -\sum_{C \in \text{all clusters}} \frac{V(i \in C)}{\mathcal{M}} \cdot \log \frac{V(i \in C)}{\mathcal{M}} \tag{6}
$$

using votes counted over clusters, as defined in Equation 2.
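Equations 1 and 5 together can be sketched as follows. This is illustrative Python, not the authors' code; the dictionaries stand in for the model's antecedent distribution and the current predicted clustering.

```python
import math

def cluster_probs(ant_probs, cluster_of):
    """Eq. 1: P(i in C) = sum of P(ant(i) = y) over antecedents y in C.
    ant_probs: {antecedent y: P(ant(i) = y)}; cluster_of: {y: cluster id}.
    Antecedents without a cluster are treated as singleton clusters."""
    probs = {}
    for y, p in ant_probs.items():
        c = cluster_of.get(y, ("singleton", y))
        probs[c] = probs.get(c, 0.0) + p
    return probs

def clustered_entropy(ant_probs, cluster_of):
    """Eq. 5: entropy of the cluster-membership distribution for span i."""
    return -sum(p * math.log(p)
                for p in cluster_probs(ant_probs, cluster_of).values() if p > 0)

# Scenario 1: the two likely antecedents share a cluster -> span i is certain.
s1 = clustered_entropy({"y1": 0.5, "y2": 0.5}, {"y1": "C", "y2": "C"})
# Scenario 2: they are clustered apart -> span i is uncertain.
s2 = clustered_entropy({"y1": 0.5, "y2": 0.5}, {"y1": "C", "y2": "D"})
assert s1 < s2   # the selector prefers to query span i in scenario 2
```

The two scenarios mirror the motivating example from the start of this section: aggregating per cluster collapses the uncertainty between $y_1$ and $y_2$ whenever both imply the same clustering.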
**Least coreferent clustered mentions / most coreferent unclustered mentions (LCC/MCU)** We aim to select a subset of spans for which the model was least confident in its prediction. For each span $i$ which was assigned a cluster $C_i$, we compute a score $s_C(i) = P(i \in C_i)$, and choose the $n$ spans with the smallest $s_C(i)$. For each singleton $j$, we compute an "unclustered" score $s_U(j) = \max_{C \in \text{all clusters}} P(j \in C)$ and choose the $m$ spans with the largest $s_U(j)$. $P(i \in C_i)$ and $P(j \in C)$ are computed with Equation 1.

# 5 Evaluation

We compare discrete versus pairwise annotation using the English CoNLL-2012 coreference dataset (Pradhan et al., 2012). Following Sachan et al. (2015), we conduct experiments where user judgments are simulated from gold labels.

![](images/2fa1e8de04a4096ce1b14ee535b49e2c885d110b1b8def236cdac8264c952d5a.jpg)
Figure 2: Comparing various selectors for discrete versus pairwise annotation (dashed orange line).
| Set | # labels/doc | Active learning iteration | # docs | # ?s |
|-----|--------------|---------------------------|--------|------|
| A   | 20           | 1st (retrained 0x)        | 5      | 15   |
| A   | 20           | 7th (retrained 6x)        | 5      | 15   |
| A   | 200          | 2nd (retrained 1x)        | 5      | 15   |
| A   | 200          | 8th (retrained 7x)        | 5      | 15   |
| B   | 20           | 2nd (retrained 1x)        | 5      | 15   |
| B   | 20           | 8th (retrained 7x)        | 5      | 15   |
| B   | 200          | 1st (retrained 0x)        | 5      | 15   |
| B   | 200          | 7th (retrained 6x)        | 5      | 15   |
Table 1: Timing experiment sampling. For each of the 2 datasets, we collected 60 total active learning questions from 20 documents: 5 documents and 15 questions for each of the 4 categories (trained with many/few labels per document, and early/late in the active learning process). The 15 questions were sampled randomly from within an iteration.

**Annotation time estimation** To compare annotation times between pairwise and discrete questions, we collected eight 30-minute sessions from 7 in-house annotators with a background in NLP. Annotators were asked to answer as many instances as they could during those 30 minutes. We additionally asked 1 annotator to annotate only discrete questions for 30 minutes. To be as representative as possible, the active learning queries for these experiments were sampled from various stages of active learning (see Table 1). On average, an annotator completed about 67 questions in a single session, half of which were answered negatively, requiring the additional discrete question. Overall, these estimates rely on 826 annotated answers. Our annotation interface is publicly available;$^{4}$ see examples in Figure 5 in the Appendix.

Timing results are shown in Table 2.

|                          | Avg. time per ? |
|--------------------------|-----------------|
| Initial question         | 15.96s          |
| Follow-up question       | 15.57s          |
| ONLY follow-up questions | 28.01s          |

Table 2: Average annotation time for the initial pairwise question, the discrete follow-up question, and the discrete question on its own.

Answering the discrete question after the initial pairwise question takes about the same time as answering the first question (about $16s$). Furthermore, answering only discrete questions took $28.01s$ per question, which confirms that having an initial pairwise question indeed saves annotator time when it is answered positively.

In the following experiments, we use these measurements to calibrate pairwise and discrete follow-up questions when computing total annotation times.

**Baselines** We implement a baseline for pairwise annotation with an entropy selector. We also implement two discrete annotation baselines with random selection. The partially-labelled baseline follows the standard active learning training loop, but selects the next mention to label at random. The fully-labelled baseline creates a subset of the training data by taking as input an annotation time $t$ and selecting at random a set of documents that the user can fully label in $t$ hours using ONLY discrete annotation. By comparing the fully-labelled baseline against our active learning results, we can determine whether active learning is more effective than labelling documents exhaustively.

**Hyperparameters** We use the model hyperparameters from the AllenNLP implementation of Lee et al. (2017). We train up to 20 epochs with a patience of 2 before adding labels. After all documents have been added, we retrain from scratch. We use a query-by-committee of $\mathcal{M} = 3$ models, due to memory constraints. For LCC/MCU, given $L$ annotations per document, we split the annotations equally between clusters and singletons.

**Results** Figure 2 plots the performance of discrete annotation with the various selectors from Section 4 against the performance of pairwise annotation, calibrated according to our timing experiments.
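The timing calibration behind these comparisons can be sketched in a few lines. This is a minimal illustration using the measured averages from Table 2; the function names are invented for this sketch.

```python
# Measured average answer times in seconds (from the timing sessions above).
PAIRWISE_S = 15.96   # initial (pairwise) question
FOLLOWUP_S = 15.57   # discrete follow-up question

def pairwise_seconds(n_pairs):
    """Total time for n_pairs pairwise questions (Appendix Equation 7)."""
    return PAIRWISE_S * n_pairs

def discrete_seconds(n_coref, n_noncoref):
    """Total time for discrete annotation (Appendix Equation 8): every query
    pays the initial question; negative answers add a follow-up question."""
    return PAIRWISE_S * (n_coref + n_noncoref) + FOLLOWUP_S * n_noncoref

# e.g. 100 discrete queries, half of them answered "not coreferent":
t = discrete_seconds(50, 50)   # 100 * 15.96 + 50 * 15.57 = 2374.5 seconds
```

In the same time budget one could ask roughly $d_c + 0.976\,d_{nc}$ pairwise questions, which is the calibration stated in Appendix Equation 9.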
In all figures, we report MUC, B$^3$, and CEAF$_e$ as an averaged F1 score.

The three non-random active learning frameworks outperform the fully-labelled baseline, showing that active learning is more effective for coreference resolution when the annotation budget is limited.

![](images/60bb4e8e3fddd5a8f6b8a4f90029ebc351063e01e53088fb28200862320d0bb0.jpg)
Figure 3: Mention detection accuracy (in document-micro F1) for pairwise versus discrete selection per human annotation time.

Most notably, Figure 2 shows that every non-random discrete selection protocol outperforms pairwise annotation. Where the gap in performance is largest ($>15$ minutes per document), we consistently improve by $\sim 4\%$ absolute F1 over pairwise selection.

# 6 Analysis

A major reason discrete annotation outperforms the pairwise baseline is that the number of pairwise annotations needed to fully label a document is much larger than the number of discrete annotations. In an average development document with 201 top spans, the maximum number of pairwise queries needed to fully label the document is 15,050, while the maximum number of discrete queries is only 201 (i.e., asking for the antecedent of every mention). Thus, the average document can be fully annotated via discrete annotation in only $2.6\%$ of the time it takes to fully label it with pairwise annotation, suggesting that our framework is also a viable exhaustive annotation scheme.

Further analysis shows that the improvement from discrete selection stems in part from better use of annotation time for mention detection accuracy (Figure 3) and pronoun resolution (Figure 4), where we measure performance only on clusters containing pronouns, as identified automatically by the spaCy tagger (Honnibal and Montani, 2017).

Finally, Table 3 shows ablations on our discrete annotation framework, showing the contribution of each component of our paradigm.
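The counts above can be checked in a few lines, using the worst-case formula from Appendix A.3 and the measured question times.

```python
# Sketch of the worst-case query counts discussed above, for the average
# development document: m = 201 top spans, K = 100 candidate antecedents
# (Equation 10 in Appendix A.3).
m, K = 201, 100

pairwise_max = K * (K - 1) // 2 + (m - K) * K   # maximum pairwise questions
discrete_max = m                                # one antecedent query per span

# With the measured times (15.96s initial, 15.57s follow-up), and assuming
# every discrete query is answered "not coreferent" (the worst case):
pairwise_time = 15.96 * pairwise_max
discrete_time = (15.96 + 15.57) * discrete_max
ratio = discrete_time / pairwise_time

print(pairwise_max, discrete_max, round(100 * ratio, 2))  # 15050 201 2.64
```

The ratio of roughly 2.6% matches the figure quoted above and the 2.64% worst-case figure derived in Appendix A.3.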
![](images/babe66c2bdbb07f4e9046f7ad658ffbfc1062de0aa431ef03ff982988ee21a21.jpg)
Figure 4: Pronoun resolution accuracy (average F1) for pairwise versus discrete selection per human annotation time.

|                              | F1 score |
|------------------------------|----------|
| Discrete annotation          | 57.08    |
| — clustered probabilities    | 56.49    |
| — incremental link closures  | 56.98    |
| Pairwise annotation          | 54.27    |
Table 3: Ablations over the different model elements, at a single point ($\sim$315 annotation hours). The entropy selector was used for all experiments.

# 7 Discussion and Conclusion

We presented discrete annotation, an attractive alternative to pairwise annotation for active learning of coreference resolution in low-resource domains. By adding a simple question to the annotation interface, we obtained significantly better models per human-annotation hour. In addition, we introduced a clustering technique which further optimizes sample selection during the annotation process. More broadly, our work suggests that improvements in annotation interfaces can elicit responses which are more efficient in terms of the obtained performance versus the invested annotation time.

# Acknowledgements

We would like to thank Christopher Clark, Terra Blevins, and the anonymous reviewers for their helpful feedback, and Aaron Jaech, Mason Kamb, Madian Khabsa, Kaushal Mangipudi, Nayeon Lee, and Anisha Uppugonduri for their participation in our timing experiments.

# References

Pradeep Dasigi, Nelson F. Liu, Ana Marasovic, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. CoRR, abs/1803.07640.
Dan Garrette and Jason Baldridge. 2013. Learning a part-of-speech tagger from two hours of annotation. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 138-147, Atlanta, Georgia.
Association for Computational Linguistics.
Caroline Gasperin. 2009. Active learning for anaphora resolution. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, HLT '09, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
Mahnoosh Kholghi, Laurianne Sitbon, Guido Zuccon, and Anthony Nguyen. 2015. Active learning: a step towards automating medical concept extraction. Journal of the American Medical Informatics Association, 23(2):289-296.
Florian Laws, Florian Heimerl, and Hinrich Schütze. 2012. Active learning for coreference resolution. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508-512, Montréal, Canada. Association for Computational Linguistics.
Kenton Lee, Luheng He, Mike Lewis, and Luke S. Zettlemoyer. 2017. End-to-end neural coreference resolution. ArXiv, abs/1707.07045.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40. Association for Computational Linguistics.
Mrinmaya Sachan, Eduard Hovy, and Eric P. Xing. 2015. An active learning approach to coreference resolution. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 1312-1318. AAAI Press.
Burr Settles. 2010. Active learning literature survey. University of Wisconsin, Madison, 52(55-66):11.
Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In ACL, Florence, Italy. Association for Computational Linguistics.
A. R. Syed, A. Rosenberg, and E. Kislal.
2016. Supervised and unsupervised active learning for automatic speech recognition of low-resource languages. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5320-5324.
A. R. Syed, A. Rosenberg, and M. Mandel. 2017. Active learning for low-resource speech recognition: Impact of selection size and language modeling data. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5315-5319.
Shanheng Zhao and Hwee Tou Ng. 2014. Domain adaptation with active learning for coreference resolution. In Proceedings of the 5th International Workshop on Health Text Mining and Information Analysis (Louhi), pages 21-29, Gothenburg, Sweden. Association for Computational Linguistics.

# A Appendix

# A.1 Timing Experiment Details and Computations

In order to properly calibrate the results from discrete and pairwise querying, we conducted experiments (eight 30-minute sessions) to time how long annotators take to answer discrete and pairwise questions. See Figure 5 for the interface we designed for these experiments.

The questions we ask in the experiment are all sampled from real queries from full runs of our active learning simulations. To obtain representative times, we sampled a diverse selection of active learning questions, at various stages of active learning (first iteration before retraining vs. after retraining $n$ times) and with various numbers of annotations per document (20 vs. 200). For each document, we randomly selected between 1-5 questions (of the total 20 or 200) to ask the annotator. Full details on how we sampled our queries can be found in Table 1. Note that we divided our samples into two datasets: we ran four 30-minute sessions with Dataset A before Dataset B, and four 30-minute sessions with Dataset B before Dataset A, for a total of eight 30-minute sessions across 7 annotators (1 annotator completed a 1-hour session).
Since pairwise annotation is the same as answering only the initial question in the discrete setting, we run a single discrete experiment for each annotation session and use the time taken to answer an initial question as a proxy for pairwise annotation time. Our results show that answering the initial question took an average of $15.96s$, whereas answering the follow-up question took $15.57s$. Thus, we derive the following formulas to compute the time it takes for pairwise and discrete annotation:

$$
t = 15.96p \tag{7}
$$

$$
t = 15.96d_{c} + 15.57d_{nc} \tag{8}
$$

where $p$ is the number of pairwise instances, and $d_{c}, d_{nc}$ are the numbers of discrete instances for which the initial pair was "coreferent" ($d_{c}$) and "not coreferent" ($d_{nc}$), respectively. We also compute the number of pairwise examples $p$ we can query in the same time it takes to query $d_{c} + d_{nc}$ discrete examples:

$$
15.96p = 15.96d_{c} + 15.57d_{nc}
$$

$$
p = d_{c} + 0.976d_{nc} \tag{9}
$$

Moreover, we conducted a single additional 30-minute experiment to determine how long it takes to answer only discrete questions (without the initial pairwise step). We find that it takes $28.01s$ per question in the only-discrete setting. This is longer than the time it takes to answer a pairwise question, confirming that having an initial pairwise question indeed saves time if the pair is coreferent. Moreover, this also shows that answering the initial pairwise question significantly helps with answering the follow-up discrete question.

# A.2 Additional Model Adaptations

**Adapting Link Relations for our Model** We use must-link and cannot-link relations between mentions to guide our active learning selector. We revise probabilities and model outputs (from which the model computes uncertainty scores for entropy, QBC, and LCC/MCU) according to the following rules:

1. Clustered entropy.
For every $CL(a, b)$ relationship, we set $P(\mathrm{ant}(a) = b) = 0$ and re-normalize the probabilities of all other candidate antecedents. This decreases the probability that the active learning selector chooses $a$. Moreover, for every $ML(a, b)$ relationship, we set $P(\mathrm{ant}(a) = b) = 1$ and $P(\mathrm{ant}(a) = c) = 0$ for all $c \neq b$. If there are multiple $ML$ relationships involving $a$, we choose only one of $a$'s antecedents to set to 1 (to maintain the integrity of the probability distribution). This guarantees that the active learning selector will never select $a$, as any ML link out of $a$ means we have already queried for $a$.
2. Clustered query-by-committee. To ensure we do not choose a mention we have already queried for, after each user judgment, for every $ML(a,b)$ relation, we set $V(A(a) = b) = \mathcal{M}$, and $V(A(a) = c) = 0$ for all other $c \neq b$. Moreover, for every $CL(a,b)$ relation, we set $V(A(a) = b) = 0$, which decreases the vote entropy of $a$, making it less likely for the selector to choose $a$.
3. LCC/MCU. We revise the probabilities in the same way as in clustered entropy, and add the constraint that, when choosing MCU spans $j$, we disregard those that already have probability 1 (signifying that we have already queried for them).

**Incremental Closures Algorithm** We introduce an algorithm to compute link closures incrementally. Instead of re-computing and re-adding the entire set of closures (based on the set of all prior human annotations, which we keep track of) each time we query for a new mention, we add the minimum set of necessary links.

![](images/341e67d8788e2b64fd70b84eac1921c82f14696746d849ca5fd98376fbdca31b.jpg)
Figure 5: Timing experiments interface. Top: the initial pairwise question. Bottom: the user is presented with the discrete question when they click "No". They are asked to select the tokens in the text representing the first occurrence of the yellow entity.
See Algorithm 1.

To determine how much time our incremental closure algorithm saves over recomputing closures from scratch, we simulated annotations on a single document with 1600 mentions, and recorded how long it took to re-compute the closure after each annotation. Our experiments show that recomputing from scratch takes progressively longer as more labels are added: at 1600 labels, our incremental algorithm is 556 times faster than recomputing from scratch (1630ms vs. 2.93ms).

Figure 6 plots the runtime of our incremental closure algorithm ("incremental closure") against the runtime of recomputing closures from scratch ("closure") using Equations 3 and 4. In the latter case, we keep track of the set of user-added edges, which we update after each annotation, and re-compute the closures from that set.

# A.3 Additional Analysis

**Computing the time to fully label a document under discrete and pairwise annotation** First, we compute the maximum number of pairwise questions we can ask. We consider the setup of Lee et al. (2017)'s model. This model considers only the spans with the highest mention scores (the "top spans"), and considers at most $K$ antecedents per top span. Thus, for a document with $m$ top spans, we can ask up to

$$
\frac{K(K - 1)}{2} + (m - K)K \tag{10}
$$

pairwise questions. The first term, $\frac{K(K - 1)}{2}$, comes from considering the first $K$ spans in the document: for each of these spans $i = 1 \cdots K$, we can ask about the first $i - 1$ spans. The second term, $(m - K)K$, comes from considering the spans after the $K$-th span: for each of these $m - K$ spans, we can only consider up to $K$ antecedents.
Using statistics for the average document ($m = 201$) and the standard hyper-parameter settings ($K = 100$), we plug into Equation 10 to get 15,050 overall pairwise questions needed to fully label a document (in the worst case). Meanwhile, the maximum number of discrete questions we can ask is only 201 (i.e., asking for the antecedent of every mention). Using timing Equations 7 and 8, we compute that it takes at most $6337.53s$ to answer 201 discrete questions in the worst-case scenario, and $240198s$ to answer 15,050 pairwise questions. Thus, in the worst-case scenario for both discrete and pairwise selection, discrete selection takes only $2.64\%$ of the time it takes pairwise selection to fully label a document.

![](images/29dd98a8d40b1369d117bf21db315115bb54f8771a6d7a15aceb740f1c03f2fb.jpg)
Figure 6: Under each closure algorithm, the time to compute the closure after the next annotation is added, as the number of existing annotations increases.

**Quantifying "Information Gain" from Discrete and Pairwise Annotation** Let $\overline{D_U}$ be the set of training documents we are annotating in a given round of active learning. To better quantify how much information discrete and pairwise annotation can supply in the same amount of time, we define $\Delta F1$ as the change in the F1 score on $\overline{D_U}$ before and after model predictions are supplemented with user annotation.

Figure 7 shows the average $\Delta F1$ as annotation time increases for discrete and pairwise annotation. Across the 10 annotation times we recorded, discrete annotation results in an average $\Delta F1$ more than twice that of pairwise annotation, in the same annotation time.

# A.4 Hyperparameters

**Model.** We preserve the hyperparameters from the AllenNLP implementation of Lee et al. (2017)'s model.
The AllenNLP implementation mostly maintains the original hyperparameters, except that it sets the maximum number of antecedents considered to $K = 100$, and excludes speaker features and variational dropout, due to machine memory limitations.

![](images/0a0180aa4c4ef0b47653ff82a480530ed858a6c83ec55ef5fe90e07ada256a79.jpg)
Figure 7: Comparing F1 score improvement on $\overline{D_U}$ for discrete vs. pairwise annotation.

**Training.** We use a 700/2102 fully-labelled/unlabelled initial split of the training data, and actively label 280 documents at a time. We train to convergence each round. Before all documents have been added, we train up to 20 epochs with a patience of 2 before we add more training documents. After all documents have been added, we retrain from scratch and use the original training hyperparameters from Lee et al. (2017).

**Operators.** For query-by-committee, we use a committee of $\mathcal{M} = 3$ models. We were not able to experiment with more due to memory constraints.

For LCC/MCU, given $L$ annotations per document, we allocate $n$ annotations to least-coreferent clustered mentions and the remaining $m$ to most-coreferent unclustered mentions. We use $n = \min(L/2, \text{number of clustered spans})$ and $m = \min(L - n, \text{number of unclustered spans})$.

# A.5 Active Learning Training Setup Full Details

In our active learning setup, we begin by training our model on a 700-document subset of the full training set. We discard the labels of the remaining 2102 documents. In each round of active learning, we choose 280 unlabelled documents and query up to $Q$ annotations per document. We then add these documents to the labelled set and continue training our model on this set (now with new documents). After all documents have been labelled, we retrain our model on the full document set from scratch, resetting all model and trainer parameters.
In Algorithm 2, we show our main training loop for active learning using discrete selection. This is the training loop we use for our clustered entropy and LCC/MCU selectors, and for our partially-labelled random baseline. In Algorithm 3, we modify that loop for the clustered query-by-committee selector.

In Algorithm 1, we show our incremental closures algorithm, which builds up the transitive closure incrementally by adding only the minimum number of links necessary to maintain the closure each time a new link is added.

# Algorithm 1: Incremental Link Closures Algorithm

Let $(a,b)$ be the link pair being added, $A$ be $a$'s old cluster before the pair is added, $B$ be $b$'s old cluster before the pair is added, $\overline{A}$ be the set of elements $a$ has a CL relationship to before the pair is added, and $\overline{B}$ be the set of elements $b$ has a CL relationship to before the pair is added.

1. If pair $(a, b)$ was added to must-link, both must-link and cannot-link need to be updated.

First, resolve the MLs by adding an ML relationship between every element in $A$ and every element in $B$:

$$
\forall a', b' \quad (ML(a, a') \wedge ML(b, b')) \rightarrow (ML(a, b') \wedge ML(a', b) \wedge ML(a', b'))
$$

Next, resolve the CLs by adding a CL relationship between every element of $A$ and $\overline{B}$, and every element of $B$ and $\overline{A}$:

$$
\forall a', \hat{b} \quad (ML(a, a') \wedge CL(b, \hat{b})) \rightarrow (CL(a, \hat{b}) \wedge CL(a', \hat{b}))
$$

$$
\forall b', \hat{a} \quad (ML(b, b') \wedge CL(a, \hat{a})) \rightarrow (CL(b, \hat{a}) \wedge CL(b', \hat{a}))
$$

2. If pair $(a, b)$ was added to cannot-link, only cannot-link needs to be updated.
Add a CL relationship between every element of $A$ and every element of $B$:

$$
\forall a^{\prime}, b^{\prime} \quad (ML(a, a^{\prime}) \wedge ML(b, b^{\prime})) \rightarrow (CL(a, b^{\prime}) \wedge CL(a^{\prime}, b) \wedge CL(a^{\prime}, b^{\prime}))
$$

# Algorithm 2: Training loop for active learning

$D_F = \{\text{fully-labelled docs}\}$, $D_U = \{\text{unlabelled docs}\}$, $D_A = \{\text{docs labelled through active learning}\}$, $M = \text{model}$, $ML = \text{must-link pairs}$, $CL = \text{cannot-link pairs}$;

Init: $D_F = \{\text{first 700 docs}\}$, $D_U = \{\text{remaining docs}\}$, $D_A = \emptyset$, $ML = CL = \emptyset$;

while $D_U$ is not empty do
  train $M$ to convergence on data $D_F \cup D_A$;
  $\overline{D_U} =$ 280-document subset of $D_U$;
  for $D \in \overline{D_U}$ do
    $\mathcal{P}_D, \mathcal{L}_D, \mathcal{C}_D =$ run $M$ on $D$:
      $\mathcal{P}_D =$ model-outputted probabilities $= \{P(y = \mathrm{ant}(i)) \mid y \in \mathcal{Y}(i), i \in \text{top\_spans}(D)\}$;
      $\mathcal{L}_D =$ model-outputted antecedent labels $= \{(i, A(i)) \mid i \in \text{top\_spans}(D)\}$;
      $\mathcal{C}_D =$ model-outputted clusters from $\mathcal{L}_D$;
    while num_queried $<$ num_to_query do
      $m =$ choose next-mention-to-query$(\mathcal{P}_D, \mathcal{C}_D)$; [Section 4]
      $a = \arg\max_{y \in \mathcal{Y}(m) \setminus \epsilon} P(y = \mathrm{ant}(m))$;
      if user deems $m$ and $a$ coreferent then
        $ML = ML \cup (a, m)$; $\mathcal{L}_D = \mathcal{L}_D \cup (a, m)$; add $(a, m)$ to $\mathcal{C}_D$;
      else
        $\hat{a} =$ user-selected antecedent for $m$;
        $CL = CL \cup (a, m)$; $ML = ML \cup (\hat{a}, m)$;
        $\mathcal{L}_D = (\mathcal{L}_D \setminus (a, m)) \cup (\hat{a}, m)$;
        remove $(a, m)$ from and add $(\hat{a}, m)$ to $\mathcal{C}_D$;
      end
      $ML, CL =$ compute-link-closures$(ML, CL)$; [Algorithm 1]
      $\mathcal{P}_D =$ update-based-on-links$(ML, CL)$; [Section A.2]
    end
    Label $D$ with $\mathcal{C}_D$;
  end
  $D_A = D_A \cup \overline{D_U}$; $D_U = D_U \setminus \overline{D_U}$;
end

# Algorithm 3: Training loop for active learning with QBC selector (differences from Algorithm 2 are highlighted)

```latex
$D_F = \{\text{fully-labelled docs}\}$, $D_U = \{\text{unlabelled docs}\}$, $D_A = \{\text{docs labelled through active learning}\}$, $\widehat{M} =$ ensemble model of submodels $\{M_1, \dots, M_{\mathcal{M}}\}$, $ML =$ must-link pairs, $CL =$ cannot-link pairs;
Init: $D_F = \{\text{first 700 docs}\}$, $D_U = \{\text{remaining docs}\}$, $D_A = \emptyset$, $ML = CL = \emptyset$;
while $D_U$ is not empty do
  train all $M_1, \dots, M_{\mathcal{M}}$ to convergence on data $D_F \cup D_A$;
  $\overline{D_U} =$ 280-document subset of $D_U$;
  for $D \in \overline{D_U}$ do
    $\{\mathcal{P}_{D,i}\}, \{\mathcal{L}_{D,i}\}, \mathcal{P}_D, \mathcal{L}_D, \mathcal{C}_D =$ run $\widehat{M}$ on $D$:
      $\mathcal{P}_{D,i} =$ submodel $i$'s output probabilities;
      $\mathcal{L}_{D,i} =$ submodel $i$'s output antecedent labels;
      $\mathcal{P}_D =$ ensembled (averaged) output probabilities from each submodel;
      $\mathcal{L}_D =$ ensembled antecedent labels computed from $\mathcal{P}_D$;
      $\mathcal{C}_D =$ ensembled clusters computed from $\mathcal{L}_D$;
    while num_queried < num_to_query do
      $m =$ choose next-mention-to-query($\{\mathcal{L}_{D,i}\}, \mathcal{C}_D$); [Section 4]
      $a = \arg\max_{y \in \mathcal{Y}(m) \setminus \epsilon} P(y = \operatorname{ant}(m))$; [probabilities from $\mathcal{P}_D$]
      if user deems m and a coreferent then
        $ML = ML \cup (a, m)$;
        add $(a, m)$ to $\mathcal{C}_D$;
      else
        $\hat{a} =$ user-selected antecedent for m;
        $CL = CL \cup (a, m)$; $ML = ML \cup (\hat{a}, m)$;
        remove $(a, m)$ from and add $(\hat{a}, m)$ to $\mathcal{C}_D$;
      end
      $ML, CL =$ compute-link-closures$(ML, CL)$; [Algorithm 1]
      $\mathcal{L}_{D,i} =$ update-based-on-links$(ML, CL)$; [Section A.2]
    end
    Label $D$ with $\mathcal{C}_D$;
  end
  $D_A = D_A \cup \overline{D_U}$; $D_U = D_U \setminus \overline{D_U}$;
end
``` \ No newline at end of file diff
--git a/activelearningforcoreferenceresolutionusingdiscreteannotation/images.zip b/activelearningforcoreferenceresolutionusingdiscreteannotation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6c049348e946e24b215116333929230ca8023b8d --- /dev/null +++ b/activelearningforcoreferenceresolutionusingdiscreteannotation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:50033a4d1e9ea7ca50cf5cd0e36260fd80d4f49512d608b76b38decb4c75b8a4 +size 420120 diff --git a/activelearningforcoreferenceresolutionusingdiscreteannotation/layout.json b/activelearningforcoreferenceresolutionusingdiscreteannotation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5086359affdf2589be047d5a813b888987ae43bf --- /dev/null +++ b/activelearningforcoreferenceresolutionusingdiscreteannotation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10043a32cdb25a2044f180a902a1189034471ad47a6dab1b8060a64b1ab016ae +size 478114 diff --git a/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_content_list.json b/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..8ecb295ba0b23188125ca1571987c8294198643c --- /dev/null +++ b/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3a5c5a6482da404988e983742b673d5bc6a036385efed5654cfcedd6c85c9846 +size 67877 diff --git a/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_model.json b/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b8c0362051d48f37e8f7361c318a5062518de608 --- /dev/null +++ b/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_model.json @@ -0,0 
+1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fc7e92eccb5d1d68c94ad5a43485e20d6778c1683b459937fc11e6cfbb49cb4c +size 83192 diff --git a/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_origin.pdf b/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..dc79c6bcbcb902ca15b0d9dbfe97d2e96604c328 --- /dev/null +++ b/adaptivecompressionofwordembeddings/616ae366-0520-4bac-92fe-f6a591154b5a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:466f4d90d6f3d0142288d6b11727f91210c804cfbe2225029cb026f3ddc2fd13 +size 601962 diff --git a/adaptivecompressionofwordembeddings/full.md b/adaptivecompressionofwordembeddings/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8cf6906544c8eed1b4698f862e87e5a154d390cc --- /dev/null +++ b/adaptivecompressionofwordembeddings/full.md @@ -0,0 +1,300 @@ +# Adaptive Compression of Word Embeddings + +Yeachan Kim $^{1}$ Kang-Min Kim $^{2}$ SangKeun Lee $^{2,3}$ + +1Artificial Intelligence Research Institute, Republic of Korea + +$^{2}$ Department of Computer Science and Engineering $^{3}$ Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea + +yeachan@airi.kr, {kangmin89, yalphy}@korea.ac.kr + +# Abstract + +Distributed representations of words have been an indispensable component for natural language processing (NLP) tasks. However, the large memory footprint of word embeddings makes it challenging to deploy NLP models to memory-constrained devices (e.g., self-driving cars, mobile devices). In this paper, we propose a novel method to adaptively compress word embeddings. We fundamentally follow a code-book approach that represents words as discrete codes such as (8, 5, 2, 4). 
However, unlike prior works that assign the same length of codes to all words, we adaptively assign codes of different lengths to each word by learning downstream tasks. The proposed method works in two steps. First, each word directly learns to select its code length in an end-to-end manner by applying the Gumbel-softmax trick. After selecting the code length, each word learns discrete codes through a neural network with a binary constraint. To showcase the general applicability of the proposed method, we evaluate the performance on four different downstream tasks. Comprehensive evaluation results clearly show that our method is effective and produces highly compressed word embeddings without hurting task accuracy. Moreover, we show that our model assigns words to each code-book by considering their significance to the task.

# 1 Introduction

Deep neural networks have greatly improved the performance of various tasks, such as image classification (Huang et al., 2017), text classification (Liu and Lapata, 2018), and machine translation (Edunov et al., 2018). This breakthrough performance fuels the demand to deploy such models to embedded systems (e.g., self-driving cars, mobile devices). However, neural models typically require a large storage or memory footprint, which is a significant concern when deploying them to memory-constrained devices (Hinton et al., 2015). To alleviate this limitation, several works have proposed methods that compress neural models while minimizing the loss of accuracy as much as possible (Han et al., 2015, 2016; Liu and Zhu, 2018).

However, deploying models for natural language processing (NLP) tasks is challenging. Unlike other domains, NLP models have an embedding layer which maps words and phrases to real-valued vectors. The problem is that these embeddings usually take up more parameters than the rest of the network.
In practice, for a neural translation model in OpenNMT (Klein et al., 2017), the word embedding parameters account for $80\%$ of the total parameters. It is therefore important to reduce the parameters of the embedding layer when deploying NLP models to memory-constrained devices.

To compress word embeddings, several works proposed code-book based approaches (Shu and Nakayama, 2018; Tissier et al., 2019), which represent each word as a few discrete, shared codes. For example, the words dog and dogs could be represented as $(3,5,2,1)$ and $(3,5,2,7)$, respectively. This sharing scheme and the discrete codes give the embeddings fewer parameters and some degree of interpretability. However, these methods assign the same length of codes to every word without considering the significance of words to downstream tasks. This means that, for sentiment analysis, excellent and the require the same amount of memory. This observation leaves room for improvement in compressing word embeddings.

In this paper, we attempt to further compress word embeddings by adaptively assigning codes of different lengths to each word in an end-to-end manner. We propose AdaComp, which adaptively learns to compress word embeddings by considering downstream tasks. The proposed compression works in two steps. First, each word in the pre-trained word embeddings learns to select its code length in an end-to-end manner by applying the Gumbel-softmax trick (Jang et al., 2016). After selecting its code length, each word learns discrete codes through a binary-constraint encoder and decoder network. To instill task-specific features into the selection process, we compress each word embedding while learning a downstream task, which allows us to learn task-specific features naturally. Compared to prior works, AdaComp can give each word more options to represent its meaning, since the proposed model utilizes a number of different code-books.
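A minimal sketch of the shared code-book idea from the example above, with hypothetical sizes (4 code-books of 8 entries each, 6-dimensional code vectors): a word's embedding is recovered by summing the code vector selected from each code-book.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: 4 code-books, 8 code vectors each, 6 dimensions.
codebooks = rng.standard_normal((4, 8, 6))

def reconstruct(codes):
    """Rebuild a word embedding from discrete codes such as (3, 5, 2, 1):
    pick one code vector per code-book and sum them."""
    return sum(codebooks[i][c] for i, c in enumerate(codes))

dog = reconstruct((3, 5, 2, 1))
dogs = reconstruct((3, 5, 2, 7))  # shares three of four codes with "dog"
```

Because dog and dogs share three of their four codes, their reconstructions differ only by the two code vectors selected in the last code-book, which is what makes the shared-code scheme compact.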
To showcase the general applicability of AdaComp, we conduct four different NLP tasks: sentiment classification, chunking, natural language inference, and language modeling. Comprehensive evaluation results not only show that our method compresses the original word embeddings well without hurting task accuracy, but also demonstrate that AdaComp assigns each word to a different code-book by considering its significance to the task. AdaComp can be applied to most existing NLP systems with minor modifications, since the proposed model is a network-agnostic, in-place architecture. We thus believe that existing NLP systems can benefit from our work.

We organize the remainder of this paper as follows. In Section 2, we discuss related work. In Section 3, we describe the proposed method. We report our performance evaluation results and analyze our methodology in detail in Sections 4 and 5, respectively. Finally, we conclude this paper in Section 6.

# 2 Related Work

In this section, we review several studies that attempt to compress neural models, including the embedding layer.

# 2.1 Neural Networks Compression

The majority of compression work targets the neural networks themselves (e.g., convolutional neural networks, recurrent neural networks), and most of it focuses on compressing neural models in the field of computer vision. These approaches usually include pruning, quantization, and low-precision representation methods. For pruning, several works (Han et al., 2015; Li et al., 2017; Lee et al., 2019) analyze how each connection (i.e., weight) affects the task, and remove redundant or unimportant connections from the networks. Some works (Han et al., 2016; Chen et al., 2016; Louizos et al., 2019) quantize the connections into several bins to enforce weight sharing.
These approaches represent each connection with a few representative values, selected by clustering (centroids) or hashing (hash buckets) techniques. Representing each connection with low precision (i.e., a few bits, or binary) is also appealing for compressing neural networks (Anwar et al., 2015; Courbariaux et al., 2015; Hubara et al., 2016). In particular, Courbariaux et al. (2015) and Hubara et al. (2016) show that a binary constraint is sufficiently effective for network learning without greatly affecting task accuracy.

# 2.2 Word Embeddings Compression

Several studies have proposed compression methods for word embeddings, because the majority of the parameters in NLP models lie in the embedding layer. For example, Ling et al. (2016) reduce the memory requirement of word embeddings by quantizing each dimension of the embeddings into significantly fewer bits than the standard 64 bits, showing that 4 or 8 bits are enough to represent each word embedding. Instead of reducing the parameters of each word embedding, Chen et al. (2016) reduce the number of words in the vocabulary by filtering out uncommon words; for the removed words, they reconstruct the embeddings by combining several frequent words. Recently, several methods (Shu and Nakayama, 2018; Shi and Yu, 2018; Tissier et al., 2019) decompose each word into a small number of codes and learn the corresponding code vectors to represent the original embeddings. Shu and Nakayama (2018) use a deep code-book approach to represent each word; to automatically learn discrete codes, they utilize reparameterization tricks in an encoder-decoder architecture. Similarly, Tissier et al. (2019) utilize an auto-encoder with a binary constraint to represent words. Compared to the aforementioned methods, AdaComp is the first work that represents each word with a different length of codes. Furthermore, we learn task-specific features directly by learning a downstream task at the same time.
# 3 Adaptive Compression

In this section, we describe the proposed method, denoted AdaComp, in detail. The primary strategy of AdaComp is straightforward and is shown in Figure 1.

![](images/78fc83aa6dfe4a439eee06972c1e1487c4431416125fc4eb4385b7a7cd33ed62.jpg)
Figure 1: Main strategy of our compression model (AdaComp). The solid line indicates the selected code-book.

We start with pre-trained word embeddings (e.g., GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013)), and the compression method works in two steps. Given an input embedding, AdaComp learns to adaptively select its code length in an end-to-end manner by applying the Gumbel-softmax trick (Jang et al., 2016) (Section 3.1). After selecting a code length, each word learns its discrete codes through an encoder and decoder with a binary latent space (Section 3.2).

# 3.1 Adaptive Code-book Selection

To represent each word as discrete codes, several code-book approaches build a single code-book $C_k$, where $k$ is the length of the codes. Instead of assigning the same length of codes, we adaptively assign codes of different lengths to each word. To this end, we maintain a set of code-books $C = \{C_{k_1}, C_{k_2}, \dots, C_{k_n}\}$. The objective of the first phase is to select a single code-book from this set in an end-to-end manner.

Given an input embedding $e_w$, we first compute an encoding vector $\alpha_w$ by feeding it to a small neural network:

$$
\alpha_w = \sigma_1\left(\theta^T \sigma_2\left(\theta^{\prime T} e_w + b^{\prime}\right) + b\right) \tag{1}
$$

where $\theta \in \mathbb{R}^{d \times |C|}$, $\theta^{\prime} \in \mathbb{R}^{d \times d}$ and $b, b^{\prime}$ are trainable weight matrices and biases of the network, respectively, and $d$ is the dimension of the original embeddings. The functions $\sigma_1(\cdot)$, $\sigma_2(\cdot)$ are the softmax and tanh functions, respectively.
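The selection step can be sketched as follows: compute the encoding $\alpha_w$ with the two-layer network of Eq. (1), then draw a relaxed one-hot vector over the $|C|$ code-books via the Gumbel-softmax trick introduced next. The shapes and temperature below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(e_w, theta_p, b_p, theta, b):
    """Eq. 1: alpha_w = softmax(theta^T tanh(theta'^T e_w + b') + b)."""
    h = np.tanh(theta_p.T @ e_w + b_p)
    z = theta.T @ h + b
    z = np.exp(z - z.max())          # numerically stable softmax
    return z / z.sum()

def gumbel_softmax(alpha, tau=0.5):
    """Eq. 2: relaxed one-hot sample softmax((log alpha + g) / tau)
    with i.i.d. Gumbel noise g."""
    g = -np.log(-np.log(rng.uniform(size=alpha.shape)))
    y = (np.log(alpha) + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

d, n_books = 6, 4                      # toy embedding dim and |C|
theta_p = rng.standard_normal((d, d))  # theta'
b_p = np.zeros(d)
theta = rng.standard_normal((d, n_books))
b = np.zeros(n_books)

alpha = encode(rng.standard_normal(d), theta_p, b_p, theta, b)
u = gumbel_softmax(alpha)              # soft one-hot code-book selector
```

As the temperature $\tau$ is lowered, `u` approaches a hard one-hot vector while remaining differentiable with respect to the encoder parameters.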
Then, we could select a single code-book by applying an argmax or a sign function to the resultant encoding. However, deriving discrete values (i.e., the index of the code-books) in the neural networks is not trivial, since the aforementioned functions are not differentiable.

To handle this problem, several methods have been proposed for dealing with discrete values in a neural network. In our work, we use the Gumbel-softmax trick, since we need a one-hot vector to represent the discrete index over the set of code-books. The Gumbel softmax allows the networks to naturally have a $k$-dimensional one-hot vector in their intermediate layers. Let $u_w$ be the one-hot vector for a word $w$; the $i$-th element of the vector is computed as follows:

$$
u_w^i = \operatorname{softmax}_\tau\left(\log \alpha_w^i + g_i\right) = \frac{\exp\left(\left(\log \alpha_w^i + g_i\right) / \tau\right)}{\sum_{j=1}^{|C|} \exp\left(\left(\log \alpha_w^j + g_j\right) / \tau\right)} \tag{2}
$$

where $g_1, \dots, g_{|C|}$ are i.i.d. noise samples drawn from the Gumbel distribution and $\tau$ is the relaxation factor of the Gumbel softmax. Shu and Nakayama (2018) similarly utilized the Gumbel softmax for compression; however, they used it to derive the discrete codes of each word, not the index over the set of code-books as in AdaComp.

# 3.2 Binarized Codes Learning

After selecting a specific code-book from the set $C$, AdaComp learns the discrete codes in the selected code-book. To this end, we use a binary-constraint encoder and decoder, which has a binary latent space. When training converges, the binary latent vector of each word is used as its discrete code, and the decoder is used as the code vectors of each code-book.

Again, we start from the original word embedding. To produce discrete codes, we feed the embeddings to the binary-constraint networks.
Let $w$ be a word in the input text and $n$ be the code length of the selected code-book; the code learning works as follows:

$$
e_w^{\prime} = W \phi\left(W^T e_w + b\right) + b^{\prime} \tag{3}
$$

where $W \in \mathbb{R}^{d \times n}$ and $b, b^{\prime}$ are trainable weight matrices and biases in the encoder and decoder, respectively. As can be seen from the equation, we use the same weights in the encoding and decoding phases. This is because such tied weights enable faster training and have a greater regularization effect than individual weights (Alain and Bengio, 2014; Gulordava et al., 2018). The function $\phi$ is the binary constraint function. We use the following threshold function:

$$
\phi(x_i) = ReLU(Sign(x_i)) = \begin{cases} 1 & x_i \geq 0, \\ 0 & \text{otherwise}, \end{cases}
$$

This function produces binary codes consisting of 1s and 0s. However, we face the same problem as in the previous section: the derivative of the sign function is zero almost everywhere, making it incompatible with back-propagation. To naturally learn the binary codes in an end-to-end manner, we apply the straight-through estimator (Hinton, 2013) to the threshold function. This estimator allows gradients to skip the threshold function. In our work, we use a variant of the straight-through estimator that takes a saturation effect into account. Let $\frac{\partial L}{\partial N}$ be the gradients above the threshold function; we obtain the gradients of the threshold function as follows:

$$
\frac{\partial L}{\partial \phi} = \frac{\partial L}{\partial N} 1_{|g| \leq 1} \tag{4}
$$

where $g$ is the value of the gradients above the threshold function. This estimator lets us learn binary codes naturally by preserving the gradient information while canceling the gradient when $g$ is too large, which could destabilize training.
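The threshold function and a saturating straight-through estimator in the spirit of Eq. 4 can be sketched in isolation (forward pass plus a hand-written backward rule). This is a sketch, not the paper's code; following the common hard-tanh reading of the saturating STE, the mask here is applied to the pre-activation.

```python
import numpy as np

def binary_threshold(pre):
    """phi(x) = ReLU(Sign(x)): 1 where the pre-activation is >= 0, else 0."""
    return (pre >= 0).astype(float)

def ste_backward(upstream, pre):
    """Saturating straight-through estimator: pass the upstream gradient
    where |pre-activation| <= 1, cancel it elsewhere."""
    return upstream * (np.abs(pre) <= 1)
```

In the forward pass the codes are exactly binary; in the backward pass the threshold is treated as the identity inside the unit interval, so gradients still reach the encoder weights.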
Thus far, we have adaptively selected a code-book from the set of code-books with different lengths and produced the binary codes of each word. To jointly learn the above two phases in an end-to-end manner, we relate them as follows:

$$
o_w = E_w^T u_w \tag{5}
$$

where $E_w \in \mathbb{R}^{|C| \times d}$ stacks the reconstructed embeddings of $w$ for all code lengths. By multiplying the selection vector (i.e., $u_w$) with the reconstructed embeddings, AdaComp learns the two phases in an end-to-end manner. We feed the reconstructed embedding $o_w$ to task-specific networks for learning a downstream task.

# 3.3 Orthogonality Constraint

To cover a large number of words in the vocabulary, it is important to reduce the redundancy of the representations of the code vectors. We thus impose an orthogonality constraint on the code vectors, which penalizes redundant latent representations and encourages each code vector to encode a different aspect of the original word embeddings:

$$
P = \left\| W^T W - I \right\|_F^2 \tag{6}
$$

where $W$ contains the parameters of the code vectors (i.e., the decoder), $I$ is an identity matrix, and $\| \cdot \|_F$ stands for the Frobenius norm of a matrix. We add this term to our objective function.

# 3.4 Optimization

Since AdaComp learns compression by learning a downstream task, the objective function depends on the task. For example, if the task is sentiment classification, the objective function could be the negative log-likelihood over sentiments. Let the task objective be $L_{task}$; the total objective function is as follows:

$$
L = L_{task} + \lambda \cdot P \tag{7}
$$

where $\lambda$ is the control factor of the orthogonality constraint, which we set to 0.01.

We empirically found that pretraining AdaComp significantly increases performance on several tasks (detailed in Section 5.1).
We thus pretrain our model using an auto-encoder loss:

$$
L_{pre} = \sum_{w \in V} \| o_w - e_w \|_2^2 \tag{8}
$$

When the pretraining loss converges, we attach the pre-trained AdaComp to the embedding layer of the task-specific networks and learn a downstream task using Eq. 7.

# 4 Experiments

In this section, we present the performance evaluation of the proposed model. To showcase the general applicability of AdaComp, we conduct four different tasks: sentiment classification, chunking, natural language inference, and language modeling. Through these tasks, we validate the efficacy of AdaComp in many-to-one (sentiment classification), many-to-many (chunking, language modeling), and multiple-input (natural language inference) settings.
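The losses of Eqs. 6–8 can be written down directly; a minimal sketch, with the $\lambda = 0.01$ default from Section 3.4:

```python
import numpy as np

def orthogonality_penalty(W):
    """Eq. 6: P = ||W^T W - I||_F^2, pushing code vectors (columns of W)
    toward encoding distinct aspects of the embeddings."""
    G = W.T @ W
    return np.sum((G - np.eye(G.shape[0])) ** 2)

def total_loss(task_loss, W, lam=0.01):
    """Eq. 7: L = L_task + lambda * P."""
    return task_loss + lam * orthogonality_penalty(W)

def pretrain_loss(O, E):
    """Eq. 8: summed squared reconstruction error over the vocabulary,
    with rows o_w (reconstructed) and e_w (original)."""
    return np.sum((O - E) ** 2)
```

A matrix with orthonormal columns incurs zero penalty, so the constraint only discourages code vectors from collapsing onto one another.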
| Task | Vocabulary size | Memory size (MB) |
| --- | --- | --- |
| SST-5 | 17,080 | 20.5 |
| CoNLL-2000 | 19,072 | 22.9 |
| SNLI | 34,045 | 40.9 |
| PTB | 9,809 | 11.8 |
Table 1: Memory size of the original word embeddings (i.e., GloVe) and the number of words in each task. Each embedding is a 300-dimensional vector represented in 32-bit floating point.

# Experimental settings

The proposed compression model starts from pre-trained word embeddings. In the experiments, we use the publicly available GloVe (Pennington et al., 2014) embeddings with 300 dimensions for 400k words. For hyper-parameter settings, we use the Adam (Kingma and Ba, 2014) optimizer with a 0.001 learning rate and a batch size of 64. We chose these parameters by validating on both the sentiment classification and natural language inference tasks.

# Model Comparison

In this paper, we examine the following methods, which use different kinds of compression:

- QWE (Ling et al., 2016): This model quantizes the weights from floating point to a few bits of precision, far less than the standard 64 bits. We evaluate two settings, 4- and 8-bit representations.
- Pruning (Han et al., 2015): This model prunes redundant weights from the networks. We prune the weights of the word embeddings until this technique removes $80\%$ or $90\%$ of the neurons from the embeddings.
- NC (Shu and Nakayama, 2018): This model compresses the pre-trained embeddings into a single code-book using a deep neural network. We compare two settings, a moderate size (16x16 code-book) and a large size (32x16 code-book).
- Bin (Tissier et al., 2019): This model compresses word embeddings through an autoencoder with a binary constraint on its latent space. Among their two methods, we choose rec since it performs better with deep neural networks (i.e., LSTM, CNN). We compare two settings with 64 and 128 binary codes.
- AdaComp (Ours): This is the model proposed in this paper. We use four different code-books, since we found that four code-books yield the most effective trade-off between performance and memory requirement (detailed in Section 5.2).
We use three different settings of the four code-books, with code lengths (128, 64, 32, 16), (64, 32, 16, 8), and (32, 16, 8, 4). In the tables and figures, we use the maximum code length to denote each model.

The aforementioned methods do not learn task-specific features, since they learn to compress embeddings using an auto-encoder loss. To compare fairly with our method, we apply the strategy of Shu and Nakayama (2018) to each model: we first fine-tune the original embeddings on the tasks and then compress the learned embeddings with the above methods.

# Evaluation metrics

We report both task performance and total memory size. The total memory size is estimated from all parameters used to represent all words in the tasks; note that it does not include the size of the task-specific networks. For our method, we report the memory size and performance as deployed to other systems, which includes the parameters of the multiple code-books and the binary codes of each word. The memory sizes of the original embeddings for each task are listed in Table 1.

# 4.1 Experimental Results

Table 2 shows the overall results on the four tasks. We describe each task and its task-specific networks below.

# Sentiment classification

Sentiment classification is the task of classifying a sentence into pre-defined classes. We use the Stanford Sentiment Treebank (SST) as a representative dataset. SST-5 has 5 sentiment classes (very negative, negative, neutral, positive, very positive). The performance is measured
| Model | SST-5 accuracy | ratio | CoNLL-2000 F1 | ratio | SNLI accuracy | ratio | PTB ppl | ratio |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GloVe | 42.1 | x1 | 93.1 | x1 | 79 | x1 | 100.3 | x1 |
| QWE (4-bit) (Ling et al., 2016) | 41.8 | x8 | 93.1 | x8 | 77.9 | x8 | 113.8 | x8 |
| QWE (8-bit) (Ling et al., 2016) | 41.9 | x4 | 93.3 | x4 | 78.6 | x4 | 109.1 | x4 |
| Pruning 90% (Han et al., 2015) | 35.4 | x10 | 90.4 | x10 | 78 | x10 | 113.2 | x10 |
| Pruning 80% (Han et al., 2015) | 41.6 | x5 | 91.7 | x5 | 78.2 | x5 | 124.7 | x5 |
| NC (16×16) (Shu and Nakayama, 2018) | 37.2 | x46 | 91.8 | x50 | 77.8 | x71 | 119.2 | x30 |
| NC (32×16) (Shu and Nakayama, 2018) | 40.9 | x23 | 92.4 | x25 | 78.5 | x35 | 112.4 | x15 |
| Bin (64) (Tissier et al., 2019) | 36.8 | x95 | 91.5 | x100 | 77.3 | x116 | 116 | x74 |
| Bin (128) (Tissier et al., 2019) | 39.1 | x48 | 92.7 | x49 | 77.6 | x59 | 110.1 | x37 |
| AdaComp (32) | 42.0 | x171 | 92.1 | x173 | 77.6 | x232 | 110.8 | x105 |
| AdaComp (64) | 43.2 | x84 | 93 | x89 | 78.4 | x119 | 106 | x52 |
| AdaComp (128) | 42.9 | x45 | 93.1 | x44 | 78.7 | x60 | 108.9 | x26 |
Table 2: Comparison results on four tasks. The ratio in the table is calculated by dividing the size of the original embeddings by that of the comparison models. Higher performance means a better model, except for PTB (perplexity).

by the accuracy on the test set. For the text classification model, we reproduce the LSTM model used in (Zhang et al., 2015) as a baseline. It feeds word embeddings in sequence and averages the hidden states of the last layer to represent an input sentence for classification. In this model, we set the hidden states to 450 dimensions and use two-stacked LSTMs.

As can be seen from the table, the code-book approaches (i.e., NC, Bin, AdaComp) generally show better results than the others in both performance and memory size. Among them, AdaComp produces more highly compressed embeddings than the others with better performance. For example, AdaComp (32) achieves as much as an $11\%$ improvement in test accuracy over the other code-book approaches whose number of codes equals our longest code length. Furthermore, our model requires nearly 2x less memory than the others.

# Chunking

Chunking is the task of dividing a sentence into syntactically correlated parts of words. The CoNLL 2000 shared task (Tjong Kim Sang and Buchholz, 2000) is a benchmark dataset for text chunking. It has 24 tags for each word, with start and end symbols. The performance is measured by F1 score. For the chunking model, we use the LSTM-based sequence tagger proposed by Huang et al. (2015). We set the hidden states to 300 dimensions and use two-stacked LSTMs.

The results are shown in the same table. Even though the quantization method (i.e., QWE 8-bit) achieves the best performance when restricting the values to 8 bits, its compression ratio is considerably lower than that of the other methods, and its performance starts to degrade as smaller bit widths are used to represent words.
Compared to the other code-book methods, AdaComp achieves strong performance with highly compressed embeddings. For example, AdaComp (128) does not hurt the accuracy of the original embeddings while compressing them approximately 44x.

# Textual entailment

Textual entailment is the task of determining whether a hypothesis is true, given a premise. The Stanford Natural Language Inference (SNLI) (Bowman et al., 2015) dataset is a benchmark for this task. This dataset contains approximately 550K hypothesis/premise pairs with entailment, contradiction, and neutral labels. For this task, we use the LSTM-based encoder model proposed by Bowman et al. (2016). It uses two different LSTMs with 300-dimensional hidden states to encode each input (i.e., premise and hypothesis). The concatenated vectors for the two sentences are classified into the three labels.

Even though the performance of our method, like the others, is lower than that of the original embeddings, AdaComp yields strong performance with a high compression ratio on this task. Compared to the other methods that use the largest memory, the proposed model (i.e., AdaComp (128)) requires the
We select a medium-size model with 650-dimensional hidden states to encode each word and apply dropout (Srivastava et al., 2014) on top of the LSTMs. + +Similar to the previous task, the compression methods perform below the original embeddings. We conjecture that this is because these tasks (i.e., language modeling, natural language inference) require more generalized features than the other tasks. This is why such tasks are used to pretrain neural models for various NLP applications (Cer et al., 2018; Radford et al.). Compared to the others, AdaComp again achieves the best results in terms of both metrics. + +# 5 Analysis + +# 5.1 Utility of Pre-training and Fine-tuning AdaComp + +Before AdaComp learns to compress word embeddings, we pretrain the model using the auto-encoder loss (Eq. 8). To show that the pretraining step is indeed effective, we report accuracy and the ratio of code-book assignment. Here, we evaluate the performance on all tasks when using different sets of code-books (detailed in Section 5.3). Figure 3a shows the performance results. The results show + +![](images/1281fa69bda1ba44f5eae5e25348eec5fd1323853a17de7f7b47e6b097bd4682.jpg) +(a) + +![](images/b1ece9690c0e31fd561aebbfa8e3027bc11d40757618dc3f45c47230b4ea3911.jpg) +(b) +Figure 3: Performance variation when using different settings (the length and the number of code-books) on each task. The dashed line indicates the model that is not pretrained. Performance (y-axis) indicates the evaluation metric for each task. Note that the evaluation metric for language modeling is perplexity, so lower is better on PTB. Best viewed in color. + +that the model with pretraining performs better than the model without pretraining. This is clearly evident when smaller code-books are used to represent words. We believe that the pretraining step guides our model towards basins of attraction of minima that support better generalization, in line with the findings of (Erhan et al., 2010).
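The pretraining signal can be illustrated with a minimal sketch: a code-book compressor selects code vectors via Gumbel-softmax and is trained to reconstruct the original embedding. This is only an illustrative reimplementation under stated assumptions (plain Python, a single code-book, squared-error reconstruction); the paper's Eq. 8 and exact architecture are not reproduced here, and the function names are ours.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0):
    """Sample a relaxed one-hot selection over code vectors (Jang et al., 2016)."""
    g = [-math.log(-math.log(random.random() + 1e-20) + 1e-20) for _ in logits]
    y = [(l + gi) / tau for l, gi in zip(logits, g)]
    m = max(y)
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]

def reconstruct(code_weights, code_book):
    """Weighted sum of code vectors approximates the original embedding."""
    dim = len(code_book[0])
    return [sum(w * code_book[k][d] for k, w in enumerate(code_weights))
            for d in range(dim)]

def autoencoder_loss(embedding, recon):
    """Squared-error reconstruction loss used for pretraining (cf. Eq. 8)."""
    return sum((e - r) ** 2 for e, r in zip(embedding, recon))
```

During pretraining, the reconstruction loss alone is minimized; during fine-tuning, the same selection mechanism is trained jointly with the downstream task loss.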
+ +Figure 2 compares the code-book assignment across settings for the SNLI task. When we only pretrain the compression model, a large portion of words, around $80\%$, is assigned to the largest code-book (i.e., 128). However, when we fine-tune the pretrained model on the task, the share of the largest code-book decreases significantly. This means that fine-tuning can reduce the memory requirement by a large margin. Without the pretraining step, the fine-tuned model achieves a smaller memory size than the pretrained models; however, we have shown that pretraining leads to better performance. To achieve a reasonable memory size with reliable performance, we apply both pretraining and fine-tuning to AdaComp. + +![](images/2a6d6613ba6b91cfb7a17d0c40aa73b7e8f20a35a89aae013a8312bb6c818442.jpg) +Figure 4: Visualization of the reconstructed embeddings with their code-book assignment. Best viewed in color. + +# 5.2 Effectiveness of the length or number of code-books + +We evaluate the performance of our method with different lengths or numbers of code-books. We first plot the results for different code-book lengths in Figure 3a. Here, we use four code-books by default, and the code length is halved for each successively smaller code-book. For example, the value 64 on the axis denotes (64, 32, 16, 8) and 32 denotes (32, 16, 8, 4). + +As Figure 3a shows, using larger code-books leads to better performance than models with smaller code lengths. This is because larger code-books can represent more aspects of the original embeddings. Figure 3b shows the performance variation for different numbers of code-books. Here, we use 128 code vectors and divide them into several code-books. The x-axis denotes the number of code-books, corresponding to (128), (64, 64), (64, 32, 32), (64, 32, 16, 16), and (64, 32, 16, 8, 8).
We observe that performance is much less sensitive to the number of code-books than to their lengths. To obtain good performance with a high compression ratio, we use four code-books in the experiments. + +# 5.3 Code-book distribution for a task + +Unlike other methods, AdaComp learns to compress embeddings while learning a downstream task. + +To examine how the model assigns words to different code-books, we visualize the code-book assignment. To this end, we project the reconstructed embeddings into 2-dimensional space using t-SNE (Maaten and Hinton, 2008), taking the embeddings from the sentiment classification task with AdaComp (64). To highlight words that are important to the task (i.e., sentiment words), we take the positive and negative sentiment words from (Hu and Liu, 2004) and mark those that exist in AdaComp's embeddings. + +Figure 4 shows the 2-dimensional projection of the reconstructed embeddings with their assigned code-books. We observe that important sentiment words are assigned to the longest code-book, and that the ratio of sentiment words decreases significantly for the smaller code-books. This result shows that AdaComp uses longer codes to represent task-sensitive words and shorter codes for words that matter less to the task. + +# 6 Conclusion + +In this paper, we have described AdaComp, which adaptively compresses word embeddings by using code-books of different lengths. To this end, we used the Gumbel-softmax trick and binary-constraint networks to learn the code-book selection and its discrete codes in an end-to-end manner. To showcase the general applicability of AdaComp, we conducted four different NLP tasks: sentence classification, chunking, natural language inference, and language modeling. The evaluation results clearly show that AdaComp obtains better results than other methods in terms of both accuracy and memory requirement.
We also found that AdaComp assigns each word to a code-book according to the word's significance to the task. Although we have focused on compressing the embeddings by learning task-specific features, AdaComp can also be used for NLP tasks without fine-tuning. We believe that our method can benefit simultaneously from other compression techniques, such as pruning (Han et al., 2016) and low-precision representation (Ling et al., 2016). We leave this as an avenue for future work. + +# Acknowledgement + +This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00079, Artificial Intelligence Graduate School Program (Korea University)), and a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2018R1A2A1A05078380). + +# References + +Guillaume Alain and Yoshua Bengio. 2014. What regularized auto-encoders learn from the data-generating distribution. The Journal of Machine Learning Research. +Sajid Anwar, Kyuyeon Hwang, and Wonyong Sung. 2015. Fixed point optimization of deep convolutional neural networks for object recognition. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP). +Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Samuel R Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder.
arXiv preprint arXiv:1803.11175. +Yunchuan Chen, Lili Mou, Yan Xu, Ge Li, and Zhi Jin. 2016. Compressing neural language models by sparse word representations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. 2015. Binaryconnect: Training deep neural networks with binary weights during propagations. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS). +Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research. + +Kristina Gulordava, Laura Aina, and Gemma Boleda. 2018. How to represent a word and predict it, too: Improving tied architectures for language modelling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Song Han, Huizi Mao, and William J Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In Proceedings of the International Conference on Learning Representations (ICLR). +Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS). +Geoffrey E. Hinton. 2013. Neural networks for machine learning. coursera, video lectures. +Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. CoRR, abs/1503.02531. +Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. 
In Proceedings of the International conference on Knowledge Discovery and Data mining (KDD). +Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. 2017. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. +Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS). +Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144. +Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI). +Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. +Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics. + +Namhoon Lee, Thalaiyasingam Ajanthan, and Philip HS Torr. 2019. Snip: Single-shot network pruning based on connection sensitivity. In Proceedings of the International Conference on Learning Representations (ICLR). +Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2017. Pruning filters for efficient convnets. In Proceedings of the International Conference on Learning Representations (ICLR). +Shaoshi Ling, Yangqiu Song, and Dan Roth. 2016. Word embeddings with limited memory. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). +Mason Liu and Menglong Zhu. 2018. 
Mobile video object detection with temporally-aware feature maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics. +Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, and Max Welling. 2019. Relaxed quantization for discretized neural networks. In Proceedings of the International Conference on Learning Representations (ICLR). +Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS). +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). +Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. +Kaiyu Shi and Kai Yu. 2018. Structured word embedding for low memory neural network language model. In Proceedings of Interspeech. +Raphael Shu and Hideki Nakayama. 2018. Compressing word embeddings via deep compositional code learning. In Proceedings of the International Conference on Learning Representations (ICLR). +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research. + +Julien Tissier, Christophe Gravier, and Amaury Habrard. 2019. Near-lossless binarization of word embeddings. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI), volume 33.
+Erik F Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of the International Conference on Computational Natural Language Learning (CoNLL). +Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of the International Conference on Neural Information Processing Systems (NeurIPS).
diff --git a/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/full.md b/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..210cd792845231e3bcc5662aca918f39f84493e3 --- /dev/null +++ b/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/full.md @@ -0,0
+1,458 @@ +# Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation + +Arya D. McCarthy\* and Xian Li and Jiatao Gu and Ning Dong + +$\spadesuit$ Johns Hopkins University + +Facebook + +arya@jhu.edu, {xianl,jgu,dnn}@fb.com + +# Abstract + +This paper proposes a simple and effective approach to address the problem of posterior collapse in conditional variational autoencoders (CVAEs). It thus improves the performance of machine translation models that use noisy or monolingual data, as well as in conventional settings. Extending the Transformer and conditional VAEs, our proposed latent variable model measurably prevents posterior collapse by (1) using a modified evidence lower bound (ELBO) objective which promotes mutual information between the latent variable and the target, and (2) guiding the latent variable with an auxiliary bag-of-words prediction task. As a result, the proposed model yields improved translation quality compared to existing variational NMT models on WMT $\mathrm{Ro}\leftrightarrow \mathrm{En}$ and $\mathrm{De}\leftrightarrow \mathrm{En}$. With latent variables being effectively utilized, our model demonstrates improved robustness over the non-latent Transformer in handling uncertainty: exploiting noisy source-side monolingual data (up to $+3.2$ BLEU), and training with weakly aligned web-mined parallel data (up to $+4.7$ BLEU). + +# 1 Introduction + +The conditional variational autoencoder (CVAE; Sohn et al., 2015) is a conditional generative model for structured prediction tasks like machine translation. This model, learned by variational Bayesian methods (Kingma and Welling, 2014), can capture global signal about the target in its latent variables. Unfortunately, variational inference for text generation often yields models that ignore their latent variables (Bowman et al., 2016), a phenomenon called posterior collapse.
+ +In this paper, we introduce a new loss function for CVAEs that counteracts posterior collapse, motivated by our analysis of CVAE's evidence lower bound objective (ELBO). Our analysis (§2) + +reveals that optimizing ELBO's second term not only brings the variational posterior approximation closer to the prior, but also decreases mutual information between latent variables and observed data. Based on this insight, we modify CVAE's ELBO in two ways (§3): (1) We explicitly add a principled mutual information term back into the training objective, and (2) we use a factorized decoder (Chen et al., 2017), which also predicts the target bag-of-words as an auxiliary decoding distribution to regularize our latent variables. Our objective is effective even without Kullback-Leibler term (KL) annealing (Bowman et al., 2016), a strategy for iteratively altering ELBO over the course of training to avoid posterior collapse. + +In applying our method to neural machine translation (NMT; Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014), we find that we have measurably mitigated posterior collapse. The latent variables are not ignored, even in the presence of a powerful Transformer decoder. By addressing this problem, the resulting NMT model has improved robustness and performance in low-resource scenarios. Noisy data like those scraped from the Internet (Smith et al., 2013; Michel and Neubig, 2018) present a challenge for NMT (Khayrallah and Koehn, 2018; Ott et al., 2018a); we are measurably more able to model this extrinsic uncertainty than the (non-latent) Transformer (Vaswani et al., 2017) or existing variational NMT with the CVAE architecture (Zhang et al., 2016). Finally, we extend the model to semi-supervised learning (Cheng et al., 2016) to more effectively learn from monolingual data. + +In summary, our conditional text generation model overcomes posterior collapse by promoting mutual information. 
It can easily and successfully integrate noisy and monolingual data, and it does so without sacrificing BLEU relative to non-latent NMT in typical settings. + +# 2 Formalism and Mathematical Analysis + +Here we review the standard framework for neural MT. Next, we connect this to the conditional variational autoencoder, a model with latent random variables whose distributions are learned by black-box variational Bayesian inference. Finally, we analyze the CVAE's objective to explain why these models will ignore their latent variables ("posterior collapse"). + +# 2.1 Neural Machine Translation + +Problem instances in machine translation are pairs of sequences $(\pmb{x} \triangleq [x_1, \dots, x_m], \pmb{y} \triangleq [y_1, \dots, y_n])$, where $\pmb{x}$ and $\pmb{y}$ represent the source and target sentences, respectively. Conventionally, a neural machine translation model is a parameterized conditional distribution whose likelihood factors in an autoregressive fashion: + +$$ p_{\boldsymbol{\theta}}(\boldsymbol{y} \mid \boldsymbol{x}) = \prod_{t=1}^{n} p_{\boldsymbol{\theta}}\left(y_{t} \mid \boldsymbol{x}, \boldsymbol{y}_{<t}\right). \tag{1} $$ + +The dominant translation paradigm first represents the source sentence as a sequence of contextualized vectors (using the encoder), then decodes this representation into a target hypothesis according to Equation 1. The parameters $\theta$ are learned by optimizing the log-likelihood of training pairs with stochastic gradient methods (Bottou and Cun, 2004; Kingma and Ba, 2015). Decoding is deterministic, using an efficient approximate search like beam search (Tillmann and Ney, 2003). The Transformer architecture with multi-head attention has become the state of the art for NMT (Vaswani et al., 2017).
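As a concrete illustration of Equation 1, the sequence log-likelihood is just the sum of per-step conditional log-probabilities under teacher forcing. A minimal sketch (the per-step probabilities would come from a trained decoder; here they are supplied directly for illustration):

```python
import math

def sequence_log_likelihood(step_probs):
    """log p(y | x) = sum_t log p(y_t | x, y_<t), cf. Equation 1.

    step_probs[t] is the decoder's probability of the gold token y_t
    given the source x and the gold prefix y_<t (teacher forcing).
    """
    return sum(math.log(p) for p in step_probs)

# Training maximizes this quantity (equivalently, minimizes the
# negative log-likelihood) over all pairs in the corpus.
nll = -sequence_log_likelihood([0.5, 0.25])  # -(log 0.5 + log 0.25)
```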
+ +# 2.2 The Conditional Variational Autoencoder + +Our NMT approach extends the conditional variational autoencoder (Sohn et al., 2015), which we identify as a generalization of Variational NMT (Zhang et al., 2016). It introduces a latent random variable $z$ into the standard NMT conditional distribution from Equation $1:^{1,2}$ + +$$ p_{\boldsymbol{\theta}}(\boldsymbol{y} \mid \boldsymbol{x}) = \int_{z} \underbrace{p_{\boldsymbol{\theta}}(\boldsymbol{y} \mid \boldsymbol{z}, \boldsymbol{x})}_{\text{decoder}} \cdot \underbrace{p_{\boldsymbol{\theta}}(\boldsymbol{z} \mid \boldsymbol{x})}_{\text{encoder}} \,\mathrm{d}z. \tag{2} $$ + +For a given source sentence $\pmb{x}$, first a latent variable $\pmb{z}$ is sampled from the encoder, then the target sentence $\pmb{y}$ is generated by the decoder: $\pmb{z} \sim p_{\theta}(\pmb{z} \mid \pmb{x})$, $\pmb{y} \sim p_{\theta}(\pmb{y} \mid \pmb{z}, \pmb{x})$.$^{3}$ + +It is intractable to marginalize Equation 2 over $z$. Instead, the CVAE training objective is a variational lower bound (the ELBO) of the conditional log-likelihood. It relies on a parametric approximation of the model posterior: $q_{\phi}(z \mid x, y)$. The variational family we choose for $q$ is a neural network whose parameters $\phi$ are shared (i.e., amortized) across the dataset. + +The ELBO lower-bounds the log-likelihood, as can be proven with Jensen's inequality. Its form is: + +$$ \begin{aligned} \mathcal{L}_{\mathrm{CVAE}} = \; & \mathbb{E}_{q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y})}[\log p_{\boldsymbol{\theta}}(\boldsymbol{y} \mid \boldsymbol{x}, \boldsymbol{z})] \\ & - \mathrm{D}_{\mathrm{KL}}\left(q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y}) \,\|\, p_{\boldsymbol{\theta}}(\boldsymbol{z} \mid \boldsymbol{x})\right), \end{aligned} \tag{3} $$ + +where $\mathrm{D}_{\mathrm{KL}}$ represents the Kullback-Leibler divergence between two distributions.
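With the common choice of diagonal Gaussians for $q_{\phi}(z \mid x, y)$ and $p_{\theta}(z \mid x)$, the KL term of Equation 3 has a closed form, and the ELBO can be estimated with a single posterior sample. A hedged sketch (the Gaussian parameterization is the standard one for variational NMT, but the paper's exact choice of distributions is not shown in this excerpt, and the function names are ours):

```python
import math

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    """Closed-form D_KL(q || p) for diagonal Gaussians: the KL term of Eq. 3."""
    return sum(
        math.log(sp / sq) + (sq ** 2 + (mq - mp) ** 2) / (2 * sp ** 2) - 0.5
        for mq, sq, mp, sp in zip(mu_q, sig_q, mu_p, sig_p)
    )

def elbo_single_sample(log_p_y_given_xz, mu_q, sig_q, mu_p, sig_p):
    """One-sample Monte Carlo estimate of the ELBO in Equation 3:
    reconstruction term (evaluated at a sampled z) minus the KL term."""
    return log_p_y_given_xz - kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p)
```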
+ +We use amortized variational inference to simultaneously perform learning and approximate posterior inference, updating both $\theta$ and $\phi$ with stochastic gradient methods. Improving $\theta$ raises the lower bound, and improving $\phi$ keeps the bound tight with respect to the model conditional log-likelihood. The same argument pertains to the joint maximization interpretation of the expectation-maximization (EM) algorithm (Neal and Hinton, 1998). (Our optimization is a variational generalization of EM.) + +# 2.3 Posterior Collapse + +Despite their success when applied to computer vision tasks, variational autoencoders in natural language generation suffer from posterior collapse, where the learnt latent code is ignored by a strong autoregressive decoder. This presents a challenge to conditional language generation tasks in NLP like machine translation. + +The phenomenon can be explained mathematically by an analysis of the ELBO objective, as well as from the perspective of a powerful decoder that can model the true distribution without needing the latent code. We consider both in this subsection. + +ELBO surgery Recall that the computed objective approximates the objective on the true data distribution $p_{\mathcal{D}}$ , using a finite number of samples + +![](images/a49234ed462410637574b5f9939059bc710b3e1ec80561815555d4a75fb42792.jpg) +Figure 1: Model architecture in training (with parallel data) and inference. + +(see, e.g., Brown et al., 1992): + +$$ +\mathcal {L} = \mathbb {E} _ {p _ {\mathcal {D}} (\boldsymbol {x}, \boldsymbol {y})} \left[ \mathcal {L} _ {\mathrm {C V A E}} \left(\phi , \boldsymbol {\theta}; \boldsymbol {x}, \boldsymbol {y}\right) \right]. 
\tag {4} +$$ + +We can factor the KL term of Equation 3 (omitting parameter subscripts) as: + +$$ +\begin{array}{l} \mathbb {E} _ {p _ {\mathcal {D}} (\boldsymbol {x}, \boldsymbol {y})} [ \mathrm {D} _ {\mathrm {K L}} (q (\boldsymbol {z} \mid \boldsymbol {x}, \boldsymbol {y}) \| p (\boldsymbol {z} \mid \boldsymbol {x})) ] \\ = \underbrace {\operatorname {H} (\boldsymbol {x} , \boldsymbol {y}) - \operatorname {H} (\boldsymbol {x} , \boldsymbol {y} \mid \boldsymbol {z})} _ {\triangleq \mathrm {I} _ {q _ {\phi}} (\boldsymbol {z}; \boldsymbol {x}, \boldsymbol {y})} + \underbrace {\mathbb {E} _ {q (\boldsymbol {z})} \log \frac {q (\boldsymbol {z})}{p (\boldsymbol {z})}} _ {\triangleq \mathrm {D} _ {\mathrm {K L}} (q _ {\phi} (\boldsymbol {z}) \| p (\boldsymbol {z}))}, \tag {5} \\ \end{array} +$$ + +which we prove in Appendix A, following (Hoffman and Johnson, 2016). + +As both the resulting mutual information and KL terms are non-negative (Cover and Thomas, 2006), the global minimum of Equation 5 is $\mathrm{I}_{q_{\phi}}(z; x, y) = \mathrm{D}_{\mathrm{KL}}(q_{\phi}(z) \| p(z)) = 0$ . Unfortunately, at this point, the consequence of the optimization is that the latent variable $z$ is conditionally independent of the data $(x, y)$ . + +A powerful decoder Revisiting Equation 3, we see that the decoder is conditioned on both the stochastic latent variable $z$ and the source text $x$ . A sufficiently high-capacity autoregressive decoder can model the conditional density directly, ignoring the latent variable and reducing inference to Equation 1. The KL term can then be reduced to its minimum (0) by equating the posterior to the prior. To prevent this, some work weakens the decoder in various ways. This is a challenge, because NMT requires a powerful decoder such as Transformer with direct attention to the encoder. 
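The decomposition in Equation 5 can be checked numerically on a toy discrete example, with the pair $(x, y)$ collapsed into a single data index $d$: the expected KL to the prior equals the mutual information plus the KL from the aggregated posterior $q(z)$ to the prior. A small self-contained sketch (the distributions below are arbitrary illustrative numbers):

```python
import math

def kl(p, q):
    """Discrete KL divergence D_KL(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy setting: data index d uniform over two values, approximate
# posterior q(z | d), and prior p(z), each over two latent states.
p_d = [0.5, 0.5]
q_z_given_d = [[0.9, 0.1], [0.2, 0.8]]
p_z = [0.5, 0.5]

# Left side of Eq. 5: expected KL(q(z|d) || p(z)) over the data.
lhs = sum(pd * kl(qz, p_z) for pd, qz in zip(p_d, q_z_given_d))

# Right side: I_q(z; d) + KL(q(z) || p(z)), where q(z) is the
# aggregated posterior.
q_z = [sum(pd * qz[k] for pd, qz in zip(p_d, q_z_given_d)) for k in range(2)]
mi = sum(pd * qz[k] * math.log(qz[k] / q_z[k])
         for pd, qz in zip(p_d, q_z_given_d) for k in range(2))
rhs = mi + kl(q_z, p_z)

assert abs(lhs - rhs) < 1e-12  # the identity holds exactly
```

Driving the left side to zero therefore forces the mutual information term to zero as well, which is exactly the collapse the text describes.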
+ +# 3 An Information-Infused Objective + +We modify our training objective to explicitly retain mutual information between the latent variable $z$ and the observation $(x, y)$ . Further, we use an auxiliary decoder that only uses the latent variable, not the encoder states. We combine it with the existing decoder as a mixture of softmaxes (Yang et al., 2018a). The model is trained with amortized variational inference. When source-language monolingual text is available, we augment our modified CVAE objective with a similarly modified (non-conditional) VAE objective. The training and inference strategy is summarized in Figure 1. + +# 3.1 Adding $\mathrm{I}_{q_{\phi}}(z; x, y)$ to ELBO + +To combat the optimization dilemma from Equation 5 (namely, that the objective discourages mutual information between the latent variable and the data), we explicitly add the mutual information term to the CVAE's ELBO and obtain a new training objective: + +$$ +\begin{array}{l} \mathcal {L} _ {\mathrm {M I C V A E}} = \mathcal {L} _ {\mathrm {C V A E}} + \mathrm {I} _ {q _ {\phi}} (\boldsymbol {z}; \boldsymbol {x}, \boldsymbol {y}) \\ = \mathbb {E} _ {q _ {\phi} (\boldsymbol {z} | \boldsymbol {x}, \boldsymbol {y})} \log p (\boldsymbol {y} \mid \boldsymbol {x}, \boldsymbol {z}) \\ - \mathrm {D} _ {\mathrm {K L}} \left(q _ {\phi} (\boldsymbol {z}) \| p (\boldsymbol {z})\right) \tag {6} \\ \end{array} +$$ + +The new training objective $\mathcal{L}_{\mathrm{MICVAE}}$ aims to match the aggregated approximate posterior distribution of the latent variable $q_{\phi}(z)$ (Hoffman and Johnson, 2016) to the aggregated-posterior prior distribution $p_{\theta}(z)$ .4 + +# 3.2 Guiding $z$ to Encode Global Information + +Several existing approaches weaken the decoder: limiting its capacity to encourage latent variables to be utilized (Bowman et al., 2016; Gulrajani et al., 2017). 
Here we propose a different approach: explicitly guiding the information encoded in $z$ without reducing the decoder's capacity. + +The decision to weaken the decoder can be understood in the context of Bits-Back Coding theory (Chen et al., 2017), which suggests that at optimality the decoder will model whatever it can locally, and only the residual will be encoded in the latent variable $z$. A consequence is that explicit information placement can give more powerful latent representations. + +Inspired by this Bits-Back perspective, we add a global auxiliary loss for $z$ to encode information which cannot be modelled locally by the autoregressive decoder $\prod_{t} p_{\theta}(y_t \mid x, y_{<t})$. +
| Model | $\mathrm{D}_{\mathrm{KL}}$ | $\mathrm{I}_{q_{\phi}}(z; x)$ | $\mathrm{I}_{q_{\phi}}(z; y)$ | NLL |
| --- | --- | --- | --- | --- |
| DCVAE + KLA | 0.001 | 0.001 | 4.2E-6 | 3.17 |
| Our model | 0.17 | 0.18 | 0.31 | 3.16 |
+ +Table 1: Our model mitigates posterior collapse. The KL value refers to $\mathrm{D}_{\mathrm{KL}}(q_{\phi}(\boldsymbol{z} \mid \boldsymbol{x}, \boldsymbol{y}) \parallel p_{\theta}(\boldsymbol{z} \mid \boldsymbol{x}))$ for DCVAE and $\mathrm{D}_{\mathrm{KL}}(q_{\phi}(\boldsymbol{z} \mid \boldsymbol{y}) \parallel p_{\theta}(\boldsymbol{z} \mid \boldsymbol{x}))$ for our model. + +# 5.4 Preventing Posterior Collapse + +We compare our model to a standard DCVAE lacking the new objective. We report four metrics of posterior collapse on the validation set of WMT Ro-En: + +1. Kullback-Leibler divergence (KL). +2. Mutual information between the latent variable and the source: $\mathrm{I}_{q_{\phi}}(\pmb{z}; \pmb{x})$. +3. Mutual information between the latent variable and the target: $\mathrm{I}_{q_{\phi}}(z; \mathbf{y})$. +4. Negative conditional log-likelihood (NLL) per token. + +Table 1 shows that when using the standard DCVAE ELBO, even with the common practice of KL annealing (KLA), both the KL loss and mutual information settle to almost 0, which is consistent with the analysis of Equation 5.
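The bag-of-words auxiliary objective described in §3.2 can be sketched as follows, assuming a simple linear projection from the latent variable to vocabulary logits (the names and the projection are illustrative; the paper additionally mixes this auxiliary distribution with the main decoder as a mixture of softmaxes, which is not shown here):

```python
import math

def bow_nll(z, W, b, target_word_ids):
    """Auxiliary bag-of-words loss computed from the latent variable alone:
    minus the sum, over target-sentence words w, of log softmax(W z + b)[w].
    No decoder states are used, so z must carry the global target signal."""
    logits = [sum(wi * zi for wi, zi in zip(row, z)) + bi
              for row, bi in zip(W, b)]
    # Numerically stable log-sum-exp for the softmax normalizer.
    m = max(logits)
    log_norm = m + math.log(sum(math.exp(l - m) for l in logits))
    return -sum(logits[w] - log_norm for w in target_word_ids)
```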
We also plot the progression of $\mathrm{D}_{\mathrm{KL}}$, $\mathrm{I}_{q_{\phi}}(\boldsymbol {z};\boldsymbol {x})$, and $\operatorname{I}_{q_{\phi}}(\boldsymbol {z};\boldsymbol {y})$ during training in Figure 2. The posterior collapse of the baseline model is apparent: both the $\mathrm{D}_{\mathrm{KL}}$ term and the mutual information terms drop to 0 at the beginning of training, as a result of the ELBO's design. On the other hand, our model, without using any annealing schedule, effectively increases mutual information and prevents the KL loss from settling to a degenerate solution early on.

# 5.5 Translation Quality

We report corpus-level BLEU (Papineni et al., 2002) on the test sets, where the translations are generated by sampling each $z_{k}$ with soft-assignment (vs. argmax).

Supervised Learning on Parallel Data First, we evaluate our model's performance when trained with parallel data on standard WMT datasets. Table 2 shows that our model consistently outperforms both VNMT and DCVAE models—which

![](images/8cc9aa36fa04348ccc36fd05cdd851282af8fa1423218a7730613c0924ef0a15.jpg)

![](images/14e0882afd24ab93bb72c99b020c3edee003ec450a353b4082fde20a881b8b07.jpg)

![](images/efb8691d9b5f5ca3d052027e07f4a8b3de910199175504d385e32f8ab79c8d88.jpg)

![](images/9360e1ae944f223b52f976d2e086965fa2ed7706f646e888f3d2d5e307115567.jpg)

![](images/accfd5f7b40450acbc8a46b435c3b2174d0201f1d2b0d6c7bcc751fc5831c036.jpg)

![](images/448f15a493855c0a1dd13feccc725e9e41c53ad0985d87acbff7af24fd5135d7.jpg)

![](images/5ffe304c2a7eff37f9cd2230fa9401be7fed76e6c161148005c7a91ce0ea9c64.jpg)
Figure 2: Row (A): comparison of KL and mutual information between baseline (DCVAE, solid triangle, orange color) and our model (solid circle, teal color). Rows (B) and (C): ablation study on relative contribution from MICVAE and BoW. All metrics are computed on the WMT16 Ro-En validation set over the course of 140k training updates.
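The "soft-assignment (vs. argmax)" sampling of each categorical $z_{k}$ in Section 5.5 can be illustrated with the Gumbel-Softmax relaxation (Jang et al., 2017; Maddison et al., 2017). This is a generic numpy sketch of that sampling step, not the authors' code; the logits are arbitrary illustrative values:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Relaxed one-hot sample: softmax((logits + Gumbel(0,1) noise) / tau)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = y - y.max()                                       # numerical stability
    e = np.exp(y)
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])                       # unnormalized class scores

soft = gumbel_softmax(logits, tau=1.0, rng=rng)           # a point on the simplex
hard = np.eye(len(logits))[np.argmax(logits)]             # the argmax alternative

print(soft, hard)  # soft sums to 1; as tau -> 0 each sample approaches one-hot
```

Soft assignment keeps the sample differentiable, which is what allows gradients to flow through the latent choice during training.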
| Model | WMT16 Ro-En | WMT16 En-Ro | WMT14 De-En | WMT14 En-De |
| --- | --- | --- | --- | --- |
| VNMT | 34.20 | 34.27 | 30.35 | 25.84 |
| DCVAE | 34.16 | 34.51 | 29.76 | 25.46 |
| Our model | 34.76 | 34.97 | 31.39 | 26.42 |
| Non-latent | 34.73 | 34.54 | 30.89 | 26.36 |
require ad-hoc KL annealing—while remaining on par with a strong Transformer baseline.

Table 2: BLEU scores on WMT benchmarks. The best result on each dataset is in bold. Our model provides minor gains ($\leq 0.5$ points) over the standard Transformer, rather than degrading like VNMT and DCVAE. Alongside the improvements in semi-supervised and noisy settings, this suggests that there is no BLEU compromise in choosing this model.

Semi-supervised with Source-side Monolingual Data Leveraging monolingual data is a common practice for improving low-resource NMT. One popular approach uses target-side monolingual data through "backtranslation" as data augmentation, but how to effectively leverage source-side monolingual data remains an open challenge (Sennrich et al., 2016a; Zhang and Zong, 2016; Wu et al., 2019). We use the joint training objective described in Equation 14. For a fair comparison, we also extend VNMT and DCVAE with the same joint training algorithm; that is, the newly added monolingual data is used to train their corresponding sequence encoders and inference networks with the standard VAE ELBO. The only difference is that our model is trained to promote the mutual information terms $\mathrm{I}_{q_{\phi}}(z;x)$ and $\mathrm{I}_{q_{\phi}}(z;y)$. As shown in Table 3, the proposed model thereby brings larger gains in semi-supervised learning with source-side monolingual data.

| Model | Fr-En | En-Fr |
| --- | --- | --- |
| Non-latent | 26.7 | 24.8 |
| DCVAE | 26.4 | 26.1 |
| + source mono | 27.3 | 26.4 |
| Our model | 28.6 | 26.3 |
| + source mono | 29.8 | 26.7 |

Table 3: Translation performance (BLEU) when utilizing source-side monolingual data. The best result under each data condition (with and without monolingual data) is in bold.

![](images/119d0297ce5193551913855f4f918694d590fa759fcfb6bcd861c6a72e11be49.jpg)
Figure 3: BLEU when increasing the number of noisy parallel sentences (ranked by Zipporah) in training, Si-En.

Robustness to Noisy Data While high-quality parallel data is scarce for low-resource language pairs, weakly aligned sentence pairs can be mined from massive unpaired data such as Paracrawl. We evaluate our model's performance when augmenting the training set with increasingly noisy parallel data filtered by Zipporah (Xu and Koehn, 2017). Because VNMT and DCVAE underperform our proposal in the previous experiments, we omit them from this experiment. Figure 3 shows the results in the Sinhala-English direction. Our model consistently outperforms the standard Transformer, which struggles as more (and noisier) data is added. The gap grows from +1.2 to +4.7 BLEU.

# 6 Analysis

Ablation Study How do the different ingredients of our proposed approach contribute to preventing posterior collapse and improving translation quality?
We explore two variants of the proposed model: 1) modified ELBO only, which adds the mutual information term to the training objective but receives no gradients from $\mathcal{L}_{\mathrm{BoW}}$; and 2) BoW only, which is equivalent to a DCVAE combined with the BoW decoder.

First, we perform the same collapse-metric evaluation as in Table 1. Figure 2(B) suggests that explicitly adding the mutual information term back to the training objective effectively raises both $\mathrm{I}_{q_{\phi}}(z; \boldsymbol{x})$ and $\mathrm{I}_{q_{\phi}}(z; \boldsymbol{y})$, while the remaining aggregated KL term is still optimized to zero. Such behavior is consistent with the analysis revealed
| Model | De-En (3.9M) | Ro-En (608K) |
| --- | --- | --- |
| BoW and $\mathcal{L}_{\mathrm{MICVAE}}$ | 31.4 | 34.8 |
| BoW only | 31.1 | 34.2 |
Table 4: Ablation study on translation quality (BLEU). The information-infused loss function provides additional performance over the DCVAE with a bag-of-words decoder.

in Equation 5. On the other hand, regularizing $z$ with the BoW decoder only, shown in Figure 2(C), is very effective both in preventing KL vanishing and in increasing mutual information. When the two approaches are combined, as shown in Figure 2(A), the model retains higher mutual information for both $\mathrm{I}_{q_{\phi}}(z; \boldsymbol{x})$ and $\mathrm{I}_{q_{\phi}}(z; \boldsymbol{y})$.

Next, we examine whether the difference in mutual information yields different translation quality. We compare two models, BoW only (Figure 2(C)) and both combined (Figure 2(A)), on the WMT14 De-En and WMT16 Ro-En test sets. Table 4 shows that the difference matters more in the low-data regime.

Analysis of Outputs Delving into model predictions helps us understand how our model outperforms the others. We examined erroneous 1-best predictions on the Ro-En data. We provide salient examples of the phenomena we identified in Table 5. (Naturally, as the Ro-En score differences are not dramatic, the predictions are largely similar.)

Several examples support the fact that our model produces more fluent and accurate translations than the baseline or VNMT. VNMT often struggles by introducing disfluent words, and both VNMT and the Transformer select justifiable but incorrect words. For instance, in our second example, the gender and animacy of the possessor are not specified in Romanian. Our model selects a more plausible pronoun for this context.

Analysis of Latent Variables Finally, we probe whether different latent variables encode different information. We randomly sample 100 sentences from two test sets of distinct domains, MTNT (Reddit comments) and WMT (news), with 50 sentences each. We plot the $t$-SNE projection of their corresponding samples $z_{k}$ inferred from $\Phi_{k}$, $k = 1,2,3,4$, respectively.
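This projection step can be sketched without heavyweight dependencies. Below, PCA stands in for t-SNE (which would typically come from scikit-learn), and the latent samples and domain labels are synthetic stand-ins rather than the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: one sample of z_k per sentence, 50 MTNT + 50 WMT sentences.
Z = np.vstack([rng.normal(0.0, 1.0, size=(50, 16)),
               rng.normal(0.5, 1.0, size=(50, 16))])
domain = np.array([0] * 50 + [1] * 50)  # 0 = MTNT (Reddit), 1 = WMT (news)

def project_2d(x):
    """Project rows of x to 2-D via PCA; t-SNE would replace this step."""
    xc = x - x.mean(axis=0)                       # center the features
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    return xc @ vt[:2].T                          # top-2 principal directions

points = project_2d(Z)  # one 2-D point per sentence, ready to scatter by domain
print(points.shape)
```

Coloring the scatter by `domain` is then what reveals (or fails to reveal) domain specialization of each $z_{k}$.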
Figure 4 suggests that different latent variables learn to organize the data in different ways, but there is no clear signal that any of them exclusively specializes in encoding a domain label. We leave a thorough analysis of
Source: ma intristeaza foarte tare .
Reference: that really saddens me .
Base: i am very saddened .
VNMT: i am saddened very loudly . (Wrong sense of tare)
Ours: i am very saddened .
Source: cred ca executia sa este gresita .
Reference: i believe his execution is wrong .
Base: i believe that its execution is wrong .
VNMT: i believe that its execution is wrong .
Ours: i believe that his execution is wrong .
Source: da , chinatown
Reference: yes , chinatown
Base: yes , chinatown
VNMT: yes , thin .
Ours: yes , chinatown
Source: nu stiu cine va fi propus pentru aceasta functie .
Reference: i do not know who will be proposed for this position .
Base: i do not know who will be proposed for this function .
VNMT: i do not know who will be proposed for this function .
Ours: i do not know who will be proposed for this position .
Source: recrutarea , o prioritate tot mai mare pentru companii
Reference: recruitment , a growing priority for companies
Base: recruitment , an increasing priority for companies
VNMT: recruitment , [article missing] increasing priority for companies
Ours: recruitment , a growing priority for companies
Table 5: Translation examples from the baseline Transformer, VNMT, and our model. Disfluent words or absences are in red, and slightly incorrect lexical choices are in blue. Romanian diacritics have been stripped.

![](images/6090dbeb18c655a9064d50de4a4e0c93ad954eaf24d64d449dfa20e5d952df66.jpg)
Figure 4: $t$-SNE visualization of $z_{k}$, $k = 1,2,3,4$ samples from 100 sentences from two datasets with distinct domains, MTNT (orchid) and WMT news (green).

their information specialization to future work.

# 7 Related Work

Unlike most prior work in (conditional) text generation, we tackle posterior collapse without requiring an annealing schedule (Bowman et al., 2016; Sønderby et al., 2016; Kim et al., 2018), a weakened decoder (Gulrajani et al., 2017), or a restricted variational family (Razavi et al., 2019).

Unlike Ma et al. (2018), who also employ bag-of-words as an NMT objective, our BoW decoder only sees the latent variable $z$, not the encoder states. Conversely, unlike Weng et al. (2017), our generative decoder has access to both the latent variable and the encoder states; bag-of-words prediction is handled by separate parameters.

VNMT (Zhang et al., 2016) applies CVAE with Gaussian priors to conditional text generation. VRNMT (Su et al., 2018) extends VNMT, modeling the translation process in greater granularity. Both needed manually designed annealing schedules to increase the KL loss and avoid posterior collapse. Discrete latent variables have been applied to NMT (Kaiser et al., 2017; Gu et al., 2018; Shen et al., 2019), without variational inference or addressing posterior collapse. Approaches to stop posterior collapse include aggressively trained inference networks (He et al., 2019), skip connections (Dieng et al., 2019), and expressive priors (Tomczak and Welling, 2018; Razavi et al., 2019).

Unlike our conditional approach, Shah and Barber (2018) jointly model the source and target text in a generative fashion.
Their EM-based inference is more computationally expensive than our amortized variational inference. Eikema and Aziz (2019) also present a generative (joint) model relying on autoencoding; they condition the source text $\pmb{x}$ on the latent variable $\pmb{z}$. Finally, Schulz et al. (2018), like us, value mutual information between the data and the latent variable. While they motivate KL annealing using mutual information, we show that the annealing is unnecessary.

# 8 Conclusion

We have presented a conditional generative model with latent variables whose distribution is learned with variational inference, then evaluated it in machine translation. Our approach does not require an annealing schedule or a hamstrung decoder to avoid posterior collapse. Instead, by providing a new analysis of the conditional VAE objective to improve it in a principled way and incorporating an auxiliary decoding objective, we measurably prevented posterior collapse.

As a result, our model has outperformed previous variational NMT models in terms of translation quality, and is comparable to the non-latent Transformer on standard WMT Ro $\leftrightarrow$ En and De $\leftrightarrow$ En datasets. Furthermore, the proposed method has improved robustness in dealing with uncertainty in data, including exploiting source-side monolingual data as well as training with noisy parallel data.

# 9 Acknowledgments

We thank Alexandra DeLucia, Chu-Cheng Lin, Hongyuan Mei, Kenton Murray, Guanghui Qin, and João Sedoc (alphabetically) for remarks on the exposition.

# References

Léon Bottou and Yann L. Cun. 2004. Large scale online learning. In Advances in Neural Information Processing Systems 16, pages 217-224. Curran Associates, Inc.
Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space.
In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21, Berlin, Germany. Association for Computational Linguistics.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Jennifer C. Lai, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31-40.
Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi-supervised learning for neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1965-1974, Berlin, Germany. Association for Computational Linguistics.
Thomas M. Cover and Joy A. Thomas. 2006. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley, New York, NY, USA.
Anna Currey, Antonio Valerio Miceli Barone, and Kenneth Heafield. 2017. Copied monolingual data improves low-resource neural machine translation. In Proceedings of the Second Conference on Machine Translation, pages 148-156, Copenhagen, Denmark. Association for Computational Linguistics.
Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. 2019. Avoiding latent variable collapse with generative skip models. In Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, pages 2397-2405.
Bryan Eikema and Wilker Aziz. 2019. Auto-encoding variational neural machine translation. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 124-141, Florence, Italy. Association for Computational Linguistics.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O. K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vázquez, and Aaron C. Courville. 2017. PixelVAE: A latent variable model for natural images. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.
Junxian He, Daniel Spokoyny, Graham Neubig, and Taylor Berg-Kirkpatrick. 2019. Lagging inference networks and posterior collapse in variational autoencoders. In ICLR.
Matthew D. Hoffman and Matthew J. Johnson. 2016. ELBO surgery: yet another way to carve up the variational evidence lower bound. In Workshop in Advances in Approximate Bayesian Inference, volume 1.
Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with Gumbel-Softmax. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Lukasz Kaiser, Aidan N. Gomez, Noam Shazeer, Ashish Vaswani, Niki Parmar, Llion Jones, and Jakob Uszkoreit. 2017. One model to learn them all. CoRR, abs/1706.05137v1.
Nal Kalchbrenner and Phil Blunsom. 2013.
Recurrent convolutional neural networks for discourse compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 119-126, Sofia, Bulgaria. Association for Computational Linguistics.
Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83, Melbourne, Australia. Association for Computational Linguistics.
Yoon Kim, Sam Wiseman, Andrew Miller, David Sontag, and Alexander Rush. 2018. Semi-amortized variational autoencoders. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2678-2687, Stockholm, Sweden. PMLR.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.
Adam Lopez and Matt Post. 2013. Beyond bitext: Five open problems in machine translation. In Twenty Years of Bitext.
Shuming Ma, Xu Sun, Yizhong Wang, and Junyang Lin. 2018. Bag-of-words as target for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 332-338, Melbourne, Australia. Association for Computational Linguistics.
Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. 2017. The concrete distribution: A continuous relaxation of discrete random variables. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Paul Michel and Graham Neubig. 2018.
MTNT: A testbed for machine translation of noisy text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 543-553, Brussels, Belgium. Association for Computational Linguistics. +Radford M. Neal and Geoffrey E. Hinton. 1998. A view of the EM algorithm that justifies incremental, sparse, and other variants. In Michael I. Jordan, editor, Learning in Graphical Models, pages 355-368. Springer Netherlands, Dordrecht. +Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018a. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 3953-3962. PMLR. +Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 1-9, Belgium, Brussels. Association for Computational Linguistics. + +Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. +Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Belgium, Brussels. Association for Computational Linguistics. +Ali Razavi, Aaron van den Oord, Ben Poole, and Oriol Vinyals. 2019. Preventing posterior collapse with delta-vaes. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. +Danilo Jimenez Rezende and Shakir Mohamed. 2015. Variational inference with normalizing flows. 
In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1530-1538. JMLR.org. +Philip Schulz, Wilker Aziz, and Trevor Cohn. 2018. A stochastic decoder for neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1243-1252, Melbourne, Australia. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Harshil Shah and David Barber. 2018. Generative neural machine translation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 1346-1355. Curran Associates, Inc. +Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 5719-5728. PMLR. + +Jason R. Smith, Herve Saint-Amand, Magdalena Plamada, Philipp Koehn, Chris Callison-Burch, and Adam Lopez. 2013. Dirt cheap web-scale parallel text from the common crawl. 
In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1374-1383, Sofia, Bulgaria. Association for Computational Linguistics. +Kihyuk Sohn, Honglak Lee, and Xinchen Yan. 2015. Learning structured output representation using deep conditional generative models. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3483-3491. Curran Associates, Inc. +Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. 2016. Ladder variational autoencoders. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3738-3746. Curran Associates, Inc. +Jinsong Su, Shan Wu, Deyi Xiong, Yaojie Lu, Xianpei Han, and Biao Zhang. 2018. Variational recurrent neural machine translation. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5488-5495. AAAI Press. +Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc. +Christoph Tillmann and Hermann Ney. 2003. Word reordering and a dynamic programming beam search algorithm for statistical machine translation. Computational Linguistics, 29(1):97-133. +Jakub M. Tomczak and Max Welling. 2018. VAE with a VampPrior. 
In International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 9-11 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, volume 84 of Proceedings of Machine Learning Research, pages 1214-1223. PMLR. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc. +Rongxiang Weng, Shujian Huang, Zaixiang Zheng, Xinyu Dai, and Jiajun Chen. 2017. Neural ma + +chine translation with word predictions. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 136-145, Copenhagen, Denmark. Association for Computational Linguistics. +Lijun Wu, Yiren Wang, Yingce Xia, Tao QIN, Jianhuang Lai, and Tie-Yan Liu. 2019. Exploiting monolingual data at scale for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4205-4215, Hong Kong, China. Association for Computational Linguistics. +Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy web-crawled parallel corpora. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2945-2950, Copenhagen, Denmark. Association for Computational Linguistics. +Yilin Yang, Liang Huang, and Mingbo Ma. 2018a. Breaking the beam search curse: A study of (re-)scoring methods and stopping criteria for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3054-3059, Brussels, Belgium. Association for Computational Linguistics. +Zhilin Yang, Zihang Dai, Ruslan Salakhutdinov, and William W. 
Cohen. 2018b. Breaking the softmax bottleneck: A high-rank RNN language model. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net. +Biao Zhang, Deyi Xiong, Jinsong Su, Qun Liu, Rongrong Ji, Hong Duan, and Min Zhang. 2016. Variational neural discourse relation recognizer. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 382-391, Austin, Texas. Association for Computational Linguistics. +Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Austin, Texas. Association for Computational Linguistics. +Shengjia Zhao, Jiaming Song, and Stefano Ermon. 2019. Infovae: Balancing learning and inference in variational autoencoders. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 5885-5892. AAAI Press. + +Tiancheng Zhao, Kyusong Lee, and Maxine Eskenazi. 2018. Unsupervised discrete sentence representation learning for interpretable neural dialog generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1098-1107, Melbourne, Australia. Association for Computational Linguistics. + +# A Derivation of Equation 5 + +To prove the decomposition of the conditional VAE's regularization term into a mutual information term and a KL divergence term, we introduce a random variable $\ell$ representing an index into the training data; it uniquely identifies $(\pmb{x}^{(\ell)},\pmb{y}^{(\ell)})$ . 
This alteration is "entirely algebraic" (Hoffman and Johnson, 2016) while making our process both more compact and more interpretable.

$$
q (\ell , \boldsymbol {z}) \triangleq q (\ell) q (\boldsymbol {z} \mid \ell) \quad q (\boldsymbol {z} \mid \ell) \triangleq q (\boldsymbol {z} \mid \boldsymbol {x} ^ {(\ell)}, \boldsymbol {y} ^ {(\ell)}) \quad q (\ell) \triangleq \frac {1}{L}
$$

$$
p (\ell , \boldsymbol {z}) \triangleq p (\ell) p (\boldsymbol {z} \mid \ell) \quad p (\boldsymbol {z} \mid \ell) \triangleq p (\boldsymbol {z}) \quad p (\ell) \triangleq \frac {1}{L}
$$

We define the marginals $p(z)$ and $q(z)$ as the aggregated posterior (Tomczak and Welling, 2018) and aggregated approximate posterior (Hoffman and Johnson, 2016). (This allows the independence assumption above.) Moving forward will require just a bit of information theory: the definitions of entropy and mutual information. For these, we direct the reader to the text of Cover and Thomas (2006).

Given these definitions, the regularization term of the ELBO objective may be expressed as

$$
\mathbb {E} _ {\ell} \left[ \mathrm {D} _ {\mathrm {K L}} \left(q (\boldsymbol {z} \mid \boldsymbol {x}, \boldsymbol {y}) \parallel p (\boldsymbol {z} \mid \boldsymbol {x})\right) \right] = \sum_ {\ell} \frac {1}{L} \sum_ {\boldsymbol {z}} q (\boldsymbol {z} \mid \boldsymbol {x} ^ {(\ell)}, \boldsymbol {y} ^ {(\ell)}) \log \frac {q (\boldsymbol {z} \mid \boldsymbol {x} ^ {(\ell)}, \boldsymbol {y} ^ {(\ell)})}{p (\boldsymbol {z} \mid \boldsymbol {x} ^ {(\ell)})}.
$$

We may now multiply the numerator and denominator by $\frac{1}{L}$ and use its equivalence to $p(\ell)$ and $q(\ell)$.

$$
= \sum_ {\ell , \boldsymbol {z}} q (\ell , \boldsymbol {z}) \log \frac {q (\ell , \boldsymbol {z})}{p (\ell , \boldsymbol {z})}
$$

Factoring then gives us two log terms.

$$
= \sum_ {\ell , \boldsymbol {z}} q (\ell , \boldsymbol {z}) \left[ \log \frac {q (\boldsymbol {z})}{p (\boldsymbol {z})} + \log \frac {q (\ell \mid \boldsymbol {z})}{p (\ell)} \right]
$$

We then distribute the weighted sum.
+ +$$ += \mathrm {D} _ {\mathrm {K L}} (q (\boldsymbol {z}) \| p (\boldsymbol {z})) + \mathbb {E} _ {q (\boldsymbol {z})} \left[ \mathrm {D} _ {\mathrm {K L}} (q (\ell | \boldsymbol {z}) | p (\ell)) \right] +$$ + +Because of how we defined $p(\ell)$ , we expand the second term and factor out the constant $\mathrm{H}(p(\ell)) = \log L$ . + +$$ += \mathrm {D} _ {\mathrm {K L}} (q (\boldsymbol {z}) \| p (\boldsymbol {z})) + \log L - \mathbb {E} _ {q (\boldsymbol {z})} [ \mathrm {H} (q (\ell | \boldsymbol {z})) ] +$$ + +Finally, we arrive at the result from Equation 5 by using $\log L = \mathrm{H}(q(\ell))$ . + +$$ += \mathrm {D} _ {\mathrm {K L}} (q (z) \parallel p (z)) + \mathrm {I} _ {q} (\ell ; z). +$$ \ No newline at end of file diff --git a/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/images.zip b/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..beb093e1ab3bbeb08a42c1f0f1cf5682a9b9f2d0 --- /dev/null +++ b/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5d3f8ad2104ac56e971a6fd6428197c01e48ab49a878f5bb487e6e924020cb9c +size 484307 diff --git a/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/layout.json b/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a2fe4e8d32cdc1c3850b25ef70d73cbdc85d0262 --- /dev/null +++ b/addressingposteriorcollapsewithmutualinformationforimprovedvariationalneuralmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:10bf450849d301b0884de846bdcca09281e995a5bfafc61f293f0b0ffd045194 +size 521127 diff --git 
# A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers

Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su

Institute of Information Science, Academia Sinica, Taiwan

miao.iisas@gmail.com, {ccliang, kysu}@iis.sinica.edu.tw

# Abstract

We present ASDiv (Academia Sinica Diverse MWP Dataset), a diverse (in terms of both language patterns and problem types) English math word problem (MWP) corpus for evaluating the capability of various MWP solvers. Existing MWP corpora for studying AI progress remain limited either in language usage patterns or in problem types. We thus present a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem types taught in elementary school. Each MWP is annotated with its problem type and grade level (indicating its level of difficulty). Furthermore, we propose a metric to measure the lexicon usage diversity of a given MWP corpus, and demonstrate that ASDiv is more diverse than existing corpora. Experiments show that our proposed corpus reflects the true capability of MWP solvers more faithfully. The code and dataset are publicly available.

# 1 Introduction

Human math/science tests have been considered more suitable for evaluating AI progress than the Turing test (Clark and Etzioni, 2016). Among them, math word problems (MWPs) are frequently chosen to study natural language understanding and simulate human problem solving (Bakman, 2007; Mukherjee and Garain, 2008; Liang et al., 2016), because the answer is not a span within the given problem text that can be directly extracted.
Table 1 shows a typical example of an MWP, which consists of a few sentences that involve quantities.

Current MWP corpora can be classified into four categories: (1) the Number Word Problem corpus (Shi et al., 2015), which contains number word problems only; (2) the Arithmetic Word Problem corpora (Hosseini et al., 2014; Roy et al., 2015), which involve the four basic arithmetic operations
| Math Word Problem |
| --- |
| A sandwich is priced at $0.75. A cup of pudding is priced at $0.25. Tim bought 2 sandwiches and 4 cups of pudding. How much money should Tim pay? |
| Solution: 0.75 × 2 + 0.25 × 4 = 2.5 |
Table 1: A math word problem

(addition, subtraction, multiplication and division) with either single-step or multi-step operations; (3) the Algebraic Word Problem corpora (Kushman et al., 2014; Koncel-Kedziorski et al., 2015; Roy and Roth, 2017; Upadhyay and Chang, 2015; Wang et al., 2017), which focus on algebraic MWPs; and (4) the Mixed-type MWP corpora (Huang et al., 2016; Ling et al., 2017; Amini et al., 2019), which are large-scale collections of either daily algebra or GRE/GMAT examination MWPs. Table 2 compares existing English MWP corpora.

However, these existing corpora are either limited in the diversity of the associated problem types (and lexicon usage patterns), or lack information such as difficulty levels. For example, categories (1), (2), and (3) collect only certain types of MWPs. On the other hand, although large-scale mixed-type MWP corpora contain more problem types, the annotated answers or formulas are sometimes inconsistent, and the corresponding difficulty level is usually not provided.

Furthermore, low-diversity corpora are typically characterized by highly similar problems, which usually yields over-optimistic results (Huang et al., 2016), as the answer can frequently be obtained simply from the equation template associated with the most similar MWP in the training set. Roy and Roth (2017) showed that performance drops significantly when highly similar MWPs are removed. Therefore, dataset diversity is more critical than dataset size for accurately judging the true capability of an MWP solver.

We thus present ASDiv (Academia Sinica Diverse MWP Dataset), a new MWP corpus that contains diverse lexicon patterns with wide problem
| Corpus | MWP category | Annotation | Size |
| --- | --- | --- | --- |
| Dolphin1878 (Shi et al., 2015) | Number-word problems | Equation/answer | 1,878 |
| AI2 (Hosseini et al., 2014) | Arithmetic word problems | Equation/answer | 395 |
| IL (Roy et al., 2015) | Arithmetic word problems | Equation/answer | 562 |
| AllArith (Roy and Roth, 2017) | Arithmetic word problems | Equation/answer | 831 |
| KAZB (Kushman et al., 2014) | Algebraic word problems | Equation/answer | 514 |
| ALGES (Koncel-Kedziorski et al., 2015) | Algebraic word problems | Equation/answer | 508 |
| DRAW (Upadhyay and Chang, 2015) | Algebraic word problems | Equation/answer/template | 1,000 |
| Dolphin18K (Huang et al., 2016) | Arithmetic/algebraic + domain knowledge problems | Equation/answer | 18K |
| AQuA (Ling et al., 2017) | Arithmetic/algebraic + domain knowledge problems | Rationale/answer (multi-choice problems) | 100K |
| MathQA (Amini et al., 2019) | Arithmetic/algebraic + domain knowledge problems | Decomposed linear formula/answer (multi-choice problems) | 37K |
| ASDiv | Arithmetic/algebraic + domain knowledge problems | Equation/answer + grade-level/problem-type | 2,305 |
Table 2: Comparison of different English MWP corpora

type coverage. Each problem provides consistent equations and answers. It is further annotated with the corresponding problem type and grade level, which can be used to test the capability of a system and to indicate the difficulty level of a problem, respectively. The diverse lexicon patterns can be used to assess whether an MWP solver obtains answers by understanding the meaning of the problem text, or simply by finding an existing MWP with similar patterns (Huang et al., 2016). Problem type diversity is crucial for evaluating whether a system is competitive with humans when solving MWPs of various categories. In addition, to assess text diversity, we propose a lexicon usage diversity metric for MWP corpora.

This paper makes the following contributions: (1) We construct a diverse (in terms of lexicon usage), wide-coverage (in problem types), and publicly available MWP corpus, with annotations that can be used to assess the capability of different systems. (2) We propose a lexicon usage diversity metric to measure the diversity of an MWP corpus and use it to evaluate existing corpora. (3) We show that the real performance of state-of-the-art (SOTA) systems is still far behind human performance when evaluated on a corpus that mimics a real human test.

# 2 Problem Type

A problem type ($PT$) indicates a crucial math operation pattern for solving an MWP. As MWPs of the same problem type share a similar pattern (in language usage, logic representation, or inference), they indicate stereotypical math operation patterns that can be adopted to solve an MWP (Liang et al., 2018). In ASDiv, each MWP is annotated with a specific $PT$ taught at elementary schools. Examples of selected $PTs$ are shown in Table 5 (Appendix).
Currently, we provide 24 different common $PTs$ and classify them into three main categories, described below, according to the operations involved. These $PTs$ are usually specified in math textbooks and are mostly covered in elementary school.

Basic arithmetic operations: This category includes Addition, Subtraction, Difference, Multiplication, three different Divisions (common-division, floor-division, and ceil-division), Sum, Surplus, Number-Operation, three different Time-Variant-Quantities (TVQ), and Multi-step. The first seven types are self-explanatory. Number-Operation indicates that the problem description consists mainly of numbers and their relations. TVQ denotes an entity-state related variable (e.g., initial/current/final state and change) whose value is updated sequentially according to a sequence of events described in an MWP. Last, in a Multi-step problem, the answer is obtained from multiple arithmetic operations.

Aggregative operations: This category includes (1) Comparison, (2) Set-Operation, (3) Ratio, (4) Number-Pattern, (5) Algebra-1, and (6) Algebra-2. The first three types are self-explanatory. Number-Pattern refers to problems which involve deducing a pattern from a sequence of integers (Table 5 (Appendix) shows an example). Algebra-1 and Algebra-2 are algebraic problems with one and two unknown variables, respectively.

Additional domain knowledge required: This category includes Greatest Common Divisor, Least Common Multiple, Geometry, and UnitTrans. Additional geometric knowledge (e.g., area = length × width) is required for Geometry problems. UnitTrans means that the answer is obtained via conversion within the metric system of units (e.g., converting 'miles' to 'kilometers').

![](images/652371ea7021f0689aca9e128d841da50afe0f5ac0be9a16d00bc55249c44ba7.jpg)
Figure 1: Lexicon usage diversity of various corpora.
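The TVQ types above can be read as sequential state updates. A minimal sketch follows, echoing the TVQ-Final example from Table 5 (Melanie's dimes); the event names ("init", "gain", "loss") are our own illustrative encoding, not part of the corpus annotation.

```python
# TVQ as a sequence of state updates: the final state follows from the
# initial state and the gains/losses described by the problem narrative.
events = [("init", 7), ("gain", 8), ("gain", 4)]  # Melanie's dimes (Table 5)

state = 0
for kind, qty in events:
    if kind == "init":
        state = qty
    elif kind == "gain":
        state += qty
    else:  # "loss"
        state -= qty

print(state)  # 19: a TVQ-Final question asks for this value
```

A TVQ-Initial or TVQ-Change problem instead leaves the first or a middle event unknown and asks the solver to recover it from the final state.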
# 3 ASDiv Math Word Problem Corpus

This corpus was designed based on the following guidelines: (1) The corpus should be as diverse as possible in terms of lexicon usage, so that the answer is less likely to be obtained via mechanical/statistical pattern matching without understanding the content. (2) The corpus should cover most PTs found in primary school, so that it can approximate real human tests. (3) The corpus should be annotated with sufficient information, so that it can be used not only to assess the capability of various systems but also to facilitate system development.

# 3.1 Corpus Diversity Metrics

We first propose a lexicon usage diversity metric, in terms of BLEU (Papineni et al., 2002), to measure the degree of diversity of a given corpus. The metric ranges from 0 to 1; a higher value indicates a more diverse corpus. We first use Stanford CoreNLP (Manning et al., 2014) to tokenize and POS-tag the text, and then use NLTK (Bird and Loper, 2004) to lemmatize each token. Furthermore, we normalize the original sentences with (1) stop word removal and (2) named entity and quantity normalization, which replaces the associated person names and quantity values with meta symbols in an MWP (i.e., two MWPs are regarded as identical if they differ only in names or quantity values). This places the focus on the essential words that matter in the MWP. The resulting sequence is then used to measure the lexicon usage diversity specified below.

Let $P = \{P_{1}, P_{2}, \ldots, P_{M}\}$ be a specific set of MWPs in a given corpus with the same $PT$, where $P_{i}$ is the $i$-th MWP in $P$. For a given $P_{i}$, we define its lexicon usage diversity ($LD$) as

$$
LD_{i} = 1 - \max_{j, j \neq i} \frac{BLEU(P_{i}, P_{j}) + BLEU(P_{j}, P_{i})}{2},
$$

where $BLEU(P_i, P_j)$ is the BLEU score between $P_i$ and $P_j$ ($j \neq i$, $j \in \{1, 2, \ldots, M\}$). We measure the BLEU score bidirectionally with n-grams up to $n = 4$.
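A self-contained sketch of this metric is below. The BLEU implementation is a simplified sentence-level BLEU (clipped n-gram precision with a brevity penalty; the epsilon smoothing of zero counts is our assumption, and the normalized toy problems are illustrative only):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[k:k + n]) for k in range(len(tokens) - n + 1))

def bleu(ref, hyp, max_n=4, eps=1e-9):
    """Simplified sentence-level BLEU: clipped n-gram precision up to 4-grams
    with a brevity penalty; eps-smoothing of zero counts is our assumption."""
    log_p = 0.0
    for n in range(1, max_n + 1):
        ref_c, hyp_c = ngrams(ref, n), ngrams(hyp, n)
        overlap = sum(min(c, ref_c[g]) for g, c in hyp_c.items())
        log_p += math.log(max(overlap, eps) / max(sum(hyp_c.values()), 1))
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))
    return bp * math.exp(log_p / max_n)

def lexicon_diversity(problems):
    """LD_i = 1 - max over j != i of (BLEU(P_i, P_j) + BLEU(P_j, P_i)) / 2."""
    lds = []
    for i, p_i in enumerate(problems):
        sims = [(bleu(p_i, p_j) + bleu(p_j, p_i)) / 2
                for j, p_j in enumerate(problems) if j != i]
        lds.append(1 - max(sims))
    return lds  # the mean of these values is the CLD of the corpus

# Toy normalized problems (names/quantities replaced by meta symbols):
problems = [
    "NAME has NUM apples and buys NUM more apples".split(),
    "NAME has NUM apples and buys NUM more apples".split(),  # duplicate
    "a train crosses a pole in NUM seconds".split(),
]
lds = lexicon_diversity(problems)  # duplicates get LD = 0
```

The duplicate pair scores BLEU 1 against each other and thus LD 0, while the dissimilar third problem scores an LD close to 1; in practice one would substitute a standard BLEU implementation such as NLTK's.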
This measure is mainly used to identify repeated usage of lexicon and phrases, for which $n = 4$ suffices. $LD_i$ evaluates the lexicon diversity between $P_i$ and all $P_j$ ($j \neq i$). Furthermore, the mean of all $LD_i$ (over the same corpus) can be used to indicate the corpus lexicon diversity (CLD). Adding a new MWP with a low LD to an existing corpus introduces little new information, so such an MWP should be either discarded or revised. This diversity metric can thus help the corpus constructor decide whether an MWP should be adopted directly or not.

# 3.2 Challenges in Constructing a Large-Scale MWP Dataset

Since MathQA is the second-largest dataset in Table 2 (with 37K MWPs), and is cleaner (Amini et al., 2019) than the largest one (AQuA), we first evaluate it with the above LD measurement. Figure 1 shows that its CLD is only 0.05.

To understand the reason for the low diversity of MathQA ($LD = 0$ for $85\%$ of the MathQA MWPs), we investigated this dataset. We observed that MathQA includes various MWP subsets, each of which shares the same sentence pattern among its members. Figure 1 clearly shows this skewed distribution. Figure 3 (Appendix) shows a subset in which all 105 members share the same sentence pattern.

Since most MWP solvers can only solve arithmetic MWPs, we further selected its arithmetic subset, generated the corresponding equations according to the annotated formulas, and then solved the equations using the SymPy package. Afterwards, we verified the consistency between the answer obtained from the annotated formula and the labeled answer. The results show that the annotated formulas of $27\%$ of the problems do not match their labeled answers.
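The consistency check described above can be sketched as follows. For a dependency-free sketch we evaluate MathQA-style operator lists with exact `fractions` arithmetic in place of SymPy; the operator subset and the sample formula (mirroring Table 1's problem) are illustrative assumptions.

```python
from fractions import Fraction

# A small illustrative subset of MathQA-style operators.
OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def evaluate(formula, numbers):
    """Evaluate a '|'-separated operator list, where nK is the K-th number
    in the problem text and #K is the result of the K-th earlier step."""
    results = []
    for step in formula.split("|"):
        op, rest = step.split("(", 1)
        args = []
        for a in rest.rstrip(")").split(","):
            args.append(results[int(a[1:])] if a.startswith("#")
                        else Fraction(numbers[int(a[1:])]))
        results.append(OPS[op](*args))
    return results[-1]

def consistent(formula, numbers, labeled_answer):
    """An annotated formula is consistent if it reproduces the labeled answer."""
    return evaluate(formula, numbers) == Fraction(labeled_answer)

# Check Table 1's problem: 0.75 * 2 + 0.25 * 4 = 2.5
formula = "multiply(n0,n1)|multiply(n2,n3)|add(#0,#1)"
assert consistent(formula, ["0.75", "2", "0.25", "4"], "2.5")
```

Running such a check over the arithmetic subset is what surfaces the 27% of MathQA problems whose annotated formulas disagree with their labeled answers.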
We randomly inspected 30 inconsistent MWPs and classified them into three error types: (1) incorrect formula ($67\%$), for which the annotated formula cannot be used to solve the given MWP; (2) problematic description ($23\%$), for which the description text is either incomplete or problematic; and (3) valueless answer ($10\%$), for which the given answer is either wrong or inappropriate. Table 6 (Appendix) illustrates examples of each error type.

Although building a large corpus via crowdsourcing is a tempting approach, it can result in a poor-quality corpus if the annotation procedure is not well controlled. We believe the quality of a dataset is more important than its size, if they cannot be achieved simultaneously.

# 3.3 Corpus Construction

To address the problems observed in MathQA, we first collected MWPs from 28 websites and then either pruned a problem or revised its text if it was highly similar to any existing one (according to the proposed lexicon usage diversity metric). This yielded a total of 2,305 MWPs.

Next, we hired a research assistant with a master's degree and a background in automatic MWP solving to manually annotate the problem type, equation, answer, and grade level for each MWP. If annotations were provided with the original MWP (22.6% of the source MWPs included equations and answers; 52% had answers only; 63.5% included grade-level information), we used them directly; otherwise, we annotated them manually.

Since MWPs are usually clearly specified (with a definite answer), there is no ambiguous interpretation once the answer is given. Therefore, as opposed to other corpora in which annotations (mostly linguistic attributes) are mainly based on subjective human judgment, the MWP answer/equation annotation is more objective and must be consistent.

![](images/f451ea87c81b3e9289746c7c23df9931f8c57ff099307f093f0c265197c2b7da.jpg)
Figure 2: Distribution of PT categories (G1~G6)
As a result, human carefulness, rather than human agreement, is the more critical issue in this task. Since an incorrect math expression usually yields an incorrect answer, we used a program to automatically verify the consistency between the annotated equations and the answers. Inconsistent MWPs were re-annotated and checked again. Afterwards, we randomly selected 480 samples (20 per problem type) to verify the final annotation correctness. All of those samples were correct, which confirms the assertion above.

Figure 2 shows the distribution of the problem categories across the six grade levels of elementary school. Most arithmetic operations appear in grade levels 1 to 4, which means students learn basic arithmetic operations at this stage. We further separate Addition/Subtraction from Multiplication/Division to highlight that they are at different difficulty levels for students. Figure 2 also indicates that Multiplication/Division is emphasized more in grades 3 and 4. In grades 5 and 6, improved math skills enable students to solve difficult MWPs that require more aggregative operations and additional domain knowledge. Thus, the grade level is a useful indicator of difficulty and can be employed to evaluate the capability of MWP solving systems.

# 3.4 LD Distributions of Various Corpora

We compare the diversity among MWPs of the same $PT$ (for corpora without annotated PT categories, diversity is measured over the whole corpus). We then generate the associated $LD$ distributions (uniformly quantized into 20 intervals between 0 and 1) and calculate the corpus lexicon
| | MathQA-C (CLD=0.08) | ASDiv-A (CLD=0.50) | ASDiv (CLD=0.49) |
| --- | --- | --- | --- |
| L | - | 0.68 | 0.36 |
| U | - | 0.78 | 0.37 |
| G | 0.86 | 0.68# | 0.36# |
Table 3: Accuracies of different systems. CLD denotes the corpus lexicon diversity; L, U and G denote the LCA++, UnitDep, and GTS systems, respectively. '-' denotes failure on this corpus; # indicates performance significantly lower than on MathQA-C, with p<0.01.
| | G1 | G2 | G3 | G4 | G5 | G6 |
| --- | --- | --- | --- | --- | --- | --- |
| L | 0.53 | 0.64 | 0.49 | 0.35 | 0.03 | 0.01 |
| U | 0.55 | 0.65 | 0.51 | 0.34 | 0.03 | 0.01 |
| G | 0.64 | 0.60 | 0.47 | 0.34 | 0.07 | 0.01 |
Table 4: Performance at various grade levels on ASDiv. L/U/G are the same as in Table 3.

diversity (CLD, Section 3.1) for corpora frequently adopted to compare systems: (1) AI2, (2) IL, (3) KAZB, (4) ALGES, (5) DRAW, (6) AllArith, and (7) MathQA.

Figure 1 shows the CLD distributions of the various corpora: there are about $85\%$, $28\%$, $22\%$ and $20\%$ identical MWPs (i.e., the percentages of MWPs with $LD_{i} = 0$ w.r.t. each dataset) in the MathQA, IL, AI2 and ALGES corpora, respectively, whereas ASDiv contains none. We also evaluate syntactic pattern diversity (in terms of POS n-grams) and the diversity between MWPs in the training set and the test set. Both yield similar trends (details are given in the Appendix).

# 4 Experiments

To study the correlation between CLD and system performance, we selected three SOTA MWP solvers for our experiments: two based on statistical models, LCA++ (Roy and Roth, 2015) and UnitDep (Roy and Roth, 2017), and one neural model, the Goal-driven Tree-structured approach (GTS), which adopts two-layer gated feedforward networks (Xie and Sun, 2019).

Since the selected MWP solvers solve only arithmetic MWPs, we first collected 4,117 MWPs from MathQA to construct a subset whose associated formulas satisfy two conditions: (1) they involve only arithmetic operations; and (2) they contain neither external constants (which would require external domain knowledge to solve the problem and are out of the scope of this work) nor reused operands (which rarely occur and would complicate the solution procedure). We filtered out inconsistent problems (identified as in Section 3.2) and termed the remaining 3,000 MWPs as
Similarly, we extracted a subset of 1,218 MwPs that involve only arithmetic operations (and also satisfy the constraints mentioned above) from ASDiv, and termed this the ASDiv-A dataset (-A for arithmetic). The CLDs for MathQA-C and ASDiv-A were found to be 0.08 and 0.50, respectively. Also, LD = 0 for 82% of the MathQA-C MwPs. + +Afterwards, we tested the solvers against three MWP corpora: MathQA-C, ASDiv-A, and ASDiv. MathQA-C is reported with 5-fold cross-validation accuracy. For ASDiv-A and ASDiv, we randomly split the MWPs of each $PT$ into five nearly equally-sized subsets, and report the 5-fold cross-validation accuracy. For GTS system, we repeated the experiment 5 times and obtained the averaged answer accuracy. + +Table 3 compares the answer accuracies of various systems. We observe that the overall performance is only around $36\%$ on ASDiv, which shows that the performance of the current SOTA systems still is not competitive with human performance, and that CLD is correlated with the system performance (i.e., lower diversity implies higher performance) and is a useful metric to evaluate existing corpora. Table 4 further shows the accuracy of different grade levels on ASDiv: the performance of grades 5 and 6 are significantly lower than the performance of grade 1 to 4. As accuracy is highly correlated with the grade level, the grade level is a useful index for indicating the difficulty of MwPs. + +# 5 Conclusion and Future Work + +We present an MWP corpus which not only is highly diverse in terms of lexicon usage but also covers most problem types taught in elementary school. Each MWP is annotated with the corresponding problem type, equation, and grade level, which are useful for machine learning and assessing the difficulty level of each MWP. We also propose a metric to measure the diversity of lexicon usage of a given corpus. 
In terms of this metric, we show that, in comparison with the corpora widely adopted to compare systems, ours is more suitable for assessing the real performance of an MWP solver. Last, we conduct experiments to show that a low-diversity MWP corpus exaggerates the true performance of SOTA systems (which are still far behind human-level performance), and that grade level is a useful index of MWP difficulty.

# References

Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms. In Proceedings of NAACL-HLT 2019.
Yefim Bakman. 2007. Robust understanding of word problems with extraneous information. http://lanl.arxiv.org/abs/math.GM/0701393.
Steven Bird and Edward Loper. 2004. NLTK: The Natural Language Toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217.
Peter Clark and Oren Etzioni. 2016. My Computer is an Honor Student - but how Intelligent is it? Standardized Tests as a Measure of AI. AI Magazine, pages 5-12.
Danqing Huang, Shuming Shi, Chin-Yew Lin, Jian Yin, and Wei-Ying Ma. 2016. How well do computers solve math word problems? Large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887-896, Berlin, Germany.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. 2015. Parsing algebraic word problems into equations.
Transactions of the Association for Computational Linguistics (TACL), 3:585-597.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. Association for Computational Linguistics (ACL), 1:271-281, June 2014.
Chao-Chun Liang, Shih-Hong Tsai, Ting-Yun Chang, Yi-Chung Lin, and Keh-Yih Su. 2016. A Meaning-based English Math Word Problem Solver with Understanding, Reasoning and Explanation. In Proceedings of the 26th International Conference on Computational Linguistics (COLING): System Demonstrations, pages 151-155, Osaka, Japan.
Chao-Chun Liang, Yu-Shiang Wong, Yi-Chung Lin, and Keh-Yih Su. 2018. A Meaning-based Statistical English Math Word Problem Solver. In Proceedings of NAACL-HLT 2018.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program Induction for Rationale Generation: Learning to Solve and Explain Algebraic Word Problems. In Proceedings of ACL 2017.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.
Anirban Mukherjee and Utpal Garain. 2008. A review of methods for automatic understanding of natural language mathematical problems. Artificial Intelligence Review, 29(2):93-122.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318.
Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1743-1752.
Subhro Roy and Dan Roth. 2017.
Unit Dependency Graph and its Application to Arithmetic Word Problem Solving. In AAAI 2017.
Shuming Shi, Yuehui Wang, Chin-Yew Lin, Xiaojiang Liu, and Yong Rui. 2015. Automatically Solving Number Word Problems by Semantic Parsing and Reasoning. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1132-1142, Lisbon, Portugal.
Shyam Upadhyay and Ming-Wei Chang. 2015. DRAW: A challenging and diverse algebra word problem set. Technical Report MSR-TR-2015-78.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven tree-structured neural model for math word problems. In Proceedings of the 28th International Joint Conference on Artificial Intelligence. AAAI Press.

# Appendix

# Appendix A: Examples of a few Selected Problem Types

Table 5 shows examples of selected types in the "Basic arithmetic operations", "Aggregative operations", and "Additional domain knowledge required" categories.
| Problem type | Examples |
| --- | --- |
| **Basic arithmetic operations** | |
| Number-Operation | I have 3 hundreds, 8 tens, and 3 ones. What number am I? |
| TVQ-Initial | Tim's cat had kittens. He gave 3 to Mary and 6 to Sara. He now has 9 kittens. How many kittens did he have to start with? |
| TVQ-Change | For the school bake sale, Wendy made pastries. She baked 41 cupcakes and 31 cookies. After the sale, she had 32 to take back home. How many pastries did she sell? |
| TVQ-Final | Melanie had 7 dimes in her bank. Her dad gave her 8 dimes and her mother gave her 4 dimes. How many dimes does Melanie have now? |
| Multi-Step | They served a total of 179 adults and 141 children; if 156 of all the people they served are male, how many are female? (combination of multiple operations) (Equation: x = (179+141)-156) |
| **Aggregative operations** | |
| Algebra-1 | Maddie, Luisa, and Amy counted their books. Maddie had 15 books. Luisa had 18 books. Together, Amy and Luisa had 9 more books than Maddie. How many books did Amy have? (Equation: (x+18)-9 = 15, where x: the number of Amy's books) |
| Algebra-2 | The cost of a private pilot course is $1,275. The flight portion costs $625 more than the ground school portion. What is the cost of each? (Equation: x+y = 1275; x = y+625, where x: the cost of the flight portion; y: the cost of the ground school portion) |
| Comparison | A bookstore was selling 2 books for $15.86. You could buy 7 books for $55.93 online. Which place has a lower unit price? |
| Set-Operation | Sarah's team played 8 games of basketball. During the 8 games her team's scores were 69, 68, 70, 61, 74, 62, 65 and 74. What is the mean of the scores? |
| Ratio | There are 43 empty seats and 7 occupied seats on an airplane. What is the ratio of the number of occupied seats to the total number of seats? |
| Number-Pattern | The Wholesome Bakery baked 5 loaves of bread on Wednesday, 7 loaves of bread on Thursday, 10 loaves of bread on Friday, 14 loaves of bread on Saturday, and 19 loaves of bread on Sunday. If this pattern continues, how many loaves of bread will they bake on Monday? |
| **Additional domain knowledge required** | |
| G.C.D. | A teacher is to arrange 60 boys and 72 girls in rows. He wishes to arrange them in such a way that only boys or girls will be there in a row. Find the greatest number of students that could be arranged in a row. |
| L.C.M. | On a track for remote-controlled racing cars, racing car A completes the track in 28 seconds, while racing car B completes it in 24 seconds. If they both start at the same time, after how many seconds will they be side by side again? |
| UnitTrans | Mrs. Hilt will buy a new pair of shoes in 11 days. How many minutes must she wait before she can buy her new pair of shoes? |
Table 5: Examples of selected problem types

# Appendix B: Problematic MWPs in MathQA

Table 6 shows examples of inconsistent MWPs in MathQA, and Figure 3 shows examples using the same sentence pattern.
| Error type | Example | Annotated formula | Labeled answer |
| --- | --- | --- | --- |
| Incorrect formula (67%) | Problem-1: "What is the 25th digit to the right of the decimal point in the decimal form of 6 / 11?" | Annotated: divide(n1,n2); desired formula: not available | 6 |
| Incorrect formula (67%) | Problem-2: "In two triangles, the ratio of the area is 4 : 3 and that of their heights is 3 : 4. Find the ratio of their bases." | Annotated: multiply(n0,n0)\|multiply(n1,n1)\|divide(#0,#1); desired formula: not available | 16:9 |
| Problematic description (23%) | Problem-3: "The lcm of two numbers is 495 and their lcm is 5. if the sum of the numbers is 10, then their difference is" (desired text: "...sum of the numbers is 100") | Annotated: multiply(n0,n1)\|divide(#0,n2) | 10 |
| Valueless answer (10%) | Problem-4: "9886 + x = 13200, then x is ?" Options: a) 3327, b) 3237, c) 3337, d) 2337, e) none of these | Annotated: subtract(n1,n0) | Annotated option (e): none of these; desired answer: 3314 |
Table 6: Some examples of inconsistent answers in MathQA.

A train running at the speed of $100\mathrm{km / hr}$ crosses a pole in 18 seconds. What is the length of the train?
A train running at the speed of $100\mathrm{km / hr}$ crosses a pole in 9 seconds. What is the length of the train?
A train running at the speed of $108\mathrm{km / hr}$ crosses a pole in 7 seconds. Find the length of the train?
A train running at the speed of $110\mathrm{km / hr}$ crosses a pole in 9 seconds. What is the length of the train?
A train running at the speed of $120\mathrm{km / hr}$ crosses a pole in 12 seconds. What is the length of the train?
A train running at the speed of $120\mathrm{km / hr}$ crosses a pole in 18 seconds. What is the length of the train?
A train running at the speed of $120\mathrm{km / hr}$ crosses a pole in 9 seconds. Find the length of the train?
A train running at the speed of $126\mathrm{km / hr}$ crosses a pole in 9 seconds. Find the length of the train?
A train running at the speed of $142\mathrm{km / hr}$ crosses a pole in 12 seconds. Find the length of the train?
A train running at the speed of $162\mathrm{km / hr}$ crosses a pole in 9 seconds. Find the length of the train?
A train running at the speed of $180\mathrm{km / hr}$ crosses a pole in 18 seconds. What is the length of the train?
A train running at the speed of $180\mathrm{km / hr}$ crosses a pole in 36 seconds. What is the length of the train?
A train running at the speed of $180\mathrm{km / hr}$ crosses a pole in 7 seconds. What is the length of the train?
A train running at the speed of $180\mathrm{km / hr}$ crosses a pole in 8 seconds. Find the length of the train?
A train running at the speed of $180\mathrm{km / hr}$ crosses a pole in 9 seconds. Find the length of the train?

Figure 3: Examples in MathQA with the same sentence pattern (after sentence normalization).
These MWPs all share the same sentence pattern, "a train running at the speed of [NUM1] km/hr crosses a pole in [NUM2] sec. [what is | find] the length of the train?"

# Appendix C: Additional Experiments for Corpus Diversity Metrics

We also define a syntactic pattern diversity metric to measure the syntactic diversity of an MWP. Let $P = \{P_{1}, P_{2}, \dots, P_{M}\}$ be a specific set of MWPs in a given corpus with the same problem type, where $P_{i}$ is the $i$-th MWP in $P$, and $T_{i}$ is the corresponding POS sequence of $P_{i}$. For example, "NNP VBZ CD NNS" is the POS tagging sequence for "Mary has 5 books." For a given $P_{i}$, we define its syntactic pattern diversity $(SD_{i})$ as

$$
SD_{i} = 1 - \max_{j, j \neq i} \frac{BLEU(T_{i}, T_{j}) + BLEU(T_{j}, T_{i})}{2},
$$

where $BLEU(T_i, T_j)$ is measured between $T_i$ and $T_j$ ($j \neq i$, $j \in \{1, 2, \dots, M\}$). Figure 4 shows that there are $87\%$, $54\%$, $46\%$ and $33\%$ identical syntactic patterns (these numbers are the percentages of MWPs with $SD_i = 0$ w.r.t. each dataset) in the MathQA, IL, AI2, and ALGES corpora, respectively, while ASDiv only has $4\%$. This shows that our corpus is also more diverse in terms of syntactic patterns.

We also measure the diversity between the test set and the training set, since high similarity between them is a critical factor in producing exaggerated performance estimates.
The $LD$ and $SD$ metrics between the test set and the training set can be obtained by modifying the previous formulas to

$$
LD_{i} = 1 - \max_{j} \frac{BLEU(P_{i}, P_{j}) + BLEU(P_{j}, P_{i})}{2}, \quad P_{i} \in DS_{test},\ P_{j} \in DS_{train},
$$

$$
SD_{i} = 1 - \max_{j} \frac{BLEU(T_{i}, T_{j}) + BLEU(T_{j}, T_{i})}{2}, \quad T_{i} \in DS_{test},\ T_{j} \in DS_{train},
$$

where $DS_{test}$ and $DS_{train}$ are all the MWPs in the test set and the training set, respectively. For a given problem $P_i$ from the test set, $LD_i$ and $SD_i$ denote the lexicon-pattern and syntactic-pattern diversity between $P_i$ and all the problems $P_j$ in the training set, respectively. If $P_i$ has a low diversity index, it can easily be solved via a training-set MWP with similar patterns.

![](images/8138146e7274bab0050cf4d628ed0b798be37d194c4ff752dc48579f7500f413.jpg)
Figure 4: Syntactic pattern diversity of various corpora

![](images/827aae3d132eda4e08b44c6834170e51b902bafe34e8b322523ff0eab09c57a7.jpg)
Figure 5: Lexicon usage diversity of various corpora: test-set versus training-set

![](images/c5939be419a93e60e7e9d07aa34bd711930b7a877e8313fc7f041070281bed11.jpg)
Figure 6: Syntactic pattern diversity of various corpora: test-set versus training-set

![](images/6744d1d68adc8039dc49c6850421022e4a9db506306fadd9ccc2245613e489a6.jpg)
Figure 7: Lexicon usage diversity measured only within the test-set for various corpora. Here, the suffix "-test" denotes that it is measured only within the test-set.

For AI2, IL, KAZB, ALGES, AllArith, DRAW, and MathQA, we follow either their own n-fold cross-validation setting or the train/dev/test splits originally specified in each dataset. For MathQA-C and ASDiv-A, we follow the same n-fold cross-validation setting specified in Section 4.
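The diversity indices above can be sketched in a few lines. The `bleu` helper below is a deliberately simplified, dependency-free stand-in (bigram-level modified precision with a brevity penalty), not the BLEU implementation behind the reported numbers; the index itself is exactly "1 minus the maximum symmetric BLEU against the comparison pool".

```python
import math
from collections import Counter

def bleu(cand, ref, max_n=2):
    """Toy sentence-level BLEU stand-in: geometric mean of modified n-gram
    precisions up to max_n, with a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        c_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        r_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(c, r_ngrams[g]) for g, c in c_ngrams.items())
        precisions.append(overlap / max(sum(c_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

def diversity_index(item, pool):
    """LD_i / SD_i style index: 1 minus the maximum symmetric BLEU
    between `item` and every member of `pool` (e.g. the training set)."""
    return 1 - max((bleu(item, p) + bleu(p, item)) / 2 for p in pool)

train = ["a train runs at 100 km/hr".split(), "mary has 5 books".split()]
test_problem = "a train runs at 120 km/hr".split()
ld = diversity_index(test_problem, train)  # high max-BLEU -> low diversity
```

A problem nearly identical to a training-set problem gets an index near 0; a problem sharing no n-grams with the pool gets an index of 1.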
Figure 5 shows the LD between the test set and the training set for various corpora.

Also, though not directly shown in this figure, the lexicon diversity indices of $48\%$ of the MWPs in our ASDiv-A are larger than or equal to 0.5. In contrast, the corresponding figures are $35\%, 34\%, 29\%, 23\%, 23\%, 18\%$ and $7\%$ for AI2, IL, ALGES, KAZB, AllArith, DRAW, and MathQA-C, respectively. These statistics suggest that our corpus is more diverse (and thus more difficult) than other well-known MWP corpora. Figure 6 shows the SD between the test set and the training set for different corpora. Again, this shows that our corpus also contains more diverse POS sequences.

Last, in Figure 5, MathQA actually possesses the highest CLD between its official test set and training set: 0.85, surprisingly higher than that of all other corpora (e.g., 0.52, 0.44, 0.42 and 0.42 for ASDiv-A, IL, AllArith, and AI2, respectively). In comparison with its CLD across the whole corpus (0.05 in Figure 1), 0.85 is much higher. It is thus suspected that the MWPs within its test/training set might be very similar to each other. As explained in Footnote #7, a test set with very similar MWPs hinders the assessment of the true performance of an MWP solver. We thus further compare the CLDs within the test set for a few corpora. Figure 7 illustrates that the CLD of 0.27 for MathQA measured within the test set is actually low in comparison with the corresponding CLDs of other corpora (e.g., the means are 0.57 for the ASDiv-A and ASDiv corpora).
\ No newline at end of file diff --git a/adiversecorpusforevaluatinganddevelopingenglishmathwordproblemsolvers/images.zip b/adiversecorpusforevaluatinganddevelopingenglishmathwordproblemsolvers/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4b7dc5eca65ce8a04f1595ecb6602232a336461f --- /dev/null +++ b/adiversecorpusforevaluatinganddevelopingenglishmathwordproblemsolvers/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:940e354ed8621cb29fd7669689ecdbe095d3a3ee74c394797381ac3642f51b50 +size 772777 diff --git a/adiversecorpusforevaluatinganddevelopingenglishmathwordproblemsolvers/layout.json b/adiversecorpusforevaluatinganddevelopingenglishmathwordproblemsolvers/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7bcae5c65c2ec7bd303fdfe4b4a6c2f944e33ba0 --- /dev/null +++ b/adiversecorpusforevaluatinganddevelopingenglishmathwordproblemsolvers/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:49cc260d61ec438df27fd032842281fa6f59778f9b080d1621626108c47bd6f6 +size 292954 diff --git a/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_content_list.json b/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..85dc2ccc6fdebf2235d42a6e437025a7dd48ba59 --- /dev/null +++ b/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7de291dcc1c2caedb083bc4a0b8bcf74782ccf2bca4b0de9676e334e07b65ac3 +size 76081 diff --git a/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_model.json b/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_model.json new 
file mode 100644 index 0000000000000000000000000000000000000000..abc51e2c3a8c155dedf4d92760bdb0e3a54060da --- /dev/null +++ b/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:37ca37bb2d0e2135d88aaf5fd22c4c5c76b8b1a79cab9365a8ef753010094d04 +size 90021 diff --git a/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_origin.pdf b/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..aa640e74f3fc7fe24eb77aa75f6f1a52fa7f9993 --- /dev/null +++ b/advaugrobustadversarialaugmentationforneuralmachinetranslation/363fa19c-75be-49a0-bdba-992851b40339_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4386e80771d6a5a2733a5c3c332bca5b55c8f2fbf4399bc1a812d1799fa4d399 +size 729849 diff --git a/advaugrobustadversarialaugmentationforneuralmachinetranslation/full.md b/advaugrobustadversarialaugmentationforneuralmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..72b9952dac90d12c73d8fb70fbc7f9f2c3716fa0 --- /dev/null +++ b/advaugrobustadversarialaugmentationforneuralmachinetranslation/full.md @@ -0,0 +1,332 @@ +# AdvAug: Robust Adversarial Augmentation for Neural Machine Translation + +Yong Cheng, Lu Jiang, Wolfgang Macherey and Jacob Eisenstein +Google Research + +{chengyong, lujiang, wmach, jeisenstein}@google.com + +# Abstract + +In this paper, we propose a new adversarial augmentation method for Neural Machine Translation (NMT). 
The main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions, of which the crucial one is a novel vicinity distribution for adversarial sentences that describes a smooth interpolated embedding space centered around observed training sentence pairs. We then discuss our approach, AdvAug, to train NMT models using the embeddings of virtual sentences in sequence-to-sequence learning. Experiments on Chinese-English, English-French, and English-German translation benchmarks show that AdvAug achieves significant improvements over the Transformer (up to 4.9 BLEU points), and substantially outperforms other data augmentation techniques (e.g. back-translation) without using extra corpora. + +# 1 Introduction + +Recent work in neural machine translation (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) has led to dramatic improvements in both research and commercial systems (Wu et al., 2016). However, a key weakness of contemporary systems is that performance can drop dramatically when they are exposed to input perturbations (Belinkov and Bisk, 2018; Cheng et al., 2019), even when these perturbations are not strong enough to alter the meaning of the input sentence. Consider a Chinese sentence, "zhejia feiji meiyou zhuangshang zhujia huo yiyuan, shizai shi qiji". If we change the word "huo (或)" to its synonym "ji (及)", the Transformer model will generate contradictory results of "It was indeed a miracle that the plane did not touch down at home or hospital." versus "It was a miracle that the plane landed at home and hospital." Such perturbations + +can readily be found in many public benchmarks and real-world applications. This lack of stability not only lowers translation quality but also inhibits applications in more sensitive scenarios. 
+ +At the root of this problem are two interrelated issues: first, machine translation training sets are insufficiently diverse, and second, NMT architectures are powerful enough to overfit — and, in extreme cases, memorize — the observed training examples, without learning to generalize to unseen perturbed examples. One potential solution is data augmentation which introduces noise to make the NMT model training more robust. In general, two types of noise can be distinguished: (1) continuous noise which is modeled as a real-valued vector applied to word embeddings (Miyato et al., 2016, 2017; Cheng et al., 2018; Sato et al., 2019), and (2) discrete noise which adds, deletes, and/or replaces characters or words in the observed sentences (Belinkov and Bisk, 2018; Sperber et al., 2017; Ebrahimi et al., 2018; Michel et al., 2019; Cheng et al., 2019; Karpukhin et al., 2019). In both cases, the challenge is to ensure that the noisy examples are still semantically valid translation pairs. In the case of continuous noise, it only ensures that the noise vector lies within an $L_{2}$ -norm ball but does not guarantee to maintain semantics. While constructing semantics-preserving continuous noise in a high-dimensional space proves to be non-trivial, state-of-the-art NMT models are currently based on adversarial examples of discrete noise. For instance, Cheng et al. (2019) generate adversarial sentences using discrete word replacements in both the source and target, guided by the NMT loss. This approach achieves significant improvements over the Transformer on several standard NMT benchmarks. Despite this promising result, we find that the generated adversarial sentences are unnatural, and, as we will show, suboptimal for learning robust NMT models. + +In this paper, we propose AdvAug, a new adversarial augmentation technique for sequence-to-sequence learning. 
We introduce a novel vicinity distribution to describe the space of adversarial examples centered around each training example. Unlike prior work (Cheng et al., 2019), we first generate adversarial sentences in the discrete data space and then sample virtual adversarial sentences from the vicinity distribution according to their interpolated embeddings. Our intuition is that the introduced vicinity distribution may increase the sample diversity for adversarial sentences. Our idea is partially inspired by mixup (Zhang et al., 2018), a technique for data augmentation in computer vision, and we also use a similar vicinity distribution as in mixup to augment the authentic training data. Our AdvAug approach finally trains on the embeddings sampled from the above two vicinity distributions. As a result, we augment the training using virtual sentences in the feature space as opposed to in the data space. The novelty of our paper is the new vicinity distribution for adversarial examples and the augmentation algorithm for sequence-to-sequence learning. + +Extensive experimental results on three translation benchmarks (NIST Chinese-English, IWSLT English-French, and WMT English-German) show that our approach achieves significant improvements of up to 4.9 BLEU points over the Transformer (Vaswani et al., 2017), outperforming the former state-of-the-art in adversarial learning (Cheng et al., 2019) by up to 3.3 BLEU points. When compared with widely-used data augmentation methods (Sennrich et al., 2016a; Edunov et al., 2018), we find that our approach yields better performance even without using extra corpora. We conduct ablation studies to gain further insights into which parts of our approach matter most. In summary, our contributions are as follows: + +1. We propose to sample adversarial examples from a new vicinity distribution and utilize their embeddings, instead of their data points, to augment the model training. +2. 
We design an effective augmentation algorithm for learning sequence-to-sequence NMT models via mini-batches.
3. Our approach achieves significant improvements over the Transformer and prior state-of-the-art models on three translation benchmarks.

# 2 Background

Neural Machine Translation. Generally, NMT (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) models the translation probability $P(\mathbf{y}|\mathbf{x};\boldsymbol{\theta})$ based on the encoder-decoder paradigm, where $\mathbf{x}$ is a source-language sentence, $\mathbf{y}$ is a target-language sentence, and $\boldsymbol{\theta}$ is a set of model parameters. The decoder in the NMT model acts as a conditional language model that operates on a shifted copy of $\mathbf{y}$, i.e., $\langle sos\rangle, y_0, \dots, y_{|\mathbf{y}| - 1}$ where $\langle sos\rangle$ is a start-of-sentence symbol, together with the representations of $\mathbf{x}$ learned by the encoder. For clarity, we use $e(\mathbf{x}) \in \mathbb{R}^{d\times |\mathbf{x}|}$ to denote the feature vectors (or word embeddings) of the sentence $\mathbf{x}$, where $d$ is the embedding dimension.

Given a parallel training corpus $\mathcal{S}$, the standard training objective for NMT is to minimize the empirical risk:

$$
\mathcal{L}_{\text{clean}}(\boldsymbol{\theta}) = \underset{P_{\delta}(\mathbf{x}, \mathbf{y})}{\mathbb{E}} [\ell(f(e(\mathbf{x}), e(\mathbf{y}); \boldsymbol{\theta}), \ddot{\mathbf{y}})], \tag{1}
$$

where $f(e(\mathbf{x}), e(\mathbf{y}); \boldsymbol{\theta})$ is a sequence of model predictions with $f_{j}(e(\mathbf{x}), e(\mathbf{y}); \boldsymbol{\theta}) = P(y_j \mid \mathbf{y}_{<j}, \mathbf{x}; \boldsymbol{\theta})$, and $\ddot{\mathbf{y}}$ is the sequence of one-hot label vectors for $\mathbf{y}$. The adversarial translation loss averages the per-example losses $\ell_i$ of the virtual adversarial examples whose loss exceeds a threshold $\eta$:

$$
\mathcal{L}_{adv}(\boldsymbol{\theta}) = \frac{1}{\sum_{i=1}^{m} I(\ell_{i} > \eta)} \sum_{i=1}^{m} I(\ell_{i} > \eta)\,\ell_{i}, \tag{14}
$$

where $I(\cdot)$ is an indicator function and $\eta$ is set by a moving average tracking the $p$-th percentile of the example losses of every mini-batch.
In our experiments, we set the $p$-th percentile to be $100 \times (1 - r_t)$ for training iteration $t$, and gradually anneal $r_t$ using $r_t = 0.5^{t / \beta}$, where $\beta$ is a hyperparameter.

# 3.2 $P_{\text{aut}}$ for Authentic Data

We define the $\mu_{aut}$ in the vicinity distribution $P_{aut}$ for authentic examples as follows:

$$
\mu_{aut}(\tilde{\mathbf{x}}, \tilde{\mathbf{y}} | \mathbf{x}, \mathbf{y}) = \frac{1}{|\mathcal{S}|} \sum_{(\mathbf{x}^{\prime}, \mathbf{y}^{\prime}) \in \mathcal{S}} \mathbb{E}_{\lambda} \big[ \delta\big(e(\tilde{\mathbf{x}}) = m_{\lambda}(\mathbf{x}, \mathbf{x}^{\prime}),\ e(\tilde{\mathbf{y}}) = m_{\lambda}(\mathbf{y}, \mathbf{y}^{\prime}),\ \tilde{\omega} = m_{\lambda}(\omega, \omega^{\prime})\big) \big]. \tag{15}
$$

The translation loss on authentic data is integrated over all examples of the vicinity distribution $P_{\text{aut}}$:

$$
\mathcal{L}_{\text{aut}}(\boldsymbol{\theta}) = \underset{P_{\text{aut}}(\tilde{\mathbf{x}}, \tilde{\mathbf{y}})}{\mathbb{E}} \left[\ell\left(f\left(e(\tilde{\mathbf{x}}), e(\tilde{\mathbf{y}}); \boldsymbol{\theta}\right), \tilde{\boldsymbol{\omega}}\right)\right]. \tag{16}
$$

In our experiments, we select the value of $\lambda$ in Eq. (15) twice for every $(\mathbf{x},\mathbf{y})$: (1) a constant 1.0 and (2) a sample from the Beta distribution. The former is equivalent to sampling from the empirical distribution $P_{\delta}$, whereas the latter is similar to applying mixup in the embedding space of the sequence model.
In other words, $\mathcal{L}_{\mathrm{aut}}(\pmb{\theta})$ equals the sum of two translation losses: $\mathcal{L}_{\mathrm{clean}}(\pmb{\theta})$, computed on the original training examples when $\lambda$ is 1.0, and $\mathcal{L}_{\mathrm{mixup}}(\pmb{\theta})$, computed on virtual examples when $\lambda$ is sampled from a Beta distribution. Accordingly, when $\lambda$ is 1.0 we set $\tilde{\omega}$ to be the interpolation of the sequences of one-hot label vectors for $\mathbf{y}$ and $\mathbf{y}'$, i.e. $\boldsymbol{\omega} = \ddot{\mathbf{y}}$ and $\boldsymbol{\omega}' = \ddot{\mathbf{y}}'$. Otherwise $\tilde{\omega}$ is the interpolation of the model output vectors of $(\mathbf{x},\mathbf{y})$ and $(\mathbf{x}',\mathbf{y}')$, that is, $\boldsymbol{\omega} = f(e(\mathbf{x}),e(\mathbf{y});\hat{\boldsymbol{\theta}})$ and $\boldsymbol{\omega}' = f(e(\mathbf{x}'),e(\mathbf{y}');\hat{\boldsymbol{\theta}})$.

Algorithm 1: Proposed AdvAug function.
Input: A batch of source and target sentences (X, Y); the selection ratio $r_t$; the hyperparameter $\alpha$.
Output: Mini-batch losses $\mathcal{L}_{adv}$ and $\mathcal{L}_{aut}$
1 Function AdvAug (X, Y):
2 foreach $(\mathbf{x}, \mathbf{y}) \in (\mathbf{X}, \mathbf{Y})$ do
3 $\omega \gets f(e(\mathbf{x}), e(\mathbf{y}); \hat{\boldsymbol{\theta}})$;
4 Sample two adversarial examples $(\mathbf{x}', \mathbf{y}')$ and $(\mathbf{x}'', \mathbf{y}'')$ from $A_{(\mathbf{x}, \mathbf{y})}$ by Eq. (4);
5 $\lambda \gets \text{Beta}(\alpha, \alpha)$;
6 $e(\tilde{\mathbf{x}}) \gets m_{\lambda}(e(\mathbf{x}'), e(\mathbf{x}''))$, $e(\tilde{\mathbf{y}}) \gets m_{\lambda}(e(\mathbf{y}'), e(\mathbf{y}''))$;
7 $\ell_i \gets \ell(f(e(\tilde{\mathbf{x}}), e(\tilde{\mathbf{y}});\boldsymbol{\theta}), \boldsymbol{\omega})$;
8 end
9 Compute $\mathcal{L}_{adv}$ using $r_t$ and $\{\ell_i\}$ by Eq.
(14);
10 $(\mathbf{X}', \mathbf{Y}') \gets \text{Shuffle}(\mathbf{X}, \mathbf{Y})$;
11 foreach $(\mathbf{x}, \mathbf{y}, \mathbf{x}', \mathbf{y}') \in (\mathbf{X}, \mathbf{Y}, \mathbf{X}', \mathbf{Y}')$ do
12 $\omega \gets f(e(\mathbf{x}), e(\mathbf{y}); \hat{\boldsymbol{\theta}})$;
13 $\omega' \gets f(e(\mathbf{x}'), e(\mathbf{y}'); \hat{\boldsymbol{\theta}})$;
14 $\lambda \gets \text{Beta}(\alpha, \alpha)$;
15 $e(\tilde{\mathbf{x}}) \gets m_{\lambda}(e(\mathbf{x}), e(\mathbf{x}'))$, $e(\tilde{\mathbf{y}}) \gets m_{\lambda}(e(\mathbf{y}), e(\mathbf{y}'))$;
16 $\tilde{\boldsymbol{\omega}} \gets m_{\lambda}(\boldsymbol{\omega}, \boldsymbol{\omega}')$;
17 $\ell_i \gets \ell(f(e(\tilde{\mathbf{x}}), e(\tilde{\mathbf{y}});\boldsymbol{\theta}), \tilde{\boldsymbol{\omega}}) + \ell(f(e(\mathbf{x}), e(\mathbf{y}); \boldsymbol{\theta}), \ddot{\mathbf{y}})$;
18 end
19 Compute $\mathcal{L}_{aut}$ by averaging $\{\ell_i\}$;
20 return $\mathcal{L}_{adv}$, $\mathcal{L}_{aut}$

# 3.3 Training

Finally, the training objective in our AdvAug is a combination of the two losses:

$$
\boldsymbol{\theta}^{*} = \underset{\boldsymbol{\theta}}{\operatorname{argmin}} \left\{\mathcal{L}_{aut}(\boldsymbol{\theta}) + \mathcal{L}_{adv}(\boldsymbol{\theta})\right\}. \tag{17}
$$

Here, we omit two bidirectional language model losses for simplicity; they are used to recommend word candidates that maintain semantic similarity (Cheng et al., 2019).

In practice, we need to compute the losses via mini-batch training. For $P_{\text{aut}}$, we follow the pair sampling inside each mini-batch used in mixup. This avoids padding too many tokens, because sentences of similar lengths are grouped within a mini-batch (Vaswani et al., 2017). For $P_{\text{adv}}$, we sample a pair of examples from $A_{(\mathbf{x},\mathbf{y})}$ for each $(\mathbf{x},\mathbf{y})$ and cover the distribution over multiple training epochs.
The entire procedure to calculate the translation losses, $\mathcal{L}_{\text{adv}}(\boldsymbol{\theta})$ and $\mathcal{L}_{\text{aut}}(\boldsymbol{\theta})$, is presented in Algorithm 1. In a nutshell, for each batch of training examples, we first sample virtual examples from $P_{\text{adv}}$ and $P_{\text{aut}}$ by interpolating the embeddings of the adversarial or authentic training examples. Then we calculate the translation loss using their interpolated embeddings.
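Two building blocks of Algorithm 1 can be sketched compactly: sampling $\lambda \sim \text{Beta}(\alpha, \alpha)$ and interpolating embedding sequences with $m_{\lambda}$, and the truncated adversarial loss of Eq. (14). This is a dependency-free illustration that treats embeddings as nested lists of equal length and computes the percentile threshold $\eta$ directly on the current batch instead of with the paper's moving average:

```python
import random

def beta_sample(alpha):
    # Interpolation weight lambda ~ Beta(alpha, alpha)
    return random.betavariate(alpha, alpha)

def m_lambda(a, b, lam):
    """Interpolate two equal-length embedding sequences elementwise:
    m_lambda(a, b) = lam * a + (1 - lam) * b."""
    return [[lam * u + (1 - lam) * v for u, v in zip(ta, tb)]
            for ta, tb in zip(a, b)]

def truncated_adv_loss(losses, r_t):
    """Sketch of Eq. (14): average only the per-example losses at or above
    the (100 * (1 - r_t))-th percentile threshold eta, i.e. keep roughly
    the top r_t fraction of losses (>= instead of > to stay non-empty)."""
    k = max(1, round(r_t * len(losses)))
    eta = sorted(losses, reverse=True)[k - 1]
    kept = [l for l in losses if l >= eta]
    return sum(kept) / len(kept)

# With r_t = 1 (early training) every loss is kept; as r_t anneals toward 0,
# only the hardest adversarial examples contribute.
batch_losses = [1.0, 3.0, 2.0, 4.0]
l_adv = truncated_adv_loss(batch_losses, 0.5)  # mean of the top half
```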
| Method | Loss Config. | MT06 | MT02 | MT03 | MT04 | MT05 | MT08 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Vaswani et al. (2017) | $\mathcal{L}_{clean}$ | 44.57 | 45.49 | 44.55 | 46.20 | 44.96 | 35.11 |
| Miyato et al. (2017) | - | 45.28 | 45.95 | 44.68 | 45.99 | 45.32 | 35.84 |
| Sato et al. (2019) | - | 45.75 | 46.37 | 45.02 | 46.49 | 45.88 | 35.90 |
| Cheng et al. (2019) | - | 46.95 | 47.06 | 46.48 | 47.39 | 46.58 | 37.38 |
| Sennrich et al. (2016a)* | - | 46.39 | 47.31 | 47.10 | 47.81 | 45.69 | 36.43 |
| Edunov et al. (2018)* | - | 46.20 | 47.78 | 46.93 | 47.80 | 46.81 | 36.79 |
| Ours | $\mathcal{L}_{mixup}$ | 45.12 | 46.32 | 44.81 | 46.61 | 46.08 | 36.00 |
| Ours | $\mathcal{L}_{aut}$ | 46.73 | 46.79 | 46.13 | 47.54 | 46.88 | 37.21 |
| Ours | $\mathcal{L}_{clean} + \mathcal{L}_{adv}$ | 47.89 | 48.53 | 48.73 | 48.60 | 48.76 | 39.03 |
| Ours | $\mathcal{L}_{aut} + \mathcal{L}_{adv}$ | 49.26 | 49.03 | 47.96 | 48.86 | 49.88 | 39.63 |
| Ours* | $\mathcal{L}_{aut} + \mathcal{L}_{adv}$ | 49.98 | 50.34 | 49.81 | 50.61 | 50.72 | 40.45 |
Table 1: Baseline comparison on NIST Chinese-English translation. * indicates the model uses extra corpora and - means the paper does not elaborate on its training loss.

# 4 Experiments

# 4.1 Setup

We verify our approach on translation tasks for three language pairs: Chinese-English, English-French, and English-German. Performance is evaluated with the 4-gram BLEU score (Papineni et al., 2002) calculated by the multi-bleu.perl script. We report case-sensitive tokenized BLEU scores for English-French and English-German, and case-insensitive tokenized BLEU scores for Chinese-English. Note that all reported BLEU scores for our approach are from a single model rather than an average of multiple models (Vaswani et al., 2017).

For the Chinese-English translation task, the training set is the LDC corpus consisting of 1.2M sentence pairs. The NIST 2006 dataset is used as the validation set, and NIST 02, 03, 04, 05, 08 are used as the test sets. We apply byte-pair encoding (BPE) (Sennrich et al., 2016b) with 60K merge operations to build two vocabularies comprising 46K Chinese sub-words and 30K English sub-words. We use the IWSLT 2016 corpus for English-French translation. The training corpus with 0.23M sentence pairs is preprocessed with the BPE script with 20K joint operations. The validation set is test2012 and the test sets are test2013 and test2014. For English-German translation, we use the WMT14 corpus consisting of 4.5M sentence pairs. The validation set is newstest2013 whereas the test set is newstest2014. We build a shared vocabulary of 32K sub-words using the BPE script.

We implement our approach on top of the Transformer (Vaswani et al., 2017). The size of the hidden unit is 512 and the other hyperparameters are set following the default settings. There are three important hyperparameters in our approach: $\alpha$ in the Beta distribution, and the word replacement ratios $\gamma_{src} \in \xi_{src}$ and $\gamma_{tgt} \in \xi_{tgt}$ detailed in Eq. (4).
Note that $\gamma_{src}$ and $\gamma_{tgt}$ are not new hyperparameters but are inherited from Cheng et al. (2019). We tune these hyperparameters on the validation set via a grid search, i.e. $\alpha \in \{0.2, 0.4, 4, 8, 32\}$, $\gamma_{src} \in \{0.10, 0.15, 0.25\}$ and $\gamma_{tgt} \in \{0.10, 0.15, 0.30, 0.5\}$. For the mixup loss $\mathcal{L}_{\text{mixup}}$, $\alpha$ is fixed to 0.2. For the losses $\mathcal{L}_{\text{aut}}$ and $\mathcal{L}_{\text{adv}}$, the optimal value of $\alpha$ is 8.0. The optimal values of $(\gamma_{src}, \gamma_{tgt})$ are found to be (0.25, 0.50), (0.15, 0.30) and (0.15, 0.15) for Chinese-English, English-French and English-German, respectively, while they are set to (0.10, 0.10) only for back-translated sentence pairs. $\beta$ in Eq. (14) is set to 250K, 100K, and 1M for Chinese-English, English-French and English-German, respectively. Unlike Cheng et al. (2019), we remove the learning of target language models to speed up training. For each training batch, we introduce a batch of augmented adversarial examples and a batch of augmented authentic examples, which doubles the cost of vanilla training. For constructing adversarial examples, we solely compute the gradients for word embeddings, which takes little time. Summing up the time of all steps, our total training time is about 3.3 times that of vanilla training.

# 4.2 Main Results

Chinese-English Translation. Table 1 shows results on the Chinese-English translation task, in comparison with the following six baseline methods. For a fair comparison, we implement all these
| Method | Loss Config. | English-French test2013 | English-French test2014 | English-German newstest13 | English-German newstest14 |
| --- | --- | --- | --- | --- | --- |
| Vaswani et al. (2017) | $\mathcal{L}_{clean}$ | 40.78 | 37.57 | 25.80 | 27.30 |
| Sato et al. (2019) | - | 41.67 | 38.72 | 25.97 | 27.46 |
| Cheng et al. (2019) | - | 41.76 | 39.46 | 26.34 | 28.34 |
| Ours | $\mathcal{L}_{mixup}$ | 40.78 | 38.11 | 26.28 | 28.08 |
| Ours | $\mathcal{L}_{aut}$ | 41.49 | 38.74 | 26.33 | 28.58 |
| Ours | $\mathcal{L}_{aut} + \mathcal{L}_{adv}$ | 43.03 | 40.91 | 27.20 | 29.57 |
Table 2: Results on IWSLT16 English-French and WMT14 English-German translation.

methods using the Transformer backbone or report results from those papers on the same corpora.

1. The seminal Transformer model for NMT (Vaswani et al., 2017).
2. Following Miyato et al. (2017), we use adversarial learning to add continuous gradient-based perturbations to source word embeddings and extend it to the Transformer model.
3. Sato et al. (2019) leverage Miyato et al. (2017)'s idea in NMT by incorporating gradient-based perturbations to both source and target word embeddings and optimize the model with adversarial training.
4. Cheng et al. (2019) generate discrete adversarial examples guided by the gradients of word embeddings. Adversarial examples are used to both attack and defend the NMT model.
5. Sennrich et al. (2016a) translate monolingual corpora using an inverse NMT model and then augment the training data with them.
6. Based on Sennrich et al. (2016a), Edunov et al. (2018) propose three improved methods to generate back-translated data: sampling, top10, and beam+noise. Among these, we choose beam+noise as our baseline method, which can be regarded as an approach to incorporating noise into data.

We first verify the importance of the different translation losses in our approach. We find that both $\mathcal{L}_{\text{aut}}$ and $\mathcal{L}_{\text{adv}}$ are useful in improving the Transformer model. $\mathcal{L}_{\text{adv}}$ is more important and yields a significant improvement when combined with the standard empirical loss $\mathcal{L}_{\text{clean}}$ (cf. Eq. (1)). These results validate the effectiveness of augmenting with virtual adversarial examples. When we use both $\mathcal{L}_{\text{aut}}$ and $\mathcal{L}_{\text{adv}}$ to train the model, we obtain the best performance (up to 4.92 BLEU points on MT05). We also compare with the mixup loss.
However, $\mathcal{L}_{\text{mixup}}$ is only slightly better than the standard empirical loss $\mathcal{L}_{\text{clean}}$.

Compared with the baseline methods that use no extra corpora, our approach shows significant improvements over the state-of-the-art models. In particular, the superiority of $\mathcal{L}_{clean} + \mathcal{L}_{adv}$ over both Cheng et al. (2019) and Sato et al. (2019) verifies that ours is a more effective method for exploiting adversarial examples in NMT. We also directly incorporated two adversarial examples into the NMT model without interpolating their embeddings, but did not observe any further gain over Cheng et al. (2019). This substantiates the superior performance of our approach on the standard data sets.

To compare with the approaches using extra monolingual corpora, we sample 1.25M English sentences from the Xinhua portion of the GIGAWORD corpus and list our performance in the last row of Table 1. When the back-translated corpus is incorporated, our approach yields further improvements, suggesting that it complements back-translation approaches.

English-French and English-German Translation. Table 2 shows the comparison with the Transformer model (Vaswani et al., 2017), Sato et al. (2019) and Cheng et al. (2019) on the English-French and English-German translation tasks. Our approach consistently outperforms all three baseline methods, yielding significant gains of 3.34 and 2.27 BLEU points over the Transformer on English-French and English-German, respectively. We also conduct similar ablation studies on the translation loss. We again find that the combination of $\mathcal{L}_{adv}$ and $\mathcal{L}_{aut}$ performs best, consistent with the findings on the Chinese-English translation task. The substantial gains on these two translation tasks suggest the potential applicability of our approach to more language pairs.
| System | Output |
| --- | --- |
| Input | 但(但是)协议执行过程一波三折,致使和平进程一再受挫 |
| Reference | however, implementation of the deals has witnessed ups and downs, resulting in continuous setbacks in the peace process |
| Vaswani et al. (2017), on Input | however, the process of implementing the agreement was full of twists and turns, with the result that the peace process suffered setbacks again and again. |
| Vaswani et al. (2017), on Noisy Input | the process of the agreement has caused repeated setbacks to the peace process. |
| Ours, on Input | however, the process of implementing the agreement experienced twists and turns, resulting in repeated setbacks in the peace process. |
| Ours, on Noisy Input | however, the process of implementing the agreement experienced twists and turns, resulting in repeated setbacks in the peace process. |
+ +Table 3: Translation Examples of Transformer and our model for an input and its adversarial input. + +
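The noisy input behind Table 3 is produced by swapping one word for a close alternative under embedding similarity. A toy sketch of that replacement step follows; the vocabulary, the two-dimensional embeddings, and the cosine ranking are invented for illustration (the paper measures similarity with trained word embeddings and re-scores candidates with a bidirectional language model):

```python
import random

random.seed(1)

# Toy embedding table; real systems use trained, high-dimensional embeddings.
EMB = {
    "agreement": (1.0, 0.1), "deal": (0.9, 0.2),
    "process":   (0.2, 1.0), "procedure": (0.3, 0.9),
    "peace":     (0.5, 0.5),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def nearest_alternative(word):
    """Most similar other in-vocabulary word, by embedding cosine."""
    cands = [w for w in EMB if w != word]
    return max(cands, key=lambda w: cosine(EMB[word], EMB[w]))

def make_noisy(tokens):
    """Replace one random in-vocabulary word with its nearest alternative."""
    positions = [i for i, w in enumerate(tokens) if w in EMB]
    if not positions:
        return list(tokens)
    i = random.choice(positions)
    noisy = list(tokens)
    noisy[i] = nearest_alternative(noisy[i])
    return noisy

sent = "the agreement process".split()
noisy = make_noisy(sent)  # exactly one word swapped for a near-neighbor
```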
| Loss | $\alpha = 0.2$ | $\alpha = 0.4$ | $\alpha = 4$ | $\alpha = 8$ | $\alpha = 32$ |
| --- | --- | --- | --- | --- | --- |
| $\mathcal{L}_{mixup}$ | 45.28 | 45.38 | 45.64 | 45.09 | - |
| $\mathcal{L}_{aut}$ | 45.95 | 45.92 | 46.70 | 46.73 | 46.54 |
| $\mathcal{L}_{clean}+\mathcal{L}_{adv}$ | 47.06 | 46.88 | 47.60 | 47.89 | 47.81 |
+ +Table 4: Effect of $\alpha$ on the Chinese-English validation set. “-” indicates that the model fails to converge. + +
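The sensitivity to $\alpha$ in Table 4 mirrors the shape of $\text{Beta}(\alpha, \alpha)$: for $\alpha < 1$ the density is U-shaped, so interpolation weights cluster near 0 or 1 and a virtual example stays close to one parent, while for $\alpha > 1$ the density concentrates around 0.5 and $m_{\lambda}$ approaches simple averaging. A quick empirical check (sample size and thresholds below are arbitrary choices for illustration):

```python
import random

random.seed(0)

def beta_stats(alpha, n=20000):
    """Empirical shape check for Beta(alpha, alpha): the sample mean and
    the fraction of draws near the endpoints (lambda < 0.1 or > 0.9)."""
    samples = [random.betavariate(alpha, alpha) for _ in range(n)]
    mean = sum(samples) / n
    extreme = sum(s < 0.1 or s > 0.9 for s in samples) / n
    return mean, extreme

# alpha < 1: U-shaped, a large share of weights near the endpoints.
mean_small, extreme_small = beta_stats(0.2)

# alpha > 1: bell-shaped around 0.5, almost no mass near the endpoints.
mean_large, extreme_large = beta_stats(8.0)
```

Both distributions are symmetric around 0.5, but only the large-$\alpha$ one keeps $\lambda$ away from the endpoints, which is consistent with the observation that a large $\alpha$ makes $m_{\lambda}$ behave like an average.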
| Method | 0.00 | 0.05 | 0.10 | 0.15 |
| --- | --- | --- | --- | --- |
| Vaswani et al. | 44.59 | 41.54 | 38.84 | 35.71 |
| Miyato et al. | 45.11 | 42.11 | 39.39 | 36.44 |
| Sato et al. | 45.75 | 44.04 | 41.25 | 38.78 |
| Cheng et al. | 46.95 | 44.20 | 41.71 | 39.89 |
| Ours | 49.26 | 47.53 | 44.71 | 41.76 |
Table 5: Results on artificial noisy inputs. Each column lists results for a different noise fraction.

# 4.3 Effect of $\alpha$

The hyperparameter $\alpha$ controls the shape of the Beta distribution over interpolation weights. We study its effect on the validation set in Table 4. Notable differences occur between $\alpha < 1$ and $\alpha > 1$, because the Beta distribution takes two different shapes with $\alpha = 1$ as the critical point. As we can see, both $\mathcal{L}_{\text{aut}}$ and $\mathcal{L}_{\text{adv}}$ prefer a large $\alpha$ and perform better when $\alpha = 8$. Recall that when $\alpha$ is large, $m_{\lambda}$ behaves similarly to a simple averaging function. For $\mathcal{L}_{\text{mixup}}$, $\alpha = 4$ performs slightly better, while a large $\alpha = 32$ causes model training to fail. Although the result with $\alpha = 4$ appears slightly better, it takes more iterations to train the model to convergence, i.e., $90\mathrm{K}$ for $\alpha = 4$ vs. $20\mathrm{K}$ for $\alpha = 0.2$. These observations highlight the differences between the proposed vicinity distributions and the one used in mixup.

![](images/51600b116de689004ab0eb7514c8c96cdb687a3a8860d47e5dbdceda8e01cc9d.jpg)
Figure 2: BLEU scores over iterations on the Chinese-English validation set.

# 4.4 Robustness to Noisy Inputs and Overfitting

To test robustness on noisy inputs, we follow Cheng et al. (2019) and construct a noisy data set by randomly replacing a word in each sentence of the standard validation set with a relevant alternative. The relevance between words is measured by the similarity of their word embeddings. 100 noisy sentences are generated for each source sentence and then re-scored, picking the best one with a bidirectional language model. Table 5 shows the results on artificial noisy inputs with different noise levels. Our approach shows higher robustness than all baseline methods across all noise levels.

Figure 2 shows the evolution of BLEU scores during training.
For $\mathcal{L}_{clean}$, the BLEU score reaches its peak at about 20K iterations, and then the model starts overfitting. In comparison, all of the training losses proposed in this paper are capable of resisting overfitting: in fact, even after 100K iterations, no significant regression is observed (not shown in this figure). At the same iteration, our results are consistently higher than both the empirical risk $(\mathcal{L}_{clean})$ and mixup $(\mathcal{L}_{mixup})$.

As shown in Table 3, the baseline yields an incorrect translation, possibly because the word "danshi(但是)" seldom occurs in this context in our training data. In contrast, our model incorporates embeddings of virtual sentences that contain "danshi(但是)" or its synonym "dan(但)". This encourages our model to push their embeddings closer during training and makes it more robust to small perturbations in real sentences.

# 5 Related Work

Data Augmentation. Data augmentation is an effective method for improving machine translation performance. Existing methods in NMT can be divided into two categories: those based on extra corpora (Sennrich et al., 2016a; Cheng et al., 2016; Zhang and Zong, 2016; Edunov et al., 2018) and those based on the original parallel corpora (Fadaee et al., 2017; Wang et al., 2018; Cheng et al., 2019). Recently, mixup (Zhang et al., 2018) has become a popular data augmentation technique for semi-supervised learning (Berthelot et al., 2019) and for overcoming real-world noisy data (Jiang et al., 2019). Unlike prior work, we introduce a new method to augment the representations of adversarial examples in sequence-to-sequence training of the NMT model. Even without extra monolingual corpora, our approach substantially outperforms the widely used back-translation methods (Sennrich et al., 2016a; Edunov et al., 2018). Furthermore, we can obtain even better performance by including additional monolingual corpora.

Robust Neural Machine Translation.
It is well known that neural networks are sensitive to noisy inputs (Szegedy et al., 2014; Goodfellow et al., 2014), and neural machine translation is no exception. Thus, improving the robustness of NMT models has become a popular research topic (e.g., Belinkov and Bisk, 2018; Sperber et al., 2017; Ebrahimi et al., 2018; Cheng et al., 2018, 2019; Karpukhin et al., 2019; Li et al., 2019). Many of these studies focus on augmenting the training data to improve robustness, especially with adversarial examples (Ebrahimi et al., 2018; Cheng et al., 2019; Karpukhin et al., 2019; Michel et al., 2019). Others have addressed this issue by finding better input representations (Durrani et al., 2019), adding adversarial regularization (Sato et al., 2019), and so on. In contrast to those studies, we propose a vicinity distribution defined in a smooth space by interpolating discrete adversarial examples. Experimental results show substantial improvements on both clean and noisy inputs.

# 6 Conclusion

We have presented an approach to augment the training data of NMT models by introducing a new vicinity distribution defined over the interpolated embeddings of adversarial examples. To further improve translation quality, we also incorporate an existing vicinity distribution, similar to mixup, for observed examples in the training set. We design an augmentation algorithm over the virtual sentences sampled from both vicinity distributions in sequence-to-sequence NMT model training. Experimental results on Chinese-English, English-French and English-German translation tasks demonstrate the capability of our approach to improve both translation performance and robustness.

# References

Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.
Yonatan Belinkov and Yonatan Bisk. 2018.
Synthetic and natural noise both break neural machine translation. In International Conference on Learning Representations.
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raffel. 2019. MixMatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249.
Olivier Chapelle, Jason Weston, Léon Bottou, and Vladimir Vapnik. 2001. Vicinal risk minimization. In Advances in Neural Information Processing Systems, pages 416-422.
Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. In Association for Computational Linguistics.
Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018. Towards robust neural machine translation. In Association for Computational Linguistics.
Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi-supervised learning for neural machine translation. In Association for Computational Linguistics.

Nadir Durrani, Fahim Dalvi, Hassan Sajjad, Yonatan Belinkov, and Preslav Nakov. 2019. One size does not fit all: Comparing NMT representations of different granularities. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018. On adversarial examples for character-level neural machine translation. In Proceedings of COLING.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Empirical Methods in Natural Language Processing.
Marzieh Fadaee, Arianna Bisazza, and Christof Monz. 2017. Data augmentation for low-resource neural machine translation. In Association for Computational Linguistics.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference on Machine Learning.
+Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Advances in Neural Information Processing Systems. +Lu Jiang, Di Huang, and Weilong Yang. 2019. Synthetic vs real: Deep learning on controlled noise. arXiv preprint arXiv:1911.09781. +Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. 2018. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning. +Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019). +Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, and Hassan Sajjad. 2019. Findings of the first shared task on machine translation robustness. arXiv preprint arXiv:1906.11943. +Paul Michel, Xian Li, Graham Neubig, and Juan Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. In North American Chapter of the Association for Computational Linguistics: Human Language Technologies. +Takeru Miyato, Andrew M Dai, and Ian Goodfellow. 2017. Adversarial training methods for semi-supervised text classification. In International Conference on Learning Representations. + +Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. 2016. Distributional smoothing with virtual adversarial training. In International Conference on Learning Representations. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Association for Computational Linguistics. +Motoki Sato, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. 
In Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Association for Computational Linguistics.
Matthias Sperber, Jan Niehues, and Alex Waibel. 2017. Toward robust neural machine translation for noisy input sequences. In International Workshop on Spoken Language Translation.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems.
Xinyi Wang, Hieu Pham, Zihang Dai, and Graham Neubig. 2018. SwitchOut: an efficient data augmentation algorithm for neural machine translation. In Empirical Methods in Natural Language Processing.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In International Conference on Learning Representations.
Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Empirical Methods in Natural Language Processing.
# Adversarial and Domain-Aware BERT for Cross-Domain Sentiment Analysis

$^{12}$ Chunning Du, $^{12}$ Haifeng Sun, $^{12}$ Jingyu Wang, $^{*12}$ Qi Qi, $^{12}$ Jianxin Liao

$^{1}$ State Key Laboratory of Networking and Switching Technology,

Beijing University of Posts and Telecommunications, Beijing 100876, China

$^{2}$ EBUPT Information Technology Co., Ltd., Beijing 100191, China

{duchunning,sunhaifeng_1,wangjingyu,qiqi}@ebupt.com

# Abstract

Cross-domain sentiment classification aims to address the lack of massive amounts of labeled data.
It requires predicting sentiment polarity on a target domain using a classifier learned from a source domain. In this paper, we investigate how to efficiently apply the pre-trained language model BERT to unsupervised domain adaptation. Due to its pre-training task and corpus, BERT is task-agnostic: it lacks domain awareness and cannot distinguish the characteristics of the source and target domains when transferring knowledge. To tackle these problems, we design a post-training procedure, which contains a target domain masked language model task and a novel domain-distinguish pre-training task. The post-training procedure encourages BERT to be domain-aware and to distill domain-specific features in a self-supervised way. Based on this, we can then conduct adversarial training to derive enhanced domain-invariant features. Extensive experiments on the Amazon dataset show that our model outperforms state-of-the-art methods by a large margin. The ablation study demonstrates that the remarkable improvement comes not only from BERT but also from our method.

# 1 Introduction

Sentiment analysis aims to automatically identify the sentiment polarity of textual data. It is an essential task in natural language processing with widespread applications, such as movie reviews and product recommendations. Recently, deep networks have significantly improved the state of the art in sentiment analysis. However, training deep networks depends heavily on large amounts of labeled training data, which is time-consuming and expensive to label manually. Thus, there is a strong need to leverage or reuse rich labeled data from a different but related source domain. Cross-domain sentiment analysis, which transfers knowledge learned from a source domain to a new target domain, is therefore a promising direction.

The main challenge of cross-domain sentiment analysis is the domain discrepancy caused by differing expressions of user sentiment across domains.
To address this problem, a widely used approach is to extract domain-invariant features, meaning that the distributions of features from the source and target domains are similar (Zellinger et al., 2017; Persello and Bruzzone, 2016; Ganin et al., 2016; Yu and Jiang, 2016a). One effective way to obtain domain-invariant features is adversarial training (Ganin et al., 2016; Li et al., 2017; Zheng et al., 2019). Specifically, a domain discriminator is learned by minimizing the classification error of distinguishing the source from the target domain, while a deep classification model learns transferable representations that are indistinguishable by the domain discriminator.

Very recently, pre-trained language models have been shown to be effective for improving many language tasks (Peters et al., 2018). Bidirectional Encoder Representations from Transformers (BERT) realized a breakthrough: it pre-trains its encoder using language modeling and by discriminating surrounding sentences in a document from random ones (Devlin et al., 2019). Pre-training in this manner constructs bidirectional contextual representations, and the large-scale pre-training gives BERT a powerful language understanding ability. We only need to add one output layer and fine-tune BERT to obtain state-of-the-art results in sentiment analysis. Theoretically, BERT should thus also enhance performance in cross-domain sentiment analysis. However, some important problems remain when directly fine-tuning BERT for cross-domain sentiment analysis:

Firstly, there is no labeled data in the target domain, which brings many difficulties to the fine-tuning procedure. If we fine-tune BERT only on the source labeled data, the shift between training and test distributions will degrade BERT's performance. Secondly, BERT is task-agnostic and has almost no understanding of opinion text. BERT is pre-trained on the general-domain Wikipedia corpus, leaving the domain challenges unresolved (Xu et al., 2019). For example, in the pre-training procedure, BERT may learn to guess the [MASK] in "The [MASK] is bright" as "sun". But in laptop sentiment analysis, it is more likely to be "screen". Especially in the cross-domain sentiment analysis scenario, the labeled data is limited, which is insufficient to fine-tune BERT to ensure full domain-awareness. Thirdly, cross-domain sentiment analysis also raises a new challenge for BERT: distinguishing the characteristics of the source and target domains so as to keep transferable features and abandon domain-specific information.

To address the above problems, we design a novel pre-training task to make BERT domain-aware and then improve BERT's fine-tuning procedure with adversarial training. Specifically, a novel post-training procedure is implemented that adapts BERT with unlabeled data from different domains to enhance domain-awareness. Apart from the target domain masked language model task, we introduce the domain-distinguish pre-training task: BERT is pre-trained to distinguish whether two sentences come from the same domain. The domain-distinguish pre-training task encourages BERT to distill syntactic and semantic domain-specific features, so as to be domain-aware. The proposed post-training procedure gives us a new way to fully utilize language knowledge from the target domain and to link the source and target domains in a self-supervised way. Based on this, we can then conduct adversarial training to derive enhanced domain-invariant features.
The contributions of this paper can be summarized as follows: + +- We apply BERT to cross-domain sentiment analysis task and leverage the post-training method to inject the target domain knowledge + +to BERT. + +- A novel domain-distinguish pre-training task is proposed for the BERT post-training. This design encourages BERT to be domain-aware and distill the domain-specific features in a self-supervised way. + +# 2 Related Work + +# 2.1 Cross-Domain Sentiment Analysis + +Cross-domain sentiment analysis aims to generalize a classifier that is trained on a source domain, for which typically plenty of labeled data is available, to a target domain, for which labeled data is scarce. There are many pivot-based methods (Blitzer et al., 2007a; Yu and Jiang, 2016b; Ziser and Reichart, 2018; Peng et al., 2018), which focus on inducing a low-dimensional feature representation shared across domains based on the cooccurrence between pivots and non-pivots. However, selecting pivot words is very tedious, and the pivot words are manually selected, which may not be accurate. Recently, some adversarial learning methods (Ganin et al., 2016; Li et al., 2017; Zheng et al., 2019) propose to train the feature generator to minimize the classification loss and simultaneously deceive the discriminator, which is end-to-end without manually selecting pivots. + +# 2.2 Language Model Pre-training + +Pre-trained language representations with self-supervised objectives have become standard in a wide range of NLP tasks. Previous work can be divided into two main categories: feature-based approaches and fine-tuning approaches. + +The recent proposed feature-based approaches mainly focus on learning contextualized word representations such as CoVe (McCann et al., 2017) and ELMo (Peters et al., 2018). As with traditional word embeddings, these learned representations are also typically used as features in a downstream model. 
On the other hand, fine-tuning approaches mainly pre-train a language model on a large corpus with an unsupervised objective and then fine-tune the model with in-domain labeled data for downstream applications. The advantage of these approaches is that few parameters need to be learned from scratch. Specifically, Howard and Ruder (2018) propose ULMFiT, which uses a different learning rate for each layer with learning-rate warmup and gradual unfreezing. GPT (Radford et al., 2018) uses a Transformer decoder (Vaswani et al., 2017) instead of an LSTM and jointly fine-tunes with the language modeling objective. Moreover, BERT (Devlin et al., 2019) is a large-scale language model consisting of multiple Transformer layers, which further incorporates bidirectional representations. BERT is the state-of-the-art pre-trained language model. However, in the cross-domain sentiment analysis scenario, BERT is task-agnostic and cannot distinguish the characteristics of the source and target domains.

# 3 Model

In this section, we introduce the proposed model for cross-domain sentiment analysis in detail. We begin by giving the problem definition and notations. Then, BERT and the post-training method are formally presented in the second subsection. Finally, the adversarial training process is introduced. We also give a theoretical analysis of our model.

# 3.1 Problem Definition and Notations

In the task of cross-domain sentiment analysis, we are given two domains $D_{s}$ and $D_{t}$, which denote a source domain and a target domain, respectively. In the source domain, $D_{s}^{l} = \{x_{s}^{i},y_{s}^{i}\}_{i = 1}^{N_{s}^{l}}$ are $N_{s}^{l}$ labeled source domain examples, where $x_{s}^{i}$ denotes a sentence and $y_{s}^{i}$ is the corresponding polarity label. There are also $N_{s}^{u}$ unlabeled examples $D_{s}^{u} = \{x_{s}^{i}\}_{i = 1 + N_{s}^{l}}^{N_{s}^{l} + N_{s}^{u}}$ in the source domain.
In the target domain, there is a set of unlabeled data $D_{t} = \{x_{t}^{i}\}_{i = 1}^{N_{t}}$, where $N_{t}$ is the number of unlabeled examples. Cross-domain sentiment analysis requires learning a robust classifier on labeled source domain data to predict the polarity of unlabeled sentences from the target domain.

# 3.2 Background of BERT

BERT (Devlin et al., 2019) builds on the Transformer network (Vaswani et al., 2017), which relies purely on attention mechanisms and models dependencies without regard to their distance in the input sequences. BERT is pre-trained by predicting randomly masked words in the input (the MLM task) and by classifying whether two sentences are continuous or not (the NSP task). The MLM task allows the word representation to fuse the left and the right context, and the NSP task enables BERT to infer the relationship between sentences. The pre-trained BERT can easily be fine-tuned with one softmax output layer for classification tasks.

# 3.3 BERT Post-training

Despite this success, BERT suffers from the domain challenge. BERT is pre-trained on Wikipedia, leaving it task-agnostic with little understanding of opinion text. Especially in the cross-domain sentiment analysis scenario, the lack of abundant labeled data limits the fine-tuning procedure, which degrades BERT due to the domain shift. This task also demands that BERT distinguish the characteristics of the source and target domains for better knowledge transfer. Therefore, we propose BERT post-training, which takes BERT's pre-trained weights as the initialization for basic language understanding and adapts BERT with novel self-supervised pre-training tasks: a domain-distinguish task and a target domain masked language model.

# 3.3.1 Domain-distinguish Task

The next sentence prediction (NSP) task encourages BERT to model the relationship between sentences beyond the word level, which benefits tasks such as Question Answering and Natural Language Inference.
However, cross-domain sentiment analysis operates on a single text sentence and does not require this inference ability. Instead, the ability to distinguish domains plays an important role. Therefore, during the post-training procedure, we replace the NSP task with the domain-distinguish task (DDT). Specifically, we construct the sentence-pair input: [CLS] sentence A [SEP] sentence B [SEP], where [CLS] and [SEP] are special embeddings for classification and for separating sentences. $50\%$ of the time, sentence A and sentence B are both randomly sampled from target domain reviews, and we label the pair TargetDomain. The other $50\%$ of the time, sentence A and sentence B come from the target domain and another domain, and the pair is labeled MixDomain. We do not fix the collocation; in other words, we only ensure that the two sentences come from different domains, while their order is random. For example:

Input = [CLS] The mouse is smooth and great [SEP] The screen is plain [SEP]
Label = TargetDomain

Input = [CLS] This book is boring [SEP] The ... [SEP]
Label = MixDomain

The domain-distinguish pre-training is a classification task. We add one output layer on the pooled representation and maximize the likelihood of the correct label. The domain-distinguish pre-training enables BERT to distill the specific features of different domains, which enhances the downstream adversarial training and benefits cross-domain sentiment analysis.

# 3.3.2 Target Domain MLM

To inject target domain knowledge, we leverage the masked language model (MLM) (Devlin et al., 2019). It requires the model to predict randomly masked words in the sentence, which encourages BERT to construct a deep bidirectional representation.
In cross-domain sentiment analysis, there is no labeled data but plenty of unlabeled data in the target domain with which to post-train BERT by MLM. Specifically, we replace $15\%$ of tokens by [MASK] at random. The final hidden vectors corresponding to the masked tokens are fed into an output softmax over the vocabulary, and we maximize the likelihood of the masked token IDs.

Post-training on unlabeled review data in the target domain effectively alleviates the shift of domain knowledge. For example, if the masked word is an opinion word in "This movie is [MASK]", this objective challenges BERT to learn representations for fine-grained opinion words in the movie review domain, such as "touching" or "disturbing".

One problem is that the DDT task mixes sentences from other domains into the sentence pair. Sentences from other domains act as noise and introduce domain bias. Therefore, we only mask tokens in target domain sentences when the domain-distinguish task label is MixDomain.

The total loss of the post-training procedure is the sum of the losses of the target domain MLM and the domain-distinguish task. The adaptation takes about 5 hours to complete on a single NVIDIA P100 GPU.

# 3.4 Adversarial Training

The post-training procedure injects target domain knowledge and brings domain-awareness to BERT. Based on the post-trained BERT, we can now utilize adversarial training to abandon the distilled domain-specific features and derive domain-invariant features. Specifically, a sentiment classifier and a domain discriminator are designed, both operating on the hidden state $h_{[CLS]}$ of the special classification embedding [CLS].

# 3.4.1 Sentiment Classifier

The sentiment classifier is simply a fully-connected layer that outputs probabilities through a softmax:

$$
y_{s} = \operatorname{softmax}\left(W_{s} h_{[CLS]} + b_{s}\right).
\tag{1}
$$

The classifier is trained on the labeled data in the source domain, and the loss function is the cross-entropy:

$$
L_{sen} = -\frac{1}{N_{s}^{l}} \sum_{i = 1}^{N_{s}^{l}} \sum_{j = 1}^{K} \hat{y}_{s}^{i}(j) \log y_{s}^{i}(j), \tag{2}
$$

where $\hat{y}_s^i \in \{0,1\}$ is the ground truth label in the source domain, and $K$ denotes the number of different polarities.

# 3.4.2 Domain Discriminator

The domain discriminator aims to predict the domain labels of samples, i.e., whether they come from the source or the target domain. The parameters of BERT are optimized to maximize the loss of the domain discriminator. This objective encourages BERT to fool the domain discriminator and thus generate domain-invariant features.

Specifically, before being fed to the domain discriminator, the hidden state of the classification embedding [CLS], $h_{[CLS]}$, goes through the gradient reversal layer (GRL) (Ganin et al., 2016). During forward propagation, the GRL acts as an identity function, but during backpropagation, the GRL reverses the gradient by multiplying it by a negative scalar $\lambda$. The GRL can be formulated as a 'pseudo-function' $Q_{\lambda}(x)$ by the two equations below, describing its forward and backward behaviors:

$$
Q_{\lambda}(x) = x, \tag{3}
$$

$$
\frac{\partial Q_{\lambda}(x)}{\partial x} = -\lambda I. \tag{4}
$$

We denote the hidden state $h_{[CLS]}$ passed through the GRL as $Q_{\lambda}(h_{[CLS]}) = \hat{h}_{[CLS]}$ and then feed it to the domain discriminator as:

$$
d = \operatorname{softmax}\left(W_{d} \hat{h}_{[CLS]} + b_{d}\right). \tag{5}
$$

The target is to minimize the cross-entropy over all data from the source and target domains:

$$
L_{dom} = -\frac{1}{N_{s} + N_{t}} \sum_{i}^{N_{s} + N_{t}} \sum_{j}^{K} \hat{d}^{i}(j) \log d^{i}(j), \tag{6}
$$

where $\hat{d}^i \in \{0,1\}$ is the ground truth domain label.
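The GRL's forward and backward behaviors in Eqs. (3)-(4) can be sketched without a deep-learning framework (a minimal illustration, not the authors' implementation; the class name is ours):

```python
import numpy as np

# Minimal sketch of the gradient reversal layer (GRL), Eqs. (3)-(4).
# Forward pass: identity. Backward pass: multiply the incoming
# gradient by -lambda, so the encoder below the GRL is updated to
# *increase* the domain discriminator's loss.
class GradientReversal:
    def __init__(self, lam=1.0):
        self.lam = lam  # the scalar lambda of Eq. (4)

    def forward(self, x):
        return x  # Q_lambda(x) = x

    def backward(self, grad_output):
        return -self.lam * grad_output  # dQ_lambda/dx = -lambda * I

grl = GradientReversal(lam=0.5)
h = np.array([1.0, -2.0, 3.0])    # stand-in for h_[CLS]
print(grl.forward(h))             # unchanged: [ 1. -2.  3.]
print(grl.backward(np.ones(3)))   # reversed:  [-0.5 -0.5 -0.5]
```

In frameworks with automatic differentiation, the same effect is obtained by registering a custom backward pass that negates and scales the gradient.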
Due to the GRL, the parameters of the domain discriminator $\theta_{dd}$ are optimized to increase its ability to predict domain labels; however, the parameters of BERT $\theta_{BERT}$ are optimized to fool the domain discriminator, leading to domain-invariant features.

# 3.4.3 Joint Learning

The sentiment classifier and the domain discriminator are jointly trained, and the total loss is:

$$
L_{total} = L_{sen} + L_{dom}. \tag{7}
$$

The post-training procedure and our proposed domain-distinguish pre-training task enhance the adversarial training to obtain a lower classification error in the target domain; we analyze this in Sec. 3.5.

# 3.5 Theoretical Analysis

In this section, we provide a theoretical analysis of our approach. First, we give an insight into existing theory; then we introduce an expansion of the theory related to our method and explain how post-training and adversarial training cooperate to obtain a remarkably better result than state-of-the-art methods.

For each domain, there is a labeling function on inputs $X$, defined as $f: X \to [0,1]$. The ideal label functions for the source and target domains are denoted $f_s$ and $f_t$, respectively. We define a hypothesis label function $h: X \to [0,1]$ and a disagreement function:

$$
\epsilon\left(h_{1}, h_{2}\right) = E\left[\left|h_{1}(x) - h_{2}(x)\right|\right]. \tag{8}
$$

Then the expected error of $h$ on the source samples is defined as $\epsilon_s(h) = \epsilon_s(h, f_s)$. For the target domain, we have $\epsilon_t(h) = \epsilon_t(h, f_t)$.
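The disagreement in Eq. (8) is just an expected absolute difference, which can be estimated empirically; the hypotheses below are toy examples of ours, not from the paper:

```python
import numpy as np

# Empirical version of the disagreement in Eq. (8): the expected
# absolute difference between two hypotheses h1, h2 mapping inputs
# to [0, 1], averaged over a sample of inputs.
def disagreement(h1, h2, xs):
    return float(np.mean([abs(h1(x) - h2(x)) for x in xs]))

h1 = lambda x: 1.0 if x > 0 else 0.0   # toy threshold hypothesis
h2 = lambda x: 1.0 if x > 1 else 0.0   # shifted threshold
xs = [-1.0, 0.5, 2.0]                  # h1 and h2 disagree only on 0.5
print(disagreement(h1, h2, xs))        # 1/3
```

Computing this quantity under the source sample versus the target sample is what makes the $\mathcal{H}\Delta\mathcal{H}$-distance below measurable in practice.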
The divergence between the source and target domains can thus be measured by the $\mathcal{H}\Delta \mathcal{H}$-distance, defined as follows:

$$
d_{\mathcal{H}\Delta\mathcal{H}}\left(D_{s}, D_{t}\right) = 2 \sup_{h, h^{\prime} \in \mathcal{H}} \left|\epsilon_{s}\left(h, h^{\prime}\right) - \epsilon_{t}\left(h, h^{\prime}\right)\right| \tag{9}
$$

This distance was first proposed by Ben-David et al. (2010) and is frequently used to measure the adaptability between different domains (Shen et al., 2018; Chen et al., 2019).

# 3.5.1 Theorem 1.

Let $\mathcal{H}$ be the hypothesis class. Given two different domains $D_{s}, D_{t}$, we have:

$$
\forall h \in \mathcal{H}, \quad \epsilon_{t}(h) \leq \epsilon_{s}(h) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}\left(D_{s}, D_{t}\right) + C \tag{10}
$$

This theorem states that the expected error on the target domain is upper bounded by three terms: (1) the expected error on the source domain; (2) the divergence between the distributions $D_{s}$ and $D_{t}$; and (3) the error of the ideal joint hypothesis, $C$. Normally, $C$ is disregarded because it is considered negligibly small, so the first and second terms dominate the target error.

The first term, the source-domain error $\epsilon_{s}(h)$, is easy to minimize with labeled source training data; moreover, we adopt BERT, whose powerful contextual representations yield a lower error rate. The second term in Eq. 10 requires generating similar features across the domains. Our proposed domain-distinguish pre-training task and post-training enable the model to identify the features specific to each domain. This ability strengthens the domain discriminator, helping it find more subtle domain-specific features, which are then discarded through adversarial training.
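Numerically, the bound of Eq. 10 is a simple sum of three terms. A sketch with illustrative values only (the function name and numbers are ours, not from the paper):

```python
def target_error_bound(eps_s: float, d_hdh: float, C: float = 0.0) -> float:
    """Upper bound on the target error from Theorem 1 (Eq. 10):
    eps_t(h) <= eps_s(h) + (1/2) * d_HdH(D_s, D_t) + C."""
    return eps_s + 0.5 * d_hdh + C

# Even with a small source error, a large domain divergence dominates the
# bound, which is why adversarial training attacks the second term.
assert abs(target_error_bound(0.05, 1.2) - 0.65) < 1e-9
assert target_error_bound(0.05, 0.2) < target_error_bound(0.05, 1.2)
```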
Therefore, we further decrease the divergence between the domains, which is quantitatively measured by the $\mathcal{A}$-distance in Sec 4.6.

# 4 Experiments

In this section, we empirically evaluate the performance of our proposed methods.

# 4.1 Datasets and Experimental Setting

We conduct the experiments on the widely used Amazon reviews benchmark dataset collected by Blitzer et al. (2007b). It contains reviews from four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). For each domain, there are 2,000 labeled reviews and approximately 4,000 unlabeled reviews. Following the convention of previous works (Ziser and Reichart, 2018; Ganin et al., 2016; Qu et al., 2019), we construct 12 cross-domain sentiment analysis tasks. For each task, we employ a 5-fold cross-validation protocol: in each fold, 1,600 balanced samples are randomly selected from the labeled data for training, and the remaining 400 are used for validation.
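The 12 tasks are simply all ordered (source, target) pairs of the four domains:

```python
from itertools import permutations

# Books, DVDs, Electronics, Kitchen appliances
domains = ["B", "D", "E", "K"]

# All ordered (source, target) pairs give the 12 cross-domain tasks.
tasks = [f"{src}->{tgt}" for src, tgt in permutations(domains, 2)]

assert len(tasks) == 12
assert "B->E" in tasks and "E->B" in tasks
```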
| S→T | DANN | PBLM | HATN | ACAN | IATN | BERT | HATN-BERT | BERT-AT | BERT-DA | BERT-DAAT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| D→B | 81.70 | 82.50 | 86.30 | 82.35 | 87.00 | 89.40 | 89.81 | 89.55 | 90.40 | 90.86 |
| E→B | 78.55 | 71.40 | 81.00 | 79.75 | 81.80 | 86.50 | 87.10 | 87.15 | 88.31 | 88.91 |
| K→B | 79.25 | 74.20 | 83.30 | 80.80 | 84.70 | 87.55 | 87.88 | 87.65 | 87.90 | 87.98 |
| B→D | 82.30 | 84.20 | 86.10 | 83.45 | 86.80 | 88.96 | 89.36 | 89.70 | 89.75 | 89.70 |
| E→D | 79.70 | 75.00 | 84.00 | 81.75 | 84.10 | 87.95 | 88.81 | 88.20 | 89.03 | 90.13 |
| K→D | 80.45 | 79.80 | 84.50 | 82.10 | 84.10 | 87.30 | 87.89 | 87.72 | 88.35 | 88.81 |
| B→E | 77.60 | 77.60 | 85.70 | 81.20 | 86.50 | 86.15 | 87.21 | 87.30 | 88.11 | 89.57 |
| D→E | 79.70 | 79.60 | 85.60 | 82.80 | 86.90 | 86.55 | 86.99 | 86.05 | 88.15 | 89.30 |
| K→E | 86.65 | 87.10 | 87.00 | 86.60 | 87.60 | 90.45 | 90.31 | 90.25 | 90.59 | 91.72 |
| B→K | 76.10 | 82.50 | 85.20 | 83.05 | 85.90 | 89.05 | 89.41 | 89.55 | 90.65 | 90.75 |
| D→K | 77.35 | 83.20 | 86.20 | 78.60 | 85.80 | 87.53 | 87.59 | 87.69 | 88.55 | 90.50 |
| E→K | 83.95 | 87.80 | 87.90 | 83.35 | 88.70 | 91.60 | 92.01 | 91.91 | 92.75 | 93.18 |
| Average | 80.29 | 80.40 | 85.10 | 82.15 | 85.90 | 88.25 | 88.69 | 88.56 | 89.37 | 90.12 |
Table 1: Accuracy of domain adaptation on the Amazon benchmark. DANN, PBLM, HATN, ACAN and IATN are previous models; the remaining columns are BERT-based.

# 4.2 Implementation Details

We adopt $\mathrm{BERT}_{\mathrm{base}}$ (uncased) as the basis for all experiments. When generating the post-training data, each sentence in the target domain is duplicated 10 times with different masks and sentence pairs. We limit the maximum sequence length to 256. During post-training, we train with a batch size of 16 for 10,000 steps. The optimizer is Adam with learning rate $2\mathrm{e}{-5}$, $\beta_{1} = 0.9$, $\beta_{2} = 0.999$, and L2 weight decay of 0.01. During adversarial training, the weights in the sentiment classifier and domain discriminator are initialized from a truncated normal distribution with mean 0.0 and stddev 0.02. In the gradient reversal layer (GRL), we define the training progress as $p = \frac{t}{T}$, where $t$ and $T$ are the current training step and the maximum training step, respectively, and the adaptation rate $\lambda$ is increased as $\lambda = \frac{2}{1 + \exp(-10p)} - 1$.

# 4.3 Compared Methods

We compare our method with 5 state-of-the-art methods: DANN (Ganin et al., 2016), PBLM (Ziser and Reichart, 2018), HATN (Li et al., 2018), ACAN (Qu et al., 2019), and IATN (Zhang et al., 2019). We also design several variants of BERT as baselines:

- BERT: Fine-tuning vanilla BERT on the source domain labeled data.
- HATN-BERT: The HATN (Li et al., 2018) model based on BERT.
- BERT-AT: Adversarial training on vanilla BERT.
- BERT-DA: Fine-tuning domain-aware BERT on the source domain labeled data. The domain-aware BERT is obtained by post-training.
- BERT-DAAT: Our proposed method introduced in Sec 3.

# 4.4 Experimental Results

Table 1 shows the classification accuracy of the different methods. We can observe that the proposed BERT-DAAT outperforms all other methods.

The previous models are mostly based on word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014) embeddings.
Compared to BERT's contextual word representations, they cannot model the complex characteristics of word use and how these uses vary across linguistic contexts, resulting in relatively worse overall performance. We can see that vanilla BERT, which is fine-tuned only on the source domain labeled data without utilizing any target domain data, still outperforms all the previous methods. For a fair comparison, we reproduce the experiment of the HATN model (Li et al., 2018) with BERT incorporated as the base model. As shown in Table 1, HATN-BERT achieves a result comparable to BERT-AT.

Among the BERT variants, we did not see a remarkable improvement from BERT-AT, which conducts adversarial training on vanilla BERT. This demonstrates that, in cross-domain sentiment analysis, the bottleneck of BERT is its lack of domain awareness, which cannot be tackled purely

![](images/c737d8995730b207d4c91b2f875680444cdb587387581941cc6d24ca90e36544.jpg)
Figure 1: The effect of post-training and adversarial training on the distribution of the extracted features. The figure shows a t-SNE visualization of BERT's hidden state for the $\mathrm{B}\rightarrow \mathrm{E}$ task. The red, blue, green and black points denote the source negative, source positive, target negative and target positive examples, respectively.
+ +# 4.5 Visualization of Features + +To intuitively assess the effects of the post-training and adversarial training on BERT, we further perform a visualization of the feature representations of the variants of BERT for the training data in the source domain and the testing data in the target domain for the $\mathrm{B}\rightarrow \mathrm{E}$ task. As shown in Figure 1, the graphs are obtained by applying t-SNE on the set of all representation of source and target data points. Every sample is mapped into a 768-dimensional feature space through BERT and projected back into a two-dimensional plane by the t-SNE. + +In the vanilla BERT representation (first subgraph in Figure 1), we could observe that data points of different polarities in source domain are well separated. While for the target domain, some data points are mixed together. It shows that only utilizing source domain labeled data is not enough for the target domain classification. For the post-trained BERT (subgraph for BERT-DA), data points belong to four clusters, indicating that domains and sentiment polarities are both well classified. It verifies that our post-training strategy brings domain-awareness to BERT. Moreover, compared to the first subgraph, the boundary for sentiment polarity classification is more clear, showing that injecting domain knowledge by post-training is beneficial to sentiment classification. + +The latter two subgraphs in Figure 1 are the feature distributions obtained by adversarial training. One common characteristic is that data samples from different domains are very close to each other through adversarial training. However, the boundary for sentiment polarity classification is not very clear in BERT-AT's feature representation, resulting in degraded performance. For our proposed BERT-DAAT, the post-training enables the domain-awareness and help to distill more complicated domain specific features. 
The adversarial training is thus enhanced to produce more domain-invariant features. We find that target points are spread homogeneously among source points, which decreases the divergence between the domains. According to Theorem 1, this lowers the upper bound on the target error.

# 4.6 $\mathcal{A}$-distance

Theorem 1 shows that the divergence between domains $d_{\mathcal{H}\Delta \mathcal{H}}(D_s,D_t)$ plays an important role. To quantitatively measure it, we compute the $\mathcal{A}$-distance, which is commonly used to measure domain discrepancy (Ben-David et al., 2010). The $\mathcal{A}$-distance is defined as $d_{\mathcal{A}} = 2(1 - 2\epsilon)$, where $\epsilon$ is the generalization error of a classifier trained on the binary task of discriminating between the source and target domains. More precisely, to obtain the $\mathcal{A}$-distance, we first split the source and target domain data into two subsets of equal size and extract the feature representations. We then train a linear SVM on the first subset to predict which domain each sample comes from. The error rate $\epsilon$ is calculated on the second subset with the trained SVM, and the $\mathcal{A}$-distance is obtained as $d_{\mathcal{A}} = 2(1 - 2\epsilon)$.

We compare the $\mathcal{A}$-distance of BERT, BERT-AT, and BERT-DAAT. Results are shown in Figure 2. For each cross-domain sentiment analysis task,

![](images/fe333c9bfd2c965e2df1a0feac7396ed23096df0e94f91e4fd8d691d877ab846.jpg)
Figure 2: Comparison of $\mathcal{A}$-distance of different models.
+ +# 4.7 Ablation Studies + +To analyze the effect of different components including post-training steps and post-training tasks, we conduct the ablation experiments. + +# 4.7.1 Effects of Post-Training Steps + +In this subsection, we study the effect of post-training steps. Figure 3 presents the accuracy on the task of $\mathrm{E} \rightarrow \mathrm{K}$ based on the checkpoint that has been post-trained for $k$ steps. The results for BERT-DA are obtained by fine-tuning source domain labeled data, BERT-DAAT is adversarial training by source labeled data and target unlabeled data. + +We find that, with limited post-training steps (fewer than 5000 steps), BERT-DA and BERT-DAAT perform similarly with BERT and BERT-AT, respectively. However, given post-training steps more than 5000, both the results of BERT-DA and BERT-DAAT see an increase. Especially, after post-training more than 5000 steps, BERT-DAAT shows remarkable strengths compared to BERT-DA. This shows that plenty of post-training steps is necessary to inject domain knowledge and domain-awareness. + +# 4.7.2 Effects of Post-training Tasks + +The post-training tasks in our work include target domain masked language model (MLM) and our proposed domain-distinguish task (DDT). We design two models which ablate MLM and DDT + +![](images/c85b19cfea568fd34d33d03093a6c62df2661e8347d239e442d7169824130266.jpg) +Figure 3: Ablation study on the number of post-training steps. The x-axis is the value of post-training steps $k$ . The y-axis is the accuracy on the task of $\mathrm{E} \rightarrow \mathrm{K}$ . + +
| Model | D→B | E→B | K→B |
| --- | --- | --- | --- |
| BERT | 89.40 | 86.50 | 87.55 |
| BERT-DAAT | 90.86 | 88.91 | 87.98 |
| - w/o MLM | 89.91 | 87.39 | 87.80 |
| - w/o DDT | 90.02 | 88.01 | 87.63 |
Table 2: Ablation study over post-training tasks. "w/o" means "without".

separately and compare them with BERT-DAAT on the tasks of $\mathrm{D}\rightarrow \mathrm{B}$, $\mathrm{E}\rightarrow \mathrm{B}$, and $\mathrm{K}\rightarrow \mathrm{B}$. The results in Table 2 indicate that the target domain masked language model task (MLM) and the domain-distinguish task (DDT) are both beneficial to cross-domain sentiment analysis.

# 5 Conclusion and Future Work

In this paper, we propose the BERT-DAAT model for cross-domain sentiment analysis. Our purpose is to inject target domain knowledge into BERT and encourage BERT to be domain-aware. Specifically, we conduct post-training and adversarial training. A novel domain-distinguish pre-training task is designed to distill the domain-specific features in a self-supervised manner. Experimental results on the Amazon dataset demonstrate the effectiveness of our model, which remarkably outperforms state-of-the-art methods.

The proposed post-training procedure could also be applied to other domain adaptation scenarios such as named entity recognition, question answering, and reading comprehension. In the future, we would like to investigate the application of our approach in these domain adaptation tasks.

# Acknowledgements

This work was supported in part by the National Key R&D Program of China 2018YFB1800502, in part by the National Natural Science Foundation of China under Grants 61671079 and 61771068, in part by the Beijing Municipal Natural Science Foundation under Grant 4182041, and in part by the Ministry of Education and China Mobile Joint Fund MCM20180101. This work was also supported by the BUPT Excellent Ph.D. Students Foundation CX2020206.

# References

Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning.
John Blitzer, Mark Dredze, and Fernando Pereira. 2007a.
Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007.
John Blitzer, Mark Dredze, and Fernando Pereira. 2007b. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007.
Xinyang Chen, Sinan Wang, Mingsheng Long, and Jianmin Wang. 2019. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In ICML 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor S. Lempitsky. 2016. Domain-adversarial training of neural networks. J. Mach. Learn. Res.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In ACL 2018.
Zheng Li, Ying Wei, Yu Zhang, and Qiang Yang. 2018. Hierarchical attention transfer network for cross-domain sentiment classification. In AAAI 2018.
Zheng Li, Yu Zhang, Ying Wei, Yuxiang Wu, and Qiang Yang. 2017. End-to-end adversarial memory network for cross-domain sentiment classification. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality.
In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013.
Minlong Peng, Qi Zhang, Yu-Gang Jiang, and Xuanjing Huang. 2018. Cross-domain sentiment classification with target domain specific information. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 1: Long Papers.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP 2014.
Claudio Persello and Lorenzo Bruzzone. 2016. Kernel-based domain-invariant feature selection in hyperspectral images for transfer learning. IEEE Trans. Geoscience and Remote Sensing.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018.
Xiaoye Qu, Zhikang Zou, Yu Cheng, Yang Yang, and Pan Zhou. 2019. Adversarial category alignment network for cross-domain sentiment classification. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical report, OpenAI.
Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. 2018. Wasserstein distance guided representation learning for domain adaptation. In AAAI 2018.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019.
Jianfei Yu and Jing Jiang. 2016a. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 236-246.
Jianfei Yu and Jing Jiang. 2016b. Learning sentence embeddings with auxiliary tasks for cross-domain sentiment classification. In EMNLP 2016.
Werner Zellinger, Thomas Grubinger, Edwin Lughofer, Thomas Natschläger, and Susanne Saminger-Platz. 2017. Central moment discrepancy (CMD) for domain-invariant representation learning. In ICLR 2017.
Kai Zhang, Hefu Zhang, Qi Liu, Hongke Zhao, Hengshu Zhu, and Enhong Chen. 2019. Interactive attention transfer network for cross-domain sentiment classification. In AAAI 2019.
Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, and Qiang Yang. 2019. Transferable end-to-end aspect-based sentiment analysis with selective adversarial learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Yftah Ziser and Roi Reichart. 2018. Pivot based language modeling for improved neural domain adaptation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018.
\ No newline at end of file diff --git a/adversarialanddomainawarebertforcrossdomainsentimentanalysis/images.zip b/adversarialanddomainawarebertforcrossdomainsentimentanalysis/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..15997125609639c8933d9a70b934a1990924b7aa --- /dev/null +++ b/adversarialanddomainawarebertforcrossdomainsentimentanalysis/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:319d5c5c141c3c57c9d34c619c4d2525ae9d101723544f90c29eacfb44875aa0 +size 369957 diff --git a/adversarialanddomainawarebertforcrossdomainsentimentanalysis/layout.json b/adversarialanddomainawarebertforcrossdomainsentimentanalysis/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..ec891000cc9e926228031dbe3c8e33fe0145d5de --- /dev/null +++ b/adversarialanddomainawarebertforcrossdomainsentimentanalysis/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:38172bcbec7fe3e7a2642b81312940688239e426a53bd6882727580bfb9ccc21 +size 323564 diff --git a/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_content_list.json b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a8734a6efd09659ff39382429afc0b8231eb95c4 --- /dev/null +++ b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5637ad39798547053ccd01849e4659de0c32c0f4c194bd6f0fe9efd63922899b +size 106929 diff --git a/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_model.json b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..a85999d87e4ff8b8e0986a4f4ca6067e6262fd6a --- /dev/null +++ b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3fb799455421a77956b88dbfda303d4ad931ca0d7024d01d6d8a0197c900c7af +size 126861 diff --git a/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_origin.pdf b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..93dfc642e0ccb07474bba1a3eb08da2e069ed75a --- /dev/null +++ b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/5f58bfe5-0630-438f-bf75-cd8e5cc547b0_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aa9aacbff30a02a7f45bebeeb5aacff035e889d4ed6b7e428cfab4a8afec7f2e +size 1420864 diff --git a/adversarialnlianewbenchmarkfornaturallanguageunderstanding/full.md b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a9bb95b667ac6533049b5a5237e41608350de692 --- /dev/null +++ b/adversarialnlianewbenchmarkfornaturallanguageunderstanding/full.md @@ -0,0 +1,370 @@ +# Adversarial NLI: A New Benchmark for Natural Language Understanding + +Yixin Nie*, Adina Williams†, Emily Dinan†, Mohit Bansal*, Jason Weston†, Douwe Kiela† + +*UNC Chapel Hill + +$\dagger$ Facebook AI Research + +# Abstract + +We introduce a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks, while posing a more difficult challenge with its new test set. 
Our analysis sheds light on the shortcomings of current state-of-the-art models, and shows that non-expert annotators are successful at finding their weaknesses. The data collection method can be applied in a never-ending learning scenario, becoming a moving target for NLU, rather than a static benchmark that will quickly saturate.

# 1 Introduction

Progress in AI has been driven by, among other things, the development of challenging large-scale benchmarks like ImageNet (Russakovsky et al., 2015) in computer vision, and SNLI (Bowman et al., 2015), SQuAD (Rajpurkar et al., 2016), and others in natural language processing (NLP). Recently, for natural language understanding (NLU) in particular, the focus has shifted to combined benchmarks like SentEval (Conneau and Kiela, 2018) and GLUE (Wang et al., 2018), which track model performance on multiple tasks and provide a unified platform for analysis.

With the rapid pace of advancement in AI, however, NLU benchmarks struggle to keep up with model improvement. Whereas it took around 15 years to achieve "near-human performance" on MNIST (LeCun et al., 1998; Ciresan et al., 2012; Wan et al., 2013) and approximately 7 years to surpass humans on ImageNet (Deng et al., 2009; Russakovsky et al., 2015; He et al., 2016), the GLUE benchmark did not last as long as we would have hoped after the advent of BERT (Devlin et al., 2018), and rapidly had to be extended into SuperGLUE (Wang et al., 2019). This raises an important question: Can we collect a large benchmark dataset that can last longer?

The speed with which benchmarks become obsolete raises another important question: are current NLU models genuinely as good as their high performance on benchmarks suggests?
A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets (Gururangan et al., 2018; Poliak et al., 2018; Tsuchiya, 2018; Glockner et al., 2018; Geva et al., 2019; McCoy et al., 2019), instead of learning meaning in the flexible and generalizable way that humans do. Given this, human annotators—be they seasoned NLP researchers or non-experts—might easily be able to construct examples that expose model brittleness.

We propose an iterative, adversarial human-and-model-in-the-loop solution for NLU dataset collection that addresses both benchmark longevity and robustness issues. In the first stage, human annotators devise examples that our current best models cannot determine the correct label for. These resulting hard examples—which should expose additional model weaknesses—can be added to the training set and used to train a stronger model. We then subject the strengthened model to the same procedure and collect weaknesses over several rounds. After each round, we train a new model and set aside a new test set. The process can be iteratively repeated in a never-ending learning (Mitchell et al., 2018) setting, with the model getting stronger and the test set getting harder in each new round. Thus, not only is the resultant dataset harder than existing benchmarks, but this process also yields a "moving post" dynamic target for NLU systems, rather than a static benchmark that will eventually saturate.

![](images/146d8ad67c596e66e14726d2c9c687ee91c33ae9bb1459d07fbc882d817d640e.jpg)
Figure 1: Adversarial NLI data collection via human-and-model-in-the-loop enabled training (HAMLET). The four steps make up one round of data collection. In step 3, model-correct examples are included in the training set; development and test sets are constructed solely from model-wrong verified-correct examples.

Our approach draws inspiration from recent efforts that gamify collaborative training of machine learning agents over multiple rounds (Yang et al., 2017) and pit "builders" against "breakers" to learn better models (Ettinger et al., 2017). Recently, Dinan et al. (2019) showed that such an approach can be used to make dialogue safety classifiers more robust. Here, we focus on natural language inference (NLI), arguably the most canonical task in NLU. We collected three rounds of data, and call our new dataset Adversarial NLI (ANLI).

Our contributions are as follows: 1) We introduce a novel human-and-model-in-the-loop dataset, consisting of three rounds that progressively increase in difficulty and complexity, that includes annotator-provided explanations. 2) We show that training models on this new dataset leads to state-of-the-art performance on a variety of popular NLI benchmarks. 3) We provide a detailed analysis of the collected data that sheds light on the shortcomings of current models, categorizes the data by inference type to examine weaknesses, and demonstrates good performance on NLI stress tests. The ANLI dataset is available at github.com/facebookresearch/anli/. A demo is available at adversarialnli.com.

# 2 Dataset collection

The primary aim of this work is to create a new large-scale NLI benchmark on which current state-of-the-art models fail. This constitutes a new target for the field to work towards, and can elucidate model capabilities and limitations. As noted, however, static benchmarks do not last very long these days. If continuously deployed, the data collection
In our setup, our starting point is a base model, trained on NLI data. Rather than employing automated adversarial methods, here the model's "adversary" is a human annotator. Given a context (also often called a "premise" in NLI) and a desired target label, we ask the human writer to provide a hypothesis that fools the model into misclassifying the label. One can think of the writer as a "white hat" hacker, trying to identify vulnerabilities in the system. For each human-generated example that is misclassified, we also ask the writer to provide a reason why they believe it was misclassified.

For examples that the model misclassified, it is necessary to verify that they are actually correct, i.e., that the given context-hypothesis pairs genuinely have their specified target label. The best way to do this is to have them checked by another human. Hence, we provide the example to human verifiers. If two human verifiers agree with the writer, the example is considered a good example. If they disagree, we ask a third human verifier to break the tie. If there is still disagreement between the writer and the verifiers, the example is discarded. If the verifiers disagree, they can overrule the original target label of the writer.

| Context | Hypothesis | Reason | Round | orig. | pred. | valid. | Annotations |
|---|---|---|---|---|---|---|---|
| Roberto Javier Mora García (c. 1962 – 16 March 2004) was a Mexican journalist and editorial director of "El Mañana", a newspaper based in Nuevo Laredo, Tamaulipas, Mexico. He worked for a number of media outlets in Mexico, including the "El Norte" and "El Diario de Monterrey", prior to his assassination. | Another individual laid waste to Roberto Javier Mora Garcia. | The context states that Roberto Javier Mora Garcia was assassinated, so another person had to have "laid waste to him." The system most likely had a hard time figuring this out due to it not recognizing the phrase "laid waste." | A1 (Wiki) | E | N | E E | Lexical (assassination, laid waste), Tricky (Presupposition), Standard (Idiom) |
| A melee weapon is any weapon used in direct hand-to-hand combat; by contrast with ranged weapons which act at a distance. The term "melee" originates in the 1640s from the French word "mèlée", which refers to hand-to-hand combat, a close quarters battle, a brawl, a confused fight, etc. Melee weapons can be broadly divided into three categories | Melee weapons are good for ranged and hand-to-hand combat. | Melee weapons are good for hand to hand combat, but NOT ranged. | A2 (Wiki) | C | E | C N C | Standard (Conjunction), Tricky (Exhaustification), Reasoning (Facts) |
| If you can dream it, you can achieve it—unless you're a goose trying to play a very human game of rugby. In the video above, one bold bird took a chance when it ran onto a rugby field mid-play. Things got dicey when it got into a tussle with another player, but it shook it off and kept right on running. After the play ended, the players escorted the feisty goose off the pitch. It was a risky move, but the crowd chanting its name was well worth it. | The crowd believed they knew the name of the goose running on the field. | Because the crowd was chanting its name, the crowd must have believed they knew the goose's name. The word "believe" may have made the system think this was an ambiguous statement. | A3 (News) | E | N | E E | Reasoning (Facts), Reference (Coreference) |

Table 1: Examples from the development set. 'Round' gives the round number, 'orig.' is the original annotator's gold label, 'pred.' is the model prediction, 'valid.' are the validator labels, 'Reason' was provided by the original annotator, and 'Annotations' are the tags determined by an expert linguist annotator.

Once data collection for the current round is finished, we construct a new training set from the collected data, with accompanying development and test sets, which are constructed solely from verified correct examples. The test set was further restricted so as to: 1) include pairs from "exclusive" annotators who are never included in the training data; and 2) be balanced by label classes (and genres, where applicable). We subsequently train a new model on this and other existing data, and repeat the procedure.

# 2.2 Annotation details

We employed Mechanical Turk workers with qualifications and collected hypotheses via the ParlAI framework. Annotators are presented with a context and a target label—either 'entailment', 'contradiction', or 'neutral'—and asked to write a hypothesis that corresponds to the label. We phrase the label classes as "definitely correct", "definitely incorrect", or "neither definitely correct nor definitely incorrect" given the context, to make the task easier to grasp. Model predictions are obtained for the context and submitted hypothesis pair. The probability of each label is shown to the worker as feedback. If the model prediction was incorrect, the job is complete. If not, the worker continues to write hypotheses for the given (context, target-label) pair until the model predicts the label incorrectly or the number of tries exceeds a threshold (5 tries in the first round, 10 tries thereafter).

To encourage workers, payments increased as rounds became harder.
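The collection and verification loop described above can be sketched as follows. This is a minimal illustration, not code from the released ANLI pipeline: the function names, callable interfaces, and the `MAX_TRIES` dictionary are our own, though the try limits (5 in round 1, 10 thereafter) and the two-verifiers-plus-tie-breaker rule come from the text.

```python
# Illustrative sketch of the HAMLET loop (Sections 2.1-2.2).
# All names and interfaces here are hypothetical.

MAX_TRIES = {1: 5, 2: 10, 3: 10}  # per-round try limits from Section 2.2

def collect_example(model, writer, context, target, round_no):
    """Let a writer attack the model until it mislabels a hypothesis."""
    for _ in range(MAX_TRIES[round_no]):
        hypothesis = writer(context, target)
        if model(context, hypothesis) != target:
            return hypothesis  # model fooled: send for human verification
    return None  # give up on this (context, target) pair

def resolve_label(writer_label, v1, v2, tie_breaker=None):
    """Verification rule: two verifiers, plus a tie-breaker on disagreement.

    Returns the final gold label, or None if the example is discarded
    (or still pending a third verifier).
    """
    if v1 == writer_label and v2 == writer_label:
        return writer_label  # both verifiers agree with the writer
    if tie_breaker is None:
        return None  # a third verifier is still needed to break the tie
    votes = [v1, v2, tie_breaker]
    for label in set(votes):
        if votes.count(label) >= 2:
            return label  # a verifier majority can overrule the writer
    return None  # no agreement at all: discard the example
```

Only examples that survive `resolve_label` with their label intact (or overruled) enter the development and test sets; unverified model errors remain in the training set only.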
For hypotheses that the model predicted incorrectly, and that were verified by other humans, we paid an additional bonus on top of the standard rate.

# 2.3 Round 1

For the first round, we used a BERT-Large model (Devlin et al., 2018) trained on a concatenation of SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2017), and selected the best-performing model we could train as the starting point for our dataset collection procedure. For Round 1 contexts, we randomly sampled short multi-sentence passages from Wikipedia (of 250-600 characters) from the manually curated HotpotQA training set (Yang et al., 2018). Contexts are either ground-truth contexts from that dataset, or they are Wikipedia passages retrieved using TF-IDF (Chen et al., 2017) based on a HotpotQA question.

# 2.4 Round 2

For the second round, we used a more powerful RoBERTa model (Liu et al., 2019b) trained on SNLI, MNLI, an NLI version$^2$ of FEVER (Thorne et al., 2018), and the training data from the previous round (A1). After a hyperparameter search, we selected the model with the best performance on the A1 development set. Then, using the hyperparameters selected from this search, we created a final set of models by training several models with different random seeds. During annotation, we constructed an ensemble by randomly picking a model from the model set as the adversary each turn. This helps us avoid annotators exploiting vulnerabilities in one single model. A new non-overlapping set of contexts was again constructed from Wikipedia via HotpotQA using the same method as Round 1.

| Dataset | Genre | Context | Train / Dev / Test | Model error rate (unverified) | Model error rate (verified) | Tries mean/median per verified ex. | Time (sec.) mean/median |
|---|---|---|---|---|---|---|---|
| A1 | Wiki | 2,080 | 16,946 / 1,000 / 1,000 | 29.68% | 18.33% | 3.4 / 2.0 | 199.2 / 125.2 |
| A2 | Wiki | 2,694 | 45,460 / 1,000 / 1,000 | 16.59% | 8.07% | 6.4 / 4.0 | 355.3 / 189.1 |
| A3 | Various | 6,002 | 100,459 / 1,200 / 1,200 | 17.47% | 8.60% | 6.4 / 4.0 | 284.0 / 157.0 |
| (Wiki subset) | | 1,000 | 19,920 / 200 / 200 | 14.79% | 6.92% | 7.4 / 5.0 | 337.3 / 189.6 |
| ANLI | Various | 10,776 | 162,865 / 3,200 / 3,200 | 18.54% | 9.52% | 5.7 / 3.0 | 282.9 / 156.3 |

Table 2: Dataset statistics: 'Model error rate' is the percentage of examples that the model got wrong; 'unverified' is the overall percentage, while 'verified' is the percentage that was verified by at least 2 human annotators.

# 2.5 Round 3

For the third round, we selected a more diverse set of contexts, in order to explore robustness under domain transfer. In addition to contexts from Wikipedia for Round 3, we also included contexts from the following domains: News (extracted from Common Crawl), fiction (extracted from StoryCloze (Mostafazadeh et al., 2016) and CBT (Hill et al., 2015)), formal spoken text (excerpted from court and presidential debate transcripts in the Manually Annotated Sub-Corpus (MASC) of the Open American National Corpus$^3$), and causal or procedural text, which describes sequences of events or actions, extracted from WikiHow. Finally, we also collected annotations using the longer contexts present in the GLUE RTE training data, which came from the RTE5 dataset (Bentivogli et al., 2009). We trained an even stronger RoBERTa ensemble by adding the training set from the second round (A2) to the training data.

# 2.6 Comparing with other datasets

The ANLI dataset, comprising three rounds, improves upon previous work in several ways. First, and most obviously, the dataset is collected to be more difficult than previous datasets, by design.
Second, it remedies a problem with SNLI, namely that its contexts (or premises) are very short, because they were selected from the image captioning domain. We believe longer contexts should naturally lead to harder examples, and so we constructed ANLI contexts from longer, multi-sentence source material.

Following previous observations that models might exploit spurious biases in NLI hypotheses (Gururangan et al., 2018; Poliak et al., 2018), we conduct a study of the performance of hypothesis-only models on our dataset. We show that such models perform poorly on our test sets.

With respect to data generation with naïve annotators, Geva et al. (2019) noted that models can pick up on annotator bias, modelling annotator artefacts rather than the intended reasoning phenomenon. To counter this, we selected a subset of annotators (i.e., the "exclusive" workers) whose data would only be included in the test set. This enables us to avoid overfitting to the writing style biases of particular annotators, and also to determine how much individual annotator bias is present for the main portion of the data. Examples from each round of dataset collection are provided in Table 1.

Furthermore, our dataset poses new challenges to the community that were less relevant for previous work, such as: can we improve performance online without having to train a new model from scratch every round; how can we overcome catastrophic forgetting; and how do we deal with mixed model biases? Because the training set includes examples that the model got right but were not verified, learning from noisy and potentially unverified data becomes an additional interesting challenge.

# 3 Dataset statistics

The dataset statistics can be found in Table 2. The number of examples we collected increases per round, from approximately 19k examples for Round 1, to around 47k examples for Round 2, to over 103k examples for Round 3. We collected more data for later rounds not only because that data is likely to be more interesting, but also simply because the base model is better, and so it took longer to collect good, verified correct examples of model vulnerabilities.

| Model | Training Data | A1 | A2 | A3 | ANLI | ANLI-E | SNLI | MNLI-m/-mm |
|---|---|---|---|---|---|---|---|---|
| BERT | S,M$^{\star 1}$ | 0.0 | 28.9 | 28.8 | 19.8 | 19.9 | 91.3 | 86.7 / 86.4 |
| | +A1 | 44.2 | 32.6 | 29.3 | 35.0 | 34.2 | 91.3 | 86.3 / 86.5 |
| | +A1+A2 | 57.3 | 45.2 | 33.4 | 44.6 | 43.2 | 90.9 | 86.3 / 86.3 |
| | +A1+A2+A3 | 57.2 | 49.0 | 46.1 | 50.5 | 46.3 | 90.9 | 85.6 / 85.4 |
| | S,M,F,ANLI | 57.4 | 48.3 | 43.5 | 49.3 | 44.2 | 90.4 | 86.0 / 85.8 |
| XLNet | S,M,F,ANLI | 67.6 | 50.7 | 48.3 | 55.1 | 52.0 | 91.8 | 89.6 / 89.4 |
| RoBERTa | S,M | 47.6 | 25.4 | 22.1 | 31.1 | 31.4 | 92.6 | 90.8 / 90.6 |
| | +F | 54.0 | 24.2 | 22.4 | 32.8 | 33.7 | 92.7 | 90.6 / 90.5 |
| | +F+A1$^{\star 2}$ | 68.7 | 19.3 | 22.0 | 35.8 | 36.8 | 92.8 | 90.9 / 90.7 |
| | +F+A1+A2$^{\star 3}$ | 71.2 | 44.3 | 20.4 | 43.7 | 41.4 | 92.9 | 91.0 / 90.7 |
| | S,M,F,ANLI | 73.8 | 48.9 | 44.4 | 53.7 | 49.7 | 92.6 | 91.0 / 90.6 |

Table 3: Model Performance. 'S' refers to SNLI, 'M' to MNLI dev (-m=matched, -mm=mismatched), and 'F' to FEVER; 'A1-A3' refer to the rounds respectively, and 'ANLI' refers to A1+A2+A3; '-E' refers to test set examples written by annotators exclusive to the test set. Datasets marked $\star n$ were used to train the base model for round $n$ (A2 and A3 used ensembles, and hence have non-zero scores on their own rounds).

For each round, we report the model error rate, both on verified and unverified examples. The unverified model error rate captures the percentage of examples where the model disagreed with the writer's target label, but where we are not (yet) sure if the example is correct. The verified model error rate is the percentage of model errors from example pairs that other annotators confirmed the correct label for. Note that the error rate is a useful way to evaluate model quality: the lower the model error rate—assuming constant annotator quality and context difficulty—the better the model.

We observe that model error rates decrease as we progress through rounds. In Round 3, where we included a more diverse range of contexts from various domains, the overall error rate went slightly up compared to the preceding round, but for Wikipedia contexts the error rate decreased substantially. While for the first round roughly 1 in every 5 examples were verified model errors, this quickly dropped over consecutive rounds, and the overall model error rate is less than 1 in 10. On the one hand, this is impressive, and shows how far we have come with just three rounds.
On the other hand, it shows that we still have a long way to go if even untrained annotators can fool ensembles of state-of-the-art models with relative ease.

Table 2 also reports the average number of "tries", i.e., attempts made for each context until a model error was found (or the number of possible tries was exceeded), and the average time this took (in seconds). Again, these metrics are useful for evaluating model quality: observe that the average number of tries and the average time per verified error both go up with later rounds. This demonstrates that the rounds are getting increasingly more difficult. Further dataset statistics and inter-annotator agreement are reported in Appendix C.

# 4 Results

Table 3 reports the main results. In addition to BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b), we also include XLNet (Yang et al., 2019) as an example of a strong, but different, model architecture. We show test set performance on the ANLI test sets per round, the total ANLI test set, and the exclusive test subset (examples from test-set-exclusive workers). We also show accuracy on the SNLI test set and the MNLI development set (for the purpose of comparing between different model configurations across table rows). In what follows, we discuss our observations.

Base model performance is low. Notice that the base model for each round performs very poorly on that round's test set. This is the expected outcome: for round 1, the base model gets the entire test set wrong, by design. For rounds 2 and 3, we used an ensemble, so performance is not necessarily zero. However, as it turns out, performance still falls well below chance$^4$, indicating that workers did not find vulnerabilities specific to a single model, but generally applicable ones for that model class.

![](images/71982168bea4209918b89063c003171ffb3b1874612d02fdfc8f2e5dd6786540.jpg)
Figure 2: RoBERTa performance on dev, with A1-3 downsampled s.t. $|\mathbf{A}1^{D1}| = |\mathbf{A}2^{D1}| = \frac{1}{2} |\mathbf{A}1|$ and $|\mathbf{A}1^{D2}| = |\mathbf{A}2^{D2}| = |\mathbf{A}3^{D2}| = \frac{1}{3} |\mathbf{A}1|$.

Rounds become increasingly more difficult. As already foreshadowed by the dataset statistics, round 3 is more difficult (yields lower performance) than round 2, and round 2 is more difficult than round 1. This is true for all model architectures.

Training on more rounds improves robustness. Generally, our results indicate that training on more rounds improves model performance. This is true for all model architectures. Simply training on more "normal NLI" data would not help a model be robust to adversarial attacks, but our data actively helps mitigate these.

RoBERTa achieves state-of-the-art performance... We obtain state-of-the-art performance on both SNLI and MNLI with the RoBERTa model finetuned on our new data. The RoBERTa paper (Liu et al., 2019b) reports a score of 90.2 for both MNLI-matched and -mismatched dev, while we obtain 91.0 and 90.7. The state of the art on SNLI is currently held by MT-DNN (Liu et al., 2019a), which reports 91.6 compared to our 92.9.

...but is outperformed when it is the base model. However, the base (RoBERTa) models for rounds 2 and 3 are outperformed by both BERT and XLNet (rows 5, 6 and 10). This shows that annotators found examples that RoBERTa generally struggles with, which cannot be mitigated by more examples alone. It also implies that BERT, XLNet, and RoBERTa all have different weaknesses, possibly as a function of their training data (BERT, XLNet and RoBERTa were trained on different datasets, which might or might not have contained information relevant to the weaknesses).

![](images/9f0fa3d9fc98f0371a8fbe1d62d9347140479babc8300491e9569f267907edb1.jpg)
Figure 3: Comparison of verified, unverified and combined data, where datasets are downsampled to ensure equal training sizes.
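The equal-size comparisons behind Figure 2 (and the verified-versus-unverified comparison in Figure 3) require sampling each data source down to an identical share of a fixed budget. A minimal sketch of such a downsampler follows; `downsample` and its interface are our own illustrative assumption, not code from the paper:

```python
import random

def downsample(datasets, total, seed=0):
    """Sample each dataset down to an equal share of `total` examples,
    so that combined training sizes match across compared settings
    (e.g. |A1^D1| = |A2^D1| = |A1|/2 in the Figure 2 caption).
    Illustrative sketch only."""
    rng = random.Random(seed)  # fixed seed for a reproducible comparison
    share = total // len(datasets)
    return [rng.sample(dataset, share) for dataset in datasets]

# e.g. splitting a budget of |A1| examples evenly over A1 and A2:
# a1_d1, a2_d1 = downsample([a1, a2], total=len(a1))
```

Holding the total training size fixed in this way is what licenses the claim that adversarial data is more data-efficient, rather than merely more plentiful.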
Continuously augmenting training data does not downgrade performance. Even though ANLI training data is different from SNLI and MNLI, adding it to the training set does not harm performance on those tasks. Our results (see also rows 2-3 of Table 6) suggest the method could successfully be applied for multiple additional rounds.

Exclusive test subset difference is small. We included an exclusive test subset (ANLI-E) with examples from annotators never seen in training, and find negligible differences, indicating that our models do not over-rely on annotators' writing styles.

# 4.1 The effectiveness of adversarial training

We examine the effectiveness of the adversarial training data in two ways. First, we sample from the respective datasets to ensure exactly equal amounts of training data. Table 5 shows that the adversarial data improves performance, including on SNLI and MNLI, when we replace part of those datasets with the adversarial data. This suggests that the adversarial data is more data-efficient than "normally collected" data. Figure 2 shows that adversarial data collected in later rounds is of higher quality and more data-efficient.

Second, we compared verified correct examples of model vulnerabilities (examples that the model got wrong and that were verified to be correct) to unverified ones. Figure 3 shows that the verified correct examples are much more valuable than the unverified examples, especially in the later rounds (where the latter drop to random).

# 4.2 Stress Test Results

We also test models on two recent hard NLI test sets: SNLI-Hard (Gururangan et al., 2018) and the NLI stress tests (Naik et al., 2018) (see Appendix A for details). The results are shown in Table 4. We observe that all our models outperform the models presented in the original papers for these common stress tests. The RoBERTa models perform best on SNLI-Hard and achieve accuracy levels in the high 80s on the 'antonym' (AT), 'numerical reasoning' (NR), 'length' (LN), and 'spelling error' (SE) sub-datasets, and show marked improvement on both 'negation' (NG) and 'word overlap' (WO). Training on ANLI appears to be particularly useful for the AT, NR, NG and WO stress tests.

| Model | SNLI-Hard | AT (m/mm) | NR | LN (m/mm) | NG (m/mm) | WO (m/mm) | SE (m/mm) |
|---|---|---|---|---|---|---|---|
| Previous models | 72.7 | 14.4 / 10.2 | 28.8 | 58.7 / 59.4 | 48.8 / 46.6 | 50.0 / 50.2 | 58.3 / 59.4 |
| BERT (All) | 82.3 | 75.0 / 72.9 | 65.8 | 84.2 / 84.6 | 64.9 / 64.4 | 61.6 / 60.6 | 78.3 / 78.3 |
| XLNet (All) | 83.5 | 88.2 / 87.1 | 85.4 | 87.5 / 87.5 | 59.9 / 60.0 | 68.7 / 66.1 | 84.3 / 84.4 |
| RoBERTa (S+M+F) | 84.5 | 81.6 / 77.2 | 62.1 | 88.0 / 88.5 | 61.9 / 61.9 | 67.9 / 66.2 | 86.2 / 86.5 |
| RoBERTa (All) | 84.7 | 85.9 / 82.1 | 80.6 | 88.4 / 88.5 | 62.2 / 61.9 | 67.4 / 65.6 | 86.3 / 86.7 |

Table 4: Model performance on NLI stress tests (tuned on their respective dev. sets). All=S+M+F+ANLI. 'AT'=Antonym; 'NR'=Numerical Reasoning; 'LN'=Length; 'NG'=Negation; 'WO'=Word Overlap; 'SE'=Spell Error. 'Previous models' refers to the Naik et al. (2018) implementation of Conneau et al. (2017, InferSent) for the Stress Tests, and to the Gururangan et al. (2018) implementation of Gong et al. (2018, DIIN) for SNLI-Hard.

| Train Data | A1 | A2 | A3 | S | M-m/mm |
|---|---|---|---|---|---|
| SM$^{D1}$+SM$^{D2}$ | 45.1 | 26.1 | 27.1 | 92.5 | 89.8 / 89.7 |
| SM$^{D1}$+A | 72.6 | 42.9 | 42.0 | 92.3 | 90.3 / 89.6 |
| SM | 48.0 | 24.8 | 31.1 | 93.2 | 90.8 / 90.6 |
| SM$^{D3}$+A | 73.3 | 42.4 | 40.5 | 93.3 | 90.8 / 90.7 |

Table 5: RoBERTa performance on dev set with different training data. S=SNLI, M=MNLI, A=A1+A2+A3; 'SM' refers to the combined S and M training set. D1, D2, D3 denote down-sampled versions of SM s.t. $|\mathrm{SM}^{D2}| = |\mathrm{A}|$ and $|\mathrm{SM}^{D3}| + |\mathrm{A}| = |\mathrm{SM}|$. Therefore, training sizes are identical in every pair of rows.

# 4.3 Hypothesis-only results

For SNLI and MNLI, concerns have been raised about the propensity of models to pick up on spurious artifacts that are present just in the hypotheses (Gururangan et al., 2018; Poliak et al., 2018). Here, we compare full models to models trained only on the hypothesis (marked $H$). Table 6 reports results on ANLI, as well as on SNLI and MNLI. The table shows that hypothesis-only models perform poorly on ANLI$^5$, and obtain good performance on SNLI and MNLI. Hypothesis-only performance
decreases over rounds for ANLI.

| Train Data | A1 | A2 | A3 | S | M-m/mm |
|---|---|---|---|---|---|
| ALL | 73.8 | 48.9 | 44.4 | 92.6 | 91.0 / 90.6 |
| S+M | 47.6 | 25.4 | 22.1 | 92.6 | 90.8 / 90.6 |
| ANLI-Only | 71.3 | 43.3 | 43.0 | 83.5 | 86.3 / 86.5 |
| ALL$^H$ | 49.7 | 46.3 | 42.8 | 71.4 | 60.2 / 59.8 |
| S+M$^H$ | 33.1 | 29.4 | 32.2 | 71.8 | 62.0 / 62.0 |
| ANLI-Only$^H$ | 51.0 | 42.6 | 41.5 | 47.0 | 51.9 / 54.5 |

Table 6: Performance of RoBERTa with different data combinations. ALL=S,M,F,ANLI. Hypothesis-only models are marked $H$; they are trained and tested with only the hypothesis texts.

We observe that in rounds 2 and 3, RoBERTa is not much better than hypothesis-only. This could mean two things: either the test data is very difficult, or the training data is not good. To rule out the latter, we trained only on ANLI ($\sim$163k training examples): on MNLI, RoBERTa then matches BERT trained on the much larger, fully in-domain SNLI+MNLI combined dataset (943k training examples), with both getting $\sim$86 (the third row in Table 6). Hence, this shows that the test sets are so difficult that state-of-the-art models cannot outperform a hypothesis-only prior.

# 5 Linguistic analysis

We explore the types of inferences that fooled models by manually annotating 500 examples from each round's development set. A dynamically evolving dataset offers the unique opportunity to track how model error rates change over time. Since each round's development set contains only verified examples, we can investigate two interesting questions: which types of inference do writers employ to fool the models, and are base models differentially sensitive to different types of reasoning?

The results are summarized in Table 7. We devised an inference ontology containing six types of inference: Numerical & Quantitative (i.e., reasoning about cardinal and ordinal numbers, inferring dates and ages from numbers, etc.), Reference & Names (coreferences between pronouns and forms of proper names, knowing facts about name gender, etc.), Standard Inferences (conjunctions, negations, cause and effect, comparatives and superlatives, etc.), Lexical Inference (inferences made possible by lexical information about synonyms, antonyms, etc.), Tricky Inferences (wordplay, linguistic strategies such as syntactic transformations/reorderings, or inferring writer intentions from contexts), and Reasoning & Facts (reasoning from outside knowledge or additional facts, e.g., "You can't reach the sea directly from Rwanda"). The quality of annotations was also tracked; if a pair was ambiguous or a label debatable (from the expert annotator's perspective), it was flagged. Quality issues were rare, at $3 - 4\%$ per round. Any one example can have multiple types, and every example had at least one tag.

| Round | Numerical & Quant. | Reference & Names | Standard | Lexical | Tricky | Reasoning & Facts | Quality |
|---|---|---|---|---|---|---|---|
| A1 | 38% | 13% | 18% | 13% | 22% | 53% | 4% |
| A2 | 32% | 20% | 21% | 21% | 20% | 59% | 3% |
| A3 | 10% | 18% | 27% | 27% | 27% | 63% | 3% |
| Average | 27% | 17% | 22% | 22% | 23% | 58% | 3% |

Table 7: Analysis of 500 development set examples per round and on average.

We observe that both round 1 and round 2 writers rely heavily on numerical and quantitative reasoning, in over $30\%$ of the development set—the percentage in A2 $(32\%)$ dropped roughly $6\%$ from A1 $(38\%)$—while round 3 writers use numerical or quantitative reasoning far less often. The majority of numerical reasoning types were references to cardinal numbers that referred to dates and ages. Inferences predicated on references and names were present in over $10\%$ of the round 1 & 3 development sets, and reached a high of $20\%$ in round 2, with coreference featuring prominently. Standard inference types increased in prevalence as the rounds increased, ranging from $18\%$ to $27\%$, as did 'Lexical' inferences (increasing from $13\%$ to $27\%$). The percentage of sentences relying on reasoning and outside facts remains roughly the same, in the mid-50s, perhaps slightly increasing over the rounds.
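Because examples carry one or more tags, the per-round percentages in Table 7 can sum to well over 100%. Computing them from tagged examples is straightforward; the record format below is our own assumption for illustration:

```python
from collections import Counter

def tag_percentages(examples):
    """Percentage of examples carrying each inference tag.

    Each example is a list of tags (our assumed format); because an
    example can carry multiple tags, as in Table 7, the percentages
    are not required to sum to 100%.
    """
    counts = Counter(tag for tags in examples for tag in set(tags))
    n = len(examples)
    return {tag: 100.0 * c / n for tag, c in counts.items()}
```

For instance, four examples tagged `[["Numerical", "Reasoning"], ["Reasoning"], ["Lexical", "Reasoning", "Tricky"], ["Standard"]]` yield 75% for Reasoning and 25% for each other tag, summing to 175%.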
For round 3, we observe that the model used to collect it appears to be more susceptible to Standard, Lexical, and Tricky inference types. This finding is compatible with the idea that models trained on adversarial data perform better, since annotators seem to have been encouraged to devise more creative examples containing harder types of inference in order to stump them. Further analysis is provided in Appendix B.

# 6 Related work

Bias in datasets. Machine learning methods are well-known to pick up on spurious statistical patterns. For instance, in the first visual question answering dataset (Antol et al., 2015), biases like "2" being the correct answer to $39\%$ of the questions starting with "how many" allowed learning algorithms to perform well while ignoring the visual modality altogether (Jabri et al., 2016; Goyal et al., 2017). In NLI, Gururangan et al. (2018), Poliak et al. (2018) and Tsuchiya (2018) showed that hypothesis-only baselines often perform far better than chance. NLI systems can often be broken merely by performing simple lexical substitutions (Glockner et al., 2018), and struggle with quantifiers (Geiger et al., 2018) and certain superficial syntactic properties (McCoy et al., 2019).

In question answering, Kaushik and Lipton (2018) showed that question- and passage-only models can perform surprisingly well, while Jia and Liang (2017) added adversarially constructed sentences to passages to cause a drastic drop in performance. Many tasks do not actually require sophisticated linguistic reasoning, as shown by the surprisingly good performance of random encoders (Wieting and Kiela, 2019). Similar observations were made in machine translation (Belinkov and Bisk, 2017) and dialogue (Sankar et al., 2019). Machine learning also has a tendency to overfit on static targets, even if that does not happen deliberately (Recht et al., 2018). In short, the field is rife with dataset bias and papers trying to address this important problem.
This work presents a potential solution: if such biases exist, they will allow humans to fool the models, resulting in valuable training examples until the bias is mitigated.

Dynamic datasets. Bras et al. (2020) proposed AFLite, an approach for avoiding spurious biases through adversarial filtering, which is a model-in-the-loop approach that iteratively probes and improves models. Kaushik et al. (2019) offer a causal account of spurious patterns, and counterfactually augment NLI datasets by editing examples to break the model. That approach is human-in-the-loop, using humans to find problems with one single model. In this work, we employ both human and model-based strategies iteratively, in a form of human-and-model-in-the-loop training, to create completely new examples, in a potentially never-ending loop (Mitchell et al., 2018).

Human-and-model-in-the-loop training is not a new idea. Mechanical Turker Descent proposes a gamified environment for the collaborative training of grounded language learning agents over multiple rounds (Yang et al., 2017). The "Build it Break it Fix it" strategy in the security domain (Ruef et al., 2016) has been adapted to NLP (Ettinger et al., 2017) as well as dialogue safety (Dinan et al., 2019). The QApedia framework (Kratzwald and Feuerriegel, 2019) continuously refines and updates its content repository using humans in the loop, while human feedback loops have been used to improve image captioning systems (Ling and Fidler, 2017). Wallace et al. (2019) leverage trivia experts to create a model-driven adversarial question writing procedure and generate a small set of challenge questions that QA models fail on. Relatedly, Lan et al. (2017) propose a method for continuously growing a dataset of paraphrases.
There has been a flurry of work constructing datasets with an adversarial component, such as Swag (Zellers et al., 2018) and HellaSwag (Zellers et al., 2019), CODAH (Chen et al., 2019), Adversarial SQuAD (Jia and Liang, 2017), Lambada (Paperno et al., 2016) and others. Our dataset is not to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself $\alpha$NLI, or ART.

# 7 Discussion & Conclusion

In this work, we used a human-and-model-in-the-loop training method to collect a new benchmark for natural language understanding. The benchmark is designed to be challenging to current state-of-the-art models. Annotators were employed to act as adversaries, and encouraged to find vulnerabilities that fool the model into misclassifying, but that another person would correctly classify. We found that non-expert annotators, in this gamified setting and with appropriate incentives, are remarkably creative at finding and exploiting weaknesses. We collected three rounds, and as the rounds progressed, the models became more robust and the test sets for each round became more difficult. Training on this new data yielded the state of the art on existing NLI benchmarks.

The ANLI benchmark presents a new challenge to the community. It was carefully constructed to mitigate issues with previous datasets, and was designed from first principles to last longer. The dataset also presents many opportunities for further study. For instance, we collected annotator-provided explanations for each example that the model got wrong. We provided inference labels for the development set, opening up possibilities for more fine-grained studies of NLI model performance. While we verified the development and test examples, we did not verify the correctness of each training example, which means there is probably some room for improvement there.
A concern might be that the static approach is cheaper, since dynamic adversarial data collection requires a verification step to ensure examples are correct. However, verifying examples is probably also a good idea in the static case, and adversarially collected examples can still prove useful even if they did not fool the model and were not verified. Moreover, annotators were better incentivized to do a good job in the adversarial setting. Our finding that adversarial data is more data-efficient corroborates this theory. Future work could explore a detailed cost and time trade-off between adversarial and static collection.

It is important to note that our approach is model-agnostic. HAMLET was applied against an ensemble of models in rounds 2 and 3, and it would be straightforward to put more diverse ensembles in the loop to examine what happens when annotators are confronted with a wider variety of architectures.

The proposed procedure can be extended to other classification tasks, as well as to ranking with hard negatives, either generated (by adversarial models) or retrieved and verified by humans. It is less clear how the method can be applied in generative cases.

Adversarial NLI is meant to be a challenge for measuring NLU progress, even for as yet undiscovered models and architectures. Luckily, if the benchmark does turn out to saturate quickly, we will always be able to collect a new round.

# Acknowledgments

YN interned at Facebook. YN and MB were sponsored by DARPA MCS Grant #N66001-19-2-4031, ONR Grant #N00014-18-1-2871, and DARPA YFA17-D17AP00022. Special thanks to Sam Bowman for comments on an earlier draft.

# References

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433.

Yonatan Belinkov and Yonatan Bisk. 2017.
Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173.

Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The Fifth PASCAL Recognizing Textual Entailment Challenge. TAC.

Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Scott Wen-tau Yih, and Yejin Choi. 2019. Abductive commonsense reasoning. arXiv preprint arXiv:1908.05739.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326.

Ronan Le Bras, Swabha Swayamdipta, Chandra Bhagavatula, Rowan Zellers, Matthew E. Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial filters of dataset biases. arXiv preprint arXiv:2002.04108.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Association for Computational Linguistics (ACL).

Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. 2019. CODAH: An adversarially authored question-answer dataset for common sense. CoRR, abs/1904.04365.

Dan Ciresan, Ueli Meier, and Jürgen Schmidhuber. 2012. Multi-column deep neural networks for image classification. arXiv preprint arXiv:1202.2745.

Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. arXiv preprint arXiv:1803.05449.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670-680. Association for Computational Linguistics.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. In Proceedings of EMNLP.

Allyson Ettinger, Sudha Rao, Hal Daumé III, and Emily M. Bender. 2017. Towards linguistically generalizable NLP systems: A workshop and shared task. arXiv preprint arXiv:1711.01505.

Atticus Geiger, Ignacio Cases, Lauri Karttunen, and Christopher Potts. 2018. Stress-testing neural models of natural language inference with multiply-quantified sentences. arXiv preprint arXiv:1810.13033.

Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets. arXiv preprint arXiv:1908.07898.

Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that require simple lexical inferences. In Proceedings of ACL.

Yichen Gong, Heng Luo, and Jian Zhang. 2018. Natural language inference over interaction space. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.

Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of NAACL.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301.

Allan Jabri, Armand Joulin, and Laurens Van Der Maaten. 2016. Revisiting visual question answering baselines. In European Conference on Computer Vision, pages 727-739. Springer.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of EMNLP.

Divyansh Kaushik, Eduard Hovy, and Zachary C. Lipton. 2019. Learning the difference that makes a difference with counterfactually-augmented data. arXiv preprint arXiv:1909.12434.

Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? A critical investigation of popular benchmarks. arXiv preprint arXiv:1808.04926.

Bernhard Kratzwald and Stefan Feuerriegel. 2019. Learning from on-line user feedback in neural question answering on the web. In The World Wide Web Conference, pages 906-916. ACM.

Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A continuously growing dataset of sentential paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224-1234, Copenhagen, Denmark. Association for Computational Linguistics.

Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner, et al. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324.

Huan Ling and Sanja Fidler. 2017. Teaching machines to describe images via natural language feedback. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pages 5075-5085.

Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding.
arXiv preprint arXiv:1901.11504. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. +Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics. +Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Bo Yang, Justin Betteridge, Andrew Carlson, B Dalvi, Matt Gardner, Bryan Kisel, et al. 2018. Never-ending learning. Communications of the ACM, 61(5):103-115. +Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, + +Pushmeet Kohli, and James Allen. 2016. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696. +Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Yixin Nie and Mohit Bansal. 2017. Shortcut-stacked sentence encoders for multi-domain inference. arXiv preprint arXiv:1708.02312. +Yixin Nie, Haonan Chen, and Mohit Bansal. 2019. Combining fact extraction and verification with neural semantic matching networks. In Association for the Advancement of Artificial Intelligence (AAAI). +Denis Paperno, Germán Kruszewski, Angeliki Lazari-dou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. 2016. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031. 
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: $100,000+$ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2018. Do CIFAR-10 classifiers generalize to CIFAR-10? arXiv preprint arXiv:1806.00451.
Andrew Ruef, Michael Hicks, James Parker, Dave Levin, Michelle L Mazurek, and Piotr Mardziel. 2016. Build it, break it, fix it: Contesting secure development. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 690-703. ACM.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252.
Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? An empirical study. arXiv preprint arXiv:1906.01603.

William Shakespeare. 1603. The Tragedy of Hamlet, Prince of Denmark.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: A large-scale dataset for fact extraction and verification. arXiv preprint arXiv:1803.05355.
Masatoshi Tsuchiya. 2018. Performance impact caused by hidden bias of training data for recognizing textual entailment. In Proceedings of LREC.
Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Yamada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversarial question answering examples. In Transactions of the Association for Computational Linguistics.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. 2013. Regularization of neural networks using DropConnect. In International Conference on Machine Learning, pages 1058-1066.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
John Wieting and Douwe Kiela. 2019. No training required: Exploring random encoders for sentence classification. arXiv preprint arXiv:1901.10444.
Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.
Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 2017. Mastering the dungeon: Grounded language learning by mechanical turker descent. arXiv preprint arXiv:1711.07950.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of EMNLP.

Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of ACL.
# A Performance on challenge datasets

Recently, several hard test sets have been made available for revealing the biases NLI models learn from their training datasets (Nie and Bansal, 2017; McCoy et al., 2019; Gururangan et al., 2018; Naik et al., 2018). We examine model performance on two of these: the SNLI-Hard (Gururangan et al., 2018) test set, which consists of examples that hypothesis-only models label incorrectly, and the NLI stress tests (Naik et al., 2018), in which sentences containing antonym pairs, negations, high word overlap, i.a., are heuristically constructed. We test our models on these stress tests after tuning on each test's respective development set to account for potential domain mismatches. For comparison, we also report results from the original papers: for SNLI-Hard from Gururangan et al.'s implementation of the hierarchical tensor-based Densely Interactive Inference Network (Gong et al., 2018, DIIN) on MNLI, and for the NLI stress tests, Naik et al.'s implementation of InferSent (Conneau et al., 2017) trained on SNLI.

# B Further linguistic analysis

We compare the incidence of linguistic phenomena in ANLI with extant popular NLI datasets to get an idea of what our dataset contains. We observe that FEVER and SNLI datasets generally contain many fewer hard linguistic phenomena than MultiNLI and ANLI (see Table 8).

ANLI and MultiNLI have roughly the same percentage of hypotheses that exceed twenty words in length, and/or contain negation (e.g., 'never', 'no'), tokens of 'or', and modals (e.g., 'must', 'can'). MultiNLI hypotheses generally contain more pronouns, quantifiers (e.g., 'many', 'every'), WH-words (e.g., 'who', 'why'), and tokens of 'and' than do their ANLI counterparts—although A3 reaches nearly the same percentage as MultiNLI for negation and modals. However, ANLI contains more cardinal numerals and time terms (such as 'before', 'month', and 'tomorrow') than MultiNLI.
These differences might be due to the fact that the two datasets are constructed from different genres of text. Since A1 and A2 contexts are constructed from a single Wikipedia data source (i.e., HotPotQA data), and most Wikipedia articles include dates in the first line, annotators appear to prefer constructing hypotheses that highlight numerals and time terms, leading to their high incidence. + +Focusing on ANLI more specifically, A1 has + +roughly the same incidence of most tags as A2 (i.e., within $2\%$ of each other), which, again, accords with the fact that we used the same Wikipedia data source for A1 and A2 contexts. A3, however, has the highest incidence of every tag (except for numbers and time) in the ANLI dataset. This could be due to our sampling of A3 contexts from a wider range of genres, which likely affected how annotators chose to construct A3 hypotheses; this idea is supported by the fact that A3 contexts differ in tag percentage from A1 and A2 contexts as well. The higher incidence of all tags in A3 is also interesting, because it could be taken as providing yet another piece of evidence that our HAMLET data collection procedure generates increasingly more difficult data as rounds progress. + +# C Dataset properties + +Table 9 shows the label distribution. Figure 4 shows a histogram of the number of tries per good verified example across for the three different rounds. Figure 5 shows the time taken per good verified example. Figure 6 shows a histogram of the number of tokens for contexts and hypotheses across three rounds. Figure 7 shows the proportion of different types of collected examples across three rounds. + +Inter-annotator agreement Table 10 reports the inter-annotator agreement for verifiers on the dev and test sets. For reference, the Fleiss' kappa of FEVER (Thorne et al., 2018) is 0.68 and of SNLI (Bowman et al., 2015) is 0.70. Table 11 shows the percentage of agreement of verifiers with the intended author label. 
+ +# D Examples + +We include more examples of collected data in Table 12. + +# E User interface + +Examples of the user interface are shown in Figures 8, 9 and 10. + +![](images/6ef1d3682f42bf14fcb60e3e1e7ceba51247135b085b304ad3c396deb4543b97.jpg) +Figure 4: Histogram of the number of tries for each good verified example across three rounds. + +![](images/321aa8a7f4da0fe4955d1b5bbb95185bc4a7b3b4f32b9686a5531af71c670f85.jpg) + +![](images/02aff8645d3aebec318d1e3e126d916379abd9860f1e94a923b2739534322a59.jpg) + +![](images/b65c39fb9f22256baaf0e62a17f0a3dd387252c5f16bbf33afb2ecf170b90b32.jpg) +Figure 5: Histogram of the time spent per good verified example across three rounds. + +![](images/672d8bfd4997be712980823bdfc64d51308b38c564d5ed058ab50f669c1491af.jpg) + +![](images/41c5d2b0cf175eb51ff21bb70d0751ed469ff2790c08bd7416cf92e5d8461215.jpg) + +![](images/e0d72ab9db38263e7ef6f80045e4fd2ff7404cc0ff30f356b7ad94eedd72c7c0.jpg) +Figure 6: Histogram of the number of tokens in contexts and hypotheses across three rounds. + +![](images/8899f2eb7af4aa4eeaaccc8dbc747ac0067a0c6e48cf929b68c16f909d603c55.jpg) + +![](images/62d4c87b5aafb908a89a10aebe15dafb0f6d2cfc9f36071bbc8b4bfb5479eb8e.jpg) + +![](images/a1aad90748ec89537203b070d3f9d06644c43474919bb5d58a7aa326418e43b3.jpg) +Figure 7: Proportion across three rounds. A=Examples that model got right, B1=Examples that model got wrong and the first two verifiers agreed with the writer, B2=Examples that model got wrong and only one of the first two verifiers agreed with the writer and a third verifier also agreed with the writer, C=Examples where two verifiers agreed with each other and overruled the writer, D=Examples for which there is no agreement among verifiers. A and C are added only to training set. B1 and B2 are added to training, dev, or test set. D was discarded. 
+ +![](images/5858fe7c447205497dec4885907ea59bd3b711499500401798d4d837b0bb35c0.jpg) + +![](images/ae959a79e9ea9f1ee24cd6223ea992477f012179e145f81676c8ae4f60d1bd74.jpg) + +![](images/b56c3ef7a1848999db08af70d5a3259ea8ecf00ca10b4e5071acb85682cde7f6.jpg) +Figure 8: UI for Creation. (Provide the context to annotator) + +![](images/7b3dd1cf344e84597a56a517ee4d777343614c4ae69594cdadca64d00474def3.jpg) +Figure 9: Collection UI for Creation. (Give the model feedback to annotator) + +![](images/5da6be8ef31ebd62de28bd0f69841d0b7dd1c9fd3b6f73ba2efce17f34f1233b.jpg) +Figure 10: UI for Verification Task. + +
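The bucketing rules described in the Figure 7 caption can be summarized procedurally. The sketch below is our paraphrase of those rules, not the authors' released code; the function name and signature are hypothetical, and labels are compared as opaque strings.

```python
from collections import Counter

def route_example(model_correct: bool, writer_label: str,
                  verifier_labels: list) -> str:
    """Assign a collected example to a bucket, per the Figure 7 caption.

    A  : model prediction was correct (training set only)
    B1 : model wrong; first two verifiers agree with the writer
    B2 : model wrong; one of the first two, plus a third verifier, agree
    C  : two verifiers agree with each other and overrule the writer
    D  : no agreement (discarded)
    """
    if model_correct:
        return "A"
    if not verifier_labels:
        return "D"
    first_two = verifier_labels[:2]
    if all(v == writer_label for v in first_two):
        return "B1"
    if (any(v == writer_label for v in first_two)
            and len(verifier_labels) >= 3
            and verifier_labels[2] == writer_label):
        return "B2"
    label, count = Counter(verifier_labels).most_common(1)[0]
    if count >= 2 and label != writer_label:
        return "C"   # verifier majority overrules the writer's label
    return "D"
```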
TagSNLIOther DatasetsANLI
% c% hMNLImMNLImmFA1A2A3
% c% h% c% h% claim% c% h% c% h% c% h
Negation< 11141612163263102214
‘and’307411542186851288117511
‘or’1< 17282< 1606< 1151
Numbers1041681599723073274215
Time1241571696572256194911
WH-words311671892285275355
Pronouns1173720392423092876013
Quantifiers53211622173141017123812
Modals< 1< 117131814< 123323514
>20 words14< 1372393< 110051004984
# exs10k10k10k99991k1k1200
+ +Table 8: Percentage of development set sentences with tags in several datasets: AdvNLI, SNLI, MultiNLI and FEVER. $\% c$ refers to percentage in contexts, and $\% h$ refers to percentage in hypotheses. Bolded values label linguistic phenomena that have higher incidence in adversarially created hypotheses than in hypotheses from other NLI datasets, and italicized values have roughly the same (within $5\%$ ) incidence. + +
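Tag percentages of the kind reported in Table 8 can be computed by matching each sentence against a lexicon per tag. The word lists below are illustrative, taken from the tag names and examples in the text; the authors' exact tag definitions may differ.

```python
import re

# Illustrative patterns for a few of the Table 8 tags (not the authors'
# exact definitions).
TAG_PATTERNS = {
    "negation": re.compile(r"\b(no|not|never|none|nothing)\b", re.I),
    "modal": re.compile(r"\b(must|can|could|may|might|should)\b", re.I),
    "wh-word": re.compile(r"\b(who|what|when|where|why|which|how)\b", re.I),
    "number": re.compile(r"\b\d+\b"),
}

def tag_incidence(sentences):
    """Percentage of sentences containing each tag, plus a length tag."""
    n = len(sentences)
    pct = {tag: 100.0 * sum(bool(p.search(s)) for s in sentences) / n
           for tag, p in TAG_PATTERNS.items()}
    pct[">20 words"] = 100.0 * sum(len(s.split()) > 20 for s in sentences) / n
    return pct
```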
| Round | Train (Entailment / Neutral / Contradiction) | Dev (E / N / C) | Test (E / N / C) |
|---|---|---|---|
| A1 | 5,371 / 7,052 / 4,523 | 334 / 333 / 333 | 334 / 333 / 333 |
| A2 | 14,448 / 20,959 / 10,053 | 334 / 333 / 333 | 334 / 333 / 333 |
| A3 | 32,292 / 40,778 / 27,389 | 402 / 402 / 396 | 402 / 402 / 396 |
| ANLI | 52,111 / 68,789 / 41,965 | 1,070 / 1,068 / 1,062 | 1,070 / 1,068 / 1,062 |
+ +Table 9: Label distribution in splits across rounds. + +
| Round | Dev + Test | Dev | Test |
|---|---|---|---|
| A1 | 0.7210 | 0.7020 | 0.7400 |
| A2 | 0.6910 | 0.7100 | 0.6720 |
| A3 | 0.6786 | 0.6739 | 0.6832 |
+ +Table 10: Inter-annotator agreement (Fleiss' kappa) for writers and the first two verifiers. + +
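For reference, Fleiss' kappa (the statistic reported in Table 10) can be computed from an item-by-category count matrix with the standard formula. This is the textbook computation, not the authors' implementation.

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for an (items x categories) count matrix, where
    ratings[i, j] = number of raters who put item i in category j.
    Assumes every item is rated by the same number of raters."""
    n = ratings.sum(axis=1)[0]                   # raters per item
    p_j = ratings.sum(axis=0) / ratings.sum()    # category proportions
    p_i = ((ratings ** 2).sum(axis=1) - n) / (n * (n - 1))
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()    # observed vs. chance agreement
    return float((p_bar - p_e) / (1 - p_e))
```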
| SNLI | MNLI | A1 | A2 | A3 |
|---|---|---|---|---|
| 85.8 | 85.2 | 86.1 | 84.6 | 83.9 |
+ +Table 11: Percentage of agreement of verifiers ("validators" for SNLI and MNLI) with the author label. + +
ContextHypothesisReasonRoundorig.Labels pred.valid.Annotations
Eduard Schulte (4 January 1891 in Düsseldorf - 6 January 1966 in Zürich) was a prominent German industrialist. He was one of the first to warn the Allies and tell the world of the Holocaust and systematic exterminations of Jews in Nazi Germany occupied Europe.Eduard Schulte is the only person to warn the Allies of the atrocities of the Nazis.The context states that he is not the only person to warn the Allies about the atrocities committed by the Nazis.A1 (Wiki)CNC CTricky Presupposition, Numerical Ordinal
Kota Ramakrishna Karanth (born May 1, 1894) was an Indian lawyer and politician who served as the Minister of Land Revenue for the Madras Presidency from March 1, 1946 to March 23, 1947. He was the elder brother of noted Kannada novelist K. Shivarama Karanth.Kota Ramakrishna Karanth has a brother who was a novelist and a politicianAlthough Kota Ramakrishna Karanth's brother is a novelist, we do not know if the brother is also a politicianA1 (Wiki)NEN E NStandard Conjunction, Reasoning Plausibility Likely, Tricky Syntactic
The Macquarie University Hospital (abbreviated MUH) is a private teaching hospital. Macquarie University Hospital, together with the Faculty of Medicine and Health Science, Macquarie University, formerly known as ASAM, Australian School of Advanced Medicine, will integrate the three essential components of an academic health science centre: clinical care, education and research.The Macquarie University Hospital have still not integrated the three essential components of an academic health science centre: clinical care, education and researchthe statement says that the universities are getting together but have not integrated the systems yetA1 (Wiki)ECE ETricky Presupposition, Standard Negation
Bernardo Provenzano (31 January 1933 - 13 July 2016) was a member of the Sicilian Mafia ("Cosa Nostra") and was suspected of having been the head of the Corleonesi, a Mafia faction that originated in the town of Corleone, and de facto "capo di tutti capi" (boss of all bosses) of the entire Sicilian Mafia until his arrest in 2006.It was never confirmed that Bernardo Provenzano was the leader of the Corleonesi.Provenzano was only suspected as the leader of the mafia. It wasn't confirmed.A2 (Wiki)ENE ETricky Presupposition, Standard Negation
HMAS "Lonsdale" is a former Royal Australian Navy (RAN) training base that was located at Beach Street, Port Melbourne, Victoria, Australia. Originally named "Cerberus III", the Naval Reserve Base was commissioned as HMAS "Lonsdale" on 1 August 1940 during the Second World War.Prior to being re-named, Lonsdale was located in Perth, Australia.A naval base cannot be moved - based on the information in the scenario, the base has always been located in Victoria.A2CNC CTricky Presupposition, Reasoning Facts
Toolbox Murders is a 2004 horror film directed by Tobe Hooper, and written by Jace Anderson and Adam Gierasch. It is a remake of the 1978 film of the same name and was produced by the same people behind the original. The film centralizes on the occupants of an apartment who are stalked and murdered by a masked killer.Toolbox Murders is both 41 years old and 15 years old.Both films are named Toolbox Murders one was made in 1978, one in 2004. Since it is 2019 that would make the first 41 years old and the remake 15 years old.A2 (Wiki)ECE EReasoning Facts, Numerical Age, Tricky Wordplay
A biker is critically ill in hospital after colliding with a lamppost in Pete The incident happened at 1.50pm yesterday in Thorpe Road. The 23-year-old was riding a Lexmoto Arrow 125 when, for an unknown reason, he left the road and collided with a lamppost. He was taken to James Cook University Hospital, in Middlesbrough, where he remains in a critical condition. Any witnesses to the collision are asked to call Durham Police on 101, quoting incident number 288 of July 9.The Lamppost was stationary.Lampposts don't typically move.A3 (News)ENE EReasoning Facts, Standard
"We had to make a decision between making payroll or paying the debt," Melton said Monday. "If we are unable to make payroll Oct. 19, we will definitely be able to make it next week Oct. 26 based on the nature of our sales taxes coming in at the end of the month. However we will have payroll the following week again on Nov. 2 and we are not sure we will be able to make that payroll because of the lack of revenue that is coming in."The company will not be able to make payroll on October \(19^{th}\) and will instead dispense it on October \(26^{th}\)It's not definitely correct nor definitely incorrect because the company said "if" they can't make it on the \(19^{th}\) they will do it on the \(26^{th}\), they didn't definitely say they won't make it on the \(19^{th}\)A3 (News)NEN C NReasoning Plausibility Likely, Tricky Presupposition
The Survey: Greg was answering questions. He had been asked to take a survey about his living arrangements. He gave all the information he felt comfortable sharing. Greg hoped the survey would improve things around his apartment. The complex had really gone downhill lately.He gave some of the information he felt comfortable sharing.Greg gave all of the information he felt comfortable, not some. It was difficult for the system because it couldn't tell a significant difference between to word "some" and "all."A3 (Fiction)CEC CTricky (Scalar Implicature)
Table 12: Extra examples from development sets. 'An' refers to round number, 'orig.' is the original annotator's gold label, 'pred.' is the model prediction, 'valid.' is the validator labels, 'reason' was provided by the original annotator, 'Annotations' is the tags determined by linguist expert annotator.
# A Formal Hierarchy of RNN Architectures

William Merrill* Roy Schwartz*§

Gail Weiss† Noah A. Smith*§

Yoav Goldberg*‡
Eran Yahav†

* Allen Institute for AI † Technion ‡ Bar Ilan University § University of Washington {willm, yoavg, roys, noah}@allenai.org {sgailw, yahave}@cs.technion.ac.il

# Abstract

We develop a formal hierarchy of the expressive capacity of RNN architectures. The hierarchy is based on two formal properties: space complexity, which measures the RNN's memory, and rational recurrence, defined as whether the recurrent update can be described by a weighted finite-state machine.
We place several RNN variants within this hierarchy. For example, we prove the LSTM is not rational, which formally separates it from the related QRNN (Bradbury et al., 2016). We also show how these models' expressive capacity is expanded by stacking multiple layers or composing them with different pooling functions. Our results build on the theory of "saturated" RNNs (Merrill, 2019). While formally extending these findings to unsaturated RNNs is left to future work, we hypothesize that the practical learnable capacity of unsaturated RNNs obeys a similar hierarchy. Experimental findings from training unsaturated networks on formal languages support this conjecture.

# 1 Introduction

While neural networks are central to the performance of today's strongest NLP systems, theoretical understanding of the formal properties of different kinds of networks is still limited. It is established, for example, that the Elman (1990) RNN is Turing-complete, given infinite precision and computation time (Siegelmann and Sontag, 1992, 1994; Chen et al., 2018). But tightening these unrealistic assumptions has serious implications for expressive power (Weiss et al., 2018), leaving a significant gap between classical theory and practice, which theorems in this paper attempt to address.

Recently, Peng et al. (2018) introduced rational RNNs, a subclass of RNNs whose internal state can be computed by independent weighted finite automata (WFAs). Intuitively, such models have a computationally simpler recurrent update than conventional models like long short-term memory networks (LSTMs; Hochreiter and Schmidhuber, 1997).

![](images/81aea0da9e56b5d9d938ba7a1fa30789d07dbd5259ed7a7e74b20f7c4a1e1de0.jpg)
Figure 1: Hierarchy of state expressiveness for saturated RNNs and related models. The $y$ axis represents increasing space complexity. $\emptyset$ means provably empty. Models are in bold with qualitative descriptions in gray.
Empirically, rational RNNs like the quasi-recurrent neural network (QRNN; Bradbury et al., 2016) and unigram rational RNN (Dodge et al., 2019) perform comparably to the LSTM, with a smaller computational budget. Still, the underlying simplicity of rational models raises the question of whether their expressive power is fundamentally limited compared to other RNNs.

In a separate line of work, Merrill (2019) introduced the saturated $\mathbf{RNN}^1$ as a formal model for analyzing the capacity of RNNs. A saturated RNN is a simplified network where all activation functions have been replaced by step functions. The saturated network may be seen intuitively as a "stable" version of its original RNN, in which the internal activations act discretely. A growing body of work—including this paper—finds that the saturated theory predicts differences in practical learnable capacity for various RNN architectures (Weiss et al., 2018; Merrill, 2019; Suzgun et al., 2019a).

We compare the expressive power of rational and non-rational RNNs, distinguishing between state expressiveness (what kind and amount of information the RNN states can capture) and language expressiveness (what languages can be recognized when the state is passed to a classifier). To do this, we build on the theory of saturated RNNs.

State expressiveness We introduce a unified hierarchy (Figure 1) of the functions expressible by the states of rational and non-rational RNN encoders. The hierarchy is defined by two formal properties: space complexity, which is a measure of network memory,$^{2}$ and rational recurrence, whether the internal structure of the RNN can be described by WFAs. The hierarchy reveals concrete differences between LSTMs and QRNNs, and further separates both from a class containing convolutional neural networks (CNNs, Lecun and Bengio, 1995; Kim, 2014), Elman RNNs, and gated recurrent units (GRU; Cho et al., 2014).
+ +We provide the first formal proof that LSTMs can encode functions that rational recurrences cannot. On the other hand, we show that the saturated Elman RNN and GRU are rational recurrences with constant space complexity, whereas the QRNN has unbounded space complexity. We also show that an unrestricted WFA has rich expressive power beyond any saturated RNN we consider—including the LSTM. This difference potentially opens the door to more expressive RNNs incorporating the computational efficiency of rational recurrences. + +Language expressiveness When applied to classification tasks like language recognition, RNNs are typically combined with a "decoder": additional layer(s) that map their hidden states to a prediction. Thus, despite differences in state expressiveness, rational RNNs might be able to achieve comparable empirical performance to non-rational RNNs on NLP tasks. In this work, we consider the setup in which the decoders only view the final hidden state of the RNN. We demonstrate that + +a sufficiently strong decoder can overcome some of the differences in state expressiveness between different models. For example, an LSTM can recognize $a^n b^n$ with a single decoding layer, whereas a QRNN provably cannot until the decoder has two layers. However, we also construct a language that an LSTM can recognize without a decoder, but a QRNN cannot recognize with any decoder. Thus, no decoder can fully compensate for the weakness of the QRNN compared to the LSTM. + +Experiments Finally, we conduct experiments on formal languages, justifying that our theorems correctly predict which languages unsaturated recognizers trained by gradient descent can learn. Thus, we view our hierarchy as a useful formal tool for understanding the relative capabilities of different RNN architectures. + +Roadmap We present the formal devices for our analysis of RNNs in Section 2. In Section 3 we develop our hierarchy of state expressiveness for single-layer RNNs. 
In Section 4, we shift to study RNNs as language recognizers. Finally, in Section 5, we provide empirical results evaluating the relevance of our predictions for unsaturated RNNs.

# 2 Building Blocks

In this work, we analyze RNNs using formal models from automata theory, in particular WFAs and counter automata. In this section, we first define the basic notion of an encoder studied in this paper, and then introduce more specialized formal concepts: WFAs, counter machines (CMs), space complexity, and, finally, various RNN architectures.

# 2.1 Encoders

We view both RNNs and automata as encoders: machines that can be parameterized to compute a set of functions $f:\Sigma^{*}\to \mathbb{Q}^{k}$, where $\Sigma$ is an input alphabet and $\mathbb{Q}$ is the set of rational reals. Given an encoder $M$ and parameters $\theta$, we use $M_{\theta}$ to represent the specific function that the parameterized encoder computes. For each encoder, we refer to the set of functions that it can compute as its state expressiveness. For example, a deterministic finite state acceptor (DFA) is an encoder whose parameters are its transition graph. Its state expressiveness is the set of indicator functions for the regular languages.

# 2.2 WFAs

Formally, a WFA is a non-deterministic finite automaton where each starting state, transition, and final state is weighted. Let $Q$ denote the set of states, $\Sigma$ the alphabet, and $\mathbb{Q}$ the rational reals.

This weighting is specified by three functions:

1. Initial state weights $\lambda :Q\to \mathbb{Q}$
2. Transition weights $\tau :Q\times \Sigma \times Q\to \mathbb{Q}$
3. Final state weights $\rho :Q\to \mathbb{Q}$

The weights are used to encode any string $x \in \Sigma^{*}$:

Definition 1 (Path score). Let $\pi$ be a path of the form $q_0 \to_{x_1} q_1 \to_{x_2} \dots \to_{x_t} q_t$ through WFA $A$.
+ +The score of $\pi$ is given by + +$$ +A [ \pi ] = \lambda (q _ {0}) \left(\prod_ {i = 1} ^ {t} \tau (q _ {i - 1}, x _ {i}, q _ {i})\right) \rho (q _ {t}). +$$ + +By $\Pi (x)$ , denote the set of paths producing $x$ . + +Definition 2 (String encoding). The encoding computed by a WFA $A$ on string $x$ is + +$$ +A [ x ] = \sum_ {\pi \in \Pi (x)} A [ \pi ]. +$$ + +Hankel matrix Given a function $f:\Sigma^{*}\to \mathbb{Q}$ and two enumerations $\alpha ,\omega$ of the strings in $\Sigma^{*}$ , we define the Hankel matrix of $f$ as the infinite matrix + +$$ +[ H _ {f} ] _ {i j} = f \left(\alpha_ {i} \cdot \omega_ {j}\right). \tag {1} +$$ + +where $\cdot$ denotes concatenation. It is sometimes convenient to treat $H_{f}$ as though it is directly indexed by $\Sigma^{*}$ , e.g. $[H_f]_{\alpha_i,\omega_j} = f(\alpha_i\cdot \omega_j)$ , or refer to a sub-block of a Hankel matrix, row- and column-indexed by prefixes and suffixes $P,S\subseteq \Sigma^{*}$ . The following result relates the Hankel matrix to WFAs: + +Theorem 1 (Carlyle and Paz, 1971; Fliess, 1974). For any $f: \Sigma^{*} \to \mathbb{Q}$ , there exists a WFA that computes $f$ if and only if $H_{f}$ has finite rank. + +Rational series (Sakarovitch, 2009) For all $k \in \mathbb{N}$ , $\mathbf{f} : \Sigma^* \to \mathbb{Q}^k$ is a rational series if there exist WFAs $A_1, \dots, A_k$ such that, for all $x \in \Sigma^*$ and $1 \leq i \leq k$ , $A_i[x] = f_i(x)$ . + +# 2.3 Counter Machines + +We now turn to introducing a different type of encoder: the real-time counter machine (CM; Merrill, 2020; Fischer, 1966; Fischer et al., 1968). CMs are deterministic finite-state machines augmented with finitely many integer counters. While processing a string, the machine updates these counters, and may use them to inform its behavior. + +We view counter machines as encoders mapping $\Sigma^{*}\to \mathbb{Z}^{k}$ . For $m\in \mathbb{N},\circ \in \{+, - ,\times \}$ , let $\circ m$ denote the function $f(n) = n\circ m$ . 
Definition 3 (General CM; Merrill, 2020). A $k$-counter CM is a tuple $\langle \Sigma, Q, q_0, u, \delta \rangle$ with

1. A finite alphabet $\Sigma$
2. A finite set of states $Q$, with initial state $q_0$
3. A counter update function

$$
u: \Sigma \times Q \times \{0, 1\}^{k} \rightarrow \{\times 0, -1, +0, +1\}^{k}
$$

4. A state transition function

$$
\delta : \Sigma \times Q \times \{0, 1\}^{k} \rightarrow Q
$$

A CM processes input tokens $\{x_{t}\}_{t=1}^{n}$ sequentially. Denoting by $\langle q_t,\mathbf{c}_t\rangle \in Q\times \mathbb{Z}^k$ a CM's configuration at time $t$, define its next configuration:

$$
q_{t+1} = \delta \left(x_{t}, q_{t}, \vec{\mathbb{I}}_{=0}\left(\mathbf{c}_{t}\right)\right) \tag{2}
$$

$$
\mathbf{c}_{t+1} = u\left(x_{t}, q_{t}, \vec{\mathbb{I}}_{=0}\left(\mathbf{c}_{t}\right)\right)\left(\mathbf{c}_{t}\right), \tag{3}
$$

where $\vec{\mathbb{I}}_{=0}$ is a broadcasted "zero-check" operation, i.e., $\vec{\mathbb{I}}_{=0}(\mathbf{v})_i \triangleq \mathbb{1}_{=0}(v_i)$. In (2) and (3), note that the machine only views the zeroness of each counter, and not its actual value. A general CM's encoding of a string $x$ is the value of its counter vector $\mathbf{c}_t$ after processing all of $x$.

# Restricted CMs

1. A CM is $\Sigma$-restricted iff $u$ and $\delta$ depend only on the current input $\sigma \in \Sigma$.
2. A CM is $(\Sigma \times Q)$-restricted iff $u$ and $\delta$ depend only on the current input $\sigma \in \Sigma$ and the current state $q \in Q$.
3. A CM is $\Sigma^w$-restricted iff it is $(\Sigma \times Q)$-restricted, and the states $Q$ are windows over the last $w$ input tokens, e.g., $Q = \Sigma^{\leq w}$.

These restrictions prevent the machine from being "counter-aware": $u$ and $\delta$ cannot condition on the counters' values.
As we will see, restricted CMs have natural parallels in the realm of rational RNNs. In Subsection 3.2, we consider the relationship between counter awareness and rational recurrence.

# 2.4 Space Complexity

As in Merrill (2019), we also analyze encoders in terms of state space complexity, measured in bits.

Definition 4 (Bit complexity). An encoder $M:\Sigma^{*}\to \mathbb{Q}^{k}$ has $T(n)$ space iff

$$
\max_{\theta} \left| \left\{ s_{M_{\theta}}(x) \mid x \in \Sigma^{\leq n} \right\} \right| = 2^{T(n)},
$$

where $s_{M_{\theta}}(x)$ is a minimal representation of $M$'s internal configuration immediately after $x$.

We consider three asymptotic space complexity classes: $\Theta(1)$, $\Theta(\log n)$, and $\Theta(n)$, corresponding to encoders that can reach a constant, polynomial, and exponential (in sequence length) number of configurations, respectively. Intuitively, encoders that can dynamically count but cannot use more complex memory like stacks, such as all CMs, are in $\Theta(\log n)$ space. Encoders that can uniquely encode every input sequence are in $\Theta(n)$ space.

# 2.5 Saturated Networks

A saturated neural network is a discrete approximation of a neural network considered by Merrill (2019), who calls it an "asymptotic network." Given a parameterized neural encoder $M_{\theta}(x)$, we construct the saturated network $\mathrm{s}\text{-}M_{\theta}(x)$ by taking

$$
\mathrm{s}\text{-}M_{\theta}(x) = \lim_{N \rightarrow \infty} M_{N\theta}(x) \tag{4}
$$

where $N\theta$ denotes the parameters $\theta$ multiplied by a scalar $N$. This transforms each "squashing" function (sigmoid, tanh, etc.) to its extreme values ($0$, $\pm 1$). In line with prior work (Weiss et al., 2018; Merrill, 2019; Suzgun et al., 2019b), we consider saturated networks a reasonable approximation for analyzing practical expressive power. For clarity, we denote the saturated approximation of an architecture by prepending it with s, e.g., s-LSTM.
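To make the limit in Eq. (4) concrete, the following short sketch (our own illustration, not from the paper) scales the parameters of a single sigmoid unit by a growing factor $N$ and watches the unit saturate to a step function of the sign of its pre-activation; the particular weights $w = 0.3$, $b = -0.1$ are arbitrary assumptions:

```python
import math

def sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def scaled_gate(x, N, w=0.3, b=-0.1):
    # a single sigmoid unit whose parameters (w, b) are scaled by N, as in Eq. (4)
    return sigmoid(N * (w * x + b))

# As N grows, the gate output approaches a step function:
values_pos = [scaled_gate(1.0, N) for N in (1, 10, 100, 1000)]  # w*1 + b = 0.2 > 0, tends to 1
values_neg = [scaled_gate(0.0, N) for N in (1, 10, 100, 1000)]  # b = -0.1 < 0, tends to 0
```

The same scaling applied to every parameter of a full RNN drives all its gates and squashing nonlinearities to their extreme values simultaneously, which is the saturated network analyzed in the rest of the paper.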
# 2.6 RNNs

A recurrent neural network (RNN) is a parameterized update function $g_{\theta}:\mathbb{Q}^{k}\times \mathbb{Q}^{d_{x}}\to \mathbb{Q}^{k}$, where $\theta$ are the rational-valued parameters of the RNN and $d_{x}$ is the dimension of the input vector. $g_{\theta}$ takes as input a current state $\mathbf{h}\in \mathbb{Q}^k$ and input vector $\mathbf{x}\in \mathbb{Q}^{d_x}$, and produces the next state. Defining the initial state as $\mathbf{h}_0 = \mathbf{0}$, an RNN can be applied to an input sequence $x\in (\mathbb{Q}^{d_x})^*$ one vector at a time to create a sequence of states $\{\mathbf{h}_t\}_{t\leq |x|}$, each representing an encoding of the prefix of $x$ up to that time step. RNNs can be used to encode sequences over a finite alphabet $x\in \Sigma^{*}$ by first applying a mapping (embedding) $e:\Sigma \rightarrow \mathbb{Q}^{d_x}$.

Multi-layer RNNs "Deep" RNNs are RNNs that have been arranged in $L$ stacked layers $R_{1},\ldots ,R_{L}$. In this setting, the series of output states $\mathbf{h}_1, \mathbf{h}_2, \dots, \mathbf{h}_{|x|}$ generated by each RNN on its input is fed as input to the layer above it, and only the first layer receives the original input sequence $x \in \Sigma^*$ as input.

The recurrent update function $g$ can take several forms. The original and most simple form is that of the Elman RNN. Since then, more elaborate forms using gating mechanisms have become popular, among them the LSTM, GRU, and QRNN.

Elman RNNs (Elman, 1990) Let $\mathbf{x}_t$ be a vector embedding of $x_{t}$. For brevity, we suppress the bias terms in this (and the following) affine operations.

$$
\mathbf{h}_{t} = \tanh \left(\mathbf{W}\mathbf{x}_{t} + \mathbf{U}\mathbf{h}_{t-1}\right). \tag{5}
$$

We refer to the saturated Elman RNN as the s-RNN. The s-RNN has $\Theta(1)$ space (Merrill, 2019).
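The $\Theta(1)$ bound can be checked numerically: saturated tanh activations take values in $\{-1, +1\}$, so an s-RNN with $k$ hidden units can occupy at most $2^k$ configurations regardless of input length. A small sketch of this (our own illustration; the random weights and the two-symbol embedding are arbitrary assumptions):

```python
import itertools
import random

def sign(z):
    # saturated tanh: +1 or -1 (we assume pre-activations are never exactly zero)
    return 1 if z > 0 else -1

def s_rnn_step(W, U, x_emb, h):
    # saturated version of Eq. (5): h_t = sign(W x_t + U h_{t-1})
    k = len(h)
    return tuple(
        sign(sum(W[i][j] * x_emb[j] for j in range(len(x_emb)))
             + sum(U[i][j] * h[j] for j in range(k)))
        for i in range(k)
    )

random.seed(0)
k, d = 3, 2
W = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(k)]
U = [[random.uniform(-1, 1) for _ in range(k)] for _ in range(k)]
emb = {"a": (1.0, 0.0), "b": (0.0, 1.0)}

# enumerate all states reachable on strings over {a, b} up to length 8
reachable = set()
for n in range(1, 9):
    for s in itertools.product("ab", repeat=n):
        h = (0,) * k
        for ch in s:
            h = s_rnn_step(W, U, emb[ch], h)
            reachable.add(h)
```

However long the strings grow, `reachable` stays inside the finite set $\{-1, +1\}^k$, which is why the s-RNN (like a DFA) has constant space.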
+ +LSTMs (Hochreiter and Schmidhuber, 1997) An LSTM is a gated RNN with a state vector $\mathbf{h}_t \in \mathbb{Q}^k$ and memory vector $\mathbf{c}_t \in \mathbb{Q}^k$ . + +$$ +\mathbf {f} _ {t} = \sigma \left(\mathbf {W} ^ {f} \mathbf {x} _ {t} + \mathbf {U} ^ {f} \mathbf {h} _ {t - 1}\right) \tag {6} +$$ + +$$ +\mathbf {i} _ {t} = \sigma \left(\mathbf {W} ^ {i} \mathbf {x} _ {t} + \mathbf {U} ^ {i} \mathbf {h} _ {t - 1}\right) \tag {7} +$$ + +$$ +\mathbf {o} _ {t} = \sigma \left(\mathbf {W} ^ {o} \mathbf {x} _ {t} + \mathbf {U} ^ {o} \mathbf {h} _ {t - 1}\right) \tag {8} +$$ + +$$ +\tilde {\mathbf {c}} _ {\mathbf {t}} = \tanh \left(\mathbf {W} ^ {c} \mathbf {x} _ {t} + \mathbf {U} ^ {c} \mathbf {h} _ {t - 1}\right) \tag {9} +$$ + +$$ +\mathbf {c} _ {t} = \mathbf {f} _ {t} \odot \mathbf {c} _ {t - 1} + \mathbf {i} _ {t} \odot \tilde {\mathbf {c}} _ {\mathbf {t}} \tag {10} +$$ + +$$ +\mathbf {h} _ {t} = \mathbf {o} _ {t} \odot \tanh \left(\mathbf {c} _ {t}\right). \tag {11} +$$ + +The LSTM can use its memory vector $\mathbf{c}_t$ as a register of counters (Weiss et al., 2018). Merrill (2019) showed that the s-LSTM has $\Theta (\log n)$ space. + +GRUs (Cho et al., 2014) Another kind of gated RNN is the GRU. + +$$ +\mathbf {z} _ {t} = \sigma \left(\mathbf {W} ^ {z} \mathbf {x} _ {t} + \mathbf {U} ^ {z} \mathbf {h} _ {t - 1}\right) \tag {12} +$$ + +$$ +\mathbf {r} _ {t} = \sigma \left(\mathbf {W} ^ {r} \mathbf {x} _ {t} + \mathbf {U} ^ {r} \mathbf {h} _ {t - 1}\right) \tag {13} +$$ + +$$ +\mathbf {u} _ {t} = \tanh \left(\mathbf {W} ^ {u} \mathbf {x} _ {t} + \mathbf {U} ^ {u} \left(\mathbf {r} _ {t} \odot \mathbf {h} _ {t - 1}\right)\right) \tag {14} +$$ + +$$ +\mathbf {h} _ {t} = \mathbf {z} _ {t} \odot \mathbf {h} _ {t - 1} + (1 - \mathbf {z} _ {t}) \odot \mathbf {u} _ {t}. \tag {15} +$$ + +Weiss et al. (2018) found that, unlike the LSTM, the GRU cannot use its memory to count dynamically. Merrill (2019) showed the s-GRU has $\Theta(1)$ space. 
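The contrast between LSTM and GRU memory can be made concrete. In the saturated limit the LSTM gates take values in $\{0, 1\}$ and the candidate cell in $\{-1, 0, 1\}$; fixing $f_t = i_t = 1$ (an illustrative gate setting of our own, not a claim about what a trained network learns) turns the cell update in Eq. (10) into a counter. This is exactly the kind of $\Theta(\log n)$-bit state that the GRU's convex combination in Eq. (15), whose entries stay bounded in $[-1, 1]$, cannot maintain:

```python
def s_lstm_count(x):
    """Counting with a saturated-LSTM-style cell (illustrative gate settings).

    With saturated gates f_t = i_t = 1 and saturated candidate
    c~_t = +1 on 'a', -1 on 'b', Eq. (10) reduces to a running
    counter c_t = #_{a-b}(x_{:t}).
    """
    c = 0
    for ch in x:
        f_t, i_t = 1, 1                   # saturated forget and input gates
        c_tilde = 1 if ch == "a" else -1  # saturated candidate cell value
        c = f_t * c + i_t * c_tilde       # Eq. (10), scalar case
    return c
```

Storing a count that can reach $n$ after $n$ tokens requires $\Theta(\log n)$ bits, matching the s-LSTM space bound quoted above.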
![](images/e1200c889d0f2ee383c9ed4616f2dafdf573b0da2581da44d6f775a2833c6bc1.jpg)
Figure 2: Diagram of the relations between encoders. Neural networks are underlined. We group by asymptotic upper bound $(O)$, as opposed to tight $(\Theta)$.

QRNNs Bradbury et al. (2016) propose QRNNs as a computationally efficient hybrid of LSTMs and CNNs. Let $*$ denote convolution over time, let $\mathbf{W}^z, \mathbf{W}^f, \mathbf{W}^o \in \mathbb{Q}^{d_x \times w \times k}$ be convolutions with window length $w$, and let $\mathbf{X} \in \mathbb{Q}^{n \times d_x}$ denote the matrix of $n$ input vectors. An IFO-QRNN (henceforth referred to as a QRNN) with window length $w$ is defined by $\mathbf{W}^z, \mathbf{W}^f$, and $\mathbf{W}^o$ as follows:

$$
\mathbf{Z} = \tanh \left(\mathbf{W}^{z} * \mathbf{X}\right) \tag{16}
$$

$$
\mathbf{F} = \sigma \left(\mathbf{W}^{f} * \mathbf{X}\right) \tag{17}
$$

$$
\mathbf{O} = \sigma \left(\mathbf{W}^{o} * \mathbf{X}\right) \tag{18}
$$

$$
\mathbf{c}_{t} = \mathbf{f}_{t} \odot \mathbf{c}_{t-1} + \mathbf{i}_{t} \odot \mathbf{z}_{t} \tag{19}
$$

$$
\mathbf{h}_{t} = \mathbf{o}_{t} \odot \mathbf{c}_{t} \tag{20}
$$

where $\mathbf{z}_t, \mathbf{f}_t, \mathbf{o}_t$ are respectively rows of $\mathbf{Z}, \mathbf{F}, \mathbf{O}$. A QRNN $Q$ can be seen as an LSTM in which all uses of the state vector $\mathbf{h}_t$ have been replaced with a computation over the last $w$ input tokens; in this way it is similar to a CNN.

The s-QRNN has $\Theta(\log n)$ space, as the analysis of Merrill (2019) for the s-LSTM directly applies. Indeed, any s-QRNN is also a $\Sigma^w$-restricted CM extended with $=\pm 1$ ("set to $\pm 1$") operations.

# 3 State Expressiveness

We now turn to presenting our results. In this section, we develop a hierarchy of single-layer RNNs based on their state expressiveness. A set-theoretic view of the hierarchy is shown in Figure 2.
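Since the arguments in this section repeatedly evaluate WFAs on strings, it helps to recall that the encoding of Definition 2 can be computed as a product of per-token transition matrices. The sketch below (our own, mirroring the counting construction that appears later as Figure 4) implements a two-state WFA that computes $\#_a(x)$, the number of $a$s in $x$:

```python
# A WFA over {a, b} computing #_a(x). States q0, q1 with
# initial weights lambda = (1, 0), final weights rho = (0, 1);
# T[sigma][i][j] is the transition weight of q_i -> q_j on sigma.
LAM = (1.0, 0.0)
RHO = (0.0, 1.0)
T = {
    "a": ((1.0, 1.0), (0.0, 1.0)),  # +1: push weight into the counting state
    "b": ((1.0, 0.0), (0.0, 1.0)),  # +0: leave the accumulated count unchanged
}

def wfa_encode(x):
    # Definition 2 as a matrix product: A[x] = lam . T[x_1] ... T[x_t] . rho
    v = list(LAM)
    for sigma in x:
        v = [sum(v[i] * T[sigma][i][j] for i in range(len(v)))
             for j in range(len(v))]
    return sum(v[i] * RHO[i] for i in range(len(v)))
```

Summing over all paths (Definition 2) and multiplying transition matrices are the same computation, which is why WFA encodings can be evaluated in time linear in $|x|$.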
Let $\mathcal{R}$ be the set of rational series. The hierarchy relates $\Theta(\log n)$ space to the following sets:

- RR As in Peng et al. (2018), we say that an encoder is rationally recurrent (RR) iff its state expressiveness is a subset of $\mathcal{R}$.
- RR-hard An encoder is RR-hard iff its state expressiveness contains $\mathcal{R}$. A Turing machine is RR-hard, as it can simulate any WFA.
- RR-complete Finally, an encoder is RR-complete iff its state expressiveness is equivalent to $\mathcal{R}$. A trivial example of an RR-complete encoder is a vector of $k$ WFAs.

The different RNNs are divided between the intersections of these classes. In Subsection 3.1, we prove that the s-LSTM, already established to have $\Theta(\log n)$ space, is not RR. In Subsection 3.2, we demonstrate that encoders with restricted counting ability (e.g., QRNNs) are RR, and in Subsection 3.3, we show the same for all encoders with finite state (CNNs, s-RNNs, and s-GRUs). In Subsection 3.4, we demonstrate that none of these RNNs are RR-hard. In Appendix F, we extend this analysis from RNNs to self attention.

# 3.1 Counting Beyond RR

We find that encoders like the s-LSTM, which, as discussed in Subsection 2.3, is "aware" of its current counter values, are not RR. To do this, we construct $f_0: \{a, b\}^* \to \mathbb{N}$ that requires counter awareness to compute on strings of the form $a^*b^*$, making it not rational. We then construct an s-LSTM computing $f_0$ over $a^*b^*$.

Let $\#_{a-b}(x)$ denote the number of $a$s in string $x$ minus the number of $b$s.

Definition 5 (Rectified counting).

$$
f_{0}: x \mapsto \begin{cases} \#_{a-b}(x) & \text{if } \#_{a-b}(x) > 0 \\ 0 & \text{otherwise.} \end{cases}
$$

Lemma 1. For all $f: \{a, b\}^* \to \mathbb{N}$, if $f(a^i b^j) = f_0(a^i b^j)$ for all $i, j \in \mathbb{N}$, then $f \notin \mathcal{R}$.

Proof.
Consider the Hankel sub-block $\mathbf{A}_n$ of $H_{f}$ with prefixes $P_{n} = \{a^{i}\}_{i\leq n}$ and suffixes $S_{n} = \{b^{j}\}_{j\leq n}$. $\mathbf{A}_n$ is lower-triangular:

$$
\left( \begin{array}{c c c c} 0 & 0 & 0 & \dots \\ 1 & 0 & 0 & \dots \\ 2 & 1 & 0 & \dots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right). \tag{21}
$$

Therefore $\mathrm{rank}(\mathbf{A}_n) = n - 1$. Thus, for all $n$, there is a sub-block of $H_{f}$ with rank $n - 1$, and so $\mathrm{rank}(H_f)$ is unbounded. It follows from Theorem 1 that there is no WFA computing $f$.

Theorem 2. The s-LSTM is not RR.

![](images/eb0c044f7be82331ca24a4bd17906a8d8112f3027640412707a7084b71c448f2.jpg)
Figure 3: A 1-CM computing $f_{0}$ for $x \in \{a^{i}b^{j} \mid i, j \in \mathbb{N}\}$. Let $\sigma / \pm m$ denote a transition that consumes $\sigma$ and updates the counter by $\pm m$. We write $\sigma, =0 / \pm m$ (or $\neq 0$) for a transition that additionally requires that the counter be $0$ (or nonzero).

Proof. Assume the input has the form $a^i b^j$ for some $i, j$. Consider the following LSTM:

$$
i_{t} = \sigma \left(10N h_{t-1} - 2N \mathbb{1}_{=b}\left(x_{t}\right) + N\right) \tag{22}
$$

$$
\tilde{c}_{t} = \tanh \left(N \mathbb{1}_{=a}(x_{t}) - N \mathbb{1}_{=b}(x_{t})\right) \tag{23}
$$

$$
c_{t} = c_{t-1} + i_{t} \tilde{c}_{t} \tag{24}
$$

$$
h_{t} = \tanh \left(c_{t}\right). \tag{25}
$$

Let $N\to \infty$. Then $i_t = 0$ iff $x_{t} = b$ and $h_{t-1} = 0$ (i.e., $c_{t-1} = 0$). Meanwhile, $\tilde{c}_t = 1$ iff $x_{t} = a$. The update term becomes

$$
i_{t} \tilde{c}_{t} = \begin{cases} 1 & \text{if } x_{t} = a \\ -1 & \text{if } x_{t} = b \text{ and } c_{t-1} > 0 \\ 0 & \text{otherwise.} \end{cases} \tag{26}
$$

For a string $a^i b^j$, the update in (26) is equivalent to the CM in Figure 3. Thus, by Lemma 1, the s-LSTM (and the general CM) is not RR.
$\square$

# 3.2 Rational Counting

While the counter awareness of a general CM enables it to compute non-rational functions, CMs that cannot view their counters are RR.

Theorem 3. Any $\Sigma$-restricted CM is RR.

Proof. We show that any function that a $\Sigma$-restricted CM can compute can also be computed by a collection of WFAs. The CM update operations ($-1$, $+0$, $+1$, or $\times 0$) can all be re-expressed in terms of functions $\mathbf{r}, \mathbf{u}: \Sigma \to \mathbb{Z}^{k}$ to get:

$$
\mathbf{c}_{t} = \mathbf{r}\left(x_{t}\right) \mathbf{c}_{t-1} + \mathbf{u}\left(x_{t}\right) \tag{27}
$$

$$
\mathbf{c}_{t} = \sum_{i=1}^{t} \left(\prod_{j=i+1}^{t} \mathbf{r}\left(x_{j}\right)\right) \mathbf{u}\left(x_{i}\right). \tag{28}
$$

A WFA computing $[\mathbf{c}_t]_i$ is shown in Figure 4.

The WFA in Figure 4 also underlies unigram rational RNNs (Peng et al., 2018). Thus, $\Sigma$-restricted CMs are actually a special case of unigram WFAs. In Appendix A, we show the more general result:

Theorem 4. Any $(\Sigma \times Q)$-restricted CM is RR.

In many rational RNNs, the updates at different time steps are independent of each other outside of a window of $w$ tokens. Theorem 4 tells us this independence is not an essential property of rational encoders. Rather, any CM where the update is conditioned by finite state (as opposed to being conditioned by a local window) is in fact RR.

Furthermore, since $(\Sigma^w)$-restricted CMs are a special case of $(\Sigma \times Q)$-restricted CMs, Theorem 4 can be directly applied to show that the s-QRNN is RR. See Appendix A for further discussion of this.

# 3.3 Finite-Space RR

Theorem 4 motivates us to also think about finite-space encoders, i.e., encoders with "no counters," where the output at each prefix is fully determined by a finite amount of memory. The following lemma implies that any finite-space encoder is RR:

Lemma 2.
Any function $f: \Sigma^{*} \to \mathbb{Q}$ computable by a $\Theta(1)$-space encoder is a rational series.

Proof. Since $f$ is computable in $\Theta(1)$ space, there exists a DFA $A_f$ whose accepting states are isomorphic to the range of $f$. We convert $A_f$ to a WFA by labelling each accepting state by the value of $f$ that it corresponds to. We set the starting weight of the initial state to 1, and 0 for every other state. We assign each transition weight 1.

Since the CNN, s-RNN, and s-GRU have finite state, we obtain the following result:

Theorem 5. The CNN, s-RNN, and s-GRU are RR.

While Schwartz et al. (2018) and Peng et al. (2018) showed the CNN to be RR over the max-plus semiring, Theorem 5 shows the same holds for $\langle \mathbb{Q}, \cdot, + \rangle$.

# 3.4 RR Completeness

While "rational recurrence" is often used to indicate the simplicity of an RNN architecture, we find in this section that WFAs are surprisingly computationally powerful. Figure 5 shows a WFA mapping binary strings to their numeric value, proving WFAs have $\Theta(n)$ space. We now show that none of our RNNs are able to simulate an arbitrary WFA, even in the unsaturated form.

![](images/3f6fceb868dd7fe80e198f8aeddc20031069462121524399be21add6b9d5d38a.jpg)
Figure 4: WFA simulating unit $i$ of a $\Sigma$-restricted CM. Let $\forall \sigma / w(\sigma)$ denote a set of transitions consuming each token $\sigma$ with weight $w(\sigma)$. We use standard DFA notation to show initial weights $\lambda(q_0) = 1, \lambda(q_1) = 0$ and accepting weights $\rho(q_0) = 0, \rho(q_1) = 1$.

![](images/534f68b577a5b46ac5695173191cb7f588812a89bbfcea201ad2c2513c0338a0.jpg)
Figure 5: A WFA mapping binary strings to their numeric value. This can be extended for any base $>2$. Cortes and Mohri (2000) present a similar construction. Notation is the same as Figure 4.

Theorem 6. Both the saturated and unsaturated RNN, GRU, QRNN, and LSTM are not RR-hard.

Proof.
Consider the function $f_{b}$ mapping binary strings to their value, e.g. $101 \mapsto 5$ . The WFA in Figure 5 shows that this function is rational. + +The value of $f_{b}$ grows exponentially with the sequence length. On the other hand, the value of the RNN and GRU cell is bounded by 1, and QRNN and LSTM cells can only grow linearly in time. Therefore, these encoders cannot compute $f_{b}$ . + +In contrast, memory networks can have $\Theta (n)$ space. Appendix G explores this for stack RNNs. + +# 3.5 Towards Transformers + +Appendix F presents preliminary results extending saturation analysis to self attention. We show saturated self attention is not RR and consider its space complexity. We hope further work will more completely characterize saturated self attention. + +# 4 Language Expressiveness + +Having explored the set of functions expressible internally by different saturated RNN encoders, we turn to the languages recognizable when using them with a decoder. We consider the following setup: + +1. An s-RNN encodes $x$ to a vector $\mathbf{h}_t\in \mathbb{Q}^k$ +2. A decoder function maps the last state $\mathbf{h}_t$ to an accept/reject decision, respectively: $\{1,0\}$ . + +We say that a language $L$ is decided by an encoder-decoder pair $\mathbf{e}$ , $\mathbf{d}$ if $\mathbf{d}(\mathbf{e}(x)) = 1$ for every sequence $x \in L$ and otherwise $\mathbf{d}(\mathbf{e}(x)) = 0$ . We explore which languages can be decided by different encoder-decoder pairings. + +Some related results can be found in Cortes and Mohri (2000), who study the expressive power of WFAs in relation to CFGs under a slightly different definition of language recognition. + +# 4.1 Linear Decoders + +Let $\mathbf{d}_1$ be the single-layer linear decoder + +$$ +\mathbf {d} _ {1} \left(\mathbf {h} _ {t}\right) \triangleq \mathbb {1} _ {> 0} \left(\mathbf {w} \cdot \mathbf {h} _ {t} + b\right) \in \{0, 1 \} \tag {29} +$$ + +parameterized by $\mathbf{w}$ and $b$ . 
For an encoder architecture $E$, we denote by $D_{1}(E)$ the set of languages decidable by $E$ with $\mathbf{d}_1$. We use $D_{2}(E)$ analogously for a 2-layer decoder with $\mathbb{1}_{>0}$ activations, where the first layer has arbitrary width.

# 4.2 A Decoder Adds Power

We refer to sets of strings using regular expressions, e.g. $a^* = \{a^i \mid i \in \mathbb{N}\}$. To illustrate the purpose of the decoder, consider the following language:

$$
L_{\leq} = \{x \in \{a, b\}^{*} \mid \#_{a-b}(x) \leq 0\}. \tag{30}
$$

The Hankel sub-block of the indicator function for $L_{\leq}$ over $P = a^{*}$, $S = b^{*}$ is triangular ($[H]_{ij} = 1$ iff $i \leq j$), and so has unbounded rank. Therefore, by Theorem 1, no RR encoder can compute it.

However, adding the $D_{1}$ decoder allows us to compute this indicator function with an s-QRNN, which is RR. We set the s-QRNN layer to compute the simple series $c_{t} = \#_{a-b}(x)$ (by increasing on $a$ and decreasing on $b$). The $D_{1}$ layer then checks $c_{t} \leq 0$. So, while the indicator function for $L_{\leq}$ is not itself rational, it can be easily recovered from a rational representation. Thus, $L_{\leq} \in D_{1}(\text{s-QRNN})$.

# 4.3 Case Study: $a^n b^n$

We compare the language expressiveness of several rational and non-rational RNNs on the following:

$$
a^{n} b^{n} \triangleq \left\{a^{n} b^{n} \mid n \in \mathbb{N}\right\} \tag{31}
$$

$$
a^{n} b^{n} \Sigma^{*} \triangleq \left\{a^{n} b^{n} (a|b)^{*} \mid 0 < n\right\}. \tag{32}
$$

$a^n b^n$ is more interesting than $L_{\leq}$ because the $D_{1}$ decoder cannot decide it simply by asking the encoder to track $\#_{a-b}(x)$, as that would require it to compute the non-linearly separable $=0$ function. Thus, it appears at first that deciding $a^{n}b^{n}$ with $D_{1}$ might require a non-rational RNN encoder. However, we show below that this is not the case.

Let $\circ$ denote stacking two layers.
We will go on to discuss the following results:

$$
a^{n} b^{n} \in D_{1}(\mathrm{WFA}) \tag{33}
$$

$$
a^{n} b^{n} \in D_{1}(\text{s-LSTM}) \tag{34}
$$

$$
a^{n} b^{n} \notin D_{1}(\text{s-QRNN}) \tag{35}
$$

$$
a^{n} b^{n} \in D_{1}(\text{s-QRNN} \circ \text{s-QRNN}) \tag{36}
$$

$$
a^{n} b^{n} \in D_{2}(\text{s-QRNN}) \tag{37}
$$

$$
a^{n} b^{n} \Sigma^{*} \in D_{1}(\text{s-LSTM}) \tag{38}
$$

$$
a^{n} b^{n} \Sigma^{*} \notin D(\text{s-QRNN}) \text{ for any } D \tag{39}
$$

$$
a^{n} b^{n} \Sigma^{*} \cup \{\epsilon\} \in D_{1}(\text{s-QRNN} \circ \text{s-QRNN}) \tag{40}
$$

WFAs (Appendix B) In Theorem 8 we present a function $f: \Sigma^{*} \to \mathbb{Q}$ satisfying $f(x) > 0$ iff $x \in a^{n}b^{n}$, and show that $H_{f}$ has finite rank. It follows that there exists a WFA that can decide $a^{n}b^{n}$ with the $D_{1}$ decoder. Counterintuitively, $a^{n}b^{n}$ can be recognized using rational encoders.

QRNNs (Appendix C) Although $a^n b^n \in D_1(\mathrm{WFA})$, it does not follow that every rationally recurrent model can also decide $a^n b^n$ with the help of $D_1$. Indeed, in Theorem 9, we prove that $a^n b^n \notin D_1(\text{s-QRNN})$, whereas $a^n b^n \in D_1(\text{s-LSTM})$ (Theorem 13).

It is important to note that, with a more complex decoder, the QRNN could recognize $a^n b^n$. For example, the s-QRNN can encode $c_{1} = \#_{a-b}(x)$ and set $c_{2}$ to check whether $x$ contains $ba$, from which a $D_{2}$ decoder can recognize $a^n b^n$ (Theorem 10).

This does not mean the hierarchy dissolves as the decoder is strengthened. We show that $a^n b^n \Sigma^*$, which seems like a trivial extension of $a^n b^n$, is not recognizable by the s-QRNN with any decoder.
This result may appear counterintuitive, but in fact highlights the s-QRNN's lack of counter awareness: it can only passively encode the information needed by the decoder to recognize $a^{n}b^{n}$. Failing to recognize that a valid prefix has been matched, it cannot act to preserve that information after additional input tokens are seen. We present a proof in Theorem 11. In contrast, in Theorem 14 we show that the s-LSTM can directly encode an indicator for $a^{n}b^{n}\Sigma^{*}$ in its internal state.

Proof sketch: $a^n b^n \Sigma^* \notin D(\text{s-QRNN})$. A sequence $s_1 \in a^n b^n \Sigma^*$ is shuffled to create $s_2 \notin a^n b^n \Sigma^*$ with an identical multi-set of counter updates. Counter updates would be order agnostic if not for reset operations, and resets mask all history, so extending $s_1$ and $s_2$ with a single suffix $s$ containing all of their $w$-grams reaches the same final state. Then for any $D$, $D(\text{s-QRNN})$ cannot separate them. We formalize this in Theorem 11.

We refer to this technique as the suffix attack, and note that it can be used to prove for multiple other languages $L \in D_{2}(\text{s-QRNN})$ that $L \cdot \Sigma^{*}$ is not in $D(\text{s-QRNN})$ for any decoder $D$.

2-layer QRNNs Adding another layer overcomes the weakness of the 1-layer s-QRNN, at least for deciding $a^n b^n$. This follows from the fact that $a^n b^n \in D_2(\text{s-QRNN})$: the second QRNN layer can be used as a linear layer.

Similarly, we show in Theorem 10 that a 2-layer s-QRNN can recognize $a^n b^n \Sigma^* \cup \{\epsilon\}$. This suggests that adding a second s-QRNN layer compensates for some of the weakness of the 1-layer s-QRNN, which, by the same argument as for $a^n b^n \Sigma^*$, cannot recognize $a^n b^n \Sigma^* \cup \{\epsilon\}$ with any decoder.

# 4.4 Arbitrary Decoder

Finally, we study the theoretical case where the decoder is an arbitrary recursively enumerable (RE) function.
We view this as a loose upper bound of stacking many layers after a rational encoder. What information is inherently lost by using a rational encoder? WFAs can uniquely encode each input, making them Turing-complete under this setup; however, this does not hold for rational s-RNNs.

RR-complete Assuming an RR-complete encoder, a WFA like Figure 5 can be used to encode each possible input sequence over $\Sigma$ to a unique number. We then use the decoder as an oracle to decide any RE language. Thus, an RR-complete encoder with an RE decoder is Turing-complete.

Bounded space However, the $\Theta(\log n)$ space bound of saturated rational RNNs like the s-QRNN means these models cannot fully encode the input. In other words, some information about the prefix $x_{:t}$ must be lost in $\mathbf{c}_t$. Thus, rational s-RNNs are not Turing-complete with an RE decoder.

# 5 Experiments

In Subsection 4.3, we showed that different saturated RNNs vary in their ability to recognize $a^{n}b^{n}$ and $a^{n}b^{n}\Sigma^{*}$. We now test empirically whether these predictions carry over to the learnable capacity of unsaturated RNNs. We compare the QRNN and LSTM when coupled with a linear decoder $D_{1}$. We also train a 2-layer QRNN ("QRNN2") and a 1-layer QRNN with a $D_{2}$ decoder ("QRNN+").

We train on strings of length 64, and evaluate generalization on longer strings. We also compare to a baseline that always predicts the majority class. The results are shown in Figure 6. We provide further experimental details in Appendix E.

![](images/bb39623e0f46d7e2379f630028f2c030f508e61a073dfaceb0e35b3c122b657f.jpg)

![](images/22d792a5e26eea4672c41dc92ff098883f504694da6019b35f775aeb45c18699.jpg)
Figure 6: Accuracy recognizing $L_{5}$ and $a^{n}b^{n}\Sigma^{*}$. "QRNN+" is a QRNN with a 2-layer decoder, and "QRNN2" is a 2-layer QRNN with a 1-layer decoder.
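For reference, membership in the formal languages used in these experiments can be checked directly. The checkers below are our own sketch, independent of the paper's experimental code, but are enough to generate labeled strings for training and evaluation:

```python
def in_anbn(s):
    # a^n b^n with n >= 0, as in Eq. (31)
    n = len(s) // 2
    return len(s) % 2 == 0 and s == "a" * n + "b" * n

def in_anbn_sigma(s):
    # a^n b^n (a|b)* with n > 0, as in Eq. (32)
    return any(s.startswith("a" * n + "b" * n)
               for n in range(1, len(s) // 2 + 1))

def in_L5(s):
    # |#_{a-b}(s)| < 5, the balanced-label variant used in Experiment 1
    return abs(s.count("a") - s.count("b")) < 5
```

Enumerating all strings over $\{a, b\}$ up to a fixed length and labeling them with these predicates yields the kind of balanced (for $L_5$) or unbalanced (for $a^n b^n \Sigma^*$) datasets the experiments describe.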
Experiment 1 We use the following language, which has similar formal properties to $a^n b^n$, but with a more balanced label distribution:

$$
L_{5} = \left\{x \in (a|b)^{*} \mid \left|\#_{a-b}(x)\right| < 5\right\}. \tag{41}
$$

In line with (34), the LSTM decides $L_{5}$ perfectly for $n \leq 64$, and generalizes fairly well to longer strings. As predicted in (35), the QRNN cannot fully learn $L_{5}$ even for $n = 64$. Finally, as predicted in (36) and (37), the 2-layer QRNN and the QRNN with $D_{2}$ do learn $L_{5}$. However, we see that they do not generalize as well as the LSTM for longer strings. We hypothesize that these multi-layer models require more epochs to reach the same generalization performance as the LSTM.

Experiment 2 We also consider $a^n b^n \Sigma^*$. As predicted in (38) and (40), the LSTM and 2-layer QRNN decide $a^n b^n \Sigma^*$ flawlessly for $n = 64$. A 1-layer QRNN performs at the majority baseline for all $n$ with both a 1 and 2-layer decoder. Both of these failures were predicted in (39). Thus, the only models that learned $a^n b^n \Sigma^*$ were exactly those predicted by the saturated theory.

# 6 Conclusion

We develop a hierarchy of saturated RNN encoders, considering two angles: space complexity and rational recurrence. Based on the hierarchy, we formally distinguish the state expressiveness of the non-rational s-LSTM and its rational counterpart, the s-QRNN. We show further distinctions in state expressiveness based on encoder space complexity.

Moreover, the hierarchy translates to differences in language recognition capabilities. Strengthening the decoder alleviates some, but not all, of these differences. We present two languages, both recognizable by an LSTM. We show that one can be recognized by an s-QRNN only with the help of a decoder, and that the other cannot be recognized by an s-QRNN with the help of any decoder.
+ +While this means existing rational RNNs are fundamentally limited compared to LSTMs, we find that it is not necessarily being rationally recurrent that limits them: in fact, we prove that a WFA can perfectly encode its input—something no saturated RNN can do. We conclude with an analysis that shows that an RNN architecture's strength must also take into account its space complexity. These results further our understanding of the inner working of NLP systems. We hope they will guide the development of more expressive rational RNNs. + +# Acknowledgments + +We appreciate Amir Yehudayoff's help in finding the WFA used in Theorem 8, and the feedback of researchers at the Allen Institute for AI, our anonymous reviewers, and Tobias Jaroslaw. The project was supported in part by NSF grant IIS-1562364, Israel Science Foundation grant no.1319/16, and the European Research Council under the EU's Horizon 2020 research and innovation program, grant agreement No. 802774 (iEXTRACT). + +# References + +Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. +Borja Balle, Xavier Carreras, Franco M. Luque, and Ariadna Quattoni. 2014. Spectral learning of weighted automata. Machine Learning, 96(1):33-63. +James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. 2016. Quasi-recurrent neural networks. +J. W. Carlyle and A. Paz. 1971. Realizations by stochastic finite automata. J. Comput. Syst. Sci., 5(1):26-40. +Yining Chen, Sorcha Gilroy, Andreas Maletti, Jonathan May, and Kevin Knight. 2018. Recurrent neural networks as weighted language recognizers. In Proc. of NAACL, pages 2261-2271. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. of EMNLP, pages 1724-1734. +Corinna Cortes and Mehryar Mohri. 2000. Context-free recognition with weighted automata. 
Grammars, 3(2/3):133-150. +Jesse Dodge, Roy Schwartz, Hao Peng, and Noah A. Smith. 2019. RNN architecture learning with sparse regularization. In Proc. of EMNLP, pages 1179-1184. +Jeffrey L Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179-211. +Patrick C Fischer. 1966. Turing machines with restricted memory access. Information and Control, 9(4):364-379. +Patrick C. Fischer, Albert R. Meyer, and Arnold L. Rosenberg. 1968. Counter machines and counter languages. Mathematical Systems Theory, 2(3):265-283. +Michel Fliess. 1974. Matrices de Hankel. J. Math. Pures Appl, 53(9):197-222. +Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform. Proceedings of Workshop for NLP Open Source Software (NLP-OSS). +Michael Hahn. 2020. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156-171. + +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780. +Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proc. of EMNLP, pages 1746-1751. +Yann Lecun and Yoshua Bengio. 1995. The Handbook of Brain Theory and Neural Networks, chapter "Convolutional Networks for Images, Speech, and Time Series". MIT Press. +William Merrill. 2019. Sequential neural networks as automata. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 1-13. +William Merrill. 2020. On the linguistic capacity of real-time counter automata. +Hao Peng, Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. Rational recurrences. In Proc. of EMNLP, pages 1203-1214. +Jacques Sakarovitch. 2009. Rational and recognisable power series. In Handbook of Weighted Automata, pages 105-174. Springer. +Roy Schwartz, Sam Thomson, and Noah A. Smith. 2018. 
Bridging CNNs, RNNs, and weighted finite-state machines. In Proc. of ACL, pages 295-305. +Hava T. Siegelmann and Eduardo D. Sontag. 1992. On the computational power of neural nets. In Proc. of COLT, pages 440-449. +Hava T. Siegelmann and Eduardo D. Sontag. 1994. Analog computation via neural networks. Theoretical Computer Science, 131(2):331-360. +Mirac Suzgun, Yonatan Belinkov, Stuart Shieber, and Sebastian Gehrmann. 2019a. LSTM networks can perform dynamic counting. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 44-54. +Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019b. Memory-augmented recurrent neural networks can learn generalized Dyck languages. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. +Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision RNNs for language recognition. + +# A Rational Counting + +We extend the result in Theorem 3 as follows. + +Theorem 7. Any $(\Sigma \times Q)$ -restricted CM is rationally recurrent. + +Proof. We present an algorithm to construct a WFA computing an arbitrary counter in a $(\Sigma \times Q)$ -restricted CM. First, we create two independent copies of the transition graph for the restricted CM. We refer to one copy of the CM graph as the add graph, and the other as the multiply graph. + +The initial state in the add graph receives a starting weight of 1, and every other state receives a starting weight of 0. Each state in the add graph receives an accepting weight of 0, and each state in the multiply graph receives an accepting weight of 1. In the add graph, each transition receives a weight of 1. In the multiply graph, each transition receives a weight of 0 if it represents $\times 0$ , and 1 otherwise. 
Finally, for each non-multiplicative update $\sigma / + m^{13}$ from $q_{i}$ to $q_{j}$ in the original CM, we add a WFA transition $\sigma /m$ from $q_{i}$ in the add graph to $q_{j}$ in the multiply graph. + +Each counter update creates one path ending in the multiply graph. The path score is set to 0 if that counter update is "erased" by a $\times 0$ operation. Thus, the sum of all the path scores in the WFA equals the value of the counter. + +This construction can be extended to accommodate $= m$ counter updates from $q_{i}$ to $q_{j}$ by adding an additional transition from the initial state to $q_{j}$ in the multiplication graph with weight $m$ . This allows us to apply it directly to s-QRNNs, whose update operations include $= 1$ and $-1$ . + +# B WFAs + +We show that while WFAs cannot directly encode an indicator for the language $a^n b^n = \{a^n b^n\mid |n\in \mathbb{N}\}$ , they can encode a function that can be thresholded to recognize $a^n b^n$ , i.e.: + +Theorem 8. The language $a^n b^n = \{a^n b^n \mid n \in \mathbb{N}\}$ over $\Sigma = \{a, b\}$ is in $D_1(\mathrm{WFA})$ . + +We prove this by showing a function whose Hankel matrix has finite rank that, when combined with the identity transformation (i.e., $w = 1$ , $b = 0$ ) followed by thresholding, is an indicator for $a^n b^n$ . Using the shorthand $\sigma(x) = \#_{\sigma}(x)$ , the function + +is: + +$$ +f (w) = \left\{ \begin{array}{l l} 0. 5 - 2 (a (x) - b (x)) ^ {2} & \text {i f} x \in a ^ {*} b ^ {*} \\ - 0. 5 & \text {o t h e r w i s e .} \end{array} \right. \tag {42} +$$ + +Immediately $f$ satisfies $\mathbb{1}_{>0}(f(x))\iff x\in a^{n}b^{n}$ . To prove that its Hankel matrix, $H_{f}$ , has finite rank, we will create 3 infinite matrices of ranks 3, 3 and 1, which sum to $H_{f}$ . The majority of the proof will focus on the rank of the rank 3 matrices, which have similar compositions. + +We now show 3 series $r, s, t$ and a set of series they can be combined to create. 
These series will be used to create the base vectors for the rank 3 matrices. + +$$ +a _ {i} = \frac {i (i + 1)}{2} \tag {43} +$$ + +$$ +b _ {i} = i ^ {2} - 1 \tag {44} +$$ + +$$ +r _ {i} = \operatorname {f i x} _ {0} (i, a _ {i - 2}) \tag {45} +$$ + +$$ +s _ {i} = \operatorname {f i x} _ {1} (i, - b _ {i - 1}) \tag {46} +$$ + +$$ +t _ {i} = \operatorname {f i x} _ {2} (i, a _ {i - 1}) \tag {47} +$$ + +where for every $j\leq 2$ + +$$ +\operatorname {f i x} _ {j} (i, x) = \left\{ \begin{array}{l l} x & \text {i f} i > 2 \\ 1 & \text {i f} i = j \\ 0 & \text {o t h e r w i s e .} \end{array} \right. \tag {48} +$$ + +Lemma 3. Let $c_{i} = 1 - 2i^{2}$ and $\{c^{(k)}\}_{k\in \mathbb{N}}$ be the set of series defined $c_{i}^{(k)} = c_{|i - k|}$ . Then for every $i, k\in \mathbb{N}$ , + +$$ +c _ {i} ^ {(k)} = c _ {0} ^ {(k)} r _ {i} + c _ {1} ^ {(k)} s _ {i} + c _ {2} ^ {(k)} t _ {i}. +$$ + +Proof. For $i \in \{0,1,2\}$ , $r_i, s_i$ and $t_i$ collapse to a 'select' operation, giving the true statement $c_i^{(k)} = c_i^{(k)} \cdot 1$ . We now consider the case $i > 2$ . Substituting the series definitions in the right side of the equation gives + +$$ +c _ {k} a _ {i - 2} + c _ {| k - 1 |} (- b _ {i - 1}) + c _ {k - 2} a _ {i - 1} \tag {49} +$$ + +which can be expanded to + +$$ +(1 - 2 k ^ {2}) \quad \cdot \frac {i ^ {2} - 3 i + 2}{2} \quad + +$$ + +$$ +\left(1 - 2 (k - 1) ^ {2}\right) \quad \cdot \left(1 - (i - 1) ^ {2}\right) \quad + +$$ + +$$ +(1 - 2 (k - 2) ^ {2}) \quad \cdot \frac {(i - 1) i}{2}. +$$ + +Reordering the first component and partially opening the other two gives + +$$ +\begin{array}{l} (- 2 k ^ {2} + 1) \frac {i ^ {2} - 3 i + 2}{2} + \\ (- 2 k ^ {2} + 4 k - 1) (2 i - i ^ {2}) + \\ (- k ^ {2} + 4 k - 3. 5) (i ^ {2} - i) \\ \end{array} +$$ + +and a further expansion gives + +$$ +\begin{array}{l} - k ^ {2} i ^ {2} + \quad 0. 5 i ^ {2} + 3 k ^ {2} i - 1. 
5 i - 2 k ^ {2} + 1 + \\ 2 k ^ {2} i ^ {2} - 4 k i ^ {2} + i ^ {2} - 4 k ^ {2} i + 8 k i - 2 i + \\ - k ^ {2} i ^ {2} + 4 k i ^ {2} - 3. 5 i ^ {2} + k ^ {2} i - 4 k i + 3. 5 i \\ \end{array} +$$ + +which reduces to + +$$ +- 2 i ^ {2} + 4 k i - 2 k ^ {2} + 1 = 1 - 2 (k - i) ^ {2} = c _ {i} ^ {(k)}. +$$ + +![](images/1c10225df0b3ecc4b4f00bee0e4ae80a1c62db347b1de398a110fabd21214c87.jpg) + +We restate this as: + +Corollary 1. For every $k \in \mathbb{N}$ , the series $c^{(k)}$ is a linear combination of the series $r, s$ and $t$ . + +We can now show that $f$ is computable by a WFA, proving Theorem 8. By Theorem 1, it is sufficient to show that $H_{f}$ has finite rank. + +Lemma 4. $H_{f}$ has finite rank. + +Proof. For every $P, S \subseteq \{a, b\}^*$ , denote + +$$ +[ H _ {f} | _ {P, S} ] _ {u, v} = \left\{ \begin{array}{l l} [ H _ {f} ] _ {u, v} & \text {i f} u \in P \text {a n d} v \in S \\ 0 & \text {o t h e r w i s e} \end{array} \right. +$$ + +Using regular expressions to describe $P, S$ , we create the 3 finite rank matrices which sum to $H_{f}$ : + +$$ +A = \left. \left(H _ {f} + 0. 5\right) \right| _ {a ^ {*}, a ^ {*} b ^ {*}} \tag {50} +$$ + +$$ +B = \left(H _ {f} + 0. 5\right) | _ {a ^ {*} b ^ {+}, b ^ {*}} \tag {51} +$$ + +$$ +C = (- 0. 5) | _ {u, v}. \tag {52} +$$ + +Intuitively, these may be seen as a "split" of $H_{f}$ into sections as in Figure 7, such that $A$ and $B$ together cover the sections of $H_{f}$ on which $u\cdot v$ does not contain the substring $ba$ (and are equal on them to $H_{f} + 0.5$ ), and $C$ is simply the constant matrix $-0.5$ . Immediately, $H_{f} = A + B + C$ , and $\mathrm{rank}(C) = 1$ . + +We now consider $A$ . Denote $P_A = a^*, S_A = a^* b^*$ . $A$ is non-zero only on indices $u \in P_A, v \in S_A$ , and for these, $u \cdot v \in a^* b^*$ and $A_{u,v} = 0.5 + f(u \cdot v) = 1 - 2(a(u) + a(v) - b(v))^2$ . 
This gives that for every $u \in P_A, v \in S_A$ , + +$$ +A _ {u, v} = c _ {| a (u) - (b (v) - a (v)) |} = c _ {b (v) - a (v)} ^ {(a (u))}. \tag {53} +$$ + +![](images/dae78b826910c09842aff970d9bd1fcb861f571c20918b6d98db49656dfbc56f.jpg) +Figure 7: Intuition of the supports of $A, B$ and $C$ . + +For each $\tau \in \{r,s,t\}$ , define $\tilde{\tau} \in \mathbb{Q}^{\{a,b\}^*}$ as + +$$ +\tilde {\tau} _ {v} = \mathbb {1} _ {v \in a ^ {*} b ^ {*}} \cdot \tau_ {b (v) - a (v)}. \tag {54} +$$ + +We get from Corollary 1 that for every $u \in a^*$ , the $u$ th row of $A$ is a linear combination of $\tilde{r}, \tilde{s}$ , and $\tilde{t}$ . The remaining rows of $A$ are all $\mathbf{0}$ and so also a linear combination of these, and so $\mathrm{rank}(A) \leq 3$ . + +Similarly, we find that the nonzero entries of $B$ satisfy + +$$ +B _ {u, v} = c _ {| b (v) - (a (u) - b (u)) |} = c _ {a (u) - b (u)} ^ {(b (v))} \tag {55} +$$ + +and so, for $\tau \in \{r,s,t\}$ , the columns of $B$ are linear combinations of the columns $\tau' \in \mathbb{Q}^{\{a,b\}^*}$ defined + +$$ +\tau_ {u} ^ {\prime} = \mathbb {1} _ {u \in a ^ {*} b ^ {+}} \cdot \tau_ {a (u) - b (u)}. \tag {56} +$$ + +Thus we conclude $\mathrm{rank}(B) \leq 3$ . + +Finally, $H_{f} = A + B + C$ , and so by the subadditivity of rank in matrices, + +$$ +\operatorname {r a n k} \left(H _ {f}\right) \leq \sum_ {M = A, B, C} \operatorname {r a n k} (M) = 7. \tag {57} +$$ + +![](images/78d9e901e055c19a3e58c5d058ff9502190ad5404eaf7645f9ee1df416319b26.jpg) + +In addition, the rank of $\tilde{H}_f\in \mathbb{Q}^{\{a,b\} ^{\leq 2},\{a,b\}^{\leq 2}}$ defined $[\tilde{H}_f]_{u,v} = [H_f]_{u,v}$ is 7, and so we can conclude that the bound in the proof is tight, i.e., $\mathrm{rank}(H_f) = 7$ . From here $\tilde{H}_f$ is a complete subblock of $H_{f}$ and can be used to explicitly construct a WFA for $f$ , using the spectral method described by Balle et al. (2014). + +# C s-QRNNs + +Theorem 9. 
No $s$ -QRNN with a linear threshold decoder can recognize $a^n b^n = \{a^n b^n \mid n \in \mathbb{N}\}$ , i.e., $a^n b^n \notin D_1(s$ -QRNN). + +Proof. An ifo s-QRNN can be expressed as a $\Sigma^k$ -restricted CM with the additional update operations $\{:= -1, := 1\}$ , where $k$ is the window size of the QRNN. So it is sufficient to show that such a machine, when coupled with the decoder $D_1$ (linear translation followed by thresholding), cannot recognize $a^n b^n$ . + +Let $\mathcal{A}$ be some such CM, with window size $k$ and $h$ counters. Take $n = k + 10$ and for every $m\in \mathbb{N}$ denote $w_{m} = a^{n}b^{m}$ and the counter values of $\mathcal{A}$ after $w_{m}$ as $c^m\in \mathbb{Q}^h$ . Denote by $u_{t}$ the vector of counter update operations made by this machine on input sequence $w_{m}$ at time $t\leq n + m$ . As $\mathcal{A}$ is dependent only on the last $k$ counters, necessarily all $u_{k + i}$ are identical for every $i\geq 1$ . + +It follows that for all counters in the machine that go through an assignment (i.e., :=) operation in $u_{k+1}$ , their values in $c^{k+i}$ are identical for every $i \geq 1$ , and for every other counter $j$ , $c_j^{k+i} - c_j^k = i \cdot \delta$ for some $\delta \in \mathbb{Z}$ . Formally: for every $i \geq 1$ there are two sets $I, J = [h] \setminus I$ and constant vectors $\mathbf{u} \in \mathbb{N}^I, \mathbf{v} \in \mathbb{N}^J$ s.t. $c^{k+i}|_I = \mathbf{u}$ and $|c^{k+i} - c^k|_J = \mathbf{i} \cdot v$ . + +We now consider the linear thresholder, defined by weights and bias $\mathbf{w}, b$ . In order to recognise $a^n b^n$ , the thresholder must satisfy: + +$$ +\mathbf {w} \cdot c ^ {k + 9} + b < 0 \tag {58} +$$ + +$$ +\mathbf {w} \cdot c ^ {k + 1 0} + b \quad > 0 \tag {59} +$$ + +$$ +\mathbf {w} \cdot c ^ {k + 1 1} + b \quad < 0 \tag {60} +$$ + +Opening these equations gives: + +$$ +\left. \mathbf {w} \right| _ {J} \left(\cdot c ^ {k} \mid_ {J} + 9 \mathbf {v} \mid_ {J}\right) \quad + \left. 
\mathbf {w} \right| _ {I} \cdot \mathbf {u} \quad < 0 \tag {61} +$$ + +$$ +\left. \mathbf {w} \right| _ {J} \left(\cdot c ^ {k} \mid_ {J} + 1 0 \mathbf {v} \mid_ {J}\right) + \left. \mathbf {w} \right| _ {I} \cdot \mathbf {u} > 0 \tag {62} +$$ + +$$ +\left. \mathbf {w} \right| _ {J} \left(\cdot c ^ {k} \mid_ {J} + 1 1 \mathbf {v} \mid_ {J}\right) + \left. \mathbf {w} \right| _ {I} \cdot \mathbf {u} < 0 \tag {63} +$$ + +but this gives $9w|_{J} \cdot \mathbf{v}|_{J} < 10w|_{J} \cdot \mathbf{v}|_{J} > 11w|_{J} \cdot \mathbf{v}|_{J}$ , which is impossible. + +![](images/cbb5a09304aeb39036cb791624a5229e001c5744c8803eb484ecbad682d6faa4.jpg) + +However, this does not mean that the s-QRNN is entirely incapable of recognising $a^n b^n$ . Increasing the decoder power allows it to recognise $a^n b^n$ quite simply: + +Theorem 10. For the two-layer decoder $D_{2}$ , $a^{n}b^{n}\in D_{2}(\mathrm{s - QRNN})$ . + +Proof. Let $\#_{ba}(x)$ denote the number of $ba$ 2-grams in $x$ . We use s-QRNN with window size + +2 to maintain two counters: + +$$ +[ \mathbf {c} _ {t} ] _ {1} = \# _ {a - b} (x) \tag {64} +$$ + +$$ +\left[ \mathbf {c} _ {t} \right] _ {2} = \# _ {b a} (x). \tag {65} +$$ + +$[\mathbf{c}_t]_2$ can be computed provided the QRNN window size is $\geq 2$ . A two-layer decoder can then check + +$$ +0 \leq [ \mathbf {c} _ {t} ] _ {1} \leq 0 \wedge [ \mathbf {c} _ {t} ] _ {2} \leq 0. \tag {66} +$$ + +![](images/bcf061fcdf37440e6b770af04d610be2209862a0d5b19a43102a35637c853777.jpg) + +Theorem 11 (Suffix attack). No $s$ -QRNN and decoder can recognize the language $a^n b^n \Sigma^* = a^n b^n (a|b)^*$ , $n > 0$ , i.e., $a^n b^n \Sigma^* \notin L(s$ -QRNN) for any decoder $L$ . + +The proof will rely on the s-QRNN's inability to "freeze" a computed value, protecting it from manipulation by future input. + +Proof. 
As in the proof for Theorem 9, it is sufficient to show that no $\Sigma^k$ -restricted CM with the additional operations $\{:= -1, := 1\}$ can recognize $a^n b^n \Sigma^*$ for any decoder $L$ . + +Let $\mathcal{A}$ be some such CM, with window size $k$ and $h$ counters. For every $w\in \Sigma^n$ denote by $c(w)\in \mathbb{Q}^h$ the counter values of $\mathcal{A}$ after processing $w$ . Denote by $u_{t}$ the vector of counter update operations made by this machine on an input sequence $w$ at time $t\leq |w|$ . Recall that $\mathcal{A}$ is $\Sigma^k$ restricted, meaning that $u_{i}$ depends exactly on the window of the last $k$ tokens for every $i$ . + +We now denote $j = k + 10$ and consider the sequences $w_{1} = a^{j}b^{j}a^{j}b^{j}a^{j}b^{j}$ , $w_{2} = a^{j}b^{j - 1}a^{j}b^{j + 1}a^{j}b^{j}$ . $w_{2}$ is obtained from $w_{1}$ by removing the $2j$ -th token of $w_{1}$ and reinserting it at position $4j$ . + +As all of $w_{1}$ is composed of blocks of $\geq k$ identical tokens, the windows preceding all of the other tokens in $w_{1}$ are unaffected by the removal of the $2j$ -th token. Similarly, being added onto the end of a substring $b^{k}$ , its insertion does not affect the windows of the tokens after it, nor is its own window different from before. This means that overall, the set of all operations $u_{i}$ performed on the counters is identical in $w_{1}$ and in $w_{2}$ . The only difference is in their ordering. + +$w_{1}$ and $w_{2}$ begin with a shared prefix $a^{k}$ , and so necessarily the counters are identical after processing it. We now consider the updates to the counters after these first $k$ tokens, these are determined by the windows of $k$ tokens preceding each update. + +First, consider all the counters that undergo some assignment $(\coloneqq)$ operation during these sequences, and denote by $\{w\}$ the multiset of windows in $w \in \Sigma^k$ for which they are reset. 
$w_1$ and $w_2$ only contain $k$ -windows of types $a^x b^{k - x}$ or $b^{x}a^{k - x}$ , and so these must all re-appear in the shared suffix $b^{j}a^{j}b^{j}$ of $w_{1}$ and $w_{2}$ , at which point they will be synchronised. It follows that these counters all finish with identical value in $c(w_1)$ and $c(w_2)$ . + +All the other counters are only updated using addition of $-1, 1$ and $0$ , and so the order of the updates is inconsequential. It follows that they too are identical in $c(w_1)$ and $c(w_2)$ , and therefore necessarily that $c(w_1) = c(w_2)$ . + +From this we have $w_{1}, w_{2}$ satisfying $w_{1} \in a^{n}b^{n}\Sigma^{*}, w_{2} \notin a^{n}b^{n}\Sigma^{*}$ but also $c(w_{1}) = c(w_{2})$ . Therefore, it is not possible to distinguish between $w_{1}$ and $w_{2}$ with the help of any decoder, despite the fact that $w_{1} \in a^{n}b^{n}\Sigma^{*}$ and $w_{2} \notin a^{n}b^{n}\Sigma^{*}$ . It follows that the CM and s-QRNN cannot recognize $a^{n}b^{n}\Sigma^{*}$ with any decoder. + +For the opposite extension $\Sigma^{*}a^{n}b^{n}$ , in which the language is augmented by a prefix, we cannot use such a "suffix attack". In fact, $\Sigma^{*}a^{n}b^{n}$ can be recognized by an s-QRNN with window length $w\geq 2$ and a linear threshold decoder as follows: a counter counts $\#_{a - b}(x)$ and is reset to 1 on appearances of $ba$ , and the decoder compares it to 0. + +Note that we define decoders as functions from the final state to the output. Thus, adding an additional QRNN layer does not count as a "decoder" (as it reads multiple states). In fact, we show that having two QRNN layers allows recognizing $a^n b^n \Sigma^*$ . + +Theorem 12. Let $\epsilon$ be the empty string. Then, + +$$ +a ^ {n} b ^ {n} \Sigma^ {*} \cup \{\epsilon \} \in D _ {1} (\mathrm {s - Q R N N} \circ \mathrm {s - Q R N N}). +$$ + +Proof. We construct a two-layer s-QRNN from which $a^n b^n \Sigma^*$ can be recognized. Let $\$$ denote the left edge of the string. 
The first layer computes two quantities $d_t$ and $e_t$ as follows: + +$$ +d _ {t} = \# _ {b a} (x) \tag {67} +$$ + +$$ +e _ {t} = \# _ {\mathbb {S} ^ {b}} (x). \tag {68} +$$ + +Note that $e_t$ can be interpreted as a binary value checking whether the first token was $b$ . The second layer computes $c_t$ as a function of $d_t, e_t$ , and $x_t$ (which can be passed through the first layer). We will demonstrate a construction for $c_t$ by creating + +linearly separable functions for the gate terms $f_{t}$ and $z_{t}$ that update $c_{t}$ . + +$$ +f _ {t} = \left\{ \begin{array}{l l} 1 & \text {i f} d _ {t} \leq 0 \\ 0 & \text {o t h e r w i s e} \end{array} \right. \tag {69} +$$ + +$$ +z _ {t} = \left\{ \begin{array}{l l} 1 & \text {i f} x _ {t} = a \vee e _ {t} \\ - 1 & \text {o t h e r w i s e .} \end{array} \right. \tag {70} +$$ + +Now, the update function $u_{t}$ to $c_{t}$ can be expressed + +$$ +u _ {t} = f _ {t} z _ {t} = \left\{ \begin{array}{l l} + 0 & \text {i f} 0 < d _ {t} \\ + 1 & \text {i f} d _ {t} \leq 0 \wedge \left(x _ {t} = a \vee e _ {t}\right) \\ - 1 & \text {o t h e r w i s e .} \end{array} \right. \tag {71} +$$ + +Finally, the decoder accepts iff $c_{t} \leq 0$ . To justify this, we consider two cases: either $x$ starts with $b$ or $a$ . If $x$ starts with $b$ , then $e_{t} = 0$ , so we increment $c_{t}$ by 1 and never decrement it. Since $0 < c_{t}$ for any $t$ , we will reject $x$ . If $x$ starts with $a$ , then we accept iff there exists a sequence of $bs$ following the prefix of $as$ such that both sequences have the same length. + +# D s-LSTMs + +In contrast to the s-QRNN, we show that the sLSTM paired with a simple linear and thresholding decoder can recognize both $a^n b^n$ and $a^n b^n \Sigma^*$ . + +# Theorem 13. + +$$ +a ^ {n} b ^ {n} \in D _ {1} (\mathrm {s - L S T M}). +$$ + +Proof. 
Assuming a string $a^i b^i$ , we set two units of the LSTM state to compute the following functions using the CM in Figure 3: + +$$ +\left[ \mathbf {c} _ {t} \right] _ {1} = \operatorname {R e L U} (i - j) \tag {72} +$$ + +$$ +[ \mathbf {c} _ {t} ] _ {2} = \operatorname {R e L U} (j - i). \tag {73} +$$ + +We also add a third unit $[\mathbf{c}_t]_3$ that tracks whether the 2-gram $ba$ has been encountered, which is equivalent to verifying that the string has the form $a^i b^i$ . Allowing $\mathbf{h}_t = \tanh (\mathbf{c}_t)$ , we set the linear threshold layer to check + +$$ +\left[ \mathbf {h} _ {t} \right] _ {1} + \left[ \mathbf {h} _ {t} \right] _ {2} + \left[ \mathbf {h} _ {t} \right] _ {3} \leq 0. \tag {74} +$$ + +![](images/077b078f8149ae7721ff5b4ace9b721a4f3b0917150559557732a9e31df32cac.jpg) + +# Theorem 14. + +$$ +a ^ {n} b ^ {n} \Sigma^ {*} \in D _ {1} (\mathrm {s - L S T M}). +$$ + +Proof. We use the same construction as Theorem 13, augmenting it with + +$$ +[ \mathbf {c} _ {t} ] _ {4} \triangleq [ \mathbf {h} _ {t - 1} ] _ {1} + [ \mathbf {h} _ {t - 1} ] _ {2} + [ \mathbf {h} _ {t - 1} ] _ {3} \leq 0. \tag {75} +$$ + +We decide $x$ according to the (still linearly separable) equation + +$$ +\left(0 < \left[ \mathbf {h} _ {t} \right] _ {4}\right) \vee \left(\left[ \mathbf {h} _ {t} \right] _ {1} + \left[ \mathbf {h} _ {t} \right] _ {2} + \left[ \mathbf {h} _ {t} \right] _ {3} \leq 0\right). \tag {76} +$$ + +![](images/8e57815c7f91ceb4bdc811f77a4aa20885715b4aacce4938ac5e027d4d5960d1.jpg) + +# E Experimental Details + +Models were trained on strings up to length 64, and, at each index $t$ , were asked to classify whether or not the prefix up to $t$ was a valid string in the language. Models were then tested on independent datasets of lengths 64, 128, 256, 512, 1024, and 2048. The training dataset contained 100000 strings, and the validation and test datasets contained 10000. We discuss task-specific schemes for sampling strings in the next paragraph. 
All models were trained for a maximum of 100 epochs, with early stopping after 10 epochs based on the validation cross entropy loss. We used default hyperparameters provided by the open-source AllenNLP framework (Gardner et al., 2018). The code is available at https://github.com/viking-sudo-rm/rr-experiments. + +Sampling strings For the language $L_{5}$ , each token was sampled uniformly at random from $\Sigma = \{a, b\}$ . For $a^{n}b^{n}\Sigma^{*}$ , half the strings were sampled in this way, and for the other half, we sampled $n$ uniformly between 0 and 32, fixing the first $2n$ characters of the string to $a^{n}b^{n}$ and sampling the suffix uniformly at random. + +Experimental cost Experiments were run for 20 GPU hours on Quadro RTX 8000. + +# F Self Attention + +Architecture We place saturated self attention (Vaswani et al., 2017) into the state expressiveness hierarchy. We consider a single-head self attention encoder that is computed as follows: + +1. At time $t$ , compute queries $\mathbf{q}_t$ , keys $\mathbf{k}_t$ , and values $\mathbf{v}_t$ from the input embedding $\mathbf{x}_t$ using a linear transformation. +2. Compute attention head $\mathbf{h}_t$ by attending over the keys and values up to time $t$ ( $\mathbf{K}_{:t}$ and $\mathbf{V}_{:t}$ ) with query $\mathbf{q}_t$ . + +3. Let $\| \cdot \| _L$ denote a layer normalization operation (Ba et al., 2016). + +$$ +\mathbf {h} _ {t} ^ {\prime} = \operatorname {R e L U} \left(\mathbf {W} ^ {h} \cdot \| \mathbf {h} _ {t} \| _ {L}\right) \tag {77} +$$ + +$$ +\mathbf {c} _ {t} = \left\| \mathbf {W} ^ {c} \mathbf {h} _ {t} ^ {\prime} \right\| _ {L}. \tag {78} +$$ + +This simplified architecture has only one attention head, and does not incorporate residual connections. It is also masked (i.e., at time $t$ , can only see the prefix $\mathbf{X}_{:t}$ ), which enables direct comparison with unidirectional RNNs. For simplicity, we do not add positional information to the input embeddings. + +Theorem 15. 
Saturated masked self attention is not $RR$ . + +Proof. Let $\#_{\sigma}(x)$ denote the number of occurrences of $\sigma \in \Sigma$ in string $x$ . We construct a self attention layer to compute the following function over $\{a, b\}^*$ : + +$$ +f (x) = \left\{ \begin{array}{l l} 0 & \text {i f} \# _ {a} (x) = \# _ {b} (x) \\ 1 & \text {o t h e r w i s e .} \end{array} \right. \tag {79} +$$ + +Since the Hankel sub-block over $P = a^{*},S = b^{*}$ has infinite rank, $f\notin \mathcal{R}$ + +Fix $\mathbf{v}_t = \mathbf{x}_t$ . As shown by Merrill (2019), saturated attention over a prefix of input vectors $\mathbf{X}_{:t}$ reduces to sum of the subsequence for which key-query similarity is maximized, i.e., denoting $I = \{i \in [t] \mid \mathbf{k}_i \cdot \mathbf{q}_t = m\}$ where $m = \max \{\mathbf{k}_i \cdot \mathbf{q}_t | i \in [t]\}$ : + +$$ +\mathbf {h} _ {t} = \frac {1}{| I |} \sum_ {i \in I} \mathbf {x} _ {t _ {i}}. \tag {80} +$$ + +For all $t$ , set the key and query $k_{t}, q_{t} = 1$ . Thus, all the key-query similarities are 1, and we obtain: + +$$ +\begin{array}{l} \mathbf {h} _ {t} = \frac {1}{t} \sum_ {t ^ {\prime} = 1} ^ {t} \mathbf {x} _ {t ^ {\prime}} (81) \\ = \frac {1}{t} (\# _ {a} (x), \# _ {b} (x)) ^ {\top}. (82) \\ \end{array} +$$ + +Applying layer norm to this quantity preserves equality of the first and second elements. Thus, we set the layer in (77) to independently check $0 < [\mathbf{h}_t^0 ]_1 - [\mathbf{h}_t^0 ]_2$ and $[\mathbf{h}_t^0 ]_1 - [\mathbf{h}_t^0 ]_2 < 0$ using ReLU. The final layer $c_{t}$ sums these two quantities, returning 0 if neither condition is met, and 1 otherwise. + +Since saturated self attention can represent $f\notin \mathcal{R}$ , it is not RR. + +Space Complexity We show that self attention falls into the same space complexity class as the LSTM and QRNN. Our method here extends Merrill (2019)'s analysis of attention. + +# Theorem 16. Saturated single-layer self attention has $\Theta (\log n)$ space. + +Proof. 
The construction from Theorem 15 can reach a linear (in sequence length) number of different outputs, implying a linear number of different configurations, and so that the space complexity of saturated self attention is $\Omega (\log n)$ . We now show the upper bound $O(\log n)$ . + +A sufficient representation for the internal state (configuration) of a self-attention layer is the unordered group of key-value pairs over the prefixes of the input sequence. + +Since $f_{k}:x_{t}\mapsto \mathbf{k}_{t}$ and $f_{v}:x_{t}\mapsto \mathbf{v}_{t}$ have finite domain $(\Sigma)$ , their images $K = \mathrm{image}(f_k),V =$ image $(f_v)$ are finite.14 Thus, there is also a finite number of possible key-value pairs $\langle \mathbf{k}_t,\mathbf{v}_t\rangle \in$ $K\times V$ . Recall that the internal configuration can be specified by the number of occurrences of each possible key-value pair. Taking $n$ as an upper bound for each of these counts, we bound the number of configurations of the layer as $n^{|K\times V|}$ . Therefore the bit complexity is + +$$ +\log_ {2} \left(n ^ {| K \times V |}\right) = O (\log n). \tag {83} +$$ + +![](images/0ebdad466bcd60f2befb8f96f07db2911dbf22412f145516da9283fd309f573e.jpg) + +Note that this construction does not apply if the "vocabulary" we are attending over is not finite. Thus, using unbounded positional embeddings, stacking multiple self attention layers, or applying attention over other encodings with unbounded state might reach $\Theta (n)$ . + +While it eludes our current focus, we hope future work will extend the saturated analysis to self attention more completely. We direct the reader to Hahn (2020) for some additional related work. + +# G Memory Networks + +All of the standard RNN architectures considered in Section 3 have $O(\log n)$ space in their saturated form. In this section, we consider a stack RNN encoder similar to the one proposed by Suzgun et al. 
(2019b) and show how it, like a WFA, can encode binary representations from strings. Thus, + +the stack RNN has $\Theta (n)$ space. Additionally, we find that it is not RR. This places it in the upperright box of Figure 1. + +Classically, a stack is a dynamic list of objects to which elements $v \in V$ can be added and removed in a LIFO manner (using push and pop operations). The stack RNN proposed in Suzgun et al. (2019b) maintains a differentiable variant of such a stack, as follows: + +Differentiable Stack In a differentiable stack, the update operation takes an element $s_t$ to push and a distribution $\pi_t$ over the update operations push, pop, and no-op, and returns the weighted average of the result of applying each to the current stack. The averaging is done elementwise along the stacks, beginning from the top entry. To facilitate this, differentiable stacks are padded with infinite 'null entries'. Their elements must also have a weighted average operation defined. + +Definition 6 (Geometric $k$ -stack RNN encoder). Initialize the stack $\mathbf{S}$ to an infinite list of null entries, and denote by $S_{t}$ the stack value at time $t$ . Using 1-indexing for the stack and denoting $[S_{t - 1}]_0 \triangleq \mathbf{s}_t$ , the geometric $k$ -stack RNN recurrent update is:15 + +$$ +\mathbf {s} _ {t} = \mathbf {f} _ {s} (x _ {t}, \mathbf {c} _ {t - 1}) +$$ + +$$ +\pi_ {t} = \mathbf {f} _ {\pi} \left(x _ {t}, \mathbf {c} _ {t - 1}\right) +$$ + +$$ +\forall i \geq 1 [ \mathbf {S} _ {t} ] _ {i} = \sum_ {a = 1} ^ {3} [ \pi_ {t} ] _ {a} [ \mathbf {S} _ {t - 1} ] _ {i + a - 2}. +$$ + +In this work we will consider the case where the null entries are $\mathbf{0}$ and the encoding $\mathbf{c}_t$ is produced as a geometric-weighted sum of the stack contents, + +$$ +\mathbf {c} _ {t} = \sum_ {i = 1} ^ {\infty} \left(\frac {1}{2}\right) ^ {i - 1} [ \mathbf {S} _ {t} ] _ {i}. 
+$$ + +This encoding gives preference to the latest values in the stack, giving initial stack encoding $\mathbf{c}_0 = \mathbf{0}$ . + +Space Complexity The memory introduced by the stack data structure pushes the encoder into $\Theta (n)$ space. We formalize this by showing that, like a WFA, the stack RNN can encode binary strings to their value. + +Lemma 5. The saturated stack RNN can compute the converging binary encoding function, i.e., $101 \mapsto 1 \cdot 1 + 0.5 \cdot 0 + 0.25 \cdot 1 = 1.25$ . + +Proof. Choose $k = 1$ . Fix the controller to always push $x_{t}$ . Then, the encoding at time $t$ will be + +$$ +\mathbf {c} _ {t} = \sum_ {i = 1} ^ {t} \left(\frac {1}{2}\right) ^ {i - 1} x _ {i}. \tag {84} +$$ + +This is the value of the prefix $x_{:t}$ in binary. + +![](images/1f011a983e10711b99ce779d2f50f166c94e4b051ec3b70876581aa7dffcc508.jpg) + +Rational Recurrence We provide another construction to show that the stack RNN can compute non-rational series. Thus, it is not RR. + +Definition 7 (Geometric counting). Define $f_2: \{a, b\}^* \to \mathbb{N}$ such that + +$$ +f _ {2} (x) = \exp_ {\frac {1}{2}} (\# _ {a - b} (x)) - 1. +$$ + +Like similar functions we analyzed in Section 3, the Hankel matrix $H_{f_2}$ has infinite rank over the sub-block $a^i b^j$ . + +Lemma 6. The saturated stack RNN can compute $f_{2}$ . + +Proof. Choose $k = 1$ . Fix the controller to push 1 for $x_{t} = a$ , and pop otherwise. 
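Constructions like Lemmas 5 and 6 are easy to simulate with hard (saturated) stack decisions. The sketch below checks Lemma 5's converging binary encoding; it is illustrative only, and the function name is ours, not the paper's.

```python
# Saturated geometric 1-stack RNN of Definition 6 with a controller
# that always pushes x_t (Lemma 5). The stack encoding weights the
# top (most recent) entry by 1, the next by 1/2, and so on.

def geometric_encode(bits):
    stack = []  # top of the stack is the last element
    for b in bits:
        stack.append(int(b))  # saturated pi_t puts probability 1 on "push"
    return sum(0.5 ** i * v for i, v in enumerate(reversed(stack)))

print(geometric_encode("101"))  # 1*1 + 0.5*0 + 0.25*1 = 1.25
```

Replacing the always-push controller with input-dependent push/pop decisions gives hard versions of constructions like the one in the proof of Lemma 6.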
\ No newline at end of file diff --git a/aformalhierarchyofrnnarchitectures/images.zip b/aformalhierarchyofrnnarchitectures/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c7150a8c2fc0f9a2c4fef4c5250a265e1314c4dd --- /dev/null +++ b/aformalhierarchyofrnnarchitectures/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9dca494aee0de5e34d9273139bb85a8321d346c2a55b7cfaa2f94dea62546f6a +size 623768 diff --git a/aformalhierarchyofrnnarchitectures/layout.json b/aformalhierarchyofrnnarchitectures/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b937ce8fe4f6bf363056ac7c203909b43f381905 --- /dev/null +++ b/aformalhierarchyofrnnarchitectures/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:974de3a78a25d630c12f9a427f4598a7b33b5f0e7f13acdabc8ce0be90660597 +size 1248858 diff --git a/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_content_list.json b/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..673b6de45477cc9f60e93f04d30fccecdfa3c7b2 --- /dev/null +++ b/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ffab72768ccf39889cf8b0dab77dc5c1f8cae16202393a38bf1988fb7ee5584f +size 38772 diff --git a/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_model.json b/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3f55883db32863b7bfca8ef33e31e4d12e3ba32a --- /dev/null +++ 
b/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0a7245b9be89292a103dcee05a522518a65308e71287ac9d48621b793a9af60 +size 50189 diff --git a/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_origin.pdf b/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f8275a11c0055b94941e925bb84b53f3876b47b1 --- /dev/null +++ b/aframebasedsentencerepresentationformachinereadingcomprehension/51716723-233f-416d-8ecc-ce4c521298af_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:739715a0af59c2de00a535d46d6281d31b8a1aef7605544324ac207e706200c7 +size 628256 diff --git a/aframebasedsentencerepresentationformachinereadingcomprehension/full.md b/aframebasedsentencerepresentationformachinereadingcomprehension/full.md new file mode 100644 index 0000000000000000000000000000000000000000..b79cb0650f35cd898537d08d7e8012f68f4db94e --- /dev/null +++ b/aframebasedsentencerepresentationformachinereadingcomprehension/full.md @@ -0,0 +1,209 @@ +# A Frame-based Sentence Representation for Machine Reading Comprehension + +Shaoru Guo $^{1}$ , Ru Li $^{1,2*}$ , Hongye Tan $^{1,2}$ , Xiaoli Li $^{3}$ , Yong Guan $^{1}$ , Hongyan Zhao $^{1}$ and Yueping Zhang $^{1}$ + +1. School of Computer and Information Technology, Shanxi University, Taiyuan, China +2. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, China + +3. 
Institute for Infocomm Research, A*Star, Singapore

{guoshaoru0928,guanyong0130,13233587117}@163.com

{liru,tanhongye}@sxu.edu.cn,xlli@ntu.edu.sg

hongyanzhao@tyust.edu.cn

# Abstract

Sentence representation (SR) is the most crucial and challenging task in Machine Reading Comprehension (MRC). MRC systems typically only utilize the information contained in the sentence itself, while human beings can leverage their semantic knowledge. To bridge this gap, we propose a novel Frame-based Sentence Representation (FSR) method, which employs frame semantic knowledge to facilitate sentence modelling. Specifically, different from existing methods that only model lexical units (LUs), our Frame Representation Models, which utilize both the LUs in a frame and Frame-to-Frame (F-to-F) relations, are designed to model frames and sentences with an attention schema. The proposed FSR method is able to integrate multi-frame semantic information to obtain much better sentence representations. Our extensive experimental results show that it performs better than state-of-the-art technologies on the machine reading comprehension task.

# 1 Introduction

Machine Reading Comprehension (MRC) requires machines to read and understand a text passage, and answer relevant questions about it. Human beings can easily understand the meaning of a sentence based on their semantic knowledge. For instance, given the sentence Katie bought some chocolate cookies, people know that Katie is a buyer, and that chocolate cookies are goods and belong to the Food class. Existing machine learning approaches, however, face great challenges in addressing complicated MRC questions, as they lack such semantic knowledge.

Nevertheless, FrameNet (Fillmore, 1976; Baker et al., 1998), as a knowledge base, provides schematic scenario representations that could potentially be leveraged to better understand sentences.

| F | Commerce_buy |
| --- | --- |
| FEs | Buyer, Goods, ... |
| LUs | buy.v, buy.n, buyer.n, purchase.n, ... |
| T | [Katie]_Buyer bought_{Commerce_buy} [some chocolate cookies]_{Goods} |
| F-to-F | Commerce_buy—Shopping—Seeking—Locating |

Table 1: Example of F, FEs, LUs, T and F-to-F.

It enables the development of wide-coverage frame parsers (Gildea and Jurafsky, 2002; Das et al., 2014), as well as various real-world applications, ranging from event recognition (Liu et al., 2016), textual entailment (Burchardt et al., 2009), question answering (Ofoghi et al., 2009), narrative schemas (Chambers and Jurafsky, 2010) and paraphrase identification (Zhang et al., 2018), etc. In particular, a Frame (F) is defined as a composition of Lexical Units (LUs) and a set of Frame Elements (FEs). Given a sentence, if a certain word of it evokes a frame by matching a LU, then that word is called a Target (T). It is worth mentioning that FrameNet arranges different relevant frames into a network by defining Frame-to-Frame (F-to-F) relations. Table 1 provides an example of F, FEs, LUs, T and F-to-F, where the target word bought in the sentence Katie bought some chocolate cookies evokes the frame Commerce_buy as it matches the LU buy. Note that the target word chocolate cookies evokes a different frame, Food.

How can semantic knowledge from FrameNet be utilized? We observe that existing works mainly focus on LU vector embeddings within a frame (Hermann and Blunsom, 2014; Bojanowski et al., 2017; Glavas et al., 2019), without modeling a frame as a whole. In addition, many sentences have more than one target word and thus evoke multiple frames, but few existing methods integrate the rich multi-frame relations from FrameNet.

![](images/ec353c1f82344cc7a439b02b24f63aedc9f234a2235c2b2bf86dcc605b3b4b5f.jpg)
Figure 1: Lexical Units Attention Model.

To address the above problems, in this paper, we propose a novel Frame-based Sentence Representation (FSR) method, which leverages rich frame semantic knowledge, including both generalizations of LUs and F-to-F relations, to better model sentences. The key contributions of this work are summarized as follows:

1. 
We propose novel attention-based frame representation models, which take full advantage of LUs and F-to-F relations to model frames with an attention schema.
2. We propose a new Frame-based Sentence Representation (FSR) method that integrates multi-frame semantic information to obtain a richer semantic aggregation for better sentence representation.
3. Our experimental results demonstrate that our proposed Frame-based Sentence Representation (FSR) method is very effective on the Machine Reading Comprehension (MRC) task.

# 2 Frame Representation Model

In this section, we present our Frame Representation Model, considering both LUs and F-to-F relations.

Let $F = \{F_{1}, F_{2}, \ldots, F_{m}, \ldots\}$ represent the set of all frames in FrameNet, where $F_{m} \in \mathcal{R}^{H}$ is the representation of the $m$-th frame of $F$. Let $U^{F_{m}} = \{u_{1}^{F_{m}}, u_{2}^{F_{m}}, \ldots, u_{n}^{F_{m}}, \ldots\}$ be the LU set of $F_{m}$, where $U^{F_{m}} \in \mathcal{R}^{(H \cdot N)}$, $N$ stands for the total number of LUs in $F_{m}$, and $u_{n}^{F_{m}}$ is the $n$-th LU of $F_{m}$. $t^{F_{m}}$ is a target word, matching a LU in $F_{m}$. We propose three different frame representation models.

![](images/8e344c90c842102f51249856df400c53b32da3328fefb4d5966237eb441d0606.jpg)
Figure 2: Frame Relation Attention Model.

# 2.1 Lexical Units Aggregation Model (LUA)

The Lexical Units Aggregation Model (LUA) is a straightforward idea. Given a frame $F_{m}$, it averages all its underlying LU representations $u_{n}^{F_{m}}$ ($u_{n}^{F_{m}} \in U^{F_{m}}$) to represent the frame entirely:

$$
F_{m} = \frac{1}{N} \sum_{u_{n}^{F_{m}} \in U^{F_{m}}} u_{n}^{F_{m}} \tag{1}
$$

# 2.2 Lexical Units Attention Model (TLUA)

Each frame in the above LUA model has the same representation for all sentences, as LUA does not distinguish the importance of each LU in the frame.
To address this issue, we propose the TLUA model, which utilizes an attention scheme to automatically weight the different LUs of the frame according to the target word T in the given sentence, as shown in Figure 1.

More specifically, we compute the weighted sum of the target word T's representation and the other LUs' representations based on their importance with respect to T. In other words, we emphasize T as it occurs in the given sentence, which can reduce the potential noise introduced by irrelevant LUs in the same frame. It should be noted that we encode a multi-word target by averaging the representations of all words in it.

$$
F_{m} = t^{F_{m}} + \sum_{u_{n}^{F_{m}} \in \widetilde{U}^{F_{m}}} att\left(u_{n}^{F_{m}}\right) \cdot u_{n}^{F_{m}} \tag{2}
$$

$$
att\left(u_{n}^{F_{m}}\right) = \frac{\exp\left(t^{F_{m}} \cdot u_{n}^{F_{m}}\right)}{\sum_{u_{k}^{F_{m}} \in \widetilde{U}^{F_{m}}} \exp\left(t^{F_{m}} \cdot u_{k}^{F_{m}}\right)} \tag{3}
$$

Here, $\widetilde{U}^{F_m}$ denotes the LU set of $F_{m}$ excluding $t^{F_m}$, with $\widetilde{U}^{F_m} \in \mathcal{R}^{H \cdot (N - 1)}$.

![](images/cee4a9b00b5cd6ff2293a9fe86b9a64bc13903bb52ab5c9b9fd8270ea8cca688.jpg)
Figure 3: A sentence with FrameNet annotations.

# 2.3 Frame Relation Attention Model (FRA)

The key problem in MRC is to analyze semantic relations among multiple sentences. As such, we propose a novel FRA model, which takes advantage of F-to-F relations to obtain much richer semantic information, as shown in Figure 2.

Given a frame $F_{m}$, $F_{m}^{+} = \{F_{m,1},\dots ,F_{m,w},\dots \}$ represents its expanded frames, including all the frames that can be linked to $F_{m}$ through F-to-F relation chains in FrameNet, with no more than 3 hops so as to keep only close relations. Note that attention schemes are designed both intra-frame and inter-frame.
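Both attention schemes follow the same dot-product softmax pattern: intra-frame attention (Eqs. 2-3) over a frame's LUs, and inter-frame attention (Eqs. 4-5) over expanded frames. The sketch below is a minimal stand-in with toy vectors; `attend` and the example embeddings are our own assumptions, not the paper's implementation.

```python
import math

def attend(query, vectors):
    """Dot-product softmax attention shared by TLUA (query = target
    embedding, vectors = other LU embeddings, Eqs. 2-3) and FRA
    (query = F_m, vectors = expanded-frame embeddings, Eqs. 4-5)."""
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in vectors]
    m = max(scores)
    weights = [math.exp(s - m) for s in scores]   # numerically stable softmax
    z = sum(weights)
    weights = [w / z for w in weights]
    # Both models add the attended sum back onto the query vector.
    return [q + sum(w * vec[i] for w, vec in zip(weights, vectors))
            for i, q in enumerate(query)]

# Toy 2-d example: the vector aligned with the query dominates the sum.
F_m = attend([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
print(F_m)
```

LUA (Eq. 1) instead takes a plain average of the LU vectors, with no softmax and no target anchoring.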
In particular, intra-frame attention focuses on relevant LUs, while inter-frame attention emphasizes relevant frames, avoiding the influence of less relevant but linked frames.

$$
F_{m}^{*} = F_{m} + \sum_{w = 1}^{W} att\left(F_{m, w}\right) \cdot F_{m, w} \tag{4}
$$

$$
att\left(F_{m, w}\right) = \frac{\exp\left(F_{m} \cdot F_{m, w}\right)}{\sum_{k = 1}^{W} \exp\left(F_{m} \cdot F_{m, k}\right)} \tag{5}
$$

# 3 Frame-based Sentence Representation

Given a sentence $s = \{x_{1}, x_{2}, \ldots, x_{k}, \ldots\}$ where each $x_{k}$ is a word, let $T_{k}$ be the $k$-th frame-evoking target of $s$, and let $T_{k}$ evoke the frame $F_{k}$. $FE_{ki}$ denotes the $i$-th frame element of $F_{k}$, and $P_{ki}$ denotes the $i$-th span fulfilling $FE_{ki}$. We define a frame semantic quadruple $c_{k} = <T_{k}, F_{k}, FE_{kn}, P_{kn}>$, where $c_{k}$ represents the $k$-th quadruple of $s$.

# 3.1 Sentence Semantic Annotations with Multiple Frames

In this paper, we employ SEMAFOR (Das et al., 2014) to automatically process sentences into multiple semantic annotations (Kshirsagar et al., 2015).

Figure 3 provides an example sentence with three targets, namely bought, some, and chocolate cookies. Each target has its evoked semantic frame right below it. For each frame, its FEs are shown enclosed in the block, where dark grey indicates the corresponding target, and the words fulfilling the FEs are connected to the corresponding text. For example, the target bought evokes the Commerce_buy frame, and has the Buyer and Goods FEs fulfilled by Katie and some chocolate cookies.

![](images/ea3837ebde468775d5747f74b0579f15c12f721a78224a27d21996413c830a55.jpg)
Figure 4: Frame Integration Representation Model.

The sentence $s$ in Figure 3 has three quadruples:
1. $c_{1} = <$ bought, Commerce_buy, [Buyer, Goods], [Katie, chocolate cookies] $>$
2. $c_2 = <$ some, Proportional_quantity, [Denoted_quantity], [some] $>$
3. 
$c_{3} = <$ chocolate cookies, Food, [Food], [chocolate cookies] $>$

# 3.2 Frame Integration Representation

In Figure 4, $c_k$ ($k = 1, 2, 3$) is the input. We first compute its matrix representation $c_k^t$, with columns denoting different kinds of semantic information. Then, we formalize the sentence representation as follows:

$$
c^{s} = \mathcal{N}\left(c^{t}\right) \tag{6}
$$

$$
c^{t} = \phi\left(c_{k}^{t}, P_{k}\right) \quad (k = 1, \dots, K) \tag{7}
$$

where $K$ represents the total number of quadruples in the sentence, and $\phi(c_k^t, P_k)$ is an aggregation operation, used to form the frame-set representation $c^t$ based on the information of $P$ and $T$ in the sequence. Finally, we encode the sentence information with neural network models.

# 4 Experiments

# 4.1 Models for MRC

To better analyze the performance of our proposed method on MRC, we apply both BERT (Devlin et al., 2018) and LSTM (Hochreiter and Schmidhuber, 1997) as our neural models. Also, we construct the input as: the passage as sequence A, and the

| Method | MCTest-160 (%) | MCTest-500 (%) |
| --- | --- | --- |
| Richardson et al. (2013) | 69.16 | 63.33 |
| Wang et al. (2015) | 75.27 | 69.94 |
| Li et al. (2018) | 74.58 | 72.67 |
| Attentive Reader (Hermann et al., 2015) | 46.3 | 41.9 |
| Neural Reasoner (Peng et al., 2015) | 47.6 | 45.6 |
| Parallel-Hierarchical (Trischler et al., 2016) | 74.58 | 71.00 |
| Reading Strategies (Sun et al., 2018) | 81.7 | 82.0 |
| Bert (Zhang et al., 2019) | 73.8 | 80.4 |
| BERT+DCMN+ (Zhang et al., 2019) | 85.0 | 86.5 |
| FSR | 86.1 | 84.2 |

Table 2: The Performance Comparison of 10 Different Models on Two MCTest Datasets.

| Method | MCTest-160 (%) | MCTest-500 (%) |
| --- | --- | --- |
| Bert (Zhang et al., 2019) | 73.8 | 80.4 |
| Bert (Our implementation) | 82.5 | 80.9 |
| Bert+LUA | 82.7 | 79.5 |
| Bert+TLUA | 84.6 | 82.7 |
| Bert+FRA | 86.1 | 84.2 |
| bi-LSTM | 54.2 | 49.5 |
| bi-LSTM+LUA | 59.4 | 57.5 |
| bi-LSTM+TLUA | 61.5 | 58.2 |
| bi-LSTM+FRA | 62.7 | 59.6 |

Table 3: Performance Comparison with Three Different Frame Representation Models.

| | |
| --- | --- |
| Passage | Katie went to the store...She looked around for the flowers. She wanted cookies not chips. She found some chocolate cookies. Katie then looked for a bow... |
| Question | What snack did Katie buy? |
| Options | A) Chips B) Chocolate cookies C) Flowers D) Bows |
| Answer | B |
| Frame Semantics | {Chips, Chocolate cookies} ∈ Food; {Flowers, Bows} ∉ Food; Found and Buy are related, as their frames are connected. |

Table 4: A Case Study Example.

concatenation of the question and one answer choice as sequence B.

In addition, we apply a linear layer and a softmax layer on the final hidden state, and maximize the log-probability of the correct labels during training.

# 4.2 Datasets for MRC

We employ MCTest (Richardson et al., 2013) to test system performance on the multiple-choice machine comprehension task. It consists of two data sets, namely MCTest-160 and MCTest-500.

# 4.3 Experiment Results

Table 2 shows that our FSR model achieves $86.1\%$ accuracy on MCTest-160, which is significantly better than all nine state-of-the-art methods. In addition, it also achieves very competitive results on MCTest-500, i.e., much better than eight existing methods and only slightly worse than the BERT+DCMN+ model. This is encouraging, as our model is much simpler than BERT+DCMN+, which uses a much more sophisticated architecture.

Recall that in Section 2 we proposed three different methods, namely LUA, TLUA, and FRA, for frame representation. Table 3 shows their detailed results:

(1) For both BERT and bi-LSTM, adding frame semantic information improves performance by several points, indicating that frame information is valuable for semantic understanding.
(2) Comparing TLUA with LUA, TLUA performs better, signifying that the attention scheme in TLUA can capture semantic information more accurately.
(3) Finally, FRA further improves on LUA and TLUA, as sentences within a passage typically have semantic connections with each other, and it is thus necessary to take advantage of F-to-F relations to enrich semantic information.

# 4.4 Case Study

As a case study, Table 4 shows an example from MCTest which our model answers correctly. Both Chips and Chocolate cookies belong to the Food frame, while Flowers and Bows evoke two different frames, Plants and Accoutrements, respectively.
The target words Found and Buy in the given passage/question evoke different frames, Locating and Commerce_buy; note that in FrameNet these frames are connected through their semantic relations, which helps us find the answer B) Chocolate cookies.

# 5 Conclusion

We propose a novel Frame-based Sentence Representation method, which integrates multi-frame semantic information to facilitate sentence modelling. Our extensive experimental results demonstrate that it works very well for the challenging machine reading comprehension task.

# Acknowledgments

We thank the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Key Research and Development Program of China (No.2018YFB1005103) and the National Natural Science Foundation of China (No.61936012, No.61772324).

# References

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 17th International Conference on Computational Linguistics, COLING '98, pages 86-90, Stroudsburg, PA, USA. Association for Computational Linguistics.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Aljoscha Burchardt, Marco Pennacchiotti, Stefan Thater, and Manfred Pinkal. 2009. Assessing the impact of frame semantics on textual entailment. Natural Language Engineering, 15(4):527-550.
Nathanael Chambers and Dan Jurafsky. 2010. A database of narrative schemas. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta. European Language Resources Association (ELRA).
Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9-56.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.

Charles J. Fillmore. 1976. Frame semantics and the nature of language. Annals of the New York Academy of Sciences, 280(1):20-32.
Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.
Goran Glavaš, Robert Litschko, Sebastian Ruder, and Ivan Vulić. 2019. How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. CoRR, abs/1902.00508.
Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. CoRR, abs/1404.4641.
Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Meghana Kshirsagar, Sam Thomson, Nathan Schneider, Jaime G. Carbonell, Noah A. Smith, and Chris Dyer. 2015. Frame-semantic role labeling with heterogeneous annotations. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 218-224.
Chenrui Li, Yuanbin Wu, and Man Lan. 2018. Inference on syntactic and semantic structures for machine comprehension. In Thirty-Second AAAI Conference on Artificial Intelligence.
Shulin Liu, Yubo Chen, Shizhu He, Kang Liu, and Jun Zhao. 2016. Leveraging FrameNet to improve automatic event detection. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2134-2143, Berlin, Germany. 
Association for Computational Linguistics.
Bahadorreza Ofoghi, John Yearwood, and Liping Ma. 2009. The impact of frame semantic annotation levels, frame-alignment techniques, and fusion methods on factoid answer processing. Journal of the American Society for Information Science and Technology, 60(2):247-263.
Baolin Peng, Zhengdong Lu, Hang Li, and Kam-Fai Wong. 2015. Towards neural network-based reasoning. CoRR, abs/1508.05508.
Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193-203, Seattle, Washington, USA. Association for Computational Linguistics.
Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. 2018. Improving machine reading comprehension with general reading strategies. CoRR, abs/1810.13441.
Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Philip Bachman, and Kaheer Suleman. 2016. A parallel-hierarchical model for machine comprehension on sparse data. CoRR, abs/1603.08884.
Hai Wang, Mohit Bansal, Kevin Gimpel, and David McAllester. 2015. Machine comprehension with syntax, frames, and semantics. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 700-706, Beijing, China. Association for Computational Linguistics.
Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, and Xiang Zhou. 2019. DCMN+: Dual co-matching network for multi-choice reading comprehension. arXiv preprint arXiv:1908.11511.
Xiaodong Zhang, Xu Sun, and Houfeng Wang. 2018. Duplicate question identification by integrating FrameNet with neural networks. In Thirty-Second AAAI Conference on Artificial Intelligence.
\ No newline at end of file diff --git a/aframebasedsentencerepresentationformachinereadingcomprehension/images.zip b/aframebasedsentencerepresentationformachinereadingcomprehension/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..ec2a08d5f5a14806ce41947105107260a9aaaedc --- /dev/null +++ b/aframebasedsentencerepresentationformachinereadingcomprehension/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5ef43055a3a28c633ff70d5b6659fc568f0469de5cd10332306ec6caa9510c05 +size 316198 diff --git a/aframebasedsentencerepresentationformachinereadingcomprehension/layout.json b/aframebasedsentencerepresentationformachinereadingcomprehension/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..03aee9d092c4fb885b4de4598a9970a2d53d8e22 --- /dev/null +++ b/aframebasedsentencerepresentationformachinereadingcomprehension/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1f5f4d68f55e47855c8e69d5dc176a27d1d4c55ed5be256e7fdde4cde92190c3 +size 230178 diff --git a/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_content_list.json b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..383b778b95b2ac92eb99ea751805277f81fadb61 --- /dev/null +++ b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf38d0d1c0ee3e86d7e4c92c65ed39eb1343f68a9c33dcb5b75a4ceaeaab5b30 +size 84747 diff --git a/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_model.json 
b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a449c1451ea83ad89ccc9ba8db141b94c95892f7 --- /dev/null +++ b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:66fa22e85db8bdb0373d29642274b75db73d583a683b6e0be2f942f263b4da5f +size 103404 diff --git a/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_origin.pdf b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f6a517c5d9a4f31f35b3aa99d824260581e0ce5a --- /dev/null +++ b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/3f7a1043-4ce8-435a-834f-19dc7cc08d48_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6148e21cbda1b0b45b3cf14704cd632cef0dfc1d65e96d1457e81973b68ebf7f +size 358217 diff --git a/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/full.md b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..8848d9fbae869f36973258fe9c74077ead98afb9 --- /dev/null +++ b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/full.md @@ -0,0 +1,343 @@ +# A Generate-and-Rank Framework with Semantic Type Regularization for Biomedical Concept Normalization + +Dongfang Xu and Zeyu Zhang and Steven Bethard + +School of Information + +University of Arizona + +Tucson, AZ + +{dongfangxu9, zeyuzhang, bethard} $@$ email.arizona.edu + +# 
Abstract

Concept normalization, the task of linking textual mentions of concepts to concepts in an ontology, is challenging because ontologies are large. In most cases, annotated datasets cover only a small sample of the concepts, yet concept normalizers are expected to predict all concepts in the ontology. In this paper, we propose an architecture consisting of a candidate generator and a list-wise ranker based on BERT. The ranker considers pairings of concept mentions and candidate concepts, allowing it to make predictions for any concept, not just those seen during training. We further enhance this list-wise approach with a semantic type regularizer that allows the model to incorporate semantic type information from the ontology during training. Our proposed concept normalization framework achieves state-of-the-art performance on multiple datasets.

# 1 Introduction

Mining and analyzing the constantly growing unstructured text in the biomedical domain offers great opportunities to advance scientific discovery (Gonzalez et al., 2015; Fleuren and Alkema, 2015) and improve clinical care (Rumshisky et al., 2016; Liu et al., 2019). However, lexical and grammatical variations are pervasive in such text, posing key challenges for data interoperability and the development of natural language processing (NLP) techniques. For instance, heart attack, MI, myocardial infarction, and cardiovascular stroke all refer to the same concept. It is critical to disambiguate these terms by linking them with their corresponding concepts in an ontology or knowledge base. Such linking allows downstream tasks (relation extraction, information retrieval, text classification, etc.) to access the ontology's rich knowledge about biomedical entities, their synonyms, semantic types and mutual relationships.
+ +Concept normalization is a task that maps concept mentions, the in-text natural-language mentions of ontological concepts, to concept entries in a standardized ontology or knowledge base. Techniques for concept normalization have been advancing, thanks in part to recent shared tasks including clinical disorder normalization in 2013 ShARE/CLEF (Suominen et al., 2013) and 2014 SemEval Task 7 Analysis of Clinical Text (Pradhan et al., 2014), and adverse drug event normalization in Social Media Mining for Health (SMM4H) (Sarker et al., 2018; Weissenbacher et al., 2019). Most existing systems use a string-matching or dictionary look-up approach (Leal et al., 2015; D'Souza and Ng, 2015; Lee et al., 2016), which are limited to matching morphologically similar terms, or supervised multi-class classifiers (Belousov et al., 2017; Tutubalina et al., 2018; Niu et al., 2019; Luo et al., 2019a), which may not generalize well when there are many concepts in the ontology and the concept types that must be predicted do not all appear in the training data. + +We propose an architecture (shown in Figure 1) that is able to consider both morphological and semantic information. We first apply a candidate generator to generate a list of candidate concepts, and then use a BERT-based list-wise classifier to rank the candidate concepts. This two-step architecture allows unlikely concept candidates to be filtered out prior to the final classification, a necessary step when dealing with ontologies with millions of concepts. In contrast to previous list-wise classifiers (Murty et al., 2018) which only take the concept mention as input, our BERT-based list-wise classifier takes both the concept mention and the candidate concept name as input, and is thus able to handle concepts that never appear in the training data. 
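As a control-flow sketch, the two steps look like this; `difflib` stands in for both the candidate generator and the BERT-based list-wise ranker, and the three-entry "ontology" and all names are our illustrative assumptions, not the paper's implementation.

```python
import difflib

# Toy ontology: concept ID -> preferred name (illustrative only).
ONTOLOGY = {
    "C0027051": "myocardial infarction",
    "C0018681": "headache",
    "C0011849": "diabetes mellitus",
}

def generate_candidates(mention, k=2):
    """Step 1: a cheap candidate generator prunes the ontology."""
    names = list(ONTOLOGY.values())
    hits = set(difflib.get_close_matches(mention.lower(), names, n=k, cutoff=0.0))
    return [(cui, name) for cui, name in ONTOLOGY.items() if name in hits]

def rank(mention, candidates):
    """Step 2: score each (mention, candidate-name) pair. The paper uses
    a trained BERT list-wise classifier here; we use string similarity."""
    sim = lambda name: difflib.SequenceMatcher(None, mention.lower(), name).ratio()
    return max(candidates, key=lambda c: sim(c[1]))

cands = generate_candidates("myocardial infarct")
print(rank("myocardial infarct", cands))  # ('C0027051', 'myocardial infarction')
```

Only the two-stage control flow matters here; both stages are stand-ins for the learned components described above.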
We further enhance this list-wise approach with a semantic type regularizer that allows our ranker to leverage semantic type information from the ontology during training.

![](images/00c6988f5af0679c78db2dba0ba4448d212134a64cef02a37a8a1c2a8e35124b.jpg)
Figure 1: Proposed architecture for concept normalization: candidate generation and ranking.

Our work makes the following contributions:

- Our proposed concept normalization framework achieves state-of-the-art performance on multiple datasets.
- We propose a concept normalization framework consisting of a candidate generator and a list-wise classifier. Our framework is easier to train, and the list-wise classifier is able to predict concepts never seen during training.
- We introduce a semantic type regularizer which encourages the model to consider the semantic type information of the candidate concepts. This semantic type regularizer improves performance over the BERT-based list-wise classifier on multiple datasets.

The code for our proposed generate-and-rank framework is available at https://github.com/dongfang91/Generate-and-Rank-ConNorm.

# 2 Related work

Traditional approaches for concept normalization involve string matching and dictionary look-up. These approaches differ in how they construct dictionaries, such as collecting concept mentions from the labeled data as extra synonyms (Leal et al., 2015; Lee et al., 2016), and in the string matching techniques they use, such as string overlap and edit distance (Kate, 2016). The two most commonly used knowledge-intensive concept normalization tools, MetaMap (Aronson, 2001) and cTAKES (Savova et al., 2010), both employ rules to first generate lexical variants for each noun phrase and then conduct dictionary look-up for each variant.
Several systems (D'Souza and Ng, 2015; Jonnagaddala et al., 2016) have demonstrated that rule-based concept normalization achieves performance competitive with other approaches when a sieve-based architecture carefully selects combinations and orders of dictionaries, exact and partial matching, and heuristic rules. However, such rule-based approaches struggle when there are great variations between a concept mention and its concept, which is common, for example, when comparing social media text to medical ontologies.

Due to the availability of shared tasks and annotated data, the field has shifted toward machine learning techniques. We divide the machine learning approaches into two categories: classification (Savova et al., 2008; Stevenson et al., 2009; Limsopatham and Collier, 2016; Yepes, 2017; Festag and Spreckelsen, 2017; Lee et al., 2017; Tutubalina et al., 2018; Niu et al., 2019) and learning to rank (Leaman et al., 2013; Liu and Xu, 2017; Li et al., 2017; Nguyen et al., 2018; Murty et al., 2018).

Most classification-based approaches using deep neural networks have shown strong performance. They differ in their architectures, such as Gated Recurrent Units (GRUs) with attention mechanisms (Tutubalina et al., 2018), multi-task learning with auxiliary tasks to generate attention weights (Niu et al., 2019), or pre-trained transformer networks (Li et al., 2019; Miftahutdinov and Tutubalina, 2019); in their sources for training word embeddings, such as Google News (Limsopatham and Collier, 2016) or concept definitions from the Unified Medical Language System (UMLS) Metathesaurus (Festag and Spreckelsen, 2017); and in their input representations, such as character embeddings (Niu et al., 2019).
All classification approaches share the disadvantage that the output space must be the same size as the number of concepts to be predicted, so the output space tends to be small, such as the 2,200 concepts of Limsopatham and Collier (2016) or the roughly 22,500 concepts of Weissenbacher et al. (2019). Classification approaches also struggle with concepts that have only a few example mentions in the training data.

Researchers have applied point-wise learning to rank (Liu and Xu, 2017; Li et al., 2017), pair-wise learning to rank (Leaman et al., 2013; Nguyen et al., 2018), and list-wise learning to rank (Murty et al., 2018; Ji et al., 2019) to concept normalization. Generally, the learning-to-rank approach has the advantage of reducing the output space by first obtaining a smaller list of possible candidate concepts via a candidate generator and then ranking them. DNorm (Leaman et al., 2013), based on a pair-wise learning-to-rank model in which both mentions and concept names were represented as TF-IDF vectors, was the first to use learning to rank for concept normalization and achieved the best performance in the ShARe/CLEF eHealth 2013 shared task. List-wise learning-to-rank approaches are both computationally more efficient than pair-wise learning to rank (Cao et al., 2007) and empirically outperform both point-wise and pair-wise approaches (Xia et al., 2008). There are two implementations of list-wise classifiers using neural networks for concept normalization: Murty et al. (2018) treat the selection of the best candidate concept as a flat classification problem, losing the ability to handle concepts not seen during training; Ji et al. (2019) take a generate-and-rank approach similar to ours, but they do not leverage resources such as synonyms or semantic type information from UMLS in their BERT-based ranker.
# 3 Proposed methods

# 3.1 Concept normalization framework

We define a concept mention $m$ as an abbreviation such as "MI", a noun phrase such as "heart attack", or even a short text such as "an obstruction of the blood supply to the heart". The goal is then to assign $m$ a concept $c$. Formally, given a list of pre-identified concept mentions $M = \{m_1, m_2, \dots, m_n\}$ in the text and an ontology or knowledge base with a set of concepts $C = \{c_1, c_2, \dots, c_t\}$, the goal of concept normalization is to find a mapping function $c_j = f(m_i)$ that maps each textual mention to its correct concept.

We approach concept normalization in two steps: we first use a candidate generator $G(m, C) \to C_m$ to generate a list of candidate concepts $C_m$ for each mention $m$, where $C_m \subseteq C$ and $|C_m| \ll |C|$. We then use a candidate ranker $R(m, C_m) \to \hat{C}_m$, where $\hat{C}_m$ is a re-ranked list of candidate concepts sorted by their relevance, preference, or importance. Unlike information retrieval tasks, where the order of all candidate concepts in the sorted list $\hat{C}_m$ is crucial, in concept normalization we care only that the one true concept is at the top of the list.

The main idea of the two-step approach is to first use a simple and fast system with high recall to generate candidates, and then a more precise system with more discriminative input to rank them.

# 3.2 Candidate generator

We implement two kinds of candidate generators: a BERT-based multi-class classifier when the number of concepts in the ontology is small, and a Lucene-based dictionary look-up when there are hundreds of thousands of concepts in the ontology.

# 3.2.1 BERT-based multi-class classifier

BERT (Devlin et al., 2019) is a contextualized word representation model that has shown great performance in many NLP tasks. Here, we use BERT in a multi-class text-classification configuration as our candidate concept generator.
We use the final hidden vector $V_{m} \in \mathbb{R}^{H}$ corresponding to the first input token ([CLS]) generated from $\mathrm{BERT}(m)$ and a classification layer with weights $W \in \mathbb{R}^{|C| \times H}$, and train the model using a standard classification loss:

$$
L_{G} = y * \log \left( \operatorname{softmax} \left( V_{m} W^{T} \right) \right) \tag{1}
$$

where $y$ is a one-hot vector and $|y| = |C|$. The score for all concepts is calculated as:

$$
p(C) = \operatorname{softmax} \left( V_{m} W^{T} \right) \tag{2}
$$

We select the top $k$ most probable concepts in $p(C)$ and feed that list $C_m$ to the ranker.

# 3.2.2 Lucene-based dictionary look-up system

Multi-pass sieve rule-based systems (D'Souza and Ng, 2015; Jonnagaddala et al., 2016; Luo et al., 2019b) achieve competitive performance when used with the right combinations and orders of different dictionaries, exact and partial matching, and heuristic rules. Such systems, which rely on basic lexical matching algorithms, are simple and fast to implement, but they can only generate candidate concepts that are morphologically similar to a given mention.

Inspired by the work of Luo et al. (2019b), we implement a Lucene-based sieve normalization system which consists of the following components (see Appendix A.1 for details):

a. A Lucene index over the training data finds all mentions that exactly match $m$.
b. A Lucene index over the ontology finds concepts whose preferred name exactly matches $m$.
c. A Lucene index over the ontology finds concepts where at least one synonym of the concept exactly matches $m$.
d. A Lucene index over the ontology finds concepts where at least one synonym of the concept has high character overlap with $m$.

The ranked list $C_m$ generated by this system is fed as input to the candidate ranker.
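The top-$k$ selection over Eq. (2) amounts to a softmax over all concept scores followed by an argsort. A minimal NumPy sketch, with toy values of $V_m$ and $W$ standing in for BERT's [CLS] vector and the classification layer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def generate_candidates(V_m, W, concepts, k):
    """Score every concept as p(C) = softmax(V_m W^T) (Eq. 2) and
    return the k most probable concepts as the candidate list C_m."""
    p = softmax(V_m @ W.T)
    top = np.argsort(-p)[:k]  # indices of the k largest scores
    return [concepts[i] for i in top]
```

With the Lucene-based generator of Section 3.2.2, the same interface would instead be filled by the sieve's ranked hits; either way the downstream ranker only sees the short list $C_m$.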
# 3.3 Candidate ranker

After the candidate generator produces a list of concepts, we use a BERT-based list-wise classifier to select the most likely candidate. BERT allows us to match morphologically dissimilar (but semantically similar) mentions and concepts, and the list-wise classifier takes both the mention and the candidate concepts as input, allowing us to handle concepts that appear infrequently (or never) in the training data.

Here, we use BERT in a configuration similar to question answering: given a concept mention $m$, the task is to choose the most likely candidate concept $c_{m}$ from all candidate concepts $C_m$. As shown in Figure 1, our classifier input includes the text of the mention $m$ and all synonyms of the candidate concept $c_{m}$, and takes the form [CLS] $m$ [SEP] $\mathrm{syn}_1(c_m)$ [SEP] ... [SEP] $\mathrm{syn}_s(c_m)$ [SEP], where $\mathrm{syn}_i(c_m)$ is the $i^{\text{th}}$ synonym of concept $c_m$. We calculate the final hidden vector $V_{(m,c_m)} \in \mathbb{R}^H$ corresponding to the first input token ([CLS]) generated from BERT for each such input, and then concatenate the hidden vectors of all candidate concepts to form a matrix $V_{(m,C_m)} \in \mathbb{R}^{|C_m| \times H}$. We use this matrix and classification layer weights $W \in \mathbb{R}^H$, and compute a standard classification loss:

$$
L_{R} = y * \log \left( \operatorname{softmax} \left( V_{(m, C_{m})} W^{T} \right) \right) \tag{3}
$$

where $y$ is a one-hot vector and $|y| = |C_m|$.

# 3.4 Semantic type regularizer

To encourage the list-wise classifier towards a more informative ranking than just getting the correct concept at the top of the list, we propose a semantic type regularizer that is optimized when candidate concepts with the correct semantic type are ranked above candidate concepts with incorrect types. The semantic type of a candidate concept is considered correct only if it exactly matches the semantic type of the gold truth concept.
If the concept has multiple semantic types, all must match. Our semantic type regularizer consists of two components:

$$
R_{p}\left(\hat{y}_{t}, \hat{y}_{p}\right) = \sum_{p \in P(y)} \left(m_{1} + \hat{y}_{p} - \hat{y}_{t}\right) \tag{4}
$$

$$
R_{n}\left(\hat{y}_{p}, \hat{y}_{n}\right) = \sum_{p \in P(y)} \max_{n \in N(y)} \left(m_{2} + \hat{y}_{n} - \hat{y}_{p}\right) \tag{5}
$$

where $\hat{y} = V_{(m,C_m)}W^T$, $N(y)$ is the set of indexes of candidate concepts with incorrect semantic types (negative candidates), $P(y)$ (positive candidates) is the complement of $N(y)$, and $\hat{y}_t$ is the score of the gold truth candidate concept, so $t \in P(y)$. The margins $m_1$ and $m_2$ are hyper-parameters controlling the minimal distances between $\hat{y}_t$ and $\hat{y}_p$ and between $\hat{y}_p$ and $\hat{y}_n$, respectively. Intuitively, $R_p$ tries to push the score of the gold truth concept above all positive candidates by at least $m_1$, and $R_n$ tries to push the best-scored negative candidate below all positive candidates by $m_2$.

The final loss function we optimize for the BERT-based list-wise classifier is:

$$
L = L_{R} + \lambda R_{p}\left(\hat{y}_{t}, \hat{y}_{p}\right) + \mu R_{n}\left(\hat{y}_{p}, \hat{y}_{n}\right) \tag{6}
$$

where $\lambda$ and $\mu$ are hyper-parameters that control the trade-off between the standard classification loss and the semantic type regularizer.

# 4 Experiments

# 4.1 Datasets

Our experiments are conducted on three social media datasets, AskAPatient (Limsopatham and Collier, 2016), TwADR-L (Limsopatham and Collier, 2016), and SMM4H-17 (Sarker et al., 2018), and one clinical notes dataset, MCN (Luo et al., 2019b). We summarize dataset characteristics in Table 1.

AskAPatient The AskAPatient dataset contains 17,324 adverse drug reaction (ADR) annotations collected from blog posts.
The mentions are mapped to 1,036 medical concepts with

| Dataset | AskAPatient | TwADR-L | SMM4H-17 | MCN |
|---|---|---|---|---|
| Ontology | SNOMED-CT & AMT | MedDRA | MedDRA (PT) | SNOMED-CT & RxNorm |
| Subset | Y | Y | N | N |
| \|C_ontology\| | 1,036 | 2,220 | 22,500 | 434,056 |
| \|ST_ontology\| | 22 | 18 | 61 | 125 |
| \|C_dataset\| | 1,036 | 2,220 | 513 | 3,792 |
| \|M\| | 17,324 | 5,074 | 9,149 | 13,609 |
| \|M_train\| | 15,665.2 | 4,805.7 | 5,319 | 5,334 |
| \|M_test\| | 866.2 | 142.7 | 2,500 | 6,925 |
| \|M\| / \|C_dataset\| | 16.72 | 2.29 | 17.83 | 3.59 |
| \|C_test - C_train\| | 0 | 0 | 43 | 2,256 |
| \|M_test - M_train\| / \|M_test\| | 39.7% | 39.5% | 34.7% | 53.9% |
| \|M_ambiguous\| / \|M\| | 1.2% | 12.8% | 0.8% | 4.5% |
Table 1: Dataset statistics, where $C$ is a set of concepts, $ST$ is a set of semantic types, and $M$ is a set of mentions.

22 semantic types from the subset of the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED-CT) and the Australian Medicines Terminology (AMT). We follow the 10-fold cross validation (CV) configuration of Limsopatham and Collier (2016), which provides 10 sets of train/dev/test splits.

TwADR-L The TwADR-L dataset contains 5,074 ADR expressions from social media. The mentions are mapped to 2,220 Medical Dictionary for Regulatory Activities (MedDRA) concepts with 18 semantic types. We again follow the 10-fold cross validation configuration defined by Limsopatham and Collier (2016).

SMM4H-17 The SMM4H-17 dataset consists of 9,149 manually curated ADR expressions from tweets. The mentions are mapped to 22,500 concepts with 61 semantic types from MedDRA Preferred Terms (PTs). We use the 5,319 mentions from the released set as our training data, and keep the 2,500 mentions from the original test set for evaluation.

MCN The MCN dataset consists of 13,609 concept mentions drawn from 100 discharge summaries from the fourth i2b2/VA shared task (Uzuner et al., 2011). The mentions are mapped to 3,792 unique concepts out of 434,056 possible concepts with 125 semantic types in SNOMED-CT and RxNorm. We take 40 clinical notes from the released data as training data, consisting of 5,334 mentions, and use the standard evaluation data with 6,925 mentions as our test set. Around $2.7\%$ of mentions in MCN could not be mapped to any concepts in the terminology and are assigned the CUI-less label.

A major difference between the datasets is the space of concepts that systems must consider. For AskAPatient and TwADR-L, all concepts in the test data are also in the training data, and in both cases only a couple thousand concepts have to be considered.
Both SMM4H-17 and MCN define a much larger concept space: SMM4H-17 considers 22,500 concepts (though only 513 appear in the data) and MCN considers 434,056 (though only 3,792 appear in the data). AskAPatient and TwADR-L have no unseen concepts in their test data, SMM4H-17 has a few (43), while MCN has a huge number (2,256). Even a classifier that perfectly learned all concepts in the training data could achieve only $70.15\%$ accuracy on MCN. MCN also has more unseen mentions: $53.9\%$, where the other datasets have less than $40\%$. The MCN dataset is thus harder to memorize, as systems must consider many mentions and concepts never seen in training.

Unlike the clinical MCN dataset, in the three social media datasets (AskAPatient, TwADR-L, and SMM4H-17) it is common for ADR expressions to share no words with their target medical concepts. For instance, the ADR expression "makes me like a zombie" is assigned the concept "C1443060" with preferred term "feeling abnormal". The social media datasets do not include context, only the mentions themselves, while the MCN dataset provides the entire note surrounding each mention. Since only $4.5\%$ of mentions in the MCN dataset are ambiguous, for the current experiments we ignore this additional context information.

# 4.2 Unified Medical Language System

The UMLS Metathesaurus (Bodenreider, 2004) links similar names for the same concept from nearly 200 different vocabularies such as SNOMED-CT, MedDRA, and RxNorm. There are over 3.5 million concepts in UMLS, and for each concept, UMLS also provides the definition, preferred term, synonyms, semantic types, relationships with other concepts, etc.

In our experiments, we make use of synonym and semantic type information from UMLS. We restrict our concepts to three vocabularies, MedDRA, SNOMED-CT, and RxNorm, in UMLS version 2017AB. For each concept in the ontologies of the four datasets, we first find its concept unique identifier (CUI) in UMLS.
We then extract synonym and semantic type information according to the CUI. Synonyms (English only) are collected from level 0 terminologies, which contain vocabulary sources for which no additional license agreements are necessary.

# 4.3 Evaluation metrics

For all four datasets, the standard evaluation metric for concept normalization systems is accuracy. For the AskAPatient and TwADR-L datasets, which use 10-fold cross validation, the accuracy metrics are averaged over the 10 folds.

# 4.4 Implementation details

We use the BERT-based multi-class classifier as the candidate generator on the three social media datasets AskAPatient, TwADR-L, and SMM4H-17, and the Lucene-based candidate generator for the MCN dataset. In the social media datasets, the number of concepts in the data is small, few test concepts are unseen in the training data, and there is a greater need to match expressions that are morphologically dissimilar from medical concepts. In the clinical MCN dataset, the opposite is true.

For all experiments, we use BioBERT-base (Lee et al., 2019), which further pre-trains BERT on PubMed abstracts (PubMed) and PubMed Central full-text articles (PMC). We use Hugging Face's PyTorch implementation of BERT. We select the best hyper-parameters based on performance on the dev set. See Appendix A.2 for hyper-parameter settings.

# 4.5 Comparisons with related methods

We compare our proposed architecture with the following state-of-the-art systems.

WordCNN Limsopatham and Collier (2016) use convolutional neural networks over pre-trained word embeddings to generate a vector representation for each mention, and then feed these into a softmax layer for multi-class classification.

WordGRU+Attend+TF-IDF Tutubalina et al.
(2018) use a bidirectional GRU with attention over pre-trained word embeddings to generate a vector representation for each mention, concatenate this vector with the cosine similarities of the TF-IDF vectors between the mention and all other concept names, and then feed the concatenated vector to a softmax layer for multi-class classification.

BERT+TF-IDF Miftahutdinov and Tutubalina (2019) take a similar approach to Tutubalina et al. (2018), but use BERT to generate a vector representation for each mention. They concatenate this vector with the cosine similarities of the TF-IDF vectors between the mention and all other concept names, and then feed the concatenated vector to a softmax layer for multi-class classification.

CharCNN+Attend+MT Niu et al. (2019) use a multi-task attentional character-level convolutional neural network. They first convert the mention into a character embedding matrix. The auxiliary task network takes the embedding matrix as input to a CNN that learns to generate character-level domain-related importance weights. These learned importance weights are concatenated with the character embedding matrix and fed as input to another CNN model with a softmax layer for multi-class classification.

CharLSTM+WordLSTM Han et al. (2017) first use a forward LSTM over each character of the mention and its corresponding character class (such as lowercase or uppercase) to generate a character-level vector representation, then use a bi-directional LSTM over each word of the mention to generate a word-level representation. They concatenate the character-level and word-level representations and feed them as input to a softmax layer for multi-class classification.

LR+MeanEmbedding Belousov et al. (2017) calculate the mean of three different weighted word embeddings pre-trained on GoogleNews, Twitter, and DrugTwitter as vector representations for
| Approach | TwADR-L Dev | TwADR-L Test | AskAPatient Dev | AskAPatient Test | SMM4H-17 Dev | SMM4H-17 Test |
|---|---|---|---|---|---|---|
| WordCNN (Limsopatham and Collier, 2016) | - | 44.78 | - | 81.41 | - | - |
| WordGRU+Attend+TF-IDF (Tutubalina et al., 2018) | - | - | - | 85.71 | - | - |
| BERT+TF-IDF (Miftahutdinov and Tutubalina, 2019) | - | - | - | - | - | 89.64 |
| CharCNN+Attend+MT (Niu et al., 2019) | - | 46.46 | - | 84.65 | - | - |
| CharLSTM+WordLSTM (Han et al., 2017) | - | - | - | - | - | 87.20 |
| LR+MeanEmbedding (Belousov et al., 2017) | - | - | - | - | - | 87.70 |
| BERT | 47.08 | 44.05 | 88.63 | 87.52 | 84.74 | 87.36 |
| BERT + BERT-rank | 48.07 | 46.32 | 88.14 | 87.10 | 84.44 | 87.66 |
| BERT + BERT-rank + ST-reg | 47.98 | 47.02 | 88.26 | 87.46 | 84.66 | 88.24 |
| BERT + gold + BERT-rank | 52.70 | 49.69 | 89.06 | 87.92 | 88.57 | 90.16 |
| BERT + gold + BERT-rank + ST-reg | 52.84 | 50.81 | 89.68 | 88.51 | 88.87 | 91.08 |
the mention, where word weights are calculated as inverse document frequency. These vector representations are fed as input to a multinomial logistic regression (LR) model for multi-class classification.

Table 2: Comparisons of our proposed concept normalization architecture against the current state-of-the-art performance on the TwADR-L, AskAPatient, and SMM4H-17 datasets.

Sieve-based Luo et al. (2019b) build a sieve-based normalization model which contains exact-match and MetaMap (Aronson, 2001) modules. Given a mention as input, the exact-match module first looks for mentions in the training data that exactly match the input, and then looks for concepts in the ontology whose synonyms exactly match the input. If no concepts are found, the mention is fed into MetaMap. They run this sieve-based normalization model twice. In the first round, the model lower-cases the mentions and includes acronym/abbreviation tokens during dictionary look-up. In the second round, the model lower-cases the mention spans and also removes special tokens such as "'s" and quotation marks.

Since our focus is individual systems, not ensembles, we compare only to other non-ensembles.

# 4.6 Models

We separate out the different contributions of the following components of our architecture.

BERT The BERT-based multi-class classifier. When used alone, we select the most probable concept as the prediction.
| Approach | MCN Dev | MCN Test |
|---|---|---|
| Sieve-based (Luo et al., 2019b) | - | 76.35 |
| Lucene | - | 79.25 |
| Lucene + BERT-rank | 83.56 | 82.75 |
| Lucene + BERT-rank + ST-reg | 84.44 | 83.56 |
| Lucene + gold + BERT-rank | 86.89 | 84.77 |
| Lucene + gold + BERT-rank + ST-reg | 88.59 | 86.56 |
Table 3: Accuracy of our proposed concept normalization architecture on the MCN dataset.

Lucene The Lucene-based dictionary look-up. When used alone, we take the top-ranked candidate concept as the prediction.

+BERT-rank The BERT-based list-wise classifier, always used in combination with either BERT or Lucene as a candidate generator.

+ST-reg The semantic type regularizer, always used in combination with BERT-rank.

We also consider the case (+gold) where we artificially inject the correct concept into the candidate generator's list if it was not already there.

# 5 Results

Table 2 shows that our complete model, BERT + BERT-rank + ST-reg, achieves a new state-of-the-art on two of the social media test sets, and Table 3 shows that Lucene + BERT-rank + ST-reg achieves a new state-of-the-art on the clinical MCN test set. The TwADR-L dataset is the most difficult, with our complete model achieving $47.02\%$ accuracy. On the other datasets, the performance of our complete model is much higher: $87.46\%$ for AskAPatient and $88.24\%$ for SMM4H-17.

On the TwADR-L, SMM4H-17, and MCN test sets, adding the BERT-based ranker improves performance over the candidate generator alone, and adding the semantic type regularizer further improves performance. For example, Lucene alone achieves $79.25\%$ accuracy on the MCN data, adding the BERT ranker increases this to $82.75\%$, and adding the semantic type regularizer increases this to $83.56\%$. On AskAPatient, the performance of the full model is similar to just the BERT multi-class classifier, perhaps because in this case BERT alone already successfully improves the state-of-the-art from $85.71\%$ to $87.52\%$. The +gold setting allows us to answer how well our ranker would perform if our candidate generator made no mistakes.
First, we can see that if the correct concept is always in the candidate list, our list-based ranker (+BERT-rank) outperforms the multi-class classifier (BERT) on all test sets. We also see in this setting that the benefits of the semantic type regularizer are amplified, with the TwADR-L and MCN test sets showing more than a $1.00\%$ gain in accuracy from using the regularizer. These findings suggest that improving the quality of the candidate generator should be a fruitful future direction.

Overall, we see the biggest performance gains from our proposed generate-and-rank architecture on the MCN dataset. This is the most realistic setting, where the number of candidate concepts is large and many test concepts were never seen during training. In such cases, we cannot use a multi-class classifier as a candidate generator, since it would never generate unseen concepts. Thus, our ranker shines in its ability to sort through the long list of possible concepts.

# 6 Qualitative analysis

Table 4 shows an example that is impossible for the multi-class classifier approach to concept normalization. The concept mention "an abdominal wall hernia" in the clinical MCN dataset needs to be mapped to the concept with the preferred name "Hernia of abdominal wall", but that concept never appeared in the training data. The Lucene-based candidate generator finds this concept, but only
| Candidates | L | BR |
|---|---|---|
| Repair of abdominal wall hernia | 1 | 3 |
| Repair of anterior abdominal wall hernia | 2 | 4 |
| Obstructed hernia of anterior abdominal wall | 3 | 5 |
| Hernia of abdominal wall | 4 | 1 |
| Abdominal wall hernia procedure | 5 | 2 |
Table 4: Predicted candidate concepts for the mention "an abdominal wall hernia" and their rankings among the outputs of Lucene (L) and BERT-Ranker (BR). The gold concept is "Hernia of abdominal wall".
| Candidates | BR | STR | ST |
|---|---|---|---|
| Influenza-like illness | 1 | 2 | DS |
| Influenza | 2 | 4 | DS |
| Influenza-like symptoms | 3 | 1 | SS |
| Feeling tired | 4 | 5 | F |
| Muscle cramps in feet | 5 | 3 | SS |
Table 5: Predicted candidate concepts for the mention "felt like I was coming down with flu" and their rankings among the outputs of BERT-Ranker (BR) and BERT-Ranker + semantic type regularizer (STR). The gold concept is "Influenza-like symptoms". Semantic types (ST) of the candidates include: disease or syndrome (DS), sign or symptom (SS), and finding (F).

through character overlap (step d.), and several other concepts have high overlap as well. Thus Lucene ranks the correct concept 4th in its list. The BERT ranker is able to compare "an abdominal wall hernia" to "Hernia of abdominal wall" and recognize that as a better match than the other options, re-assigning it to rank 1.

Table 5 shows an example that illustrates why the semantic type regularizer helps. The mention "felt like I was coming down with flu" in the social media AskAPatient dataset needs to be mapped to the concept with the preferred name "influenza-like symptoms", which has the semantic type sign or symptom. The BERT ranker ranks two disease or syndrome concepts higher, placing the correct concept at rank 3. After the semantic type regularizer is added, the system recognizes that the mention should be mapped to a sign or symptom, and correctly ranks it above the disease or syndrome concepts. Note that this happens even though the ranker does not get to see the semantic type of the input mention at prediction time.

# 7 Limitations and future research

The available concept normalization datasets are somewhat limited. Lee et al. (2017) note that AskAPatient and TwADR-L have several issues:
Future research should focus on more realistic datasets that follow the approach of MCN in annotating mentions of concepts from a large ontology and including the full context. + +Our ability to explore the size of the candidate list was limited by our available computational resources. As the size of the candidate list increases, the true concept is more likely to be included, but the number of training instances also increases, making the computational cost larger, especially for the datasets using 10-fold cross-validation. We chose candidate list sizes as large as we could afford, but there are likely further gains possible with larger candidate lists. + +Our semantic type regularizer is limited to exact matching: it checks only whether the semantic type of a candidate exactly matches the semantic type of the true concept. The UMLS ontology includes many other relations, such as is-a and part-of relations, and extending our regularizer to encode such rich semantic knowledge may yield further improvements in the BERT-based ranker. + +# 8 Conclusion + +We propose a concept normalization framework consisting of a candidate generator and a list-wise classifier based on BERT. + +Because the candidate ranker makes predictions over pairs of concept mentions and candidate concepts, it is able to predict concepts never seen during training. Our proposed semantic type regularizer allows the ranker to incorporate semantic type information into its predictions without requiring semantic types at prediction time. This generate-and-rank framework achieves state-of-the-art performance on multiple concept normalization datasets. + +# Acknowledgments + +We thank the anonymous reviewers for their insightful comments on an earlier draft of this paper. This work was supported in part by National Institutes of Health grant R01LM012918 from the National Library of Medicine (NLM) and grant + +R01GM114355 from the National Institute of General Medical Sciences (NIGMS). 
The computations were done in systems supported by the National Science Foundation under Grant No. 1228509. This research was supported in part by an appointment to the Oak Ridge National Laboratory Advanced Short-Term Research Opportunity (ASTRO) Program, sponsored by the U.S. Department of Energy and administered by the Oak Ridge Institute for Science and Education. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health, National Science Foundation, or Department of Energy. + +# References + +Alan R. Aronson. 2001. Effective mapping of biomedical text to the UMLS Metathesaurus: the MetaMap program. In Proceedings of the AMIA Symposium, pages 17-21. American Medical Informatics Association. +Maksim Belousov, William Dixon, and Goran Nenadic. 2017. Using an Ensemble of Generalised Linear and Deep Learning Models in the SMM4H 2017 Medical Concept Normalisation Task. In CEUR Workshop Proceedings, volume 1996, pages 54-58. +Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): integrating biomedical terminology. *Nucleic Acids Research*, 32(suppl_1):D267-D270. +Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to Rank: From Pairwise Approach to Listwise Approach. In Proceedings of the 24th International Conference on Machine Learning, pages 129-136. Association for Computing Machinery. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Jennifer D'Souza and Vincent Ng. 2015. Sieve-based entity linking for the biomedical domain. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 297-302, Beijing, China. Association for Computational Linguistics.

Sven Festag and Cord Spreckelsen. 2017. Word Sense Disambiguation of Medical Terms via Recurrent Convolutional Neural Networks. In Health Informatics Meets EHealth: Digital Insight - Information-Driven Health & Care. Proceedings of the 11th EHealth2017 Conference, volume 236, pages 8-15.

Wilco W.M. Fleuren and Wynand Alkema. 2015. Application of text mining in the biomedical domain. Methods, 74:97-106.

Graciela H. Gonzalez, Tasnia Tahsin, Britton C. Goodale, Anna C. Greene, and Casey S. Greene. 2015. Recent Advances and Emerging Applications in Text and Data Mining for Biomedical Discovery. Briefings in Bioinformatics, 17(1):33-42.

Sifei Han, Tung Tran, Anthony Rios, and Ramakanth Kavuluru. 2017. Team UKNLP: Detecting ADRs, Classifying Medication Intake Messages, and Normalizing ADR Mentions on Twitter. In CEUR Workshop Proceedings, volume 1996, pages 49-53.

Zongcheng Ji, Qiang Wei, and Hua Xu. 2019. BERT-based ranking for biomedical entity normalization. arXiv preprint arXiv:1908.03548.

Jitendra Jonnagaddala, Toni Rose Jue, Nai-Wen Chang, and Hong-Jie Dai. 2016. Improving the dictionary lookup approach for disease normalization using enhanced dictionary and query expansion. Database, 2016:baw112.

Rohit J. Kate. 2016. Normalizing clinical terms using learned edit distance patterns. Journal of the American Medical Informatics Association, 23(2):380-386.

Andre Leal, Bruno Martins, and Francisco Couto. 2015. ULisboa: Recognition and normalization of medical concepts. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 406-411, Denver, Colorado. Association for Computational Linguistics.

Robert Leaman, Rezarta Islamaj Dogan, and Zhiyong Lu. 2013.
DNorm: disease name normalization with pairwise learning to rank. Bioinformatics, 29(22):2909-2917.

Hsin-Chun Lee, Yi-Yu Hsu, and Hung-Yu Kao. 2016. AuDis: an automatic CRF-enhanced disease normalization in biomedical text. Database, 2016:baw091.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, btz682.

Kathy Lee, Sadid A. Hasan, Oladimeji Farri, Alok Choudhary, and Ankit Agrawal. 2017. Medical Concept Normalization for Online User-Generated Texts. In 2017 IEEE International Conference on Healthcare Informatics (ICHI), pages 462-469. IEEE.

Fei Li, Yonghao Jin, Weisong Liu, Bhanu Pratap Singh Rawat, Pengshan Cai, and Hong Yu. 2019. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study. JMIR Med Inform, 7(3):e14830.

Haodi Li, Qingcai Chen, Buzhou Tang, Xiaolong Wang, Hua Xu, Baohua Wang, and Dong Huang. 2017. CNN-based ranking for biomedical entity normalization. BMC Bioinformatics, 18(11):385.

Nut Limsopatham and Nigel Collier. 2016. Normalising medical concepts in social media texts by learning semantic representation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1014-1023, Berlin, Germany. Association for Computational Linguistics.

Feifan Liu, Abhyuday Jagannatha, and Hong Yu. 2019. Towards drug safety surveillance and pharmacovigilance: Current progress in detecting medication and adverse drug events from electronic health records. Drug Saf, 42:95-97.

Hongwei Liu and Yun Xu. 2017. A Deep Learning Way for Disease Name Representation and Normalization. In Natural Language Processing and Chinese Computing, pages 151-157. Springer International Publishing.

Yen-Fu Luo, Weiyi Sun, and Anna Rumshisky. 2019a.
A Hybrid Normalization Method for Medical Concepts in Clinical Narrative using Semantic Matching. In AMIA Joint Summits on Translational Science Proceedings, volume 2019, pages 732-740. American Medical Informatics Association.

Yen-Fu Luo, Weiyi Sun, and Anna Rumshisky. 2019b. MCN: A comprehensive corpus for medical concept normalization. Journal of Biomedical Informatics, 95:103132.

Zulfat Miftahutdinov and Elena Tutubalina. 2019. Deep neural models for medical concept normalization in user-generated texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 393-399, Florence, Italy. Association for Computational Linguistics.

Shikhar Murty, Patrick Verga, Luke Vilnis, Irena Radovanovic, and Andrew McCallum. 2018. Hierarchical losses and new resources for fine-grained entity typing and linking. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 97-109, Melbourne, Australia. Association for Computational Linguistics.

Thanh Ngan Nguyen, Minh Trang Nguyen, and Thanh Hai Dang. 2018. Disease Named Entity Normalization Using Pairwise Learning To Rank and Deep Learning. Technical report, VNU University of Engineering and Technology.

Jinghao Niu, Yehui Yang, Siheng Zhang, Zhengya Sun, and Wensheng Zhang. 2019. Multi-task Character-Level Attentional Networks for Medical Concept Normalization. Neural Process Lett, 49(3):1239-1256.

Sameer Pradhan, Noémie Elhadad, Wendy Chapman, Suresh Manandhar, and Guergana Savova. 2014. SemEval-2014 task 7: Analysis of clinical text. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 54-62, Dublin, Ireland. Association for Computational Linguistics.

Anna Rumshisky, Marzyeh Ghassemi, Tristan Naumann, Peter Szolovits, V.M. Castro, T.H. McCoy, and R.H. Perlis. 2016.
Predicting early psychiatric readmission with natural language processing of narrative discharge summaries. Transl Psychiatry, 6(10):e921.

Abeed Sarker, Maksim Belousov, Jasper Friedrichs, Kai Hakala, Svetlana Kiritchenko, Farrokh Mehryary, Sifei Han, Tung Tran, Anthony Rios, Ramakanth Kavuluru, Berry de Bruijn, Filip Ginter, Debanjan Mahata, Saif M. Mohammad, Goran Nenadic, and Graciela Gonzalez-Hernandez. 2018. Data and systems for medication-related text classification and concept normalization from Twitter: insights from the Social Media Mining for Health (SMM4H)-2017 shared task. Journal of the American Medical Informatics Association, 25(10):1274-1283.

Guergana K. Savova, Anni R. Coden, Igor L. Sominsky, Rie Johnson, Philip V. Ogren, Piet C. De Groen, and Christopher G. Chute. 2008. Word sense disambiguation across two domains: Biomedical literature and clinical notes. Journal of Biomedical Informatics, 41(6):1088-1100.

Guergana K. Savova, James J. Masanz, Philip V. Ogren, Jiaping Zheng, Sunghwan Sohn, Karin C. Kipper-Schuler, and Christopher G. Chute. 2010. Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association, 17(5):507-513.

Mark Stevenson, Yikun Guo, Abdulaziz Alamri, and Robert Gaizauskas. 2009. Disambiguation of biomedical abbreviations. In Proceedings of the BioNLP 2009 Workshop, pages 71-79, Boulder, Colorado. Association for Computational Linguistics.

Hanna Suominen, Sanna Salantera, Sumithra Velupillai, Wendy W. Chapman, Guergana Savova, Noemie Elhadad, Sameer Pradhan, Brett R. South, Danielle L. Mowery, Gareth J.F. Jones, Johannes Leveling, Liadh Kelly, Lorraine Goeuriot, David Martinez, and Guido Zuccon. 2013. Overview of the ShARe/CLEF eHealth Evaluation Lab 2013. In Information Access Evaluation. Multilinguality, Multimodality, and Visualization, pages 212-231. Springer Berlin Heidelberg.

Elena Tutubalina, Zulfat Miftahutdinov, Sergey Nikolenko, and Valentin Malykh. 2018. Medical concept normalization in social media posts with recurrent neural networks. Journal of Biomedical Informatics, 84:93-102.

Özlem Uzuner, Brett R. South, Shuying Shen, and Scott L. DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552-556.

Davy Weissenbacher, Abeed Sarker, Arjun Magge, Ashlynn Daughton, Karen O'Connor, Michael J. Paul, and Graciela Gonzalez-Hernandez. 2019. Overview of the fourth social media mining for health (SMM4H) shared tasks at ACL 2019. In Proceedings of the Fourth Social Media Mining for Health Applications (#SMM4H) Workshop & Shared Task, pages 21-30, Florence, Italy. Association for Computational Linguistics.

Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. 2008. Listwise approach to learning to rank: theory and algorithm. In Proceedings of the 25th International Conference on Machine Learning, pages 1192-1199. Association for Computing Machinery.

Antonio Jimeno Yepes. 2017. Word embeddings and recurrent neural networks based on Long-Short Term Memory nodes in supervised biomedical word sense disambiguation. Journal of Biomedical Informatics, 73:137-147.

# A Appendices

# A.1 Lucene-based dictionary look-up system

The Lucene-based dictionary look-up system consists of the following components:

(a) A Lucene index over the training data finds all CUI-less mentions that exactly match mention $m$.
(b) A Lucene index over the training data finds CUIs of all training mentions that exactly match mention $m$.
(c) A Lucene index over UMLS finds CUIs whose preferred name exactly matches mention $m$.
(d) A Lucene index over UMLS finds CUIs where at least one synonym of the CUI exactly matches mention $m$.
(e) A Lucene index over UMLS finds CUIs where at least one synonym of the CUI has high character overlap with mention $m$.

To check the character overlap, we run the following three rules sequentially: token-level matching, fuzzy string matching with a maximum edit distance of 2, and character 3-gram matching.

Whenever there are multiple CUIs generated from a component (a) to (e), they are fed, along with the concept mention, to the BERT-based reranker (f). See Figure A1 for the flow of execution across the components.

During training, we used component (e) alone instead of the combination of components (b)-(e) to generate training instances for the BERT-based reranker (f), as it generated many more training examples and resulted in better performance on the dev set. During evaluation, we used the whole pipeline.

![](images/e63595c221a68333d5ecb0bf7003095f57a357aa88c09d78f8321f2b1190a480.jpg)

Figure A1: Architecture of the Lucene-based dictionary look-up system. The edges out of a search process indicate the number of matches necessary to follow the edge. Outlined nodes are terminal states that represent the predictions of the system.

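As an illustration, component (e)'s three overlap rules can be chained as a sequential fallback. This is a minimal sketch rather than the paper's implementation: the exact matching predicates are assumptions (token-level matching is read here as matching the same set of tokens regardless of order), and the CUIs and synonym dictionary are toy values.

```python
def char_ngrams(s, n=3):
    """Set of lowercased character n-grams of a string."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def edit_distance(a, b):
    """Dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def high_overlap_candidates(mention, synonyms_by_cui):
    """Run the three rules in order; return CUIs found by the first rule that fires."""
    m = mention.lower()
    rules = [
        lambda syn: set(syn.lower().split()) == set(m.split()),  # 1. token-level matching (assumed semantics)
        lambda syn: edit_distance(syn.lower(), m) <= 2,          # 2. fuzzy matching, max edit distance 2
        lambda syn: bool(char_ngrams(syn) & char_ngrams(m)),     # 3. character 3-gram matching
    ]
    for rule in rules:
        cuis = {cui for cui, syns in synonyms_by_cui.items() if any(rule(s) for s in syns)}
        if cuis:
            return cuis
    return set()

# Toy dictionary; CUIs are illustrative, not real UMLS identifiers.
umls = {"C001": ["heart attack", "myocardial infarction"], "C002": ["stroke"]}
assert high_overlap_candidates("attack heart", umls) == {"C001"}
```

Running the rules sequentially means a cheap exact-ish match short-circuits the fuzzier (and noisier) fallbacks, mirroring the precedence described above.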

|  | Multi-class: AAP | Multi-class: TwADR-L | Multi-class: SMM4H-17 | List-wise: AAP | List-wise: TwADR-L | List-wise: SMM4H-17 | List-wise: MCN |
| --- | --- | --- | --- | --- | --- | --- | --- |
| learning_rate | 1e-4 | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 3e-5 | 3e-5 |
| num_train_epochs | 30 | 30 | 40 | 10 | 10 | 20 | 30 |
| per_gpu_train_batch_size | 32 | 16 | 32 | 16 | 16 | 16 | 8 |
| save_steps | 487301166976301333250 (seven values, run together) |  |  |  |  |  |  |
| warmup_steps | 1463903664976301666750 (seven values, run together) |  |  |  |  |  |  |
| list size (k) | - | - | - | 10 | 20 | 10 | 30 |
| m1 | - | - | - | 0.0 | 0.0 | 0.0 | 0.1 |
| m2 | - | - | - | 0.2 | 0.2 | 0.2 | 0.2 |
| λ | - | - | - | 0.6 | 0.4 | 0.4 | 0.4 |
| μ | - | - | - | 0.6 | 0.4 | 0.4 | 0.8 |
+ +Table A1: Hyper-parameters for BERT-based multi-class and list-wise classifiers. AAP=AskAPatient. Terms with underscores are hyper-parameters in huggingface's pytorch implementation of BERT. + +# A.2 Hyper-parameters + +Table A1 shows the hyper-parameters for our models. We use huggingface's pytorch implementation of BERT. We tune the hyperparameters via grid search, and select the best BERT hyper-parameters based on the performance on the dev set. + +To keep the size of the candidate list equal to $k$ for every mention, we apply the following rules: if the list does not contain the gold concept and is already of length $k$ , we inject the correct one and remove an incorrect candidate; if the list is not length of $k$ , we inject the gold concept and the most frequent concepts in the training set to reach $k$ . \ No newline at end of file diff --git a/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/images.zip b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0708b09a8ea611641aca30a23a153068bf3b3dcc --- /dev/null +++ b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5592e921177ea4ecd92942efe02d3d5b93b5bdd9eeda6de84d00dab6a55530c4 +size 377311 diff --git a/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/layout.json b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..78244ca9397da1649697582124c4c7712d2d5c29 --- /dev/null +++ b/agenerateandrankframeworkwithsemantictyperegularizationforbiomedicalconceptnormalization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:34c4fe806ecf1e77185331fbf5fc1f2cf9cc348e0e75656f66691a697e88ef79 +size 401179 diff --git a/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_content_list.json b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..568974ad2b83d2c069bf7dee49c339d86c8139b0 --- /dev/null +++ b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be7ac10a732cc1cd4666fb70c1dbeb8f02f1924632993f0d8d3f0a0b0b78ebf4 +size 91355 diff --git a/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_model.json b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_model.json new file mode 100644 index 0000000000000000000000000000000000000000..60b421b23caa7b67e5a55ea59ff3f208dc87527d --- /dev/null +++ b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2bb013623b917a7ba8a2152588961c2ef25fc12543c124cee0a6a07d2c7708a2 +size 109963 diff --git a/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_origin.pdf b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..99f1f6fcd490fdd315e2e7219053348be86916a8 --- /dev/null +++ b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/70eacfec-ed5d-41d3-922c-fe9216813a7a_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:96898c1169ecaa85c8dcb39345552e4525caa11e446643719643c744b66d9a7d +size 786002 diff --git a/agenerativemodelforjointnaturallanguageunderstandingandgeneration/full.md b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a5aa9e235f658a8f4ece18f372b7fb6f8b8f5190 --- /dev/null +++ b/agenerativemodelforjointnaturallanguageunderstandingandgeneration/full.md @@ -0,0 +1,467 @@ +# A Generative Model for Joint Natural Language Understanding and Generation + +Bo-Hsiang Tseng $^{1*}$ , Jianpeng Cheng $^{2}$ , Yimai Fang $^{2}$ and David Vandyke $^{2}$ + +1Engineering Department, University of Cambridge, UK + +2Apple + +bht26@cam.ac.uk + +{jianpeng.cheng, yimai_fang, dvandyke}@apple.com + +# Abstract + +Natural language understanding (NLU) and natural language generation (NLG) are two fundamental and related tasks in building task-oriented dialogue systems with opposite objectives: NLU tackles the transformation from natural language to formal representations, whereas NLG does the reverse. A key to success in either task is parallel training data which is expensive to obtain at a large scale. In this work, we propose a generative model which couples NLU and NLG through a shared latent variable. This approach allows us to explore both spaces of natural language and formal representations, and facilitates information sharing through the latent space to eventually benefit NLU and NLG. Our model achieves state-of-the-art performance on two dialogue datasets with both flat and tree-structured formal representations. We also show that the model can be trained in a semi-supervised fashion by utilising unlabelled data to boost its performance. + +# 1 Introduction + +Natural language understanding (NLU) and natural language generation (NLG) are two fundamental tasks in building task-oriented dialogue systems. 
In a modern dialogue system, an NLU module first converts a user utterance, provided by an automatic speech recognition model, into a formal representation. The representation is then consumed by a downstream dialogue state tracker to update a belief state which represents an aggregated user goal. Based on the current belief state, a policy network decides the formal representation of the system response. This is finally used by an NLG module to generate the system response (Young et al., 2010).

It can be observed that NLU and NLG have opposite goals: NLU aims to map natural language to formal representations, while NLG generates utterances from their semantics.

![](images/2644d73dbc99b324442415791cd16f0f8f032a868e91c0a58d3efcb40dc141ac.jpg)
(a) Generation

![](images/0e795dd3093431337e58204c2a1afe19210bc9e73afb253d14da527f25800693.jpg)
(b) Inference

![](images/1ecf8c4a76cdf892e461806145666c3bb29cd6baa1d200ec0a41a01f9856b5e1.jpg)
(c) NLU

![](images/6f144e4ac8a6f871df2aea6c20bd72c90a9444e06e3591ddeadd2b057f28b5a1.jpg)
(d) NLG

Figure 1: Generation and inference process in our model, and how NLU and NLG are achieved. $x$ and $y$ denote utterances and formal representations respectively; $z$ represents the shared latent variable for $x$ and $y$.

In the research literature, NLU and NLG are well studied as separate problems. State-of-the-art NLU systems tackle the task as classification (Zhang and Wang, 2016) or as structured prediction or generation (Damonte et al., 2019), depending on the formal representations, which can be flat slot-value pairs (Henderson et al., 2014), first-order logical forms (Zettlemoyer and Collins, 2012), or structured queries (Yu et al., 2018; Pasupat et al., 2019). On the other hand, approaches to NLG vary from pipelined approaches subsuming content planning and surface realisation (Stent et al., 2004) to more recent end-to-end sequence generation (Wen et al., 2015; Dušek et al., 2020).
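The modular pipeline described at the start of this section can be sketched as a single dialogue turn. Every component below is a toy stand-in, not the paper's models; the function names and slot conventions are illustrative.

```python
def nlu(utterance):
    """Stub NLU: natural language -> formal representation."""
    return {"intent": "request", "slot": utterance.split()[-1]}

def tracker(belief_state, semantics):
    """Stub dialogue state tracker: accumulate the user goal."""
    return {**belief_state, semantics["slot"]: True}

def policy(belief_state):
    """Stub policy network: choose the meaning of the system reply."""
    return {"intent": "inform", "slots": sorted(belief_state)}

def nlg(semantics):
    """Stub NLG: formal representation -> natural language."""
    return "Here is information about " + ", ".join(semantics["slots"]) + "."

def dialogue_turn(user_utterance, belief_state):
    semantics = nlu(user_utterance)                  # NLU
    belief_state = tracker(belief_state, semantics)  # dialogue state tracking
    reply = nlg(policy(belief_state))                # policy, then NLG
    return reply, belief_state

reply, state = dialogue_turn("find me a restaurant", {})
```

The point of the sketch is the dataflow: NLU and NLG sit at opposite ends of the turn, translating in opposite directions between language and formal representations.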

However, the duality between NLU and NLG has been less explored. In fact, both tasks can be treated as a translation problem: NLU converts natural language to formal language while NLG does the reverse. Both tasks require a substantial amount of utterance and representation pairs to succeed, and such data is costly to collect due to the complexity of the annotation involved. Although unannotated data for either natural language or formal representations can be easily obtained, it is less clear how it can be leveraged, as the two languages stand in different spaces.

In this paper, we propose a generative model for Joint natural language Understanding and Generation (JUG), which couples NLU and NLG with a latent variable representing the shared intent between natural language and formal representations. We aim to learn the association between the two discrete spaces through a continuous latent variable which facilitates information sharing between the two tasks. Moreover, JUG can be trained in a semi-supervised fashion, which enables us to explore each space of natural language and formal representations when unlabelled data is accessible. We examine our model on two dialogue datasets with different formal representations: the E2E dataset (Novikova et al., 2017), where the semantics are represented as a collection of slot-value pairs; and a more recent weather dataset (Balakrishnan et al., 2019), where the formal representations are tree-structured. Experimental results show that our model improves over standalone NLU/NLG models and existing methods on both tasks, and that the performance can be further boosted by utilising unlabelled data.

# 2 Model

Our key assumption is that there exists an abstract latent variable $z$ underlying a pair of utterance $x$ and formal representation $y$. In our generative model, this abstract intent guides the standard conditional generation of either NLG or NLU (Figure 1a).
Meanwhile, $z$ can be inferred from either utterance $x$ or formal representation $y$ (Figure 1b). That means performing NLU requires us to infer $z$ from $x$, after which the formal representation $y$ is generated conditioned on both $z$ and $x$ (Figure 1c), and vice versa for NLG (Figure 1d). In the following, we explain the model details, starting with NLG.

# 2.1 NLG

As mentioned above, the task of NLG requires us to infer $z$ from $y$, and then generate $x$ using both $z$ and $y$. We choose the posterior distribution $q(z|y)$ to be Gaussian. The task of inferring $z$ can then be recast as computing the mean $\mu$ and standard deviation $\sigma$ of the Gaussian distribution using an NLG encoder. To do this, we use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) to encode the formal representation $y$, which is linearised and represented as a sequence of symbols. After encoding, we obtain a list of hidden vectors $\mathbf{H}$, each representing the concatenation of forward and backward LSTM states. These hidden vectors are then average-pooled and passed through two feed-forward neural networks to compute the mean $\boldsymbol{\mu}_{y,z}$ and standard deviation $\boldsymbol{\sigma}_{y,z}$ vectors of the posterior $q(z|y)$:

$$
\mathbf{H} = \operatorname{Bi\text{-}LSTM}(\mathbf{y})
$$

$$
\bar{\mathbf{h}} = \operatorname{Pooling}(\mathbf{H})
$$

$$
\boldsymbol{\mu}_{y,z} = \mathbf{W}_{\mu} \bar{\mathbf{h}} + \mathbf{b}_{\mu} \tag{1}
$$

$$
\boldsymbol{\sigma}_{y,z} = \mathbf{W}_{\sigma} \bar{\mathbf{h}} + \mathbf{b}_{\sigma}
$$

where $\mathbf{W}$ and $\mathbf{b}$ represent neural network weights and biases.
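Equation 1 boils down to mean-pooling the BiLSTM states and applying two affine heads. A minimal numpy sketch, with the BiLSTM abstracted as precomputed hidden states and all sizes and initialisations illustrative (practical implementations often predict $\log \sigma$ instead, to keep the standard deviation positive):

```python
import numpy as np

rng = np.random.default_rng(0)
d_hidden, d_latent = 8, 4                        # illustrative sizes

# Affine heads of the NLG encoder (Equation 1).
W_mu = rng.normal(size=(d_latent, 2 * d_hidden))
b_mu = np.zeros(d_latent)
W_sigma = rng.normal(size=(d_latent, 2 * d_hidden))
b_sigma = np.zeros(d_latent)

def posterior_params(H):
    """H: (seq_len, 2*d_hidden) concatenated forward/backward BiLSTM states."""
    h_bar = H.mean(axis=0)                       # average pooling over time
    mu = W_mu @ h_bar + b_mu                     # mean of q(z|y)
    sigma = W_sigma @ h_bar + b_sigma            # standard deviation of q(z|y)
    return mu, sigma

H = rng.normal(size=(5, 2 * d_hidden))           # 5 encoded symbols of a linearised y
mu, sigma = posterior_params(H)
```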
Then the latent vector $\mathbf{z}$ can be sampled from the approximated posterior using the re-parameterisation trick of Kingma and Welling (2013):

$$
\boldsymbol{\epsilon} \sim \mathcal{N}(0, \mathbf{I}) \tag{2}
$$

$$
\mathbf{z} = \boldsymbol{\mu}_{y,z} + \boldsymbol{\sigma}_{y,z} \odot \boldsymbol{\epsilon}
$$

The final step is to generate natural language $x$ based on the latent variable $z$ and formal representation $y$. We use an LSTM decoder relying on both $z$ and $y$ via an attention mechanism (Bahdanau et al., 2014). At each time step, the decoder computes:

$$
\mathbf{g}_{i}^{x} = \operatorname{LSTM}\left(\mathbf{g}_{i-1}^{x}, \mathbf{x}_{i-1}\right)
$$

$$
\mathbf{c}_{i} = \operatorname{attention}\left(\mathbf{g}_{i}^{x}, \mathbf{H}\right) \tag{3}
$$

$$
p\left(x_{i}\right) = \operatorname{softmax}\left(\mathbf{W}_{v}\left[\mathbf{c}_{i} \oplus \mathbf{g}_{i}^{x} \oplus \mathbf{z}\right] + \mathbf{b}_{v}\right)
$$

where $\oplus$ denotes concatenation. $\mathbf{x}_{i-1}$ is the word vector of the input token; $\mathbf{g}_{i}^{x}$ is the corresponding decoder hidden state, and $p(x_{i})$ is the output token distribution at time step $i$.

# 2.2 NLU

NLU performs the reverse procedure of NLG. First, an NLU encoder infers the latent variable $z$ from utterance $x$. The encoder uses a bi-directional LSTM to convert the utterance into a list of hidden states. These hidden states are pooled and passed through feed-forward neural networks to compute the mean $\mu_{x,z}$ and standard deviation $\sigma_{x,z}$ of the posterior $q(z|x)$. This procedure follows Equation 1 in NLG.

However, note that a subtle difference between natural language and formal language is that the former is ambiguous while the latter is precisely defined. This makes NLU a many-to-one mapping problem, whereas NLG is one-to-many.
To better reflect the fact that the NLU output requires less variance, when decoding we choose the latent vector $\mathbf{z}$ in NLU to be the mean vector $\mu_{x,z}$, instead of sampling it from $q(z|x)$ as in Equation 2.

After the latent vector is obtained, the formal representation $y$ is predicted from both $z$ and $x$ using an NLU decoder. Since the space of $y$ depends on the formal language construct, we consider two common scenarios in dialogue systems. In the first scenario, $y$ is represented as a set of slot-value pairs, e.g., {food type=British, area=north} in the restaurant search domain (Mrkšić et al., 2017). The decoder here consists of several classifiers, one for each slot, to predict the corresponding values. Each classifier is modelled by a 1-layer feed-forward neural network that takes $z$ as input:

$$
p(y_{s}) = \operatorname{softmax}(\mathbf{W}_{s} \mathbf{z} + \mathbf{b}_{s}) \tag{4}
$$

where $p(y_{s})$ is the predicted value distribution of slot $s$.

In the second scenario, $y$ is a tree-structured formal representation (Banarescu et al., 2013). We then generate $y$ as a linearised token sequence using an LSTM decoder relying on both $z$ and $x$ via the standard attention mechanism (Bahdanau et al., 2014). The decoding procedure exactly follows Equation 3.

# 2.3 Model Summary

One flexibility of the JUG model comes from the fact that it has two ways to infer the shared latent variable $z$, through either $x$ or $y$; and the inferred $z$ can aid the generation of both $x$ and $y$. In the next section, we show how this shared latent variable enables the JUG model to explore unlabelled $x$ and $y$, while aligning the learned meanings inside the latent space.

# 3 Optimisation

We now describe how JUG can be optimised with a pair of $x$ and $y$ (§3.1), and also with unpaired $x$ or $y$ (§3.2). We specifically discuss the choice of prior for the JUG objectives in §3.3.
A combined objective can thus be derived for semi-supervised learning: a practical scenario where we have a small set of labelled data but abundant unlabelled data (§3.4).

# 3.1 Optimising $p(x,y)$

Given a pair of utterance $x$ and formal representation $y$, our objective is to maximise the log-likelihood of the joint probability $p(x, y)$:

$$
\log p(x, y) = \log \int_{z} p(x, y, z) \tag{5}
$$

The optimisation task is not directly tractable since it requires us to marginalise out the latent variable $z$. However, it can be solved by following the standard practice of neural variational inference (Kingma and Welling, 2013). An objective based on the variational lower bound can be derived as

$$
\mathcal{L}_{x,y} = \mathbb{E}_{q(z|x)} \log p(y \mid z, x) + \mathbb{E}_{q(z|x)} \log p(x \mid z, y) - \operatorname{KL}[\, q(z|x) \,\|\, p(z) \,] \tag{6}
$$

where the first term on the right side is the NLU model; the second term is the reconstruction of $x$; and the last term denotes the Kullback-Leibler divergence between the approximate posterior $q(z|x)$ and the prior $p(z)$. We defer the discussion of the prior to Section 3.3 and detailed derivations to the Appendix.

The symmetry between utterance and semantics offers an alternative way of inferring the posterior, through the approximation $q(z|y)$. Analogously, we can derive a variational optimisation objective:

$$
\mathcal{L}_{y,x} = \mathbb{E}_{q(z|y)} \log p(x \mid z, y) + \mathbb{E}_{q(z|y)} \log p(y \mid z, x) - \operatorname{KL}[\, q(z|y) \,\|\, p(z) \,] \tag{7}
$$

where the first term is the NLG model; the second term is the reconstruction of $y$; and the last term denotes the KL divergence.

It can be observed that our model has two posterior inference paths, from either $x$ or $y$, and also two generation paths. All paths can be optimised.
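As a sketch of how one Monte-Carlo estimate of $-\mathcal{L}_{x,y}$ (Equation 6) could be assembled: the two conditional log-likelihoods are stubbed out as given values here, and a standard-normal prior is assumed (a common choice, for which the KL term has a closed form; §3.3 later proposes a different prior).

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL[ N(mu, diag(sigma^2)) || N(0, I) ]."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - np.log(sigma**2))

def reparameterise(mu, sigma):
    """Equation 2: z = mu + sigma * eps, with eps ~ N(0, I)."""
    return mu + sigma * rng.normal(size=mu.shape)

# Posterior parameters from the NLU encoder (illustrative values).
mu_xz = np.array([0.1, -0.2])
sigma_xz = np.array([1.0, 0.5])

z = reparameterise(mu_xz, sigma_xz)
log_p_y_given_zx = -1.7   # stand-in for the NLU term log p(y | z, x)
log_p_x_given_zy = -2.3   # stand-in for the reconstruction term log p(x | z, y)

# Negative of the lower bound in Equation 6 (single-sample estimate).
loss = -(log_p_y_given_zx + log_p_x_given_zy) + kl_to_standard_normal(mu_xz, sigma_xz)
```

In practice the two stubbed terms would come from the NLU and NLG decoders, and the gradient flows through `z` thanks to the re-parameterisation.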

# 3.2 Optimising $p(x)$ or $p(y)$

Additionally, when we have access to unlabelled utterance $x$ (or formal representation $y$), the optimisation objective of JUG is the marginal likelihood $p(x)$ (or $p(y)$):

$$
\log p(x) = \log \int_{y} \int_{z} p(x, y, z) \tag{8}
$$

Note that both $z$ and $y$ are unobserved in this case.

We can develop an objective based on the variational lower bound for the marginal:

$$
\mathcal{L}_{x} = \mathbb{E}_{q(y|z,x)} \mathbb{E}_{q(z|x)} \log p(x \mid z, y) - \operatorname{KL}[\, q(z|x) \,\|\, p(z) \,] \tag{9}
$$

where the first term is the auto-encoder reconstruction of $x$ with a cascaded NLU-NLG path. The second term is the KL divergence, which regularises the approximated posterior distribution. Detailed derivations can be found in the Appendix.

Computing the reconstruction term of $x$ requires us to first run through the NLU model to obtain the prediction of $y$, from which we run through NLG to reconstruct $x$. The full information flow is $(x \to z \to y \to z \to x)$. Connections can be drawn with recent work which uses back-translation to augment training data for machine translation (Sennrich et al., 2016; He et al., 2016). Unlike back-translation, the presence of the latent variable in our model requires us to sample $z$ along the NLU-NLG path. The introduced stochasticity allows the model to explore a larger area of the data manifold.

The above describes the objective when we have unlabelled $x$. We can derive a similar objective for leveraging unlabelled $y$:

$$
\mathcal{L}_{y} = \mathbb{E}_{q(x|z,y)} \mathbb{E}_{q(z|y)} \log p(y \mid z, x) - \operatorname{KL}[\, q(z|y) \,\|\, p(z) \,] \tag{10}
$$

where the first term is the auto-encoder reconstruction of $y$ with a cascaded NLG-NLU path.
The full information flow here is $(y \to z \to x \to z \to y)$.

# 3.3 Choice of Prior

The objectives described in §3.1 and §3.2 require us to match an approximated posterior (either $q(z|x)$ or $q(z|y)$) to a prior $p(z)$ that reflects our belief. A common choice of $p(z)$ in the research literature is the Normal distribution (Kingma and Welling, 2013). However, it should be noted that even if we match both $q(z|x)$ and $q(z|y)$ to the same prior, it does not guarantee that the two inferred posteriors are close to each other; this is a desired property of the shared latent space.

To better address this property, we propose a novel prior choice: when the posterior is inferred from $x$ (i.e., $q(z|x)$), we choose the parameterised distribution $q(z|y)$ as our prior belief of $p(z)$. Similarly, when the posterior is inferred from $y$ (i.e., $q(z|y)$), we have the freedom of defining $p(z)$ to be $q(z|x)$. This approach directly pulls $q(z|x)$ and $q(z|y)$ closer to ensure a shared latent space.

Finally, note that it is straightforward to compute both $q(z|x)$ and $q(z|y)$ when we have parallel $x$ and $y$. However, when we have access to unlabelled data, as described in Section 3.2, we can only use the pseudo $x$-$y$ pairs that are generated by our NLU or NLG model, so that we can match an inferred posterior to a pre-defined prior reflecting our belief of the shared latent space.

# 3.4 Training Summary

In general, JUG subsumes the following three training scenarios, which we will experiment with.

When we have fully labelled $x$ and $y$, JUG jointly optimises NLU and NLG in a supervised fashion with the following objective:

$$
\mathcal{L}_{\text{basic}} = \sum_{(x, y) \sim (X, Y)} \left( \mathcal{L}_{x,y} + \mathcal{L}_{y,x} \right) \tag{11}
$$

where $(X, Y)$ denotes the set of labelled examples.
Additionally, in the fully supervised setting, JUG can be trained to optimise the NLU, NLG and auto-encoding paths jointly. This corresponds to the following objective:

$$
\mathcal {L} _ {\text {m a r g i n a l}} = \mathcal {L} _ {\text {b a s i c}} + \sum_ {(x, y) \sim (X, Y)} \left(\mathcal {L} _ {x} + \mathcal {L} _ {y}\right) \tag {12}
$$

Furthermore, when we have additional unlabelled $x$ or $y$, we optimise a semi-supervised JUG objective as follows:

$$
\mathcal {L} _ {\text {s e m i}} = \mathcal {L} _ {\text {b a s i c}} + \sum_ {x \sim X} \mathcal {L} _ {x} + \sum_ {y \sim Y} \mathcal {L} _ {y} \tag {13}
$$

where $X$ denotes the set of utterances and $Y$ denotes the set of formal representations.

# 4 Experiments

We experiment on two dialogue datasets with different formal representations to test the generality of our model. The first dataset is E2E (Novikova et al., 2017), which contains utterances annotated with flat slot-value pairs as their semantic representations. The second is the recent weather dataset (Balakrishnan et al., 2019), where both utterances and semantics are represented as tree structures. Examples of the two datasets are provided in Tables 1 and 2.
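Structurally, Equations 11-13 just sum different per-example bounds over the labelled and unlabelled pools. The sketch below makes that bookkeeping explicit; the per-example terms are constant stand-ins of our own invention for the actual variational bounds $\mathcal{L}_{x,y}$, $\mathcal{L}_{y,x}$, $\mathcal{L}_x$ and $\mathcal{L}_y$, which in the real model are computed by the encoder/decoder networks.

```python
# Stand-ins for the variational bounds; in the real model each term is
# computed by the shared-latent-variable encoder/decoder networks.
def loss_xy(x, y): return 1.0   # supervised bound L_{x,y} (posterior from x)
def loss_yx(x, y): return 1.0   # supervised bound L_{y,x} (posterior from y)
def loss_x(x): return 0.5       # auto-encoding bound L_x (x -> z -> y -> z -> x)
def loss_y(y): return 0.5       # auto-encoding bound L_y (y -> z -> x -> z -> y)

def objective_basic(labelled):                    # Equation 11
    return sum(loss_xy(x, y) + loss_yx(x, y) for x, y in labelled)

def objective_marginal(labelled):                 # Equation 12
    return objective_basic(labelled) + sum(loss_x(x) + loss_y(y)
                                           for x, y in labelled)

def objective_semi(labelled, unlab_x, unlab_y):   # Equation 13
    return (objective_basic(labelled)
            + sum(loss_x(x) for x in unlab_x)
            + sum(loss_y(y) for y in unlab_y))
```

Note that `objective_semi` reduces to `objective_basic` when both unlabelled pools are empty.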
**Natural Language:** "sousa offers British food in the low price range. it is family friendly with a 3 out of 5 star rating. you can find it near the sunshine vegetarian cafe."

**Semantic Representation:** restaurant_name=sousa, food=english, price_range=cheap, customer_rating=average, family_friendly=yes, near=sunshine vegetarian cafe

Table 1: An example in the E2E dataset.
**Natural Language (original):** "[_DG_YES_ Yes], [_DG_INFORM_ [_ARG_DATE_TIME_ [_ARG_COLLOQUIAL_ today's]] forecast is [_ARG_CLOUD_COVERAGE_ mostly cloudy] with [_ARG_CONDITION_ light rain showers]."

**Natural Language (processed by removing tree annotations):** "Yes, today's forecast is mostly cloudy with light rain showers."

**Semantic Representation:** [_DG_YES_ [_ARG_TASK_ get_weather_attribute]] [_DG_INFORM_ [_ARG_TASK_ get_forecast] [_ARG_CONDITION_ light rain showers] [_ARG_CLOUD_COVERAGE_ mostly cloudy] [_ARG_DATE_TIME_ [_ARG_COLLOQUIAL_ today's]]]
# 4.1 Training Scenarios

We primarily evaluated our models on the original splits of the datasets, which enables us to fairly compare fully-supervised JUG with existing work on both NLU and NLG. Statistics of the two datasets can be found in Table 3.

In addition, we set up an experiment to evaluate semi-supervised JUG with a varying amount of labelled training data (5%, 10%, 25%, 50%, 100%, with the rest being unlabelled). Note that the original E2E test set is deliberately designed with unseen slot values to make it difficult (Dusek et al., 2018, 2020); we remove this distribution bias by randomly re-splitting the E2E dataset. In contrast, utterances in the weather dataset contain extra tree-structure annotations which make the NLU task a toy problem. We therefore remove these annotations to make NLU more realistic, as shown in the second row of Table 2.

As described in Section 3.4, we can optimise our proposed JUG model in various ways. We investigate the following approaches:

JUGbasic: this model jointly optimises NLU

Table 2: An example in the weather dataset. The natural language in the original dataset (first row) is used for training to allow a fair comparison with existing methods. The processed utterances (second row) are used in our semi-supervised setting.
| Dataset | Train | Valid | Test |
|---|---|---|---|
| E2E | 42061 | 4672 | 4693 |
| Weather | 25390 | 3078 | 3121 |
Table 3: Number of examples in the two datasets.
| E2E NLU | F1 |
|---|---|
| Dual supervised learning (Su et al., 2019) | 0.7232 |
| JUGbasic | 0.7337 |

| E2E NLG | BLEU |
|---|---|
| TGEN (Dušek and Jurcicek, 2016) | 0.6593 |
| SLUG (Juraska et al., 2018) | 0.6619 |
| Dual supervised learning (Su et al., 2019) | 0.5716 |
| JUGbasic | 0.6855 |

| Weather NLG | BLEU |
|---|---|
| S2S-CONSTR (Balakrishnan et al., 2019) | 0.7660 |
| JUGbasic | 0.7768 |
Table 4: Comparison with previous systems on the two datasets. Note that there is no previous system trained for NLU on the weather dataset.

and NLG with the objective in Equation 11. This uses labelled data only.

JUGmarginal: jointly optimises NLU, NLG and auto-encoders with only labelled data, per Equation 12.

JUGsemi: jointly optimises NLU and NLG with labelled data and auto-encoders with unlabelled data, per Equation 13.

# 4.2 Baseline Systems

We compare our proposed model with existing methods, as shown in Table 4, and with two baselines designed as follows:

Decoupled: The NLU and NLG models are trained separately by supervised learning. Both individual models have the same encoder-decoder structure as JUG; the main difference is that there is no shared latent variable between the two individual NLU and NLG models.

Augmentation: We pre-train Decoupled models to generate pseudo labels from the unlabelled corpus (Lee, 2013) in a setup similar to back-translation (Sennrich et al., 2016). The pseudo data and labelled data are then used together to fine-tune the pre-trained models.

Among all systems in our experiments, the number of units in the LSTM encoder/decoder is set to $\{150, 300\}$ and the dimension of the latent space is 150. The optimiser Adam (Kingma and Ba, 2014) is used with learning rate 1e-3. Batch size is set to $\{32, 64\}$. All the models are fully trained and the
| Model / Data | 5% | 10% | 25% | 50% | 100% |
|---|---|---|---|---|---|
| Decoupled | 52.77 (0.874) | 62.32 (0.902) | 69.37 (0.924) | 73.68 (0.935) | 76.12 (0.942) |
| Augmentation* | 54.71 (0.878) | 62.54 (0.902) | 68.91 (0.922) | 73.84 (0.935) | - |
| JUGbasic | 60.30 (0.902) | 67.08 (0.918) | 72.49 (0.932) | 74.74 (0.937) | 78.05 (0.945) |
| JUGmarginal | 62.96 (0.907) | 68.43 (0.920) | 73.35 (0.933) | 75.74 (0.939) | 78.93 (0.948) |
| JUGsemi* | 68.09 (0.921) | 70.33 (0.925) | 73.79 (0.935) | 75.46 (0.939) | - |
Table 5: NLU results on the E2E dataset. Joint accuracy (%) and F1 score (in brackets) are both reported with a varying percentage of labelled training data. Models using unlabelled data are marked with *.
| Model / Data | 5% | 10% | 25% | 50% | 100% |
|---|---|---|---|---|---|
| Decoupled | 0.693 (83.47) | 0.723 (87.33) | 0.784 (92.52) | 0.793 (94.91) | 0.813 (96.98) |
| Augmentation* | 0.747 (84.79) | 0.770 (90.13) | 0.806 (94.06) | 0.815 (96.04) | - |
| JUGbasic | 0.685 (84.20) | 0.734 (88.68) | 0.769 (93.83) | 0.788 (95.11) | 0.810 (95.07) |
| JUGmarginal | 0.724 (85.57) | 0.775 (93.59) | 0.803 (94.99) | 0.817 (98.67) | 0.830 (99.11) |
| JUGsemi* | 0.814 (90.47) | 0.792 (94.76) | 0.819 (95.59) | 0.827 (98.42) | - |
Table 6: NLG results on the E2E dataset. BLEU and semantic accuracy $(\%)$ (in brackets) are both reported with a varying percentage of labelled training data. Models using unlabelled data are marked with $*$.
| Model / Data | 5% | 10% | 25% | 50% | 100% |
|---|---|---|---|---|---|
| Decoupled | 73.46 | 80.85 | 86.00 | 88.45 | 90.68 |
| Augmentation* | 74.77 | 79.84 | 86.24 | 88.69 | - |
| JUGbasic | 73.62 | 80.13 | 86.15 | 87.94 | 90.55 |
| JUGmarginal | 74.61 | 81.14 | 86.83 | 89.06 | 91.28 |
| JUGsemi* | 79.19 | 83.22 | 87.46 | 89.17 | - |
best model is picked by the average of NLU and NLG results on the validation set during training.

# 4.3 Main Results

We start by comparing the JUGbasic performance with existing work following the original splits of the datasets. The results are shown in Table 4. On the E2E dataset, we follow previous work in using the F1 of slot-values as the measurement for NLU, and BLEU-4 for NLG. For the weather dataset, there are only published results for NLG. It can be observed that the JUGbasic model outperforms the previous state-of-the-art NLU and NLG systems on the E2E dataset, and also for NLG on the weather dataset. The results demonstrate the effectiveness of introducing the shared latent variable $z$ for jointly training NLU and NLG. We further study the impact of the shared $z$ in Section 4.4.2.

We also evaluated the three training scenarios of JUG in the semi-supervised setting, with different proportions of labelled and unlabelled data. The results for E2E are presented in Tables 5 and 6. We computed both F1 score and joint accuracy (Mrkšić

Table 7: NLU results with exact match accuracy $(\%)$ on the weather dataset.
| Model / Data | 5% | 10% | 25% | 50% | 100% |
|---|---|---|---|---|---|
| Decoupled | 0.632 | 0.667 | 0.703 | 0.719 | 0.725 |
| Augmentation* | 0.635 | 0.677 | 0.703 | 0.727 | - |
| JUGbasic | 0.634 | 0.673 | 0.701 | 0.720 | 0.726 |
| JUGmarginal | 0.627 | 0.671 | 0.711 | 0.721 | 0.722 |
| JUGsemi* | 0.670 | 0.701 | 0.725 | 0.733 | - |
Table 8: NLG results with BLEU on the weather dataset.

et al., 2017) of slot-values as a more solid NLU measurement. Joint accuracy is defined as the proportion of test examples whose slot-value pairs are all correctly predicted. For NLG, both BLEU-4 and semantic accuracy are computed. Semantic accuracy measures the proportion of correctly generated slot values in the produced utterances. From the results, we observe that Decoupled can be improved by generating pseudo data (Augmentation), which forms a stronger baseline. However, all our model variants perform better than the baselines on both NLU and NLG. When using only labelled data, our model JUGmarginal surpasses Decoupled across all four measurements. The gains mainly come from the fact that the model uses auto-encoding objectives to help learn a shared semantic space. Compared to Augmentation, JUGmarginal also has a built-in mechanism to bootstrap pseudo data on the fly during training (see Section 3.4). When adding extra unlabelled data, our model JUGsemi gets a further performance boost and outperforms all baselines by a significant margin.

With the varying proportion of unlabelled data in

![](images/38220f9e151e3e8d61545e5f3bd58f74f98ab6fc5161766a3ba18145d4267126.jpg)
Figure 2: Visualisation of the latent variable $z$. Given a pair of $x$ and $y$, $z$ can be sampled from the posterior $q(z|x)$ or $q(z|y)$, denoted by blue and orange dots respectively.

the training set, we see that unlabelled data is helpful in almost all cases. Moreover, the performance gain is more significant when less labelled data is available. This indicates that the proposed model is especially helpful in low-resource setups, where there is a limited amount of labelled training examples but a larger pool of unlabelled ones.

The results for the weather dataset are presented in Tables 7 and 8.
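The NLU and NLG measurements used above can be sketched as simple set operations over slot-value pairs. The helpers below are simplified illustrations, not the paper's evaluation scripts; the slot names, values, and the substring-matching heuristic for semantic accuracy are our own stand-ins.

```python
def joint_accuracy(preds, golds):
    # Proportion of examples whose slot-value pairs are ALL correctly predicted
    return sum(set(p) == set(g) for p, g in zip(preds, golds)) / len(golds)

def semantic_accuracy(utterances, golds):
    # Proportion of gold slot values that surface in the generated utterance
    total = sum(len(g) for g in golds)
    hits = sum(value in utt for utt, gold in zip(utterances, golds)
               for _slot, value in gold)
    return hits / total

# Hypothetical examples in the spirit of the E2E annotation
golds = [[("food", "english"), ("price_range", "cheap")], [("name", "giraffe")]]
preds = [[("food", "english"), ("price_range", "cheap")], [("name", "sousa")]]
utts = ["cheap english food near the cafe", "giraffe is a restaurant"]
print(joint_accuracy(preds, golds))    # 0.5: the second example has a wrong name
print(semantic_accuracy(utts, golds))  # 1.0: every gold value appears verbatim
```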
In this dataset, NLU is more like a semantic parsing task (Berant et al., 2013), and we use exact match accuracy as its measurement. Meanwhile, NLG is measured by BLEU. The results reveal a very similar trend to that on E2E. Generated examples can be found in the Appendix.

# 4.4 Analysis

In this section we further analyse the impact of the shared latent variable and the impact of utilising unlabelled data.

# 4.4.1 Visualisation of Latent Space

As mentioned in Section 2.1, the latent variable $z$ can be sampled from either posterior approximation, $q(z|x)$ or $q(z|y)$. We inspect the latent space in Figure 2 to find out how well the model learns intent sharing. We plot $z$ for the E2E dataset in a 2-dimensional space using a t-SNE projection (Maaten and Hinton, 2008).

We observe two interesting properties. First, for each data point $(x,y)$, the $z$ values sampled from $q(z|x)$ and $q(z|y)$ are close to each other. This reveals that the meanings of $x$ and $y$ are tied in the latent space. Second, there exist distinct clusters in the space of $z$. By further inspecting the actual examples within each cluster, we found that each cluster represents a similar meaning composition. For instance, the cluster cen
| Model | NLU | NLG |
|---|---|---|
| JUGbasic | 90.55 | 0.726 |
| JUGbasic (feed random z) | 38.13 | 0.482 |
+ +Table 9: A comparative study to evaluate the contribution of the learned latent variable $z$ in NLU/NLG decoding. Models are trained on the whole weather dataset. + +
| Method | NLU Mi | NLU Re | NLU Wr | NLG Mi | NLG Wr |
|---|---|---|---|---|---|
| Decoupled | 714 | 256 | 2382 | 5714 | 2317 |
| JUGbasic | 594 | 169 | 1884 | 4871 | 2102 |
Table 10: Error analysis on the E2E dataset. Numbers of missing (Mi), redundant (Re) and wrong (Wr) predictions of slot-value pairs are reported for NLU; numbers of missing or wrong generated slot values are listed for NLG. Lower numbers indicate better results. Both models are trained on $5\%$ of the training data.

tered at (-20, -40) contains {name, foodtype, price, rating, area, near}, while the cluster centered at (45, 10) contains {name, eattype, foodtype, price}. This indicates that the shared latent space serves as a conclusive global feature representation for NLU and NLG.

# 4.4.2 Impact of the Latent Variable

One novelty of our model is the introduction of the shared latent variable $z$ for natural language $x$ and formal representations $y$. A common problem in neural variational models is that, when coupled with a powerful autoregressive decoder, the decoder tends to learn to ignore $z$ and rely solely on itself to generate the data (Bowman et al., 2016; Chen et al., 2017; Goyal et al., 2017). To examine to what extent our model actually relies on the shared variable in both NLU and NLG, we seek an empirical answer by comparing the JUGbasic model with a variant which uses a random value of $z$ sampled from a normal distribution $N(0,1)$ during testing. From Table 9, we observe a large performance drop when $z$ is assigned random values. This suggests that JUG indeed relies heavily on the shared variable to produce good-quality $x$ or $y$.

We further analyse the various sources of errors to understand the cases in which $z$ helps. On the E2E dataset, a wrong NLU prediction comes from either predicting the not-mentioned label for slots present in the ground-truth semantics; predicting arbitrary values for slots not present in the ground-truth semantics; or predicting wrong values com
| Method | E2E NLU | E2E NLG | Weather NLU | Weather NLG |
|---|---|---|---|---|
| JUGbasic | 60.30 | 0.685 | 73.62 | 0.634 |
| +unlabelled x | 62.89 | 0.765 | 74.97 | 0.654 |
| +unlabelled y | 59.55 | 0.815 | 76.98 | 0.621 |
| +unlabelled x and y | 68.09 | 0.814 | 79.19 | 0.670 |
Table 11: Comparison of sources of unlabelled data for semi-supervised learning, using only utterances $(x)$, only semantic representations $(y)$, or both ($x$ and $y$). The JUGbasic model is trained on $5\%$ of the training data.

paring to ground truth. These three error types are referred to as Missing (Mi), Redundant (Re) and Wrong (Wr) in Table 10. For NLG, semantic errors consist of either missing or wrongly generated slot values for the given semantics (Wen et al., 2015). Our model makes fewer mistakes across all these error sources compared to the baseline Decoupled. We believe this is because the clustering property learned in the latent space provides better feature representations at a global scale, eventually benefiting both NLU and NLG.

# 4.4.3 Impact of Unlabelled Data Source

In Section 4.3, we found that the performance of our model can be further enhanced by leveraging unlabelled data. Since we used unlabelled utterances and unlabelled semantic representations together, it is unclear whether both contributed to the performance gain. To answer this question, we start with the JUGbasic model and experiment with adding unlabelled data from 1) only unlabelled utterances $x$; 2) only semantic representations $y$; 3) both $x$ and $y$. As shown in Table 11, adding unlabelled data from either single source ($x$ or $y$) improves the model to a certain extent. However, performance is maximised when both data sources are utilised. This strengthens the argument that our model can leverage bi-sourced unlabelled data more effectively via latent space sharing, improving NLU and NLG at the same time.
Another line of work focuses on converting single-turn user utterances to more structured meaning representations as a semantic parsing task (Zettlemoyer and Collins, 2005; Jia and Liang, 2016; Dong and Lapata, 2018; Damonte et al., 2019).

In comparison, Natural Language Generation (NLG) is scoped as the task of generating natural utterances from their formal representations. This has traditionally been handled with a pipelined approach (Reiter and Dale, 1997) comprising content planning and surface realisation (Walker et al., 2001; Stent et al., 2004). More recently, NLG has been formulated as an end-to-end learning problem where text strings are generated with recurrent neural networks conditioned on the formal representation (Wen et al., 2015; Dušek and Jurcicek, 2016; Dušek et al., 2020; Balakrishnan et al., 2019; Tseng et al., 2019).

There has been very recent work on performing NLU and NLG jointly. Both Ye et al. (2019) and Cao et al. (2019) explore the duality of semantic parsing and NLG: the former optimises two sequence-to-sequence models using dual information maximisation, while the latter introduces a dual learning framework for semantic parsing. Su et al. (2019) propose a learning framework for dual supervised learning (Xia et al., 2017) where both NLU and NLG models are optimised towards a joint objective. Their method brings benefits with annotated data in supervised learning, but does not allow semi-supervised learning with unlabelled data. In contrast to their work, we propose a generative model which couples NLU and NLG with a shared latent variable, and focus on exploring a coupled representation space between natural language and the corresponding semantic annotations. As shown in our experiments, this information sharing helps our model leverage unlabelled data for semi-supervised learning, which eventually benefits both NLU and NLG.
# 6 Conclusion

We proposed a generative model which couples natural language and formal representations via a shared latent variable. Since the two spaces are coupled, we can exploit each unpaired data source and transfer the acquired knowledge to the shared meaning space. This eventually benefits both NLU and NLG, especially in low-resource scenarios. The proposed model is also suitable for other translation tasks between two modalities.

As a final remark, natural language is richer and more informal. NLU needs to handle ambiguous or erroneous user inputs, whereas the formal representations utilised by an NLG system are more precisely defined. In future work, we aim to refine our generative model to better emphasise this difference between the two tasks.

# Acknowledgments

Bo-Hsiang Tseng is supported by Cambridge Trust and the Ministry of Education, Taiwan. This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk) funded by EPSRC Tier-2 capital grant EP/P020259/1.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained decoding for neural NLG from compositional representations in task-oriented dialogue. arXiv preprint arXiv:1906.07220.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffith, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Association for Computational Linguistics.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013.
Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, Seattle, Washington, USA. Association for Computational Linguistics. +Samuel R Bowman, Luke Vilnis, Oriol Vinyals, Andrew Dai, Rafal Jozefowicz, and Samy Bengio. 2016. Generating sentences from a continuous space. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 10-21. +Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, and Kai Yu. 2019. Semantic parsing with dual learning. ACL. +Xi Chen, Diederik P. Kingma, Tim Salimans, Yan Duan, Prafulla Dhariwal, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2017. Variational lossy autoencoder. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, + +France, April 24-26, 2017, Conference Track Proceedings. +Marco Damonte, Rahul Goel, and Tagyoung Chung. 2019. Practical semantic parsing for spoken language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers), pages 16-23. +Li Dong and Mirella Lapata. 2018. Coarse-to-fine decoding for neural semantic parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 731-742. +Ondrej Dusek and Filip Jurcicek. 2016. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 45-51. +Ondrej Dušek, Jekaterina Novikova, and Verena Rieser. 2018. Findings of the e2e nlg challenge. In Proceedings of the 11th International Conference on Natural Language Generation, pages 322-328. +Ondrej Dušek, Jekaterina Novikova, and Verena Rieser. 2020. 
Evaluating the state-of-the-art of end-to-end natural language generation: The e2e nlg challenge. Computer Speech & Language, 59:123-156. +Anirudh Goyal Alias Parth Goyal, Alessandro Sordoni, Marc-Alexandre Côté, Nan Rosemary Ke, and Yoshua Bengio. 2017. Z-forcing: Training stochastic recurrent networks. In Advances in neural information processing systems, pages 6713-6723. +Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. 2016. Dual learning for machine translation. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 820-828. Curran Associates Inc. +Matthew Henderson, Milica Gašić, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative spoken language understanding using word confusion networks. In 2012 IEEE Spoken Language Technology Workshop (SLT), pages 176-181. IEEE. +Matthew Henderson, Blaise Thomson, and Steve Young. 2014. Word-based dialog state tracking with recurrent neural networks. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 292-299. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. + +Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12-22. +Juraj Juraska, Panagiotis Karagiannis, Kevin Bowden, and Marilyn Walker. 2018. A deep ensemble model with slot alignment for sequence-to-sequence natural language generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 152-162. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Diederik P Kingma and Max Welling. 2013. 
Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Dong-Hyun Lee. 2013. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, page 2.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777-1788.
Jekaterina Novikova, Ondrej Dušek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to-end generation. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 201-206, Saarbrücken, Germany. Association for Computational Linguistics.
Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, and Luke Zettlemoyer. 2019. Span-based hierarchical semantic parsing for task-oriented dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1520-1526, Hong Kong, China. Association for Computational Linguistics.
Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Language Engineering, 3(1):57-87.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.
In Proceedings of the 42nd annual meeting on association for computational linguistics, page 79. Association for Computational Linguistics. +Shang-Yu Su, Chao-Wei Huang, and Yun-Nung Chen. 2019. Dual supervised learning for natural language understanding and generation. ACL. +Kai Sun, Lu Chen, Su Zhu, and Kai Yu. 2014. The sjtu system for dialog state tracking challenge 2. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIG-DIAL), pages 318-326. +Bo-Hsiang Tseng, Paweł Budzianowski, Yen-chen Wu, and Milica Gasic. 2019. Tree-structured semantic encoder with knowledge sharing for domain adaptation in natural language generation. In Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue, pages 155–164. +Miroslav Vodolan, Rudolf Kadlec, Jan Kleindienst, and V Parku. 2017. Hybrid dialog state tracker with asr features. EACL 2017, page 205. +Marilyn A Walker, Owen Rambow, and Monica Rogati. 2001. Spot: A trainable sentence planner. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1-8. Association for Computational Linguistics. +Tsung-Hsien Wen, Milica Gasic, Nikola Mrkšić, PeiHao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1711-1721. +Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229-256. +Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3789-3798. JMLR.org. +Hai Ye, Wenjie Li, and Lu Wang. 2019. 
Jointly learning semantic parser and natural language generator via dual information maximization. arXiv preprint arXiv:1906.00575.
Steve Young, Milica Gašić, Simon Keizer, François Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150-174.

Tao Yu, Zifan Li, Zilin Zhang, Rui Zhang, and Dragomir Radev. 2018. TypeSQL: Knowledge-based type-aware neural text-to-SQL generation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2, pages 588-594.
Luke S Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 658-666. AUAI Press.
Luke S Zettlemoyer and Michael Collins. 2012. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. arXiv preprint arXiv:1207.1420.
Xiaodong Zhang and Houfeng Wang. 2016. A joint model of intent determination and slot filling for spoken language understanding. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 2993-2999. AAAI Press.
# A Appendices

# A.1 Derivation of Lower Bounds

We derive the lower bound for $\log p(x,y)$ as follows:

$$
\begin{array}{l} \log p (x, y) = \log \int_ {z} p (x, y, z) \\ = \log \int_ {z} \frac {p (x , y , z) q (z | x)}{q (z | x)} \\ = \log \int_ {z} \frac {p (x | z , y) p (y | z , x) p (z) q (z | x)}{q (z | x)} \\ = \log \mathbb {E} _ {q (z | x)} \frac {p (x | z , y) p (y | z , x) p (z)}{q (z | x)} \\ \geq \mathbb {E} _ {q (z | x)} \log \frac {p (x | z , y) p (y | z , x) p (z)}{q (z | x)} \\ = \mathbb {E} _ {q (z | x)} [ \log p (x | z, y) + \log p (y | z, x) ] \\ - \operatorname {K L} [ q (z | x) | | p (z) ] \tag {14} \\ \end{array}
$$

where $q(z|x)$ represents an approximated posterior. This derivation gives us Equation 6 in the paper. Similarly, we can derive the alternative lower bound in Equation 7 by introducing $q(z|y)$ instead of $q(z|x)$.

For the marginal log-likelihood $\log p(x)$ or $\log p(y)$, the lower bound is derived as follows:

$$
\begin{array}{l} \log p (x) = \log \int_ {y} \int_ {z} p (x, y, z) \\ = \log \int_ {y} \int_ {z} \frac {p (x | z , y) p (y) p (z) q (z | x) q (y | z , x)}{q (z | x) q (y | z , x)} \\ = \log \mathbb {E} _ {q (y | z, x)} \mathbb {E} _ {q (z | x)} \frac {p (x | z , y) p (y) p (z)}{q (z | x) q (y | z , x)} \\ \geq \mathbb {E} _ {q (y | z, x)} \mathbb {E} _ {q (z | x)} \log \frac {p (x | z , y) p (y) p (z)}{q (z | x) q (y | z , x)} \\ = \mathbb {E} _ {q (y | z, x)} \mathbb {E} _ {q (z | x)} \log p (x | z, y) \\ - \operatorname {K L} [ q (z | x) | | p (z) ] - \operatorname {K L} [ q (y | x, z) | | p (y) ] \tag {15} \\ \end{array}
$$

Note that the resulting lower bound consists of three terms: a reconstruction of $x$, a KL divergence which regularises the space of $z$, and a KL divergence which regularises the space of $y$. We drop the last term in our optimisation objective in Equation 9, since we do not impose any prior assumption on the output space of the NLU model.
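The inequality step in the derivation above (Jensen's inequality) can be checked numerically on a toy model. In the sketch below, all densities are invented one-dimensional Gaussians: the Monte Carlo average of $\log w$, where $w$ is the importance weight $p/q$, is the ELBO, and it never exceeds the log of the average weight, which estimates the true log-marginal.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D setup: q(z|x) = N(0.5, 1), prior p(z) = N(0, 1), and a
# stand-in likelihood term playing the role of log p(x|z,y) + log p(y|z,x).
z = rng.normal(0.5, 1.0, size=200_000)
log_q = -0.5 * (z - 0.5) ** 2 - 0.5 * np.log(2 * np.pi)
log_prior = -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
log_lik = -0.5 * (1.2 - z) ** 2

log_w = log_prior + log_lik - log_q          # log importance weight
elbo = log_w.mean()                          # E_q[log w]: the lower bound
log_marginal = np.log(np.exp(log_w).mean())  # log E_q[w]: importance estimate
# Jensen's inequality guarantees elbo <= log_marginal
```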
+ +Analogously we can derive the lower bound for $\log p(y)$ . We also do not impose any prior assumption on the output space of the NLG model, which leads us to Equation 10. + +# A.2 Generated Examples + +
**Reference:**
- x: "for those prepared to pay over £30, giraffe is a restaurant located near the six bells."
- y: {name=giraffe, eat_type=restaurant, price_range=more than £30, near=the six bells}

**Prediction by Decoupled:**
- x: "near the six bells, there is a restaurant called giraffe that is children friendly."
- y: {name=travelers rest beefeater, price_range=more than £30, near=the six bells} (wrong name, missing eat_type)

**Prediction by JUGsemi:**
- x: "giraffe is a restaurant near the six bells with a price range of more than £30."
- y: {name=giraffe, eat_type=restaurant, price_range=more than £30, near=the six bells} (exact match)
Table 12: An example from the E2E dataset and predictions generated by the baseline model Decoupled and the proposed model JUGsemi. $x$ and $y$ denote the natural language and the corresponding semantic representation. Errors are highlighted following the predictions.
**Reference:**
- x: "it's going to be __arg_temp__ and __arg_cloud_coverage__ __arg_colloquial__ between __arg_start_time__ and __arg_end_time__"
- y: [[dg_inform] [arg_task get_forecast] [arg_temp arg_temp] [arg_cloud_coverage arg_cloud_coverage] [arg_date_time_range] [arg_start_time arg_start_time] [arg_end_time arg_end_time] [arg_colloquial arg_colloquial]]

**Prediction by Decoupled:**
- x: "it will be __arg_temp__ degrees and __arg_cloud_coverage__ from __arg_start_time__ to __arg_end_time__"
- y: [[dg_inform] [arg_task get_forecast] [arg_temp arg_temp] [arg_cloud_coverage arg_cloud_coverage] [arg_date_time] [arg_colloquial arg_colloquial]] [[dg_inform] [arg_task get_forecast] [arg_temp arg_temp] [arg_cloud_coverage arg_cloud_coverage] [arg_date_time_range] [arg_start_time arg_start_time] [arg_end_time arg_end_time]] (no match)

**Prediction by JUGsemi:**
- x: "the temperature will be around __arg_temp__ degrees __arg_colloquial__ between __arg_start_time__ and __arg_end_time__"
- y: [[dg_inform] [arg_task get_forecast] [arg_temp arg_temp] [arg_cloud_coverage arg_cloud_coverage] [arg_date_time_range] [arg_start_time arg_start_time] [arg_end_time arg_end_time] [arg_colloquial arg_colloquial]] (exact match)
Table 13: An example from the weather dataset with predictions generated by the baseline model Decoupled and the proposed model JUG_semi. $x$ and $y$ denote the natural language and the corresponding semantic representation, respectively. NLU results are highlighted following predictions.

diff --git
a/agirlhasanamedetectingauthorshipobfuscation/full.md b/agirlhasanamedetectingauthorshipobfuscation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2845cb2b2243e73db404b995989d861ce012be93 --- /dev/null +++ b/agirlhasanamedetectingauthorshipobfuscation/full.md @@ -0,0 +1,285 @@

# A Girl Has A Name: Detecting Authorship Obfuscation

Asad Mahmood

Zubair Shafiq

Padmini Srinivasan

The University of Iowa

{asad-mahmood,zubair-shafiq,padmini-srinivasan}@uiowa.edu

# Abstract

Authorship attribution aims to identify the author of a text based on stylometric analysis. Authorship obfuscation, on the other hand, aims to protect against authorship attribution by modifying a text's style.
In this paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an adversarial threat model. An obfuscator is stealthy to the extent an adversary finds it challenging to detect whether or not a text modified by the obfuscator is obfuscated – a decision that is key to the adversary interested in authorship attribution. We show that the existing authorship obfuscation methods are not stealthy as their obfuscated texts can be identified with an average F1 score of 0.87. The reason for the lack of stealthiness is that these obfuscators degrade text smoothness, as ascertained by neural language models, in a detectable manner. Our results highlight the need to develop stealthy authorship obfuscation methods that can better protect the identity of an author seeking anonymity. + +# 1 Introduction + +Authorship attribution aims to identify the author of a text using stylometric techniques designed to capitalize on differences in the writing style of different authors. Owing to recent advances in machine learning, authorship attribution methods can now identify authors with impressive accuracy (Abbasi and Chen, 2008) even in challenging settings such as cross-domain (Overdorf and Greenstadt, 2016) and at a large-scale (Narayanan et al., 2012; Ruder et al., 2016). Such powerful authorship attribution methods pose a threat to privacy-conscious users such as journalists and activists who may wish to publish anonymously (Times, 2018; Anonymous, 2018). + +Authorship obfuscation, a protective countermeasure, aims to evade authorship attribution by obfuscating the writing style in a text. Since it + +is challenging to accomplish this manually, researchers have developed automated authorship obfuscation methods that can evade attribution while preserving semantics (PAN, 2018). 
However, a key limitation of prior work is that authorship obfuscation methods do not consider the adversarial threat model where the adversary is "obfuscation aware" (Karadzhov et al., 2017; Potthast et al., 2018; Mahmood et al., 2019). Thus, in addition to evading attribution and preserving semantics, it is important that authorship obfuscation methods are "stealthy" - i.e., they need to hide the fact that text was obfuscated from the adversary. + +In this paper, we investigate the stealthiness of state-of-the-art authorship obfuscation methods. Our intuition is that the application of authorship obfuscation results in subtle differences in text smoothness (as compared to human writing) that can be exploited for obfuscation detection. To capitalize on this intuition, we use off-the-shelf pre-trained neural language models such as BERT and GPT-2 to extract text smoothness features in terms of word likelihood. We then use these as features to train supervised machine learning classifiers. The results show that we can accurately detect whether or not a text is obfuscated. + +Our findings highlight that existing authorship obfuscation methods themselves leave behind stylistic signatures that can be detected using neural language models. Our results motivate future research on developing stealthy authorship obfuscation methods for the adversarial threat model where the adversary is obfuscation aware. + +Our key contributions are as follows: + +- We study the problem of obfuscation detection for state-of-the-art authorship obfuscation methods. This and the underlying property of stealthiness has been given scant attention in the literature. We also note that this problem is potentially more challenging + +than the related one of synthetic text detection since most of the original text can be retained during obfuscation. + +- We explore 160 distinct BERT and GPT-2 based neural language model architectures designed to leverage text smoothness for obfuscation detection. 
+- We conduct a comprehensive evaluation of these architectures on 2 different datasets. Our best architecture achieves F1 of 0.87, on average, demonstrating the serious lack of stealthiness of existing authorship obfuscation methods. + +Paper Organization: The rest of this paper proceeds as follows. Section 2 summarizes related work on authorship obfuscation and obfuscation detection. Section 3 presents our proposed approach for obfuscation detection using neural language models. Section 4 presents details of our experimental setup including the description of various authorship obfuscation and obfuscation detection methods. We present the experimental results in Section 5 before concluding. The relevant source code and data are available at https://github.com/asad1996172/Obfuscation-Detection. + +# 2 Related Work + +In this section, we separately discuss prior work on authorship obfuscation and obfuscation detection. + +# 2.1 Authorship Obfuscation + +Given the privacy threat posed by powerful authorship attribution methods, researchers have started to explore text obfuscation as a countermeasure. Early work by Brennan et al. (2012) instructed users to manually obfuscate text such as by imitating the writing style of someone else. Anonymouth (McDonald et al., 2012, 2013) was proposed to automatically identify the words and phrases that were most revealing of an author's identity so that these could be manually obfuscated by users. Follow up research leveraged automated machine translation to suggest alternative sentences that can be further tweaked by users (Almishari et al., 2014; Keswani et al., 2016). Unfortunately, these methods are not effective or scalable because it is challenging to manually obfuscate text even with some guidance. + +Moving towards full automation, the digital text forensics community (Potthast and Hagen, 2018) has developed rule-based authorship obfuscators (Mansoorizadeh et al., 2016; Karadzhov et al., 2017; Castro-Castro et al., 2017). 
For example, Karadzhov et al. (2017) presented a rule-based obfuscation approach to adapt the style of a text towards the "average style" of the text corpus. Castro et al. (2017) presented another rule-based obfuscation approach to "simplify" the style of a text.

Researchers have also proposed search and model based approaches for authorship obfuscation. For example, Mahmood et al. (2019) proposed a genetic algorithm approach to "search" for words that, when changed using a sentiment-preserving word embedding, would have the maximum adverse effect on authorship attribution. Bevendorff et al. (2019) proposed a heuristic-based search algorithm to find words that, when changed using operators such as synonyms or hypernyms, increased the stylistic distance to the author's text corpus. Shetty et al. (2018) used Generative Adversarial Networks (GANs) to "transfer" the style of an input text to a target style. Emmery et al. (2018) used auto-encoders with a gradient reversal layer to "de-style" an input text (aka style invariance).

# 2.2 Obfuscation Detection

Prior work has successfully used stylometric analysis to detect manual authorship obfuscation (Juola, 2012; Afroz et al., 2012). The intuition is that humans tend to follow a particular style as they try to obfuscate a text. In a related area, Shahid et al. (2017) used stylometric analysis to detect whether or not a document was "spun" by text spinners. We show later that these stylometric methods do not accurately detect more advanced automated authorship obfuscation methods.

There is increasing interest in distinguishing synthetic text generated using deep learning based language models such as BERT and GPT-2 from human written text. Using contextual word likelihoods, as estimated using a pre-trained language model (Radford et al., 2019), Gehrmann et al. (2019) were able to raise the accuracy of humans at detecting synthetic text from $54\%$ to $72\%$. Zellers et al.
(2019) showed that a classifier based on a language model can accurately detect synthetic text generated by the same language model. + +However, the detection accuracy degrades when different language models are used to generate and to detect. Bakhtin et al. (2019) also showed that the detection accuracy degrades when the synthetic text is generated using a language model trained on a different corpus. + +In summary, recent research has leveraged language models to detect their generated synthetic text. However, in obfuscation we start with human written text and make modifications such that text semantics is still preserved. This is in part achieved by retaining chunks of the original writing. Thus, the quirks of the obfuscator will be mingled in unpredictable proportions and ways with the author's original writing style. This makes the detection of obfuscated text different and potentially more challenging than synthetic text detection. To the best of our knowledge, this work presents the first systematic study of the detection of automatically obfuscated text. + +# 3 Proposed Approach + +# 3.1 Intuition + +An automated authorship obfuscator changes the input text so that it evades authorship attribution while preserving semantics. The quality and smoothness of automated text transformations using the state-of-the-art obfuscators differ from that of human written text (Mahmood et al., 2019). Therefore, the intuition behind our obfuscation detectors is to exploit the differences in text smoothness between human written and obfuscated texts. We capture text smoothness using powerful pretrained context aware neural language models. A text with a relatively greater proportion of high likelihood words is likely to be more smooth. + +# 3.2 Detector Architectures + +Figure 1 shows the pipeline of our method for detecting whether or not a given text is obfuscated. 
First, a language model is used to extract the likelihood (in the form of a probability or a rank) for each word in the text. Second, these likelihoods are used to build a smoothness representation for the text. This is input to a supervised machine learning model that is trained to classify the text as human written or obfuscated. The three steps correspond to three significant architectural dimensions of our detectors, with multiple algorithmic options in each dimension. Combinations of choices along each dimension yield different architectures that can be used by an adversary to detect obfuscated documents. We detail each dimension next.

# 3.2.1 Word likelihood extraction

Given a word sequence, language models are designed to predict the next word. They do this by building contextual models of word occurrences as probability distributions over the full vocabulary. Then some heuristic is used to pick the next word, e.g., selecting the word with the highest probability. In our case, instead of predicting words, we extract the likelihood from the language model (either as a probability or as a rank) for each word in the text given its context.

The language model has a critical role. Thus, we use neural language models with deep architectures, trained on large amounts of data, which are better at identifying both long-term and short-term context. In order to imitate an adversary who may not have the significant resources needed to train such models, we use off-the-shelf pre-trained neural language models. Specifically, we choose the well-known context-aware neural language models GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2018). We choose both as they use different approaches. GPT-2 has been shown to perform better than BERT (Gehrmann et al., 2019) at synthetic text detection, with word rank giving higher performance than word probability. Their relative merit for obfuscation detection is unknown.

1) GPT-2.
GPT-2, released by OpenAI in 2019, uses at its core a variation of the "transformer" architecture, an attention-based model (Vaswani et al., 2017), and is trained on text from 45 million outbound links on Reddit (40 GB of text). We use GPT-2 to compute the conditional probability for word $i$ as $p(w_i | w_{1\dots i - 1})$. The position of $w_i$ in the sorted list (descending order of probability) of vocabulary words gives the word rank. The authors (Radford et al., 2019) trained four versions of GPT-2 differing in architecture size. Of these, we used the small and medium versions containing 117M and 345M parameters, respectively. The authors eventually also released a large version containing 762M parameters and a very large version containing 1542M parameters. We did not use them because only the small and medium versions were released at the time of our experimentation.

![](images/da76e5b9d74d39f2f1804adc900ee90a9b9eb5a8098cea30fe4aea80a56ef187.jpg)
Figure 1: Pipeline for obfuscation detection

2) BERT. BERT, released by Google in 2018, is also based on transformers. It is trained on text from Wikipedia (2.5B words) and BookCorpus (800M words). BERT considers a bidirectional context, unlike the uni-directional context considered by GPT-2. Thus, in BERT the conditional occurrence probability for word $i$ is $p(w_i | w_{i-k\dots i-1}, w_{i+1\dots i+k})$, where $k$ is the window size in each direction. Rank is computed in a similar way as for GPT-2. We use both pre-trained BERT models: BERT BASE with 110M parameters and BERT LARGE with 340M parameters.

We implement likelihood extraction for both GPT-2 and BERT using code made available by the Giant Language Model Test Room (GLTR) tool.

# 3.2.2 Feature Representation

We experiment with two different representations of smoothness. Each is explored with occurrence probabilities and with ranks.

1) Binning based features: Text smoothness is represented by the likelihood of words in text.
A text with a greater proportion of high likelihood words is likely to be smoother. We aggregate this information using fixed-size bins representing different likelihood ranges. For probabilities we create bin sizes of 0.001, 0.005 and 0.010. For ranks we create bin sizes of 10, 50 and 100. Thus, for example, one feature representation is to consider bins of ranks from 0 to 10, 11 to 20, 21 to 30 etc. Each bin contains the proportion of words in the document with likelihood in that range.

2) Image based features: Since the word likelihood values received from language models are in essence signals, we explore signal detection approaches as well. For example, for audio classification, Hershey et al. (2017) store plots of the log-mel spectrogram of the audio as images and then apply image classification methods. VGG (Simonyan and Zisserman, 2014) was one of the top performers among the different classifiers they tested. Inspired by them, we explore obfuscation detection via image classification. Specifically, we explore a transfer learning approach wherein we use the VGG-19 classifier trained for image classification on the ImageNet dataset. For our method, we sort the extracted likelihood values for the text in descending order, plot these values, and save the plot as an image. This image is then processed by the pre-trained VGG-19. We extract the document's representation from the last flatten layer of VGG-19 (before the fully connected layers), as it contains high-level information regarding edges and patterns in the image. We expect the resulting feature representation vector to capture information regarding text smoothness.

# 3.2.3 Classification

We experiment with Support Vector Machine (SVM) with a linear kernel; Random Forest Classifier (RFC), an ensemble learning method; K Nearest Neighbor (KNN), a nonparametric method; Artificial Neural Network (ANN), a parametric method; and Gaussian Naive Bayes (GNB), a probabilistic method.
All classifiers are trained using the default parameters from scikit-learn, except for ANN, where we use the lbfgs solver instead of adam because it is more performant and works well on smaller datasets.

# 3.2.4 Detection Architectures

Options selected for each dimension combine to form a distinct obfuscation detection architecture. With 4 language models giving probabilities or ranks as output, 4 features (3 binning based features and 1 image based feature) and 5 different classifiers, we experiment with a total of 160 distinct architectures. The assumption here is that a determined adversary will similarly look for the most effective obfuscation detector.

4 https://keras.io/applications/#vgg19
5 http://www.image-net.org/
6 The terms 'text' and 'document' are used interchangeably.
7 https://scikit-learn.org/stable/

# 4 Experimental Setup

# 4.1 Authorship Obfuscation Approaches

As state-of-the-art automated authorship obfuscators, we identified the top two systems (Potthast et al., 2018) from PAN, a shared CLEF task. We also chose Mutant-X, a search based system presented in (Mahmood et al., 2019), which shows better performance than the PAN obfuscation systems. These are detailed next.

Document Simplification (Castro-Castro et al., 2017). This approach obfuscates by applying rule-based text simplifications to the input document. The process is as follows. 1) If the number of contractions in the document is greater than the number of expansions, then replace all contractions with expansions; otherwise replace all expansions with contractions. 2) Simplify by removing parenthetical texts that do not contain any named entity, discourse markers or appositions. 3) Replace words with synonyms that have not already been used in the text. We implement this approach and refer to it as DS-PAN17.

Style Neutralization (Karadzhov et al., 2017). This system is also a rule-based text obfuscator.
First, they calculate average values over the whole corpus for stylometric features such as the stopword to non-stopword ratio, the punctuation to word count ratio and the average number of words per sentence. Next, they calculate the values of the same stylometric features for the input document. Finally, using text transformation rules (e.g., replace ! with !!, merge or split sentences etc.), they move the document's stylometric feature values towards the corpus averages. We evaluate this approach using the code provided by the authors and refer to it as SN-PAN16.

MUTANT-X (Mahmood et al., 2019). This system uses a genetic algorithm (GA) in combination with an authorship attribution system to identify words that, when changed, would have the highest positive effect towards obfuscation. Text transformations are done using a sentiment-preserving variation of Word2Vec (Yu et al., 2017). The authors present two versions: MUTANT-X writeprintsRFC, built using a traditional machine learning based authorship attribution system, and MUTANT-X embeddingCNN, built using a deep learning based authorship attribution system. We evaluate MUTANT-X embeddingCNN using code made available by the authors.

# 4.2 Data

We use the two data collections which were used by Mahmood et al. (2019).

1) Extended Brennan Greenstadt corpus. This text corpus from (Brennan et al., 2012) contains 699 documents written by 45 unique authors. Documents are mainly academic in nature, but they do not contain any citations or section headings and have under 500 words, as instructed by the data collectors. We refer to this collection as EBG.

2) Blog authorship corpus. This text corpus from (Schler et al., 2006) contains more than 600,000 blogger.com blog posts written by 19,320 unique authors. These texts are more casual in nature and are written without any instructions. We refer to this collection as BLOG.
The authors in (Mahmood et al., 2019) tested their GA obfuscators using 49 EBG and 200 BLOG documents written by 10 authors.

# 4.2.1 Obfuscated versus Evaded Datasets

A document is considered obfuscated if it has been processed by an authorship obfuscation tool. Thus we processed each document in the 49 EBG and 200 BLOG sets using each of the three obfuscation systems in turn (DS-PAN17, SN-PAN16 and MUTANT-X embeddingCNN). This resulted in 127 obfuscated EBG documents and 571 obfuscated BLOG documents.

An obfuscated document is viewed as an evaded document if it successfully evades authorship attribution. Using the writeprintsRFC authorship attributor (Mahmood et al., 2019) for this purpose, we process all obfuscated documents (127 and 571 for EBG and BLOG respectively). This resulted
in 59 evaded EBG documents (out of 127) and 184 evaded BLOG documents (out of 571).

| Dataset | Train: Original | Train: Obfuscated/Evaded | Test: Original | Test: Obfuscated/Evaded |
| --- | --- | --- | --- | --- |
| EBG obfuscated | 431 | 80 | 268 | 47 |
| EBG evaded | 236 | 29 | 235 | 30 |
| BLOG obfuscated | 2000 | 292 | 1900 | 277 |
| BLOG evaded | 700 | 71 | 1000 | 113 |

Table 1: Number of original and obfuscated/evaded documents in the train and test sets of each of the four datasets

Despite the type and extent of transformations made, the remaining obfuscated documents did not evade the attributor. The evaded documents are those where the modification strategy somehow crossed an implicit threshold for evading authorship attribution. Thus, we surmise that the evaded documents are likely to be relatively less smooth. It is noteworthy that evaded documents are the realistic goal from a privacy-seeking user's perspective. Thus, in addition to assessing detection for obfuscated data, we also assess detection for evaded data for both (EBG and BLOG) obfuscated datasets.

# 4.2.2 Final datasets and cross validation

For each dataset, we use the original and obfuscated/evaded documents by 5 (out of 10) authors for training and the documents by the remaining 5 authors for testing. An important decision to make here concerns the distribution of original documents versus obfuscated (or evaded) ones in our training and testing sets. We have no real-world scenario to mimic, in that we have not encountered any real-world use of automated obfuscators and their outputs. We therefore make the datasets under the reasonable assumption that original documents are in the vast majority. Thus we create train and test datasets with roughly $85\%-90\%$ original documents. We do this by adding more original documents from each data collection to achieve these proportions. Overall, we ensure that each author's documents (of any version) are all either in the train or in the test set. Table 1 provides the size details for our obfuscated and evaded datasets.

# 4.3 Obfuscation Detection Methods

# 4.3.1 Methods from our architecture

We propose a total of 160 distinct architectures (see 3.2.4).
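This count can be sanity-checked by enumerating the grid of choices; the option labels below are informal shorthand for the dimensions described in Section 3.2, not identifiers from the paper's code:

```python
from itertools import product

# Dimension options from Section 3.2 (labels are our own shorthand)
language_models = ["GPT-2 117M", "GPT-2 345M", "BERT BASE", "BERT LARGE"]
outputs = ["probability", "rank"]          # likelihood representation
features = ["bins(small)", "bins(medium)", "bins(large)", "VGG-19 image"]
classifiers = ["SVM", "RFC", "KNN", "ANN", "GNB"]

architectures = list(product(language_models, outputs, features, classifiers))
print(len(architectures))  # 4 * 2 * 4 * 5 = 160
```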
Testing these for each of the four datasets, we conduct a total of 640 distinct obfuscation detection experiments. In addition, we explore three other methods: the first is inspired by recent research on detecting synthetic text, and the other two were previously used to detect manually obfuscated documents.

# 4.3.2 Other Methods

1) GLTR (Gehrmann et al., 2019). The authors present a tool to help humans distinguish between original and machine generated synthetic text. Their tool uses pretrained language models to extract word likelihoods and presents their plot to the humans making the decision. Their best model uses the GPT-2 117M language model to extract word ranks, which are then put in 4 unequal range bins: 0-10, 10-100, 100-1000 and greater than 1000. We test the same method on our obfuscation detection problem. For obfuscation detection, we tested this feature set with linear SVM and RFC. Among these, linear SVM performed the best.

2) Character trigrams + KNN (Juola, 2012). This paper presents a stylometric approach to detect manually obfuscated documents. First, they preprocess each document by unifying case and separating punctuation (e.g., !! becomes ! !). Then they represent each document using character trigrams. Finally, the classification is performed using KNN with normalized cosine distance. We implement this approach to evaluate it against automated authorship obfuscators.

3) Writeprints + SVM (Afroz et al., 2012). This paper also uses stylometric features and is also focused on detecting manual obfuscation. It is similar to the approach in (Shahid et al., 2017), which uses stylometric features to perform spun document detection. Afroz et al. tested with 3 different feature sets, of which writeprints was the best. Their writeprints feature set contains character related features (e.g., total characters, percentage of uppercase letters etc.), word related features (e.g., total words, frequency of large words etc.)
and other features including frequencies of digits, special characters, function words etc. They use this feature set with SVM (using a poly kernel) to perform obfuscation detection. On our datasets, we found linear SVM to work better than the polynomial one, so we report results with linear SVM. We implement this writeprints approach with linear SVM as our final detector.

# 5 Results

Summary trends: After averaging, we find that for obfuscation detection, $25\%$ of all 160 architectures achieve an F1 score greater than 0.76, $50\%$ achieve an F1 score greater than 0.72 and a full $75\%$ achieve an F1 score greater than 0.52.

Figure 2 summarizes the performances of all 160 different architectures across the four datasets. We see that obfuscation detection is easier in EBG than in BLOG, with median EBG F1 scores being significantly better than median BLOG F1 scores (notches do not overlap (Krzywinski and Altman, 2014)). This can be explained by the fact that EBG contains scholarly articles that are relatively more consistent in their smoothness than blogs. This likely makes it easier to pick up on the difference in smoothness caused by obfuscated documents in EBG than in BLOG. We can also see that evaded documents achieve a higher maximum F1 score than obfuscated documents. This confirms our intuition, presented in 4.2.1, that evaded documents are likely to be less smooth and therefore easier to detect than obfuscated documents. However, we also see that F1 scores for the evaded datasets are less stable (greater box size) than for the obfuscated datasets. We believe this is because there are fewer documents in the evaded datasets as compared to their respective obfuscated datasets (see Table 1).
Performance evaluation: In terms of architecture selection, instead of choosing randomly across the 160 architectures, we make the following assumption.

![](images/4b309d3a93738727cad5193343c25a67becf505c26c56c8698974a7cc9cc8e4f.jpg)
Figure 2: Notched box plots for obfuscation detection F1 scores using all 160 architectures for each dataset.
| Dataset | Model | P | R | F1 |
| --- | --- | --- | --- | --- |
| EBG obfuscated | BERT LARGE + ranks + VGG-19 + RFC | 1.00 | 0.85 | 0.92 |
| EBG obfuscated | BERT LARGE + ranks + VGG-19 + SVM | 0.98 | 0.83 | 0.90 |
| EBG obfuscated | GLTR + SVM | 1.00 | 0.70 | 0.83 |
| EBG obfuscated | Writeprints + SVM | 0.67 | 0.38 | 0.49 |
| EBG obfuscated | Character trigrams + KNN | 0.64 | 0.15 | 0.24 |
| EBG evaded | BERT LARGE + probs + bins(0.010) + ANN | 1.00 | 0.90 | 0.95 |
| EBG evaded | BERT BASE + probs + VGG-19 + GNB | 1.00 | 0.90 | 0.95 |
| EBG evaded | GLTR + SVM | 1.00 | 0.80 | 0.89 |
| EBG evaded | Writeprints + SVM | 0.79 | 0.63 | 0.70 |
| EBG evaded | Character trigrams + KNN | 1.00 | 0.17 | 0.29 |
| BLOG obfuscated | BERT BASE + probs + VGG-19 + ANN | 0.85 | 0.71 | 0.77 |
| BLOG obfuscated | BERT BASE + probs + VGG-19 + SVM | 0.79 | 0.74 | 0.77 |
| BLOG obfuscated | GLTR + SVM | 0.92 | 0.40 | 0.56 |
| BLOG obfuscated | Writeprints + SVM | 0.71 | 0.41 | 0.52 |
| BLOG obfuscated | Character trigrams + KNN | 0.41 | 0.50 | 0.45 |
| BLOG evaded | GPT-2 345M + ranks + VGG-19 + GNB | 0.82 | 0.83 | 0.83 |
| BLOG evaded | BERT BASE + probs + VGG-19 + ANN | 0.79 | 0.81 | 0.80 |
| BLOG evaded | GLTR + SVM | 0.86 | 0.55 | 0.67 |
| BLOG evaded | Writeprints + SVM | 0.84 | 0.62 | 0.71 |
| BLOG evaded | Character trigrams + KNN | 0.86 | 0.50 | 0.63 |
Table 2: Obfuscation detection results (P: precision, R: recall, F1: F1 score).

tion. We assume that the adversary is knowledgeable about the various choices, tests these alternatives, and employs the best configuration. Thus, we present results for the best models, based on F1 scores for obfuscation detection, achievable by the adversary (Table 2). Table 2 also presents results for the three additional methods presented in Section 4.3.2. Our best BERT and GPT-2 combinations outperform all other methods on each of the four datasets in F1 score and recall. Along with GLTR + SVM, they achieve the best precision on the EBG datasets. In BLOG obfuscated, the GLTR-based method achieves the highest precision, whereas in BLOG evaded both the GLTR-based method and the character-trigram method top the chart; in each case, however, a sizeable penalty is paid in recall and therefore in F1 score. In summary, using the best of these methods, the adversary can detect evaded and obfuscated documents with an F1 score of 0.77 or higher (average 0.87 across datasets), which indicates that the tested state-of-the-art obfuscators are far from stealthy.

# 5.1 Detector Architecture Choices Analysis

Now we analyze the effect of the different choices made within each of the three dimensions depicted in Figure 1. As mentioned earlier, for a privacy-seeking user, evading author attribution is more important than just obfuscation. So, in this section we present architecture analysis results only for the evaded datasets, involving 320 experiments (160 each for EBG evaded and BLOG evaded).

![](images/8b05198ae601151e4cb843d846ce708dcb98720ce84b3187ae8df917039d2a63.jpg)

![](images/f6d5feecf9d8dcc646cfd407ec6734d7295414127b5cd7aa44479ab212e58fe0.jpg)

![](images/4d02a962ff69d6f869f176d5655232026b1bdf86320915b312915e4efd9ae0a1.jpg)

![](images/b4c18be4366f70f01fa831b2742c65afefc15f4fde3a506e68a6a4ae7d4baf7d.jpg)

![](images/3cd860f9a8077995fa077aa5d09e530c25231c84f67f6e06ca1aca2a390f0855.jpg)

![](images/2d40419796664bc0e2018f7034dd835fd169d4c22e571090bb33f2dddb762430.jpg)

![](images/0f2951536545982623833bb259a26287cc3549ebcb89666645b9d9b4fc9eb1ba.jpg)

![](images/e8842f3d01c4fd81c6dc4156fb1155c793f2cb41617816ecc1b77a9a4df00839.jpg)
Figure 3: Notched box plots of F1 scores for all dimensions across the two evaded datasets. For each dataset, every notched box plot in (a) is generated from 40 experiments (experiments correspond to architectures), (b) from 80 experiments, (c) from 120 experiments for binning and 40 for image, and (d) from 32 different experimental combinations.

# 5.1.1 Dimension 1: Language model & output type

Figure 3 (a) presents notched box plots comparing the distribution of F1 scores achieved by the language models across both datasets. In EBG evaded, BERT language models achieve a higher maximum F1 score (0.95) than GPT-2 (0.90-0.91). On the other hand, in BLOG evaded, GPT-2 345M achieves a higher maximum F1 score (0.83) than the others (0.75-0.80). In relative terms, BERT shows greater consistency in F1 score (box size) than GPT-2 in both datasets. We believe that the bidirectional nature of BERT helps it capture context, and consequently smoothness, better than the uni-directional GPT-2.

While the difference in maximum F1 score between ranks and probabilities is slight for each dataset (Figure 3 (b)), the box sizes show that the spread in F1 scores is smaller with probabilities than with ranks. Upon further investigation, we find that experiments which use probabilities with image-based features have an inter-quartile range of 0.05 and 0.1 for EBG and BLOG respectively, whereas for experiments using probabilities with binning-based features, this range is 0.32 for both datasets.
On the other hand, the inter-quartile range for experiments using ranks with image-based features is 0.08 and 0.05 for EBG and BLOG, whereas for experiments using ranks with binning-based features, this range is 0.49 and 0.42 respectively. This shows that for both datasets, the greater variation in F1 scores for ranks as compared to probabilities is caused by binning-based features. We believe that binning ranks with fixed bin sizes (10, 50, 100) is less stable for both BERT and GPT-2, which have different limits of ranks; this could account for the larger inter-quartile range using ranks.

# 5.1.2 Dimension 2: Feature type

The box sizes in Figure 3 (c) show that image-based features exhibit strikingly greater stability in F1 scores than binning-based features. Image-based features also achieve a significantly higher median F1 score than binning-based features for both datasets. This can in part be explained by the observation stated earlier that some of the tested bin size choices perform much worse than others because they are not fine-tuned. There is no difference between feature types in maximum F1 score for EBG, whereas in BLOG, image-based features achieve a somewhat higher maximum F1 score (0.83) than binning-based features (0.78). We believe that the reason why image-based features work so well is that VGG-19, the image model we use to extract features, is powerful enough to recognize the slopes in the plots, which represent smoothness in our case.

# 5.1.3 Dimension 3: Classifier

Figure 3 (d) shows that for EBG, ANN and GNB achieve the highest maximum F1 score (0.95), whereas for BLOG, GNB achieves the highest maximum F1 score (0.83). KNN and ANN consistently achieve far more stable F1 scores than the other classification methods. In both datasets, KNN achieves a significantly higher median F1 score than the other classification methods. ANN follows the same pattern, with the exception of GNB in BLOG evaded.
We believe that KNN and ANN achieve relatively high and stable performance because they can adapt to diverse and complex feature spaces.

# 5.2 Takeaway

In summary, we conclude that BERT with probabilities is a good choice for dimension 1. (We remind the reader that, in contrast, in the area of synthetic text detection (Gehrmann et al., 2019), GPT-2 had the edge over BERT.) Image-based features are a clear winner in dimension 2, while KNN and ANN are the best candidates for dimension 3. It is also key to note that the top-performing architectures in Table 2 differ across datasets, indicating the need for dataset-specific choices.

# 5.3 Insights

Figure 4 validates our intuition from Section 3 that the text generated by obfuscators is less smooth than the original text. Using the EBG obfuscated dataset and BERT BASE for illustration, we first sort the words in a document by estimated probability and plot the average probability at each rank. The steeper the fall of the curve, the lower the smoothness of the text. The plot shows that original documents are generally smoother than obfuscated documents. The average detection error rates (Mutant-X embeddingCNN: 0.72, SN-PAN16: 0.48, and DS-PAN17: 0.07) are also consistent with the plot. These results show that Mutant-X is the most stealthy obfuscator while DS-PAN17 is the least stealthy.

![](images/339cbd2d5bf9f1de77f8df933f36cba9ec2bb8473b9ca8c9d66e447e13b100a2.jpg)
Figure 4: Comparison between different obfuscators and original documents on the basis of average sorted probabilities extracted by BERT BASE for the EBG obfuscated dataset.

# 6 Conclusion

In this paper, we showed that state-of-the-art authorship obfuscation methods are not stealthy. We showed that the degradation in text smoothness caused by authorship obfuscators allows a detector to distinguish between obfuscated and original documents. Our proposed
obfuscation detectors were effective at classifying obfuscated and evaded documents (F1 score as high as 0.92 and 0.95, respectively). Our findings point to future research opportunities to build stealthy authorship obfuscation methods. We suggest that obfuscation methods should strive to preserve text smoothness in addition to semantics.

# References

2018. PAN @ CLEF 2018 - Author Obfuscation. https://pan.webis.de/clef18/pan18-web/author-obfuscation.html.
Ahmed Abbasi and Hsinchun Chen. 2008. Writeprints: A stylometric approach to identity-level identification and similarity detection in cyberspace. ACM Transactions on Information Systems (TOIS), 26(2):7.
Sadia Afroz, Michael Brennan, and Rachel Greenstadt. 2012. Detecting hoaxes, frauds, and deception in writing style online. In 2012 IEEE Symposium on Security and Privacy, pages 461-475. IEEE.
Mishari Almishari, Ekin Oguz, and Gene Tsudik. 2014. Fighting Authorship Linkability with Crowdsourcing. In ACM Conference on Online Social Networks (COSN).
Anonymous. 2018. I'm an Amazon Employee. My Company Shouldn't Sell Facial Recognition Tech to Police.
Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc'Aurelio Ranzato, and Arthur Szlam. 2019. Real or Fake? Learning to Discriminate Machine from Human Generated Text. arXiv preprint arXiv:1906.03351.
Janek Bevendorff, Martin Potthast, Matthias Hagen, and Benno Stein. 2019. Heuristic Authorship Obfuscation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 1098-1108.
Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. ACM Transactions on Information and System Security (TISSEC), 15(3):12.
Daniel Castro-Castro, Reynier Ortega Bueno, and Rafael Munoz. 2017. Author Masking by Sentence Transformation. In *Notebook for PAN at CLEF*.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.
BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Chris Emmery, Enrique Manjavacas, and Grzegorz Chrupala. 2018. Style Obfuscation by Invariance. In 27th International Conference on Computational Linguistics.
Sebastian Gehrmann, Hendrik Strobelt, and Alexander Rush. 2019. GLTR: Statistical detection and visualization of generated text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 111-116.
Shawn Hershey, Sourish Chaudhuri, Daniel PW Ellis, Jort F Gemmeke, Aren Jansen, R Channing Moore, Manoj Plakal, Devin Platt, Rif A Saurous, Bryan Seybold, et al. 2017. CNN architectures for large-scale audio classification. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 131-135. IEEE.
Patrick Juola. 2012. Detecting stylistic deception. In Proceedings of the Workshop on Computational Approaches to Deception Detection, pages 91-96. Association for Computational Linguistics.
Georgi Karadzhov, Tsvetomila Mihaylova, Yasen Kiprov, Georgi Georgiev, Ivan Koychev, and Preslav Nakov. 2017. The case for being average: A mediocrity approach to style masking and author obfuscation. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 173-185. Springer.
Yashwant Keswani, Harsh Trivedi, Parth Mehta, and Prasenjit Majumder. 2016. Author Masking through Translation. In *Notebook for PAN at CLEF* 2016, pages 890-894.
Martin Krzywinski and Naomi Altman. 2014. Points of significance: Visualizing samples with box plots. Nature Methods, 11(2):119-120.
Asad Mahmood, Faizan Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2019. A Girl Has No Name: Automated Authorship Obfuscation using Mutant-X. Proceedings on Privacy Enhancing Technologies, 2019(4):54-71.
Muharram Mansoorizadeh, Taher Rahgooy, Mohammad Aminiyan, and Mahdy Eskandari. 2016. Author obfuscation using WordNet and language models. In *Notebook for PAN at CLEF* 2016.
Andrew W.E. McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, and Rachel Greenstadt. 2012. Use fewer instances of the letter "i": Toward writing style anonymization. In International Symposium on Privacy Enhancing Technologies Symposium, pages 299-318. Springer.
Andrew W.E. McDonald, Jeffrey Ulman, Marc Barrowclift, and Rachel Greenstadt. 2013. Anonymouth Revamped: Getting Closer to Stylometric Anonymity. In *PETools: Workshop on Privacy Enhancing Tools*, volume 20.
Arvind Narayanan, Hristo Paskov, Neil Zhenqiang Gong, John Bethencourt, Emil Stefanov, Eui Chul Richard Shin, and Dawn Song. 2012. On the feasibility of internet-scale author identification. In 2012 IEEE Symposium on Security and Privacy, pages 300-314. IEEE.
Rebekah Overdorf and Rachel Greenstadt. 2016. Blogs, twitter feeds, and reddit comments: Cross-domain authorship attribution. Proceedings on Privacy Enhancing Technologies, 2016(3):155-171.
Martin Potthast, Felix Schremmer, Matthias Hagen, and Benno Stein. 2018. Overview of the Author Obfuscation Task at PAN 2018: A New Approach to Measuring Safety. In CLEF (Working Notes).
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
Sebastian Ruder, Parsa Ghaffari, and John G Breslin. 2016. Character-level and multi-channel convolutional neural networks for large-scale authorship attribution. arXiv preprint arXiv:1609.06686.
Jonathan Schler, Moshe Koppel, Shlomo Argamon, and James W Pennebaker. 2006. Effects of age and gender on blogging. In AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs, volume 6, pages 199-205.
Usman Shahid, Shehroze Farooqi, Raza Ahmad, Zubair Shafiq, Padmini Srinivasan, and Fareed Zaffar. 2017.
Accurate detection of automatically spun content via stylometric analysis. In 2017 IEEE International Conference on Data Mining (ICDM), pages 425-434. IEEE.
Rakshith Shetty, Bernt Schiele, and Mario Fritz. 2018. A4NT: Author attribute anonymity by adversarial training of neural machine translation. In 27th USENIX Security Symposium (USENIX Security 18), pages 1633-1650.
Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
NY Times. 2018. I am part of the resistance inside the Trump administration. NY Times. Retrieved from https://www.nytimes.com/2018/09/05/../trump-white-house-anonymous-resistance.html.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Liang-Chih Yu, Jin Wang, K Robert Lai, and Xuejie Zhang. 2017. Refining word embeddings using intensity scores for sentiment analysis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(3):671-681.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Conference on Neural Information Processing Systems (NeurIPS).
\ No newline at end of file diff --git a/agirlhasanamedetectingauthorshipobfuscation/images.zip b/agirlhasanamedetectingauthorshipobfuscation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..39c3f6d9e40c06496f43fa0cf478fcaa5dbd97c2 --- /dev/null +++ b/agirlhasanamedetectingauthorshipobfuscation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe75f41ebe9405611ee4072b71bed69e70494364188f2aed86685fdade4bc3bd +size 254135 diff --git a/agirlhasanamedetectingauthorshipobfuscation/layout.json b/agirlhasanamedetectingauthorshipobfuscation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..3cf69d89be00ae9b184e94793c079700f5e09f71 --- /dev/null +++ b/agirlhasanamedetectingauthorshipobfuscation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e13823aa5ebf11d5fee3a325e49b138cf75f12e9e2fa6ee9bd884eb8c85e82af +size 287360 diff --git a/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_content_list.json b/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1918392727e7f479c5d95559ad11db9893e32ad0 --- /dev/null +++ b/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f2a62cb6d08321a5ea14c65e715b0188a83882cde221e1d68f4f122bf6e1b4bc +size 100900 diff --git a/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_model.json b/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..b34f183b2cebcca8d0b26d59e7d338097adf323b --- /dev/null +++ b/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_model.json @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:006275724a6ad543a9100a95f08b7cfdaa9826332046b18891fbba84bcca7a24 +size 123382 diff --git a/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_origin.pdf b/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..9b7d9e161aa92145bc75bdef1c17f41269d94d3a --- /dev/null +++ b/agraphautoencodermodelofderivationalmorphology/d32c8427-de1e-4683-b2e1-b2983e45179f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6cf5c7b6510b294469397fa93c3ad165b0866e6467401e14f32580ed1a7fa3de +size 1341034 diff --git a/agraphautoencodermodelofderivationalmorphology/full.md b/agraphautoencodermodelofderivationalmorphology/full.md new file mode 100644 index 0000000000000000000000000000000000000000..70ab8854d24f3bf757d8b8457c84c20edd85ecb4 --- /dev/null +++ b/agraphautoencodermodelofderivationalmorphology/full.md @@ -0,0 +1,465 @@ +# A Graph Auto-encoder Model of Derivational Morphology + +Valentin Hofmann\*, Hinrich Schütze‡, Janet B. Pierrehumbert†* + +* Faculty of Linguistics, University of Oxford + +$^{\dagger}$ Department of Engineering Science, University of Oxford + +$^{\ddagger}$ Center for Information and Language Processing, LMU Munich + +valentin.hofmann@ling-phil.ox.ac.uk + +# Abstract + +There has been little work on modeling the morphological well-formedness (MWF) of derivatives, a problem judged to be complex and difficult in linguistics (Bauer, 2019). We present a graph auto-encoder that learns embeddings capturing information about the compatibility of affixes and stems in derivation. The auto-encoder models MWF in English surprisingly well by combining syntactic and semantic information with associative information from the mental lexicon. 
# 1 Introduction

A central goal of morphology is, as famously put by Aronoff (1976), "to tell us what sort of new words a speaker can form." This definition is tightly intertwined with the notion of morphological well-formedness (MWF). While non-existing morphologically well-formed words such as pro$computer$ism conform to the morphological patterns of a language and could be formed, non-existing morphologically ill-formed words such as pro$and$ism violate the patterns and are deemed impossible (Allen, 1979).

More recent research has shown that MWF is a gradient rather than a binary property: non-existing words that conform to the morphological patterns of a language differ in how likely they are to be actually created by speakers (Pierrehumbert, 2012). This is particularly true in the case of derivational morphology, which is not obligatory and often serves communicative needs (Bauer, 2019). As a result, the degree of MWF of a non-existing derivative is influenced by a multitude of factors and judged to be hard to predict (Bauer, 2001).

In NLP, the lack of reliable ways to estimate the MWF of derivatives poses a bottleneck for generative models, particularly in languages exhibiting a rich derivational morphology; e.g., while inflected forms can be translated by generating morphologically corresponding forms in the target language (Minkov et al., 2007), generating derivatives is still a major challenge for machine translation systems (Sreelekha and Bhattacharyya, 2018). Similar problems exist in the area of automatic language generation (Gatt and Krahmer, 2018).

![](images/e19f37ac5cee0d98684d2b3100105fd1f9fa95142b80d4f237505eeb85b982a2.jpg)
(a) Mental lexicon $(\mathcal{L})$ (b) Two-mode projection $(\mathcal{B})$

![](images/bdba3424ff769ae179bec0ceeaabc3670102d4c7ba27cd6f62e0c60b5a39cd74.jpg)
Figure 1: Derivatives in the mental lexicon $\mathcal{L}$ (a) and their derivational projection $\mathcal{B}$, the derivational graph (DG) (b). Predicting whether a word is part of a derivational abstraction corresponds to predicting a single edge in the DG (dotted lines).

This study takes a first step towards computationally modeling the MWF of English derivatives. We present a derivational graph auto-encoder (DGA) that combines semantic and syntactic information with associative information from the mental lexicon, achieving very good results on MWF prediction and performing on par with a character-based LSTM at a fraction of the number of trainable parameters. The model produces embeddings that capture information about the compatibility of affixes and stems in derivation and can be used as pretrained input to other NLP applications.

# 2 Derivational Morphology

# 2.1 Inflection and Derivation

Linguistics divides morphology into inflection and derivation. While inflection refers to the different word forms of a lexeme, e.g., listen, listens, and listened, derivation refers to the different lexemes of a word family, e.g., listen, listener, and listenable. There are several differences between inflection and derivation, some of which are highly relevant for NLP.

Firstly, while inflection is obligatory and determined by syntactic needs, the existence of derivatives is mainly driven by communicative goals, allowing speakers to express a varied spectrum of meanings (Acquaviva, 2016). Secondly, derivation can produce a larger number of new words than inflection since it is iterable (Haspelmath and Sims, 2010); derivational affixes can be combined, in some cases even recursively (e.g., post$post$modern$ism).
However, morphotactic constraints restrict the ways in which affixes can be attached to stems and other affixes (Hay and Plag, 2004); e.g., the suffix $less can be combined with $ness (atom$less$ness) but not with $ity (atom$less$ity).

The semantic and formal complexity of derivation makes predicting the MWF of derivatives more challenging than the MWF of inflectional forms (Anshen and Aronoff, 1999; Bauer, 2019). Here, we model the MWF of derivatives as the likelihood of their existence in the mental lexicon.

# 2.2 Derivatives in the Mental Lexicon

How likely a derivative is to exist is influenced by various factors (Bauer, 2001; Pierrehumbert and Granell, 2018). In this study, we concentrate on the role of the structure of the mental lexicon.

The mental lexicon can be thought of as a set of associations between meaning $m$ and form $f$, i.e., words, organized in a network, where links correspond to shared semantic and phonological properties (see Pierrehumbert (2012) for a review). Since we base our study on textual data, we will treat the form of words orthographically rather than phonologically. We will refer to the type of information conveyed by the cognitive structure of the mental lexicon as associative information.

Sets of words with similar semantic and formal properties form clusters in the mental lexicon (Alegre and Gordon, 1999). The semantic and formal properties reinforced by such clusters create abstractions that can be extended to new words (Bybee, 1995). If the abstraction hinges upon a shared derivational pattern, the effect of such an extension is a new derivative. The extent to which a word conforms to the properties of the cluster influences how likely the abstraction (in our case a derivational pattern) is to be extended to that word. This is what is captured by the notion of MWF.
# 2.3 Derivational Graphs

The main goal of this paper is to predict the MWF of morphological derivatives (i.e., how likely a word is to be formed as an extension of a lexical cluster) by directly leveraging associative information. Since links in the mental lexicon reflect semantic and formal similarities of various sorts, many of which are not morphological (Tamariz, 2008), we want to create a distilled model of the mental lexicon that only contains derivational information. One way to achieve this is by means of a derivational projection of the mental lexicon, a network that we call the Derivational Graph (DG).

Let $\mathcal{L} = (\mathcal{W},\mathcal{Q})$ be a graph of the mental lexicon consisting of a set of words $\mathcal{W}$ and a set of links between the words $\mathcal{Q}$. Let $\mathcal{W}_a\subset \mathcal{W}$ be a set of words forming a fully interconnected cluster in $\mathcal{L}$ due to a shared derivational pattern $a$. We define $\mathcal{S}_a$ as the set of stems resulting from stripping off $a$ from the words in $\mathcal{W}_a$ and $\mathcal{R}_a = \{(s,a)\}_{s\in \mathcal{S}_a}$ as the corresponding set of edges between the stems and the shared derivational pattern. We then define the two-mode derivational projection $\mathcal{B}$ of $\mathcal{L}$ as the Derivational Graph (DG) where $\mathcal{B} = (\mathcal{V},\mathcal{E})$, $\mathcal{V} = \bigcup_{a}(\mathcal{S}_{a}\cup \{a\})$ and $\mathcal{E} = \bigcup_{a}\mathcal{R}_{a}$. Figure 1 gives an example of $\mathcal{L}$ and DG $(= \mathcal{B})$.

The DG is a bipartite graph whose nodes consist of stems $s \in \mathcal{S}$ with $\mathcal{S} = \bigcup_{a} \mathcal{S}_{a}$ and derivational patterns $a \in \mathcal{A}$ with $\mathcal{A} = \bigcup_{a} \{a\}$. The derivational patterns are sequences of affixes such as re$ize$ate$ion in the case of revitalization.
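To make this construction concrete, here is a minimal sketch (not the authors' code; the `(stem, pattern)` pairs and the helper names `build_dg` and `two_hop` are illustrative) of a DG as a bipartite adjacency structure:

```python
from collections import defaultdict

def build_dg(pairs):
    # Build the bipartite DG from (stem, derivational pattern) pairs:
    # stems and patterns live in disjoint node sets, so every edge in E
    # connects a stem node ('S', ...) to a pattern node ('A', ...).
    nbrs = defaultdict(set)
    for stem, pattern in pairs:
        nbrs[("S", stem)].add(("A", pattern))
        nbrs[("A", pattern)].add(("S", stem))
    return nbrs

def two_hop(nbrs, node):
    # Two-hop neighborhood: for a stem, the other stems sharing a
    # pattern with it; for a pattern, the other patterns sharing a stem.
    reached = set()
    for middle in nbrs[node]:
        reached |= nbrs[middle]
    reached.discard(node)
    return reached
```

For toy pairs such as `("listen", "$er")`, `("teach", "$er")`, and `("listen", "$able")`, the two-hop neighborhood of the stem listen contains exactly the stem teach, mirroring the bipartite property that two-hop neighbors of a stem are again stems.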
The cognitive plausibility of this setup is supported by findings that affix groups can trigger derivational generalizations in the same way as individual affixes (Stump, 2017, 2019).

We define $\mathbf{B} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{V}|}$ to be the adjacency matrix of $\mathcal{B}$. The degree of an individual node $n$ is $d(n)$. We further define $\Gamma^1(n)$ as the set of one-hop neighbors and $\Gamma^2(n)$ as the set of two-hop neighbors of $n$. Notice that $\Gamma^1(s) \subseteq \mathcal{A}$, $\Gamma^1(a) \subseteq \mathcal{S}$, $\Gamma^2(s) \subseteq \mathcal{S}$, and $\Gamma^2(a) \subseteq \mathcal{A}$ for any $s$ and $a$ since the DG is bipartite.

The advantage of this setup of DGs is that it abstracts away information not relevant to derivational morphology while still allowing us to interpret results in the light of the mental lexicon. The creation of a derivative corresponds to a new link between a stem and a derivational pattern in the DG, which in turn reflects the inclusion of a new word into a lexical cluster with a shared derivational pattern in the mental lexicon.

![](images/9c8b02502d4f4c1bb3b1c7c761e97496b938cbf73a272802e5572d1801be5a60.jpg)
Figure 2: Experimental setup. We extract DGs from Reddit and train link prediction models on them. In the shown toy example, the derivatives super$minecraft and affirm$ation are held out for the test set.

# 3 Experimental Data

# 3.1 Corpus

We base our study on data from the social media platform Reddit. Reddit is divided into so-called subreddits (SRs), smaller communities centered around shared interests. SRs have been shown to exhibit community-specific linguistic properties (del Tredici and Fernandez, 2018).

We draw upon the Baumgartner Reddit Corpus, a collection of publicly available comments posted on Reddit since 2005. The preprocessing of the data is described in Appendix A.1.
We examine data in the SRs r/cfb (cfb: college football), r/gaming (gam), r/leagueoflegends (lol), r/movies (mov), r/nba (nba), r/nfl (nfl), r/politics (pol), r/science (sci), and r/technology (tec) between 2007 and 2018. These SRs were chosen because they are of comparable size and are among the largest SRs (see Table 1). They reflect three distinct areas of interest, i.e., sports (cfb, nba, nfl), entertainment (gam, lol, mov), and knowledge (pol, sci, tec), thus allowing for a multifaceted view on how topical factors impact MWF: seeing MWF as an emergent property of the mental lexicon entails that communities with different lexica should differ in what derivatives are most likely to be created.

| SR | $n_w$ | $n_t$ | $|\mathcal{S}|$ | $|\mathcal{A}|$ | $|\mathcal{E}|$ |
| --- | --- | --- | --- | --- | --- |
| cfb | 475,870,562 | 522,675 | 10,934 | 2,261 | 46,110 |
| nba | 898,483,442 | 801,260 | 13,576 | 3,023 | 64,274 |
| nfl | 911,001,246 | 791,352 | 13,982 | 3,016 | 64,821 |
| gam | 1,119,096,999 | 1,428,149 | 19,306 | 4,519 | 107,126 |
| lol | 1,538,655,464 | 1,444,976 | 18,375 | 4,515 | 104,731 |
| mov | 738,365,964 | 860,263 | 15,740 | 3,614 | 77,925 |
| pol | 2,970,509,554 | 1,576,998 | 24,175 | 6,188 | 143,880 |
| sci | 277,568,720 | 528,223 | 11,267 | 3,323 | 58,290 |
| tec | 505,966,695 | 632,940 | 11,986 | 3,280 | 63,839 |

Table 1: SR statistics. $n_w$: number of tokens; $n_t$: number of types; $|\mathcal{S}|$: number of stem nodes; $|\mathcal{A}|$: number of affix group nodes; $|\mathcal{E}|$: number of edges.

# 3.2 Morphological Segmentation

Many morphologically complex words are not decomposed into their morphemes during cognitive processing (Sonnenstuhl and Huth, 2002). Based on experimental findings in Hay (2001), we segment a morphologically complex word only if the stem has a higher token frequency than the derivative (in a given SR). Segmentation is performed by means of an iterative affix-stripping algorithm introduced in Hofmann et al. (2020) that is based on a representative list of productive prefixes and suffixes in English (Crystal, 1997). The algorithm is sensitive to most morpho-orthographic rules of English (Plag, 2003): when $ness is removed from happi$ness, e.g., the result is happy, not happi. See Appendix A.2 for details.

The segmented texts are then used to create DGs as described in Section 2.3. All processing is done separately for each SR, i.e., we create a total of nine different DGs. Figure 2 illustrates the general experimental setup of our study.

# 4 Models

Let $W$ be a Bernoulli random variable denoting the property of being morphologically well-formed. We want to model $P(W|d, C_r) = P(W|s, a, C_r)$, i.e., the probability that a derivative $d$ consisting of stem $s$ and affix group $a$ is morphologically well-formed according to SR corpus $C_r$.

Given the established properties of derivational morphology (see Section 2), a good model of $P(W|d, C_r)$ should include both semantics and formal structure,

$$
P (W | d, C _ {r}) = P (W | m _ {s}, f _ {s}, m _ {a}, f _ {a}, C _ {r}), \tag {1}
$$

where $m_s, f_s, m_a, f_a$ are meaning and form (here modeled orthographically, see Section 2.2) of the involved stem and affix group, respectively. The models we examine in this study vary in which of these features are used, and how they are used.

![](images/e7497a43148ef7da871e821bb98b8dd2b5997c24c2cd505e4ce6901939b6cff1.jpg)
Figure 3: DGA model architecture. The DGA takes as input an adjacency matrix $\mathbf{B}$ and additional feature vectors $\mathbf{x}_s$ and $\mathbf{x}_a$ and learns embeddings $\mathbf{z}_s$ and $\mathbf{z}_a$.
Each layer (except for the last one) performs two steps: message passing and activation. + +During the message passing step (Dai et al., 2016; Gilmer et al., 2017), transformed versions of + +the embeddings $\mathbf{x}_s$ and $\mathbf{x}_a$ are sent along the edges of the DG, weighted, and accumulated. We define $\Gamma_{+}^{1}(s) = \Gamma^{1}(s)\cup \{s\}$ as the set of nodes whose transformed embeddings are weighted and accumulated for a particular stem $s$ . $\Gamma_{+}^{1}(s)$ is extracted from the adjacency matrix $\mathbf{B}$ and consists of the one-hop neighbors of $s$ and $s$ itself. The message passing propagation rule (Kipf and Welling, 2016, 2017) can then be written as + +$$ +\mathbf {m} _ {s} ^ {(l)} = \sum_ {n \in \Gamma_ {+} ^ {1} (s)} \frac {\mathbf {x} _ {n} ^ {(l - 1)} \mathbf {W} ^ {(l)}}{\sqrt {\left| \Gamma_ {+} ^ {1} (s) \right| \left| \Gamma_ {+} ^ {1} (n) \right|}}, \tag {3} +$$ + +where $\mathbf{W}^{(l)}$ is the trainable weight matrix of layer $l$ , $\mathbf{x}_n^{(l-1)}$ is the embedding of node $n$ from layer $l-1$ with $\mathbf{x}_n^{(0)} = \mathbf{x}_n$ , and $\sqrt{|\Gamma_+^1(s)| |\Gamma_+^1(n)|}$ is the weighting factor. The message passing step is performed analogously for affix groups. The matrix form of Equation 3 is given in Appendix A.3. + +Intuitively, a message passing step takes embeddings of all neighbors of a node and the embedding of the node itself, transforms them, and accumulates them by a normalized sum. Given that the DG $\mathcal{B}$ is bipartite, this means for a stem $s$ that the normalized sum contains $d(s)$ affix group embeddings and one stem embedding (and analogously for affix groups). The total number of convolutional layers $L$ determines how far the influence of a node can reach. 
While one convolution allows nodes to receive information from their one-hop neighbors (stems from affix groups they co-occur with and vice versa), two convolutions add information from the two-hop neighbors (stems from stems co-occurring with the same affix group and vice versa), etc. (see Figure 4). + +During the activation step, the output of the convolutional layer $l$ for a particular stem $s$ is + +$$ +\mathbf {x} _ {s} ^ {(l)} = \operatorname {R e L U} \left(\mathbf {m} _ {s} ^ {(l)}\right), \tag {4} +$$ + +where $\mathrm{ReLU}(\cdot) = \max (0,\cdot)$ is a rectified linear unit (Nair and Hinton, 2010). The final output of the encoder is + +$$ +\mathbf {z} _ {s} = \mathbf {m} _ {s} ^ {(L)}, \tag {5} +$$ + +i.e., there is no activation in the last layer. The activation step is again performed analogously for affix groups. $\mathbf{z}_s$ and $\mathbf{z}_a$ are representations of $s$ and $a$ enriched with information about the semantics of nodes in their DG neighborhood. + +![](images/a198cd161a7207d7958dfab261e043fad8e2784df035e7eb2ecfdfbdbef4d3bf.jpg) +(a) $L = 1$ + +![](images/a670fdbe2a24593e998aedbca2b9ff2d23feadf7204785c3334e67674dea5937.jpg) +(b) $L = 2$ + +![](images/3e91671f87f3f76b087aa836b90761b7139bcf00bcb070f3977ae0243d6f3752.jpg) +(c) $L = 3$ +Figure 4: Influence of $L$ , the number of convolutional layers, on message passing. The blue nodes illustrate neighbors whose messages can be received by the orange node under varying $L$ . + +Decoder. We model the decoder as a simple bilinear function, + +$$ +h _ {\theta} \left(\mathbf {z} _ {s}, \mathbf {z} _ {a}\right) = \sigma \left(\mathbf {z} _ {s} ^ {\top} \mathbf {z} _ {a}\right), \tag {6} +$$ + +where $\sigma$ is the sigmoid and $\mathbf{z}_s$ and $\mathbf{z}_a$ are the outputs of the encoder. We set $P(W|d, C_r) = h_\theta(\mathbf{z}_s, \mathbf{z}_a)$ and interpret this as the probability that the corresponding edge in a DG constructed from a corpus drawn from the underlying distribution exists. 
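To make the decoder concrete, Equation 6 can be sketched as a sigmoid over all stem/affix-group inner products; the encoder outputs below are random placeholders rather than trained representations:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical encoder outputs z_s and z_a for 3 stems and 2 affix groups.
rng = np.random.default_rng(0)
Z_s = rng.normal(size=(3, 4))
Z_a = rng.normal(size=(2, 4))

# Equation 6: P(W|d, C_r) = sigma(z_s^T z_a), evaluated for every pair at
# once; the resulting matrix is the reconstructed adjacency of Equation 2.
B_tilde = sigmoid(Z_s @ Z_a.T)
```

Each entry of `B_tilde` lies in (0, 1) and can be read as an edge probability for the corresponding stem/affix-group pair.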
The resulting matrix $\tilde{\mathbf{B}}$ in Equation 2 is then the reconstructed adjacency matrix of the DG.

Notice that the only trainable parameters of the DGA are the weight matrices $\mathbf{W}^{(l)}$. To put the performance of the DGA into perspective, we compare against four baselines, which we present in decreasing order of sophistication.

# 4.2 Baseline 1: Character-based Model (CM)

We model $P(W|d, C_r)$ as $P(W|f_s, f_a, C_r)$ using a character-based model (CM), i.e., as opposed to the DGA, $f_s$ and $f_a$ are modeled directly by means of their orthographic form. This provides the CM with phonological information, a central predictor of MWF (see Section 2.2). The CM might also learn semantic information during training, but it is not directly provided with it. Character-based models show competitive results on derivational tasks (Cotterell et al., 2017; Vylomova et al., 2017; Deutsch et al., 2018), a good reason to test their performance on MWF prediction.

We use two one-layer bidirectional LSTMs to encode the stem and affix group into a vector $\mathbf{o}$ by concatenating the last hidden states from both LSTM directions $\overrightarrow{\mathbf{h}}_s$, $\overleftarrow{\mathbf{h}}_s$, $\overrightarrow{\mathbf{h}}_a$, and $\overleftarrow{\mathbf{h}}_a$,

$$
\mathbf {o} = \left[ \overrightarrow {\mathbf {h}} _ {s} \oplus \overleftarrow {\mathbf {h}} _ {s} \oplus \overrightarrow {\mathbf {h}} _ {a} \oplus \overleftarrow {\mathbf {h}} _ {a} \right], \tag {7}
$$

where $\oplus$ denotes concatenation. $\mathbf{o}$ is then fed into a two-layer feed-forward neural network with a ReLU non-linearity after the first layer. The activation function after the second layer is $\sigma$.

# 4.3 Baseline 2: Neural Classifier (NC)

We model $P(W|d, C_r)$ as $P(W|m_s, m_a, C_r)$ using a neural classifier (NC) whose architecture is similar to the auto-encoder setup of the DGA.

Similarly to the DGA, $m_{s}$ and $m_{a}$ are modeled by means of stem and affix group embeddings trained separately on the SRs.

The first encoder-like part of the NC is a two-layer feed-forward neural network with a ReLU non-linearity after the first layer. The second decoder-like part of the NC is an inner-product layer as in the DGA. Thus, the NC is identical to the DGA except that it does not use associative information from the DG via a graph convolutional network; it only has information about the stem and affix group meanings. + +# 4.4 Baseline 3: Jaccard Similarity (JS) + +We model $P(W|d, C_r)$ as $P(W|f_s, f_a, C_r)$ . Like in the DGA, we model the stem and affix group forms by means of the associative relationships they create in the mental lexicon. Specifically, we predict links without semantic information. + +In feature-based machine learning, link prediction is performed by defining similarity measures on a graph and ranking node pairs according to these features (Liben-Nowell and Kleinberg, 2003). We apply four common measures, most of which have to be modified to accommodate the properties of bipartite DGs. Here, we only cover the best performing measure, Jaccard similarity (JS). JS is one of the simplest graph-based similarity measures, so it is a natural baseline for answering the question: how far does simple graph-based similarity get you at predicting MWF? See Appendix A.4 for the other three measures. + +The JS score of an edge $(s, a)$ is traditionally defined as + +$$ +\zeta_ {J S} (s, a) = \frac {\left| \Gamma^ {1} (s) \cap \Gamma^ {1} (a) \right|}{\left| \Gamma^ {1} (s) \cup \Gamma^ {1} (a) \right|}. \tag {8} +$$ + +However, since $\Gamma^1 (s)\cap \Gamma^1 (a) = \emptyset$ for any $s$ and $a$ (the DG is bipartite), we redefine the set of common neighbors of two nodes $n$ and $m$ , $\Gamma_{\cap}(n,m)$ , as $\Gamma^2 (n)\cap \Gamma^1 (m)$ , i.e., the intersection of the two-hop neighbors of $n$ and the one-hop neighbors of + +$m$ , and analogously $\Gamma_{\cup}(n,m)$ as $\Gamma^2 (n)\cup \Gamma^1 (m)$ . 
Since these are asymmetric definitions, we define + +$$ +\zeta_ {J S} (s, a) = \frac {\left| \Gamma_ {\cap} (s , a) \right|}{\left| \Gamma_ {\cup} (s , a) \right|} + \frac {\left| \Gamma_ {\cap} (a , s) \right|}{\left| \Gamma_ {\cup} (a , s) \right|} \tag {9} +$$ + +JS assumes that a stem that is already similar to a lexical cluster in its derivational patterns is more likely to become even more similar to the cluster than a less similar stem. + +# 4.5 Baseline 4: Bigram Model (BM) + +We again model $P(W|d, C_r)$ as $P(W|f_s, f_a, C_r)$ , leaving aside semantic information. However, in contrast to JS, this model implements the classic approach of Fabb (1988), according to which pairwise constraints on affix combinations, or combinations of a stem and an affix, determine the allowable sequences. Taking into account more recent results on morphological gradience, we do not model these selection restrictions with binary rules. Instead, we use transition probabilities, beginning with the POS of the stem $s$ and working outwards to each following suffix $a^{(s)}$ or preceding prefix $a^{(p)}$ . Using a simple bigram model (BM), we can thus calculate the MWF of a derivative as + +$$ +P (W | d, C _ {r}) = P \left(a ^ {(s)} | s\right) \cdot P \left(a ^ {(p)} | s\right), \tag {10} +$$ + +where $P(a^{(s)}|s) = P(a_{1}^{(s)}|s)\prod_{i = 2}^{n}P(a_{i}^{(s)}|a_{i - 1}^{(s)})$ is the probability of the suffix group conditioned on the POS of the stem. $P(a^{(p)}|s)$ is defined analogously for prefix groups. + +# 5 Experiments + +# 5.1 Setup + +We train all models on the nine SRs using the same split of $\mathcal{E}$ into training ( $n_{train}^{(p)} = 0.85 \cdot |\mathcal{E}|$ ), validation ( $n_{val}^{(p)} = 0.05 \cdot |\mathcal{E}|$ ), and test ( $n_{test}^{(p)} = 0.1 \cdot |\mathcal{E}|$ ) edges. 
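Returning to the JS baseline, the redefined bipartite score of Equation 9 can be sketched in Python. The toy edge set below is invented for illustration, and this sketch counts a node among its own two-hop neighbors, which is one possible reading of $\Gamma^2$:

```python
# Toy DG as a set of (stem, affix group) edges; the vocabulary is invented.
edges = {("kobe", "$esque"), ("picture", "$esque"), ("picture", "$ful"),
         ("hope", "$ful"), ("hope", "$less")}

def nbrs1(node):
    """One-hop neighbors Gamma^1(n): nodes of the opposite type."""
    return {b if a == node else a for a, b in edges if node in (a, b)}

def nbrs2(node):
    """Two-hop neighbors Gamma^2(n): nodes of the same type (the node
    itself is included here, an assumption of this sketch)."""
    return {m for n in nbrs1(node) for m in nbrs1(n)}

def js(s, a):
    """Symmetrized bipartite Jaccard score of Equation 9."""
    def part(n, m):
        cap = nbrs2(n) & nbrs1(m)
        cup = nbrs2(n) | nbrs1(m)
        return len(cap) / len(cup) if cup else 0.0
    return part(s, a) + part(a, s)
```

A high score for a non-edge `(s, a)` indicates that `s` already shares many derivational patterns with the stems occurring with `a`.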
For validation and test, we randomly sample $n_{val}^{(n)} = n_{val}^{(p)}$ and $n_{test}^{(n)} = n_{test}^{(p)}$ non-edges ( $s, a) \notin \mathcal{E}$ as negative examples such that both sets are balanced (0.5 positive, 0.5 negative). + +For training, we sample $n_{train}^{(n)} = n_{train}^{(p)}$ non-edges $(s,a) \notin \mathcal{E}$ in every epoch (i.e., the set of sampled non-edges changes in every epoch). Nodes are sampled according to their degree with $P(n) \propto d(n)$ , a common strategy in bipartite link prediction (Chen et al., 2017). We make sure non-edges sampled in training are not in the validation or test sets. During the test phase, we rank all edges according to their predicted scores. + +
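The degree-proportional negative sampling described above can be sketched as follows; the toy graph and the rejection loop (resampling until a non-edge is found) are illustrative assumptions:

```python
import random

random.seed(0)

# Toy stems, affix groups, and positive edge set E; all invented.
S = ["hope", "picture", "kobe"]
A = ["$ful", "$less", "$esque"]
E = {("hope", "$ful"), ("hope", "$less"), ("picture", "$ful")}

# Node degrees d(n) in the DG spanned by E.
deg = {n: sum(n in e for e in E) for n in S + A}

def sample_non_edge():
    """Draw a non-edge (s, a) with nodes sampled by P(n) proportional to
    d(n), resampling whenever the drawn pair is a positive edge."""
    while True:
        s = random.choices(S, weights=[deg[n] for n in S])[0]
        a = random.choices(A, weights=[deg[n] for n in A])[0]
        if (s, a) not in E:
            return (s, a)
```

In this toy graph, zero-degree nodes can never be drawn, so the only reachable non-edge is `("picture", "$less")`.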
| Model | $n_p$ |
| --- | --- |
| DGA+ | 30,200 |
| DGA | 20,200 |
| CM | 349,301 |
| NC+ | 30,200 |
| NC | 20,200 |
Table 2: Number of trainable parameters for the neural models. $n_p$: number of trainable parameters.

We evaluate the models using average precision (AP) and area under the ROC curve (AUC), two common evaluation measures in link prediction that do not require a decision threshold. AP emphasizes the correctness of the top-ranked edges more than AUC (Su et al., 2015).

# 5.2 Training Details

DGA, DGA+: We use binary cross-entropy as the loss function. Hyperparameter tuning is performed on the validation set. We train the DGA for 600 epochs using Adam (Kingma and Ba, 2015) with a learning rate of 0.01. We use $L = 2$ hidden layers in the DGA with a dimension of 100. For regularization, we apply dropout of 0.1 after the input layer and 0.7 after the hidden layers. For $\mathbf{x}_s$ and $\mathbf{x}_a$, we use 100-dimensional GloVe embeddings (Pennington et al., 2014) trained on the segmented text of the individual SRs with a window size of 10. These can be seen as GloVe variants of traditional morpheme embeddings as proposed, e.g., by Qiu et al. (2014), with the sole difference that we use affix groups instead of individual affixes. For training the embeddings, derivatives are segmented into prefix group, stem, and suffix group. If a derivative has both a prefix group and a suffix group, we add their embeddings.

Since the window size impacts the information represented by the embeddings, with larger windows tending to capture topical and smaller windows morphosyntactic information (Lison and Kutuzov, 2017), we also train the DGA with 200-dimensional embeddings consisting of concatenated 100-dimensional embeddings trained with window sizes of 10 and 1, respectively (DGA+). Since DGA already receives associative information from the DG and semantic information from the embeddings trained with window size 10, the main advantage of DGA+ should lie in additional syntactic information.

CM: We use binary cross-entropy as the loss function. We train the CM for 20 epochs using Adam with a learning rate of 0.001. Both the input character embeddings and the hidden states of the bidirectional LSTMs have 100 dimensions. The output of the first feed-forward layer has 50 dimensions. We apply dropout of 0.2 after the embedding layer as well as the first feed-forward layer.

NC, NC+: All hyperparameters are identical to the DGA and the DGA+, respectively.

JS: Similarity scores are computed on the SR training sets.

BM: Transition probabilities are maximum likelihood estimates from the SR training sets. If a stem is assigned several POS tags by the tagger, we take the most frequent one.

Table 2 summarizes the number of trainable parameters for the neural models. Notice that CM has more than 10 times as many trainable parameters as DGA+, DGA, NC+, and NC.

| Model | cfb | nba | nfl | gam | lol | mov | pol | sci | tec | μ±σ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| DGA+ | .783/.754 | .764/.749 | .773/.751 | .775/.759 | .758/.740 | .772/.749 | .777/.766 | .809/.795 | .799/.778 | .779±.015/.760±.016 |
| DGA | .760/.730 | .754/.731 | .762/.740 | .762/.743 | .752/.738 | .765/.747 | .764/.750 | .783/.770 | .781/.761 | .765±.010/.746±.012 |
| CM | .745/.745 | .751/.759 | .764/.766 | .768/.776 | .766/.773 | .769/.780 | .776/.786 | .793/.804 | .768/.775 | .767±.013/.774±.016 |
| NC+ | .737/.739 | .729/.733 | .737/.740 | .722/.728 | .730/.733 | .732/.741 | .725/.731 | .772/.781 | .758/.756 | .738±.016/.742±.016 |
| NC | .704/.710 | .705/.715 | .719/.728 | .709/.714 | .699/.711 | .709/.720 | .695/.708 | .731/.743 | .734/.737 | .712±.013/.721±.012 |
| JS | .632/.593 | .617/.582 | .626/.588 | .619/.588 | .609/.584 | .622/.589 | .614/.591 | .649/.617 | .638/.608 | .625±.012/.593±.011 |
| BM | .598/.602 | .592/.597 | .600/.600 | .592/.592 | .583/.585 | .596/.594 | .583/.584 | .610/.601 | .589/.596 | .594±.008/.595±.006 |

Table 3: Performance on MWF prediction. Each cell shows AP/AUC for the given SR (cfb, nba, nfl: sport; gam, lol, mov: entertainment; pol, sci, tec: knowledge); the last column shows scores averaged over all SRs.

# 5.3 Results

The overall best-performing models are DGA+ and CM (see Table 3). While DGA+ beats CM on all SRs except lol in AP, CM beats DGA+ on all SRs except cfb and tec in AUC. Except for CM, DGA+ beats all other models on all SRs in both AP and AUC, i.e., it is always the best or second-best model. DGA beats all models except DGA+ and CM on all SRs in AP but has a lower AUC than NC+ on three SRs. It also outperforms CM on three SRs in AP. NC+ and NC mostly have scores above 0.7, showing that traditional morpheme embeddings also capture information about the compatibility of affixes and stems (albeit to a lesser degree than models with associative or orthographic information).
Among the non-neural methods, JS outperforms BM (and the other non-neural link prediction models, see Appendix A.4) in AP, but is beaten by BM in AUC on six SRs. + +The fact that DGA+ performs on par with CM while using less than $10\%$ of CM's parameters demonstrates the power of incorporating associative information from the mental lexicon in modeling the MWF of derivatives. This result is even more striking since DGA+, as opposed to CM, has no direct access to orthographic (i.e., phonological) information. At the same time, CM's high performance indicates that orthographic information is an important predictor of MWF. + +# 6 Derivational Embeddings + +# 6.1 Comparison with Input Vectors + +To understand better how associative information from the DG increases performance, we examine how DGA+ changes the shape of the vector space by comparing input vs. learned embeddings (X vs. $\mathbf{Z}_{DGA+}$ ), and contrast that with NC+ (X vs. $\mathbf{Z}_{NC+}$ ). A priori, there are two opposing demands the embeddings need to respond to: (i) as holds for bipartite graphs in general (Gao et al., 2018), the two node sets (stems and affix groups) should form two separated clusters in embedding space; (ii) stems associated with the same affix group should form clusters in embedding space that are close to the embedding of the respective affix group. + +For this analysis, we define $\delta(\mathcal{N},\mathbf{v})$ as the mean cosine similarity between the embeddings of a node set $\mathcal{N}$ and an individual embedding $\mathbf{v}$ , + +$$ +\delta (\mathcal {N}, \mathbf {v}) = \frac {1}{| \mathcal {N} |} \sum_ {n \in \mathcal {N}} \cos \left(\mathbf {u} _ {n}, \mathbf {v}\right), \tag {11} +$$ + +where $\mathbf{u}_n$ is the embedding of node $n$ . 
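The mean cosine similarity of Equation 11 amounts to a few lines of NumPy; the helper name is ours:

```python
import numpy as np

def delta(N, v):
    """Equation 11: mean cosine similarity between the embeddings in the
    node set N (rows of a matrix) and a single vector v."""
    U = np.asarray(N, dtype=float)
    v = np.asarray(v, dtype=float)
    cos = (U @ v) / (np.linalg.norm(U, axis=1) * np.linalg.norm(v))
    return cos.mean()

# The centroid c of a node set, as used below, is simply the row mean.
centroid = np.mean([[1.0, 0.0], [0.0, 1.0]], axis=0)
```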
We calculate $\delta$ for the set of stem nodes $\mathcal{S}$ and their centroid $\mathbf{c}_{\mathcal{S}} = \frac{1}{|\mathcal{S}|} \sum_{s \in \mathcal{S}} \mathbf{u}_s$ as well as the set of affix group nodes $\mathcal{A}$ and their centroid $\mathbf{c}_{\mathcal{A}} = \frac{1}{|\mathcal{A}|}\sum_{a\in \mathcal{A}}\mathbf{u}_a$.

![](images/b8cc0196e7b9ba519f5553175bfcb07bd11abd8ce28d697ab497f057d958f5fa.jpg)
(a) $\mathbf{X}$

![](images/9b9eb73ddda442b0b8734862836785a2e03f10583bd2036cbf73df7f9d6db11f.jpg)
(b) $\mathbf{Z}_{NC+}$

![](images/851b71a2d8a061e9c0fbee1f5ecf53b455d5d1481c85dfcbf7417d415532fc59e.jpg)
(c) $\mathbf{Z}_{DGA+}$

Figure 5: Comparison of input embeddings $\mathbf{X}$ with learned representations $\mathbf{Z}_{NC+}$ and $\mathbf{Z}_{DGA+}$. The plots are t-SNE projections (van der Maaten and Hinton, 2008) of the embedding spaces. We highlight two example sets of stems occurring with a common affix: the blue points are stems occurring with $esque, the orange points stems occurring with $ful. × marks the embedding of $esque, + the embedding of $ful.

| Measure | $\mathbf{X}$ | $\mathbf{Z}_{NC+}$ | $\mathbf{Z}_{DGA+}$ |
| --- | --- | --- | --- |
| $\delta(\mathcal{S}, \mathbf{c}_{\mathcal{S}})$ | .256 ± .026 | .500 ± .027 | .487 ± .022 |
| $\delta(\mathcal{A}, \mathbf{c}_{\mathcal{A}})$ | .377 ± .017 | .522 ± .016 | .322 ± .030 |
| $\delta(\mathcal{S}_a, \mathbf{c}_{\mathcal{S}_a})$ | .281 ± .006 | .615 ± .017 | .671 ± .024 |
| $\delta(\mathcal{S}_a, \mathbf{u}_a)$ | .133 ± .006 | .261 ± .022 | .278 ± .033 |

Table 4: Comparison of $\mathbf{X}$ with $\mathbf{Z}_{NC+}$ and $\mathbf{Z}_{DGA+}$. The table shows topological measures highlighting differences between the input and learned embeddings.

Table 4 shows that while NC+ makes the embeddings of both $\mathcal{S}$ and $\mathcal{A}$ more compact (higher similarity in $\mathbf{Z}_{NC+}$ than in $\mathbf{X}$), DGA+ makes $\mathcal{S}$ more compact, too, but decreases the compactness of $\mathcal{A}$ (lower similarity in $\mathbf{Z}_{DGA+}$ than in $\mathbf{X}$). $\mathbf{Z}_{NC+}$ thus meets (i) to a greater extent than $\mathbf{Z}_{DGA+}$.

We then calculate $\delta$ for all sets of stems $\mathcal{S}_{a}$ occurring with a common affix group $a$ and their centroids $\mathbf{c}_{\mathcal{S}_a} = \frac{1}{|\mathcal{S}_a|}\sum_{s\in \mathcal{S}_a}\mathbf{u}_s$. We also compute $\delta$ for all $\mathcal{S}_{a}$ and the embeddings of the corresponding affix groups $\mathbf{u}_a$. As Table 4 shows, both values are much higher in $\mathbf{Z}_{DGA+}$ than in $\mathbf{X}$, i.e., DGA+ brings stems with a common affix group $a$ (lexical clusters in the mental lexicon) close to each other while at the same time moving $a$ in the direction of the stems. The embeddings $\mathbf{Z}_{NC+}$ exhibit a similar pattern, but more weakly than $\mathbf{Z}_{DGA+}$ (see Table 4 and Figure 5). $\mathbf{Z}_{DGA+}$ meets (ii) to a greater extent than $\mathbf{Z}_{NC+}$.

Thus, DGA+ and NC+ resolve the tension between (i) and (ii) differently; the associative information from the mental lexicon allows DGA+ to put a greater emphasis on (ii), leading to higher performance in MWF prediction.

# 6.2 Comparison between SRs

Another reason for the higher performance of the models with associative information could be that their embeddings capture differences in derivational patterns between the SR communities.
To examine this hypothesis, we map the embeddings $\mathbf{Z}_{DGA+}$ of all SRs into a common vector space by means of orthogonal Procrustes alignment (Schönemann, 1966), i.e., we optimize

$$
\mathbf {R} ^ {(i)} = \underset {\mathbf {T} ^ {\top} \mathbf {T} = \mathbf {I}} {\arg \min } \left\| \mathbf {Z} _ {D G A +} ^ {(i)} \mathbf {T} - \mathbf {Z} _ {D G A +} ^ {(0)} \right\| _ {F} \tag {12}
$$

for every SR, where $\mathbf{Z}_{DGA+}^{(i)}$ is the embedding matrix of SR $i$, and $\mathbf{Z}_{DGA+}^{(0)}$ is the embedding matrix of a randomly chosen SR (which is the same for all projections). We then compute the intersections of the stem and affix group nodes from all SRs, $\mathcal{S}_{\cap} = \bigcap_{i} \mathcal{S}^{(i)}$ and $\mathcal{A}_{\cap} = \bigcap_{i} \mathcal{A}^{(i)}$, where $\mathcal{S}^{(i)}$ and $\mathcal{A}^{(i)}$ are the stem and affix group sets of SR $i$, respectively. To probe whether differences between SRs are larger or smaller for affix embeddings as compared to stem embeddings, we define

$$
\Delta \left(\mathcal {S} ^ {(i)}, \mathcal {S} ^ {(j)}\right) = \sum_ {s \in \mathcal {S} _ {\cap}} \frac {\cos \left(\hat {\mathbf {z}} _ {s} ^ {(i)} , \hat {\mathbf {z}} _ {s} ^ {(j)}\right)}{\left| \mathcal {S} _ {\cap} \right|}, \tag {13}
$$

i.e., the mean cosine similarity between projected embedding pairs $\hat{\mathbf{z}}_s^{(i)}$ and $\hat{\mathbf{z}}_s^{(j)}$ from two SRs $i$ and $j$ representing the same stem $s$ in the intersection set $\mathcal{S}_{\cap}$, with $\hat{\mathbf{z}}_s^{(i)} = \mathbf{z}_s^{(i)}\mathbf{R}^{(i)}$. $\Delta (\mathcal{A}^{(i)},\mathcal{A}^{(j)})$ is defined analogously for affix groups.

The mean value of $\Delta (\mathcal{A}^{(i)},\mathcal{A}^{(j)})$ (0.723 ± 0.102) is lower than that of $\Delta (\mathcal{S}^{(i)},\mathcal{S}^{(j)})$ (0.760 ± 0.087), i.e., differences between affix group embeddings are more pronounced than differences between stem embeddings.
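The alignment in Equation 12 has a closed-form solution via the SVD of $\mathbf{Z}^{(i)\top}\mathbf{Z}^{(0)}$ (Schönemann, 1966). The sketch below uses invented toy matrices to illustrate it:

```python
import numpy as np

def procrustes_rotation(Z_i, Z_0):
    """Orthogonal T minimizing ||Z_i T - Z_0||_F (Equation 12),
    computed from the SVD of Z_i^T Z_0."""
    U, _, Vt = np.linalg.svd(Z_i.T @ Z_0)
    return U @ Vt

# Toy check: a rotated copy of a random matrix is mapped back exactly.
rng = np.random.default_rng(0)
Z0 = rng.normal(size=(10, 4))
R_true = np.linalg.qr(rng.normal(size=(4, 4)))[0]  # random orthogonal matrix
Zi = Z0 @ R_true.T
R = procrustes_rotation(Zi, Z0)
```

Since the constraint in Equation 12 is only $\mathbf{T}^{\top}\mathbf{T} = \mathbf{I}$, the solution may include reflections as well as rotations.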
Topically connected SRs are more similar to each other than SRs of different topic groups, with the differences being larger in $\Delta (\mathcal{A}^{(i)},\mathcal{A}^{(j)})$ than in $\Delta (\mathcal{S}^{(i)},\mathcal{S}^{(j)})$ (see Figure 6).

![](images/e003bf0d14a496485fa42dcdbe4d8e85045f99b2014d7b86009d9e80765840be.jpg)
(a) $\Delta (\mathcal{S}^{(i)},\mathcal{S}^{(j)})$

![](images/f3dd7bfb900c6dc476d7f0cc8b93a3fa8e21bd295d4089d529e531c4995d1101.jpg)
(b) $\Delta (\mathcal{A}^{(i)},\mathcal{A}^{(j)})$

Figure 6: Comparison of embedding spaces across SRs. The plots show color-coded values of $\Delta(\mathcal{S}^{(i)}, \mathcal{S}^{(j)})$ and $\Delta(\mathcal{A}^{(i)}, \mathcal{A}^{(j)})$ for all pairs of SRs, respectively. The block-diagonal structure highlights the impact of topical relatedness on embedding similarities.

These results can be related to Section 6.1: affix groups are very close to the stems they associate with in $\mathbf{Z}_{DGA+}$, i.e., if an affix group is used with stems of meaning $p$ in one SR and with stems of meaning $q$ in the other SR, then the affix group also has embeddings close to $p$ and $q$ in the two SRs. Most technical vocabulary, on the other hand, is specific to an SR and does not make it into $\mathcal{S}_{\cap}$.

A qualitative analysis supports this hypothesis: affix groups with low cosine similarities between SRs associate with highly topical stems; e.g., the affix group $ocracy has a low cosine similarity of -0.189 between the SRs nba and pol, and it occurs with stems such as kobe and jock in nba but left and wealth in pol.
+ +The first group of studies models the meaning of derivatives as a function of their morphological structure by training embeddings directly on text segmented into morphemes (Luong et al., 2013; Qiu et al., 2014) or by inferring morpheme embeddings from whole-word vector spaces, e.g., using the vector offset method (Lazaridou et al., 2013; Padó et al., 2016). Formally, given a derived form $f_{d}$ , this line of research tries to find the meaning $m_{d}$ that maximizes $P(m_{d}|f_{d})$ . + +The second group of studies models the form + +of derivatives as a function of their meaning. The meaning is represented by the base word and a semantic tag (Cotterell et al., 2017; Deutsch et al., 2018) or the sentential context (Vylomova et al., 2017). Formally, given a meaning $m_d$ , these studies try to find the derived form $f_d$ of a word that maximizes $P(f_d|m_d)$ . + +Our study differs from these two approaches in that we model $P(W|f_d, m_d)$ , i.e., we predict the overall likelihood of a derivative to exist. For future research, it would be interesting to apply derivational embeddings in studies of the second type by using them as pretrained input. + +Neural link prediction is the task of inferring the existence of unknown connections between nodes in a graph. Advances in deep learning have prompted various neural models for link prediction that learn distributed node representations (Tang et al., 2015; Grover and Leskovec, 2016). Kipf and Welling (2016, 2017) proposed a convolutional graph auto-encoder that allows to include feature vectors for each node. The model was adapted to bipartite graphs by van den Berg et al. (2018). + +Previous studies on neural link prediction for bipartite graphs have shown that the embeddings of the two node sets should ideally form separated clusters (Gao et al., 2018). Our work demonstrates that relations transcending the two-mode graph structure can lead to a trade-off between clustering and dispersion in embedding space. 
# 8 Conclusion

We have introduced a derivational graph auto-encoder (DGA) that combines syntactic and semantic information with associative information from the mental lexicon to predict morphological well-formedness (MWF), a task that has not been addressed before. The model achieves good results and performs on par with a character-based LSTM at a fraction of the number of trainable parameters (less than $10\%$). Furthermore, the model learns embeddings capturing information about the compatibility of affixes and stems in derivation.

Acknowledgements. Valentin Hofmann was funded by the Arts and Humanities Research Council and the German Academic Scholarship Foundation. This research was also supported by the European Research Council (Grant No. 740516). We thank the reviewers for their helpful and very constructive comments.

# References

Paolo Acquaviva. 2016. Morphological semantics. In Andrew Hippisley and Gregory Stump, editors, The Cambridge handbook of morphology, pages 117-148. Cambridge University Press, Cambridge.
Lada A. Adamic and Eytan Adar. 2003. Friends and neighbors on the web. Social Networks, 25:211-230.
Maria Alegre and Peter Gordon. 1999. Rule-based versus associative processes in derivational morphology. Brain and Language, 68(1-2):347-354.
Margaret R. Allen. 1979. Morphological investigations. University of Connecticut, Mansfield, CT.
Frank Anshen and Mark Aronoff. 1999. Using dictionaries to study the mental lexicon. Brain and Language, 68:16-26.
Mark Aronoff. 1976. Word formation in generative grammar. MIT Press, Cambridge, MA.
Laurie Bauer. 2001. Morphological productivity. Cambridge University Press, Cambridge, UK.
Laurie Bauer. 2019. Rethinking morphology. Edinburgh University Press, Edinburgh, UK.
Joan Bybee. 1995. Regular morphology and the lexicon. Language and Cognitive Processes, 10(5):425-455.
Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong. 2017.
On sampling strategies for neural network-based collaborative filtering. In International Conference on Knowledge Discovery and Data Mining (KDD) 23.
Ryan Cotterell, Ekaterina Vylomova, Huda Khayrallah, Christo Kirov, and David Yarowsky. 2017. Paradigm completion for derivational morphology. In Conference on Empirical Methods in Natural Language Processing (EMNLP) 2017.
David Crystal. 1997. The Cambridge encyclopedia of the English language. Cambridge University Press, Cambridge, UK.
Hanjun Dai, Bo Dai, and Le Song. 2016. Discriminative embeddings of latent variable models for structured data. In International Conference on Machine Learning (ICML) 33.
Marco del Tredici and Raquel Fernández. 2018. The road to success: Assessing the fate of linguistic innovations in online communities. In International Conference on Computational Linguistics (COLING) 27.
Daniel Deutsch, John Hewitt, and Dan Roth. 2018. A distributional and orthographic aggregation model for English derivational morphology. In Annual Meeting of the Association for Computational Linguistics (ACL) 56.
Nigel Fabb. 1988. English suffixation is constrained only by selectional restrictions. Natural Language & Linguistic Theory, 6(4):527-539.
Ming Gao, Leihui Chen, Xiangnan He, and Aoying Zhou. 2018. BiNE: Bipartite network embedding. In International Conference on Research and Development in Information Retrieval (SIGIR) 41.
Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.
Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. 2017. Neural message passing for quantum chemistry. In International Conference on Machine Learning (ICML) 34.
Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In International Conference on Knowledge Discovery and Data Mining (KDD) 22.
Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Annual Meeting of the Association for Computational Linguistics (ACL) 49.
Martin Haspelmath and Andrea D. Sims. 2010. Understanding morphology. Routledge, New York, NY.
Jennifer Hay. 2001. Lexical frequency in morphology: Is everything relative? Linguistics, 39(6):1041-1070.
Jennifer Hay and Ingo Plag. 2004. What constrains possible suffix combinations? On the interaction of grammatical and processing restrictions in derivational morphology. Natural Language & Linguistic Theory, 22(3):565-596.
Valentin Hofmann, Janet B. Pierrehumbert, and Hinrich Schütze. 2020. Predicting the growth of morphological families from social and linguistic factors. In Annual Meeting of the Association for Computational Linguistics (ACL) 58.
Diederik P. Kingma and Jimmy L. Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR) 3.
Thomas N. Kipf and Max Welling. 2016. Variational graph auto-encoders. In NIPS Bayesian Deep Learning Workshop.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations (ICLR) 5.
Angeliki Lazaridou, Marco Marelli, Roberto Zamparelli, and Marco Baroni. 2013. Compositional-ly derived representations of morphologically complex words in distributional semantics. In Annual Meeting of the Association for Computational Linguistics (ACL) 51.
David Liben-Nowell and Jon Kleinberg. 2003. The link prediction problem for social networks. In ACM Conference on Information and Knowledge Management (CIKM) 12.
Pierre Lison and Andrey Kutuzov. 2017. Redefining context windows for word embedding models: An experimental study. arXiv preprint arXiv:1704.05781.
Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology.
In Conference on Computational Natural Language Learning (CoNLL) 17.
Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL) 45.
Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning (ICML) 27.
Sebastian Padó, Aurélie Herbelot, Max Kisselew, and Jan Šnajder. 2016. Predictability of distributional semantics in derivational word formation. In International Conference on Computational Linguistics (COLING) 26.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Conference on Empirical Methods in Natural Language Processing (EMNLP) 2014.
Janet Pierrehumbert. 2012. The dynamic lexicon. In Abigail Cohn, Cecile Fougeron, and Marie Huffman, editors, The Oxford handbook of laboratory phonology, pages 173-183. Oxford University Press, Oxford.
Janet Pierrehumbert and Ramon Granell. 2018. On hapax legomena and morphological productivity. In Workshop on Computational Research in Phonetics, Phonology, and Morphology (SIGMORPHON) 15.
Ingo Plag. 2003. Word-formation in English. Cambridge University Press, Cambridge, UK.
Siyu Qiu, Qing Cui, Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Co-learning of word representations and morpheme representations. In International Conference on Computational Linguistics (COLING) 25.
Peter H. Schönemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1-10.
Ingrid Sonnenstuhl and Axel Huth. 2002. Processing and representation of German -n plurals: A dual mechanism approach. Brain and Language, 81(1-3):276-290.
S. Sreelekha and Pushpak Bhattacharyya. 2018. Morphology injection for English-Malayalam statistical machine translation. In International Conference on Language Resources and Evaluation (LREC) 11.
Gregory Stump. 2017. Rule conflation in an inferential-realizational theory of morphotactics. Acta Linguistica Academica, 64(1):79-124.
Gregory Stump. 2019. Some sources of apparent gaps in derivational paradigms. Morphology, 29(2):271-292.
Wanhua Su, Yan Yuan, and Mu Zhu. 2015. A relationship between the average precision and the area under the ROC curve. In International Conference on the Theory of Information Retrieval (ICTIR) 2015.
Monica Tamariz. 2008. Exploring systematicity between phonological and context-cooccurrence representations of the mental lexicon. The Mental Lexicon, 3(2):259-278.
Chenhao Tan and Lillian Lee. 2015. All who wander: On the prevalence and characteristics of multi-community engagement. In International Conference on World Wide Web (WWW) 24.
Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015. LINE: Large-scale information network embedding. In International Conference on World Wide Web (WWW) 24.
Rianne van den Berg, Thomas N. Kipf, and Max Welling. 2018. Graph convolutional matrix completion. In KDD 2018 Deep Learning Day.
Laurens van der Maaten and Geoffrey E. Hinton. 2008. Visualizing high-dimensional data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.
Ekaterina Vylomova, Ryan Cotterell, Timothy Baldwin, and Trevor Cohn. 2017. Context-aware prediction of derivational word-forms. In Conference of the European Chapter of the Association for Computational Linguistics (EACL) 15.

# A Appendices

# A.1 Data Preprocessing

We filter the Reddit posts for known bots and spammers (Tan and Lee, 2015). We remove abbreviations, strings containing numbers, references to users and SRs, and both full and shortened hyperlinks. We convert British English spelling variants to American English and lemmatize all words. We follow Han and Baldwin (2011) in reducing repetitions of more than three letters (niiiiice) to three letters.
Except for excluding stopwords, we do not employ a frequency threshold. + +
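The repetition-reduction step can be sketched as follows (the function name is ours; only the behavior described above is taken from the text):

```python
import re

def reduce_repetitions(token: str) -> str:
    """Cap runs of more than three identical characters at three,
    following Han and Baldwin (2011): "niiiiice" -> "niiice"."""
    # A captured character followed by 3+ copies of itself is a run of 4+;
    # the replacement keeps exactly three copies.
    return re.sub(r"(.)\1{3,}", r"\1\1\1", token)
```

Tokens that already contain at most three repeated letters pass through unchanged.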
| Model | cfb | nba | nfl | gam | lol | mov | pol | sci | tec | μ ± σ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| JS | **.632 / .593** | **.617 / .582** | **.626 / .588** | **.619 / .588** | **.609 / .584** | **.622 / .589** | **.614 / .591** | **.649 / .617** | **.638 / .608** | **.625±.012 / .593±.011** |
| AA | .603 / .556 | .599 / .556 | .602 / .553 | .605 / .561 | .589 / .553 | .596 / .552 | .592 / .556 | .606 / .558 | .606 / .562 | .600±.006 / .556±.003 |
| CN | .600 / .553 | .596 / .553 | .598 / .550 | .602 / .558 | .585 / .550 | .592 / .548 | .588 / .552 | .601 / .554 | .603 / .558 | .596±.006 / .553±.003 |
| PA | .537 / .517 | .543 / .527 | .542 / .522 | .559 / .545 | .545 / .534 | .533 / .519 | .541 / .534 | .513 / .503 | .537 / .526 | .539±.011 / .525±.011 |

Each cell gives AP / AUC. Subreddit groups: cfb, nba, nfl (sports); gam, lol, mov (entertainment); pol, sci, tec (knowledge).
Table 5: Performance on MWF prediction. The table shows AP and AUC of the models for the nine Subreddits as well as averaged scores. Bold marks the best score in a column; AA is consistently second-best.

# A.2 Morphological Segmentation

We start by defining a set of potential stems $O^{(i)}$ for each Subreddit $i$. A word $w$ is given the status of a potential stem and added to $O^{(i)}$ if it consists of at least 4 characters and has a frequency count of at least 100 in the Subreddit.

Then, to determine the stem of a specific word $w$, we employ an iterative algorithm. Let $V^{(i)}$ be the vocabulary of the Subreddit, i.e., all words occurring in it. Define the set $B_{1}(w)$ as the bases in $V^{(i)}$ that remain when one affix is removed from $w$, and that have a higher frequency count than $w$ in the Subreddit. For example, reaction can be segmented as re\$action and react\$ion, so $B_{1}(\text{reaction}) = \{\text{action}, \text{react}\}$ (assuming action and react both occur in the Subreddit and are more frequent than reaction). We then iteratively create $B_{k+1}(w) = \bigcup_{b \in B_{k}(w)} B_{1}(b)$. Let further $B_{0}(w) = \{w\}$. We define $S(w) = O^{(i)} \cap B_{m}(w)$ with $m = \max \{k \mid O^{(i)} \cap B_{k}(w) \neq \emptyset\}$ as the set of stems of $w$. If $|S(w)| > 1$ (which is rarely the case in practice), the element with the lowest number of suffixes is chosen.

The algorithm is sensitive to most morpho-orthographic rules of English (Plag, 2003): when \$ness is removed from happi\$ness, e.g., the result is happy, not happi.

# A.3 Message Passing Rule

Let $\hat{\mathbf{B}}\in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|}$ be the adjacency matrix of the DG $\mathcal{B}$ with added self-loops, i.e., $\hat{B}_{ii} = 1$, and $\hat{\mathbf{D}}\in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|}$ the degree matrix of $\hat{\mathbf{B}}$ with $\hat{D}_{ii} = \sum_{j}\hat{B}_{ij}$.
The matrix form of the message passing step can be expressed as

$$
\mathbf{M}^{(l)} = \hat{\mathbf{D}}^{-\frac{1}{2}} \hat{\mathbf{B}} \hat{\mathbf{D}}^{-\frac{1}{2}} \mathbf{X}^{(l-1)} \mathbf{W}^{(l)}, \tag{14}
$$

where $\mathbf{W}^{(l)}$ is the trainable weight matrix of layer $l$, and $\mathbf{X}^{(l-1)} \in \mathbb{R}^{|\mathcal{V}| \times n}$ is the matrix containing the node feature vectors from layer $l-1$ (Kipf and Welling, 2016, 2017). The activation step then is

$$
\mathbf{X}^{(l)} = \operatorname{ReLU}\left(\mathbf{M}^{(l)}\right). \tag{15}
$$

# A.4 Feature-based Link Prediction

Besides Jaccard similarity, we implement three other feature-based link prediction methods.

**Adamic-Adar.** The Adamic-Adar (AA) index (Adamic and Adar, 2003) has to take the bipartite structure of DGs into account. Using the modified definition of common neighbors as with $\zeta_{JS}$, we calculate it as

$$
\zeta_{AA}(s, a) = \sum_{n \in \Gamma_{\cap}(s, a) \cup \Gamma_{\cap}(a, s)} \frac{1}{d(n)}. \tag{16}
$$

**Common Neighbors.** The score of an edge $(s, a)$ is calculated as the cardinality of the set of common neighbors (CN) of $s$ and $a$. Similarly to $\zeta_{JS}$ and $\zeta_{AA}$, we calculate the CN score as

$$
\zeta_{CN}(s, a) = |\Gamma_{\cap}(s, a)| + |\Gamma_{\cap}(a, s)|. \tag{17}
$$

**Preferential Attachment.** For preferential attachment (PA), the score of an edge $(s, a)$ is the product of the two node degrees,

$$
\zeta_{PA}(s, a) = d(s) \cdot d(a). \tag{18}
$$

The training regime is identical to that used for Jaccard similarity. AA outperforms PA and CN but is consistently beaten by JS (see Table 5).
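A minimal sketch of the three scores in Eqs. (16)–(18); the paper's modified bipartite common-neighbor set $\Gamma_{\cap}$ is abstracted here into a plain shared-neighbor computation, which is a simplifying assumption, and the function names are ours:

```python
def degree(adj, v):
    """Node degree d(v) in a graph given as a dict of neighbor sets."""
    return len(adj[v])

def common_neighbors(adj, s, a):
    """Stand-in for the paper's modified bipartite common-neighbor set."""
    return adj[s] & adj[a]

def score_aa(adj, s, a):
    """Adamic-Adar index in the spirit of Eq. (16): degree-weighted common neighbors."""
    return sum(1.0 / degree(adj, n) for n in common_neighbors(adj, s, a))

def score_cn(adj, s, a):
    """Common-neighbors score in the spirit of Eq. (17)."""
    return len(common_neighbors(adj, s, a))

def score_pa(adj, s, a):
    """Preferential attachment, Eq. (18): product of the two node degrees."""
    return degree(adj, s) * degree(adj, a)
```

All three scores are then ranked and evaluated with AP and AUC exactly as for Jaccard similarity.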
\ No newline at end of file diff --git a/agraphautoencodermodelofderivationalmorphology/images.zip b/agraphautoencodermodelofderivationalmorphology/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..739371e7b898e73a7c5de3eb3f4e6a16fe42fa05 --- /dev/null +++ b/agraphautoencodermodelofderivationalmorphology/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4a7ef789b261891380ea5fdec1a15319cee6ef84c43799a41f97448720087086 +size 452011 diff --git a/agraphautoencodermodelofderivationalmorphology/layout.json b/agraphautoencodermodelofderivationalmorphology/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1248c2079d2e495fbf6d0ca3d2a660c8ab7cf5de --- /dev/null +++ b/agraphautoencodermodelofderivationalmorphology/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8febaee11b24c05971bbae772d8746552ca01d8417adb49a44de031c23e782d +size 661437 diff --git a/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_content_list.json b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6bb8ce3fa9039f7949b25c7597490ad91984ae82 --- /dev/null +++ b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9f3f1985010d9a7da261371da570f7e1fba2468fa0702eb14550e8ace21572ab +size 70807 diff --git a/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_model.json b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_model.json new file mode 100644 index 0000000000000000000000000000000000000000..13cdd3e5702ae311a863dbe9f808e2608e7401f3 
--- /dev/null +++ b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:489ccf0bd67064a7b873519fbcd157131aa9fe252f76a6a4adacb474f101d654 +size 82685 diff --git a/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_origin.pdf b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c5a4ec4bf9cd3425e88ae29eb40c4a2ecfb52563 --- /dev/null +++ b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/dc963c0e-1154-47df-900a-db631bdb13d7_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:81fc3a7148a04cc2c9504acf012b77cc4d11dbd6d37151a4e42a32e8af760b8b +size 493529 diff --git a/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/full.md b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..26a0d2aedc6641bd60d12018083a002508c38bcc --- /dev/null +++ b/agraphbasedcoarsetofinemethodforunsupervisedbilinguallexiconinduction/full.md @@ -0,0 +1,289 @@ +# A Graph-based Coarse-to-fine Method for Unsupervised Bilingual Lexicon Induction + +Shuo Ren†‡, Shujie Liu§, Ming Zhou§, Shuai Ma†‡ + +$^{\dagger}$ SKLSDE Lab, Beihang University, Beijing, China + +$^{\ddagger}$ Beijing Advanced Innovation Center for Big Data and Brain Computing, China + +Microsoft Research Asia, Beijing, China + +† {shuoren,mashuai} @ buaa.edu.cn § {shujliu,mingzhou} @ microsoft.com + +# Abstract + +Unsupervised bilingual lexicon induction is the task of inducing word translations from monolingual corpora of two languages. 
Recent methods are mostly based on unsupervised cross-lingual word embeddings, the key to which is to find initial solutions of word translations, followed by the learning and refinement of mappings between the embedding spaces of two languages. However, previous methods find initial solutions just based on word-level information, which may be (1) limited and inaccurate, and (2) prone to contain some noise introduced by the insufficiently pre-trained embeddings of some words. To deal with those issues, in this paper, we propose a novel graph-based paradigm to induce bilingual lexicons in a coarse-to-fine way. We first build a graph for each language with its vertices representing different words. Then we extract word cliques from the graphs and map the cliques of two languages. Based on that, we induce the initial word translation solution with the central words of the aligned cliques. This coarse-to-fine approach not only leverages clique-level information, which is richer and more accurate, but also effectively reduces the bad effect of the noise in the pre-trained embeddings. Finally, we take the initial solution as the seed to learn cross-lingual embeddings, from which we induce bilingual lexicons. Experiments show that our approach improves the performance of bilingual lexicon induction compared with previous methods. + +# 1 Introduction + +Bilingual lexicon induction (BLI) is an important task of machine translation and becomes an essential part of recent unsupervised machine translation approaches (Lample et al., 2018; Artetxe et al., 2018c; Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019). 
Previous methods for BLI are mostly based on unsupervised cross-lingual word embeddings (Zhang et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018), the goal of which is to find a mapping function, typically a linear transformation (Mikolov et al., 2013), that maps the source embeddings into the target embedding space. To do this, they first build a seed dictionary (known as the initial solution) with different methods and then learn the optimal mapping function that fits the seed dictionary. Based on the mapping function, a new dictionary of higher quality is inferred from the cross-lingual word embeddings by finding nearest neighbors in the target embedding space. With the new dictionary, the mapping function is further refined to fit it. The inference of the dictionary and the refinement of the mapping function are repeated until convergence. Throughout this procedure, the initialization stage is important and has been a heavy focus of previous work.

Previous methods for finding the initial solution fall into three categories. The first is heuristic rules, such as treating identical words as the seed (Artetxe et al., 2017), but this kind of method is restricted to languages sharing an alphabet. The second category is adversarial methods (Zhang et al., 2017; Conneau et al., 2017; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018), which suffer from the drawbacks of generative adversarial models, i.e., sensitivity to hyper-parameters, long training time, etc. The third category is structure-based methods (Artetxe et al., 2018b; Hoshen and Wolf, 2018), which are more flexible and robust than the other categories and achieve state-of-the-art BLI performance. In Artetxe et al.
(2018b), they first compute a similarity matrix of all words in the vocabulary and then represent each word with the distribution of its similarity values, while in Hoshen and Wolf (2018), they project the word vectors onto the top 50 principal components of the embedding spaces. After that, both directly use the word representations of the two languages to retrieve the initial bilingual lexicons by computing the cosine distances between source and target word representations. However, directly finding word alignments from scratch has some demerits. (1) The information that a single word can provide is limited, and words are treated independently of each other. (2) According to our observation, there is noise in the pre-trained embeddings even for high-frequency words, so the initial word alignments derived from them are not accurate. These mistakes in the initial word-level alignments can hurt the performance of the following iteration steps.

To solve those issues, we propose a novel graph-based coarse-to-fine paradigm to generate initial solutions for learning cross-lingual word embeddings, from which we induce bilingual lexicons. Specifically, given source and target languages, our method first uses pre-trained monolingual embeddings to construct a graph for each language, with the vertices representing words, so that the mutual relationships between words are preserved. Next, we use the Bron-Kerbosch algorithm (Akkoyunlu, 1973) to extract cliques (subsets of vertices in which every two distinct vertices are adjacent) from the source and target graphs. After that, we calculate the clique embeddings and map the cliques of the two graphs. We then treat the central words of the aligned cliques as the seeds to learn the mapping between the two word embedding spaces.

Our contributions are threefold. (1) By building word graphs, we leverage the clique-level information extracted from them.
The cliques group similar words and capture their mutual relationships, providing richer and more accurate information. (2) We propose a coarse-to-fine procedure (coarse: clique extraction; fine: seed induction) for the BLI task, which effectively reduces the adverse effect of the noise in the pre-trained embeddings. (3) We improve BLI performance on the MUSE dataset with our method, even compared with strong baselines.

# 2 Background

Unsupervised bilingual lexicon induction (BLI) is the task of inducing word translations from monolingual corpora of two languages. Recently proposed methods follow the same procedure, i.e., first learning cross-lingual embeddings in an unsupervised way (§2.1) and then inducing bilingual lexicons from the embedding spaces (§2.2).

# 2.1 Unsupervised Cross-lingual Embeddings

Previous methods for learning cross-lingual embeddings can be roughly divided into two categories (Ormazabal et al., 2019): mapping methods and joint learning methods. Since the second category, e.g., the bilingual skip-gram of Luong et al. (2015), requires a bilingual corpus during training, current methods for unsupervised cross-lingual embeddings mainly fall into the first category. Given pretrained monolingual embeddings of two languages, the mapping methods try to map the source and target embedding spaces through a linear transformation (Mikolov et al., 2013) $\mathbf{W} \in \mathbf{M}_{d \times d}(\mathbb{R})$, where $\mathbf{M}_{d \times d}(\mathbb{R})$ is the space of $d \times d$ matrices of real numbers and $d$ is the dimension of the embeddings. Based on that, Xing et al. (2015) propose to constrain $\mathbf{W}$ to be orthogonal, i.e., $\mathbf{W}^{\top} \mathbf{W} = \mathbf{I}$, and Conneau et al.
(2017) find this is a Procrustes problem, which advantageously offers a closed-form solution obtained from the singular value decomposition (SVD) of $\mathbf{Y}\mathbf{X}^{\top}$ as follows:

$$
\mathbf{W}^{*} = \underset{\mathbf{W}}{\arg\min} ||\mathbf{W}\mathbf{X} - \mathbf{Y}||_{F} = \mathbf{U}\mathbf{V}^{\top}, \tag{1}
$$

$$
\text{with} \quad \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top} = \operatorname{SVD}\left(\mathbf{Y}\mathbf{X}^{\top}\right)
$$

where $\mathbf{X}$ and $\mathbf{Y} \in \mathbf{M}_{d \times n}(\mathbb{R})$ consist of the embeddings of the bilingual lexicons $\{x_i, y_i\}_{i=1}^n$ in the seed dictionary.

Therefore, there are two steps to learning unsupervised cross-lingual embeddings. The first step is to find an initial solution (also known as the seed dictionary), and the second is to obtain the desired $\mathbf{W}$ according to Eq. (1). The two steps can be done iteratively, by inducing a new seed dictionary from the learned cross-lingual embeddings with the method introduced next, and using the new dictionary to refine the matrix $\mathbf{W}$ (known as the "refinement" process in some literature).

The first step, i.e., finding the initial solution, is crucial because it decides the direction of the following iteration. Much previous work is devoted to finding good initial solutions with different methods, as described in §1. But these methods only exploit word-level information, which is limited and may be inaccurate due to the noise in pretrained monolingual embeddings, leading to mistakes in the initial word-level alignments. Therefore, we propose a novel graph-based coarse-to-fine paradigm to find an initial solution of higher quality, leveraging clique-level information, which we think is richer and more accurate.

# 2.2 Bilingual Lexicon Induction

Based on the learned cross-lingual embeddings, bilingual lexicons can be induced from the mapped spaces via the nearest neighbor (NN) method by calculating the cosine distance between the mapped source embeddings and the target embeddings. However, this method suffers from the "hubness" problem (Dinu et al., 2014): some target words appear as the nearest neighbors of many source words. To mitigate this problem, alternatives to the distance function have been proposed, such as the inverted softmax (Smith et al., 2017), CSLS (Conneau et al., 2017) and margin-based scores (Artetxe and Schwenk, 2018). Among them, CSLS, a special case of margin-based scores, is widely used in state-of-the-art embedding-based BLI methods. Formally, CSLS calculates the distance between the mapped and the target embeddings as follows:

$$
\operatorname{CSLS}(\mathbf{Wx}, \mathbf{y}) = 2\cos(\mathbf{Wx}, \mathbf{y}) - r_{\mathrm{T}}(\mathbf{Wx}) - r_{\mathrm{S}}(\mathbf{y}) \tag{2}
$$

where

$$
r_{\mathrm{T}}(\mathbf{W}\mathbf{x}) = \frac{1}{K} \sum_{y \in \mathcal{N}_{\mathrm{T}}(\mathbf{W}\mathbf{x})} \cos(\mathbf{W}\mathbf{x}, \mathbf{y}) \tag{3}
$$

is the mean similarity of a source embedding $\mathbf{x}$ to its $K$ nearest target neighbors $(\mathcal{N}_{\mathrm{T}}(\mathbf{W}\mathbf{x}))$. Similarly, $r_{\mathrm{S}}(\mathbf{y})$ is the mean similarity of a target embedding $\mathbf{y}$ to its neighbors.

# 3 Methodology

As mentioned before, recent work on bilingual lexicon induction (BLI) is mostly based on unsupervised cross-lingual embeddings, whose key point is to find initial solutions to learn the mapping function. However, previous methods find initial solutions based only on word-level information, which may be limited and inaccurate due to the noise in pre-trained monolingual embeddings.
Therefore, we exploit the information provided by word cliques and devise a coarse-to-fine procedure to denoise and find an initial solution of higher quality. Based on that, we learn the cross-lingual embeddings and induce word translations.

As shown in Figure 1, our method for BLI can be roughly divided into several steps. Given the source and target languages, we first build a graph for each language, with each vertex representing a word. Next, we extract word cliques from the graphs and map the cliques of the two languages in an unsupervised way. Then, we induce the seed dictionary from the bilingual cliques by choosing the respective central words of the aligned cliques. After that, we learn cross-lingual embeddings with the help of the induced seed dictionary. The above steps can be iterated until final convergence. By building word graphs, we can use clique-level information, which is richer and more accurate than what a single word provides. Besides, the whole coarse-to-fine procedure also reduces the adverse effect of the noise in the pre-trained embeddings, because the clique-level alignment (coarse) is more accurate at the beginning and the word alignments inferred from it (fine) are more reasonable. We will next introduce each step.

![](images/1c354c223e9316e5694e291b1f61bc7ec94947692301e9cb1c8a1468bc7831f5.jpg)
Figure 1: Overview of our method. In each iteration, based on the word graphs, we first map the cliques of two languages in an unsupervised way, and then infer the seed dictionary to learn cross-lingual word embeddings.

# 3.1 Word Graph Construction

Given the pre-trained monolingual embeddings, we can derive an edge-weighted graph from them by regarding words as the vertices and their similarities as edges. Formally, the graph is

$$
G = \langle V, E \rangle \tag{4}
$$

where $V$ is the vertex set (the vocabulary of each language) and $E$ is the edge set. The edges are built from monolingual embedding similarities. For example, for language $x$, to define the edges, we first get the word-to-word similarity matrix $\mathbf{M}$ with

$$
\mathbf{M}_{i,j} = \left\{ \begin{array}{ll} \operatorname{CSLS}(\mathbf{x}_{i}, \mathbf{x}_{j}), & i \neq j \\ 0, & i = j \end{array} \right. \tag{5}
$$

where $\mathbf{x}_i$ and $\mathbf{x}_j$ are the normalized embeddings of the two words. We set the main diagonal elements to zero to avoid self-loops. In principle, there is one edge between any two words, with edge weight $\mathbf{M}_{i,j}$, but if the weight of an edge is too small, it provides little information and introduces a lot of noise. Therefore, we prune the non-informative edges with $\mathbf{M}_{i,j}$ less than a threshold $\theta$. The pruning also greatly reduces the computation time of the next step. We build two graphs $G_{x}$ and $G_{y}$ for the two languages $x$ and $y$ in this way.

# 3.2 Clique Extraction and Mapping

Different from previous methods, we infer the initial solution not from word-level information but from word cliques, which we think are richer and more accurate. Following Wang et al. (2016), a "clique" here means a maximal complete subgraph, in which every two distinct vertices are adjacent. Extracting cliques from a given graph is a nontrivial problem and is known to be NP-complete (Karp, 1972). In this paper, we adopt the Bron-Kerbosch (BK) algorithm (Akkoyunlu, 1973) with pivoting (Johnston, 1976) to extract the cliques from a given graph. Having extracted the word cliques of the two languages, we calculate clique embeddings by averaging the embedding vectors of all words in each clique. We choose the word whose embedding is closest to its clique embedding as the central word of each clique. After that, we follow Artetxe et al. (2018b) to map the cliques of the two languages in a fully unsupervised way, i.e., to learn cross-lingual clique embeddings.
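The graph construction of §3.1 and the clique extraction of §3.2 can be sketched in plain Python: build adjacency sets by thresholding the similarity matrix of Eq. (5) at θ, then enumerate maximal cliques with Bron-Kerbosch with pivoting (function and variable names are ours):

```python
def build_graph(sim, theta):
    """Adjacency sets from a similarity matrix: keep edges with sim[i][j] >= theta."""
    n = len(sim)
    return {i: {j for j in range(n) if j != i and sim[i][j] >= theta}
            for i in range(n)}

def maximal_cliques(adj):
    """Bron-Kerbosch with pivoting: enumerate all maximal cliques of the graph."""
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(frozenset(R))    # R cannot be extended: it is maximal
            return
        # Pick the pivot with the most neighbors in P; its neighbors are skipped.
        pivot = max(P | X, key=lambda u: len(adj[u] & P))
        for v in list(P - adj[pivot]):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}                 # move v from candidates ...
            X = X | {v}                 # ... to the excluded set
    bk(set(), set(adj), set())
    return out
```

The paper additionally keeps only cliques with at least three words and relies on an optimized C implementation; this pure-Python version is only meant to make the recursion explicit.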
We use clique extraction rather than clustering methods because (1) a word may fall into different categories because of polysemy, which can be well modeled by cliques, and (2) the BK algorithm is much more efficient than clustering.

# 3.3 Seed Dictionary Induction

§3.2 maps the clique embeddings of the two languages into the same space so that we can retrieve aligned cliques. For each source clique, we choose the nearest target clique according to the CSLS similarity score calculated by Eq. (2). Remember that we have chosen the central word of each clique after the clique extraction in §3.2, so inferring the seed dictionary simply amounts to picking the central words of the aligned cliques, as shown in Figure 1. Note that we remove duplicate seed word pairs in this process.

# 3.4 Cross-lingual Embedding Learning

Based on the initial solution (known as the seed dictionary), we then learn cross-lingual word embeddings following the Procrustes and refinement process introduced in §2.1. After obtaining the learned cross-lingual word embeddings, we rebuild the word graphs from them and iterate the whole process until final convergence, as shown in Figure 1.

Previous methods use a single matrix $\mathbf{W}$ as the transformation function between the embedding spaces of two languages, based on the assumption that the embedding spaces of different languages are isomorphic (Mikolov et al., 2013). However, this is doubtful because the isomorphism assumption may not hold all the time (Søgaard et al., 2018). Fortunately, the cliques we extract naturally provide good local features, because they are usually quite different from each other in meaning, which enables us to investigate alternatives to a single mapping matrix $\mathbf{W}$.
Therefore, after the final iteration, we divide all the cliques into $K$ groups via clustering, i.e., $\{L_i\}_{i=1}^K$, and train an individual matrix $\mathbf{W}_i$ for each of them. We denote this process as "group mapping". Each $\mathbf{W}_i$ is initialized with the learned $\mathbf{W}$ and fine-tuned as

$$
\mathbf{W}_{i} = \underset{\mathbf{W}_{i}}{\arg\min} \left|\left| \mathbf{W}_{i}\mathbf{X}_{i} - \mathbf{Y}_{i} \right|\right|_{\mathrm{F}}, \quad \text{s.t.} \ \mathbf{W}_{i}^{\top}\mathbf{W}_{i} = \mathbf{I} \tag{6}
$$

where $\mathbf{X}_i$ and $\mathbf{Y}_i$ are the embedding matrices of the words belonging to $L_i$. We assign each word to the group whose cliques are closest to its word embedding. The whole training procedure is shown in Algorithm 1.

Algorithm 1: Training procedure of the proposed graph-based coarse-to-fine method.

Input: Monolingual embeddings $\mathbf{X}$, $\mathbf{Y}$ of two languages
Output: Multiple local mapping matrices $\{\mathbf{W}_i\}_{i=1}^K$

while not converged do:
1. Build the word graphs $G_{x}$ and $G_{y}$ by calculating the embedding similarities within each language.
2. Extract cliques $\{C_i^x\}_{i=1}^m$ and $\{C_j^y\}_{j=1}^n$ from each graph using the Bron-Kerbosch algorithm.
3. Calculate each clique embedding by averaging the embeddings of all the words belonging to the clique.
4. Map the source and target cliques with the method of Artetxe et al. (2018b).
5. Build the seed dictionary from the central words of the aligned cliques.
6. Do the Procrustes and refinement iteration described in §2.1 and learn the mapping matrix $\mathbf{W}$.
7. Renew the embeddings of the source language as $\mathbf{X} \coloneqq \mathbf{W}\mathbf{X}$.

8. Divide $\{C_i^x\}_{i=1}^m$ into $K$ groups via clustering. Initialize $\{\mathbf{W}_i\}_{i=1}^K$ with $\mathbf{W}$.
9. Fine-tune each $\mathbf{W}_i$ according to Eq. (6) and do the refinement.
return $\{\mathbf{W}_i\}_{i=1}^K$

# 3.5 Inference

After training, we have the renewed word graphs of both languages as well as their cliques, and a set of group mapping matrices $\{\mathbf{W}_i\}_{i=1}^K$. During inference, for each source word $x$, we first find its closest clique $C_s$ by calculating the similarities of $x$'s embedding to all clique embeddings. Next, we retrieve the group $L_s$ that $C_s$ belongs to and choose the corresponding $\mathbf{W}_s$. Then, we retrieve the translation of $x$ by calculating the CSLS score of $\mathbf{W}_s\mathbf{x}$ and each target embedding $\mathbf{y}$, similar to Eq. (2) introduced in §2.2.

# 4 Experiment

# 4.1 Dataset

Bilingual lexicon induction (BLI) measures word translation accuracy against a gold standard. We report results on the widely used MUSE dataset (Conneau et al., 2017). This dataset consists of monolingual fastText (Bojanowski et al., 2017) embeddings of many languages and dictionaries for many language pairs, divided into training and test sets. The evaluation follows the setup of Conneau et al. (2017).

# 4.2 Implementation Details

# 4.2.1 Pre-processing

We choose the top 10,000 word embeddings to build the word graphs, because the monolingual embeddings of low-frequency words may be insufficiently trained. The embeddings are normalized following Artetxe et al. (2018b). Specifically, we first apply length normalization to the embeddings, and then mean-center each dimension. After that, we do length normalization again to ensure the word embeddings have unit length.

# 4.2.2 Clique Extraction

An efficient algorithm for clique extraction is the Bron-Kerbosch (BK) algorithm, a recursive backtracking algorithm that searches for all maximal cliques in a given graph $G$. The pruning operation described in §3.1 makes the word graph sparse, for which the BK algorithm can be made to run in time $O(dn3^{d/3})$ (Eppstein and Strash, 2011), where $n$ is the number of vertices in $G$, and $d$ is the degeneracy1 of the graph.
We use an efficient public C implementation of the BK algorithm2, and only extract cliques that contain no fewer than three words. According to our observation, the cliques can be extracted within several seconds with this code.

# 4.2.3 Clique and Word Embedding Mapping

In our experiments, the clique embeddings of the two languages are mapped with the method proposed by Artetxe et al. (2018b), using their public code. We initialize $W$ with a random orthogonal matrix. After building the seed dictionary, we first solve the Procrustes problem (Eq. (1)), followed by the refinement process.

# 4.3 Main Results

# 4.3.1 Baselines

We choose several supervised and unsupervised methods as our baselines. The supervised baselines are: (1) the iterative Procrustes method proposed by Smith et al. (2017); (2) the multi-step framework proposed by Artetxe et al. (2018a); (3) a geometric method proposed by Jawanpuria et al. (2019). The unsupervised baselines are: (1) MUSE (Conneau et al., 2017), a GAN-based method followed by a refinement process; (2) a Wasserstein GAN based method combined with distribution matching and back translation, proposed by Xu et al. (2018); (3) a method proposed by Alvarez-Melis and Jaakkola (2018) that views the mapping problem as optimal transport and optimizes the Gromov-Wasserstein distance between embedding spaces; (4) a robust self-learning method proposed by Artetxe et al. (2018b), which leverages intra-linguistic word similarity information to infer initial solutions, followed by a self-learning iteration; (5) a nonadversarial method proposed by Hoshen and Wolf
| Method | en-fr | fr-en | en-de | de-en | en-es | es-en | en-it | it-en | en-ru | ru-en | en-zh | zh-en |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Supervised** |  |  |  |  |  |  |  |  |  |  |  |  |
| (Smith et al., 2017) | 81.1 | 82.4 | 73.5 | 72.4 | 81.4 | 82.9 | 43.1 | 38.0 | 51.7 | 63.7 | 42.7 | 36.7 |
| (Artetxe et al., 2018a) | 80.5 | 83.1 | 73.5 | 73.5 | 80.5 | 83.8 | **61.3** | **39.6** | 50.5 | 67.3 | 32.3 | 43.4 |
| (Joulin et al., 2018) | **83.3** | 84.1 | **79.1** | 76.3 | **84.1** | **86.3** | -- | -- | **57.9** | 67.2 | 45.9 | **46.4** |
| (Jawanpuria et al., 2019) | 82.1 | **84.2** | 74.9 | **76.7** | 81.9 | 85.5 | -- | -- | 52.8 | **67.6** | **49.1** | 45.3 |
| **Unsupervised** |  |  |  |  |  |  |  |  |  |  |  |  |
| (Conneau et al., 2017) | 82.3 | 81.1 | 74.0 | 72.2 | 81.7 | 83.3 | 77.4 | 76.1 | 44.0 | 59.1 | 32.5 | 31.4 |
| (Xu et al., 2018) | 77.9 | 75.5 | 69.3 | 67.0 | 79.5 | 77.8 | 72.6 | 73.4 | -- | -- | -- | -- |
| (Alvarez-Melis and Jaakkola, 2018) | 81.3 | 78.9 | 71.9 | 72.8 | 81.7 | 80.4 | 78.9 | 75.2 | 45.1 | 43.7 | -- | -- |
| (Artetxe et al., 2018b) | 82.3 | 83.6 | 75.1 | 74.3 | 82.3 | 84.7 | 78.8 | 79.5 | 49.2 | **65.6** | -- | -- |
| (Hoshen and Wolf, 2018) | 82.3 | **84.1** | 74.7 | 73.0 | 82.1 | 84.1 | 77.9 | 77.5 | 47.5 | 61.8 | -- | -- |
| Ours (without GM) | 82.7 | 83.4 | **75.5** | 75.7 | 82.6 | 84.8 | 78.6 | 79.5 | 48.9 | 63.9 | 38.1 | 35.2 |
| Ours (with GM) | **82.9** | 83.9 | 75.3 | **76.1** | **82.9** | **85.3** | **79.1** | **79.9** | **49.7** | 64.7 | **38.9** | **35.9** |
+ +(2018), which uses PCA-based alignment to initialize and iteratively refine the alignment. + +# 4.3.2 Results of Common Languages + +We report the result of the BLI task on the MUSE dataset (Conneau et al., 2017). The language pairs we choose are French (fr), German (de), Spanish (es), Italian (it), Russian (ru), Chinese (zh) from and to English(en), as shown in Table 1. + +From Table 1, we find that our proposed method significantly outperforms previous methods on nearly all directions, especially on en-de and en-zh pairs, with the improvements of 2 to 6 points compared with previous state-of-the-art unsupervised approaches. The results on some language pairs such as en-fr, en-de and en-es are remarkably competitive with strong supervised methods. + +We also see that for distant languages, i.e., en-ru and en-zh, our method achieves good results, on which some unsupervised baselines fail to converge. However, the results are still far lagging behind the supervised methods, indicating that the seed dictionaries built with our method may not be perfect for these distant languages. This may root in the original diversified training data of the monolingual embeddings on those pairs. Even so, we still significantly outperforms the MUSE (Conneau et al., 2017) for the en-ru and en-zh pairs. + +# 4.3.3 Results of Morphologically Rich Languages + +We also list results of some morphologically rich languages, i.e., Finnish (fi), Polish (pl) and Turkish (tr) in Table 2, which are selected by Søgaard et al. (2018). They find that these languages are differ + +Table 1: Precision@1 for the MUSE BLI task. All baselines leverage CSLS to be the retrieve metric during inference except for Xu et al. (2018) which uses cosine similarity. The bold numbers indicate the best results of supervised and unsupervised methods. "GM" means applying the group mapping technique described in §3.4. + +
Methoden-fien-plen-tr
Supervised
5k+Pro.+Ref.47.359.558.266.946.359.2
Unsupervised
(Conneau et al., 2017)0.159.853.90.045.40.0
(Søgaard et al., 2018)45.059.157.366.745.461.4
Ours (without GM)47.159.259.768.450.259.7
Ours (with GM)48.160.460.869.051.460.9
+ +Table 2: Precision@1 for the MUSE BLI task of morphologically rich languages. The bold numbers indicate the best results of all methods. Pro.: Procrustes; Ref.: Refinement. + +ent in morphological traits from commonly benchmarked languages which are morphological poor isolating or exclusively concatenating languages. For these languages, Søgaard et al. (2018) leverage identical tokens in both languages as the seeds (Artetxe et al., 2017), followed by the Procrustes solution plus the refinement process, which generates relatively good results. We compare our results with the supervised method, i.e., use 5k dictionary to start up followed by Procrustes + refinement, MUSE (Conneau et al., 2017) and Søgaard et al. (2018) on these languages. + +From the table, we see that the GAN-based method (MUSE) fails to give good results of some directions, maybe due to its unstable training. Using identical tokens as the seed gives good results (Søgaard et al., 2018) and compares with the supervised method. Our method performs well on these morphologically rich languages, and even outperforms the supervised method. We also conduct experiments on other morphologically rich + +languages such as Estonian, Greek, and Hungarian, but fail to converge. + +# 4.3.4 Effect of Group Mapping + +From Table 1 and Table 2, we also find that leveraging the group mapping (GM, §3.4) contributes to bilingual lexicon induction, especially for some distant languages such as en-ru, en-zh, and morphologically rich languages, with the improvement from 0.7 to 1.2 points. This result indicates the assumption that the embedding spaces of different languages are isomorphic may only hold locally. With the help of the cliques we extracted, we can find those locality features via clustering. 
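The pipeline analyzed here (build a word graph over the most frequent words, prune edges whose similarity falls below the threshold $\theta$, and enumerate maximal cliques with the BK algorithm) can be sketched in a few lines. This is an illustration only, with toy vectors and thresholds of our own choosing, not the paper's implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def bron_kerbosch(R, P, X, adj, out):
    # Classic Bron-Kerbosch maximal-clique enumeration with pivoting.
    if not P and not X:
        out.append(sorted(R))
        return
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def extract_cliques(words, vectors, theta=0.8):
    # Word graph: an edge joins two words whose cosine similarity exceeds theta.
    adj = {w: set() for w in words}
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            if cosine(vectors[i], vectors[j]) > theta:
                adj[words[i]].add(words[j])
                adj[words[j]].add(words[i])
    out = []
    bron_kerbosch(set(), set(words), set(), adj, out)
    return [c for c in out if len(c) > 1]   # drop singleton "cliques"

# Toy example: two tight semantic groups emerge as two cliques.
words = ["cat", "dog", "car", "bus"]
vectors = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
cliques = extract_cliques(words, vectors, theta=0.8)
```

A smaller $\theta$ densifies the graph, which is exactly why the text reports both noisier cliques and longer BK running times when the threshold is set too low.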
# 4.4 Sensitivity to Hyper-parameters

Notice that our method depends on three major hyper-parameters: (1) the number of words $N$ we use to build the word graphs; (2) the threshold $\theta$ used to prune edges in the graphs; (3) the number of iterations $I$ we perform. In this subsection, we discuss the impact of these hyper-parameters on the BLI results, taking en2fr as an example. We depict the precision@1 under different hyper-parameter settings in Figure 2.

![](images/5c7da1e4b3a079ece03d0b66f10c4393a1489e8e3f30b303738ca707fac5371a.jpg)
Figure 2: Influence of the hyper-parameters.

From the figure, we find that the performance of our method is sensitive to the choice of $N$ and $\theta$. If $N$ is too small, the extracted cliques cannot reach agreement semantically across languages because of the sparsity of semantic units. If $N$ is too large, improperly trained low-frequency word vectors impair the performance as well. As for $\theta$, if the threshold is too small, much noise is introduced into the word graphs, not only reducing the quality of the extracted cliques but also increasing the execution time of the BK algorithm. For $I$, we find that the performance improves quickly as $I$ increases from 0 to 2, and converges at 5. Too many iterations hurt the performance because, at that point, the seed dictionary inferred from the mapped cliques is redundant.

# 4.5 Influence on Unsupervised MT

It has been shown that BLI can benefit unsupervised machine translation (MT) (Lample et al., 2018; Marie and Fujita, 2018; Ren et al., 2019) by building a Statistical Machine Translation (SMT) system with the induced bilingual lexicons and language models as SMT features, followed by an iterative back-translation process.
In this part, we discuss the influence of different bilingual lexicon induction methods (Conneau et al., 2017; Artetxe et al., 2018b) on the performance of the initial SMT model, and report BLEU scores on the newstest2014 en-fr and en-de tasks in Table 3. Note that we do not perform the subsequent iterative back-translation process. From the table, we see that the performance of unsupervised SMT is limited by the quality of the BLI results. As our method provides better word translations, the initial SMT models benefit accordingly.
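The word-translation tables fed to these SMT systems are obtained by retrieving, for each mapped source embedding, the highest-scoring target word; per the caption of Table 1, the retrieval metric is CSLS (Conneau et al., 2017), which discounts "hub" words by subtracting each vector's mean similarity to its k nearest neighbours on the other side. A minimal sketch with our own toy vectors:

```python
import numpy as np

def csls_scores(mapped_src, tgt, k=10):
    """CSLS (Conneau et al., 2017): 2*cos(x, y) minus the mean cosine of
    each vector's k nearest neighbours on the other side."""
    s = mapped_src / np.linalg.norm(mapped_src, axis=1, keepdims=True)
    t = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    cos = s @ t.T                                        # (n_src, n_tgt)
    # Mean similarity of each source vector to its k nearest targets ...
    r_src = np.sort(cos, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    # ... and of each target vector to its k nearest mapped sources.
    r_tgt = np.sort(cos, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return 2 * cos - r_src - r_tgt

# Toy check: each source vector should retrieve its true counterpart.
src = np.array([[1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
best = np.argmax(csls_scores(src, tgt, k=1), axis=1)
```

Taking the argmax row-wise yields the translation table; plain nearest-neighbour retrieval is the special case without the two penalty terms.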
| BLI Method | en2fr | fr2en | en2de | de2en |
| --- | --- | --- | --- | --- |
| MUSE | 11.74 | 15.34 | 8.14 | 11.03 |
| VecMap | 13.04 | 16.40 | 9.12 | 11.98 |
| Ours | 13.91 | 17.21 | 10.24 | 12.41 |
Table 3: BLEU of the initial unsupervised SMT systems. The SMT features are word translation tables inferred from the different BLI methods, plus pre-trained language models.

# 5 Case Study

# 5.1 Extracted Cliques

In this part, we give some examples of the English cliques extracted with our method, as listed in Table 5. From the table, we see that our method can extract reasonable cliques containing words that share similar meanings. Each clique can be regarded as a semantic unit, which is more explicit than in the PCA-based initialization method (Hoshen and Wolf, 2018), where semantic units are represented by a fixed number of principal components. An interesting phenomenon is that "May" is not in the fifth clique, which groups the words for months. This is because all words in this dataset are lower-cased, so "may" is also a modal verb. We also inspected the extracted cliques of other languages and found them reasonable as well; they are not listed here due to space limitations.
| en | fr (MUSE) | fr (VecMap) | fr (Ours) | zh (MUSE) | zh (VecMap) | zh (Ours) |
| --- | --- | --- | --- | --- | --- | --- |
| and | part(share) | établir(establish) | et(and) | 也(too) | / | 和(and) |
| his | n | matin(morning) | lui(him) | 此(now) | 第六(sixth) | 他(he) |
| south | un(a) | avait(had) | ouest(west) | 台北(Taipei) | (prize) | 北(north) |
| august | flotte(fleet) | mars(march) | mars(march) | 电影(film) | 第五(fifth) | 三月(march) |
| build | paris(Paris) | seule(alone) | faire(make) | 用作(used as) | 了解(understand) | 形成(form) |
+ +Table 4: Examples of seeds produced with different methods. Inside the brackets is the interpretation of the words. + +
| id | words |
| --- | --- |
| 1 | , . -) ( |
| 2 | and also both well addition additionally besides |
| 3 | his himself him he her |
| 4 | northeastern west south southeastern southeast east southwest northeast northwest southwestern north |
| 5 | january march august july september october june april december november february |
| 6 | science scientists scientific biology mathematics physics chemistry sciences |
Table 5: Examples of English cliques extracted from the word graph in the first iteration. The bold words are the central words in their respective cliques.

# 5.2 Seed Dictionary

To demonstrate that our method can produce good initial solutions for learning cross-lingual embeddings, in this part we give an example of the seed dictionary inferred during the first iteration with our method, compared with those inferred by MUSE (Conneau et al., 2017) and VecMap (Artetxe et al., 2018b). The language pairs we choose are en-fr and en-zh, as listed in Table 4. From the table, we find that our method produces initial solutions of higher quality. This is because our coarse-to-fine process can effectively filter out the noise from the start. Notice that the initial solution produced by MUSE in the first iteration is not good, which may be because the GAN-based method is not stable enough at the beginning of training.

# 6 Related Work

Bilingual lexicon induction (BLI) is an important task in machine translation. Recent methods for bilingual lexicon induction are mostly based on unsupervised cross-lingual word embeddings (Zhang et al., 2017; Artetxe et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018). They follow the same procedure: first building an initial solution (a seed dictionary) and then learning a mapping function between the two word embedding spaces. During inference, for a given source word, they find the target word via nearest-neighbor search, calculating the distance between the mapped source embedding and all target word embeddings. The main focus of previous methods is how to find the initial solution, which is the most important part.

Their methods can be divided into three categories according to the way of finding the initial solution.
The first category uses heuristic rules, such as treating identical words as the seed (Artetxe et al., 2017), but this kind of method is restricted to languages sharing vocabulary, or at least the notation of numbers. The second category consists of adversarial methods (Zhang et al., 2017; Conneau et al., 2017; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018). They train a generator to perform the mapping between the two word embedding spaces, and a discriminator to distinguish the mapped embeddings from the target embeddings. However, they suffer from the drawbacks of generative adversarial models, i.e., sensitivity to hyper-parameters, long training times, and lack of interpretability (Hoshen and Wolf, 2018). The third category consists of structure-based methods, which achieve state-of-the-art performance on BLI. They leverage either intra-linguistic word similarity information (Artetxe et al., 2018b) or the principal components of monolingual word embeddings (Hoshen and Wolf, 2018), but they infer initial solutions based only on word-level information, which is limited and prone to noise due to the insufficient training of the pre-trained embeddings. Different from these methods, ours leverages clique-level information, which is richer and more accurate, and uses a coarse-to-fine procedure to reduce the adverse effect of the noise mentioned above.

# 7 Conclusion

In this paper, we propose a novel graph-based coarse-to-fine paradigm for unsupervised bilingual lexicon induction. Our method uses clique-level information and reduces the adverse effect of noise in the pre-trained embeddings. The experiments show that our method significantly improves bilingual lexicon induction performance after several iterations compared with strong baselines, even for distant language pairs. In the future, we will consider combining our method with Graph Neural Networks to update the word graphs we build.
+ +# Acknowledgments + +This work is supported in part by National Key R&D Program of China AAA0102301, and NSFC 61925203 & U1636210 & 61421003. + +# References + +Eralp Abdurrahim Akkoyunlu. 1973. The enumeration of maximal cliques of large graphs. SIAM Journal on Computing, 2(1):1-6. +David Alvarez-Melis and Tommi Jaakkola. 2018. Gromov-wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on EMNLP, pages 1881-1890. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of ACL (Volume 1: Long Papers), pages 451-462. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Thirty-Second AAAI Conference on Artificial Intelligence. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of ACL (Volume 1: Long Papers), pages 789-798. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018c. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on EMNLP, Brussels, Belgium. Association for Computational Linguistics. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of ACL. +Mikel Artetxe and Holger Schwenk. 2018. Margin-based parallel corpus mining with multilingual sentence embeddings. arXiv preprint arXiv:1811.01136. + +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146. +Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. 
arXiv preprint arXiv:1710.04087. +Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2014. Improving zero-shot learning by mitigating the hubness problem. arXiv preprint arXiv:1412.6568. +David Eppstein and Darren Strash. 2011. Listing all maximal cliques in large sparse real-world graphs. In International Symposium on Experimental Algorithms, pages 364-375. Springer. +Yedid Hoshen and Lior Wolf. 2018. Non-adversarial unsupervised word translation. In Proceedings of the 2018 Conference on EMNLP, pages 469-478. +Pratik Jawanpuria, Arjun Balgovind, Anoop Kunchukuttan, and Bamdev Mishra. 2019. Learning multilingual word embeddings in latent metric space: a geometric approach. Transactions of the Association for Computational Linguistics, 7:107-120. +HC Johnston. 1976. Cliques of a graph-variations on the bron-kerbosch algorithm. International Journal of Computer & Information Sciences, 5(3):209-238. +Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on EMNLP, pages 2979-2984. +Richard M Karp. 1972. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85-103. Springer. +Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on EMNLP, pages 5039-5049. +Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Bilingual word representations with monolingual quality in mind. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 151-159. +Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703. +Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. 
arXiv preprint arXiv:1309.4168.
Aitor Ormazabal, Mikel Artetxe, Gorka Labaka, Aitor Soroa, and Eneko Agirre. 2019. Analyzing the limitations of cross-lingual word embedding mappings. In Proceedings of the 57th Annual Meeting of ACL.
Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with SMT as posterior regularization. In Thirty-Third AAAI Conference on Artificial Intelligence.
Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.
Anders Søgaard, Sebastian Ruder, and Ivan Vulić. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th Annual Meeting of ACL (Volume 1: Long Papers), pages 778-788.
Rui Wang, Hai Zhao, Sabine Ploux, Bao-Liang Lu, and Masao Utiyama. 2016. A bilingual graph-based semantic model for statistical machine translation. In IJCAI, pages 2950-2956.
Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of NAACL: Human Language Technologies, pages 1006-1011.
Ruochen Xu, Yiming Yang, Naoki Otani, and Yuexin Wu. 2018. Unsupervised cross-lingual transfer of word embedding spaces. In Proceedings of the 2018 Conference on EMNLP, pages 2465-2474.
Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Adversarial training for unsupervised bilingual lexicon induction. In Proceedings of the 55th Annual Meeting of ACL (Volume 1: Long Papers), pages 1959-1970.
\ No newline at end of file diff --git a/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/full.md b/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0c9dd312b74f4faff4a6060282bc67754f1d1d84 --- /dev/null +++ b/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/full.md @@ -0,0 +1,325 @@

# Agreement Prediction of Arguments in Cyber Argumentation for Detecting Stance Polarity and Intensity

Joseph W Sirrianni, Xiaoqing "Frank" Liu, Douglas Adams

University of Arkansas, Fayetteville, AR, USA.
{jwsirria, frankliu, djadams}@uark.edu

# Abstract

In online debates, users express different levels of agreement/disagreement with one another's arguments and ideas. Often, levels of agreement/disagreement are implicit in the text and must be predicted to analyze collective opinions. Existing stance detection methods predict the polarity of a post's stance toward a topic or post, but do not consider the stance's degree of intensity. We introduce a new research problem, stance polarity and intensity prediction in response relationships between posts. This problem is challenging because differences in stance intensity are often subtle and require nuanced language understanding. Cyber argumentation research has shown that incorporating both stance polarity and intensity data in online debates leads to better discussion analysis. We explore five different learning models: Ridge-M regression, Ridge-S regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP for predicting stance polarity and intensity in argumentation. These models are evaluated using a new dataset for stance polarity and intensity prediction collected using a cyber argumentation platform. The SVR-RF-R model performs best for predicting stance polarity, with an accuracy of $70.43\%$, and intensity, with an RMSE of 0.596. This work is the first to train models for predicting a post's stance polarity and intensity in one combined value in cyber argumentation with reasonably good accuracy.

# 1 Introduction

Many major social media and networking sites, such as Facebook, Twitter, and Wikipedia, have taken over as the new public forum for people to discuss and debate issues of national and international importance. With more participants in these debates than ever before, the volume of unstructured discourse data continues to increase, and the need for automatic processing of this data is pressing.
A critical task in processing online debates is to automatically determine the different argumentative relationships between online posts in a discussion. These relationships typically consist of a stance polarity (i.e., whether a post is supporting, opposing, or neutral toward another post) and the degree of intensity of the stance.

Automatically determining these types of relationships from a given text is a goal in both stance detection and argumentation mining research. Stance detection models seek to automatically determine a text's stance polarity (Favoring, Opposing, or Neutral) toward another text or topic based on its textual information (Mohammad et al., 2016). Likewise, argumentation mining seeks to determine the stance relationship (Supporting, Attacking, or Neutral) between argumentation components in a text (Stede and Schneider, 2018). However, in both cases, attention is paid only to the stance's polarity, while the intensity of the relationship is often ignored. Some studies have tried to incorporate intensity into their predictions by expanding the number of classes to predict (Strongly For, For, Other, Against, and Strongly Against); however, this expansion lowered their classification performance considerably compared to classification without intensity (Sobhani et al., 2015). Thus, effective incorporation of stance intensity into stance classification remains an open issue.

Research in Cyber Argumentation has shown that incorporating both stance polarity and intensity information into online discussions improves the analysis of discussions and of the various phenomena that arise during a debate, including opinion polarization (Sirrianni et al., 2018) and identification of outlier opinions (Arvapally et al., 2017), compared to using stance polarity alone.
Thus, automatically identifying both a post's stance polarity and intensity allows these powerful analytical models to be applied to unstructured debate data from platforms such as Twitter, Facebook, Wikipedia, comment threads, and online forums.

To that end, in this paper we introduce a new research problem, stance polarity and intensity prediction in a responsive relationship between posts, which aims to predict a text's stance polarity and intensity combined into a single continuous agreement value. Given an online post A that replies to another online post B, we predict the stance polarity and intensity value of A toward B using A's (and sometimes B's) textual information. The stance polarity and intensity value is continuous and bounded from -1.0 to +1.0, where the value's sign (positive, negative, or zero) corresponds to the text's stance polarity (favoring, opposing, or neutral) and the value's magnitude (0 to 1.0) corresponds to the text's stance intensity.

Stance polarity and intensity prediction encapsulates stance detection within its problem definition and is thus a more difficult problem to address. While stance polarity can be identified through specific keywords (e.g., "agree", "disagree"), intensity is a much fuzzier concept. The difference between strong opposition and weak opposition is often expressed through subtle word choices and conversational behaviors. Thus, to accurately predict agreement intensity, a learned model must understand the nuances of word choice in the context of the discussion.

We explore five machine learning models for agreement prediction, adapted from the top-performing models for stance detection: Ridge-M regression, Ridge-S regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP. These models were adapted from Mohammad et al. (2016), Sobhani et al. (2016), Mourad et al. (2018), Wei et al. (2016), and Dey et al. (2018), respectively.
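The sign/magnitude encoding described above can be made concrete with a small helper (a sketch; the function name is ours, not part of the paper):

```python
def decompose(agreement: float):
    """Split a combined agreement value in [-1.0, +1.0] into the stance
    polarity carried by its sign and the intensity carried by its magnitude."""
    if not -1.0 <= agreement <= 1.0:
        raise ValueError("agreement must lie in [-1.0, +1.0]")
    if agreement > 0:
        polarity = "favoring"
    elif agreement < 0:
        polarity = "opposing"
    else:
        polarity = "neutral"
    return polarity, abs(agreement)
```

For example, a post annotated with -0.8 decomposes into strong opposition: `decompose(-0.8)` returns `("opposing", 0.8)`.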
We evaluated these models on a new dataset for stance polarity and intensity prediction, collected over three empirical studies using our cyber argumentation platform, the Intelligent Cyber Argumentation System (ICAS). This dataset contains over 22,000 online arguments from over 900 users discussing four important issues. In the dataset, each argument is manually annotated by their authoring user with an agreement value. + +Results from our empirical analysis show that the SVR-RF-R ensemble model performed the best for agreement prediction, achieving an RMSE score of 0.596 for stance polarity and intensity predic + +tion, and an accuracy of $70\%$ for stance detection. Further analysis revealed that the models trained for stance polarity and intensity prediction often had better accuracy for stance classification (polarity only) compared to their counterpart stance detection models. This result demonstrates that the added difficulty of detecting stance intensity does not come at the expense of detecting stance polarity. To our knowledge, this is the first time that learning models can be trained to predict an online post's stance polarity and intensity simultaneously. + +The contributions of our work are the following: + +- We introduce a new research problem called stance polarity and intensity prediction, which seeks to predict a post's agreement value that contains both the stance polarity (value sign) and intensity (value magnitude), toward its parent post. +- We apply five machine learning models on our dataset for agreement prediction. Our empirical results reveal that an ensemble model with many hand-crafted features performed the best, with an RMSE of 0.595, and that models trained for stance polarity and intensity prediction do not lose significant performance for stance detection. 
+ +# 2 Related Work + +# 2.1 Stance Detection + +Stance detection research has a wide interest in a variety of different application areas including opinion mining (Hasan and Ng, 2013), sentiment analysis (Mohammad, 2016), rumor veracity (Derczynski et al., 2017), and fake news detection (Lillie and Middelboe, 2019). Prior works have applied stance detection to many types of debate and discussion settings, including congressional floor debates (Burfoot et al., 2011), online forums (Hasan and Ng, 2013; Dong et al., 2017), persuasive essays (Persing and Ng, 2016), news articles (Hanselowski et al., 2018), and on social media data like Twitter (Mohammad et al., 2016). Approaches to stance detection depends on the type of text and relationship the stance is describing. For example, stance detection on Twitter often determines the author's stance (for/against/neutral) toward a proposition or target (Mohammad et al., 2016). In this work, we adapt the features sets and models used on the SemEval 2016 stance detection task Twitter dataset (Mohammad et al., 2016). + +This dataset has many similarities to our data in terms of post length and topics addressed. Approaches to Twitter stance detection include SVMs (Mohammad et al., 2016; Sobhani et al., 2016; Elfardy and Diab, 2016), ensemble classifiers (Tutek et al., 2016; Mourad et al., 2018), convolutional neural networks (Igarashi et al., 2016; Vijayaraghavan et al., 2016; Wei et al., 2016), recurrent neural networks (Zarrella and Marsh, 2016; Dey et al., 2018), and deep learning approaches (Sun et al., 2018; Sobhani et al., 2019). Due to the size of the dataset, the difference in domain, and time constraints, we did not test Sun et al. (2018)'s model in this work, because we could not gather sufficient argument representation features. 
+ +# 2.2 Argumentation Mining + +Argumentation mining is applied to argumentative text to identify the major argumentative components and their relationships to one another (Stede and Schneider, 2018). While stance detection identifies the relationship between an author's stance toward a concept or target, argumentation mining identifies relationships between arguments, similar to our task in agreement prediction. However, unlike our task, argumentation mining typically defines arguments based on argument components, instead of treating an entire post as a single argument. In argumentation mining, a single text may contain many arguments. + +The major tasks of argumentation mining include: 1) identify argumentative text from the non-argumentative text, 2) classify argumentation components (e.g., Major Claim, Claims, Premise, etc.) in the text, 3) determine the relationships between the different components, and 4) classify the relationships as supporting, attacking, or neutral (Lippi and Torroni, 2016). End-to-end argument mining seeks to solve all the argumentation mining tasks at once (Persing and Ng, 2016; Eger et al., 2017), but most research focuses on one or two tasks at once. The most pertinent task to this work is the fourth task (though often times this task is combined with task 3). Approaches to this task include using textual entailment suites with syntactic features (Boltužić and Šnajder, 2014), or machine learning classifiers with different combinations of features including, structural and lexical features (Persing and Ng, 2016), sentiment features (Stab and Gurevych, 2017), and Topic modeling features (Nguyen and Litman, 2016). We use many of these + +types of features in our Ridge-S and SVR-RF-R models. 
+ +# 2.3 Cyber Argumentation Systems + +Cyber argumentation systems help facilitate and improve understanding of large-scale online discussions, compared to other platforms used for debate, such as social networking and media platforms, online forums, and chat rooms (Klein, 2011). These systems typically employ argumentation frameworks, like IBIS (Kunz and Rittel, 1970) and Toulmin's structure of argumentation (Toulmin, 2003), to provide structure to discussions, making them easier to analyze. More specialized systems include features that improve the quality and understanding of discussions. Argumentation learning systems teach the users effective debating skills using argumentation scaffolding (Bell and Linn, 2000). More complex systems, like ICAS and the Deliberatorium (Klein, 2011), provide several integrated analytical models that identify and measure various phenomena occurring in the discussions. + +# 3 Background + +# 3.1 ICAS Platform + +Our research group has developed an intelligent cyber argumentation system, ICAS, for facilitating large scale discussions among many users (Liu et al., 2007, 2010, 2011; Chanda and Liu, 2015; Liu et al., 2012; Arvapally et al., 2017; Sirrianni et al., 2018). ICAS an updated version of the OLIAS argumentation system (Arvapally and Liu, 2013). + +ICAS implements an IBIS structure (Kunz and Rittel, 1970), where each discussion is organized as a tree. In ICAS, discussions are organized by issue. Issues are important problems that need to be addressed by the community. Under each issue are several positions, which act as solutions or approaches toward solving the issue. Under each position, there are several arguments that argue for or against the parent position. Under these arguments, there can be any number of follow-on arguments that argue for or against the parent argument, and so on until the discussion has ended. Figure 1 provides a visualization of the discussion tree structure ICAS employs. 
+ +In ICAS, arguments have two components: a textual component and an agreement value. The textual component is the written argument the user makes. ICAS does not limit the length of argument text; however, in practice, the average argument length is about 160 characters, similar to the length of a tweet. + +![](images/832041e728c9b89d0aa799da5bcef7f8f469c4559215f3880d1d3672378c038b.jpg) +Figure 1: An example discussion tree structure used in ICAS. The value above an argument is its agreement value. + +The agreement value is a numerical value that indicates the extent to which an argument agrees or disagrees with its parent. Unlike other argumentation systems, this system allows users to express partial agreement or disagreement with other posts. Users are allowed to select agreement values from a range of -1 to +1 at 0.2 increments that indicate different partial agreement values. Positive values indicate partial or complete agreement, negative values indicate partial or complete disagreement, and a value of 0 indicates indifference or neutrality. These agreement values represent each post's stance polarity (the sign) and intensity (the magnitude). These agreement values are distinctly different from other argumentation weighting schemes where argument weights represent the strength or veracity of an argument (see Amgoud and Ben-Naim, 2018; Levow et al., 2014). Each agreement value is selected by the author of the argument, and selecting one is a mandatory step when posting. + +# 4 Models for Stance Polarity and Intensity Prediction + +This section describes the models we applied to the stance polarity and intensity prediction problem. We applied five different models, adapted from top-performing stance classification models based on their performance and approach on the SemEval 2016 stance classification Twitter dataset (Mohammad et al., 2016). + +# 4.1 Ridge Regressions (Ridge-M and Ridge-S) + +Our first two models use a linear ridge regression as the underlying model.
We created two ridge regression models using two feature sets. + +The first ridge model (Ridge-M) used the feature set described in Mohammad et al. (2016) as their benchmark. They used word 1-3 grams and character 2-5 grams as features. We filtered out English stop words, tokens that existed in more than $95\%$ of posts, and tokens that appeared in fewer than $0.01\%$ of posts for word N-grams and fewer than $10\%$ of posts for character N-grams. There were a total of 838 N-gram features for the Ridge-M model. + +The second ridge model (Ridge-S) used the feature set described in Sobhani, Mohammad, and Kiritchenko's follow-up paper (2016). In that paper, they found the sum of trained word embeddings with 100 dimensions, in addition to the N-gram features outlined by Mohammad et al. (2016), to be the best-performing feature set. We trained a word-embedding (skip-gram word2vec) model on the dataset. For each post, the embeddings of its tokens were summed and normalized by the total number of tokens in the post to generate the word embedding features. Ridge-S had 938 total features. + +# 4.2 Ensemble of Regressions (SVR-RF-R) + +This model (SVR-RF-R) consisted of an average-voting ensemble containing three different regression models: an Epsilon-Support Vector Regression model, a Random Forest regressor, and a ridge regression model. It is an adaptation of the ensemble model presented by Mourad et al. (2018) for stance detection. Their model used a large assortment of features, including linguistic features, topic features, tweet-specific features, label-based features, word embedding features, similarity features, context features, and sentiment lexicon features. They then used the feature selection technique ReliefF (Kononenko et al., 1997) to select the top 50 features for usage. Due to the changes in context (Twitter vs.
Cyber Argumentation), we constructed a subset of their feature set, which included the following features: + +- Linguistic Features: Word 1-3 grams as binary vectors, count vectors, and tf-idf weighted vectors. Character 1-6 grams as count vectors. POS tag 1-3 grams, both concatenated with their words (e.g., word1_pos1 ...) and appended to the end of the post (e.g., word1, word2, ..., POS1, POS2, ...). +- Topic Features: Topic membership of each post after LDA topic modeling (Blei et al., 2003) had been run on the entire post corpus. +- Word Embedding Features: The 100-dimensional word embedding sums for each word in a post and the cosine similarity between the summed embedding vectors for the target post and its parent post. +- Lexical Features: Sentiment lexicon features outlined in Mourad et al. (2018), excluding the DAL and NRC Hashtag lexicons. + +We tested using the top 50 features selected using ReliefF, reducing the feature size to 50 using Principal Component Analysis (PCA), and using the full feature set. We found that the full feature set (2855 features in total) performed significantly better than the ReliefF and PCA feature sets, so we used the full feature set in our final model. + +# 4.3 pkudblab-PIP + +The highest performing CNN model applied to the SemEval 2016 benchmark dataset, pkudblab, was submitted by Wei et al. (2016). Their model applied a convolutional neural network to the word embedding features of a tweet. We modified this model for agreement prediction; the resulting model's (pkudblab-PIP) architecture is shown in Figure 2. We used pre-trained embeddings (300-dimensional) published by the word2vec team (Mikolov et al., 2013). Given an input of word embeddings of size $d$ by $|s|$, where $d$ is the size of the word embedding and $|s|$ is the normalized post length, the input was fed into a convolution layer. The convolution layer contained filters with window sizes $(m)$ of 3, 4, and 5 words, with 100 filters $(n)$ each.
The convolution outputs were then passed to a max-pooling layer and finally through a fully-connected sigmoid layer to produce the final output value. We trained the model using a mean squared error loss function and used a $50\%$ dropout layer after the max-pooling layer. + +# 4.4 T-PAN-PIP + +The RNN model (T-PAN-PIP) is adapted from the T-PAN framework by Dey et al. (2018), which was one of the highest performing neural network models on the SemEval 2016 benchmark dataset. The T-PAN framework uses a two-phase LSTM model with attention, based on the architecture proposed by Du et al. (2017). We adapted this model for regression by making some modifications. + +![](images/cc48b288c2c1e55f22bfdcb2a31cee15c21eea96979a61796b2b5eaa9b5f026f.jpg) +Figure 2: The architecture of pkudblab-PIP for stance polarity and intensity prediction. + +Our adapted model (T-PAN-PIP) uses only a single-phase architecture, resembling Du et al.'s (2017) original design, where the output is the predicted agreement value instead of a categorical prediction. + +Figure 3 illustrates the architecture of T-PAN-PIP. It uses word embedding features (with embedding size 300) as input to two network branches. The first branch feeds the word embeddings into a bi-directional LSTM (Bi-LSTM) with 256 hidden units, which outputs the hidden states for each direction (128 hidden units each) at every time step. The other branch appends the average topic embedding from the topic text (i.e., the text of the post that the input is responding to) to the input embeddings and feeds that input into a fully-connected softmax layer to calculate what Dey et al. (2018) called the "subjectivity attention signal." The subjectivity attention signals are a linear mapping of each input word's target-augmented embedding to a scalar value that represents the importance of each word in the input relative to the target's text. These values serve as the attention weights that are used to scale the hidden state output of the Bi-LSTM.
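The attention weighting just described — softmax-normalized scalar signals averaged over the scaled Bi-LSTM hidden states — can be sketched in NumPy. The values below are random stand-ins; only the dimensions follow the text:

```python
import numpy as np

rng = np.random.default_rng(0)
num_words = 12
# Bi-LSTM hidden states: one 256-dim vector (128 per direction) per word.
h = rng.normal(size=(num_words, 256))

# Subjectivity attention signals from the softmax branch (random stand-ins).
logits = rng.normal(size=num_words)
a = np.exp(logits) / np.exp(logits).sum()   # softmax -> weights summing to 1

# Scale each hidden state by its attention weight and average over words,
# yielding a single 256-dim representation of the post.
Q = (a[:, None] * h).sum(axis=0) / num_words
```

The resulting vector `Q` is what feeds the final fully-connected output layer.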
+ +The weighted attention application layer combines the attention weights with their corresponding hidden state outputs, as shown in (1). + +$$ +Q = \frac {1}{| s |} \sum_ {s = 0} ^ {| s | - 1} a _ {s} h _ {s} \tag {1} +$$ + +where $a_{s}$ is the attention signal for word $s$, $h_{s}$ is the hidden layer output of the Bi-LSTM for word $s$, $|s|$ is the total number of words, and $Q$ is the resulting attention-weighted vector of size 256, the size of the hidden unit output of the Bi-LSTM. The output $Q$ feeds into a fully-connected sigmoid layer and outputs the predicted agreement value. We train the model using a mean absolute error loss function. + +![](images/227fc8be7b4826fb201e75fe95e2e8270ef42fde9b8f65a00db7f9fcf76cf011.jpg) +Figure 3: The architecture of T-PAN-PIP for stance polarity and intensity prediction. + +# 5 Empirical Dataset Description + +The dataset was constructed from three separate empirical studies conducted in Fall 2017, Spring 2018, and Spring 2019. In each study, a class of undergraduate students in an entry-level sociology class was offered extra credit to participate in discussions in ICAS. Each student was asked to discuss four different issues relating to the content they were covering in class. The issues were: 1) Healthcare: Should individuals be required by the government to have health insurance? 2) Same-Sex Adoption: Should same-sex married couples be allowed to adopt children? 3) Guns on Campus: Should students with a concealed carry permit be allowed to carry guns on campus? 4) Religion and Medicine: Should parents who believe in healing through prayer be allowed to deny medical treatment for their child? + +Under each issue, there were four positions to discuss (with the exception of the Healthcare issue in Fall 2017, which had only three positions).
The positions were constructed such that there was one strongly conservative position, one moderately conservative position, one moderately liberal position, and one strongly liberal position. The students were asked to post ten arguments under each issue. + +The combined dataset contains 22,606 total arguments from 904 different users. Of those arguments, 11,802 are replying to a position, and 10,804 are replying to another argument. The average depth of a reply thread tends to be shallow, with $52\%$ of arguments on the first level (reply to position), $44\%$ on the second level, $3\%$ on the third level, and $1\%$ on the remaining levels (deepest level was 5). + +When a student posted an argument, they were required to annotate their argument with an agreement value. Overall, argument agreement values skew positive. Figure 4 displays a histogram of the agreement values for the arguments in the dataset. + +![](images/50c3181498f749377cb0f89191960bf59e3ba46f3724fc90b96d813968eaa5fa.jpg) +Figure 4: A histogram of the different agreement values across all of the issues in the cyber argumentation dataset. + +The annotated labels in this dataset are self-labeled, meaning that when a user replies to a post, they provide their own stance polarity and intensity label. The label is a reflection of the author's intended stance toward a post, where the post's text is a semantic description of that intention. While these label values are somewhat subjective, they are an accurate reflection of their author's agreement, which we need to capture to analyze opinions in the discussion. Self-annotated datasets like this one have been used in stance detection for argumentation mining in the past (see Boltužić and Šnajder, 2014; Hasan and Ng, 2014). + +# 6 Empirical Study Evaluation + +# 6.1 Agreement Prediction Problem + +In this study, we want to evaluate the models' performance on the stance polarity and intensity prediction problem.
We separated the dataset into training and testing sets using a 75-25 split. For the neural network models (pkudblab-PIP and T-PAN-PIP), we separated out $10\%$ of the training set as a validation set to detect over-fitting. The split was performed randomly without consideration of the discussion issue. Each issue was represented proportionally in the training and testing data sets with a maximum discrepancy of less than $1\%$. + +For evaluation, we want to see how well the regression models are able to predict the continuous agreement value for a post. We report the root-mean-squared error (RMSE) for the predicted results. + +# 6.2 Agreement Prediction Models for Stance Detection + +We wanted to investigate whether training models for agreement prediction would degrade their performance on stance detection. Ideally, these models should learn to identify stance intensity without impacting their ability to identify stance polarity. + +To test this, we compared each model to the original stance classification model described in its source paper. Thus, Ridge-M is compared with an SVM trained on the same feature set (SVM-M), Ridge-S is compared to a linear SVM trained on the same feature set (SVM-S), SVR-RF-R is compared to a majority-voting ensemble of a linear SVM, Random Forest, and Naïve Bayes classifier using the same feature set (SVM-RF-NB), pkudblab-PIP is compared to the original pkudblab model trained using a softmax cross-entropy loss function, and T-PAN-PIP is compared to the original T-PAN model trained using a softmax cross-entropy loss function. We trained the classification models for stance detection by converting the continuous agreement values into categorical polarity values: all positive agreement values are classified as Favoring, all negative values as Opposing, and zero values as Neutral.
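The conversion rule just stated can be written as a small helper (a sketch of the rule, not the authors' code):

```python
def to_polarity(agreement: float) -> str:
    """Map a continuous agreement value in [-1, 1] to a stance polarity class."""
    if agreement > 0:
        return "Favoring"
    if agreement < 0:
        return "Opposing"
    return "Neutral"

labels = [to_polarity(v) for v in (0.6, -0.2, 0.0, 1.0)]
# -> ["Favoring", "Opposing", "Neutral", "Favoring"]
```

The same mapping is applied both to the gold agreement values (to train the classifiers) and to the regressors' continuous outputs (to score them on stance detection).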
In the dataset, 12,258 arguments are Favoring $(54\%)$ , 8962 arguments are Opposing $(40\%)$ , and 1386 arguments are Neutral $(6\%)$ . To assess the stance detection performance of the models trained for agreement prediction, we converted the predicted continuous agreement values output by the models into the categorical values using the same method. + +For evaluation, we report both the accuracy value of the predictions and the macro-average F1-scores for the Favoring and Opposing classes on the testing set. This scoring scheme allows us to treat the Neutral category as a class that is not of interest (Mourad et al., 2018). + +# 7 Evaluation Results + +# 7.1 Agreement Prediction Results + +The results for agreement prediction are shown in Table 1. A mean prediction baseline model is shown in the table to demonstrate the difficulty associated with the problem. The neural network models perform worse than both the ridge regression and ensemble models. Ridge-S performed slightly better than Ridge-M due to the sum word + +
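Scoring predictions under this scheme might look as follows, using scikit-learn (which the appendix notes was used for the models); restricting the macro average to the Favoring and Opposing labels treats Neutral as a class not of interest:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy gold labels and predictions, after converting agreement values to classes.
y_true = ["Favoring", "Opposing", "Neutral", "Favoring", "Opposing"]
y_pred = ["Favoring", "Favoring", "Neutral", "Favoring", "Opposing"]

accuracy = accuracy_score(y_true, y_pred)
# Macro-average F1 computed over only the Favoring and Opposing classes.
macro_f1 = f1_score(y_true, y_pred,
                    labels=["Favoring", "Opposing"], average="macro")
```

The `labels` argument is what excludes Neutral from the F1 average while still counting Neutral predictions toward accuracy.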
| Model | RMSE |
| --- | --- |
| Baseline (Mean) | 0.718 |
| Ridge-M | 0.620 |
| Ridge-S | 0.615 |
| **SVR-RF-R** | **0.596** |
| pkudblab-PIP | 0.657 |
| T-PAN-PIP | 0.623 |
+ +Table 1: The results of the regression models for the agreement prediction task. The best result is bolded. + +embedding features. The best performing model was the SVR-RF-R model, with an RMSE of 0.596. + +We performed feature analysis on the SVR-RF-R model using ablation testing (i.e., removing one feature set from the model at a time). Results showed that removing a single feature set of each type (word N-grams, character N-grams, POS N-grams, topic features, lexicon features, word embedding features, and the cosine similarity feature) impacted the RMSE of the model by less than 0.005. Using only the N-gram features resulted in an RMSE of 0.599, only 0.0047 worse than the full feature set. This result matches the difference between Ridge-M (which uses only N-gram features) and Ridge-S (which includes N-gram and word embedding features). Since the N-gram features contain most of the textual information, they had the most impact on the model, while the additional features had smaller effects on the model's accuracy. + +# 7.2 Agreement Prediction Models for Stance Detection Results + +We compare the models trained on the agreement prediction task to their classification model counterparts in terms of performance on the stance detection task. Tables 2 and 3 show the comparison between the models in terms of accuracy and (macro) F1-score. + +SVR-RF-R has the best accuracy and F1-score for stance detection, outperforming its classifier counterpart (SVM-RF-NB) by $2.12\%$ in accuracy and $+0.016$ in F1-score. Three of the models trained for stance polarity and intensity prediction, SVR-RF-R, Ridge-S, and T-PAN-PIP, outperformed their classifier counterparts in accuracy by $1 - 2\%$ and in F1-score by $+0.009$ on average. The other two, Ridge-M and pkudblab-PIP, slightly underperformed their classifier counterparts in accuracy by $-0.36\%$ and in F1-score by $-0.011$ on average.
| Stance Polarity Prediction Model | Accuracy | Polarity and Intensity Prediction Model | Accuracy | Diff |
| --- | --- | --- | --- | --- |
| Baseline (Most Frequent) | 54.36% | Baseline (Mean) | 54.36% | 0.00% |
| SVM-M | 68.48% | Ridge-M | 68.16% | -0.32% |
| SVM-S | 67.63% | Ridge-S | 68.84% | +1.21% |
| SVM-RF-NB | 68.31% | SVR-RF-R | 70.43% | +2.12% |
| pkudblab | 67.28% | pkudblab-PIP | 66.89% | -0.39% |
| T-PAN | 65.55% | T-PAN-PIP | 66.64% | +1.09% |
+ +Table 2: The classification accuracy of the stance polarity prediction models and the stance polarity and intensity prediction models for Stance Detection (polarity only) classification. + +
| Stance Polarity Prediction Model | F1-Score | Polarity and Intensity Prediction Model | F1-Score | Diff |
| --- | --- | --- | --- | --- |
| Baseline (Most Frequent) | 0.352 | Baseline (Mean) | 0.352 | 0.000 |
| SVM-M | 0.701 | Ridge-M | 0.695 | -0.006 |
| SVM-S | 0.697 | Ridge-S | 0.703 | +0.006 |
| SVM-RF-NB | 0.705 | SVR-RF-R | 0.721 | +0.016 |
| pkudblab | 0.688 | pkudblab-PIP | 0.672 | -0.016 |
+ +Table 3: The F1-scores of the stance polarity prediction models and the stance polarity and intensity prediction models for Stance Detection (polarity only) classification. + +# 8 Discussion + +The models behaved very similarly on the agreement prediction problem, where the difference between the best performing model and the worst performing model is only 0.061. Overall, the best model received an RMSE of 0.596, which is reasonably good but can be improved. + +T-PAN-PIP had the worst performance, which is surprising, as it was the only model to include the parent post's information into its prediction, which should have helped improve its performance. It is possible that its architecture is unsuitable for agreement prediction; other architectures have been deployed that include a post's parent and ancestors into a stance prediction, which might be more suitable for agreement prediction. Future model designs should better incorporate a post's parent information into their predictions. + +The difference in performance between the agreement prediction models and the classification models on the stance detection task was small and sometimes better. This demonstrates that the models learning to identify stance intensity do so without significant loss of performance in identifying stance polarity. + +Larger gains in performance will likely require information about the post's author. Some post + +authors will state strong levels of agreement in their statements, but annotate their argument with weaker agreement levels. For example, one author wrote, "Agree completely. Government should stay out of healthcare." and annotated that argument with an agreement value of $+0.6$ . The authors were instructed on how to annotate their posts, but the annotations themselves were left to the post's author's discretion. Thus including author information into our models would likely improve the stance polarity and intensity prediction results. 
+ +# 9 Conclusion + +We introduce a new research problem, stance polarity and intensity prediction in a responsive relationship between posts, which predicts both an online post's stance polarity and its intensity value toward another post. This problem encapsulates stance detection and adds the additional difficulty of detecting subtle differences in intensity in the text. We introduced a new large empirical dataset for agreement prediction, collected using a cyber argumentation platform. We implemented five models, adapted from top-performing stance detection models, and evaluated them on the new dataset for agreement prediction. Our empirical results demonstrate that the ensemble model SVR-RF-R performs the best for agreement prediction and that models trained for agreement prediction learn to differentiate between intensity values without degrading their performance in determining stance polarity. Research into this new problem of agreement prediction will allow for a more nuanced annotation and analysis of online debate. + +# Acknowledgments + +We would like to acknowledge Md Mahfuzer Rahman and Najla Alhuniyan for their efforts in developing the ICAS platform and planning the empirical studies. We are also grateful to the anonymous reviewers for their constructive input during the review process. + +# References + +Leila Amgoud and Jonathan Ben-Naim. 2018. Weighted Bipolar Argumentation Graphs: Axioms and Semantics. In *IJCAI'18 Proceedings of the 27th International Joint Conference on Artificial Intelligence*, pages 5194–5198, Stockholm, Sweden. +Ravi S. Arvapally, Xiaoqing Frank Liu, Fiona Fui-Hoon Nah, and Wei Jiang. 2017. Identifying outlier opinions in an online intelligent argumentation system. *Concurrency and Computation: Practice and Experience*, page e4107. +Ravi Santosh Arvapally and Xiaoqing (Frank) Liu. 2013. Polarisation assessment in an intelligent argumentation system using fuzzy clustering algorithm for collaborative decision support.
4(3):181-208. +Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. 2010. SENTIWORDNET 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation, volume 10, pages 2200 - 2210, Valletta, Malta. +Philip Bell and Marcia C. Linn. 2000. Scientific arguments as learning artifacts: designing for learning from the web with KIE. International Journal of Science Education, 22(8):797-817. +David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(Jan):993-1022. +Filip Boltužić and Jan Šnajder. 2014. Back up your Stance: Recognizing Arguments in Online Discussions. In Proceedings of the First Workshop on Argumentation Mining, pages 49-58, Baltimore, Maryland. Association for Computational Linguistics. +Clinton Burfoot, Steven Bird, and Timothy Baldwin. 2011. Collective Classification of Congressional Floor-debate Transcripts. In Proceedings of the 49th + +Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT '11, pages 1506-1515, Stroudsburg, PA, USA. Association for Computational Linguistics. Event-place: Portland, Oregon. +N. Chanda and X. F. Liu. 2015. Intelligent analysis of software architecture rationale for collaborative software design. In 2015 International Conference on Collaboration Technologies and Systems (CTS), pages 287-294. +Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. SemEval-2017 Task 8: RumourEval: Determining rumour veracity and support for rumours. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 69-76, Vancouver, Canada. ArXiv: 1704.05972. +Kuntal Dey, Ritvik Shrivastava, and Saroj Kaushik. 2018. Topical Stance Detection for Twitter: A Two-Phase LSTM Model Using Attention. 
In Advances in Information Retrieval, Lecture Notes in Computer Science, pages 529-536. Springer International Publishing. +Rui Dong, Yizhou Sun, Lu Wang, Yupeng Gu, and Yuan Zhong. 2017. Weakly-Guided User Stance Prediction via Joint Modeling of Content and Social Interaction. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17, pages 1249-1258, New York, NY, USA. ACM. Event-place: Singapore, Singapore. +Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. +Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural End-to-End Learning for Computational Argumentation Mining. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 11–22, Vancouver, Canada. Association for Computational Linguistics. ArXiv: 1704.06104. +Heba Elfardy and Mona Diab. 2016. CU-GWU Perspective at SemEval-2016 Task 6: Ideological Stance Detection in Informal Text. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 434-439, San Diego, California. Association for Computational Linguistics. +Andreas Hanselowski, Avinesh PVS, Benjamin Schiller, Felix Caspelherr, Debanjan Chaudhuri, Christian M. Meyer, and Iryna Gurevych. 2018. A Retrospective Analysis of the Fake News Challenge Stance Detection Task. arXiv:1806.05180 [cs]. ArXiv: 1806.05180. + +Kazi Saidul Hasan and Vincent Ng. 2013. Stance Classification of Ideological Debates: Data, Models, Features, and Constraints. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 1348 - 1356. +Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? identifying and classifying reasons in ideological debates. 
In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 751-762. Association for Computational Linguistics. +Minqing Hu and Bing Liu. 2004. Mining and Summarizing Customer Reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA. ACM. Event-place: Seattle, WA, USA. +Yuki Igarashi, Hiroya Komatsu, Sosuke Kobayashi, Naoaki Okazaki, and Kentaro Inui. 2016. Tohoku at SemEval-2016 Task 6: Feature-based Model versus Convolutional Neural Network for Stance Detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 401-407, San Diego, California. Association for Computational Linguistics. +Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs]. ArXiv:1412.6980. +Mark Klein. 2011. How to Harvest Collective Wisdom on Complex Problems: An Introduction to the MIT Deliberatorium. +Igor Kononenko, Edvard Simec, and Marko Robnik-Sikonja. 1997. Overcoming the Myopia of Inductive Learning Algorithms with RELIEFF. Applied Intelligence, 7(1):39-55. +Werner Kunz and Horst W J Rittel. 1970. Issues as elements of information systems. volume 131, Berkeley. +Gina-Anne Levow, Valerie Freeman, Alena Hrynkevich, Mari Ostendorf, Richard Wright, Julian Chan, Yi Luan, and Trang Tran. 2014. Recognition of stance strength and polarity in spontaneous speech. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 236-241. ISSN: null. +Anders Edelbo Lillie and Emil Refsgaard Middelboe. 2019. Fake News Detection using Stance Classification: A Survey. arXiv:1907.00181 [cs]. ArXiv: 1907.00181. +Marco Lippi and Paolo Torroni. 2016. Argumentation Mining: State of the Art and Emerging Trends. ACM Trans. Internet Technol., 16(2):10:1-10:25. +X. Liu, R. Wanchoo, and R. S. Arvapally. 2011. Empirical study of an intelligent argumentation system in MCDM. 
In 2011 International Conference on Collaboration Technologies and Systems (CTS), pages 125-133. + +Xiaoqing (Frank) Liu, Eric Christopher Barnes, and Juha Erik Savolainen. 2012. Conflict detection and resolution for product line design in a collaborative decision making environment. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, CSCW '12, pages 1327-1336. ACM. Event-place: Seattle, Washington, USA. +Xiaoqing (Frank) Liu, Ekta Khudkhudia, Lei Wen, Vamshi Sajja, and Ming C. Leu. 2010. An Intelligent Computational Argumentation System for Supporting Collaborative Software Development Decision Making. In Farid Meziane, Sunil Vadera, and Ivan Giannoccaro, editors, Artificial Intelligence Applications for Improved Software Engineering Development: New Prospects, Advances in Computational Intelligence and Robotics, pages 167 - 180. IGI Global. +Xiaoqing (Frank) Liu, Samir Raorane, and Ming C. Leu. 2007. A web-based intelligent collaborative system for engineering design. In W. D. Li, Chris McMahon, S. K. Ong, and Andrew Y. C. Nee, editors, Collaborative Product Design and Manufacturing Methodologies and Applications, Springer Series in Advanced Manufacturing, pages 37-58. Springer London. +Edward Loper and Steven Bird. 2002. NLTK: The Natural Language Toolkit. In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, pages 63 - 70. ArXiv: cs/0205028. +Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. 
Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2015. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. +Andrew Kachites McCallum. 2002. Mallet: A Machine Learning for Language Toolkit. +Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781 [cs]. ArXiv: 1301.3781. +Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. SemEval-2016 Task 6: Detecting Stance in Tweets. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 31-41, San Diego, California. Association for Computational Linguistics. + +Saif M. Mohammad. 2016. Sentiment Analysis: Detecting Valence, Emotions, and Other Affectual States from Text. In Herbert L. Meiselman, editor, Emotion Measurement, pages 201-237. Woodhead Publishing. +Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of Tweets. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 321-327. ArXiv: 1308.6242. +Sara S. Mourad, Doaa M. Shawky, Hatem A. Fayed, and Ashraf H. Badawi. 2018. Stance Detection in Tweets Using a Majority Vote Classifier. In The International Conference on Advanced Machine Learning Technologies and Applications (AMLTA2018), Advances in Intelligent Systems and Computing, pages 375-384. 
Springer International Publishing. +Huy Nguyen and Diane Litman. 2016. Context-aware Argumentative Relation Mining. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1127-1137, Berlin, Germany. Association for Computational Linguistics. +Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12:2825-2830. +Isaac Persing and Vincent Ng. 2016. End-to-End Argumentation Mining in Student Essays. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1384–1394, San Diego, California. Association for Computational Linguistics. +Joseph Sirrianni, Xiaoqing (Frank) Liu, and Douglas Adams. 2018. Quantitative Modeling of Polarization in Online Intelligent Argumentation and Deliberation for Capturing Collective Intelligence. In 2018 IEEE International Conference on Cognitive Computing (ICCC), pages 57-64. +Parinaz Sobhani, Diana Inkpen, and Stan Matwin. 2015. From Argumentation Mining to Stance Classification. In Proceedings of the 2nd Workshop on Argumentation Mining, pages 67-77, Denver, CO. Association for Computational Linguistics. +Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2019. Exploring deep neural networks for multi-target stance detection. Computational Intelligence, 35(1):82-97. + +Parinaz Sobhani, Saif Mohammad, and Svetlana Kiritchenko. 2016. Detecting Stance in Tweets and Analyzing its Interaction with Sentiment. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 159-169, Berlin, Germany. Association for Computational Linguistics.
+Christian Stab and Iryna Gurevych. 2017. Parsing Argumentation Structures in Persuasive Essays. Computational Linguistics, 43(3):619-659.
+Manfred Stede and Jodi Schneider. 2018. Argumentation Mining, volume 11 of Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.
+Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance Detection with Hierarchical Attention Network. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2399-2409, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
+Stephen E. Toulmin. 2003. The Uses of Argument. Cambridge University Press, Cambridge.
+Martin Tutek, Ivan Sekulic, Paula Gombar, Ivan Paljak, Filip Culinovic, Filip Boltuzic, Mladen Karan, Domagoj Alagic, and Jan Šnajder. 2016. TakeLab at SemEval-2016 Task 6: Stance Classification in Tweets Using a Genetic Algorithm Based Ensemble. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 464-468, San Diego, California. Association for Computational Linguistics.
+Prashanth Vijayaraghavan, Ivan Sysoev, Soroush Vosoughi, and Deb Roy. 2016. DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 425-431, San Diego, California. ArXiv: 1606.05694.
+Wan Wei, Xiao Zhang, Xuqin Liu, Wei Chen, and Tengjiao Wang. 2016. pkudblab at SemEval-2016 Task 6: A Specific Convolutional Neural Network System for Effective Stance Detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 384-388, San Diego, California. Association for Computational Linguistics.
+Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis.
In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 347-354.
+Guido Zarrella and Amy Marsh. 2016. MITRE at SemEval-2016 Task 6: Transfer Learning for Stance Detection. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 458-463, San Diego, California. ArXiv: 1606.03784.
+
+# A Appendices
+
+# A.1 Extended Model Description
+
+The following sections give a more detailed description of some of the models used in our research. The models were written using the scikit-learn (Pedregosa et al., 2011) and TensorFlow libraries (Martín Abadi et al., 2015).
+
+# A.1.1 SVR-RF-R Feature Set Description
+
+The SVR-RF-R model used a total of 2855 features. They are listed below.
+
+# Linguistic Features:
+
+- 1-3 word grams as binary vectors, count vectors, and tf-idf weighted vectors. Word grams must appear in at least $1\%$ of posts and no more than $95\%$ of posts.
+- 1-6 character grams as count vectors. Character grams must appear in at least $10\%$ of posts and no more than $95\%$ of posts.
+- 1-3 Part-Of-Speech grams as count vectors. The Part-Of-Speech tags were generated using the NLTK library (Loper and Bird, 2002). The POS tags were used in two formats, with the tags concatenated to their corresponding word (e.g. word1_POS1 word2_POS2 ...) and with the POS tags appended to the end of the sentence (e.g. word1 word2 ...word_N POS1 POS2 ...POS_N).
+
+# Topic Features:
+
+- Topic membership of each post. LDA topic modeling was run on the entire dataset. Different numbers of topics were tested and their performance was judged using silhouette score. The best performing model had two topics.
+
+# Word Embedding Features:
+
+- 100-dimensional word embedding sums for each post. The word embeddings were trained using Mallet (McCallum, 2002).
+
+# Similarity Features:
+
+- The cosine similarity between the summed word embeddings for the target post and its parent post.
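The n-gram features above map directly onto scikit-learn's vectorizers. The sketch below is illustrative only (the example posts and variable names are invented, not from the original system); the `min_df`/`max_df` thresholds follow the text: word grams in at least 1% and at most 95% of posts, character grams in at least 10% and at most 95%.

```python
# Illustrative sketch of the n-gram feature extraction with scikit-learn
# (Pedregosa et al., 2011). Example posts are invented for demonstration.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

posts = [
    "gun control saves lives",
    "strict gun control violates rights",
    "background checks reduce gun violence",
]

# 1-3 word grams as binary, count, and tf-idf weighted vectors
word_binary = CountVectorizer(ngram_range=(1, 3), min_df=0.01, max_df=0.95, binary=True)
word_count = CountVectorizer(ngram_range=(1, 3), min_df=0.01, max_df=0.95)
word_tfidf = TfidfVectorizer(ngram_range=(1, 3), min_df=0.01, max_df=0.95)

# 1-6 character grams as count vectors
char_count = CountVectorizer(analyzer="char", ngram_range=(1, 6), min_df=0.10, max_df=0.95)

X_words = word_count.fit_transform(posts)  # sparse matrix: posts x word-gram vocabulary
X_chars = char_count.fit_transform(posts)  # sparse matrix: posts x char-gram vocabulary
```

Note that with `max_df=0.95`, a term such as "gun" that occurs in every post is dropped, exactly as the document-frequency ceiling in the feature description requires.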
+
+# Lexical Features:
+
+- The ratio of positive words to all words, ratio of negative words to all words, sum count of positive words, sum count of negative words, and the positive and negative count for each POS tag for the MPQA (Wilson et al., 2005) and SentiWordNet (Baccianella et al., 2010) lexicons.
+- The ratio of positive words to all words, ratio of negative words to all words, sum count of positive words, and sum count of negative words for the Hu Liu Lexicon (Hu and Liu, 2004).
+- The sum score, maximum score, positive sum, and negative sum for sentiment tokens from the NRC lexicon (Mohammad et al., 2013).
+
+In their original paper, Mourad et al. (2018) used the reliefF (Kononenko et al., 1997) feature selection technique to select the 50 most important features. We tested using the top 50 features selected using reliefF and reducing the feature size to 50 using Principal Component Analysis (PCA), as well as using the full feature set. We found that the full feature set (2855 total) performed significantly better than the reliefF and PCA feature sets. We used the full feature set in our final model.
+
+# A.1.2 pkudblab-PIP Training
+
+The pkudblab-PIP model used the following input sizes:
+
+- Word Embedding Size $(d)$: 300.
+- Maximum Sentence Length $(|s|)$: 150. Posts longer than 150 words were truncated from the beginning, and posts with fewer than 150 words were padded at the end.
+- Total number of filters: 300. 100 for each window size: 3, 4, and 5.
+
+The model was trained using a batch size of 64, a dropout rate of $50\%$, and used an Adam optimizer (Kingma and Ba, 2014).
+
+# A.1.3 T-PAN-PIP Training
+
+The T-PAN-PIP model used the following input sizes:
+
+- Word Embedding Size $(d)$: 300.
+- Maximum Sentence Length $(|s|)$: 150. Posts longer than 150 words were truncated from the beginning, and posts with fewer than 150 words were padded at the end.
+- LSTM hidden units: 256 total (128 for each direction).
+ +The model was trained using a batch size of 64 and used an Adam optimizer. \ No newline at end of file diff --git a/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/images.zip b/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..deddc2b3baf5fb567f2b19bda3142378a7126199 --- /dev/null +++ b/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e56911981e2f9df954bd9d8ec922675246a5852deffd78c398d779f4d5e31709 +size 235790 diff --git a/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/layout.json b/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d6569f28879d6a590d9a1cd7e08c3bb9ecfb581a --- /dev/null +++ b/agreementpredictionofargumentsincyberargumentationfordetectingstancepolarityandintensity/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:96a95385c4877161c835e54cf40e751f6107b8f260e9d924ea3eedf573fc03a2 +size 364351 diff --git a/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_content_list.json b/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..dae27e4f23db8c51bec674c0f397399452b6965f --- /dev/null +++ b/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:db5ae12c598983777e8984d423c4d9a767f918bd3764d6863854dfd67458dedf +size 70772 diff --git 
a/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_model.json b/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_model.json new file mode 100644 index 0000000000000000000000000000000000000000..9cd6f6705202dca8b32a61346ac256384d6b214f --- /dev/null +++ b/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1885e17b9b2313d04b7fb30b8633e436d67079c4ab6b407c4bfe316bae60333 +size 86356 diff --git a/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_origin.pdf b/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3324800a2aa95973cb3bf4c3ba6fc0540966d016 --- /dev/null +++ b/ajointmodelfordocumentsegmentationandsegmentlabeling/497df25b-df02-454c-a6fe-dffb8f7e648b_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eaf39b4a4e8be8373bba84c23eb6118cf98e6ebee7d001209b3ba43bde9ceb90 +size 569506 diff --git a/ajointmodelfordocumentsegmentationandsegmentlabeling/full.md b/ajointmodelfordocumentsegmentationandsegmentlabeling/full.md new file mode 100644 index 0000000000000000000000000000000000000000..537a7a8f29bad89061cb0062e46fc5709bb69246 --- /dev/null +++ b/ajointmodelfordocumentsegmentationandsegmentlabeling/full.md @@ -0,0 +1,306 @@ +# A Joint Model for Document Segmentation and Segment Labeling + +Joe Barrow* +University of Maryland +jdbarrow@cs.umd.edu + +Rajiv Jain +Adobe Research +rajijain@adobe.com + +Vlad I. Morariu +Adobe Research +morariu@adobe.com + +Varun Manjunatha +Adobe Research +vmanjuna@adobe.com + +Douglas W. 
Oard +University of Maryland +oard@umd.edu + +Philip Resnik +University of Maryland +resnik@umd.edu + +# Abstract + +Text segmentation aims to uncover latent structure by dividing text from a document into coherent sections. Where previous work on text segmentation considers the tasks of document segmentation and segment labeling separately, we show that the tasks contain complementary information and are best addressed jointly. We introduce the Segment Pooling LSTM (S-LSTM) model, which is capable of jointly segmenting a document and labeling segments. In support of joint training, we develop a method for teaching the model to recover from errors by aligning the predicted and ground truth segments. We show that S-LSTM reduces segmentation error by $30\%$ on average, while also improving segment labeling. + +# 1 Introduction + +A well-written document is rich not only in content but also in structure. One type of structure is the grouping of content into topically coherent segments. These segmented documents have many uses across various domains and downstream tasks. Segmentation can, for example, be used to convert unstructured medical dictations into clinical reports (Sadoughi et al., 2018), which in turn can help with medical coding (since a diagnosis mentioned in a "Medical History" might be different from a diagnosis mentioned in an "Intake" section (Ganesan and Subotin, 2014)). Segmentation can also be used downstream for retrieval (Hearst and Plaunt, 2002; Edinger et al., 2017; Allan et al., 1998), where it can be particularly useful when applied to informal text or speech that lacks explicit segment markup. Topically segmented documents are also useful for pre-reading (the process of skimming or surveying a text prior to careful reading), thus serving as an aid for reading comprehension (Swaffar et al., 1991; Ajideh, 2003). 
+ +Uncovering latent, topically coherent segments of text is a difficult problem because it requires solving a chicken-and-egg problem: determining the segment topics is easier if segment boundaries are given, and identifying the boundaries of segments is easier if the topic(s) addressed in parts of the document are known. Prior approaches to text segmentation can largely be split into two categories that break the cycle by sequentially solving the two problems: those that attempt to directly predict segment bounds (Koshorek et al., 2018), and those that attempt to predict topics per passage (e.g., per sentence) and use measures of coherence for post hoc segmentation (Hearst, 1997; Arnold et al.; Eisenstein and Barzilay, 2008; Riedl and Biemann, 2012; Glavaš et al., 2016). The benefit of the topic modeling approach is that it can work in unsupervised settings where collecting ground truth segmentations is difficult and labeled data is scarce (Eisenstein and Barzilay, 2008; Choi, 2000). Recent work uses Wikipedia as a source of segmentation labels by eliding the segment bounds of a Wikipedia article to train supervised models (Koshorek et al., 2018; Arnold et al.). This enables models to directly learn to predict segment bounds or to learn sentence-level topics and perform post hoc segmentation. + +Our work is motivated by the observation that the segment bounds and topicality are tightly interwoven, and should ideally be considered jointly rather than sequentially. We start by examining three properties about text segmentation: (1) segment bounds and segment labels contain complementary supervisory signals, (2) segment labels are a product of lower level (e.g. sentence) labels which must be composed, and (3) the model should not only learn to label from ground-truth segmentations at training time, but instead the labeler should learn to be robust to segmentation errors. These properties build on previous work discussed in Section 2. 
We + +experimentally evaluate and verify each of these properties in Section 5 with respect to a document segmentation and segment labeling task. + +Taking advantage of these properties, we propose a neural model that jointly segments and labels without committing to a priori segmentations, Segment Pooling LSTM (S-LSTM). It consists of three components: a segment proposal LSTM (discussed in Section 3.2), a segment pooling layer (Section 3.3), and a segment aligner for training and evaluation (Section 3.4). + +Our main contribution is a model that performs segmentation and labeling jointly rather than separately. By virtue of joint inference, our model takes advantage of the complementary supervisory signals for segmentation and topic inference, considers the contribution of all sentences to the segment label, and avoids committing to early errors in low-level inference. + +Our approach improves over neural and nonneural baselines of a document segmentation task. We use a dataset of Wikipedia articles described in Section 5 for training and evaluation. We show that S-LSTM is capable of reducing segmentation error by, on average, $30\%$ while also improving segment classification. We also show that these improvements hold on out-of-domain datasets. + +# 2 Related Work + +Coherence-based Segmentation. Much work on text segmentation uses measures of coherence to find topic shifts in documents. Hearst (1997) introduced the TextTiling algorithm, which uses term co-occurrences to find coherent segments in a document. Eisenstein and Barzilay (2008) introduced BayesSeg, a Bayesian method that can incorporate other features such as cue phrases. Riedl and Biemann (2012) later introduced TopicTiling, which uses coherence shifts in topic vectors to find segment bounds. Glavaš et al. (2016) proposed GraphSeg, which constructs a semantic relatedness graph over the document using lexical features and word embeddings, and segments using cliques. Nguyen et al. 
(2012) proposed SITS, a model for topic segmentation in dialogues that incorporates a per-speaker likelihood to change topics. + +While the above models are unsupervised, Arnold et al. introduced a supervised method to compute sentence-level topic vectors using Wikipedia articles. The authors created the WikiSection dataset and proposed the SECTOR neural + +model. The SECTOR model predicts a label for each sentence, and then performs post hoc segmentation looking at the coherence of the latent sentence representations, addressing segmentation and labeling separately. We propose a model capable of jointly learning segmentation boundaries and segment-level labels at training time. Our segmentation does not rely on measures of coherence, and can instead learn from signals in the data, such as cue phrases, to predict segment bounds, while still performing well at the segment labeling task. + +Supervised Segmentation. An alternative to using measures of topical coherence to segment text is to learn to directly predict segment bounds from labeled data. This was the approach taken in Koshorek et al. (2018), where the authors used Wikipedia as a source of training data to learn text segmentation as a supervised task. However, learning only to predict segment bounds does not necessarily capture the topicality of a segment that is useful for informative labeling. + +The task of document segmentation and labeling is well-studied in the clinical domain, where both segmenting and learning segment labels are important tasks. Pomares-Quimbaya et al. (2019) provide a current overview of work on clinical segmentation. Ganesan and Subotin (2014) trained a logistic regression model on a clinical segmentation task, though they did not consider the task of segment labeling. Tepper et al. (2012) considered both tasks of segmentation and segment labeling, and proposed a two-step pipelined method that first segments and then classifies the segments. 
Our proposed model is trained jointly on both the segmentation and segment labeling tasks. + +Concurrent work considers the task of document outline generation (Zhang et al., 2019). The goal of outline generation is to segment and generate (potentially hierarchical) headings for each segment. The authors propose the HiStGen model, a hierarchical LSTM model with a sequence decoder. The work offers an alternative view of the joint segmentation and labeling problem, and is evaluated using exact match for segmentation and ROUGE (Lin, 2004) for heading generation if the segment is predicted correctly. In contrast, we evaluate our models using a commonly-used probabilistic segmentation measure, Pk, which assigns partial credit to incorrect segmentations (Beeferman et al., 1999). We also use an alignment technique to assign partial credit to labels of incorrect segmentations, both for + +training and evaluation. In addition, we explicitly consider the problem of model transferability, evaluating the pretrained models on additional datasets. + +IOB Tagging. The problem of jointly learning to segment and classify is well-studied in NLP, though largely at a lower level, with Inside-Outside-Beginning (IOB) tagging (Ramshaw and Marcus, 1999). Conditional random field (CRF) decoding has long been used with IOB tagging to simultaneously segment and label text, e.g. for named entity recognition (NER, McCallum and Li, 2003). The models that perform best at joint segmentation/classification tasks like NER or phrase chunking were IOB tagging models, typically LSTMs with a CRF decoder (Lample et al., 2016) until BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018). Tepper et al. (2012) proposed the use of IOB tagging to segment and label clinical documents, but argued for a pipelined approach. + +CRF-decoded IOB tagging models are more difficult to apply to the multilabel case. 
Segment bounds need to be consistent across all labels, so modeling the full transition from $|L| \longrightarrow |L|$ (where $|L|$ is the size of the label space) at every time step is computationally expensive. In contrast, our joint model performs well at multilabel prediction, while also outperforming a neural CRF-decoded model on a single-label labeling task.
+
+# 3 Modeling
+
+In order to jointly model document segmentation and segment classification, we introduce the Segment Pooling LSTM (S-LSTM) model. S-LSTM is a supervised model trained to both predict segment bounds and pool over and classify the segments. The model consists of three components: a sentence encoder (Section 3.1), a segment predictor LSTM (Section 3.2), and a segment pooling network which pools over predicted segments to classify them (Section 3.3). The segment predictor is allowed to make mistakes that the labeler must learn to be robust to, a process we refer to as exploration, accomplished by aligning predicted and ground truth segments (Section 3.4). The full architecture is presented in Figure 1, and the loss is discussed in Section 3.5.
+
+# 3.1 Encoding Sentences
+
+The first stage is encoding sentences. S-LSTM is agnostic to the choice of sentence encoder, though in this work we use a concat pooled bi-directional LSTM (Howard and Ruder, 2018). First, the embedded words are passed through the LSTM encoder. Then, the maximum and mean of all hidden states are concatenated with the final hidden states, and this is used as the sentence encoding.
+
+# 3.2 Predicting Segment Bounds
+
+The second step of our model is a Segment Predictor LSTM, which predicts segment boundaries within the document. For this step we use a bidirectional LSTM that consumes each sentence vector and predicts an indicator variable, (B)eginning or (I)nside a segment. It is trained from presegmented documents using a binary cross entropy loss.
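As a concrete illustration (not the authors' code; the function name is ours), the predicted per-sentence B/I indicators convert deterministically into segment spans:

```python
# Illustrative sketch: converting the segment predictor's per-sentence
# B/I indicators into half-open (start, end) sentence spans.
# A "B" marks a sentence that begins a new segment; "I" continues the current one.
def bi_tags_to_segments(tags):
    segments, start = [], 0
    for i, tag in enumerate(tags):
        if tag == "B" and i > 0:  # a new segment begins; close the previous one
            segments.append((start, i))
            start = i
    segments.append((start, len(tags)))  # close the final segment
    return segments
```

For example, `bi_tags_to_segments(["B", "I", "I", "B", "I"])` yields `[(0, 3), (3, 5)]`. Note that a document whose first predicted tag is erroneously "I" still produces a segment starting at sentence 0, which is consistent with the need for downstream robustness to segmentation errors.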
This indicator variable determines if the sentence is the start of a new segment or not. This is similar to the approach taken by TextSeg in Koshorek et al. (2018), though we do not estimate a threshold, $\tau$, and instead learn to predict two classes: (B)eginning and (I)nside.
+
+# 3.3 Segment Pooling
+
+After segmenting the document, the third stage of the model pools within the predicted segments to predict a label for each segment. The sentence vectors for the predicted segments are all grouped, and a pooling function is run over them. There are several possible sequence-to-vector pooling functions that could be used, such as averaging, and more complex learned pooling functions, such as LSTMs. The full S-LSTM model uses a concat pooling LSTM, and our experimental results show that this yields a better segment label than just averaging. We then use a classifier following the output of the segment pooler, which can provide a distribution over labels for each segment.
+
+The combination of segment prediction and pooling is one way that S-LSTM is different from previous hierarchical LSTM models. The model can predict and label segments dynamically, generating a single vector for predicted segments.
+
+# 3.4 Segment Alignment and Exploration
+
+Because segments can be considered dynamically at training time, we propose a method of assigning labels to potentially incorrect segments by aligning the predicted segments with ground truth segments. This label assignment allows segment-labeling loss to be propagated through the end-to-end model.
+
+![](images/8af3e56229b171525fc369ffb684b48a55a7be8f773c279fb4215caec1c86e72.jpg)
+Figure 1: Segment Pooling LSTM (S-LSTM) architecture. The network first proposes segment bounds based on text, and then pools over sentence representations in the proposed segment to generate a segment label. (1) Embed the Words; (2) Encode the Sentences using a concat pooled LSTM; (3) Propose Segment Bounds based on the encoded sentence representation run through a bi-directional LSTM; (4) Pool over Proposed Segments to generate a single label or topic prediction per sentence using a concat pooled LSTM.
+![](images/2d81fae41c8a643a974f625480715bbb8dc5afc2affb150471a6c41c7013b7f6.jpg)
+
+Teacher Forcing. Teacher forcing, or feeding ground truth inputs into a recurrent network as opposed to model predictions, was first developed in Williams and Zipser (1989). The idea is to use ground truth predictions for inputs that would normally come from model predictions for the first stages of training, to help with convergence. For S-LSTM, it is the simplest approach to segment pooling and alignment: at training time, feed the ground truth segments (as opposed to the predicted segments) into the segment pooler (step 3 in Figure 1). This gives us a one-to-one alignment of "predicted" (forced) segments and ground truth segments. This is opposed to only using the predicted segments as the bounds for the segment pooler.
+
+Exploration. Employing only teacher forcing does not allow the segment labeler to learn how to recover from errors in segmentation. The mechanism for allowing the model to explore incorrect segmentations is to align the predicted segments with overlapping ground truth segments at training time, and treat all aligned ground truth labels as correct. While many alignments are possible, we use the one presented in Figure 2. This many-to-many alignment ensures that every ground-truth segment is mapped to at least one predicted segment and every predicted segment is mapped to at least one ground truth segment.
+
+We can additionally schedule teacher forcing. At the beginning, when the segmentation prediction network performs poorly, the model pools over only ground truth segment bounds, allowing it to learn the cleanest topic representations.
However, as training progresses and the segmentation accuracy begins to converge, we switch from pooling over ground truth segments to aligning predicted and ground truth segments. In this way, the segment pooler learns to be robust to segmentation errors.
+
+# 3.5 Joint Training
+
+To jointly train the model, we use a multi-task loss,
+
+$$
+L(X, y; \theta) = \alpha \cdot L_{seg}(X, y_{seg}; \theta_{seg}) + (1 - \alpha) \cdot L_{cls}(X, y_{cls}; \theta_{cls}, \text{aligner}),
+$$
+
+where $y_{seg}$ are the labels for the segment prediction LSTM and $y_{cls}$ are segment labels. In addition, we pass in an aligner, which determines how to align the predicted segments with the ground truth segments to compute the loss, and either teacher forces the model or allows it to explore.
+
+# 4 Experimental Setup
+
+We follow the experimental procedure of Arnold et al. to evaluate S-LSTM for the tasks of document segmentation and segment labeling.
+
+# 4.1 Datasets
+
+WikiSection. Arnold et al. introduced the WikiSection dataset, which contains Wikipedia articles across two languages (English and German) and domains (Cities and Diseases). Articles are segmented using the Wikipedia section structure. The heading of each segment is retained, as well as a normalized label for each heading type (e.g. History, Demography), drawn from a restricted label vocabulary. There are two tasks: (1) jointly segment the document and assign a single restricted-vocabulary label to the segment, and (2) predict the bag-of-words in the title of the Wikipedia section as a label. For instance, the bag-of-words label for the title of this section would be the words: [Dataset, Experimental, Setup].1 For the second task, we post-process headers to remove stopwords, numbers and punctuation. We then remove words that occur fewer than 20 times in the training data to get the final label vocabulary sizes.
+
+1. Align all ground truth segments with the maximum overlapping predicted segment. (↓)
+2. Align unmatched predicted segments with maximum overlapping ground truth segments. (↑)
+
+![](images/7dd845fc468404172119f02d70e30e9f1055bbc4fa0bb230e5c7396f6ca01078.jpg)
+Figure 2: Greedy many-to-many alignment. This alignment is used to assign ground-truth labels to predicted segments for training. Each ground truth segment first aligns to the maximally overlapping predicted segment; each leftover predicted segment then aligns to the maximally overlapping ground truth segment.
+
+1. Slide a probe of length $k$ over the items.
+2. Increase a counter by 1 whenever:
+a. the items are in the same segment in the ground truth, but not the predictions; or
+b. the items are in different segments in the ground truth, but not the predictions.
+3. Divide the counter by the number of measures taken.
+
+![](images/ed93b111a72c6dcdc038f95f4243a38b0d8b1408cdf0ec12ba30a123e6a1b0a0.jpg)
+Figure 3: Computing $P_{k}$. A sliding window of length $k$ is run over the text, and a counter increments whenever the same/different status for the two ends of the window doesn't match in the ground truth and predicted segmentation.
+
+Of note, we encountered a smaller label vocabulary for the bag-of-words generation task than that reported by Arnold et al. For the four datasets, the original reported sizes of the header vocabularies were: [1.5k, 1.0k, 2.8k, 1.1k]. When reproducing earlier results, we verified with the dataset authors that the actual sizes were: [179, 115, 603, 318].
+
+The first task aligns closely with the clinical domain, in which headers are typically drawn from a fixed label set (Tepper et al., 2012). The second aligns more closely with learning to segment and label from naturally labeled data, such as contracts or Wikipedia articles, which can potentially then be transferred (Koshorek et al., 2018).
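The $P_k$ procedure described in Figure 3 can be sketched in a few lines. This is an illustrative implementation, not the authors' code; the function name and the segment-lengths input representation are our choices.

```python
# Illustrative Pk (Beeferman et al., 1999) following the steps in Figure 3.
# Segmentations are given as lists of segment lengths in sentences,
# e.g. [3, 2] means sentences 0-2 form one segment and sentences 3-4 another.
def pk(reference, hypothesis, k):
    def segment_ids(seg_lengths):
        # segment id for each sentence position
        ids = []
        for seg, length in enumerate(seg_lengths):
            ids.extend([seg] * length)
        return ids

    ref, hyp = segment_ids(reference), segment_ids(hypothesis)
    assert len(ref) == len(hyp), "segmentations must cover the same items"
    disagreements, probes = 0, 0
    # 1. Slide a probe of length k over the items.
    for i in range(len(ref) - k):
        same_in_ref = ref[i] == ref[i + k]
        same_in_hyp = hyp[i] == hyp[i + k]
        # 2. Count a disagreement whenever the same/different status
        #    of the probe's two ends differs between ref and hyp.
        if same_in_ref != same_in_hyp:
            disagreements += 1
        probes += 1
    # 3. Divide the counter by the number of measures taken.
    return disagreements / probes
```

An identical segmentation gives $P_k = 0$; the paper sets $k$ to half the average ground-truth segment size, so lower is better.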
+
+Wiki-50. The Wiki-50 dataset was introduced as a test set in Koshorek et al. (2018), which also introduced the full Wiki-727k dataset. The dataset contains 50 randomly sampled Wikipedia articles, segmented and with their headers, and was used to evaluate computationally expensive methods such as BayesSeg (Eisenstein and Barzilay, 2008).
+
+Cities and Elements. The Cities and Elements datasets were introduced in Chen et al. (2009). They provide two additional Wikipedia datasets with both segmentation and segment headers.
+
+Clinical. We use the Clinical Textbook dataset from Eisenstein and Barzilay (2008), which has segment boundaries but no headings.
+
+# 4.2 Experimental Design
+
+We evaluate S-LSTM with previous document segmentation and segment labeling approaches on all four WikiSection datasets—English-language Diseases (en_disease), German-language Diseases (de_disease), English-language Cities (en_city), and German-language Cities (de_city)—for both the single label and multi-label tasks.
+
+Model Ablation. In order to understand the effect of our proposed segment pooling and segment exploration strategies, we also include results for simpler baselines for each of these modules. For the segment labeling we report not only the full S-LSTM model with LSTM pooling, but also a mean pooling model, which we denote with "-pool". For the segment exploration we report not only the model with exploration, but also a model only trained using teacher forcing, which we denote with "-expl".
+
+Model Transferability. To evaluate model transferability, we test models trained on the English
For each segment, the top 1-2 predicted terms are also shown. Terms are bold green if they appear in the maximally overlapping segment in the ground truth, underlined red if they are false positive terms, and italicized yellow if they are false negatives. S-LSTM does not predict any false positive segment bounds, and makes only a small number of labeling errors compared with the SECTOR baseline. + +WikiSection tasks (en_disease and en_city) on the Cities, Elements, Wiki-50, and Clinical datasets. + +# 4.3 Evaluation Measures + +Segmentation: Pk. $P_{k}$ is a probabilistic measure (Beeferman et al., 1999) that works by running a sliding window of width $k$ over the predicted and ground truth segments, and counting the number of times there is disagreement about the ends of the probe being in the same or different sections (see Figure 3). The number of disagreements is then divided by the total number of window positions, resulting in a score normalized between 0 and 1. Our segmentation results are reported setting $k$ to half the average size of ground truth segments. + +Classification: F1, MAP, and Prec@1. For classification, we report three different measures, depending on the task. For the single label tasks, we report $F_{1}$ and Mean Average Precision (MAP). For evaluating the bag-of-words (multi-label) tasks, we report Precision at the first rank position (Prec@1) and MAP. In both cases, these are computed by first aligning the predicted segments with the ground truth segments as shown in Figure 2 and described in Section 3.4. In all cases, the metrics are micro-averaged. + +# 4.4 Baselines + +We report C99 (Choi, 2000), TopicTiling (Riedl and Biemann, 2012), and TextSeg (Koshorek et al., 2018) as baselines on WikiSection segmentation. For a neural baseline, we report the SECTOR model (Arnold et al.) with pre-trained embeddings, denoted in the paper as $\mathrm{SEC} > \mathrm{T},\mathrm{H} + \mathrm{emb}$ . 
For the additional datasets, we report GraphSeg (Glavaš et al., + +2016), BayesSeg (Eisenstein and Barzilay, 2008) and pretrained TextSeg and SECTOR models. + +In addition, we implemented an LSTM-LSTM-CRF IOB tagging model following Lample et al. (2016). This is only used for the single-label experiments, as CRF-decoded IOB tagging models are more difficult to apply to the multilabel case. + +# 4.5 Model Setup + +For each task and dataset, we use the same set of hyperparameters: Adam optimizer (Kingma and Ba, 2015) with learning rate 0.001 and weight decay 0.9. Dropout (Srivastava et al., 2014) is applied after each layer except the final classification layers; we use a single dropout probability of 0.1 for every instance. For models with exploration, we employ teacher forcing for 10 epochs. Model weights are initialized using Xavier normal initialization (Glorot and Bengio, 2010). All LSTM hidden-layer sizes are set to 200. We use fixed 300-dimensional FastText embeddings (Bojanowski et al., 2017) for both English and German, and project them down to 200 dimensions using a trainable linear layer. + +# 5 Results and Analysis + +There are five major takeaways from the experimental results and analysis. First, the jointly trained S-LSTM model shows major improvement over prior work that modeled document segmentation and segment labeling tasks separately. Second, segment alignment and exploration during training reduces error rates. Third, the segment pooling layer leads to improvements for both segmentation and segment labeling. Fourth, S-LSTM outperforms an IOB-tagging CRF-decoded model for single label segment labeling, and also generalizes easily + +
WikiSection-topics, single-label classification (each cell: ↓Pk / ↑F1 / ↑MAP):

| model configuration | en_disease (27 topics) | de_disease (25 topics) | en_city (30 topics) | de_city (27 topics) |
| --- | --- | --- | --- | --- |
| C99 | 37.4 / n/a / n/a | 42.7 / n/a / n/a | 36.8 / n/a / n/a | 38.3 / n/a / n/a |
| TopicTiling | 43.4 / n/a / n/a | 45.4 / n/a / n/a | 30.5 / n/a / n/a | 41.3 / n/a / n/a |
| TextSeg | 24.3 / n/a / n/a | 35.7 / n/a / n/a | 19.3 / n/a / n/a | 27.5 / n/a / n/a |
| SEC>T+emb | 26.3 / 55.8 / 69.4 | 27.5 / 48.9 / 65.1 | 15.5 / 71.6 / 81.0 | 16.2 / 71.0 / 81.1 |
| LSTM-LSTM-CRF | 23.9 / 57.2 / n/a | 23.6 / 51.4 / n/a | 9.7 / 77.5 / n/a | 10.2 / 74.0 / n/a |
| S-LSTM | 20.0 / 59.3 / 72.4 | 18.8 / 55.6 / 69.0 | 9.1 / 76.1 / 83.5 | 9.5 / 76.5 / 84.5 |
Table 1: WikiSection results. Baselines are TopicTiling (Riedl and Biemann, 2012), TextSeg (Koshorek et al., 2018), and C99 (Choi, 2000), as well as the best neural SECTOR models from Arnold et al.
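The ↓Pk columns use the $P_k$ measure from Section 4.3. It can be sketched in a few lines of pure Python; the function name and the list-of-segment-lengths input encoding below are our own choices, not taken from the paper's released code.

```python
def pk(reference, hypothesis, k=None):
    """P_k (Beeferman et al., 1999): slide a window of width k over both
    segmentations and count positions where the two disagree about whether
    the window's ends fall in the same segment.

    `reference` and `hypothesis` are lists of segment lengths covering the
    same number of sentences, e.g. [3, 2, 4] for a 9-sentence document.
    """
    def labels(seg_lengths):
        # map each position to the id of the segment containing it
        out = []
        for seg_id, length in enumerate(seg_lengths):
            out.extend([seg_id] * length)
        return out

    ref, hyp = labels(reference), labels(hypothesis)
    assert len(ref) == len(hyp), "segmentations must cover the same positions"
    if k is None:
        # convention from Section 4.3: half the average reference segment size
        k = max(1, round(len(ref) / len(reference) / 2))
    positions = len(ref) - k
    disagreements = 0
    for i in range(positions):
        same_ref = ref[i] == ref[i + k]
        same_hyp = hyp[i] == hyp[i + k]
        if same_ref != same_hyp:
            disagreements += 1
    return disagreements / positions
```

A perfect hypothesis scores 0, and a hypothesis that disagrees at every window position scores 1, matching the normalization described in Section 4.3.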
WikiSection-headings, multi-label classification (each cell: ↓Pk / ↑Prec@1 / ↑MAP):

| model configuration | en_disease (179 topics) | de_disease (115 topics) | en_city (603 topics) | de_city (318 topics) |
| --- | --- | --- | --- | --- |
| C99 | 37.4 / n/a / n/a | 42.7 / n/a / n/a | 36.8 / n/a / n/a | 38.3 / n/a / n/a |
| TopicTiling | 43.4 / n/a / n/a | 45.4 / n/a / n/a | 30.5 / n/a / n/a | 41.3 / n/a / n/a |
| TextSeg | 24.3 / n/a / n/a | 35.7 / n/a / n/a | 19.3 / n/a / n/a | 27.5 / n/a / n/a |
| SEC>H+emb | 30.7 / 50.5 / 57.3 | 32.9 / 26.6 / 36.7 | 17.9 / 72.3 / 71.1 | 19.3 / 68.4 / 70.2 |
| S-LSTM | 19.8 / 53.5 / 60.3 | 18.6 / 36.2 / 46.1 | 9.0 / 73.0 / 71.3 | 8.2 / 74.1 / 75.1 |
| S-LSTM, -expl | 20.8 / 52.1 / 59.0 | 19.1 / 34.7 / 44.8 | 9.2 / 72.7 / 70.8 | 8.5 / 73.8 / 74.4 |
| S-LSTM, -expl, -pool | 21.2 / 52.3 / 59.5 | 19.8 / 34.4 / 45.0 | 10.4 / 69.7 / 67.2 | 10.2 / 64.1 / 66.7 |
Table 2: WikiSection headings task results, in which models predict a multi-label bag-of-words drawn from section headers. To show the effect of the segment pooling and model exploration used in S-LSTM, we report two variants, where -expl uses only teacher forcing and -pool uses only mean pooling.

and tractably to multi-labeling. Fifth, a deeper analysis of the joint modeling demonstrates that segment labeling and segment bound prediction contain complementary information.

# 5.1 Structure Predicts Better Structure

Tables 1 and 2 show that by explicitly predicting segment bounds we can improve segmentation by a large margin. On the header prediction task (Table 2), we reduced $P_{k}$ by an average of over $30\%$ across the WikiSection datasets. $P_{k}$ was consistent across both WikiSection tasks, and did not degrade when going from single-label to multi-label prediction, as Arnold et al. had found it did. This shows that we can achieve a more robust segmentation through jointly modeling segmentation and labeling. This is also clear from Figure 4, where S-LSTM predicts a much more accurate segmentation.

# 5.2 Exploration Allows Error Recovery

The results of an ablation experiment (Table 2, bottom) show that there is an additional classification gain from allowing the model to explore recovering from segmentation errors. Exploration has the important property of allowing the model to optimize more closely to how it is being evaluated. This follows a long line of work in NLP showing that tasks such as dependency parsing (Ballesteros et al., 2016), constituency parsing (Goodman, 1996), and machine translation (Och, 2003) all improve when optimizing a loss that aligns with evaluation.

Teacher forcing was important at the beginning of model training.
When training variants of S-LSTM that did not use teacher forcing at the beginning, and could instead explore bad segmentations from the start, the segmentation failed to converge and the model performed universally poorly.

# 5.3 S-LSTM Can Take Advantage of Both of These, Plus Segment Pooling

S-LSTM is capable of taking advantage of the complementary information by jointly learning to segment and label. It is capable of learning to recover from segmentation errors by exploring towards the end of training. But the ablation study shows that there is one more important component of S-LSTM that allows it to improve over previous baselines: LSTM pooling over segments. The addition of the segment pooling layer improves MAP and Prec@1 across all four datasets in the heading prediction task (Table 2), comparing the model without exploration (S-LSTM, -expl) against the model that additionally replaces LSTM pooling with mean pooling (S-LSTM, -expl, -pool).
Segmentation and multi-label classification (each cell: ↓Pk / ↑MAP; Clinical reports ↓Pk only):

| model configuration | Wiki-50 | Cities | Elements | Clinical ↓Pk |
| --- | --- | --- | --- | --- |
| GraphSeg | 63.6 / n/a | 40.0 / n/a | 49.1 / n/a | - |
| BayesSeg | 49.2 / n/a | 36.2 / n/a | 35.6 / n/a | 57.8 |
| TextSeg | 18.2* / n/a | 19.7* / n/a | 41.6 / n/a | 30.8 |
| SEC>H+emb@en_disease | - | - | 43.3 / 9.5 | 36.5 |
| SEC>H+emb@en_city | 40.5 / 13.4 | 33.3 / 53.6 | 41.0 / 7.9 | - |
| S-LSTM@en_city | 22.7 / 16.6 | 21.2 / 54.2 | 34.5 / 11.0 | - |
| S-LSTM@en_disease | - | - | 30.2 / 19.1 | 36.1 |
Table 3: Transfer results across four datasets. Those marked * are trained on the training portion of the corresponding dataset, whereas the rest are either unsupervised or trained on a different dataset. For the Wiki-50, Cities, and Elements datasets, S-LSTM outperforms all models not trained on the corresponding training set.
WikiSection-headings, multi-label classification, de_disease (115 topics):

| model configuration | ↓Pk | ↑P@1 | ↑MAP |
| --- | --- | --- | --- |
| S-LSTM, w/o Segment Prediction | n/a | 42.3 | 52.1 |
| S-LSTM, w/ Segment Prediction | 19.1 | 43.3 | 53.3 |
+ +Table 4: A model trained to jointly predict segment bounds and segment labels improves classification over a baseline which only predicts labels. Both are given oracle segment bounds and do not use exploration. + +
WikiSection-headings, document segmentation, de_disease (115 topics):

| model configuration | ↓Pk | ↑P@1 | ↑MAP |
| --- | --- | --- | --- |
| S-LSTM, w/o Segment Labeling | 21.8 | n/a | n/a |
| S-LSTM, w/ Segment Labeling | 19.1 | 34.7 | 44.8 |
+ +Table 5: Inverse of the experiment in Table 4. A model that jointly predicts segment bounds and labels outperforms a model that only predicts segment bounds. + +expl,-pool). It is the combination of these three improvements that comprise the full S-LSTM. + +# 5.4 S-LSTM Outperforms a CRF Baseline + +In Table 1, the results demonstrate that S-LSTM outperforms LSTM-LSTM-CRF baseline in almost every case for single-labeling, and in every case for segmentation. This makes S-LSTM a useful model choice for cases like clinical segmentation and labeling, where segments are drawn from a small fixed vocabulary. S-LSTM also generalizes easily to multi-label problems, in contrast to an IOB-tagging LSTM-LSTM-CRF, since it only requires changing the segment-pooling loss from cross-entropy to binary cross-entropy. + +# 5.5 Predicting Structure Predicts Better Labels (and vice versa) + +Though we compare with TextSeg (a neural model that predicts segment bounds) and SECTOR (a neural model that predicts sentence labels and post hoc segments them) and show improvements compared to both models, we also directly test the hypothesis that the segmentation and segment labeling tasks contain complementary information. To do so, we conduct two experiments: (1) we fix the segment bounds at training and evaluation time, only training the model to label known segments (results in Table 5); and (2) we only have the model predict segment bounds (results in Table 4). + +In both cases, the addition of the loss from the companion task improves performance on the main task. This shows that the two tasks contain complementary information, and directly validates our core hypothesis that the two tasks are tightly interwoven. Thus, considering them jointly improves performance on both tasks. + +# 6 Conclusion and Future Work + +In this paper we introduce the Segment Pooling LSTM (S-LSTM) model for joint segmentation and segment labeling tasks. 
We find that the model dramatically reduces segmentation error (by $30\%$ on average across four datasets) while improving segment labeling accuracy compared to previous neural and non-neural baselines for both single-label and multi-label tasks. Experiments demonstrate that jointly modeling the segmentation and segment labeling, segmentation alignment and exploration, and segment pooling each contribute to S-LSTM's improved performance. + +S-LSTM is agnostic as to the sentence encoder used, so we would like to investigate the potential + +usefulness of transformer-based language models as sentence encoders. There are additional engineering challenges associated with using models such as BERT as sentence encoders, since encoding entire documents can be too expensive to fit on a GPU without model parallelism. We would also like to investigate the usefulness of an unconsidered source of document structure: the hierarchical nature of sections and subsections. Like segment bounds and headers, this structure is naturally available in Wikipedia. Having shown that segment bounds contain useful supervisory signal, it would be interesting to examine if segment hierarchies might also contain useful signal. + +# Acknowledgements + +The authors would like to thank Sebastian Arnold for his feedback and responsiveness. We would also like to thank others for their feedback, including Franck Dernoncourt, Sasha Spala, Nick Miller, Han-Chin Shing, Pedro Rodriguez, Denis Peskov, and Yogarshi Vyas. This work was supported through Adobe Gift Funding, which supports an Adobe Research-University of Maryland collaboration. It was completed while the primary author was interning at Adobe Research. + +# References + +Parviz Ajideh. 2003. Schema theory-based pre-reading tasks: A neglected essential in the ESL reading class. The Reading Matrix, 3(1). +James Allan, Jaime G Carbonell, George Doddington, Jonathan Yamron, and Yiming Yang. 1998. 
Topic detection and tracking pilot study final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop.
Sebastian Arnold, Rudolf Schneider, Philippe Cudré-Mauroux, Felix A Gers, and Alexander Löser. 2019. SECTOR: A neural model for coherent topic segmentation and classification. Transactions of the Association for Computational Linguistics, 7.
Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A Smith. 2016. Training with exploration improves a greedy stack-LSTM parser. In Proceedings of Empirical Methods in Natural Language Processing.
Doug Beeferman, Adam Berger, and John Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3).
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5.
Harr Chen, SRK Branavan, Regina Barzilay, and David R Karger. 2009. Content modeling using latent permutations. Journal of Artificial Intelligence Research, 36.
Freddy YY Choi. 2000. Advances in domain independent linear text segmentation. Conference of the North American Chapter of the Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics.
Tracy Edinger, Dina Demner-Fushman, Aaron M Cohen, Steven Bedrick, and William Hersh. 2017. Evaluation of clinical text segmentation to facilitate cohort retrieval. In AMIA Annual Symposium Proceedings.
Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In Proceedings of Empirical Methods in Natural Language Processing.
Kavita Ganesan and Michael Subotin. 2014. A general supervised approach to segmentation of clinical texts. In IEEE International Conference on Big Data.
Goran Glavaš, Federico Nanni, and Simone Paolo Ponzetto. 2016. Unsupervised text segmentation using semantic relatedness graphs. In Proceedings of the Joint Conference on Lexical and Computational Semantics.
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of Artificial Intelligence and Statistics.
Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the Association for Computational Linguistics.
Marti Hearst and Christian Plaunt. 2002. Subtopic structuring for full-length document access. Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval.
Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1).
Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the Association for Computational Linguistics.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations.
Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In Conference of the North American Chapter of the Association for Computational Linguistics.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Conference of the North American Chapter of the Association for Computational Linguistics.
Chin-Yew Lin. 2004. Looking for a few good metrics: Automatic summarization evaluation - how many samples are enough? In NTCIR.
Andrew McCallum and Wei Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Conference of the North American Chapter of the Association for Computational Linguistics.
Viet-An Nguyen, Jordan Boyd-Graber, and Philip Resnik. 2012. SITS: A hierarchical nonparametric model using speaker identity for topic segmentation in multiparty conversations. In Proceedings of the Association for Computational Linguistics.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, Japan.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Conference of the North American Chapter of the Association for Computational Linguistics.
Alexandra Pomares-Quimbaya, Markus Kreuzthaler, and Stefan Schulz. 2019. Current approaches to identify sections within clinical narratives from electronic health records: a systematic review. BMC Medical Research Methodology, 19(1).
Lance A Ramshaw and Mitchell P Marcus. 1999. Text chunking using transformation-based learning. In Natural Language Processing Using Very Large Corpora. Springer.
Martin Riedl and Chris Biemann. 2012. TopicTiling: a text segmentation algorithm based on LDA. In Proceedings of the ACL 2012 Student Research Workshop.
Najmeh Sadoughi, Greg P Finley, Erik Edwards, Amanda Robinson, Maxim Korenevsky, Michael Brenndoerfer, Nico Axtmann, Mark Miller, and David Suendermann-Oeft. 2018. Detecting section boundaries in medical dictations: Toward real-time conversion of medical dictations to clinical reports. In International Conference on Speech and Computer. Springer.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1).
Janet K Swaffar, Katherine Arens, and Heidi Byrnes. 1991. Reading for meaning: An integrated approach to language learning. Pearson College Division.
Michael Tepper, Daniel Capurro, Fei Xia, Lucy Vanderwende, and Meliha Yetisgen-Yildiz. 2012. Statistical section segmentation in free-text clinical records. In Proceedings of the Language Resources and Evaluation Conference.
Ronald J Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2).
Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, and Xueqi Cheng. 2019. Outline generation: Understanding the inherent content structure of documents. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval.

# A Joint Neural Model for Information Extraction with Global Features

Ying Lin $^{1}$ , Heng Ji $^{1}$ , Fei Huang
$^{2}$ , Lingfei Wu $^{3}$

$^{1}$ University of Illinois at Urbana-Champaign, $^{2}$ Alibaba DAMO Academy, $^{3}$ IBM Research

{yinglin8,hengji}@illinois.edu, f.huang@alibaba-inc.com, wuli@us.ibm.com

# Abstract

Most existing joint neural models for Information Extraction (IE) use local task-specific classifiers to predict labels for individual instances (e.g., trigger, relation) regardless of their interactions. For example, a VICTIM of a DIE event is likely to be a VICTIM of an ATTACK event in the same sentence. In order to capture such cross-subtask and cross-instance inter-dependencies, we propose a joint neural framework, ONEIE, that aims to extract the globally optimal IE result as a graph from an input sentence. ONEIE performs end-to-end IE in four stages: (1) Encoding a given sentence as contextualized word representations; (2) Identifying entity mentions and event triggers as nodes; (3) Computing label scores for all nodes and their pairwise links using local classifiers; (4) Searching for the globally optimal graph with a beam decoder. At the decoding stage, we incorporate global features to capture the cross-subtask and cross-instance interactions. Experiments show that adding global features improves the performance of our model and achieves new state-of-the-art results on all subtasks. As ONEIE does not use any language-specific feature, we show it can be easily applied to new languages or trained in a multilingual manner. Our code and models for English, Spanish and Chinese are publicly available for research purposes $^{1}$ .

# 1 Introduction

Information Extraction (IE) aims to extract structured information from unstructured texts. It is a complex task comprised of a wide range of subtasks, such as named, nominal, and pronominal mention extraction, entity linking, entity coreference resolution, relation extraction, event extraction, and event coreference resolution.
Early efforts typically perform IE in a pipelined fashion, which leads to the error propagation problem and disallows interactions among components in the pipeline. As a solution, some researchers propose joint inference and joint modeling methods to improve local prediction (Roth and Yih, 2004; Ji and Grishman, 2005; Ji et al., 2005; Sil and Yates, 2013; Li et al., 2014; Durrett and Klein, 2014; Miwa and Sasaki, 2014; Lu and Roth, 2015; Yang and Mitchell, 2016; Kirschnick et al., 2016). Due to the success of deep learning, neural models have been widely applied to various IE subtasks (Collobert et al., 2011; Chiu and Nichols, 2016; Chen et al., 2015; Lin et al., 2016). Recently, some efforts (Wadden et al., 2019; Luan et al., 2019) revisit global inference approaches by designing neural networks with embedding features to jointly model multiple subtasks. However, these methods use separate local task-specific classifiers in the final layer and do not explicitly model the interdependencies among tasks and instances. Figure 1 shows a real example where the local argument role classifier predicts a redundant PERSON edge. The model should be able to avoid such mistakes if it is capable of learning and leveraging the fact that it is unusual for an ELECT event to have two PERSON arguments.

![](images/29f79f7ac0a910a471976629b25655f9cc79b028b0dcf562a9cb13b9593fa727.jpg)
Example: Prime Minister Abdullah Gul resigned earlier Tuesday to make way for Erdogan, who won a parliamentary seat in by-elections Sunday.

Figure 1: A typical error made by local classifiers without global constraints.

To address this issue, we propose a joint neural framework, ONEIE, to perform end-to-end IE with global constraints.

![](images/9f0a97494359873164a5ad1ee4099b405fe3bd9a64524bd32ed0ff03d927b110.jpg)
Figure 2: An illustration of our end-to-end joint information extraction framework ONEIE at the test stage. We do not show all pairwise links for simplicity purposes.
As Figure 2 shows, instead of predicting separate knowledge elements using local classifiers, ONEIE aims to extract a globally optimal information network for the input sentence. When comparing candidate information networks during the decoding process, we not only consider individual label scores for each knowledge element, but also evaluate cross-subtask and cross-instance interactions in the network. In this example, a graph with the INJURE-VICTIM-ORG (the VICTIM of an INJURE event is an ORG entity) structure is demoted. Experiments show that our framework achieves comparable or better results compared to the state-of-the-art end-to-end architecture (Wadden et al., 2019).

To the best of our knowledge, ONEIE is the first end-to-end neural IE framework that explicitly models cross-subtask and cross-instance interdependencies and predicts the result as a unified graph instead of isolated knowledge elements. Because ONEIE does not rely on language-specific features, it can be rapidly applied to new languages. Furthermore, global features in our framework are highly explainable and can be explicitly analyzed.

# 2 Task

Given a sentence, our ONEIE framework aims to extract an information network representation (Li et al., 2014), where entity mentions and event triggers are represented as nodes, and relations and event-argument links are represented as edges. In other words, we perform entity, relation, and event extraction within a unified framework. In this section, we will elaborate on these tasks and the terminology involved.

Entity Extraction aims to identify entity mentions in text and classify them into pre-defined entity types. A mention can be a name, nominal, or pronoun. For example, "Kashmir region" should be recognized as a location (LOC) named entity mention in Figure 2.

Relation Extraction is the task of assigning a relation type to an ordered pair of entity mentions.
For example, there is a PART-WHOLE relation between "Kashmir region" and "India". + +Event Extraction entails identifying event triggers (the words or phrases that most clearly express event occurrences) and their arguments (the words or phrases for participants in those events) in unstructured texts and classifying these phrases, respectively, for their types and roles. An argument can be an entity, time expression, or value (e.g., MONEY, JOB-TITLE, CRIME). For example, in Figure 2, the word "injured" triggers an INJURE event and "300" is the VICTIM argument. + +We formulate the task of extracting information networks as follows. Given an input sentence, our goal is to predict a graph $G = (V,E)$ , where $V$ and $E$ are the node and edge sets respectively. Each node $v_{i} = \langle a_{i},b_{i},l_{i}\rangle \in V$ represents an entity mention or event trigger, where $a$ and $b$ are the start and end word indices, and $l$ is the node type label. Each edge $e_{ij} = \langle i,j,l_{ij}\rangle \in E$ is represented similarly, whereas $i$ and $j$ denote the indices of involved nodes. For example, in Figure 2, the trigger "injured" is represented as $\langle 7,7,\text{INJURE}\rangle$ , the entity mention "Kashmir region" is represented as $\langle 10,$ + +11, LOC), and the event-argument edge between them is $\langle 2,3,\mathrm{PLACE}\rangle$ + +# 3 Approach + +As Figure 2 illustrates, our ONEIE framework extracts the information network from a given sentence in four steps: encoding, identification, classification, and decoding. We encode the input sentence using a pre-trained BERT encoder (Devlin et al., 2019) and identify entity mentions and event triggers in the sentence. After that, we compute the type label scores for all nodes and pairwise edges among them. During decoding, we explore possible information networks for the input sentence using beam search and return the one with the highest global score. 
+ +# 3.1 Encoding + +Given an input sentence of $L$ words, we obtain the contextualized representation $x_{i}$ for each word using a pre-trained BERT encoder. If a word is split into multiple word pieces (e.g., Mondrian → Mon, ##dr, ##ian), we use the average of all piece vectors as its word representation. While previous methods typically use the output of the last layer of BERT, our preliminary study shows that enriching word representations using the output of the third last layer of BERT can substantially improve the performance on most subtasks. + +# 3.2 Identification + +At this stage, we identify entity mentions and event triggers in the sentence, which will act as nodes in the information network. We use a feedforward network FFN to compute a score vector $\hat{\pmb{y}}_i = \mathrm{FFN}(\pmb{x}_i)$ for each word, where each value in $\hat{\pmb{y}}_i$ represents the score for a tag in a target tag set $^2$ . After that, we use a conditional random fields (CRFs) layer to capture the dependencies between predicted tags (e.g., an I-PER tag should not follow a B-GPE tag). Similar to (Chiu and Nichols, 2016), we calculate the score of a tag path $\hat{\pmb{z}} = \{\hat{\pmb{z}}_1,\dots,\hat{\pmb{z}}_L\}$ as + +$$ +s (\pmb {X}, \hat {\pmb {z}}) = \sum_ {i = 1} ^ {L} \hat {y} _ {i, \hat {z} _ {i}} + \sum_ {i = 1} ^ {L + 1} A _ {\hat {z} _ {i - 1}, \hat {z} _ {i}}, +$$ + +where $\mathbf{X} = \{\pmb{x}_1,\dots,\pmb{x}_L\}$ is the contextualized representations of the input sequence, $\hat{y}_{i,\hat{z}_i}$ is the $\hat{z}_i$ -th + +component of the score vector $\hat{\pmb{y}}_i$ , and $A_{\hat{z}_{i-1},\hat{z}_i}$ is the $(\hat{z}_{i-1},\hat{z}_i)$ entry in matrix $\mathbf{A}$ that indicates the transition score from tag $\hat{z}_{i-1}$ to $\hat{z}_i$ . The weights in $\mathbf{A}$ are learned during training. We append two special tags and to the tag path as $\hat{z}_0$ and $\hat{z}_{L+1}$ to denote the start and end of the sequence. 
At the training stage, we maximize the log-likelihood of the gold-standard tag path as + +$$ +\log p (\boldsymbol {z} | \boldsymbol {X}) = s (\boldsymbol {X}, \boldsymbol {z}) - \log \sum_ {\hat {\boldsymbol {z}} \in Z} e ^ {s (\boldsymbol {X}, \hat {\boldsymbol {z}})}, +$$ + +where $Z$ is the set of all possible tag paths for a given sentence. Thus, we define the identification loss as $\mathcal{L}^{\mathrm{I}} = -\log p(\boldsymbol{z}|\boldsymbol{X})$ . + +In our implementation, we use separate taggers to extract entity mentions and event triggers. Note that we do not use types predicted by the taggers. Instead, we make a joint decision for all knowledge elements at the decoding stage to prevent error propagation and utilize their interactions to improve the prediction of node type. + +# 3.3 Classification + +We represent each identified node as $\pmb{v}_i$ by averaging its word representations. After that, we use separate task-specific feed-forward networks to calculate label scores for each node as $\hat{\pmb{y}}_i^t = \mathrm{FFN}^t(\pmb{v}_i)$ , where $t$ indicates a specific task. To obtain the label score vector for the edge between the $i$ -th and $j$ -th nodes, we concatenate their span representations and calculate the vector as $\hat{\pmb{y}}_k^t = \mathrm{FFN}^t(\pmb{v}_i, \pmb{v}_j)$ . + +For each task, the training objective is to minimize the following cross-entropy loss + +$$ +\mathcal {L} ^ {\mathrm {t}} = - \frac {1}{N ^ {t}} \sum_ {i = 1} ^ {N ^ {t}} \boldsymbol {y} _ {i} ^ {t} \log \hat {\boldsymbol {y}} _ {i} ^ {t}, +$$ + +where $\pmb{y}_i^t$ is the true label vector and $N^t$ is the number of instances for task $t$ . + +If we ignore the inter-dependencies between nodes and edges, we can simply predict the label with the highest score for each knowledge element and thus generate the locally best graph $\hat{G}$ . 
The score of $\hat{G}$ can be calculated as + +$$ +s ^ {\prime} (\hat {G}) = \sum_ {t \in T} \sum_ {i = 1} ^ {N ^ {t}} \max \hat {\boldsymbol {y}} _ {i} ^ {t}, +$$ + +where $T$ is the set of tasks. We refer to $s'(\hat{G})$ as the local score of $\hat{G}$ . + +
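Read literally, the local score simply sums the best label score of every node and edge across tasks. A toy sketch (the dictionary layout is an assumed, illustrative representation, not the paper's data structure):

```python
def local_score(label_scores):
    """Local score s'(G-hat): sum, over every node and candidate edge of
    every task, of its highest label score -- no inter-dependencies used.

    label_scores -- dict: task name -> list of label-score vectors, one
                    vector per identified node or node pair
    """
    return sum(max(scores)
               for vectors in label_scores.values()
               for scores in vectors)
```

Taking the argmax label for each element independently yields the locally best graph; the sections that follow add global features precisely because this maximization ignores interactions between elements.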
| Category | Description |
|---|---|
| Role | 1. The number of entities that act as <rolei> and <rolej> arguments at the same time. |
| | 2. The number of <event_typei> events with <number> <rolej> arguments. |
| | 3. The number of occurrences of the <event_typei>, <rolej>, and <entity_typek> combination. |
| | 4. The number of events that have multiple <rolei> arguments. |
| | 5. The number of entities that act as a <rolei> argument of an <event_typej> event and a <rolek> argument of an <event_typei> event at the same time. |
| Relation | 6. The number of occurrences of the <entity_typei>, <entity_typej>, and <relation_typek> combination. |
| | 7. The number of occurrences of the <entity_typei> and <relation_typej> combination. |
| | 8. The number of occurrences of a <relation_typei> relation between a <rolej> argument and a <rolek> argument of the same event. |
| | 9. The number of entities that have a <relation_typei> relation with multiple entities. |
| | 10. The number of entities involved in <relation_typei> and <relation_typej> relations simultaneously. |
| Trigger | 11. Whether a graph contains more than one <event_typei> event. |
+ +Table 1: Global feature categories. + +# 3.4 Global Features + +A limitation of local classifiers is that they are incapable of capturing inter-dependencies between knowledge elements in an information network. We consider two types of inter-dependencies in our framework. + +The first type of inter-dependency is Cross-subtask interactions between entities, relations, and events. Consider the following sentence. "A civilian aid worker from San Francisco was killed in an attack in Afghanistan." A local classifier may predict "San Francisco" as a VICTIM argument because an entity mention preceding "was killed" is usually the victim despite the fact that a GPE is unlikely to be a VICTIM. To impose such constraints, we design a global feature as shown in Figure 3(a) to evaluate whether the structure DIE-VICTIM-GPE exists in a candidate graph. + +Another type of inter-dependency is Cross-instance interactions between multiple event and/or relation instances in the sentence. Take the following sentence as an example. "South Carolina boy, 9, dies during hunting trip after his father accidentally shot him on Thanksgiving Day." It can be challenging for a local classifier to predict "boy" as the VICTIM of the ATTACK event triggered by "shot" due to the long distance between these two words. However, as shown in Figure 3(b), if an entity is the VICTIM of a DIE event, it is also likely to be the VICTIM of an ATTACK event in the same sentence. 
Motivated by these observations, we design a set of global feature templates (event schemas), listed in Table 1, to capture cross-subtask and cross-instance interactions; the model fills in all possible types to generate concrete features and learns the weight of each feature during training.

![](images/ab198d4872bad9ad313411c1273fc6b42039686fa6d277ffa436cfa9e625f900.jpg)
(a) Cross-subtask Interaction

![](images/9b3d3abf7b17b3ae77b05c50a6afdd4e7023a2c07f1e5152d20aa4cf46c32727.jpg)
(b) Cross-instance Interactions
Figure 3: Examples of inter-dependencies between elements in information networks. (a) A VICTIM edge is unlikely to exist between a GPE entity and a DIE event trigger. (b) The VICTIM of a DIE event is likely to be the VICTIM of an ATTACK event in the same sentence.

Given a graph $G$ , we represent its global feature vector as $\pmb{f}_G = \{f_1(G), \dots, f_M(G)\}$ , where $M$ is the number of global features and $f_i(\cdot)$ is a function that evaluates a certain feature and returns a scalar. For example,

$$
f_{i}(G) = \begin{cases} 1, & G \text{ has multiple ATTACK events} \\ 0, & \text{otherwise.} \end{cases}
$$

Next, ONEIE learns a weight vector $\pmb{u} \in \mathbb{R}^{M}$ and calculates the global feature score of $G$ as the dot product of $\pmb{f}_{G}$ and $\pmb{u}$ . We define the global score of $G$ as the sum of its local score and global feature score, namely

$$
s (G) = s ^ {\prime} (G) + \pmb {u} \cdot \pmb {f} _ {G}.
$$

We assume that the gold-standard graph for a sentence should achieve the highest global score. Therefore, we minimize the following loss function

$$
\mathcal {L} ^ {\mathrm {G}} = s (\hat {G}) - s (G),
$$

where $\hat{G}$ is the graph predicted by local classifiers and $G$ is the gold-standard graph.
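A minimal sketch of the global score and the margin-style loss above, with toy stand-ins for the learned weights and indicator feature functions:

```python
def global_score(local, feature_fns, weights, graph):
    """s(G) = s'(G) + u . f_G: local score plus the dot product of the
    learned weight vector with the evaluated global feature vector."""
    return local + sum(u * f(graph) for u, f in zip(weights, feature_fns))

def global_loss(pred_score, gold_score):
    """L^G = s(G-hat) - s(G): pushes the gold-standard graph's global
    score above that of the locally best predicted graph."""
    return pred_score - gold_score
```

For instance, a negatively weighted feature firing on a graph with multiple ATTACK events lowers that graph's global score, so decoding prefers alternatives that satisfy the constraint.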
+ +Finally, we optimize the following joint objective function during training + +$$ +\mathcal {L} = \mathcal {L} ^ {I} + \sum_ {t \in T} \mathcal {L} ^ {t} + \mathcal {L} ^ {G} +$$ + +# 3.5 Decoding + +As we have discussed, because local classifiers ignore interactions among elements in an information network, they may predict contradictory results or fail to predict difficult edges that require information from other elements. In order to address these issues, ONEIE makes a joint decision for all nodes and their pairwise edges to obtain the globally optimal graph. The basic idea is to calculate the global score for each candidate graph and select the one with the highest score. However, exhaustive search is infeasible in many cases as the size of search space grows exponentially with the number of nodes. Therefore, we design a beam search-based decoder as Figure 4 depicts. + +Given a set of identified nodes $V$ and the label scores for all nodes and their pairwise links, we perform decoding with an initial beam set $B = \{K_0\}$ , where $K_{0}$ is an order-zero graph. At each step $i$ , we expand each candidate in $B$ in node step and edge step as follows. + +Node step: We select $v_{i} \in V$ and define its candidate set as $V_{i} = \{\langle a_{i}, b_{i}, l_{i}^{(k)} \rangle | 1 \leq k \leq \beta_{v}\}$ , where $l_{i}^{(k)}$ denotes the label with the $k$ -th highest local score for $v_{i}$ , and $\beta_{v}$ is a hyper-parameter that controls the number of candidate labels to consider. We update the beam set by + +$$ +B \leftarrow \{G + v | (G, v) \in B \times V _ {i} \}, +$$ + +Edge step: We iteratively select a previous node $v_{j} \in V, j < i$ and add possible edges between $v_{j}$ and $v_{i}$ . Note that if $v_{i}$ is a trigger, we skip $v_{j}$ if it is also a trigger. 
At each iteration, we construct a candidate edge set as $E_{ij} = \{\langle j,i,l_{ij}^{(k)}\rangle |1 \leq k \leq \beta_e\}$ , where $l_{ij}^{(k)}$ is the label with the $k$ -th highest score for $e_{ij}$ and $\beta_{e}$ is a threshold for the number of candidate labels. Next, we update the beam set by

$$
B \gets \{G + e | (G, e) \in B \times E _ {i j} \}.
$$

At the end of each edge step, if $|B|$ is larger than the beam width $\theta$ , we rank all candidates by global score in descending order and keep the top $\theta$ ones.

After the last step, we return the graph with the highest global score as the information network for the input sentence.

# 4 Experiments

# 4.1 Data

We perform our experiments on the Automatic Content Extraction (ACE) 2005 dataset$^3$ , which provides entity, value, time, relation, and event annotations for English, Chinese, and Arabic. Following Wadden et al. (2019)'s pre-processing$^4$ , we conduct experiments on two datasets: ACE05-R, which includes named entity and relation annotations, and ACE05-E, which includes entity, relation, and event annotations. We keep 7 entity types, 6 coarse-grained relation types, 33 event types, and 22 argument roles.

In order to reinstate some important elements absent from ACE05-R and ACE05-E, we create a new dataset, $\mathrm{ACE05 - E^{+}}$ , by adding back the order of relation arguments, pronouns, and multi-token event triggers, which have been largely ignored in previous work. We also skip the lines that precede the main text (e.g., headline, datetime), as they are not annotated.

In addition to ACE, we derive another dataset, ERE-EN, from the Entities, Relations and Events (ERE) annotation task created under the Deep Exploration and Filtering of Text (DEFT) program, because it covers more recent articles. Specifically, we extract 458 documents and 16,516 sentences from three ERE datasets: LDC2015E29, LDC2015E68, and LDC2015E78.
For ERE-EN, we keep 7 entity types, 5 relation types, 38 event types, and 20 argument roles.

To evaluate the portability of our model, we also develop a Chinese dataset from ACE2005 and a Spanish dataset from ERE (LDC2015E107). We refer to these datasets as ACE05-CN and ERE-ES, respectively.

| Dataset | Split | #Sents | #Entities | #Rels | #Events |
|---|---|---|---|---|---|
| ACE05-R | Train | 10,051 | 26,473 | 4,788 | - |
| | Dev | 2,424 | 6,362 | 1,131 | - |
| | Test | 2,050 | 5,476 | 1,151 | - |
| ACE05-E | Train | 17,172 | 29,006 | 4,664 | 4,202 |
| | Dev | 923 | 2,451 | 560 | 450 |
| | Test | 832 | 3,017 | 636 | 403 |
| ACE05-CN | Train | 6,841 | 29,657 | 7,934 | 2,926 |
| | Dev | 526 | 2,250 | 596 | 217 |
| | Test | 547 | 2,388 | 672 | 190 |
| ACE05-E+ | Train | 19,240 | 47,525 | 7,152 | 4,419 |
| | Dev | 902 | 3,422 | 728 | 468 |
| | Test | 676 | 3,673 | 802 | 424 |
| ERE-EN | Train | 14,219 | 38,864 | 5,045 | 6,419 |
| | Dev | 1,162 | 3,320 | 424 | 552 |
| | Test | 1,129 | 3,291 | 477 | 559 |
| ERE-ES | Train | 7,067 | 11,839 | 1,698 | 3,272 |
| | Dev | 556 | 886 | 120 | 210 |
| | Test | 546 | 811 | 108 | 269 |

Table 2: Dataset statistics.

![](images/52b9e2744ac2f38392f1d8bc8188e715b521cf6ab1d3ec87ec0832eba415468c.jpg)
Figure 4: An illustration of our decoding algorithm. At each step, we expand each candidate graph by adding a new node and possible edges between it and existing nodes. After that, we rank all expanded graphs and keep the top ones.

# 4.2 Experimental Setup

We optimize our model with BertAdam for 80 epochs, with a learning rate of 5e-5 and weight decay of 1e-5 for BERT, and a learning rate of 1e-3 and weight decay of 1e-3 for other parameters. We use the bert-base-multilingual-cased model$^5$ for ACE05-CN and ERE-ES, and the bert-large-cased model for the other datasets. Following Wadden et al. (2019), we use two-layer FFNs with a dropout rate of 0.4 for the local classifiers. We use 150 hidden units for entity and relation extraction, and 600 hidden units for event extraction. For global features, we set $\beta_v$ and $\beta_e$ to 2 and set $\theta$ to 10. In our experiments, we train with multiple random seeds and report scores averaged across runs. Following Zhang et al. (2019) and Wadden et al. (2019), we use the evaluation criteria below.

- Entity: An entity mention is correct if its offsets and type match a reference entity.
- Relation: A relation is correct if its relation type is correct and the offsets of the related entity mentions are correct.
- Trigger: A trigger is correctly identified (Trig-I) if its offsets match a reference trigger. It is correctly classified (Trig-C) if its event type also matches the reference trigger.
- Argument: An argument is correctly identified (Arg-I) if its offsets and event type match a reference argument mention. It is correctly classified (Arg-C) if its role label also matches the reference argument mention.

# 4.3 Overall Performance

In Table 3, we compare our results with two models: (1) DYGIE++ (Wadden et al., 2019), the state-of-the-art end-to-end IE model, which utilizes multi-sentence BERT encodings and span graph propagation; and (2) BASELINE, which follows the architecture of ONEIE but only uses the output of the last layer of BERT and local classifiers. Our model consistently outperforms DYGIE++ and BASELINE on ACE05-R and ACE05-E.

In (Wadden et al., 2019), the authors show that combining triggers predicted by a four-model ensemble optimized for trigger detection can improve the performance of event extraction.
While we also report our results using a four-model ensemble in Table 4 for a fair comparison, we believe that the single-model scores in Table 3 better reflect the actual performance of ONEIE and should be used for future comparison.

| Dataset | Task | DYGIE++ | BASELINE | ONEIE |
|---|---|---|---|---|
| ACE05-R | Entity | 88.6 | - | 88.8 |
| | Relation | 63.4 | - | 67.5 |
| ACE05-E | Entity | 89.7 | 90.2 | 90.2 |
| | Trig-I | - | 76.6 | 78.2 |
| | Trig-C | 69.7 | 73.5 | 74.7 |
| | Arg-I | 53.0 | 56.4 | 59.2 |
| | Arg-C | 48.8 | 53.9 | 56.8 |

Table 3: Results on ACE2005 datasets (F-score, %).

| Dataset | Task | DYGIE++* | ONEIE* |
|---|---|---|---|
| ACE05-E | Entity | 90.7 | 90.3 |
| | Trig-I | 76.5 | 78.6 |
| | Trig-C | 73.6 | 75.2 |
| | Arg-I | 55.4 | 60.7 |
| | Arg-C | 52.5 | 58.6 |

Table 4: Experiment results on ACE05-E (F-score, %). DYGIE++* and ONEIE* use a four-model ensemble optimized for trigger detection.

Table 5 shows the performance of ONEIE on the two new datasets, $\mathrm{ACE05 - E^{+}}$ and ERE-EN.

| Dataset | Entity | Trig-I | Trig-C | Arg-I | Arg-C | Relation |
|---|---|---|---|---|---|---|
| ACE05-E+ | 89.6 | 75.6 | 72.8 | 57.3 | 54.8 | 58.6 |
| ERE-EN | 87.0 | 68.4 | 57.0 | 50.1 | 46.5 | 53.2 |

Table 5: New benchmark results (F-score, %).

In Table 6, we list salient global features learned by the model. Take feature #9 as an example: if a candidate graph contains multiple ORG-AFF edges incident to the same node, the model demotes the graph by adding a negative value to its global score. We also observe that the weights of about $9\%$ of the global features are barely updated, which indicates that these features rarely occur in either gold-standard or predicted graphs. In Table 8, we perform a qualitative analysis of concrete examples.

| # | Positive Feature | Weight |
|---|---|---|
| 1 | A TRANSPORT event has only one DESTINATION argument | 2.61 |
| 2 | An ATTACK event has only one PLACE argument | 2.31 |
| 3 | A TRANSPORT event has only one ORIGIN argument | 2.01 |
| 4 | An END-POSITION event has only one PERSON argument | 1.51 |
| 5 | A PER-SOC relation exists between two PER entities | 1.08 |
| 6 | A GEN-AFF relation exists between ORG and LOC entities | 0.96 |
| 7 | A BENEFICIARY argument is a PER entity | 0.93 |
| 8 | A GEN-AFF relation exists between ORG and GPE entities | 0.90 |
| **#** | **Negative Feature** | **Weight** |
| 9 | An entity has an ORG-AFF relation with multiple entities | -3.21 |
| 10 | An entity has a PART-WHOLE relation with multiple entities | -2.49 |
| 11 | An event has two PLACE arguments | -2.47 |
| 12 | A TRANSPORT event has multiple DESTINATION arguments | -2.25 |
| 13 | An entity has a GEN-AFF relation with multiple entities | -2.02 |
| 14 | An ATTACK event has multiple PLACE arguments | -1.86 |
| 15 | An entity has a PHYS relation with multiple entities | -1.69 |
| 16 | An event has multiple VICTIM arguments | -1.61 |

Table 6: Salient positive and negative global features.

# 4.4 Porting to Another Language

As shown in Table 7, we evaluate the proposed framework on ACE05-CN and ERE-ES. The results show that ONEIE works well on Chinese and Spanish data without any language-specific adaptation. We also observe that adding English training data improves performance on both Chinese and Spanish.

| Dataset | Training | Entity | Relation | Trig-C | Arg-C |
|---|---|---|---|---|---|
| ACE05-CN | CN | 88.5 | 62.4 | 65.6 | 52.0 |
| | CN+EN | 89.8 | 62.9 | 67.7 | 53.2 |
| ERE-ES | ES | 81.3 | 48.1 | 56.8 | 40.3 |
| | ES+EN | 81.8 | 52.9 | 59.1 | 42.3 |

Table 7: Results on ACE05-CN and ERE-ES (F-score, %). For ACE05-CN, EN refers to $\mathrm{ACE05 - E^{+}}$ . For ERE-ES, EN refers to ERE-EN.

# 4.5 Remaining Challenges

We analyzed 75 of the remaining errors; Figure 5 presents the distribution of error types, which will require richer features and knowledge acquisition to address in future work. In this section, we discuss the main categories with examples.

![](images/6cd8f49cb9eac94ad87fb603cab10d123667544560b0046c18fb55bd1b02320b.jpg)
Figure 5: Distribution of remaining errors.

Need background knowledge. Most current IE methods ignore external knowledge such as entity attributes and scenario models. For example, in the following sentence, "And Putin's media aide, Sergei Yastrzhembsky, told Kommersant Russia would not forgive the Iraqi debt", our model mistakenly identifies "Kommersant" as a person instead of an organization. With entity linking, we
| Sentence | Global feature category | Analysis |
|---|---|---|
| #1: Russia's foreign minister expressed outrage at suggestions from a top Washington official last week that Moscow should forgive the eight billion dollars in Soviet-era debt that Baghdad owes it, as a gesture of good will. | 8 | It is unlikely for a person to have an ORG-AFF relation with multiple entities. |
| #2: They also deployed along the border with Israel. | 9 | It is uncommon for an ORIGIN argument and a DESTINATION argument to have a PART-WHOLE relation. |
| #3: Prime Minister Abdullah Gul resigned earlier Tuesday to make way for Erdogan, who won a parliamentary seat in by-elections Sunday. | 2 and 5 | 1. An ELECT event usually has only one PERSON argument. 2. An entity is unlikely to act as a PERSON argument for END-POSITION and ELECT events at the same time. |
| #4: Diller will continue to play a critical role in the future of Vivendi's entertainment arm. | 6 | A PART-WHOLE relation should not exist between PER and ORG entities. |
| #5: He also brought a check from Campbell to pay the fines and fees. | 3 | As "Campbell" is likely to be an ENTITY argument of a FINE event, the model corrects its entity type from FAC to PER. |
Table 8: Examples showing how global features improve the quality of extracted information networks. For some sentences, we do not draw the whole information network.

can correct this error based on the first sentence of its Wikipedia page: "Kommersant is a nationally distributed daily newspaper published in Russia mostly devoted to politics and business".

Rare words. The second challenge is the well-known long-tail problem: many triggers, entity mentions (e.g., "caretaker", "Gazeta.ru"), and contextual phrases in the test data rarely appear in the training data. While most event triggers are verbs or nouns, some adverbs and multi-word expressions can also serve as triggers.

Multiple types per trigger. Some trigger words may indicate both the procedure and the resulting status of an action. For example, "named" may indicate both NOMINATE and START-POSITION events; "killed" and "eliminate" may indicate both ATTACK and DIE events. In these cases the human ground truth usually annotates only the procedure types, whereas our system produces the resultant event types.

Need syntactic structure. Our model may benefit from deeper syntactic analysis. For example, in the following sentence, "As well as previously holding senior positions at Barclays Bank, BZW and Kleinwort Benson, McCarthy was formerly a top civil servant at the Department of Trade and Industry", our model misses all of the employers "Barclays Bank", "BZW", and "Kleinwort Benson" for "McCarthy", probably because they appear in a preceding sub-sentence.

Uncertain events and metaphors. Our model mistakenly labels some planned future events as specific events because it lacks tense prediction and metaphor recognition. For example, the START-ORG event triggered by "formation" does not happen in the following sentence: "The statement did not give any reason for the move, but said Lahoud would begin consultations Wednesday aimed at the formation of a new government".
Our model also mistakenly identifies "camp" as a facility, and a DIE event triggered by "dying", in the following sentence: "Russia hints 'peace camp' alliance with Germany and France is dying by Dmitry Zaks".

The IE community lacks newer datasets with end-to-end annotations. Unfortunately, the annotation quality of the ACE dataset is not perfect due to long-standing debates about the annotation guidelines; e.g., should "government" be tagged as a GPE or an ORG? Should "dead" be both an entity and an event trigger? Should a designator word be considered part of the entity mention or not?

# 5 Related Work

Previous work (Roth and Yih, 2004; Li et al., 2011) encodes inter-dependencies among knowledge elements as global constraints in an integer linear programming framework to effectively remove extraction errors. Such integrity verification results can be used to find knowledge elements that violate the constraints and to identify possible instances of detector errors or failures. Inspired by these previous efforts, we propose a joint neural framework with global features whose weights are learned during training. Similar to Li et al. (2014)'s method, ONEIE also uses global features to capture cross-subtask and cross-instance inter-dependencies, but our features are language-independent and do not rely on other NLP tools such as dependency parsers. Our methods also differ in local features, optimization methods, and decoding procedures.

Some recent efforts develop joint neural models to perform extraction for two IE subtasks, such as entity and relation extraction (Zheng et al., 2017; Katiyar and Cardie, 2017; Bekoulis et al., 2018; Fu et al., 2019; Luan et al., 2019; Sun et al., 2019) and event and temporal relation extraction (Han et al., 2019). Wadden et al. (2019) design a joint model to extract entities, relations, and events based on BERT and dynamic span graphs.
Our framework extends (Wadden et al., 2019) by incorporating global features based on cross-subtask and cross-instance constraints. Unlike Wadden et al. (2019), who use a span-based method to extract mentions, we adopt a CRF-based tagger in our framework because it can extract mentions of any length and is not restricted by a maximum span width.

# 6 Conclusions and Future Work

We propose a joint end-to-end IE framework that incorporates global features to capture the inter-dependency between knowledge elements. Experiments show that our framework achieves better or comparable performance compared to the state of the art and demonstrate the effectiveness of global features. Our framework is also shown to be language-independent, can be applied to other languages, and benefits from multi-lingual training.

In the future, we plan to incorporate more comprehensive event schemas that are automatically induced from multilingual multimedia data and external knowledge to further improve the quality of IE. We also plan to extend our framework to more IE subtasks, such as document-level entity coreference resolution and event coreference resolution.

# Acknowledgement

This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA AIDA Program No. FA8750-18-2-0014, Air Force No. FA8650-17-C-7715, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract No. FA8650-17-C-9116. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

# References

Giannis Bekoulis, Johannes Deleu, Thomas Demeester, and Chris Develder. 2018.
Adversarial training for multi-context joint entity and relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. +Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multipooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP2015). +Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association of Computational Linguistics (TACL). +Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. + +Natural language processing (almost) from scratch. Journal of Machine Learning Research. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT2019). +Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. In Transactions of the Association for Computational Linguistics (TACL). +Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL2019). +Rujun Han, Qiang Ning, and Nanyun Peng. 2019. Joint event and temporal relation extraction with shared representations and structured prediction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP2019). +Heng Ji and Ralph Grishman. 2005. 
Improving name tagging by reference resolution and relation detection. In Proceedings of ACL 2005, Ann Arbor, USA.
Heng Ji, David Westbrook, and Ralph Grishman. 2005. Using semantic relations to refine coreference decisions. In Proceedings of HLT/EMNLP 2005, Vancouver, B.C., Canada.
Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL2017).
Johannes Kirschnick, Holmer Hemsen, and Volker Markl. 2016. JEDI: Joint entity and relation detection using type inference. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics System Demonstrations (ACL2016).
Qi Li, Sam Anzaroot, Wen-Pin Lin, Xiang Li, and Heng Ji. 2011. Joint inference for cross-document information extraction. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management (CIKM2011).
Qi Li, Heng Ji, HONG Yu, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP2014).

Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL2016).
Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP2015).
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT2019).
+Makoto Miwa and Yutaka Sasaki. 2014. Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP2014). +Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of the Eighth Conference on Computational Natural Language Learning (CoNLL2004). +Avirup Sil and Alexander Yates. 2013. Re-ranking for joint named-entity recognition and linking. In Proceedings of the 22nd ACM international conference on Conference on Information & Knowledge Management (CIKM2013). +Changzhi Sun, Yeyun Gong, Yuanbin Wu, Ming Gong, Daxin Jiang, Man Lan, Shiliang Sun, and Nan Duan. 2019. Joint type inference on entities and relations via graph convolutional networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL2019). +David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP2019). +Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT2016). +Tongtao Zhang, Heng Ji, and Avirup Sil. 2019. Joint entity and event extraction with generative adversarial imitation learning. Data Intelligence. +Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging + +scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL2017). 
\ No newline at end of file diff --git a/ajointneuralmodelforinformationextractionwithglobalfeatures/images.zip b/ajointneuralmodelforinformationextractionwithglobalfeatures/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..65e8f93ea449a252cddc371204b0b5b1c8824ef5 --- /dev/null +++ b/ajointneuralmodelforinformationextractionwithglobalfeatures/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ce2c7dcf44c0fcae2504a4543435bea6a349df732adb736befe34d9da5c6fb0e +size 794489 diff --git a/ajointneuralmodelforinformationextractionwithglobalfeatures/layout.json b/ajointneuralmodelforinformationextractionwithglobalfeatures/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..0d40eff0486aa151440aca840ae54c292b6a77ca --- /dev/null +++ b/ajointneuralmodelforinformationextractionwithglobalfeatures/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:18be2ed6a73f773409f3bd9ecd0795a5be0359ff33aa82c816c4013fb9cb59cf +size 357717 diff --git a/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_content_list.json b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f7400061e820625ac7550155333a71862ae03af5 --- /dev/null +++ b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f6992a95ea6d4b5a710ebd97ca07ff30f28669b4cfd75de7680a7da965448565 +size 52262 diff --git a/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_model.json 
b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_model.json new file mode 100644 index 0000000000000000000000000000000000000000..e2a20fd03b13b4f1f152e9dc24bc256865f2438e --- /dev/null +++ b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:67838ba004018ba7ed02ae404613971b31223b718a6b5cfa9917de506debcfef +size 64290 diff --git a/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_origin.pdf b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3866a7f0d9d6050189b898e74f27bee370ffe32b --- /dev/null +++ b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/19582595-c993-4482-b05b-a4f065b8bc60_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63168e7012ed3a174d0e7a8abc98fb6bf7a85d644875a1c9913e3c7204f90dad +size 457674 diff --git a/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/full.md b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/full.md new file mode 100644 index 0000000000000000000000000000000000000000..fbd8903b8b3b6296926b6d3f7f2feab0448f9885 --- /dev/null +++ b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/full.md @@ -0,0 +1,220 @@ +# A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal + +Demian Gholipour Ghalandari $^{1,2}$ , Chris Hokamp $^{1}$ , Nghia The Pham $^{1}$ , John Glover $^{1}$ , Georgiana Ifrim $^{2}$ + +1Aylien Ltd., Dublin, Ireland + +$^{2}$ Insight Centre for Data Analytics, University College Dublin, Ireland + 
+1{first-name}@aylien.com + +georgiana.ifrim@insight-centre.org + +# Abstract + +Multi-document summarization (MDS) aims to compress the content in large document collections into short summaries and has important applications in story clustering for newsfeeds, presentation of search results, and timeline generation. However, there is a lack of datasets that realistically address such use cases at a scale large enough for training supervised models for this task. This work presents a new dataset for MDS that is large both in the total number of document clusters and in the size of individual clusters. We build this dataset by leveraging the Wikipedia Current Events Portal (WCEP), which provides concise and neutral human-written summaries of news events, with links to external source articles. We also automatically extend these source articles by looking for related articles in the Common crawl archive. We provide a quantitative analysis of the dataset and empirical results for several state-of-the-art MDS techniques. The dataset is available at https://github.com/complementizer/wcep-mds-dataset. + +# 1 Introduction + +Text summarization has recently received increased attention with the rise of deep learning-based end-to-end models, both for extractive and abstractive variants. However, so far, only single-document summarization has profited from this trend. Multidocument summarization (MDS) still suffers from a lack of established large-scale datasets. This impedes the use of large deep learning models, which have greatly improved the state-of-the-art for various supervised NLP problems (Vaswani et al., 2017; Paulus et al., 2018; Devlin et al., 2019), and makes a robust evaluation difficult. Recently, several larger MDS datasets have been created: Zopf (2018); Liu et al. (2018); Fabbri et al. (2019). However, these datasets do not realistically resemble use + +
**Human-written summary**

Emperor Akihito abdicates the Chrysanthemum Throne in favor of his elder son, Crown Prince Naruhito. He is the first Emperor to abdicate in over two hundred years, since Emperor Kōkaku in 1817.

**Headlines of source articles (WCEP)**

- Defining the Heisei Era: Just how peaceful were the past 30 years?
- As a New Emperor Is Enthroned in Japan, His Wife Won’t Be Allowed to Watch

**Sample Headlines from Common Crawl**

- Japanese Emperor Akihito to abdicate after three decades on throne
- Japan’s Emperor Akihito says he is abdicating as of Tuesday at a ceremony, in his final official address to his people
- Akihito begins abdication rituals as Japan marks end of era
+ +Table 1: Example event summary and linked source articles from the Wikipedia Current Events Portal, and additional extracted articles from Common Crawl. + +cases with large automatically aggregated collections of news articles, focused on particular news events. This includes news event detection, news article search, and timeline generation. Given the prevalence of such applications, there is a pressing need for better datasets for these MDS use cases. + +In this paper, we present the Wikipedia Current Events Portal (WCEP) dataset, which is designed to address real-world MDS use cases. The dataset consists of 10,200 clusters with one human-written summary and 235 articles per cluster on average. We extract this dataset starting from the Wikipedia Current Events Portal $(\mathrm{WCEP})^{1}$ . Editors on WCEP write short summaries about news events and provide a small number of links to relevant source articles. We extract the summaries and source articles from WCEP and increase the number of source articles per summary by searching for similar articles in the Common crawl News dataset. As a result, we obtain large clusters of highly redundant news articles, resembling the output of news clustering applications. Table 1 shows an example of + +an event summary, with headlines from both the original article and from a sample of the associated additional sources. In our experiments, we test a range of unsupervised and supervised MDS methods to establish baseline results. We show that the additional articles lead to much higher upper bounds of performance for standard extractive summarization, and help to increase the performance of baseline MDS methods. + +We summarize our contributions as follows: + +- We present a new large-scale dataset for MDS, that is better aligned with several real-world industrial use cases. +- We provide an extensive analysis of the properties of this dataset. 
+- We provide empirical results for several baselines and state-of-the-art MDS methods aiming to facilitate future work on this dataset. + +# 2 Related Work + +# 2.1 Multi-Document Summarization + +Extractive MDS models commonly focus on either ranking sentences by importance (Hong and Nenkova, 2014; Cao et al., 2015; Yasunaga et al., 2017) or on global optimization to find good combinations of sentences, using heuristic functions of summary quality (Gillick and Favre, 2009; Lin and Bilmes, 2011; Peyrard and Eckle-Kohler, 2016). + +Several abstractive approaches for MDS are based on multi-sentence compression and sentence fusion (Ganesan et al., 2010; Banerjee et al., 2015; Chali et al., 2017; Nayeem et al., 2018). Recently, neural sequence-to-sequence models, which are the state-of-the-art for abstractive single-document summarization (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017), have been used for MDS, e.g., by applying them to extractive summaries (Liu et al., 2018) or by directly encoding multiple documents (Zhang et al., 2018; Fabbri et al., 2019). + +# 2.2 Datasets for MDS + +Datasets for MDS consist of clusters of source documents and at least one ground-truth summary assigned to each cluster. Commonly used traditional datasets include the DUC 2004 (Paul and James, 2004) and TAC 2011 (Owczarzak and Dang, 2011), which consist of only 50 and 100 document clusters with 10 news articles on average. The MultiNews dataset (Fabbri et al., 2019) is a recent large-scale MDS dataset, containing 56,000 clusters, but each + +cluster contains only 2.3 source documents on average. The sources were hand-picked by editors and do not reflect use cases with large automatically aggregated document collections. MultiNews has much more verbose summaries than WCEP. + +Zopf (2018) created the auto-hMDS dataset by using the lead section of Wikipedia articles as summaries, and automatically searching for related documents on the web, resulting in 7,300 clusters. 
The WikiSum dataset (Liu et al., 2018) uses a similar approach and additionally uses cited sources on Wikipedia. The dataset contains 2.3 million clusters. These Wikipedia-based datasets also have long summaries about various topics, whereas our dataset focuses on short summaries about news events. + +# 3 Dataset Construction + +Wikipedia Current Events Portal: WCEP lists current news events on a daily basis. Each news event is presented as a summary with at least one link to external news articles. According to the editing guidelines3, the summaries must be short, up to 30-40 words, and written in complete sentences in the present tense, avoiding opinions and sensationalism. Each event must be of international interest. Summaries are written in English, and news sources are preferably English. + +Obtaining Articles Linked on WCEP: We parse the WCEP monthly pages to obtain a list of individual events, each with a list of URLs to external source articles. To prevent the source articles of the dataset from becoming unavailable over time, we use the 'Save Page Now' feature of the Internet Archive4. We request snapshots of all source articles that are not captured in the Internet Archive yet. We download and extract all articles from the Internet Archive Wayback Machine5 using the newspaper3k6 library. + +Additional Source Articles: Each event from WCEP contains only 1.2 sources on average, meaning that most editors provide only one source article when they add a new event. In order to extend the set of input articles for each of the ground-truth + +summaries, we search for similar articles in the Common Crawl News dataset7. + +We train a logistic regression classifier to decide whether to assign an article to a summary, using the original WCEP summaries and source articles as training data. For each event, we label the article-summary pair for each source article of the event as positive. 
We create negative examples by pairing each event with source articles from other events of the same date, resulting in a positive-negative ratio of 7:100. The features used by the classifier are listed in Table 2. + +
| Feature |
| --- |
| tfidf similarity between title and summary |
| tfidf similarity between body and summary |
| No. entities from summary appearing in title |
| No. linked entities from summary appearing in body |
+ +We use unigram bag-of-words vectors with TF-IDF weighting and cosine similarity for the first two features. The entities are phrases in the WCEP summaries that the editors annotated with hyperlinks to other Wikipedia articles. We search for these entities in article titles and bodies by exact string matching. The classifier achieves $90\%$ Precision and $74\%$ Recall of positive examples on a hold-out set. + +For each event in the original dataset, we apply the classifier to articles published in a window of $\pm 1$ days of the event date and add those articles that pass a classification probability of 0.9. If an article is assigned to multiple events, we only add it to the event with the highest probability. This procedure increases the number of source articles per summary considerably (Table 4). + +Final Dataset: Each example in the dataset consists of a ground-truth summary and a cluster of original source articles from WCEP, combined with additional articles from Common Crawl. The dataset has 10,200 clusters, which we split roughly into $80\%$ training, $10\%$ validation and $10\%$ test (Table 3). The split is done chronologically, such that no event dates overlap between the splits. We also create a truncated version of the dataset with a maximum of 100 articles per cluster, by retaining all original articles and randomly sampling from the additional articles. + +# 4 Dataset Statistics and Analysis + +# 4.1 Overview + +Table 3 shows the number of clusters and of articles from all clusters combined, for each dataset partition. Table 4 shows statistics for individual clusters. We show statistics for the entire dataset (WCEP-total), and for the truncated version (WCEP-100) used in our experiments. The high mean cluster size is mostly due to articles from Common Crawl. + +Table 2: Features used in the article-summary binary classifier. + +
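As a concrete illustration, the four classifier features listed in Table 2 could be computed roughly as follows. This is a minimal stdlib-only sketch, not the authors' implementation; the exact tokenization and IDF smoothing are assumptions:

```python
import math
from collections import Counter

def tfidf_vec(tokens, doc_freq, n_docs):
    """Unigram bag-of-words vector with TF-IDF weighting (smoothed IDF assumed)."""
    tf = Counter(tokens)
    return {t: c * math.log(n_docs / (1 + doc_freq.get(t, 0))) for t, c in tf.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def article_summary_features(summary, title, body, entities, doc_freq, n_docs):
    """Features for one (article, summary) pair, in the order of Table 2."""
    s = tfidf_vec(summary.lower().split(), doc_freq, n_docs)
    return [
        cosine(s, tfidf_vec(title.lower().split(), doc_freq, n_docs)),  # tfidf sim., title
        cosine(s, tfidf_vec(body.lower().split(), doc_freq, n_docs)),   # tfidf sim., body
        sum(e.lower() in title.lower() for e in entities),              # entities in title
        sum(e.lower() in body.lower() for e in entities),               # entities in body
    ]
```

These feature vectors would then feed the logistic regression classifier; articles passing the 0.9 probability threshold are assigned to the event.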
|  | TRAIN | VAL | TEST | TOTAL |
| --- | --- | --- | --- | --- |
| # clusters | 8,158 | 1,020 | 1,022 | 10,200 |
| # articles (WCEP-total) | 1.67m | 339k | 373k | 2.39m |
| # articles (WCEP-100) | 494k | 78k | 78k | 650k |
| period start | 2016-8-25 | 2019-1-6 | 2019-5-8 | - |
| period end | 2019-1-5 | 2019-5-7 | 2019-8-20 | - |
+ +Table 3: Size overview of the WCEP dataset. + +
|  | MIN | MAX | MEAN | MEDIAN |
| --- | --- | --- | --- | --- |
| # articles (WCEP-total) | 1 | 8411 | 234.5 | 78 |
| # articles (WCEP-100) | 1 | 100 | 63.7 | 78 |
| # WCEP articles | 1 | 5 | 1.2 | 1 |
| # summary words | 4 | 141 | 32 | 29 |
| # summary sents | 1 | 7 | 1.4 | 1 |
+ +Table 4: Stats for individual clusters in WCEP dataset. + +# 4.2 Quality of Additional Articles + +To investigate how related the additional articles obtained from Common Crawl are to the summary they are assigned to, we randomly select 350 for manual annotation. We compare the article title and the first three sentences to the assigned summary, and pick one of the following three options: 1) "on-topic" if the article focuses on the event described in the summary, 2) "related" if the article mentions the event, but focuses on something else, e.g., follow-up, and 3) "unrelated" if there is no mention of the event. This results in $52\%$ on-topic, $30\%$ related, and $18\%$ unrelated articles. We think that this amount of noise is acceptable, as it resembles noise present in applications with automatic content aggregation. Furthermore, summarization performance benefits from the additional articles in our experiments (see Section 5). + +# 4.3 Extractive Strategies + +Human-written summaries can vary in the degree of how extractive or abstractive they are, i.e., how much they copy or rephrase information in source documents. To quantify extractiveness in our + +dataset, we use the measures coverage and density defined by Grusky et al. (2018): + +$$ +\operatorname {C o v e r a g e} (A, S) = \frac {1}{| S |} \sum_ {f \in F (A, S)} | f | \tag {1} +$$ + +$$ +D e n s i t y (A, S) = \frac {1}{| S |} \sum_ {f \in F (A, S)} | f | ^ {2} \tag {2} +$$ + +Given an article $A$ consisting of tokens $\langle a_1, a_2, \dots, a_n \rangle$ and its summary $S = \langle s_1, s_2, \dots, s_n \rangle$ , $F(A, S)$ is the set of token sequences (fragments) shared between $A$ and $S$ , identified in a greedy manner. Coverage measures the proportion of words from the summary appearing in these fragments. Density is related to the average length of shared fragments and measures how well a summary can be described as a series of extractions. 
In our case, $A$ is the concatenation of all articles in a cluster. + +![](images/1843d63239a2a837c1b1bcc54c2a02a53a30394f69ad91f9575e6782e6fddb45.jpg) +Figure 1: Coverage and density on different summarization datasets. + +![](images/fc9b3a876423ad3a91e4fac9b251d9df5235dab7886d9b923436ff3cae1be035.jpg) + +Figure 1 shows the distribution of coverage and density in different summarization datasets. WCEP-10 refers to a truncated version of our dataset with a maximum cluster size of 10. The WCEP dataset shows increased coverage if more articles from Common Crawl are added, i.e., all words of a summary tend to be present in larger clusters. High coverage suggests that retrieval and copy mechanisms within a cluster can be useful to generate summaries. Likely due to the short summary style and editor guidelines, high density, i.e., copying of long sequences, is not as common in WCEP as in the MultiNews dataset. + +# 5 Experiments + +# 5.1 Setup + +Due to scalability issues of some of the tested methods, we use the truncated version of the dataset with a maximum of 100 articles per cluster (WCEP-100). The performance of the methods that we consider starts to plateau after 100 articles (see Figure 2). We set a maximum summary length of 40 tokens, which is in accordance with the editor guidelines in WCEP. This limit also corresponds to the optimal length of an extractive oracle optimizing ROUGE F1-scores8. We recommend to evaluate models with a dynamic (potentially longer) output length using F1-scores and optionally to provide Recall results with truncated summaries. Extractive methods should only return lists of full untruncated sentences up to that limit. We evaluate lowercased versions of summaries and do not modify ground-truth or system summaries otherwise. We compare and evaluate systems using F1-score and Recall of ROUGE-1, ROUGE-2, and ROUGE-L (Lin, 2004). In the following, we abbreviate ROUGE-1 F1-score and Recall with R1-F and R1-R, etc. 
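The greedy fragment matching behind the coverage and density measures of Section 4.3 (Equations 1 and 2) can be implemented in a few lines. This is an illustrative stdlib-only sketch following the definitions of Grusky et al. (2018), not the authors' code:

```python
def fragments(a, s):
    """Greedily match shared token sequences between article tokens a and summary tokens s."""
    frags, i = [], 0
    while i < len(s):
        best = 0
        for j in range(len(a)):
            k = 0
            while i + k < len(s) and j + k < len(a) and s[i + k] == a[j + k]:
                k += 1
            best = max(best, k)
        if best > 0:                      # longest match starting at position i
            frags.append(s[i:i + best])
            i += best
        else:
            i += 1
    return frags

def coverage(a, s):
    # Eq. 1: proportion of summary words inside shared fragments
    return sum(len(f) for f in fragments(a, s)) / len(s)

def density(a, s):
    # Eq. 2: squared fragment lengths reward long copied sequences
    return sum(len(f) ** 2 for f in fragments(a, s)) / len(s)
```

For WCEP, `a` would be the token concatenation of all articles in a cluster and `s` the tokenized summary.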
+ +# 5.2 Methods + +We evaluate the following oracles and baselines to put evaluation scores into perspective: + +- ORACLE (MULTI): Greedy oracle, adds sentences from a cluster that optimize R1-F of the constructed summary until R1-F decreases. +- ORACLE (SINGLE): Best of oracle summaries extracted from individual articles in a cluster. +- LEAD ORACLE: The lead (first sentences up to 40 words) of an individual article with the best R1-F score within a cluster. +- RANDOM LEAD: The lead of a randomly selected article, which is our alternative to the lead baseline used in single-document summarization. + +We evaluate the unsupervised methods TEXTRANK (Mihalcea and Tarau, 2004), CENTROID (Radev et al., 2004) and SUBMODULAR (Lin and Bilmes, 2011). We test the following supervised methods: + +- TSR: Regression-based sentence ranking using statistical features and averaged word embeddings (Ren et al., 2016). + +- BERTREG: Similar framework to TSR but with sentence embeddings computed by a pretrained BERT model (Devlin et al., 2019). Refer to Appendix A.1 for more details. + +We tune hyperparameters of the methods described above on the validation set of WCEP-100 (Appendix A.2). We also test a simple abstractive baseline, SUBMODULAR + ABS: We first create an extractive multi-document summary with a maximum of 100 words using SUBMODULAR. We pass this summary as a pseudo-article to the abstractive bottom-up attention model (Gehrmann et al., 2018) to generate the final summary. We use an implementation from OpenNMT with a model pretrained on the CNN/Daily Mail dataset. All tested methods apart from ORACLE (MULTI & SINGLE) observe the length limit of 40 tokens. + +# 5.3 Results + +Table 5 presents the results on the WCEP test set. The supervised methods TSR and BERTREG show advantages over unsupervised methods, but not by a large margin, which poses an interesting challenge for future work. 
The high extractive bounds defined by ORACLE (SINGLE) suggest that identifying important documents before summarization can be useful in this dataset. The dataset does not favor lead summaries: RANDOM LEAD is of low quality, and LEAD ORACLE has relatively low F-scores (although very high Recall). The SUBMODULAR + ABS heuristic for applying a pre-trained abstractive model does not perform well. + +# 5.4 Effect of Additional Articles + +Figure 2 shows how the performance of several methods on the test set increases with different amounts of additional articles from Common Crawl. Using 10 additional articles causes a steep improvement compared to only using the original source articles from WCEP. However, using more than 100 articles only leads to minimal gains. + +# 6 Conclusion + +We present a new large-scale MDS dataset for the news domain, consisting of large clusters of news articles, associated with short summaries about news events. We hope this dataset will facilitate the creation of real-world MDS systems for use cases such as summarizing news clusters or search results. + +
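The greedy ORACLE (MULTI) construction described in Section 5.2 can be sketched as follows, using a plain unigram-overlap F1 as a stand-in for the R1-F score (an illustrative approximation, not the ROUGE toolkit or the authors' code):

```python
from collections import Counter

def unigram_f1(cand, ref):
    """Unigram-overlap F1, a simple proxy for the ROUGE-1 F-score."""
    if not cand or not ref:
        return 0.0
    overlap = sum((Counter(cand) & Counter(ref)).values())
    p, r = overlap / len(cand), overlap / len(ref)
    return 2 * p * r / (p + r) if p + r else 0.0

def greedy_oracle(sentences, ref):
    """Repeatedly add the sentence that most improves the score; stop when none helps."""
    summary, tokens, best = [], [], 0.0
    pool = list(sentences)
    while pool:
        score, sent = max((unigram_f1(tokens + s, ref), s) for s in pool)
        if score <= best:   # adding any sentence would decrease (or not improve) the score
            break
        summary.append(sent)
        tokens += sent
        pool.remove(sent)
        best = score
    return summary, best
```

`sentences` here is a list of tokenized cluster sentences and `ref` the tokenized ground-truth summary; ORACLE (SINGLE) would run the same routine per article and keep the best result.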
**F-score**

| Method | R1 | R2 | RL |
| --- | --- | --- | --- |
| ORACLE (MULTI) | 0.558 | 0.29 | 0.4 |
| ORACLE (SINGLE) | 0.539 | 0.283 | 0.401 |
| LEAD ORACLE | 0.329 | 0.131 | 0.233 |
| RANDOM LEAD | 0.276 | 0.091 | 0.206 |
| RANDOM | 0.181 | 0.03 | 0.128 |
| TEXTRANK | 0.341 | 0.131 | 0.25 |
| CENTROID | 0.341 | 0.133 | 0.251 |
| SUBMODULAR | 0.344 | 0.131 | 0.25 |
| TSR | 0.353 | 0.137 | 0.257 |
| BERTREG | 0.35 | 0.135 | 0.255 |
| SUBMODULAR+ABS | 0.306 | 0.101 | 0.214 |

**Recall**

| Method | R1 | R2 | RL |
| --- | --- | --- | --- |
| ORACLE (MULTI) | 0.645 | 0.331 | 0.458 |
| ORACLE (SINGLE) | 0.58 | 0.304 | 0.431 |
| LEAD ORACLE | 0.525 | 0.217 | 0.372 |
| RANDOM LEAD | 0.281 | 0.094 | 0.211 |
| RANDOM | 0.203 | 0.034 | 0.145 |
| TEXTRANK | 0.387 | 0.152 | 0.287 |
| CENTROID | 0.388 | 0.154 | 0.29 |
| SUBMODULAR | 0.393 | 0.15 | 0.289 |
| TSR | 0.408 | 0.161 | 0.301 |
| BERTREG | 0.407 | 0.16 | 0.301 |
| SUBMODULAR+ABS | 0.363 | 0.123 | 0.258 |
+ +Table 5: Evaluation results on test set. + +![](images/4508d5669a007117c8285dfb7171543ddba35e8ba24b833aaf85847099778db0.jpg) +Figure 2: ROUGE-1 F1-scores for different numbers of supplementary articles from Common Crawl. + +We conducted extensive experiments to establish baseline results, and we hope that future work on MDS will use this dataset as a benchmark. Important challenges for future work include how to scale deep learning methods to such large amounts of source documents and how to close the gap to the oracle methods. + +# Acknowledgments + +This work was funded by the Irish Research Council (IRC) under grant number EBPPG/2018/23, the Science Foundation Ireland (SFI) under grant number 12/RC/2289_P2 and the enterprise partner Aylien Ltd. + +# References + +Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. 2015. Multi-document abstractive summarization using ILP based multi-sentence compression. In Proceedings of the 24th International Conference on Artificial Intelligence. AAAI Press, 1208-1214. +Ziqiang Cao, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. 2015. Ranking with recursive neural networks and its application to multi-document summarization. In Twenty-ninth AAAI conference on artificial intelligence. +Yllias Chali, Moin Tanvee, and Mir Tafseer Nay-eem. 2017. Towards Abstractive Multi-Document Summarization Using Submodular Function-Based Framework, Sentence Compression and Merging. IJCNLP 2017 (2017), 418. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 4171-4186. +Alexander Richard Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. Multi-News: A Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model. 
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers. 1074-1084. https://www.aclweb.org/anthology/P19-1102/ +Kavita Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010). 340-348. +Sebastian Gehrmann, Yuntian Deng, and Alexander M Rush. 2018. Bottom-Up Abstractive Summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 4098-4109. +Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing. Association for Computational Linguistics, 10-18. +Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive Strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). 708-719. + +Kai Hong and Ani Nenkova. 2014. Improving the estimation of word importance for news multi-document summarization. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. 712-721. +Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, 74-81. https://www.aclweb.org/anthology/W04-1013 +Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, 510-520. +Peter J. 
Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by Summarizing Long Sequences. In International Conference on Learning Representations. https://openreview.net/forum?id=Hyg0vbWC- +Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing. 404-411. +Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning. 280-290. +Mir Tafseer Nayeem, Tanvir Ahmed Fuad, and Yllias Chali. 2018. Abstractive unsupervised multidocument summarization using paraphrastic sentence fusion. In Proceedings of the 27th International Conference on Computational Linguistics. 1191-1204. +Karolina Owczarzak and Hoa Trang Dang. 2011. Overview of the TAC 2011 summarization track: Guided task and AESOP task. In Proceedings of the Text Analysis Conference (TAC 2011), Gaithersburg, Maryland, USA, November. +Over Paul and Yen James. 2004. An introduction to duc-2004. In Proceedings of the 4th Document Understanding Conference (DUC 2004). +Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A Deep Reinforced Model for Abstractive Summarization. In International Conference on Learning Representations. https://openreview.net/forum?id=HkACLQgA- +Maxime Peyrard and Judith Eckle-Kohler. 2016. A general optimization framework for multi-document summarization using genetic algorithms and swarm intelligence. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. 247-257. + +Dragomir R Radev, Hongyan Jing, Malgorzata Stys, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing & Management 40, 6 (2004), 919-938. 
+Pengjie Ren, Furu Wei, Zhumin Chen, Jun Ma, and Ming Zhou. 2016. A Redundancy-Aware Sentence Regression Framework for Extractive Summarization. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. The COLING 2016 Organizing Committee, Osaka, Japan, 33-43. https://www.aclweb.org/anthology/C16-1004 +Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A Neural Attention Model for Abstractive Sentence Summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. 379-389. +Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 1073-1083. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998-6008. +Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Parek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based Neural Multi-Document Summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017). 452–462. +Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Towards a neural network approach to abstractive multi-document summarization. arXiv preprint arXiv:1804.09010 (2018). +Markus Zopf. 2018. Auto-hMDS: Automatic Construction of a Large Heterogeneous Multilingual Multi-Document Summarization Corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). +Markus Zopf, Eneldo Loza Mencia, and Johannes Furnkranz. 2018. Which Scores to Predict in Sentence Regression for Text Summarization?. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). 1782-1791. + +# A Appendices + +# A.1 BERTREG + +This method uses a regression model to score and rank sentences. For a particular sentence, we obtain a contextualized embedding from a pre-trained + +BERT model10. We concatenate the embedding with several statistical and surface-form sentence features shown in Table 6. + +
| Feature |
| --- |
| length (in tokens) |
| position |
| stop word ratio |
| mean tf |
| mean tfidf |
| mean tf-icf |
| mean cluster-df |
+ +Table 6: Features used for BERTREG apart from the contextual sentence embeddings. + +The corpus-level document and cluster frequencies (cf) in tfidf and tf-icf are obtained from the training set. cluster-df refers to the document frequency within a particular cluster. We feed this concatenated sentence vector to a feedforward network with one hidden layer of size 256. The model is trained to predict the R1 F-score between a sentence and the summary of a cluster, using the mean squared error loss. We found the F-score to work better than Precision or Recall. We use the SGD optimizer, a learning rate of 0.02, and train for 8 epochs with batch size 8. To construct a summary, we predict scores using this model, rank sentences, and greedily pick sentences from the ranked list under a redundancy constraint, as used in TSR. + +# A.2 Implementation Details for Extractive Methods + +We implement the methods TEXTRANK, CENTROID, TSR and BERTREG in a commonly used framework that greedily selects sentences from a ranked list while avoiding redundancy (Zopf et al., 2018). We measure redundancy as the proportion of bigrams in a new sentence that appear in an already selected sentence. For each method, we tune threshold values for redundancy from 0 to 1 in steps of 0.1. For SUBMODULAR, we tune a parameter called diversity with values 1 to 10 in steps of 1, which has a similar role as the redundancy threshold. We use 100 randomly selected clusters from the validation set in WCEP-100 for parameter tuning. We set a minimum sentence length of 7 tokens which avoids summaries slightly shorter than the 40 token limit to be padded with very short or broken sentences. 
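The greedy selection framework with the bigram redundancy constraint described in A.2 might look like the following. This is an illustrative sketch under the stated constraints (40-token budget, minimum sentence length of 7); the 0.5 redundancy threshold is an arbitrary example value, since the paper tunes it per method:

```python
def bigrams(tokens):
    return set(zip(tokens, tokens[1:]))

def greedy_select(ranked, max_words=40, redundancy=0.5, min_len=7):
    """Pick sentences from a ranked list, skipping short or redundant ones.

    ranked: tokenized sentences, highest-scored first. A sentence is redundant
    if more than `redundancy` of its bigrams already appear in the summary.
    """
    selected, covered, total = [], set(), 0
    for sent in ranked:
        if len(sent) < min_len or total + len(sent) > max_words:
            continue
        bg = bigrams(sent)
        if bg and len(bg & covered) / len(bg) > redundancy:
            continue
        selected.append(sent)
        covered |= bg
        total += len(sent)
    return selected
```

The same skeleton serves TEXTRANK, CENTROID, TSR, and BERTREG; only the upstream sentence ranking differs.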
\ No newline at end of file diff --git a/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/images.zip b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..3c45414161123fcd62d7941d7d4fe98a326fab99 --- /dev/null +++ b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a910297221c184c1fc5ac26683fcb40914cd27cd944d1480e2fb9e964446d762 +size 264469 diff --git a/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/layout.json b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..c17c70f427323a362cfb7afa9d04dfbcd44024dd --- /dev/null +++ b/alargescalemultidocumentsummarizationdatasetfromthewikipediacurrenteventsportal/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:08020292bf0c8e520e40e53769bad2163903a6a3b436e84d47a748fab1d74799 +size 230670 diff --git a/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_content_list.json b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..62bdd8b8307e4300fac0f36cfca2ffab8ab56227 --- /dev/null +++ b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:200f9a79259fe5e09e7051b47d118e9b497362720aabdaa7c4de65e628a7c530 +size 95157 diff --git a/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_model.json 
b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_model.json new file mode 100644 index 0000000000000000000000000000000000000000..11ba90d1948b9b68086a0bda615d13c01b9b17b2 --- /dev/null +++ b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c5ccd3eabece5bae81894f0c2f69f6d5c554746567e71959e100df51e1e3f46c +size 115069 diff --git a/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_origin.pdf b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e32feb97c09cc527e60ed611db5ed104ee57f0e1 --- /dev/null +++ b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/ef9107f4-d348-4f53-bb94-a751732e7bfd_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78867a459a2fe6d858dcd0f7dabc1e94a9e4244f7f8b4737c3fdad02f3e6c172 +size 652738 diff --git a/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/full.md b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..39f9c7417dd392439bc7daf78b6bda6eaa91159b --- /dev/null +++ b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/full.md @@ -0,0 +1,387 @@ +# A Methodology for Creating Question Answering Corpora Using Inverse Data Annotation + +Jan Deriu1, Katsiaryna Mlynchyk1, Philippe Schläpfer1, Alvaro Rodrigo2, Dirk von Grünigen1, Nicolas Kaiser1, Kurt Stockinger1, Eneko Agirre3, and Mark Cieliebak1 + +$^{1}$ Zurich University of Applied Sciences (ZHAW), Winterthur, Switzerland $\{\text{deri, mlyn, scp, vogr, stog, ciel}\} @zhaw.ch$ + +$^{2}$ 
National Distance Education University (UNED), Madrid, Spain alvarory@lsi.uned.es

$^{3}$ University of the Basque Country (UPV/EHU), Donostia, Spain e.agirre@ehu.eus

# Abstract

In this paper, we introduce a novel methodology to efficiently construct a corpus for question answering over structured data. For this, we introduce an intermediate representation based on the logical query plan in a database, called Operation Trees (OTs). This representation allows us to invert the annotation process without losing flexibility in the types of queries that we generate. Furthermore, it allows for fine-grained alignment of query tokens to OT operations.

In our method, we randomly generate OTs from a context-free grammar. Afterwards, annotators write the appropriate natural language question represented by the OT. Finally, the annotators assign the tokens to the OT operations. We apply the method to create a new corpus, OTTA (Operation Trees and Token Assignment), a large semantic parsing corpus for evaluating natural language interfaces to databases. We compare OTTA to Spider and LC-QuaD 2.0 and show that our methodology more than triples the annotation speed while maintaining the complexity of the queries. Finally, we train a state-of-the-art semantic parsing model on our data and show that our corpus is a challenging dataset and that the token alignment can be leveraged to increase the performance significantly.

# 1 Introduction

Question Answering (QA) over structured data, also called Natural Language Interfaces to Databases (NLI2DB) or Text-to-SQL, is a key task in natural language processing and the semantic web. It is usually approached by mapping a natural language question (NL question) into executable queries in formal representations such as logical forms, SPARQL, or SQL.

The state of the art in this problem uses machine learning techniques to learn the mapping.
Unfortunately, the construction of labeled corpora to train and evaluate NLI2DB systems is time- and cost-intensive, which is slowing down progress in this area. In particular, it usually requires recruiting SQL or SPARQL experts to write queries for natural language questions. For instance, in Spider (Yu et al., 2018), the authors recruited students to write SQL queries. They worked 500 person-hours to generate 5,600 queries, which corresponds to more than 5 minutes per question.

As a more cost-effective alternative to writing formal queries manually, some authors propose to use templates to generate them automatically. For instance, LC-QuaD 2.0 (Dubey et al., 2019) used 22 templates based on the structure of the target knowledge graph. Constructing templates is also time-consuming, and the expressiveness of the automatically produced queries is limited.

Apart from the high cost of generating queries, the natural language questions in current datasets do not necessarily cover the whole range of data present in the database. In Spider, the coverage is limited by the creativity of the students, and in LC-QuaD 2.0 by the templates.

In this paper, we propose a new procedure to increase the speed of the annotation process. For this, we first introduce an intermediate representation of the structured queries, which we call Operation Trees (OTs, see Figure 1). Our OTs follow a context-free grammar and are based on logical query plans that can easily be mapped to SPARQL or SQL, making our system more versatile. In addition, it has been shown that working on abstract tree representations instead of sequences yields better results (Guo et al., 2019). Recent work by Cheng et al. (2019) shows the successful use of tree-like abstractions as an intermediate representation to parse text into semantic representations, reinforcing our choice of operation trees as the main representation language.

Our annotation process works as follows.
First, we use the context-free grammar to sample random OTs for a given database. We then let annotators in a first round write the corresponding NL questions for the sampled OTs. In a second, optional, round the annotators perform an assignment of tokens from the NL question to the operations in the OT. This additional annotation enriches the information in the dataset and, as we will show below, allows for performance gains, especially in low-data regimes.

Our approach to producing datasets has the following advantages with respect to the methodology used in previous work: 1) It reduces the time needed for an annotation (less than 2 minutes, compared to more than 5 in Spider). 2) It allows us to cover the whole range of data present in the database structure and not to focus only on the most prominent examples. 3) Our annotation procedure provides alignments between operations in the formal language and words in the question, which are an additional source of supervision when training.

We applied our approach to five datasets, yielding a large corpus called OTTA, which consists of 3,792 complex NL questions plus their corresponding OTs, as well as the token assignment for one of our domains. Besides, we have adapted a state-of-the-art system (Yin and Neubig, 2017) to work on operation trees and included a mechanism to profit from token alignment annotations during training. The system yields better results, with up to a 7-point increase when trained on aligned OTs.

# 2 Related Work

In this section, we first review the related work in the area of Natural Language Interfaces to Databases (NLI2DB). Afterwards, we focus on the data resources that are currently available to evaluate these systems.

Natural Language Interfaces to Databases. There is a vast amount of literature on NLI2DB. A recent survey on methods and technologies is provided by Affolter et al. (2019).
Early systems use a keyword-based approach with inverted indexes to query the databases (Simitsis et al., 2008; Blunschi et al., 2012; Bast and Haussmann, 2015). Pattern-based approaches are able to handle more complex NL questions (Damljanovic et al., 2010; Zheng et al., 2017). Parsing-based approaches use a natural language parser to analyze and reason about the grammatical structure of a query (Li and Jagadish, 2014; Saha et al., 2016). Grammar-based approaches only allow the user to formulate queries according to certain pre-defined rules and thus focus primarily on increasing the precision of answers (Song et al., 2015; Ferre, 2017). More recent systems use a neural machine translation approach similar to translating natural languages, say, from French to English (Iyer et al., 2017a; Basik et al., 2018; Cheng et al., 2019; Liu et al., 2019; Guo et al., 2019).

Data Resources. We will now review the major data resources that have recently been used for evaluating NLI2DB systems. These resources are mainly created following two approaches: (1) both NL and structured queries are manually created, or (2) structured queries are automatically generated, and then humans create the corresponding NL questions.

Regarding fully manually created resources, Yu et al. (2018) provided Spider, a dataset with 5,600 SQL queries over 200 databases and 10,181 NL questions annotated by 11 students, where some questions were manually paraphrased to increase the variability. Finegan-Dollak et al. (2018) released Advising, with 4.5k questions about university course advising and SQL queries. Dahl et al. (1994) created ATIS, a dataset with 5k user questions about flight booking manually annotated with SQL queries and modified by Iyer et al. (2017b) to reduce nesting. Zelle and Mooney (1996) created GeoQuery, with 877 questions about US geography annotated with Prolog and converted to SQL by Popescu et al. (2003) and Giordani and Moschitti (2012).
There are also smaller datasets about restaurants with 378 questions (Tang and Mooney, 2000), the Yelp website with 196 questions, and IMDB with 131 questions (Yaghmazadeh et al., 2017).

Resources using an automatic step usually rely on generating structured queries using templates created by experts. Zhong et al. (2017) created WikiSQL, a collection of 80k pairs of SQL queries and NL questions made using Wikipedia. However, the SQL queries are relatively simple because each of the databases consists of only a single table without foreign keys. Hence, the queries do not contain joins. Dubey et al. (2019) developed LC-QuaD 2.0, with 30,000 complex NL questions and SPARQL queries over DBpedia and Wikidata. They used templates to generate SPARQL queries for seed entities and relations, which are lexicalized automatically using other templates. The NL questions of both datasets were created by crowdsourcing workers.

All the resources mentioned above required a large amount of effort. In each case, the annotators need in-depth knowledge of SQL or a similarly structured language. Our approach simplifies the process of generating question-answering corpora while ensuring a large coverage of the underlying database without forfeiting any complexity in the queries.

On the other hand, Wang et al. (2015) developed a method similar to ours. They begin with a lexicon linking natural utterances with predicates in the database. Then, they use a domain-specific grammar to create several canonical phrases associated with queries. Finally, crowdsourcing workers rewrite the canonical phrases and create the natural utterances used for training a semantic parser. Similar to our approach, they combine an automatic method with crowdsourcing workers. However, they have to create the lexicon and the grammar for each database, while our method can be applied to any database without creating new resources.
# 3 Operation Trees

In our setting, the goal is to generate an Operation Tree (OT) that finds the correct answer for a given question in natural language. An OT is a binary tree that is closely related to a logical query plan in SQL database engines. It is composed of a sequence of operations that can be mapped to a database query language such as SQL or SPARQL to retrieve the proper result.

Example. Assume that we have a database about movies that we want to query in natural language. In Figure 1, an example of an OT is depicted for the question "Who starred in 'The Notebook'?" In order to answer this question, the tables person and movie are selected, then the table movie is filtered by the movie title The Notebook. In the next step, the tables are joined via the bridge table cast. Finally, the person.name column is extracted.

We enhance these OTs by associating a reasonable subset of tokens from the NL question with each operation in the tree. For instance, the token "starred" could be associated with the Join operation,

![](images/a006da91f9a31de301171bf736f5960883b9c74bf6cf3eea203fdf190a6643a1.jpg)
(a)

![](images/a3cc21c3f29e0976fc28bf8352bb0a4b9d68821977df1c9b7b17ab58973d6212.jpg)
(b)
Figure 1: (a) Example of an Operation Tree (OT) for the query "Who starred in 'The Notebook'?" (b) The corresponding database schema.

as this operation implies that an actor starred in a movie, whereas the tokens "How many" could be associated with the Count operation. This mapping between tokens and operations will later help to train machine learning algorithms that generate OTs from natural language questions with better quality.

Definition. More formally, the OTs follow a predefined context-free grammar. In the current state, the set of operations includes major operations from the relational algebra with specific extensions. The full grammar is shown in Figure 2.
S ::= done(R) | isEmpty(R) | sum(T, A) | average(T, A) | count(R)
R ::= projection(T, A)
T ::= tableScan(TN) | selection(T, A, OP, V) | min(T, A) | max(T, A) | distinct(T) | join(T, T, A, A) | union(T, T, A, A) | intersection(T, T, A, A) | difference(T, T, A, A) | averageBy(T, A) | sumBy(T, A) | countBy(T, A)
TN ::= table name
A ::= attributes
OP ::= < | > | <= | >= | == | !=
V ::= values

Figure 2: The set of production rules for the context-free grammar of the operation trees, where table name denotes the set of all entity types in the database, attributes denotes the set of all attributes of entity types, and values denotes the set of all entries in the database. The non-terminal symbols S, T, and R denote the start symbol, intermediate tables, and result tables, respectively.

The OTs can be used to represent queries for any entity-relationship data paradigm. For instance, in SQL databases the entity types are the tables, the attributes are the columns, and the relationships are represented as tables as well. A similar mapping is possible for other paradigms.

Properties. The OTs have several features:

Question Types: There are different types of questions that can be asked, for instance, 1) yes/no questions (IsEmpty), 2) questions about a list of items (Projection followed by Done), 3) questions about the cardinality of a result set (Count), and 4) questions about an aggregation (Sum, Avg, etc.).

Result Types: The type of result is defined by the entity types in the result set. For instance, a question can ask about the list of directors that satisfy certain constraints (e.g., all directors that were born in France). In this case, the result type would be the person type.

Constraints: The constraints represent the filters that are applied to the attributes of the entities. For instance, "All directors born in France" sets a constraint on the birth_place attribute.

Entity Types: They define which entity types are involved in the query.
The selected entity types are combined, usually via a Join operation. For instance, in Figure 1 the entity types are movie and person, which are combined via the table cast.

Aggregation Types: They define reduction operations, which are applied to the data. This includes Min/Max operations on an attribute, set operations on two sets of relations, and Group By operations.

Complexity. In order to categorize the OTs, we define a complexity score similar to Yu et al. (2018), which is based on the number of components in the tree. The more Joins, Group By operations, Aggregations, or Filters are in the query, the higher the score. Like Yu et al. (2018), we define four categories: Easy, Medium, Hard, and Extra Hard.

# 4 Corpus Construction

The evident way to construct a corpus with NL questions and their corresponding OT queries would consist of two main parts: first, collect a set of NL questions, and then create the corresponding OT queries for these questions. However, this approach is very time-consuming and has a major issue. In essence, questions tend to be very narrow in scope, i.e., they do not necessarily cover the whole range of entity types, attributes, and relationships that are present in the database. Moreover, writing the corresponding OT queries for the NL questions requires sufficient SQL skills as well as a mechanism to verify that the OT statements actually correspond to the questions.

Thus, we decided to invert the process. That is, we first randomly sample an OT using the above-defined context-free grammar, and then annotators write a corresponding question in natural language. In the last step, annotators manually map tokens of the question to the operations. There are several advantages to this procedure: 1) It allows for controlling the characteristics of the OTs, i.e., we can control the question type, the result type, the constraints, and the entity types.
2) It allows for creating more complex questions that better cover the variety of the underlying data. 3) The annotation process is less time-consuming, as the annotators do not have to build the trees or write queries; rather, they can focus on writing the question and assigning tokens. We now describe the process of automatic sampling and manual annotation in more detail.

# 4.1 Tree Sampling

The tree sampling procedure is composed of the following steps:

Question Type: This can be sampled at random or be manually set if a certain type is desired.

Result Type: First, an entity type is randomly sampled. Then a specific set of attributes is sampled from the chosen entity type. Alternatively, the result type can be manually set.

Entity Types: The entity types are sampled based on the graph structure of the entities and relationships in the database schema. For this, we sample from all the possible join paths which contain the table of the result type. This is also controllable, as we can specify the length of the paths we want to consider.

Constraints: In the constraints, the filter arguments are sampled. First, the entity types on which the constraints are to be applied are randomly selected. Then we sample an operation and a value at random for each entity type and each attribute. We can limit the number of overall constraints and the maximum number of constraints for each entity type.

Group By: The Group By operations (AvgBy, SumBy, CountBy) are chosen at random. For a Group By operation, two attributes need to be selected: a group-attribute, which defines on which attribute to group, and an aggregation-attribute, which defines on which column to apply the aggregation. For instance, we could group by genre and aggregate over the movie budget.

Tree Structure: The tree structure is sampled as follows. First, the Join operations are applied on the sampled entity types. Second, the set operations (Union, Intersect, Diff) are inserted.
Third, the Selection operations are inserted. Next, the aggregation operations are inserted, i.e., Group By, Min, and Max operations. Finally, the operations for the question type are sampled. For instance, if the question type is a list of entities, then we use the Projection operation, but if it is a cardinality question, we use the Count operation.

This procedure may create trees that make no sense semantically. We handle those trees during the annotation phase, which we describe below. Furthermore, we make sure that the trees are executable. For this, we translate the trees into SQL and run them on the database. We also omit trees that return an empty result, as they can lead to confusion during the evaluation: two different queries that both return an empty result would be counted as being equal.

# 4.2 Annotation

The annotation process, i.e., writing natural language questions and assigning query tokens to operations in the OT, is performed in two phases. For each phase, we developed a graphical user interface to facilitate the annotation process (for more details, see Appendix D).

Phase 1. In the first phase, the annotator is presented with an OT, which is automatically sampled as described in the previous section. The task of the annotator is to formulate an appropriate NL question for the sampled OT. In some cases, the sampled tree has contradicting or nonsensical constraints (e.g., compute the average year). For these cases, the annotators can either skip the tree or adapt the OT by changing the constraints.

Phase 2. In the second phase, the annotators perform the token assignment as well as quality control. The annotators are presented with an OT and the NL question, which was written by a different annotator in phase 1. First, they check and correct the NL question; then they assign the tokens to the operations.
In order to achieve consistent annotation results, we set up a guideline on how the tokens are to be assigned (more information in the Appendix).

# 5 Corpus OTTA

We applied our corpus construction procedure to a set of five databases and produced a new corpus with NL questions and corresponding OTs, called OTTA. In order to compare our results with previous work, we used four databases from the Spider corpus (CHINOOK, COLLEGE, DRIVING SCHOOL, and FORMULA 1), which we extended with a dump from IMDB that we refer to as MOVIEDATA. For the annotations, we employed 22 engineers with basic knowledge of SQL databases.

# 5.1 Corpus Statistics

Table 1 summarizes the dataset. The number of tables per database ranges from 6 to 18, and the number of attributes ranges from 45 to 93 columns per database. For CHINOOK and MOVIEDATA, our corpus has more than 1,000 annotated OTs, while it has around 500 annotated OTs for each of the other three databases. For MOVIEDATA, we also performed the token annotation procedure. For each database, we computed the average complexity score. Except for MOVIEDATA, which is Hard, all other databases have a Medium average query complexity. The average time per question annotation ranges from 77 to 104 seconds (average 97.7 seconds). The token assignment and question correction, on the other hand, took on average 101 seconds per OT.

# 5.2 Corpus Comparison

In order to examine our corpus, we compare its characteristics to the Spider corpus and to the LC-QuaD 2.0 corpus. We compare the coverage of the queried data, the complexity of the natural language questions, and the complexity of the corresponding SPARQL/SQL queries.

**Coverage.** Table 2 shows the major characteristics of the three corpora. We compare the coverage of the databases in terms of the ratio of tables and attributes which appear in the queries.

The average attribute coverage of Spider over all databases equals $62.1\%$.
However, more than half of the databases in Spider contain 5 tables or less. Thus, we also report the coverage of attributes when considering only the databases with more than 5 tables.
| | MOVIEDATA | CHINOOK | COLLEGE | DRIVING SCHOOL | FORMULA 1 |
| --- | --- | --- | --- | --- | --- |
| #TABLES | 18 | 11 | 11 | 6 | 13 |
| #ATTRIBUTES | 64 | 63 | 45 | 39 | 93 |
| #QUERIES | 1148 | 1067 | 462 | 547 | 568 |
| TIME PER ANNOTATION (SEC) | 104 | 104 | 77 | 78 | 104 |
| AVG. COMPLEXITY | Hard | Medium | Medium | Medium | Medium |

Table 1: Statistics of the new corpus OTTA.
| | #QUESTIONS | #QUERIES | #DB | #TABLE/DB | TABLE COV. | ATTR COV. | MSTTR | AVG. #TOKENS | ANN. TIME |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SPIDER | 10,181 | 5,693 | 200 | 5.1 | 0.917 (0.87) | 0.621 (0.496) | 0.519 | 12.67 | 360 sec. |
| LC-QUAD 2.0 | 30,000 | 30,000 | 1 | 157,068 | 0.019 | 0.187 | 0.761 | 10.6 | - |
| OTTA (OURS) | 3,792 | 3,792 | 5 | 11.8 | 0.949 | 0.544 | 0.67 | 13.53 | 98 sec. |

Table 2: Comparison of our corpus OTTA to the Spider and LC-QuaD 2.0 corpora. Note that the number of databases in LC-QuaD 2.0 is only 1, since it is an open-domain knowledge base, and the number of tables corresponds to the number of different classes. Numbers in parentheses only consider databases with more than 5 tables.
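The table and attribute coverage figures in Table 2 are simple ratios: the fraction of schema elements that are referenced by at least one query in the corpus. A minimal sketch (representing each query as a set of referenced names is a hypothetical simplification, not the authors' actual tooling):

```python
def coverage(schema_elements, queries):
    """Fraction of schema elements (table or attribute names) that
    appear in at least one query. Each query is given as the set of
    table/attribute names it references."""
    used = set()
    for query in queries:
        used |= query
    # Intersect with the schema to ignore names outside this database.
    return len(used & set(schema_elements)) / len(schema_elements)
```

Computed once over table names and once over attribute names, this yields the two coverage columns.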
| | #AVG. JOIN | #GROUP BY | #ORDER BY | #NESTED | #HAVING | #SET OP | #AGGREGATIONS | #BOOLEAN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SPIDER | 0.537 | 0.262 | 0.234 | 0.148 | 0.068 | 0.076 | 0.519 | - |
| LC-QUAD 2.0 | 2.05 hops | 0 | 0.041 | 0 | 0 | 0 | 0.048 | 0.089 |
| OTTA (OURS) | 1.19 | 0.133 | 0 | 0 | 0.117 | 0.02 | 0.4 | 0.161 |
Table 3: Comparison of the query complexity based on the ratio of components per query. For the aggregations in LC-QuaD 2.0, we report the number of queries that use a Count operation.

Restricted to the databases which have more than 5 tables, Spider covers only $49.6\%$ of attributes. Our corpus OTTA, in contrast, covers $54.4\%$ of all attributes. Furthermore, the divide becomes more apparent when we consider databases with larger numbers of tables. For instance, for the FORMULA-1 database, our corpus covers $44.2\%$ of all attributes, in contrast to Spider, where only $22.1\%$ of attributes are covered. LC-QuaD 2.0 covers 1,310 out of 7,005 properties (i.e., attributes in SQL), which corresponds to $18.7\%$. This is an extensive coverage, considering the high number of properties.

The table coverage shows a similar picture: our approach covers $94.9\%$ of all tables in the databases, whereas Spider covers $91.7\%$. This number drops down to $87\%$ when considering only databases with more than 5 tables. Again, this effect is most pronounced for the FORMULA-1 database, where we cover $92\%$ of the tables, whereas Spider only covers $69.2\%$. This shows that our method scales better to larger databases, which is relevant for real-world applications, where databases with a vast number of tables exist. LC-QuaD 2.0 covers around $1.9\%$ of approx. 160k classes, which makes comparison hard, as it is impossible to cover this vast amount of classes with 30k queries.

Query Complexity. In order to compare the complexity of the queries, we examine the number of occurrences of different components in the queries (see Table 3).

We first observe that our corpus OTTA does not contain any queries with Order By operators or nested queries; however, these could easily be added to the grammar to fill this gap. Furthermore, Spider contains more aggregation operations (in particular Min, Max, Count, Average, and Sum).
Again, this could easily be adapted in our corpus by sampling more trees that contain these aggregations. On the other hand, our corpus stands out in the number of joins per query: on average, OTTA has 1.19 join operations per query in contrast to Spider, which has 0.537 joins per query. In fact, about $40\%$ of the queries in Spider contain joins, whereas $54\%$ of the queries in OTTA contain at least one join operation. Furthermore, around $37\%$ of our queries contain two joins, in contrast to $9\%$ in Spider. On the other hand, LC-QuaD 2.0 contains an average of 2 hops (equivalent to two joins in relational databases) per query, which lies in the nature of graph database queries that are optimized for handling queries ranging over multiple triple patterns. However, LC-QuaD 2.0 lacks complexity when considering more complex components (e.g., Group By, set operations, etc.). In addition to the operations in relational algebra, the OTs also support Boolean questions (i.e., yes/no questions), which make up $16.1\%$ of our corpus compared to $8.9\%$ in LC-QuaD 2.0.

Question Complexity. The lexical complexity of the NL questions is measured in terms of the mean segmental type-token ratio (MSTTR) (Covington and McFall, 2010), which computes the number of different token types in relation to all tokens in a corpus. The MSTTR is computed over text segments of equal length in order to avoid biases due to different lengths within the corpora. First, note that the average length of the questions in all three corpora is approximately the same, between 10.6 and 13.6 tokens on average. Table 2 shows that our corpus contains a much higher lexical complexity of the questions than Spider (0.67 instead of 0.52). Thus, our approach seems to avoid trivial or monotonous questions, which also matches our impression from manual inspection. On the other hand, the lexical complexity is higher in LC-QuaD 2.0, which is due to the open-domain nature of the dataset.
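The segmental computation behind MSTTR can be sketched in a few lines; a minimal illustration (the segment length is a parameter of the metric, and the value used here is an assumption, not necessarily the authors' exact setting):

```python
def msttr(tokens, segment_len=100):
    """Mean segmental type-token ratio: average the type/token ratio
    over consecutive equal-length segments. A trailing partial segment
    is dropped so that all segments have the same length."""
    ratios = [
        len(set(tokens[i:i + segment_len])) / segment_len
        for i in range(0, len(tokens) - segment_len + 1, segment_len)
    ]
    return sum(ratios) / len(ratios) if ratios else 0.0
```

Because every segment has the same length, MSTTR values remain comparable across corpora of different sizes, unlike the plain type-token ratio, which shrinks as a corpus grows.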
Examples. In Table 4, we show example questions from OTTA compared to questions from Spider. The examples show that the quality of the questions is similar. The easy questions in both datasets are often only simple filtering questions on one table. Medium-complexity questions include join operations and filters. Hard questions in both datasets include join operations and aggregation operations such as finding the maximum or computing the average. The biggest difference is in the Extra category: there, Spider focuses more on subqueries in the where clause, while OTTA focuses more on larger join paths, which are typical for real-world database queries, as well as group-by operations and aggregations.

# 6 Baseline Systems

Baseline model. As the baseline model for generating OTs from NL questions, we follow the Syntactic Neural Model for Code Generation by Yin and Neubig (2017), which we refer to as Grammar-RNN. This model is based on an encoder-decoder architecture that learns to generate a sequence of production rules of an arbitrary grammar, which in turn produces the query for a given question. For a more detailed discussion of this architecture, we refer the reader to Yin and Neubig (2017). In our case, it learns to generate the rules defined in Figure 2 for a given question in natural language. Based on the generated list of rules, an OT is created.

We train the model in two phases: a pre-training phase and a supervised phase. In the pre-training phase, we train a grammar autoencoder on large amounts of randomly sampled OTs. In the supervised phase, we replace the grammar encoder by a text encoder and train on the labelled dataset, i.e., the samples with NL questions and corresponding OTs.

Encoder. For the NL question, we use a standard Gated Recurrent Unit (GRU) (Chung et al., 2014) to encode the question.
If $w_{i}$ denotes the representation of the $i$-th token in the question, then the encoder produces a corresponding hidden state $h_{i}^{E}$. Let $H^{E} \in \mathbb{R}^{N \times h}$ denote the concatenation of all hidden states produced by the GRU for one question, where $N$ is the number of tokens and $h$ the size of the hidden state.

Decoder. The decoder learns to generate a sequence of production rules with which a tree $y$ is generated for a given encoding $x$ of the NL question. The generation process is formalized as:

$$
p(y \mid x) = \prod_{t=1}^{T} p\left(a_{t} \mid x, a_{<t}, a_{p_{t}}\right) \tag{1}
$$

Here, $a_{t}$ is the action taken at time $t$, $a_{<t}$ are the actions taken before time $t$, $a_{p_{t}}$ is the parent action, and $x$ is the encoded input question. There are two different types of rules that the model applies during decoding: 1) If the current rule generates a non-terminal symbol, then ApplyRule[r] is executed, which applies a production rule to the current tree. 2) If the next symbol is a terminal, then GenToken[v] is applied, which selects the token from a vocabulary. In our case, there are different types of tokens to be generated: table names, attribute names, and filter operations. Similar to Grammar-RNN, we implement the decoder using a recurrent neural network, where the internal state is given by:

$$
h_{t} = \mathrm{GRU}\left(\left[a_{t-1} : c_{t} : a_{p_{t}} : n_{f_{t}}\right], \widetilde{h}_{t-1}\right) \tag{2}
$$

$n_{f_{t}}$ is the embedding of the current node type (e.g., average, union, ...), $c_{t}$ is a context vector that is computed by applying soft attention over the input hidden states $H^{E}$, and $h_{t-1}$ is the hidden vector of the last state. In contrast to Yin and Neubig (2017), we apply attention based on Luong et al. (2015), where $\tilde{h}_{t-1} = \tanh(W_{c}[h_{t-1} : c_{t}])$.

| Hardness | Spider | OTTA |
| --- | --- | --- |
| easy | Find the number of albums. | Where were the invoices with the total sum of 1.99 or smaller issued? |
| | What is the average unit price of all the tracks? | What are the unit prices of tracks composed by Alfred Ellis/James Brown? |
| | Find all the customer information in state NY. | To which country belongs the 89503 postal code? |
| medium | Count the number of tracks that are part of the rock genre. | What is the average length of the tracks in the Grunge playlist? |
| | Please show the employee first names and ids of employees who serve at least 10 customers. | When did we sell tracks larger than 8675345 bytes? |
| | Find the name of the artist who made the album "Balls to the Wall". | To which postal codes did we sell a track named Headspace? |
| hard | What is the average duration in milliseconds of tracks that belong to Latin or Pop genre? | How many different playlists with a track that is bigger than 7045314 bytes do exist? |
| | What are the names of artists who have not released any albums? | What is the album title having the track with the lowest length in milliseconds in the genre name Sci Fi & Fantasy? |
| | What are the last names of customers without invoice totals exceeding 20? | What are the genres from artists not named Scholars Baroque Ensemble? |
| extra | What is the name of the media type that is least common across all tracks? | What's the total unit price sold to customers with the email hholy@gmail.com and Argentina as billing country? |
| | Count the number of artists who have not released an album. | How many different genres do the tracks have, which were bought by customers who live in France? |
| | What are the album titles for albums containing both Reggae and Rock genre tracks? | Which customers made at least 35 purchases, excluding titles from the Chico Science & Nacao Zumbi album? |

Table 4: Example questions from OTTA and Spider, grouped by hardness score. The examples are for the Chinook domain, which is an online music store database.

For the selection of the terms, we have four output matrices $W_{R}, W_{T}, W_{A}, W_{C}$, where $W_{R}$ encodes the grammar rules (i.e., for the non-terminal symbols), and $W_{T}, W_{A}, W_{C}$ encode the table names, attributes, and comparison operations, respectively. Depending on the current frontier node, the next output is computed by:

$$
a_{t} = \operatorname{argmax}\left(\operatorname{softmax}\left(W_{R} h_{t}\right)\right) \tag{3}
$$

Grammar Encoder. The tree encoder, which we use for the pre-training, is based on the same GRU architecture as the decoder. The hidden states for each rule are computed by:

$$
h_{t} = \mathrm{GRU}\left(\left[a_{t-1} : a_{p_{t}} : n_{f_{t}}\right], h_{t-1}\right) \tag{4}
$$

In contrast to the decoder, there is no context vector $c_{t}$; moreover, $h_{t-1}$ is simply the last hidden state computed by the GRU. The output of the encoder is the sequence of all states $H^{R} \in \mathbb{R}^{R \times h}$, where $R$ denotes the number of rules in the encoded tree.

Token Attention. A straightforward method to include the explicit token alignment, which is created in the second annotation phase, is to force the attention mechanism to learn the alignment. For this, we add an extra loss function, which computes the binary cross-entropy for each attention weight.

More formally, let $\alpha_{t} = \operatorname{softmax}(h_{t-1} H^{E}) \in \mathbb{R}^{N}$ be the attention weights computed for timestep $t$ (during the pre-training phase, $H^{E}$ is replaced by $H^{R}$). Then let $\alpha_{t}^{(i)}$ be the attention weight for the $i$-th token.
For each token we add the binary cross-entropy loss

$$
-\left(g_i \log\left(\alpha_t^{(i)}\right) + (1 - g_i) \log\left(1 - \alpha_t^{(i)}\right)\right), \tag{5}
$$

where $g_{i}\in \{0,1\}$ denotes whether the token is assigned to the current node or not.

# 7 Results

We now report the results of our model. The details of the experimental setup can be found in Appendix A. Each experiment is repeated five times with different random seeds. Table 5 shows the precision of the Grammar-RNN on the 5 datasets of OTTA. The precision is defined as the exact result set match between the gold-standard query and the generated query. Furthermore, the table shows the average precision for each query complexity category. The column "Weighted Avg." refers to the mean average precision over all queries irrespective of the query complexity category.

Precision. For all the databases, except FORMULA-1, the model achieves a precision between $45.1\%$ and $47.5\%$. For FORMULA-1, the model only achieves a score of $26.3\%$. This could be explained by the fact that the FORMULA-1 database contains 93 different attributes, of which our data covers only 42. Furthermore, each attribute appears in only 17.1 queries on average. In contrast, for the COLLEGE database the attributes appear in 56 queries on average. Thus, it is harder for the model to learn attributes which do not appear often in the training set. For most of the databases, the model cannot handle the extra hard questions, which often contain multiple joins, aggregations, and/or group by operators. Note that without the pre-training phase, the scores drop by a large margin. For instance, the scores for MOVIEDATA drop below $30\%$ precision.
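To make the evaluation metric concrete, here is a minimal sketch of exact result-set matching. This is an illustrative reconstruction, not the authors' code; `exact_set_match` and `precision` are hypothetical helper names, and result sets are compared as order-insensitive multisets of rows.

```python
from collections import Counter

def exact_set_match(gold_rows, predicted_rows):
    # Two queries match iff they return the same multiset of rows,
    # ignoring row order.
    return Counter(map(tuple, gold_rows)) == Counter(map(tuple, predicted_rows))

def precision(examples):
    # Fraction of (gold, predicted) result-set pairs that match exactly.
    hits = sum(exact_set_match(gold, pred) for gold, pred in examples)
    return hits / len(examples)
```

Under this metric a generated query that returns the right rows in a different order still counts as correct, while any missing or duplicated row counts as a miss.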
| | Easy | Medium | Hard | Extra Hard | Weighted Avg. |
|---|---|---|---|---|---|
| MOVIEDATA | 0.645 | 0.619 | 0.437 | 0.108 | 0.475 |
| CHINOOK | 0.610 | 0.442 | 0.396 | 0.482 | 0.473 |
| COLLEGE | 0.525 | 0.739 | 0.294 | 0.077 | 0.468 |
| DRIVING SCHOOL | 0.518 | 0.272 | 0.611 | 0.187 | 0.451 |
| FORMULA 1 | 0.355 | 0.075 | 0.0 | 0.0 | 0.263 |
+ +Table 5: Precision of queries against our 5 datasets according to query complexity. "Weighted Avg." refers to the mean average precision over all queries irrespective of the query complexity category. + +Benefit from Token Assignments. We now evaluate whether the token assignments can help to train better models. Figure 3 displays the learning curves for the MOVIEDATA database with and without the token assignment. The model is trained with $20\%$ , $40\%$ , $60\%$ , $80\%$ , and $100\%$ of the data. The results show that using the token assignment increases the scores by around $2\%$ . In the case of $20\%$ training data, the gain is even as high as $7\%$ , thus showing that the model can benefit from the additional information that is provided in the token assignments. + +![](images/7754b41b1ab86c6838e208fdaccb14104f1182eccd1cb82da729cf974a5cd4dc.jpg) +Figure 3: Learning curve for $20\%$ , $40\%$ , $60\%$ , $80\%$ , and $100\%$ of the data on database MOVIEDATA as part of OTTA. We compare the scores for the training with and without token alignment. + +# 8 Conclusion + +In this paper, we introduced a fast annotation procedure to create NL queries and corresponding database queries (in our case, Operation Trees). + +Our procedure more than triples the velocity of annotation in comparison to previous methods, while ensuring a larger variety of different types of queries and covering a larger part of the underlying databases. Furthermore, our procedure allows a fine-grained alignment of tokens to operations. We then used our new method to generate OTTA, a novel corpus for semantic parsing based on operation trees in combination with token assignments. Generating this corpus was more time- and cost-efficient than with previous approaches. Our statistical analysis showed that the corpus yields a higher coverage of attributes in the databases and more complex natural language questions than other existing methods. 
Furthermore, we implemented a baseline system for automatically generating OTs from NL queries. This baseline achieves scores of up to $48\%$ precision, which are already reasonable while also leaving large potential for improvement in future research. Finally, we showed that the inclusion of the token alignment results in an increase of precision of up to $7\%$.

Based on these results, we will explore ways to leverage the token assignment for domain adaptation and few-shot learning. We also plan to enhance the annotation process by automatically generating proposals for the NL questions and token assignments and letting the annotators only perform corrections. We hope that this increases annotation efficiency even more.

# 9 Acknowledgements

This work has been partially funded by the LIHLITH project supported by the EU ERA-Net CHIST-ERA; the Swiss National Science Foundation [20CH21_174237]; the Agencia Estatal de Investigación (AEI, Spain) projects PCIN-2017-118 and PCIN-2017-085; the INODE project supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 863410.

# References

Katrin Affolter, Kurt Stockinger, and Abraham Bernstein. 2019. A comparative survey of recent natural language interfaces for databases. The VLDB Journal, 28(5):793-819.
Fuat Basik, Benjamin Hattasch, Amir Ilkhechi, Arif Usta, Shekar Ramaswamy, Prasetya Utama, Nathaniel Weir, Carsten Binnig, and Ugur Cetintemel. 2018. DBPal: A learned NL-interface

for databases. In Proceedings of the 2018 International Conference on Management of Data, pages 1765-1768. ACM.
Hannah Bast and Elmar Haussmann. 2015. More accurate question answering on freebase. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 1431-1440. ACM.
Lukas Blunschi, Claudio Jossen, Donald Kossmann, Magdalini Mori, and Kurt Stockinger. 2012. SODA: Generating SQL for business users.
Proceedings of the VLDB Endowment, 5(10):932-943.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Jianpeng Cheng, Siva Reddy, Vijay Saraswat, and Mirella Lapata. 2019. Learning an executable neural semantic parser. Computational Linguistics, 45(1):59-94.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Michael A. Covington and Joe D. McFall. 2010. Cutting the Gordian knot: The moving-average type-token ratio (MATTR). Journal of Quantitative Linguistics, 17(2):94-100.
Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: The ATIS-3 corpus. In Proceedings of the Workshop on Human Language Technology, HLT '94, pages 43-48, Stroudsburg, PA, USA. Association for Computational Linguistics.
Danica Damljanovic, Milan Agatonovic, and Hamish Cunningham. 2010. Natural language interfaces to ontologies: Combining syntactic analysis and ontology-based lookup through the user interaction. In *Extended Semantic Web Conference*, pages 106–120. Springer.
Mohnish Dubey, Debayan Banerjee, Abdelrahman Abdelkawi, and Jens Lehmann. 2019. LC-QuAD 2.0: A large dataset for complex question answering over wikidata and dbpedia. In International Semantic Web Conference, pages 69-78. Springer.
Sebastien Ferre. 2017. Sparklis: an expressive query builder for SPARQL endpoints with guidance in natural language. Semantic Web, 8(3):405-418.
Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. 2018. Improving text-to-SQL evaluation methodology.
In Proceedings of the 56th Annual Meeting of the Association for + +Computational Linguistics (Volume 1: Long Papers), pages 351-360, Melbourne, Victoria, Australia. +Alessandra Giordani and Alessandro Moschitti. 2012. Automatic generation and reranking of sql-derived answers to nl questions. In Proceedings of the Second International Conference on Trustworthy Eternal Systems via Evolving Software, Data and Knowledge, EternalS'12, pages 59-76, Berlin, Heidelberg. Springer-Verlag. +Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu, and Dongmei Zhang. 2019. Towards complex text-to-SQL in cross-domain database with intermediate representation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4524-4535, Florence, Italy. Association for Computational Linguistics. +Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017a. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963-973. +Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017b. Learning a neural semantic parser from user feedback. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 963-973, Vancouver, Canada. Association for Computational Linguistics. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. +Fei Li and HV Jagadish. 2014. Constructing an interactive natural language interface for relational databases. Proceedings of the VLDB Endowment, 8(1):73-84. +Haoyan Liu, Lei Fang, Qian Liu, Bei Chen, LOU Jian-Guang, and Zhoujun Li. 2019. Leveraging adjective-noun phrasing knowledge for comparison relation prediction in text-to-sql. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3506-3511. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Ana-Maria Popescu, Oren Etzioni, and Henry Kautz. 2003. Towards a theory of natural language interfaces to databases. In Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03, pages 149-157, New York, NY, USA. ACM. + +Diptikalyan Saha, Avrilia Floratou, Karthik Sankaranarayanan, Umar Farooq Minhas, Ashish R Mittal, and Fatma Ozcan. 2016. Athena: an ontology-driven system for natural language querying over relational data stores. Proceedings of the VLDB Endowment, 9(12):1209-1220. +Alkis Simitsis, Georgia Koutrika, and Yannis Ioannidis. 2008. Precis: from unstructured keywords as queries to structured databases as answers. The VLDB Journal The International Journal on Very Large Data Bases, 17(1):117-149. +Dezhao Song, Frank Schilder, Charese Smiley, Chris Brew, Tom Zielund, Hiroko Bretz, Robert Martin, Chris Dale, John Duprey, Tim Miller, et al. 2015. Tr discover: A natural language interface for querying and analyzing interlinked datasets. In International Semantic Web Conference, pages 21-37. Springer. +Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Integrating statistical and relational learning for semantic parsing. In 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 133-141, Hong Kong, China. Association for Computational Linguistics. +Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a semantic parser overnight. 
In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1332-1342, Beijing, China. Association for Computational Linguistics.
Navid Yaghmazadeh, Yuepeng Wang, Isil Dillig, and Thomas Dillig. 2017. SQLizer: Query synthesis from natural language. Proc. ACM Program. Lang., 1(OOPSLA):63:1-63:26.
Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440-450, Vancouver, Canada. Association for Computational Linguistics.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911-3921, Brussels, Belgium. Association for Computational Linguistics.
John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence - Volume 2, AAAI'96, pages 1050-1055. AAAI Press.

Weiguo Zheng, Hong Cheng, Lei Zou, Jeffrey Xu Yu, and Kangfei Zhao. 2017. Natural language question/answering: Let users talk with the knowledge graph. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 217-226. ACM.
Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.

# A Experimental Setup

Preprocessing. We use spaCy to tokenize the NL questions in OTTA.
In order to find the entities for the constraints, we employ simple string matching. With this, we find $96\%$ of all entities. Thus, the generated OTs are executable, and we can compare the results of the generated OT to the results of the gold-standard OT from the corpus.

Model Configuration. For our model, we chose a hidden layer of $h = 256$ dimensions. We optimize using the Adam (Kingma and Ba, 2014) optimizer with the standard values. We let the model train using early stopping with patience on the validation loss, where the validation set is the left-out fold in 5-fold cross-validation. For the word embeddings, we use the pre-trained FastText embeddings (Bojanowski et al., 2017), which are refined during the training phase.

# B Query Complexity

The tables below show more details of the coverage (see Tables 6 and 7) and the average number of joins per query (see Table 8).

# C Example of Tree Sampling

Figure 4 shows a randomly sampled tree. During Phase 1 of the annotation procedure, an annotator associated the tree with the question: What is the average movie vote of different movies having an Oscar nominee with a cast character called Jesse and were nominated for an Oscar in the year 1991 or later? In the second phase of the annotation, the tokens of the questions were associated with the nodes of the tree. The tree is depicted from root to leaves, where the root node is the last operation, and the leaf nodes are the GetData-nodes. Here we describe the tree sampling procedure in more detail with the tree in Figure 4 as an example.

1. The query type is selected. There are five different types: list, sum, count, average, and Boolean. In our example, average was selected. This can be forced manually or randomly sampled.
2. The result type is selected, which, in this case, is movie.vote_average. This can also be set manually or be sampled at random. Based on the query type, only certain types of results are allowed.
More precisely, for average and sum operations, only numeric result types are allowed.

3. The join path is selected. In the first step, a path length is selected, which can be predefined or randomly sampled. In this case, the path length is set to five. Then, in a second step, a random path of the predefined length is selected. In the current example, the query path is: movie, cast, person, oscar_nominee, oscar. The path always starts with the table of the result type.
4. The set operation is selected among union, intersection, or difference. In this example, there is no set operation. After the operation is selected, a subpath is chosen, on which the set operation is performed. For instance, if we wanted to know the movies where Brad Pitt and George Clooney worked together, then the subpath movie, cast, person is selected. Finally, two different filters are inserted, one for each actor.
5. The group by operation is selected. First, the operator is selected among sum, average, or count. Then, the group by attribute and the aggregation attribute are selected. In our example, there is no group by operation.
6. The aggregation operation is selected among the min and max operation. This is relevant for questions of the type: Which movie has the highest rating? In this example, we have no aggregation operation.
7. The filters are selected. For this, we define the number of total filters and the maximal number of filters per table. In this case, we set the number of filters equal to 2, and the maximal number of filters per table to one. Then, the appropriate number of attributes is selected randomly along the path. In this case, the tables oscar and cast were selected. Then, an attribute is selected, followed by a comparison operator and a value, which is randomly sampled from the database. In our example, we have: oscar.year $\geq 1991$ and cast.character = Jesse.
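The seven sampling steps above can be sketched in a few lines of code. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the schema constants (`QUERY_TYPES`, `JOIN_PATHS`, the attribute names) and the helper `sample_tree` are hypothetical, and the optional steps 4–6 (set operation, group by, min/max aggregation) are omitted, as in the example tree of Figure 4.

```python
import random

QUERY_TYPES = ["list", "sum", "count", "average", "boolean"]
NUMERIC_RESULT_TYPES = {"movie.vote_average"}          # assumed schema
RESULT_TYPES = NUMERIC_RESULT_TYPES | {"movie.title", "person.name"}
JOIN_PATHS = {                                         # assumed join paths
    "movie": [["movie", "cast", "person"],
              ["movie", "cast", "person", "oscar_nominee", "oscar"]],
    "person": [["person", "cast", "movie"]],
}

def sample_tree(rng=random):
    # Step 1: query type, forced manually or randomly sampled.
    query_type = rng.choice(QUERY_TYPES)
    # Step 2: result type; average/sum restrict it to numeric attributes.
    candidates = (sorted(NUMERIC_RESULT_TYPES)
                  if query_type in ("average", "sum")
                  else sorted(RESULT_TYPES))
    result_type = rng.choice(candidates)
    # Step 3: join path, always starting at the result type's table.
    table = result_type.split(".")[0]
    path = rng.choice(JOIN_PATHS.get(table, [[table]]))
    # Steps 4-6 (set operation, group by, aggregation) omitted in this sketch.
    # Step 7: up to two filters, at most one per table, sampled along the path;
    # filter values would be drawn from the database.
    n_filters = min(2, len(path))
    filters = [(t, "=", "<sampled value>") for t in rng.sample(path, n_filters)]
    return {"query_type": query_type, "result_type": result_type,
            "path": path, "filters": filters}
```

Passing a seeded `random.Random` instance makes the sampled trees reproducible across annotation runs.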
| | TOTAL | CHINOOK | COLLEGE | DRIVING SCHOOL | FORMULA 1 |
|---|---|---|---|---|---|
| SPIDER | 0.917 (0.87) | 0.727 | 0.909 | 1 | 0.692 |
| OUR DATASET | 0.949 | 1 | 0.818 | 1 | 0.923 |

Table 6: Table coverage, relative to the total number of existing tables. Our dataset shows better table coverage, except for one database (COLLEGE), where the coverage differs by one table. The biggest improvement in coverage was achieved on the database FORMULA_1, which is also the most complex database with the largest number of tables. The number in braces indicates the average table coverage for the databases with more than 5 tables.

| | TOTAL | CHINOOK | COLLEGE | DRIVING SCHOOL | FORMULA 1 |
|---|---|---|---|---|---|
| SPIDER | 0.621 (0.496) | 0.354 | 0.383 | 0.730 | 0.221 |
| OUR DATASET | 0.544 | 0.584 | 0.384 | 0.756 | 0.442 |

Table 7: Attribute coverage. Our method gives better attribute coverage in particular for larger datasets, for instance, FORMULA_1. The number in braces indicates the average attribute coverage for the databases with more than 5 tables.

| | TOTAL | CHINOOK | COLLEGE | DRIVING SCHOOL | FORMULA 1 |
|---|---|---|---|---|---|
| SPIDER | 0.504 | 0.667 | 0.412 | 0.441 | 0.925 |
| OUR DATASET | 1.15 | 0.95 | 1.18 | 0.837 | 1.2 |

Table 8: Average number of joins per query.

# D Annotation Tool

The annotation process is performed in two phases: writing an NL question for a given OT, and assigning tokens from the NL question to the nodes within the OT. We have built two user interfaces, one for each phase. Figure 5 shows screenshots of both tools.

Phase 1. In the first phase, the annotators are presented with an OT and the constraints. Their task is to write an appropriate question for the OT. For this, they can adapt the constraints, in case they are nonsensical. Furthermore, the annotators can access the intermediate results for each node in the tree to better understand what the OT does. In cases where the OT cannot be annotated with an appropriate question, the OT can be skipped.

Phase 2. For the second phase (Figure 5 (b)), the annotators are presented with an OT and an NL question, which was written by another annotator in the previous phase. The task is to correct the question, and then assign the tokens of the NL question to the nodes (i.e., operations) in the tree. For this task, the tool guides the annotator from node to node in the OT. Moreover, for each node, the annotator can choose the corresponding tokens. In the final step, the annotators can correct their token assignment using drag-and-drop features.

Guidelines. In order to have consistent annotations (especially in the second phase), we provided the annotators with extensive tutorial videos. On average, the annotators took 30 minutes to get used to the tool and start to work efficiently. For the first phase, we instructed the annotators to write an appropriate question and gave examples, as well as examples of pitfalls.

For the second phase, we introduced stricter guidelines, as we noticed that annotators had trouble with this step. In particular, the join operations were unclear to the annotators. Thus, we decided on the following rules:

- Table: If the table denotes an entity type (e.g., movie), the tokens that denote this entity type are to be assigned (e.g., "movies").
If the table is a bridge table, which denotes a relationship between entities (e.g., production_country), then the tokens that denote this relationship are to be assigned to the operation (e.g., "movies", "produced", "in").
- Joins: For the join operations, the same guidelines as for the bridge tables are to be followed.
- Filter: For the filter constraints (e.g., "person_name = Tom Cruise"), the tokens that represent the constraint are to be selected (e.g., "by", "Tom", "Cruise").
- Query type: For each query type (e.g., count, average, sum, ...), the tokens that correspond to or trigger this question type are to be selected (e.g., "How", "many").

Annotators. We recruited 22 annotators, who have a basic understanding of database technologies. We paid each annotator \$25 per hour. Each annotator was given access to a set of instruction videos as well as a user manual. Furthermore, the annotators could pose questions in a forum.

![](images/66af301a9a758513d8aef2e5a4e5aea4c27132c32e950d3e74cbfbd7509c3951.jpg)
Figure 4: Example of a randomly sampled tree. The nodes denote the node type with their arguments. The tokens are assigned during the second phase of the annotation process. This tree is the answer to the question: What is the average movie vote of different movies having an Oscar nominee with a cast character called Jesse and were nominated for an Oscar in the year 1991 or later?

![](images/c890fcacf69e3fa8c656ccc889985935cbb671e6789cd8c96dfe3ee916833257.jpg)
(a)

![](images/4e58ab46978cc6b0678e323d579d1f7db5797cdae9515e976e1896fbdeb7fc16.jpg)

![](images/3ba2f77b93e2688db1319c21e12d99d42757aec5dbec1b3d91c0071b5dd79ea0.jpg)
(b)

![](images/1310e48101609ebcb937e605c2ef055975384eac8c44b8393e28e54c10d4ab06.jpg)
Figure 5: The annotation tool. (a) The OT and the constraints are shown to the annotators. For each node, the annotators can inspect the result of the execution.
The annotators write a question and (b) assign the tokens of the question to the operations. \ No newline at end of file diff --git a/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/images.zip b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4bb7e1402ecbe679956c2f8b0e568343fcc0acd6 --- /dev/null +++ b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83d0fc352164c2c1b22b278fd1cd57b239b0fe9e14aa6becdfb4ff879260b6d0 +size 542642 diff --git a/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/layout.json b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5e5bc2385f12d7d90cdb64c8163659bfbd47cf64 --- /dev/null +++ b/amethodologyforcreatingquestionansweringcorporausinginversedataannotation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b54a27eef018239838f711b36207a0310172645ae033f1267941b4ccf5ba3a9c +size 423470 diff --git a/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_content_list.json b/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1980d34dfb775e3af841d23cc67bab6c7879507a --- /dev/null +++ b/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4edc127e1d5ffdc0c0f27bd76af5ca6d57fe41a549eecbd466487f6ebd658296 +size 86061 diff --git a/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_model.json b/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_model.json new file mode 100644 
index 0000000000000000000000000000000000000000..d2271525737cad67bdf41dbbd713db266b35a480 --- /dev/null +++ b/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:add3a14d4ac2b8e125953b95ba20593915c4d5569ee1f4a0642a3ea9c756d8a5 +size 110407 diff --git a/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_origin.pdf b/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..be8dcb691cc7fcb67626697da380839f038aaf9e --- /dev/null +++ b/amixtureofh1headsisbetterthanhheads/b775d5cd-cbb7-451f-8a54-f0b8b94894f6_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:691cc369e4cac451bce4a7a69c52e961136eaa2d183ab5bdb0927bfd6bab59cb +size 530635 diff --git a/amixtureofh1headsisbetterthanhheads/full.md b/amixtureofh1headsisbetterthanhheads/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1229b9a30940811e0dedca3c940bad929d25c61e --- /dev/null +++ b/amixtureofh1headsisbetterthanhheads/full.md @@ -0,0 +1,381 @@ +# A Mixture of $h - 1$ Heads is Better than $h$ Heads + +Hao Peng\* Roy Schwartz\* Dianqi Li\* Noah A. Smith\* + +$\diamond$ Allen Institute for Artificial Intelligence + +$\spadesuit$ Paul G. Allen School of Computer Science & Engineering, University of Washington + +\*Department of Electrical & Computer Engineering, University of Washington {hapeng, roysch, nasmith}@cs.washington.edu, dianqili@uw.edu + +# Abstract + +Multi-head attentive neural architectures have achieved state-of-the-art results on a variety of natural language processing tasks. Evidence has shown that they are overparameterized; attention heads can be pruned without significant performance loss. In this work, we instead "reallocate" them—the model learns to activate different heads on different inputs. 
Drawing connections between multi-head attention and mixture of experts, we propose the mixture of attentive experts model (MAE). MAE is trained using a block coordinate descent algorithm that alternates between updating (1) the responsibilities of the experts and (2) their parameters. Experiments on machine translation and language modeling show that MAE outperforms strong baselines on both tasks. Particularly, on the WMT14 English to German translation dataset, MAE improves over "transformer-base" by 0.8 BLEU, with a comparable number of parameters. Our analysis shows that our model learns to specialize different experts to different inputs.1 + +# 1 Introduction + +The transformer architecture and its variants achieve state-of-the-art performance across a variety of NLP tasks, including machine translation (Vaswani et al., 2017; Ott et al., 2018), language modeling (Radford et al., 2018; Baevski and Auli, 2019), semantic role labeling (Strubell et al., 2018), and more (Devlin et al., 2019; Liu et al., 2019b; Yang et al., 2019b). Under the hood, multi-head attention provides the driving force: multiple separately parameterized attention functions act in parallel to contextualize the input representations; their outputs are then gathered by an affine transformation, and fed to onward computation. + +![](images/d61978a60e345b0a529b3fdf774d343546d15fe0a4a223a6d05436d215a7e12a.jpg) +Figure 1: Illustration of MAE: a mixture of attentive experts. Each $\mathbf{H}_i$ box is an attention head in a given layer; there are $h$ of them in total. Experts are groups of $h - 1$ attention heads. MAE learns an input-dependent distribution of the experts (g). At each training step, a single expert is selected and updated (solid line); during the evaluation, experts' outputs are linearly combined with weights produced by g. + +Recent efforts by Voita et al. (2019) and Michel et al. 
(2019) suggest that typical transformer networks are overparameterized, in the sense that at test time, many of the heads, or even a full layer (Fan et al., 2020), can be removed without significant loss in performance. In response to this observation, they propose to prune the unimportant attention heads after the model is trained, aiming for faster inference.

In this paper, we ask whether, instead of reducing the model capacity, we can use it more effectively. We propose a mixture of attentive experts (MAE). MAE retains all attention heads, and learns to activate different heads on different inputs (see illustration in Figure 1). We start by showing that multi-head attention can be seen as a uniform, input-agnostic mixture of experts (Jacobs et al., 1991), by grouping a subset of attention heads as an expert (§2.2). We then introduce MAE, which, instead of uniformly weighting the experts, complements the experts with a learned, input-dependent function that assigns their responsibilities (§2.3). To train MAE, we propose a two-step algorithm based on block coordinate descent (§3), which alternates between updating the experts' responsibilities and their parameters.

We evaluate MAE on machine translation and language modeling (§4). Our approach outperforms strong baselines on both; on the WMT14 English to German MT dataset, MAE outperforms transformer-base (Vaswani et al., 2017) by 0.8 BLEU with a negligible increase in the number of parameters. Our analysis shows that MAE learns to encourage different experts to specialize on different inputs (§5).

# 2 MAE: Mixture of Attentive Experts

This section describes MAE in detail. It is inspired by a mixture-of-experts view of multi-head attention, which we present in §2.2. Specifically, we show that multi-head attention can be viewed as a mixture of uniformly weighted experts, each consisting of a subset of attention heads.
Based on this observation, we propose MAE, which learns to weight the experts (§2.3) depending on the input. We begin by laying out notation and necessary background in §2.1.

# 2.1 Background: Mixture of Experts

Mixture of experts is a well-established technique for ensemble learning (Jacobs et al., 1991). It jointly trains a set of expert models $\{\mathbf{f}_i\}_{i=1}^k$ that are intended to specialize across different input cases. The outputs produced by the experts are aggregated by a linear combination, with a "gating function" $\mathbf{g} = [g_1, \ldots, g_k]$ determining the importance of each expert in the final decision:

$$
\operatorname{MoE}(\mathbf{x}) = \sum_{i=1}^{k} g_i(\mathbf{x}) \cdot \mathbf{f}_i(\mathbf{x}). \tag{1}
$$

The gating function can be parameterized by, e.g., a neural network. We will also refer to $\mathbf{g}$ as the responsibilities or weights of the experts.

# 2.2 Multi-Head Attention: a Mixture-of-Experts Perspective

Multi-head attention is the key building block for the state-of-the-art transformer architectures (Vaswani et al., 2017). At its core are multiple separately parameterized attention heads. An attention head takes as input an $n$-by-$d$ matrix $\mathbf{X}$, with each row being the vector representation of an input element. It contextualizes the input using a dot-product attention mechanism:

$$
\widetilde{\mathbf{H}}_i = \operatorname{softmax}\left(\mathbf{X}\mathbf{Q}_i\mathbf{K}_i^{\top}\mathbf{X}^{\top}\right)\mathbf{X}\mathbf{V}_i, \tag{2}
$$

where $\mathbf{Q}_i$, $\mathbf{K}_i$, and $\mathbf{V}_i$ are learned matrices, and the softmax normalizes row-wise. The outputs of the attention heads are then concatenated and fed through a learned affine transformation:

$$
\mathbf{Z} \triangleq \operatorname{MultiHead}(\mathbf{X}) = \left[\widetilde{\mathbf{H}}_1; \dots; \widetilde{\mathbf{H}}_h\right]\mathbf{W}, \tag{3}
$$

where $\mathbf{W}$ is a learned matrix, and $h$ denotes the number of attention heads.

We now present a different computation equivalent to Eq. 3, aiming for a smoother transition into the following sections. Let $\mathbf{H}_i = \widetilde{\mathbf{H}}_i\mathbf{W}_i$, where $\mathbf{W}_i$ is a block submatrix of $\mathbf{W}$, i.e., $\mathbf{W} = [\mathbf{W}_1^\top; \mathbf{W}_2^\top; \ldots; \mathbf{W}_h^\top]^\top$. Then

$$
\mathbf{Z} = \left[\widetilde{\mathbf{H}}_1; \dots; \widetilde{\mathbf{H}}_h\right]\mathbf{W} = \sum_{i=1}^{h} \mathbf{H}_i. \tag{4}
$$

Eq. 4 provides a different view of the output computation of multi-head attention: each attention head first projects the contextualized representation with a learned matrix (i.e., $\mathbf{H}_i = \widetilde{\mathbf{H}}_i\mathbf{W}_i$); then their outputs are gathered with a sum (Eq. 4). We now show that this can be seen as a uniformly weighted mixture of experts.

A mixture-of-experts perspective. Let us take a closer look at Eq. 4 and rewrite it:

$$
\begin{aligned}
\mathbf{Z} &= \frac{1}{h-1}\sum_{i=1}^{h}(h-1)\mathbf{H}_i \\
&= \frac{1}{h-1}\left(-\sum_{i=1}^{h}\mathbf{H}_i + \sum_{i=1}^{h}\sum_{j=1}^{h}\mathbf{H}_j\right) \\
&= \sum_{i=1}^{h} \underbrace{\frac{1}{h}}_{\text{gate } g_i}\ \underbrace{\frac{h}{h-1}\left(-\mathbf{H}_i + \sum_{j=1}^{h}\mathbf{H}_j\right)}_{\text{expert } \mathbf{f}_i(\mathbf{X};\boldsymbol{\theta}_i)}.
\end{aligned} \tag{5}
$$

Eq. 5 interprets multi-head attention as a mixture of $\binom{h}{h-1} = h$ experts.
It first constructs a set of $h$ experts $\{\mathbf{f}_i(\cdot; \boldsymbol{\theta}_i)\}$ , with $\boldsymbol{\theta}_i$ denoting $\mathbf{f}_i$ 's param + +eters. $\mathbf{f}_i(\cdot ;\pmb {\theta}_i)$ is a parameterized function of the input, which calculates a sum of the outputs by all but the $i$ th attention head. This is achieved by subtracting $\mathbf{H}_i$ from $\sum_{j = 1}^{h}\mathbf{H}_{j}$ , then scaling up the results by $h / (h - 1)$ . The experts share part of the parameters: any two share $h - 2$ attention heads. A uniform responsibility of $1 / h$ is used. + +Discussion. Viewing multi-head attention through this MoE lens suggests some interesting consequences. One can replace the input-agnostic responsibility in Eq. 5 with a function over the input. Indeed, we have good reasons for doing so. Voita et al. (2019) and Michel et al. (2019) show that for transformer networks, a handful of important attention heads are sufficient to achieve good test-time performance. They propose to prune the rest using an input-agnostic procedure. Instead of doing so, here we see a potential alternative: keep all the heads, but only activate those that are important to the input. This motivates MAE, which we now introduce. + +# 2.3 MAE: Learning to Weight Experts + +MAE is inspired by the connections between MoE and multi-head attention we draw in §2.2. On top of multi-head attention, MAE learns an input-dependent parameterized gating function $\mathbf{g}(\cdot ;\phi)$ to complement the experts. More formally, the uniform responsibility $1 / h$ in Eq. 5 is replaced by $\mathbf{g}(\cdot ;\phi)$ : given input $\mathbf{X}$ , MAE outputs + +$$ +\sum_ {i = 1} ^ {h} g _ {i} (\mathbf {X}; \boldsymbol {\phi}) \cdot \mathbf {f} _ {i} (\mathbf {X}; \boldsymbol {\theta} _ {i}). \tag {6} +$$ + +Experts $\mathbf{f}_i$ are the same as those in Eq. 5. + +$\mathbf{g}(\cdot ;\phi)$ is parameterized with a multi-layer perceptron (MLP) followed by a softmax. 
It first averages $\mathbf{X}$ along the row (i.e., the sequence direction), and then feeds the results through a two-layer tanh-MLP. $\mathbf{g}(\cdot ;\phi)$ outputs a normalized $h$ dimensional vector using a softmax, indicating the responsibilities of the experts. It can be seen as a learned probability distribution over the experts. + +MAE can learn to assign more responsibility to the experts that are more important to the given input, allowing them to contribute more. MAE is applicable wherever multi-head attention is used. For example, in a machine translation experiment (§4.2), we replace with MAE all the multi-head attention in a transformer network, including the self-attention in all encoder and decoder layers, as well as those attending over the encoded source + +from the decoder. Each of them is separately treated as a mixture of experts, and has its own gating function. The additional parameter overhead is small: gating functions account for only $3 - 5\%$ parameters of the full model (Appendix A). + +# 3 Training MAE with Block Coordinate Descent + +It is straightforward to jointly train the experts and the gating functions in an MAE model using backpropagation. However, in line with previous observations (Shen et al., 2019), we empirically observe that this is prone to degenerate solutions where the gating functions tend to learn to similarly weight the experts (see §5.1).4 + +As a remedy, we propose a block coordinate descent (BCD) training. At a high level, training is decomposed into two interleaving steps: A G step updates the gating function $\mathbf{g}(\cdot ;\phi)$ , fixing the experts; an F step fixes the gating function and updates one randomly selected expert $\mathbf{f}_i(\cdot ;\pmb {\theta}_i)$ . The computations for G and F steps differ: + +- In a G step, MAE outputs a linear combination of the experts' outputs, and only updates the gating function's parameters (AlGORITHM 1). No expert is updated. 
+- An F step computes the experts' responsibilities $\mathbf{g}(\mathbf{X})$ , according to which an expert $i$ is then sampled (Algorithm 2). MAE computes the output with $\mathbf{f}_i$ , which is then updated, without updating the gating function or other experts. + +A non-differentiable sampling from $\mathbf{g}$ is involved in F steps. It does not create difficulties for the + +1: procedure MAEG(X) +2: $\mathbf{Z} \gets \sum_{i=1}^{h} g_i(\mathbf{X}; \boldsymbol{\phi}) \cdot \mathbf{f}_i(\mathbf{X}; \boldsymbol{\theta}_i)$ +3: Forwardprop with $\mathbf{Z}$ and calculate $\mathcal{L}$ . +4: Calculate $\nabla_{\phi}\mathcal{L}$ with backprop. +5: $\phi \gets \phi -\eta \cdot \nabla_{\phi}\mathcal{L}.$ +6: end procedure + +1: procedure MAEF(X) +2: Draw $i\sim \operatorname {Cat}(\mathbf{g}(\mathbf{X};\phi))$ +3: $\mathbf{Z} \gets \mathbf{f}_i(\mathbf{X}; \boldsymbol{\theta}_i)$ +4: Forwardprop with $\mathbf{Z}$ and calculate $\mathcal{L}$ . +5: Calculate $\nabla_{\theta_i}\mathcal{L}$ with backprop. +6: $\pmb{\theta}_i\gets \pmb{\theta}_i - \eta \cdot \nabla_{\pmb{\theta}_i}\mathcal{L}.$ +7: end procedure + +backpropagation, since an F step never calculates the gradients w.r.t. $\phi$ . At test time, the computation is the same as that in a G step, i.e., MAE outputs a linear combination of the experts, weighted by $\mathbf{g}$ . + +Training time overhead. A straightforward training procedure is to, for each training instance, first take a G step, and then an F step. This doubles the forward propagation computation overhead. In practice, it is not necessary to take G steps as frequently as F steps, since they only update a small portion of the model. In the experiments, we take G steps one fifth as frequently as F steps: we make G updates every 5 epochs while always take F steps. 
In preliminary experiments, we find this reduces training time overhead without significant impact on the performance.7 + +Algorithm 3 summarizes the block coordinate descent training in a given epoch. + +Connections to dropout. In the above block coordinate descent training algorithm, an F step samples an expert to update, and ignores the rest in both forward and backward computation. It is reminiscent of dropout (Srivastava et al., 2014). Specifically, selecting expert $\mathbf{f}_i$ is equivalent to + +Algorithm 1 A G step update for MAE, with step size $\eta$ . +Algorithm 2 An F step update for MAE, with step size $\eta$ . +Algorithm 3 Block coordinate descent (BCD) training for MAE, at epoch $e$ . $\mathcal{D}$ denotes the training data. +1: procedure BCD(D = {Xi}i, e) +2: for Xi ∈ D do +3: ▷ Take G steps every 5 epochs. +4: if e mod 5 = 0 then +5: MAEG(Xi) +6: end if +7: ▷ Always do F step updates. +8: MAEF(Xi) +9: end for +10: end procedure + +dropping head $i$ . In other words, the F steps (Algorithm 2) can be seen as a structured dropout applied to the attention heads, but with learned input-dependent drop probabilities. When $g$ is a constant vector with elements $1 / h$ , it recovers the head dropout, which is also explored by concurrent work (Fan et al., 2020). + +So far, we view MAE as a mixture of $h$ experts, each consisting of $h - 1$ attention heads. One can, of course, generalize this to other settings, e.g., mixing $\binom{h}{h-2}$ experts, each containing $h - 2$ heads. From the dropout view, this translates to dropping more attention heads: dropping $t$ heads out of $h$ is equivalent to applying a dropout with drop probability $t / h$ , in the sense that their expected numbers of dropped units are the same. 
+ +Despite the similarity between MAE and dropout, a key difference exists between the two: with the latter, the constant dropout probability is set a priori, while MAE uses a gating function $\mathbf{g}(\cdot ;\phi)$ to calculate a learned, input-dependent dropout probability. + +# 4 Experiments + +We empirically evaluate MAE on machine translation (§4.2) and language modeling (§4.3) benchmarks. We first introduce the compared models (§4.1). + +# 4.1 Compared Models + +MAE is evaluated under two settings: + +- MAE-7 mixes 8 experts each with 7 attention heads. + +- MAE-6 is similar to MAE-7, but mixes $\binom{8}{2} = 28$ experts each with 6 attention heads. $^{10}$ + +We compare MAE to the following baselines. + +- BASE is a sequence-to-sequence model based on the transformer architecture. +- NOBCD is the same model as MAE, but does not use block coordinate descent training. Instead, it jointly updates all experts and the gating function at training time, as discussed at the start of §3. +- UNI-MAE-7 is similar to MAE but does not have parameterized gating functions. It builds on BASE, and mixes 8 experts, each with 7 attention heads. Constant uniform responsibilities are assigned to the experts. At each training step, it updates one uniformly sampled expert; at test time, the outputs of all experts are averaged according to Eq. 5. +- UNI-MAE-6 mixes 28 6-attention-head experts, and is otherwise the same as UNI-MAE-7. + +We refer the readers to Appendix A for implementation details. + +# 4.2 Machine Translation + +Datasets. We experiment with two machine translation datasets: + +- WMT14 EN-DE (Bojar et al., 2014). Following previous practice (Vaswani et al., 2017) we train on WMT14, and designate newstest2013 and newstest2014 as development and test data respectively. Our preprocessing follows that of Vaswani et al. (2017) and Ott et al. (2018). A shared source-target vocabulary is used, with 32k byte pair encoding types (BPE; Sennrich et al., 2016). 
+- IWSLT14 DE-EN (Cettolo et al., 2014).12 It is based on TED talks, and is much smaller compared to WMT14. We use the preprocessing from Edunov et al. (2018). Following previous practice, we use separate vocabularies for the source and target, with around 9K and 7K BPE types respectively. + +Table 1 summarizes some statistics of the datasets. + +
DataTrainDev.TestVocab.
WMT144.5M3K3K32K
IWSLT14160K7K7K9K/7K
+ +Table 1: Some statistics for WMT14 and IWSLT14 datasets. We use separate source and target vocabularies in IWSLT14 experiments. + +Evaluation. The models are evaluated using BLEU (Papineni et al., 2002). A beam search with beam size 5 is used. In the WMT14 experiments, we follow Vaswani et al. (2017), and apply a compound split postprocessing. $^{13}$ + +Results. Table 2 summarizes WMT14 EN-DE translation test performance. The base and large sized transformer models are due to Vaswani et al. (2017). To control for compounding factors, we additionally compare to our implementation of the base sized model (BASE). It achieves slightly better performance than Vaswani et al. (2017), with a 0.3 BLEU edge. MAE-7 improves over the base transformer by 0.8 BLEU, obtaining similar performance to the large-size transformer of Vaswani et al. (2017) using less than a third as many parameters. Since we do not see similar improvement by UNI-MAE-7, we attribute this gain to input-dependent expert weighting. Having a smaller number of heads for each expert, MAE-6 slightly underperforms MAE-7, and so does UNI-MAE-6 in comparison to UNI-MAE-7. Finally, NOBCD gets worse performance than the transformer baseline, demonstrating the importance of the block coordinate decent training. + +We observe similar trends on the IWSLT14 DEEN dataset, summarized in Table 3. The BASE model here is similar to the base-sized transformer in the WMT14 experiment, but with a smaller hidden dimension. MAE-7 outperforms BASE by 0.9 BLEU. Interestingly, UNI-MAE-7 improves over BASE by 0.3 BLEU, possibly because the regularization effect of random expert selection training helps more on this smaller dataset.[14] + +# 4.3 Token-level Language Modeling + +Dataset. We experiment with the WikiText-103 dataset (Merit et al., 2016). It contains articles + +
ModelBLEU# Params.
Base Transformer27.365M
Large Transformer28.4213M
BASE27.661M
‡NOBCD27.563M
†UNI-MAE-727.761M
†UNI-MAE-627.661M
‡MAE-728.463M
‡MAE-628.163M
+ +Table 2: WMT14 EN-DE translation test performance on newstest2014. † randomly select an expert to update for each training instance, and ‡ learns a gating function to weight the experts. Transformer performance in the first two rows are due to Vaswani et al. (2017). + +
ModelBLEU# Params.
BASE34.639M
‡NOBCD34.841M
†UNI-MAE-734.939M
†UNI-MAE-635.039M
†‡MAE-735.541M
†‡MAE-635.441M
+ +from English Wikipedia, with a 268K-sized vocabulary. The training/development/test data respectively have 103M/218K/246K tokens. + +Setting. Here the BASE model is the strong language model by Baevski and Auli (2019). It is based on a 16-layer transformer network; each multi-head attention layer has 8 heads. It uses different embedding dimensions for the tokens, based on their frequencies. We closely follow Baevski and Auli (2019) in terms of hyperparameters and training procedures. The readers are referred to their paper and Appendix A for further architecture and hyperparameter details. + +Notes on context size. Baevski and Auli (2019) study the effect of context window, i.e., the number of history tokens the model attends over. They find that using larger context sizes lead to better performance (Baevski and Auli, 2019, Table 5). Their best setting uses a 3,072 training context size, and 2,048 at test time (i.e., the model has access 2,048 tokens before predicting any token at test time). However, we are not able to train MAE, + +Table 3: IWSLT14 GE-DE test set performance. See Table 2 caption for indications of the superscripts. + +
ModelPerplexity# Params.
*BASE (B&A, 2019)18.70247M
BASE (B&A, 2019)19.03247M
‡NOBCD19.12249M
†UNI-MAE-719.26247M
†‡MAE-718.71249M
+ +Table 4: Language modeling performance on WikiText-103 test set (lower is better). $\star$ Trains/evaluates with 3,072/2,048 context sizes and therefore not directly comparable to other models which use 512/480 sized ones. See Table 2 caption for the indications of other superscripts. Bold font indicates the best performance using smaller context sizes. The first two rows are due to Table 5 of Baevski and Auli (2019). + +nor replicate their results, under this setting—our GPUs have far less memory, and it is impossible to even load a 3,072-token context chunk. $^{15}$ Therefore we train and evaluate MAE and UNI-MAE-7 with smaller 512/480 context sizes, also explored by Baevski and Auli (2019), which allows for a head-to-head comparison. + +Results. Table 4 shows the perplexity on WikiText-103 test data. When trained under the same setting, MAE outperforms Baevski and Auli (2019) by more than 0.3 perplexity. Interestingly, despite the much smaller context at both training and test time, MAE matches the best setting by Baevski and Auli (2019). UNI-MAE-7 and NOBCD underperform the baseline (higher perplexity). + +# 5 Analysis + +This section first empirically confirms that MAE learns to activate different experts on different inputs in §5.1. We then run a synthetic experiment to explore MAE's potential in transfer learning (§5.2). + +# 5.1 Does MAE Learn to Specialize the Experts? + +One of the appealing properties of MoE models is that they could learn to activate different experts, depending on what "expertise" is needed for the + +
ModelBLEUDiff.
UNI-MAE-726.6-
One random expert25.8±0.2↓0.8±0.2
NOBCD26.7-
Most specialized expert26.0↓0.7
MAE-727.1-
Most specialized expert26.8↓0.3
+ +input. Does MAE learn to do so? We empirically study this question, and present evidence indicating that it does, at least in part. We consider the encoders of the UNI-MAE-7, NOBCD, and the MAE-7 models trained on WMT14.[16] + +We first study whether BCD training helps drifting MAE away from uniformly weighting the experts agnostic to the inputs. We treat the gating values as probabilities, and calculate their entropies: $\mathcal{H}(\mathbf{g}) = -\sum_{i=1}^{h} g_i \cdot \log g_i$ , which are then averaged across different layers. The average entropy on the development set for MAE-7 is 1.91, lower than the 2.02 by the NOBCD model trained without BCD. In comparison, UNI-MAE-7 uniformly weights the experts and has the entropy of 2.08. This indicates that gating weights of MAE trained with BCD are more "focused" on one or a subset of experts than trained without. + +Second, we study whether MAE learns to specialize different experts for different inputs. To do so we attribute the development instances to the experts that maximize the gating weights. For the first encoder layer of MAE-7, the percentages of instances attributed to each of the 8 experts are relatively balanced: $13\%$ , $14\%$ , $9\%$ , $16\%$ , $10\%$ , $15\%$ , $10\%$ , $12\%$ . This suggests that all experts are assigned a substantial part of the input, and it is not the case that BCD leads to a "rich get richer" outcome. + +We then continue and explore whether MAE performs reasonably well when using only the most "specialized" experts. For each development instance, we select those experts maximizing the + +Table 5: Performance decrease for different models on WMT14 development set when only one expert is used for each multi-head attention layer (5.1). + +
Expert 1Expert 2Expert 3Expert 4
neumannbellcandidacyveil
debutedzerorosemonument
rentalcomputingsubmissionfox
worthydecentralizedpalmunnerved
landboardsreutersrolesremainder
Expert 5Expert 6Expert 7Expert 8
spoilmensesromansodds
anybodytechnologicalstickerheat
endorsedinevitablyoutdatedmarvel
reservebetanalystornate
pendingpunkvenuesanticipating
+ +Table 6: Indicative tokens for each expert (§5.1). Tokens attributed to Expert 2 are mostly computer science terminology; trends for other experts are less clear. + +gating weights and ignore the rest, instead of linearly combining them as in Eq. 6. We see from Table 5 a 0.3 BLEU decrease under this setting. In comparison, NOBCD has a larger performance decrease of 0.7 BLEU. NOBCD's performance drop is similar to that of UNI-MAE-7, for which we randomly select an expert at each layer and average the performance over 5 runs. These results support the proposition that MAE specializes better when trained with BCD. + +Finally, we search for the tokens that are more likely to activate each expert. We compute the pointwise mutual information (PMI; Church and Hanks, 1990) between tokens and experts: + +$$ +\operatorname {P M I} \left(\operatorname {t o k e n} _ {i}, \operatorname {e x p e r t} _ {j}\right) = \log \frac {p \left(\operatorname {t o k e n} _ {i} , \operatorname {e x p e r t} _ {j}\right)}{p \left(\operatorname {t o k e n} _ {i}\right) p \left(\operatorname {e x p e r t} _ {j}\right)}. +$$ + +Table 6 lists the most indicative tokens of each expert, for the first layer. While some of the terms for some experts seem loosely related (e.g., bell, reuters, and computing for expert 2, it is hard to find clear patterns in most of them. + +# 5.2 MAE's Potential in Transfer Learning: A Case Study + +We now turn to evaluate another property of MAE: its potential for data-efficient transfer learning, by only updating the gating functions, freezing the experts. We consider the pretrain-then-finetune setting. Due to computation limits, we are unable to explore MAE for pre-training contextual representations (Peters et al., 2018; Devlin et al., 2019). Rather, we focus on the following small-scale machine translation experiments. + +Setting. 
We explore finetuning on IWSLT14 EN-DE data, a MAE model pretrained on the + +![](images/d0c2a3bc8178966cd766d5c19d7054c8642e3128c2a2986b9e2c79f9fe7b2c86.jpg) +Figure 2: IWSLT14 development performance of $\mathrm{FTG + }$ and FTALL using different amount of training data (§5.2). When trained on less than $20\%$ subset of the original training data, $\mathrm{FTG + }$ outperforms FTALL. + +much larger WMT14 dataset. $^{18}$ We compare three finetuning methods: + +- FTG finetunes the gating functions' parameters (i.e., $\phi$ ), keeping the rest frozen. +- $\mathrm{FTG+}$ updates the parameter matrix $\mathbf{W}$ in Eq. 4 in addition to $\phi$ . The rest of the model parameters are fixed. +- FTALL updates all parameters. + +As a baseline, NoFT is the out-of-box pretrained model without any finetuning. SCRATCH trains a MAE model from scratch. + +Table 7 summarizes the IWSLT14 EN-DE development set performance. Surprisingly, NoFT already outperforms SCRATCH without any fin-tuning. We attribute this improvement to the larger pretraining (WMT14) data. Only updating the gating functions, FTG improves over NoFT by 0.8 BLEU. Yet there is still a significant gap of 1.8 BLEU between FTG and FTALL. Interestingly, $\mathrm{FTG + }$ almost matches the performance of FTALL, but only updates 1/9 as many parameters. Both FTG and $\mathrm{FTG + }$ reach the best performance after around 1K gradient updates, i.e., one epoch, significantly less than FTALL or SCRATCH. + +We further compare $\mathrm{FTG + }$ and FTALL where less downstream training data is available. To simulate this, we randomly sample $[5\%, 10\%, 25\%, 50\%, 75\%]$ subsets of IWSLT14 training data, on which the pretrained model is finetuned. Figure 2 plots their performance. We see a clear trend: as less training data is available, the gap between $\mathrm{FTG + }$ and FTALL decreases; when less than $20\%$ of the training data is available, $\mathrm{FTG + }$ outperforms FTALL. 
These results suggest that finetuning MAE with $\mathrm{FTG + }$ can be viable in low-resource transfer learning. + +
MethodBLEU# Params.# Steps.
SCRATCH28.841M52K
NoFT29.300
FTG30.12M1K
FTG+31.67M1K
FTALL31.863M12K
+ +Table 7: IWSLT14 development set performance of different finetuning methods (§5.2). The last two columns indicate the number of parameters to update, and the number of gradient steps needed to achieve the best development performance. + +# 6 Related Work + +Multi-head attention. An increasing amount of effort has been devoted into developing better attention mechanisms (Malaviya et al., 2018; Deng et al., 2018; Sukhbaatar et al., 2019; Correia et al., 2019; Maruf et al., 2019, inter alia), and improving transformer architectures (Shaw et al., 2018; Dehghani et al., 2019; Hao et al., 2019; Correia et al., 2019; Yang et al., 2019a, inter alia). Closely related, Iida et al. (2019) applies another attention mechanism over the attention heads, allowing a learned reweighting of them. Our work focuses on the connection between multi-head attention and MoE, and the BCD training it suggests and benefits from. Concurrent to our work, (Fan et al., 2020) study structurally pruning transformer layers for more efficient inference. + +Another line of work aims to better understand the working of transformer models (Clark et al., 2019; Liu et al., 2019a; Tenney et al., 2019, inter alia). + +Mixture of experts. One of the most successful applications of MoE is ensemble learning (Caruana et al., 2004; Liu et al., 2018; Dutt et al., 2017, inter alia). Recent efforts also explore MoE in sequence learning (Shazeer et al., 2017), and to promote diversity in text generation (He et al., 2018; Shen et al., 2019; Cho et al., 2019, inter alia). + +# 7 Conclusion + +We presented MAE. It is inspired by a mixture-of-experts perspective of multi-head attention. With a learned gating function, MAE activates different experts on different inputs. MAE is trained using a block coordinate descent algorithm, which alternates between updating the responsibilities of + +the experts and their parameters. 
Our experiments show that MAE outperforms the transformer baselines on machine translation and language modeling benchmarks. The analysis shows that MAE learns to activate different experts. The code is publicly available at https://github.com/Noahs-ARK/MAE. + +# Acknowledgments + +We thank the anonymous reviewers, Yoav Artzi, Mandar Joshi, Jungo Kasai, Lingpeng Kong, Kenton Lee, Kelvin Luu, Will Merrill, Phoebe Mulcaire, Mark Neumann, Nikos Pappas, Ofir Press, Lianhui Qin, Swabha Swayamdipta, Vivek Srikumar, Sam Thomson, and Dani Yogatama for their helpful feedback. This work was supported in part by NSF grant 1562364, a Google Fellowship, and NVIDIA Corporation through the donation of a Tesla GPU. + +# References + +Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language modeling. In Proc. of ICLR. +Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Ales Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proc. of WMT. +Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. 2004. Ensemble selection from libraries of models. In Proc. of ICML. +Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT. +Jaemin Cho, Minjoon Seo, and Hannaneh Hajishirzi. 2019. Mixture content selection for diverse sequence generation. In Proc. of EMNLP. +Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29. +Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Proc. of BlackBoxNLP. +Gonçalo M. Correia, Vlad Niculae, and André F.T. Martins. 2019. Adaptively sparse transformers. In Proc. of EMNLP. 
+ +Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. 2019. Universal transformers. +Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. 2018. Latent alignment and variational attention. In Proc. of NeurIPS. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL. +Anuvabh Dutt, Denis Pellerin, and Georges Quenot. 2017. Coupled ensembles of neural networks. arXiv:1709.06053. +Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to sequence learning. In Proc. of NAACL. +Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In Proc. of ICLR. +Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. 2019. Modeling recurrence for transformer. In Proc. of NAACL. +Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2018. Sequence to sequence mixture model for diverse machine translation. In Proc. of CoNLL. +Shohei Iida, Ryuichiro Kimura, Hongyi Cui, Po-Hsuan Hung, Takehito Utsuro, and Masaaki Nagata. 2019. Attention over heads: A multi-hop attention for neural machine translation. In Proc. of ACL: Student Research Workshop. +Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. of ICML. +Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. *Neural Computation*, 3(1):79-87. +Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019a. Linguistic knowledge and transferability of contextual representations. In Proc. of NAACL. +Xuanqing Liu, Minhao Cheng, Huan Zhang, and Chojui Hsieh. 2018. Towards robust neural networks via random self-ensemble. In Proc. 
of ECCV. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. +Chaitanya Malaviya, Pedro Ferreira, and André FT Martins. 2018. Sparse and constrained attention for neural machine translation. In Proc. of ACL. + +Sameen Maruf, André FT Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. In Proc. of NAACL. +Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv:1609.07843. +Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Proc. of NeurIPS. +Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. 2014. In search of the real inductive bias: On the role of implicit regularization in deep learning. In Proc. of ICLR: Worship Tack. +Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine translation. In Proc. of WMT. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL. +Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL. +Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proc. of ACL. +Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proc. of NAACL. +Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. 
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv:1701.06538. +Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proc. of ICML. +Daniel Soudry and Yair Carmon. 2016. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv:1605.08361. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958. +Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proc. of EMNLP. + +Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. In Proc. of ACL. +Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proc. of ACL. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NeurIPS. +Elena Voita, David Talbot, Fedor Moiseev, Rico Senrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proc. of ACL. +Baosong Yang, Longyue Wang, Derek F. Wong, Lidia S. Chao, and Zhaopeng Tu. 2019a. Convolutional self-attention networks. In Proc. of NAACL. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019b. XLNet: Generalized autoregressive pretraining for language understanding. arXiv:1906.08237. +Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. 2016. Understanding deep learning requires rethinking generalization. In Proc. of ICLR. 
# Appendices

# A Architectures and Implementations

Our model is implemented using the PyTorch toolkit and the fairseq codebase.[19]

Machine translation with WMT'14. Our BASE model in this experiment is the transformer-base by Vaswani et al. (2017). Its encoder and decoder each consist of 6 transformer layers. Each multi-head attention layer has hidden size 512 and uses 8 attention heads; the hidden dimension of the feed-forward networks is 2,048. We follow issue #346 of fairseq's GitHub repository to replicate the results of Vaswani et al. (2017).20 When training MAE, we mostly use the same hyperparameters, the only exception being that we warm up the learning rate for 8,000 updates instead of 4,000.21

At evaluation time, we apply early stopping based on development set loss, and then average the most recent 5 checkpoints of the model, following Vaswani et al. (2017).

Machine translation with IWSLT'14. The BASE model in this experiment comes from the fairseq codebase.[22] It mostly follows the transformer-base architecture, but uses a larger dropout rate (0.3 vs. 0.1), a smaller feed-forward hidden size (1,024 vs. 2,048), and a larger weight decay ($10^{-4}$ vs. 0). We use 8,000 warmup updates.

Language modeling with WikiText-103. For the BASE model, we follow the model by Baevski and Auli (2019). The learning rate is warmed up for 240,000 steps.

For all three experiments, the gating functions in our MAE model and the NOBCD baseline are implemented as tanh-MLPs with 256 hidden dimensions. We apply batch normalization (Ioffe and Szegedy, 2015) to the input of the MLPs. The gating functions therefore have only a small number of parameters, accounting for less than $5\%$ of the full MAE model's parameters. A dropout of 0.1 is applied to the output of the first layer. No weight decay is used. $\phi$ is updated using SGD with a fixed learning rate of 1, separate from the optimizer used for the rest of the model.
This is to avoid using momentum-based optimization algorithms (e.g., Adam) for the gating functions, which we empirically find helps alleviate the "rich gets richer" degeneracy.[23]

In the language modeling experiment, the 100 most recent input vectors are averaged and then fed into the gating functions; in the machine translation experiments, we average all the input vectors to form the input to $\mathbf{g}(\cdot ;\phi)$.

# B Learning Curve Comparison for MAE and NOBCD

In §3 (footnote 4) we discuss an overfitting issue caused by jointly updating the experts and the gating function. This section studies it empirically. We compare the learning curves of BASE, NOBCD, and MAE-7 trained on the IWSLT14 dataset, plotted in Figure 3. The models are described in §4.1. We tune dropout and $\ell_2$ regularization based on development performance. All other hyperparameters are the same for the compared models.

The training loss for NOBCD decreases much faster than that of BASE; however, on the development set, NOBCD never outperforms BASE, and its development loss starts increasing after epoch 40. MAE-7 finds a nice middle ground in terms of training loss. It outperforms both BASE and NOBCD on the validation set. This provides further evidence for the importance of BCD training.

# C Additional Results for §5.1

§5.1 describes an experiment with the MAE-7 model where we attribute the development instances of WMT14 to the experts maximizing the gating weights. Table 8 presents more results. The number of instances each expert receives is relatively balanced, and the trend is consistent across different layers.

![](images/eafc74ca02f9c147027f08fb2f36df54a993669fc12789287cc6f949d6b782e3.jpg)
Figure 3: Learning curves of BASE, NOBCD, and MAE-7 (§B), trained on the IWSLT14 EN-DE using the same setup. NOBCD quickly fits the training data, but it does not outperform BASE on the validation set. Trained with BCD, MAE finds a nice middle ground. For better readability, the x-axis starts at epoch 8.
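For concreteness, the gating-function setup described in Appendix A can be sketched in NumPy. This is an illustrative re-implementation, not the actual PyTorch/fairseq code: all weight values and the batch-normalization form (shown here as a simplified per-batch standardization) are assumptions, and the training-time dropout of 0.1 is only noted in a comment.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gate(avg_input, params):
    """Tanh-MLP gating function g(.; phi) over the experts (sketch).

    avg_input: (batch, d_model) array -- average of the input vectors.
    params: dict with W1 (d_model, 256), b1 (256,), W2 (256, n_experts), b2.
    """
    # batch-norm-style standardization of the MLP input (simplified)
    x = (avg_input - avg_input.mean(axis=0)) / (avg_input.std(axis=0) + 1e-5)
    h = np.tanh(x @ params["W1"] + params["b1"])  # 256 hidden dimensions
    # (a dropout of 0.1 on h is applied at training time; omitted here)
    return softmax(h @ params["W2"] + params["b2"])  # mixture weights

rng = np.random.default_rng(0)
d_model, n_experts = 512, 8  # illustrative sizes
params = {
    "W1": rng.normal(scale=0.02, size=(d_model, 256)), "b1": np.zeros(256),
    "W2": rng.normal(scale=0.02, size=(256, n_experts)), "b2": np.zeros(n_experts),
}
weights = gate(rng.normal(size=(4, d_model)), params)
assert weights.shape == (4, n_experts)
assert np.allclose(weights.sum(axis=1), 1.0)  # one distribution per example
```

Updating only `params` with plain SGD (learning rate 1) while the experts use their own optimizer mirrors the separation described above.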
| Layer | E1 | E2 | E3 | E4 | E5 | E6 | E7 | E8 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 13.1 | 13.9 | 8.9 | 16.1 | 10.3 | 15.3 | 10.1 | 11.6 |
| 2 | 13.8 | 14.5 | 10.7 | 10.8 | 15.4 | 7.9 | 16.0 | 10.9 |
| 3 | 14.0 | 14.4 | 12.4 | 10.6 | 14.3 | 9.8 | 15.4 | 9.0 |
| 4 | 14.5 | 13.7 | 10.4 | 8.3 | 15.1 | 11.8 | 11.2 | 15.1 |
| 5 | 11.9 | 13.8 | 13.7 | 15.7 | 10.1 | 16.4 | 6.9 | 11.5 |
| 6 | 12.9 | 10.0 | 12.4 | 14.6 | 9.5 | 15.2 | 15.7 | 9.8 |
Table 8: The percentage of WMT14 development instances attributed to each of the experts in MAE-7's encoder layers (§5.1).

# A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages

Pedro Javier Ortiz Suárez$^{1,2}$ Laurent Romary$^{1}$ Benoit Sagot$^{1}$

$^{1}$Inria, Paris, France

$^{2}$Sorbonne Université, Paris, France

{pedro.ortiz, benoit.sagot, laurent.romary}@inria.fr

# Abstract

We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages.
We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures. + +# 1 Introduction + +One of the key elements that has pushed the state of the art considerably in neural NLP in recent years has been the introduction and spread of transfer learning methods to the field. These methods can normally be classified in two categories according to how they are used: + +- Feature-based methods, which involve pretraining real-valued vectors ("embeddings") at the word, sentence, or paragraph level; and using them in conjunction with a specific architecture for each individual downstream task. +- Fine-tuning methods, which introduce a minimal number of task-specific parameters, and instead copy the weights from a pre-trained + +network and then tune them to a particular downstream task. + +Embeddings or language models can be divided into fixed, meaning that they generate a single representation for each word in the vocabulary; and contextualized, meaning that a representation is generated based on both the word and its surrounding context, so that a single word can have multiple representations, each one depending on how it is used. + +In practice, most fixed embeddings are used as feature-based models. 
The most notable examples are word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and fastText (Mikolov et al., 2018). All of them are extensively used in a variety of applications nowadays. On the other hand, contextualized word representations and language models have been developed using both feature-based architectures, the most notable examples being ELMo and Flair (Peters et al., 2018; Akbik et al., 2018), and transformer-based architectures, which are commonly used in a fine-tuning setting, as is the case of GPT-1, GPT-2 (Radford et al., 2018, 2019), BERT and its derivatives (Devlin et al., 2018; Liu et al., 2019; Lan et al., 2019) and more recently T5 (Raffel et al., 2019). All of them have repeatedly improved the state of the art in many downstream NLP tasks over the last year.

In general, the main advantage of using language models is that they are mostly built in an unsupervised manner and can be trained with raw, unannotated plain text. Their main drawback is that enormous quantities of data seem to be required to properly train them, especially in the case of contextualized models, for which larger corpora are thought to be needed to properly address polysemy and cover the wide range of uses that commonly exist within languages.

For gathering data in a wide range of languages,
To address this problem, one can resort to crawled text from the internet; the largest and most widespread dataset of crawled text being Common Crawl.1 Such an approach generally solves the quantity and genre/style coverage problems but might introduce noise in the data, an issue which has earned the corpus some criticism, most notably by Trinh and Le (2018) and Radford et al. (2019). Using Common Crawl also leads to data management challenges as the corpus is distributed in the form of a large set of plain text each containing a large quantity of unclassified multilingual documents from different websites. + +In this paper we study the trade-off between quantity and quality of data for training contextualized representations. To this end, we use the OSCAR corpus (Ortiz Suárez et al., 2019), a freely available2 multilingual dataset obtained by performing language classification, filtering and cleaning of the whole Common Crawl corpus.3 OSCAR was created following the approach of Grave et al. (2018) but proposing a simple improvement on their filtering method. We then train OSCAR-based and Wikipedia-based ELMo contextualized word embeddings (Peters et al., 2018) for 5 languages: Bulgarian, Catalan, Danish, Finnish and Indonesian. We evaluate the models by attaching them to the to UDPipe 2.0 architecture (Straka, 2018; Straka et al., 2019) for dependency parsing and part-of-speech (POS) tagging. We show that the models using the OSCAR-based ELMo embeddings consistently outperform the Wikipedia-based ones, suggesting that big high-coverage noisy corpora might be better than small high-quality narrow-coverage corpora for training contextualized language representations4. We also establish a new state of the art for both POS tagging and dependency parsing in 6 different treebanks covering + +all 5 languages. + +The structure of the paper is as follows. In Section 2 we describe the recent related work. 
In Section 3 we present, compare and analyze the corpora used to train our contextualized embeddings, and the treebanks used to train our POS tagging and parsing models. In Section 4 we examine and describe in detail the model used for our contextualized word representations, as well as the parser and the tagger we chose to evaluate the impact of corpora on the embeddings' performance in downstream tasks. Finally, we provide an analysis of our results in Section 5, and in Section 6 we present our conclusions.

# 2 Related work

Since the introduction of word2vec (Mikolov et al., 2013), many attempts have been made to create multilingual language representations. For fixed word embeddings, the most remarkable works are those of Al-Rfou et al. (2013) and Bojanowski et al. (2017), who created word embeddings for a large number of languages using Wikipedia, and later Grave et al. (2018), who trained the fastText word embeddings for 157 languages using Common Crawl and showed that using crawled data significantly increases the performance of the embeddings, especially for mid- to low-resource languages.

Regarding contextualized models, the most notable non-English contribution has been that of mBERT (Devlin et al., 2018), which is distributed as (i) a single multilingual model for 100 different languages trained on Wikipedia data, and as (ii) a single multilingual model for both Simplified and Traditional Chinese. Four monolingual fully trained ELMo models have been distributed for Japanese, Portuguese, German and Basque; 44 monolingual ELMo models were also released by the HIT-SCIR team (Che et al., 2018) during the CoNLL 2018 Shared Task (Zeman et al., 2018), but their training sets were capped at 20 million words. A German BERT (Chan et al., 2019) as well as a French BERT model (called CamemBERT) (Martin et al., 2019) have also been released.
In general, no particular effort to create a set of high-quality monolingual contextualized representations has been made yet, or at least not on a scale comparable with what was done for fixed word embeddings.

For dependency parsing and POS tagging, the most notable non-English-specific contribution is that of the CoNLL 2018 Shared Task (Zeman et al., 2018), where the $1^{\text{st}}$ place (LAS ranking) was awarded to the HIT-SCIR team (Che et al., 2018), who used Dozat and Manning (2017)'s Deep Bi-affine parser and its extension described in (Dozat et al., 2017), coupled with deep contextualized ELMo embeddings (Peters et al., 2018) (capping the training set at 20 million words). The $1^{\text{st}}$ place in universal POS tagging was awarded to Smith et al. (2018), who used two separate instances of Bohnet et al. (2018)'s tagger.

More recent developments in POS tagging and parsing include those of Straka et al. (2019), which couples another CoNLL 2018 shared task participant, UDPipe 2.0 (Straka, 2018), with mBERT, greatly improving the scores of the original model, and UDify (Kondratyuk and Straka, 2019), which adds an extra attention layer on top of mBERT, plus a Deep Bi-affine attention layer for dependency parsing and a Softmax layer for POS tagging. UDify is actually trained by concatenating the training sets of 124 different UD treebanks, creating a single POS tagging and dependency parsing model that works across 75 different languages.

# 3 Corpora

We train ELMo contextualized word embeddings for 5 languages: Bulgarian, Catalan, Danish, Finnish and Indonesian. We train one set of embeddings using only Wikipedia data, and another set using only Common-Crawl-based OSCAR data.
We chose these languages primarily because they are morphologically and typologically different from one another, but also because all of the OSCAR datasets for these languages were of a sufficiently manageable size that the ELMo pre-training could be completed in less than one month. Contrary to the HIT-SCIR team (Che et al., 2018), we do not impose any cap on the amount of data, and instead use the entirety of Wikipedia or OSCAR for each of our 5 chosen languages.

# 3.1 Wikipedia

Wikipedia is the biggest online multilingual open encyclopedia, comprising more than 40 million articles in 301 different languages. Because articles are curated by language and written in an
| Language | Size | #K tokens | #K words | #K sentences |
| --- | --- | --- | --- | --- |
| Bulgarian | 609M | 64,190 | 54,748 | 3,685 |
| Catalan | 1.1G | 211,627 | 179,108 | 8,293 |
| Danish | 338M | 60,644 | 52,538 | 3,226 |
| Finnish | 669M | 89,580 | 76,035 | 6,847 |
| Indonesian | 488M | 80,809 | 68,955 | 4,298 |
Table 1: Size of Wikipedia corpora, measured in bytes, thousands of tokens, words and sentences.

open collaboration model, its text tends to be of very high quality in comparison to other free online resources. This is why Wikipedia has been extensively used in various NLP applications (Wu and Weld, 2010; Mihalcea, 2007; Al-Rfou et al., 2013; Bojanowski et al., 2017). We downloaded the XML Wikipedia dumps7 and extracted the plain text from them using the wikiextractor.py script8 from Giuseppe Attardi. We present the number of words and tokens available for each of our 5 languages in Table 1. We decided against deduplicating the Wikipedia data, as the corpora are already quite small. We tokenize the 5 corpora using UDPipe (Straka and Straková, 2017).

# 3.2 OSCAR

Common Crawl is a non-profit organization that produces and maintains an open, freely available repository of crawled data from the web. Common Crawl's complete archive consists of petabytes of monthly snapshots collected since 2011. Common Crawl snapshots are not classified by language, and contain a certain level of noise (e.g. one-word "sentences" such as "OK" and "Cancel" are unsurprisingly very frequent).

This is what motivated the creation of the freely available multilingual OSCAR corpus (Ortiz Suárez et al., 2019), extracted from the November 2018 snapshot, which amounts to more than 20 terabytes of plain text. In order to create OSCAR from this Common Crawl snapshot, Ortiz Suárez et al. (2019) reproduced the pipeline proposed by Grave et al. (2018) to process, filter and classify Common Crawl. More precisely, language classification was performed using the fastText linear classifier (Joulin et al., 2016, 2017), which was trained by Grave et al. (2018) to recognize 176 languages and was shown to have an extremely good accuracy-to-processing-time trade-off. The filtering step as performed by Grave et al. (2018) consisted in only keeping the lines exceeding 100
| Language | Size | #K tokens | #K words | #K sentences |
| --- | --- | --- | --- | --- |
| Bulgarian | 14G | 1,466,051 | 1,268,115 | 82,532 |
| Catalan | 4.3G | 831,039 | 729,333 | 31,732 |
| Danish | 9.7G | 1,828,881 | 1,620,091 | 99,766 |
| Finnish | 14G | 1,854,440 | 1,597,856 | 142,215 |
| Indonesian | 16G | 2,701,627 | 2,394,958 | 140,138 |
bytes in length.$^{9}$ However, considering that Common Crawl is a multilingual UTF-8-encoded corpus, this 100-byte threshold creates a huge disparity between languages written in ASCII and non-ASCII scripts. The filtering step used to create OSCAR therefore consisted in only keeping the lines containing at least 100 UTF-8-encoded characters. Finally, as in Grave et al. (2018), the OSCAR corpus is deduplicated, i.e. for each language, only one occurrence of a given line is included.

As we did for Wikipedia, we tokenize the OSCAR corpora for the 5 languages we chose for our study using UDPipe. Table 2 provides quantitative information about the 5 resulting tokenized corpora.

We note that the original Common-Crawl-based corpus created by Grave et al. (2018) to train fastText is not freely available. Since running the experiments described in this paper, a new architecture for creating a Common-Crawl-based corpus, named CCNet (Wenzek et al., 2019), has been published; although its specialized filtering might result in a cleaner corpus than OSCAR, the resulting CCNet corpus itself was not released. Thus we chose to keep OSCAR, as it remains the only very large scale Common-Crawl-based corpus currently available and easily downloadable.

# 3.3 Noisiness

We wanted to address Trinh and Le (2018)'s and Radford et al. (2019)'s criticisms of Common Crawl, so we devised a simple method to measure how noisy the OSCAR corpora were for our 5 languages. We randomly extract a number of lines from each corpus, such that the resulting random sample contains one million words.[10] We then test whether each word is in the corresponding GNU Aspell[11] dictionary. We repeat this task for each of the 5 languages, for both the OSCAR and the Wikipedia

Table 2: Size of OSCAR subcorpora, measured in bytes, thousands of tokens, words and sentences.
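The difference between the two filtering criteria is easy to illustrate. The sketch below (hypothetical helper names, not the actual OSCAR pipeline) shows why a 100-byte threshold and a 100-character threshold disagree for non-ASCII scripts: a Cyrillic character takes two bytes in UTF-8, so a 60-character Bulgarian line passes the byte filter but fails the character filter.

```python
def keep_line_bytes(line: str, min_bytes: int = 100) -> bool:
    # Grave et al. (2018)-style filter: length measured in UTF-8 bytes
    return len(line.encode("utf-8")) >= min_bytes

def keep_line_chars(line: str, min_chars: int = 100) -> bool:
    # OSCAR-style filter: length measured in UTF-8-encoded characters
    return len(line) >= min_chars

# 60 Cyrillic letters: 120 bytes, but only 60 characters
cyrillic_line = "д" * 60
assert keep_line_bytes(cyrillic_line)       # kept by the byte filter
assert not keep_line_chars(cyrillic_line)   # dropped by the character filter

# 60 ASCII letters: 60 bytes, 60 characters -- dropped by both filters
ascii_line = "a" * 60
assert not keep_line_bytes(ascii_line) and not keep_line_chars(ascii_line)
```

Measuring in characters thus applies the same effective threshold to ASCII and non-ASCII languages alike.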
| Language | OOV Wikipedia | OOV OSCAR |
| --- | --- | --- |
| Bulgarian | 60,879 | 66,558 |
| Catalan | 34,919 | 79,678 |
| Danish | 134,677 | 123,299 |
| Finnish | 266,450 | 267,525 |
| Indonesian | 116,714 | 124,607 |
Table 3: Number of out-of-vocabulary words in random samples of 1M words for OSCAR and Wikipedia.

corpora. We compile in Table 3 the number of out-of-vocabulary tokens for each corpus.

As expected, this simple metric shows that in general the OSCAR samples contain more out-of-vocabulary words than the Wikipedia ones. However, the difference in magnitude between the two is strikingly smaller than one would have expected in view of the criticisms by Trinh and Le (2018) and Radford et al. (2019), thereby validating the usability of Common Crawl data when it is properly filtered, as was achieved by the OSCAR creators. We even observe that, for Danish, the number of out-of-vocabulary words in OSCAR is lower than that in Wikipedia.

# 4 Experimental Setting

The main goal of this paper is to show the impact of training data on contextualized word representations when they are applied to particular downstream tasks. To this end, we train different versions of Embeddings from Language Models (ELMo) (Peters et al., 2018) on both the Wikipedia and OSCAR corpora, for each of our selected 5 languages. We save the models' weights at different numbers of epochs for each language, in order to test how corpus size affects the embeddings and to see whether and when overfitting happens when training ELMo on smaller corpora.

We take each of the trained ELMo models and use them in conjunction with the UDPipe 2.0 (Straka, 2018; Straka et al., 2019) architecture for dependency parsing and POS tagging to test our models. We train UDPipe 2.0 using gold tokenization and segmentation for each of our ELMo models; the only thing that changes from training to training is the ELMo model, as hyperparameters always remain at their default values (except for the number of training tokens) (Peters et al., 2018).

# 4.1 Contextualized word embeddings

Embeddings from Language Models (ELMo) (Peters et al., 2018) is an LSTM-based language model.
More precisely, it uses a bidirectional language model, which combines a forward and a backward LSTM-based language model. ELMo also computes a context-independent token representation via a CNN over characters.

We train ELMo models for Bulgarian, Catalan, Danish, Finnish and Indonesian using the OSCAR corpora on the one hand and the Wikipedia corpora on the other. We train each model for 10 epochs, as was done for the original English ELMo (Peters et al., 2018). We save checkpoints at the $1^{\text{st}}$, $3^{\text{rd}}$ and $5^{\text{th}}$ epochs in order to investigate some concerns about possible overfitting for smaller corpora (Wikipedia in this case) raised by the original ELMo authors.[12]

# 4.2 UDPipe 2.0

For our POS tagging and dependency parsing evaluation, we use UDPipe 2.0, which has a freely available and ready-to-use implementation.$^{13}$ This architecture was submitted as a participant to the 2018 CoNLL Shared Task (Zeman et al., 2018), obtaining the $3^{\text{rd}}$ place in LAS ranking. UDPipe 2.0 is a multi-task model that predicts POS tags, lemmas and dependency trees jointly.

The original UDPipe 2.0 implementation calculates 3 different embeddings, namely:

- Pre-trained word embeddings: In the original implementation, the Wikipedia version of fastText embeddings is used (Bojanowski et al., 2017); we replace them in favor of the newer Common-Crawl-based fastText embeddings trained by Grave et al. (2018).
- Trained word embeddings: Randomly initialized word representations that are trained with the rest of the network.
- Character-level word embeddings: Computed using bi-directional GRUs of dimension 256. They represent every UTF-8-encoded character with two 256-dimensional vectors, one for the forward and one for the backward layer. These two vector representations are concatenated and trained along with the rest of the network.
After the CoNLL 2018 Shared Task, the UDPipe 2.0 authors added the option to concatenate contextualized representations to the embedding
| Treebank | #K tokens | #K sentences |
| --- | --- | --- |
| Bulgarian-BTB | 156 | 11 |
| Catalan-AnCora | 530 | 17 |
| Danish-DDT | 100 | 6 |
| Finnish-FTB | 159 | 19 |
| Finnish-TDT | 202 | 15 |
| Indonesian-GSD | 121 | 6 |
Table 4: Size of treebanks, measured in thousands of tokens and sentences.

section of the network (Straka et al., 2019); we use this new implementation, and we concatenate our pretrained deep contextualized ELMo embeddings to the three embeddings mentioned above.

Once the embedding step is completed, the concatenation of all vector representations for a word is fed to two shared bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layers. The output of these two BiLSTMs is then fed to two separate task-specific LSTMs:

- The tagger- and lemmatizer-specific bidirectional LSTMs, with Softmax classifiers on top, which process this output and generate UPOS, XPOS, UFeats and Lemmas. The lemma classifier also takes the character-level word embeddings as input.
- The parser-specific bidirectional LSTM layer, whose output is then passed to a bi-affine attention layer (Dozat and Manning, 2017) producing labeled dependency trees.

# 4.3 Treebanks

To train the selected parser and tagger (cf. Section 4.2) and evaluate the pre-trained language models in our 5 languages, we run our experiments using the Universal Dependencies $(\mathrm{UD})^{14}$ paradigm and its corresponding UD POS tag set (Petrov et al., 2012). We use all the treebanks available for our five languages in the UD treebank collection version 2.2 (Nivre et al., 2018), which was used for the CoNLL 2018 shared task; we thus perform our evaluation tasks on 6 different treebanks (see Table 4 for treebank size information).

- Bulgarian-BTB: Created at the Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, it consists of legal documents, news articles and fiction pieces.
- Catalan-AnCora: Built on top of the Spanish-Catalan AnCora corpus (Taulé et al., 2008), it contains mainly news articles.
- Danish-DDT: Converted from the Danish Dependency Treebank (Buch-Kromann, 2003). It includes news articles, fiction and non-fiction texts, and oral transcriptions.
- Finnish-FTB: Consists of manually annotated grammatical examples from VISK15 (The Web Version of the Large Grammar of Finnish).
- Finnish-TDT: Based on the Turku Dependency Treebank (TDT). Contains texts from Wikipedia, Wikinews, news articles, blog entries, magazine articles, grammar examples, Europarl speeches, legal texts and fiction.
- Indonesian-GSD: Includes mainly blog entries and news articles.

# 5 Results & Discussion

# 5.1 Parsing and POS tagging results

We use UDPipe 2.0 without contextualized embeddings as our baseline for POS tagging and dependency parsing. However, we did not train the model without contextualized word embeddings ourselves; we instead take the scores as they are reported by Kondratyuk and Straka (2019). We also compare our UDPipe $2.0 + \mathrm{ELMo}$ models against the state-of-the-art results (assuming gold tokenization) for these languages, which are either UDify (Kondratyuk and Straka, 2019) or UDPipe $2.0 + \mathrm{mBERT}$ (Straka et al., 2019).

Results for UPOS, UAS and LAS are shown in Table 5. We obtain the state of the art for the three metrics in each of the languages with the UDPipe $2.0 + \mathrm{ELMo}_{\mathrm{OSCAR}}$ models. We also see that in every single case the UDPipe $2.0 + \mathrm{ELMo}_{\mathrm{OSCAR}}$ result surpasses the UDPipe $2.0 + \mathrm{ELMo}_{\mathrm{Wikipedia}}$ one, suggesting that the size of the pre-training data plays an important role in downstream task results. This also supports our hypothesis that the OSCAR corpora, being multi-domain, exhibit better coverage of the different styles, genres and uses present at least in these 5 languages.

Taking a closer look at the results for Danish, we see that ELMo$_{\mathrm{Wikipedia}}$, which was trained with a mere 300MB corpus, does not show any sign
| Treebank | Model | UPOS | UAS | LAS |
| --- | --- | --- | --- | --- |
| Bulgarian-BTB | UDify | 98.89 | 95.54 | 92.40 |
| | UDPipe 2.0 | 98.98 | 93.38 | 90.35 |
| | +mBERT | 99.20 | 95.34 | 92.62 |
| | +ELMo$_{\mathrm{Wikipedia}}$ | 99.17 | 94.93 | 92.05 |
| | +ELMo$_{\mathrm{OSCAR}}$ | 99.40 | 96.01 | 93.56 |
| Catalan-AnCora | UDify | 98.89 | 94.25 | 92.33 |
| | UDPipe 2.0 | 98.88 | 93.22 | 91.06 |
| | +mBERT | 99.06 | 94.49 | 92.74 |
| | +ELMo$_{\mathrm{Wikipedia}}$ | 99.05 | 93.99 | 92.24 |
| | +ELMo$_{\mathrm{OSCAR}}$ | 99.06 | 94.49 | 92.88 |
| Danish-DDT | UDify | 97.50 | 87.76 | 84.50 |
| | UDPipe 2.0 | 97.78 | 86.88 | 84.31 |
| | +mBERT | 98.21 | 89.32 | 87.24 |
| | +ELMo$_{\mathrm{Wikipedia}}$ | 98.45 | 89.05 | 86.92 |
| | +ELMo$_{\mathrm{OSCAR}}$ | 98.62 | 89.84 | 87.95 |
| Finnish-FTB | UDify | 93.80 | 86.37 | 81.40 |
| | UDPipe 2.0 | 96.65 | 90.68 | 87.89 |
| | +mBERT | 96.97 | 91.68 | 89.02 |
| | +ELMo$_{\mathrm{Wikipedia}}$ | 97.27 | 92.05 | 89.62 |
| | +ELMo$_{\mathrm{OSCAR}}$ | 98.13 | 93.81 | 92.02 |
| Finnish-TDT | UDify | 94.43 | 86.42 | 82.03 |
| | UDPipe 2.0 | 97.45 | 89.88 | 87.46 |
| | +mBERT | 97.57 | 91.66 | 89.49 |
| | +ELMo$_{\mathrm{Wikipedia}}$ | 97.65 | 91.60 | 89.34 |
| | +ELMo$_{\mathrm{OSCAR}}$ | 98.36 | 93.54 | 91.77 |
| Indonesian-GSD | UDify | 93.36 | 86.45 | 80.10 |
| | UDPipe 2.0 | 93.69 | 85.31 | 78.99 |
| | +mBERT | 94.09 | 86.47 | 80.40 |
| | +ELMo$_{\mathrm{Wikipedia}}$ | 93.94 | 86.16 | 80.10 |
| | +ELMo$_{\mathrm{OSCAR}}$ | 94.12 | 86.49 | 80.59 |
Table 5: Scores from UDPipe 2.0 (from Kondratyuk and Straka, 2019), the previous state-of-the-art models UDPipe $2.0 + \mathrm{mBERT}$ (Straka et al., 2019) and UDify (Kondratyuk and Straka, 2019), and our ELMo-enhanced UDPipe 2.0 models. Test scores are given for UPOS, UAS and LAS in all five languages. Best scores are shown in bold, second best scores are underlined.

of overfitting, as the UDPipe $2.0 + \mathrm{ELMo_{Wikipedia}}$ results considerably improve on the UDPipe 2.0 baseline. This is the case for all of our ELMo$_{\mathrm{Wikipedia}}$ models, as we never see any evidence of a negative impact when we add them to the baseline model. In fact, the results of UDPipe $2.0 + \mathrm{ELMo_{Wikipedia}}$ improve on the previous state of the art in all metrics for Finnish-FTB and in UPOS for Finnish-TDT. The results for Finnish are actually quite interesting: mBERT was pre-trained on Wikipedia, and here we see that the multilingual setting in which UDify was fine-tuned exhibits sub-baseline results for all metrics, and that the UDPipe $+ \mathrm{mBERT}$ scores are often lower than those of our UDPipe $2.0 + \mathrm{ELMo_{Wikipedia}}$. This actually suggests that even though the multilingual approach of mBERT (in pre-training) or UDify (in pre-training and fine-tuning) leads to better performance for high-resource languages or languages
| Treebank | Model | UPOS | UAS | LAS |
|---|---|---|---|---|
| Bulgarian-BTB | UDPipe 2.0 | 98.98 | 93.38 | 90.35 |
| | +ELMoWikipedia(1) | 98.81 | 93.60 | 90.21 |
| | +ELMoWikipedia(3) | 99.01 | 94.32 | 91.36 |
| | +ELMoWikipedia(5) | 99.03 | 94.32 | 91.38 |
| | +ELMoWikipedia(10) | 99.17 | 94.93 | 92.05 |
| | +ELMoOSCAR(1) | 99.28 | 95.45 | 92.98 |
| | +ELMoOSCAR(3) | 99.34 | 95.58 | 93.12 |
| | +ELMoOSCAR(5) | 99.34 | 95.63 | 93.25 |
| | +ELMoOSCAR(10) | 99.40 | 96.01 | 93.56 |
| Catalan-AnCora | UDPipe 2.0 | 98.88 | 93.22 | 91.06 |
| | +ELMoWikipedia(1) | 98.93 | 93.24 | 91.21 |
| | +ELMoWikipedia(3) | 99.02 | 93.75 | 91.93 |
| | +ELMoWikipedia(5) | 99.04 | 93.86 | 92.05 |
| | +ELMoWikipedia(10) | 99.05 | 93.99 | 92.24 |
| | +ELMoOSCAR(1) | 99.07 | 93.92 | 92.29 |
| | +ELMoOSCAR(3) | 99.10 | 94.29 | 92.69 |
| | +ELMoOSCAR(5) | 99.07 | 94.38 | 92.75 |
| | +ELMoOSCAR(10) | 99.06 | 94.49 | 92.88 |
| Danish-DDT | UDPipe 2.0 | 97.78 | 86.88 | 84.31 |
| | +ELMoWikipedia(1) | 97.47 | 86.98 | 84.15 |
| | +ELMoWikipedia(3) | 98.03 | 88.16 | 85.81 |
| | +ELMoWikipedia(5) | 98.15 | 88.24 | 85.96 |
| | +ELMoWikipedia(10) | 98.45 | 89.05 | 86.92 |
| | +ELMoOSCAR(1) | 98.50 | 89.47 | 87.43 |
| | +ELMoOSCAR(3) | 98.59 | 89.68 | 87.77 |
| | +ELMoOSCAR(5) | 98.59 | 89.46 | 87.64 |
| | +ELMoOSCAR(10) | 98.62 | 89.84 | 87.95 |
| Finnish-FTB | UDPipe 2.0 | 96.65 | 90.68 | 87.89 |
| | +ELMoWikipedia(1) | 95.86 | 89.63 | 86.39 |
| | +ELMoWikipedia(3) | 96.76 | 91.02 | 88.27 |
| | +ELMoWikipedia(5) | 96.97 | 91.66 | 89.04 |
| | +ELMoWikipedia(10) | 97.27 | 92.05 | 89.62 |
| | +ELMoOSCAR(1) | 97.91 | 93.41 | 91.43 |
| | +ELMoOSCAR(3) | 98.00 | 93.99 | 91.98 |
| | +ELMoOSCAR(5) | 98.15 | 93.98 | 92.24 |
| | +ELMoOSCAR(10) | 98.13 | 93.81 | 92.02 |
| Finnish-TDT | UDPipe 2.0 | 97.45 | 89.88 | 87.46 |
| | +ELMoWikipedia(1) | 96.73 | 89.11 | 86.33 |
| | +ELMoWikipedia(3) | 97.55 | 90.84 | 88.50 |
| | +ELMoWikipedia(5) | 97.55 | 91.11 | 88.88 |
| | +ELMoWikipedia(10) | 97.65 | 91.60 | 89.34 |
| | +ELMoOSCAR(1) | 98.27 | 93.03 | 91.29 |
| | +ELMoOSCAR(3) | 98.38 | 93.60 | 91.83 |
| | +ELMoOSCAR(5) | 98.39 | 93.57 | 91.80 |
| | +ELMoOSCAR(10) | 98.36 | 93.54 | 91.77 |
| Indonesian-GSD | UDPipe 2.0 | 93.69 | 85.31 | 78.99 |
| | +ELMoWikipedia(1) | 93.70 | 85.81 | 79.46 |
| | +ELMoWikipedia(3) | 93.90 | 86.04 | 79.72 |
| | +ELMoWikipedia(5) | 94.04 | 85.93 | 79.97 |
| | +ELMoWikipedia(10) | 93.94 | 86.16 | 80.10 |
| | +ELMoOSCAR(1) | 93.95 | 86.25 | 80.23 |
| | +ELMoOSCAR(3) | 94.00 | 86.21 | 80.14 |
| | +ELMoOSCAR(5) | 94.23 | 86.37 | 80.40 |
| | +ELMoOSCAR(10) | 94.12 | 86.49 | 80.59 |
Table 6: UPOS, UAS and LAS scores for the UDPipe 2.0 baseline reported by Kondratyuk and Straka (2019), plus the scores for checkpoints at 1, 3, 5 and 10 epochs for all the ELMoOSCAR and ELMoWikipedia models. All scores are test scores. Best ELMoOSCAR scores are shown in bold while best ELMoWikipedia scores are underlined.

that are closely related to high-resource languages, it might also significantly degrade the representations for more isolated or even simply more morphologically rich languages like Finnish. In contrast, our monolingual approach with UDPipe 2.0 $+\mathrm{ELMo}_{\mathrm{OSCAR}}$ improves on the previous state of the art considerably, by more than 2 points for some metrics. Note, however, that Indonesian, which might also be seen as a relatively isolated language, does not behave in the same way as Finnish.

# 5.2 Impact of the number of training epochs

An important question we wanted to address with our experiments was that of overfitting and the number of epochs one should train the contextualized embeddings for. The ELMo authors have expressed that training for more epochs is generally better, arguing that training the ELMo model for longer reduces held-out perplexity and further improves downstream task performance.[16] This is why we intentionally fully pre-trained the ELMoWikipedia models to the 10 epochs of the original ELMo paper, whose authors also expressed concern over the possibility of overfitting on smaller corpora. We thus save checkpoints for each of our ELMo models at the 1, 3, 5 and 10 epoch marks so that we can properly probe for overfitting. The scores of all checkpoints are reported in Table 6. Here again we do not train the UDPipe 2.0 baselines without embeddings ourselves; we simply report the scores published in Kondratyuk and Straka (2019).
The first striking finding is that even though all our Wikipedia datasets are smaller than 1GB in size (except for Catalan), none of the ELMoWikipedia models shows any sign of overfitting: the results continue to improve for all metrics the more we train the ELMo models, with the best results consistently being those of the fully trained 10-epoch ELMos. For all of our Wikipedia models except those of Catalan and Indonesian, we see sub-baseline results at 1 epoch; training the model for longer is better, even if the corpora are small in size.

The ELMoOSCAR models exhibit the same behavior as the ELMoWikipedia models, with scores continuing to improve the longer they are pre-trained, except in the case of Finnish. Here we actually see an unexpected behavior where the model performance caps around the $3^{\mathrm{rd}}$ to $5^{\mathrm{th}}$ epoch. This is surprising because the Finnish OSCAR corpus is more than 20 times bigger than our smallest Wikipedia corpus, the Danish Wikipedia, which did not exhibit this behavior. As previously mentioned, Finnish is morphologically richer than the other languages for which we trained ELMo. We hypothesize that the representation space given by the ELMo embeddings might not be big enough to extract more features from the Finnish OSCAR corpus beyond the $5^{\text{th}}$ epoch mark; testing this hypothesis would require training a larger language model like BERT, which is sadly beyond our computing infrastructure limits (cf. Subsection 5.3). We do note, however, that pre-training our current language model architectures on a morphologically rich language like Finnish might better expose the limits of our existing approaches to language modeling.
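The checkpoint comparison above amounts to picking, for each treebank, the pre-training epoch whose checkpoint scores best. A minimal sketch in Python, with LAS values hard-coded from the ELMoOSCAR rows of Table 6 (the helper name is ours, not part of any released tooling):

```python
def best_checkpoint(scores):
    """Return the pre-training epoch whose checkpoint scores highest."""
    return max(sorted(scores), key=lambda epoch: scores[epoch])

# LAS test scores for UDPipe 2.0 + ELMo_OSCAR checkpoints, from Table 6
finnish_ftb = {1: 91.43, 3: 91.98, 5: 92.24, 10: 92.02}
danish_ddt = {1: 87.43, 3: 87.77, 5: 87.64, 10: 87.95}

print(best_checkpoint(finnish_ftb))  # 5  -> Finnish caps before 10 epochs
print(best_checkpoint(danish_ddt))   # 10 -> Danish still improves at 10
```

This makes the Finnish anomaly easy to spot mechanically: it is the only OSCAR model whose best checkpoint is not the last one.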
One last thing that is important to note with respect to the number of training epochs is that even though we fully pre-trained both our ELMoWikipedia and ELMoOSCAR models to the recommended 10-epoch mark and then compared them against one another, the number of training steps between the two pre-trained models differs drastically due to the large difference in corpus size (for Indonesian, for instance, 10 epochs correspond to 78K steps for ELMoWikipedia and to 2.6M steps for ELMoOSCAR; the complete picture is provided in the Appendix, in Table 8). In fact, we can see in Table 6 that all the UDPipe $2.0 + \mathrm{ELMoOSCAR(1)}$ models perform better than the UDPipe $2.0 + \mathrm{ELMoWikipedia(1)}$ models across all metrics. We thus believe that comparing two pre-trained models in terms of training steps, as opposed to training epochs, might be more transparent.
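Since steps scale linearly with corpus size, the epoch-to-step conversion is a one-liner. The sketch below hard-codes the Indonesian steps-per-epoch counts from Table 8 (the function name and dictionary are ours, for illustration only):

```python
# Steps per epoch for the two Indonesian ELMos (from Table 8)
STEPS_PER_EPOCH = {"wikipedia": 7_891, "oscar": 263_830}

def total_steps(corpus, epochs):
    """Optimizer steps after `epochs` full passes over `corpus`."""
    return epochs * STEPS_PER_EPOCH[corpus]

# A single epoch over OSCAR already takes ~3.3x as many steps as the
# *fully trained* 10-epoch Wikipedia model:
print(total_steps("oscar", 1))       # 263830
print(total_steps("wikipedia", 10))  # 78910
print(total_steps("oscar", 10))      # 2638300
```

This is the arithmetic behind the observation that every ELMoOSCAR(1) model beats its ELMoWikipedia(1) counterpart: at equal epoch counts, the OSCAR models have seen far more optimizer steps.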
| Language | Power (W) | Hours | Days | kWh·PUE | CO2e (kg) |
|---|---|---|---|---|---|
| **OSCAR-Based ELMos** | | | | | |
| Bulgarian | 1183 | 515.00 | 21.45 | 962.61 | 49.09 |
| Catalan | 1118 | 199.98 | 8.33 | 353.25 | 18.02 |
| Danish | 1183 | 200.89 | 8.58 | 375.49 | 19.15 |
| Finnish | 1118 | 591.25 | 24.63 | 1044.40 | 53.26 |
| Indonesian | 1183 | 694.26 | 28.93 | 1297.67 | 66.18 |
| **Wikipedia-Based ELMos** | | | | | |
| Bulgarian | 1118 | 15.45 | 0.64 | 27.29 | 1.39 |
| Catalan | 1118 | 51.08 | 2.13 | 90.22 | 4.60 |
| Danish | 1118 | 14.56 | 0.61 | 25.72 | 1.31 |
| Finnish | 1118 | 21.79 | 0.91 | 38.49 | 1.96 |
| Indonesian | 1118 | 20.28 | 0.84 | 35.82 | 1.82 |
| **TOTAL EMISSIONS** | | | | | 216.78 |
Table 7: Average power draw (Watts), training times (in both hours and days), mean power consumption (kWh) and $\mathrm{CO}_{2}$ emissions (kg) for each ELMo model trained.

250 W,$^{17}$ the Xeon Gold 5118 processor is rated at 105 W,$^{18}$ while one Xeon E5-2630 v4 is rated at 85 W.$^{19}$ For the DRAM we can use the work of Desrochers et al. (2016) to estimate the total power draw of 128GB of RAM at around 13 W. Having this information, we can now use the formula proposed by Strubell et al. (2019) to compute the total power required to train one ELMo model:

$$
p_t = \frac{1.58\, t\, (c\, p_c + p_r + g\, p_g)}{1000}
$$

where $c$ and $g$ are the number of CPUs and GPUs respectively, $p_c$ is the average power draw (in Watts) from all CPU sockets, $p_r$ the average power draw from all DRAM sockets, and $p_g$ the average power draw of a single GPU. We estimate the total power consumption by adding the GPU, CPU and DRAM consumptions, and then multiplying by the Power Usage Effectiveness (PUE), which accounts for the additional energy required to support the compute infrastructure. We use a PUE coefficient of 1.58, the 2018 global average for data centers (Strubell et al., 2019). In Table 7 we report the training times in both hours and days, as well as the total power draw (in Watts) of the system used to train each individual ELMo model. We use this information to compute the total power consumption of each ELMo, also reported in Table 7.

We can further estimate the $\mathrm{CO}_{2}$ emissions in kilograms for each single model by multiplying the total power consumption by the average $\mathrm{CO}_{2}$ emissions per kWh in France (where the models were trained). According to RTE (Réseau de transport d'électricité / Electricity Transmission Network), the average emissions were around 51 g/kWh in November 2019,[20] when the models were trained.
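To make the arithmetic concrete, the sketch below reproduces the Bulgarian ELMoOSCAR row of Table 7 from the component wattages quoted above. The function name and argument order are ours, and we read $p_c$ and $p_g$ as per-unit draws, which is the reading consistent with the published numbers:

```python
PUE = 1.58              # 2018 global average for data centers
KG_CO2_PER_KWH = 0.051  # French grid average, November 2019

def training_footprint(hours, n_cpus, cpu_watts, ram_watts, n_gpus, gpu_watts):
    """PUE-adjusted energy (kWh) and CO2e (kg) of one training run,
    following the formula of Strubell et al. (2019)."""
    draw_watts = n_cpus * cpu_watts + ram_watts + n_gpus * gpu_watts
    kwh = PUE * hours * draw_watts / 1000
    return kwh, KG_CO2_PER_KWH * kwh

# Bulgarian ELMo_OSCAR: 515 h on 2x Xeon E5-2630 v4 (85 W each),
# 128 GB RAM (~13 W) and 4x GTX 1080 Ti (250 W each) -> 1183 W draw
kwh, co2 = training_footprint(515.00, 2, 85, 13, 4, 250)
print(round(kwh, 2), round(co2, 2))  # 962.61 49.09
```

The result matches the 962.61 kWh and 49.09 kg CO2e reported in Table 7 for that model.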
Thus the total $\mathrm{CO}_{2}$ emissions in kg for one single model can be computed as:

$$
\mathrm{CO_2e} = 0.051\, p_t
$$

All emissions for the ELMo models are also reported in Table 7.

We do not report the power consumption or the carbon footprint of training the UDPipe 2.0 architecture, as each model took less than 4 hours to train on a machine using a single NVIDIA Tesla V100 card. Moreover, this machine was shared during training time, so it would be extremely difficult to accurately estimate the power consumption of these models.

Even though it would have been interesting to replicate all our experiments and computational cost estimations with state-of-the-art fine-tuning models such as BERT, XLNet, RoBERTa or ALBERT, we recall that these transformer-based architectures are extremely costly to train, as noted by the BERT authors on the official BERT GitHub repository,[21] and are currently beyond the scope of our computational infrastructure. We believe, however, that ELMo contextualized word embeddings remain a useful model that provides an extremely good trade-off between performance and training cost, even setting new state-of-the-art scores in parsing and POS tagging for our five chosen languages and performing better than the multilingual mBERT model.

# 6 Conclusions

In this paper, we have explored the use of the Common-Crawl-based OSCAR corpora to train ELMo contextualized embeddings for five typologically diverse mid-resource languages. We have compared them with Wikipedia-based ELMo embeddings on two classical NLP tasks, POS tagging and parsing, using state-of-the-art neural architectures. Our goal was to explore whether the noisiness of Common Crawl data, often invoked to criticize the use of such data, could be compensated by its larger size; for some languages, the OSCAR corpus is several orders of magnitude larger than the corresponding Wikipedia.
Firstly, we found that when properly filtered, Common Crawl data is not massively noisier than Wikipedia. Secondly, we show that embeddings trained on OSCAR data consistently outperform Wikipedia-based embeddings, to the extent that they allow us to improve the state of the art in POS tagging and dependency parsing for all six chosen treebanks. Thirdly, we observe that more training epochs generally result in better embeddings, even when the training data is relatively small, as is the case for Wikipedia.

Our experiments show that Common-Crawl-based data such as the OSCAR corpus can be used to train high-quality contextualized embeddings, even for languages for which more standard textual resources lack volume or genre variety. This could result in better performance on a number of NLP tasks for many less-resourced languages.

# Acknowledgments

We want to thank Ganesh Jawahar for his insightful comments and suggestions during the early stages of this project. This work was partly funded by the French national ANR grant BASNUM (ANR-18-CE38-0003), as well as by the last author's chair in the PRAIRIE institute,[22] funded by the French national ANR as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001. The authors are grateful to the Inria Sophia Antipolis - Méditerranée "Nef"[23] computation cluster for providing resources and support.

# References

Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 1638-1649. Association for Computational Linguistics.

Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 183-192, Sofia, Bulgaria.
Association for Computational Linguistics.

Bernd Bohnet, Ryan McDonald, Gonçalo Simões, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. Morphosyntactic tagging with a meta-BiLSTM model over context sensitive token encodings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2642-2652, Melbourne, Australia. Association for Computational Linguistics.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

Matthias Buch-Kromann. 2003. The Danish dependency treebank and the DTAG treebank tool. In 2nd Workshop on Treebanks and Linguistic Theories (TLT), Sweden, pages 217-220.

Branden Chan, Timo Möller, Malte Pietsch, Tanay Soni, and Chin Man Yeung. 2019. German BERT. https://deepset.ai/german-bert.

Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 55-64, Brussels, Belgium. Association for Computational Linguistics.

Spencer Desrochers, Chad Paradis, and Vincent M. Weaver. 2016. A validation of DRAM RAPL power measurements. In Proceedings of the Second International Symposium on Memory Systems, MEMSYS '16, pages 455-470, New York, NY, USA. Association for Computing Machinery.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv e-prints, page arXiv:1810.04805.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Multilingual BERT. https://github.com/google-research/bert/blob/master/multilingual.md.

Timothy Dozat and Christopher D. Manning. 2017.
Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.

Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20-30, Vancouver, Canada. Association for Computational Linguistics.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resources Association.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hervé Jégou, and Tomas Mikolov. 2016. FastText.zip: Compressing text classification models. CoRR, abs/1612.03651.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.

Dan Kondratyuk and Milan Straka. 2019. 75 Languages, 1 Model: Parsing Universal Dependencies Universally. arXiv e-prints, page arXiv:1904.02099.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv e-prints, page arXiv:1909.11942.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoit Sagot. 2019. CamemBERT: a Tasty French Language Model. arXiv e-prints, page arXiv:1911.03894.

Rada Mihalcea. 2007. Using Wikipedia for automatic word sense disambiguation. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 196-203, Rochester, New York. Association for Computational Linguistics.

Tomas Mikolov, Edouard Grave, Piotr Bojanowski, Christian Puhrsch, and Armand Joulin. 2018. Advances in pre-training distributed word representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7-12, 2018.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS'13, pages 3111-3119, USA. Curran Associates Inc.

Joakim Nivre, Mitchell Abrams, Željko Agić, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mititelu, John Bauer, Sandra Bellato, Kepa Bengoetxea, Riyaz Ahmad Bhat, Erica Biagetti, Eckhard Bick, Rogier Blokland, Victoria Bobicev, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Aljoscha Burchardt, Marie Candito, Bernard Caron, Gauthier Caron, Gülsen Cebiroğlu Eryiğit, Giuseppe G. A.
Celano, Savas Cetin, Fabricio Chalub, Jinho Choi, Yongseok Cho, Jayeol Chun, Silvie Cinkova, Aurélie Collomb, Çağrı Çöltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Carly Dickerson, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Tomaž Erjavec, Aline Etienne, Richard Farkas, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Garcia, Moa Gartenfors, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökirmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Matias Grioni, Normunds Grūzītis, Bruno Guillaume, Céline Guillot-Barbance, Nizar Habash, Jan Hajic, Jan Hajic jr., Linh Hà Mỹ, Na-Rae Han, Kim Harris, Dag Haug, Barbora Hladka, Jaroslava Hlaváčová, Florinel Hociung, Petter Hohle, Jena Hwang, Radu Ion, Elena Irimia, Tomás Jelinek, Anders Johannsen, Fredrik Jørgensen, Hüner Kaskikara, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Tolga Kayadelen, Václava Kettnerová, Jesse Kirchner, Natalia Kotsyba, Simon Krek, Sookyoung Kwak, Veronika Laippala, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phương Lê Hồng, Alessandro Lenci, Saran Lertpradit, Herman Leung, Cheuk Ying Li, Josie Li, Keying Li, KyungTae Lim, Nikola Ljubešić, Olga Loginova, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Cătălina Mărănduc, David Mareček, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Gustavo Mendonça, Niko Miekka, Anna Missilä, Cătălin Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Laura Moreno Romero, Shinsuke Mori, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Yugo Murawaki, Kaili Müürisep, Pinkey Nainwani, Juan Ignacio Navarro Horniacek, Anna Nedoluzhko, Gunta Nešpore-Bērzkalne, Luong Nguyen Thi, Huyen Nguyen Thi Minh, Vitaly Nikolaev, Rattima Nitisaroj, Hanna Nurmi, Stina Ojala, Adedayo Oluokun, Mai Omura, Petya Osenova, Robert Ostling, Lilja Øvrelid, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Siyao Peng, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Emily Pitler, Barbara Plank, Thierry Poibeau, Martin Popel, Lauma Pretkalnina, Sophie Prevost, Prokopis Prokopidis, Adam Przepiorkowski, Tiina Puolakainen, Sampo Pyysalo, Andriela Raabis, Alexandre Rademaker, Loganathan Ramasamy, Taraka Rama, Carlos Ramisch, Vinit Ravishankar, Livy Real, Siva Reddy, Georg Rehm, Michael Rießler, Larissa Rinaldi, Laura Rituma, Luisa Rocha, Mykhailo Romanenko, Rudolf Rosa, Davide Rovati, Valentin Rosca, Olga Rudina, Shoval Sadde, Shadi Saleh, Tanja Samardžić, Stephanie Samson, Manuela Sanguinetti, Baiba Saulite, Yanin Sawanakunanon, Nathan Schneider, Sebastian Schuster, Djame Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Muh Shohibussirri, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simko, María Simkóva, Kiril Simov, Aaron Smith, Isabela Soares-Bastos, Antonio Stella, Milan Straka, Jana Strnadová, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Yuta Takahashi, Takaaki Tanaka, Isabelle Tellier, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Zdenka Urešová, Larritz Uria, Hans Uszkoreit, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Veronika Vincze, Lars Wallin, Jonathan North Washington, Seyi Williams, Mats Wiren, Tsegay Woldemariam, Tak-sum Wong, Chunxiao Yan, Marat M. Yavrumyan, Zhuoran Yu, Zdeněk Žabokrtský, Amir Zeldes, Daniel Zeman, Manying Zhang, and Hanzhi Zhu. 2018. Universal dependencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.

Pedro Javier Ortiz Suárez, Benoit Sagot, and Laurent Romary. 2019.
Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. *Challenges in the Management of Large Corpora* (CMLC-7) 2019, page 9.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.

Slav Petrov, Dipanjan Das, and Ryan T. McDonald. 2012. A universal part-of-speech tagset. In Proceedings of the Eighth International Conference on Language Resources and Evaluation, LREC 2012, Istanbul, Turkey, May 23-25, 2012, pages 2089-2096. European Language Resources Association (ELRA).

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. OpenAI Blog.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1:8.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv e-prints, page arXiv:1910.10683.

Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018. 82 treebanks, 34 models: Universal dependency parsing with multi-treebank models.
In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 113-123, Brussels, Belgium. Association for Computational Linguistics.

Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197-207, Brussels, Belgium. Association for Computational Linguistics.

Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.

Milan Straka, Jana Straková, and Jan Hajic. 2019. Evaluating contextualized embeddings on 54 languages in POS tagging, lemmatization and dependency parsing. CoRR, abs/1908.07448.

Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645-3650, Florence, Italy. Association for Computational Linguistics.

Mariona Taulé, Maria Antònia Martí, and Marta Recasens. 2008. AnCora: Multilevel annotated corpora for Catalan and Spanish. In Proceedings of the International Conference on Language Resources and Evaluation, LREC 2008, 26 May - 1 June 2008, Marrakech, Morocco. European Language Resources Association.

Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. CoRR, abs/1806.02847.

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2019. CC-Net: Extracting High Quality Monolingual Datasets from Web Crawl Data. arXiv e-prints, page arXiv:1911.00359.

Fei Wu and Daniel S. Weld. 2010. Open information extraction using Wikipedia.
In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118-127, Uppsala, Sweden. Association for Computational Linguistics.

Daniel Zeman, Jan Hajic, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics.

# A Appendix

# A.1 Number of training steps for each checkpoint and each corpus
| Language | 1 Epoch | 3 Epochs | 5 Epochs | 10 Epochs |
|---|---|---|---|---|
| **Wikipedia-Based ELMos** | | | | |
| Bulgarian | 6,268 | 18,804 | 31,340 | 62,680 |
| Catalan | 20,666 | 61,998 | 103,330 | 206,660 |
| Danish | 5,922 | 17,766 | 29,610 | 59,220 |
| Finnish | 8,763 | 26,289 | 43,815 | 87,630 |
| Indonesian | 7,891 | 23,673 | 39,455 | 78,910 |
| **OSCAR-Based ELMos** | | | | |
| Bulgarian | 143,169 | 429,507 | 715,845 | 1,431,690 |
| Catalan | 81,156 | 243,468 | 405,780 | 811,560 |
| Danish | 81,156 | 243,468 | 405,780 | 811,560 |
| Finnish | 181,230 | 543,690 | 906,150 | 1,812,300 |
| Indonesian | 263,830 | 791,490 | 1,319,150 | 2,638,300 |
+ +Table 8: Number of training steps for each checkpoint, for the ELMoWikipedia and ELMoOSCAR of each language. \ No newline at end of file diff --git a/amonolingualapproachtocontextualizedwordembeddingsformidresourcelanguages/images.zip b/amonolingualapproachtocontextualizedwordembeddingsformidresourcelanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..049348e8b578f6d88015425217a52fd8d8444aa1 --- /dev/null +++ b/amonolingualapproachtocontextualizedwordembeddingsformidresourcelanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:944fec762741866ef4ff4399a1e261ca2d9ff68eea80b4a0cf4e39b42632f255 +size 475532 diff --git a/amonolingualapproachtocontextualizedwordembeddingsformidresourcelanguages/layout.json b/amonolingualapproachtocontextualizedwordembeddingsformidresourcelanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..73ea991b8a9145380be9961ed83a13b18ba623ce --- /dev/null +++ b/amonolingualapproachtocontextualizedwordembeddingsformidresourcelanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8ecdeadce75ef27f2edd066b9e3c933b4bdbf4673358228372fceab777d254f8 +size 347550 diff --git a/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_content_list.json b/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..06bd09d25a66b05a0163eab7bc68c39fa32fa3ea --- /dev/null +++ b/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2174a0d11002c4e643fa885bbb63184f5032750d3523eef3b247be109843cce5 +size 82860 diff --git a/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_model.json 
b/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_model.json new file mode 100644 index 0000000000000000000000000000000000000000..1c63764bd7b26d89b10cecffac9c7cf45c83a39e --- /dev/null +++ b/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c847f2567262243df38cb22385b8d467084d54c6ff5c1e1b4b896eaa5a588739 +size 104001 diff --git a/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_origin.pdf b/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3fc33b8a46bdf7b4a10b5f475bc3e96a8d1e38a5 --- /dev/null +++ b/amrparsingviagraphsequenceiterativeinference/22ee846d-f4ea-4953-b982-80083f128313_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a04b2bba551f7465e55d6e5a1800b439aadc08453d860b693dae77e99a7991ae +size 1354185 diff --git a/amrparsingviagraphsequenceiterativeinference/full.md b/amrparsingviagraphsequenceiterativeinference/full.md new file mode 100644 index 0000000000000000000000000000000000000000..24051b2c594dbe0628fe2e5735fa32abe7f1aa3d --- /dev/null +++ b/amrparsingviagraphsequenceiterativeinference/full.md @@ -0,0 +1,353 @@ +# AMR Parsing via Graph Sequence Iterative Inference* + +# Deng Cai + +The Chinese University of Hong Kong + +thisisjcykcd@gmail.com + +# Wai Lam + +The Chinese University of Hong Kong + +wlam@se.cuhk.edu.hk + +# Abstract + +We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph. At each time step, our model performs multiple rounds of attention, reasoning, and composition that aim to answer two critical questions: (1) which part of the input sequence to abstract; and (2) where in the output graph to construct the new concept. 
We show that the answers to these two questions are mutually causal. We design a model based on iterative inference that helps achieve better answers from both perspectives, leading to greatly improved parsing accuracy. Our experimental results outperform all previously reported SMATCH scores by large margins. Remarkably, without the help of any large-scale pre-trained language model (e.g., BERT), our model already surpasses the previous state of the art, which uses BERT. With the help of BERT, we can push the state-of-the-art results to $80.2\%$ on LDC2017T10 (AMR 2.0) and $75.4\%$ on LDC2014T12 (AMR 1.0).

# 1 Introduction

Abstract Meaning Representation (AMR) (Banarescu et al., 2013) is a broad-coverage semantic formalism that encodes the meaning of a sentence as a rooted, directed, and labeled graph, where nodes represent concepts and edges represent relations (see an example in Figure 1). AMR parsing is the task of transforming natural language text into AMR. One of the biggest challenges of AMR parsing is the lack of explicit alignments between nodes (concepts) in the graph and words in the text. This characteristic not only poses great difficulty in concept

prediction but also tightly couples concept prediction and relation prediction.

While most previous works rely on a pre-trained aligner to train a parser, some recent attempts include: modeling the alignments as latent variables (Lyu and Titov, 2018), attention-based sequence-to-sequence transduction models (Barzdins and Gosko, 2016; Konstas et al., 2017; van Noord and Bos, 2017), and attention-based sequence-to-graph transduction models (Cai and Lam, 2019; Zhang et al., 2019b). Sequence-to-graph transduction models build a semantic graph incrementally via spanning one node at every step.
This property is appealing in terms of both computational efficiency and cognitive modeling since it mimics what human experts usually do, i.e., first grasping the core ideas and then digging into more details (Banarescu et al., 2013; Cai and Lam, 2019).

Unfortunately, the parsing accuracy of existing works, including recent state-of-the-art models (Zhang et al., 2019a,b), remains unsatisfactory compared to human-level performance,$^{1}$ especially in cases where the sentences are rather long and informative, which indicates substantial room for improvement. One possible reason for the deficiency is the inherent defect of the one-pass prediction process, namely the lack of capability to model the interactions between concept prediction and relation prediction, which is critical to achieving fully-informed and unambiguous decisions.

We introduce a new approach tackling AMR parsing, following the incremental sequence-to-graph transduction paradigm. We explicitly characterize each spanning step as the effort of finding which part to abstract with respect to the input sequence, and where to construct with respect to the partially constructed output graph. Equivalently, we treat AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph. Intuitively, the answer to what concept to abstract decides where to construct (i.e., the relations to existing concepts), while the answer to where to construct determines what concept to abstract. Our proposed model is supported by neural networks with explicit structures for attention, reasoning, and composition, and is integrated with an iterative inference algorithm. It iterates between finding supporting text pieces and reading the partially constructed semantic graph, progressively inferring more accurate and harmonious expansion decisions. Our model is aligner-free and can be effectively trained with a limited amount of labeled data.
Experiments on two AMR benchmarks demonstrate that our parser outperforms the previous best parsers on both. It achieves the best-reported SMATCH scores (F1): $80.2\%$ on LDC2017T10 and $75.4\%$ on LDC2014T12, surpassing the previous state-of-the-art models by large margins.

# 2 Related Work & Background

On a coarse-grained level, we can categorize existing AMR parsing approaches into two main classes: Two-stage parsing (Flanigan et al., 2014; Lyu and Titov, 2018; Zhang et al., 2019a) uses a pipeline design for concept identification and relation prediction, where the concept decisions precede all relation decisions; One-stage parsing constructs a parse graph incrementally. For a more fine-grained analysis, those one-stage parsing methods can be further categorized into three types: Transition-based parsing (Wang et al., 2016; Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017; Peng et al., 2017; Guo and Lu, 2018; Liu et al., 2018; Wang and Xue, 2017; Naseem et al., 2019) processes a sentence from left to right and constructs the graph incrementally by alternately inserting a new node or building a new edge. Seq2seq-based parsing (Barzdins and Gosko, 2016; Konstas et al., 2017; van Noord and Bos, 2017; Peng et al., 2018) views parsing as sequence-to-sequence transduction via some linearization of the AMR graph. Concept and relation prediction are then treated equally with a shared vocabulary. The third class is graph-based parsing (Cai and Lam, 2019; Zhang et al., 2019b), where at each time step a new node and its connections to existing nodes are jointly decided, either in order (Cai and Lam, 2019) or in parallel (Zhang et al., 2019b).
So far, the reciprocal causation of relation prediction and concept prediction has not been closely studied and well utilized.

![](images/c9bc932bbc46980a9cb74230c0708030e30541a636df18b87a33f7b9404bbc04.jpg)

![](images/c1dbabebc5ad978f9fb72f502105db5e3f4e594631e9472ee0daa16723266be9.jpg)

![](images/d8e76eff263fa593aa226633e518905943737620cc62a889471df2c3acdc60f4.jpg)
Figure 1: AMR graph construction given the partially constructed graph: (a) one possible expansion resulting in the boy concept; (b) another possible expansion resulting in the - (negation) concept. The current partial (solid) and full (solid + dashed) AMR graphs are for the sentence "The boy must not go".

There are also some exceptions beyond the above categorization. Peng et al. (2015) introduce a synchronous hyperedge replacement grammar solution. Pust et al. (2015) regard the task as a machine translation problem, while Artzi et al. (2015) adapt combinatory categorial grammar. Groschwitz et al. (2018); Lindemann et al. (2019) view AMR graphs as structures of the AM algebra.

# 3 Motivation

Our approach is inspired by the deliberation process of a human expert deducing a semantic graph from a sentence. The output graph starts from an empty graph and expands incrementally in a node-by-node manner. At any time step of this process, we are distilling the information for the next expansion. We call it expansion because the new node, as an abstract concept of some specific text fragments in the input sentence, is derived to complete some missing elements in the current semantic graph. Specifically, given the input sentence and the current partially constructed graph, we are answering two critical questions: which part of the input sequence to abstract, and where in the output graph to construct the new concept. For instance, Figure 1(a) and (b) show two possible choices for the next expansion.
In Figure 1(a), the word "boy" is abstracted to the concept boy to complement the subject information of the event go-02. On the other hand, in Figure 1(b), a polarity attribute of the event go-02 is constructed, which is triggered by the word "not" in the sentence.

![](images/37d971bdf1be471dbc622c9de0eb0cae8a22dc1ee69f642878faa19a5a81de96.jpg)
Figure 2: Overview of the dual graph-sequence iterative inference for AMR parsing. Given the current graph $G^i$ and input sequence $W$, the inference starts with an initial concept decision $x_0$ and follows the inference chain $x_0 \to f(G^i, x_0) \to y_1 \to g(W, y_1) \to x_1 \to f(G^i, x_1) \to y_2 \to g(W, y_2) \to \dots$. The details of $f$ and $g$ are shown in red and blue boxes, where nodes in the graph and tokens in the sequence are selected via attention mechanisms.

We note that the answer to one of the questions can help answer the other. For instance, if we have decided to render the word "not" to the graph, then we will consider adding an edge labeled polarity, and finally determine its attachment to the existing event go-02 (rather than an edge labeled ARG0 to the same event go-02, though it is also present in the gold graph). On the other hand, if we have decided to find the subject (ARG0 relation) of the action go-02, we can confidently locate the word "boy" instead of function words like "not" or "must", and thus unambiguously predict the right concept boy. Another possible circumstance is that we may make a mistake, trying to ask for something that is not present in the sentence (e.g., the destination of the go-02 action). This attempt will be rejected by a review of the sentence: the rationale is that we literally cannot find the destination information in the sentence. Similarly, if we mistakenly propose to abstract some parts of the sentence that are not ready for construction yet, the proposal will be rejected by another inspection of the graph, since there is nowhere to place such a new concept.
We believe the mutual causalities, as described above, are useful for action disambiguation and harmonious decision making, which eventually result in more accurate parses. We formulate AMR parsing as a series of dual graph-sequence decisions and design an iterative inference approach to tackle each of them. It is analogous to the cognition procedure of a person, who might first notice part of the important information on one side (graph or sequence), then try to confirm her decision on the other side, which may refute her former hypothesis and propose a new one, finally converging to a conclusion after multiple rounds of reasoning.

# 4 Proposed Model

# 4.1 Overview

Formally, the parsing model consists of a series of graph expansion procedures $\{G^0\to \ldots \to G^i\to \ldots \}$, starting from an empty graph $G^0$. In each turn of expansion, the following iterative inference process is performed:

$$
y_t^i = f(G^i, x_t^i),
$$

$$
x_{t+1}^i = g(W, y_t^i),
$$

where $W, G^{i}$ are the input sequence and the current semantic graph respectively. $f(\cdot), g(\cdot)$ seek where to construct (edge prediction) and what to abstract (node prediction) respectively, and $x_{t}^{i}, y_{t}^{i}$ are the $t$-th sequence hypothesis (what to abstract) and the $t$-th graph hypothesis (where to construct) for the $i$-th expansion step respectively. For clarity, we may drop the superscript $i$ in the following descriptions.

Figure 2 depicts an overview of the graph sequence iterative inference process.
Our model has four main components: (1) Sequence Encoder, which generates a set of text memories (per token) to provide grounding for concept alignment and abstraction; (2) Graph Encoder, which generates a set of graph memories (per node) to provide grounding for relation reasoning; (3) Concept Solver, where a previous graph hypothesis is used for concept prediction; and (4) Graph Solver, where a previous concept hypothesis is used for relation prediction. The last two components correspond to the reasoning functions $g(\cdot)$ and $f(\cdot)$ respectively.

The text memories can be computed by the Sequence Encoder once at the beginning of parsing, while the graph memories are constructed by the Graph Encoder incrementally as the parsing progresses. During the iterative inference, a semantic representation of the current state is used to attend to both graph and text memories (blue and red arrows) in order to locate the new concept and obtain its relations to the existing graph, both of which subsequently refine each other. Intuitively, after a first glimpse of the input sentence and the current graph, specific sub-areas of both sequence and graph are revisited to obtain a better understanding of the current situation. Later steps typically read the text in detail with specific learning aims, either confirming or overturning a previous hypothesis. Finally, after several iterations of reasoning steps, the refined sequence/graph decisions are used for graph expansion.

# 4.2 Sequence Encoder

As mentioned above, we employ a sequence encoder to convert the input sentence into vector representations. The sequence encoder follows the multi-layer Transformer architecture described in Vaswani et al. (2017). At the bottom layer, each token is first transformed into the concatenation of features learned by a character-level convolutional neural network (char-CNN, Kim et al., 2016) and randomly initialized embeddings for its lemma, part-of-speech tag, and named entity tag.
Additionally, we also include features learned by the pre-trained language model BERT (Devlin et al., 2019).

Formally, for an input sequence $w_{1},w_{2},\ldots ,w_{n}$ of length $n$, we insert a special token BOS at the beginning of the sequence. For clarity, we omit the detailed transformations (Vaswani et al., 2017) and denote the final output from our sequence encoder as $\{h_0,h_1,\dots ,h_n\} \in \mathbb{R}^d$, where $h_0$ corresponds to the special token BOS and serves as an overall representation, while the others are considered contextualized word representations. Note that the sequence encoder only needs to be invoked once, and the produced text memories are used for the whole parsing procedure.

# 4.3 Graph Encoder

We adopt an idea similar to that of Cai and Lam (2019) to encode the incrementally expanding graph. Specifically, a graph is simply treated as a sequence of nodes (concepts) in the chronological order of when they were inserted into the graph. We employ a multi-layer Transformer architecture with masked self-attention and source-attention, which only allows each position in the node sequence to attend to all positions up to and including that position, and every position in the node sequence to attend over all positions in the input sequence.$^{3}$ While this design allows for significantly more parallelization during training and computation-saving incrementality during testing,$^{4}$ it inherently neglects the edge information. We attempted to alleviate this problem by incorporating the idea of Strubell et al. (2018), which applies auxiliary supervision at attention heads to encourage them to attend to each node's parents in the AMR graph. However, we did not see a performance improvement. We attribute the failure to the fact that the neural attention mechanisms on their own are already capable of learning to attend to useful graph elements, and the auxiliary supervision is likely to disturb the ultimate parsing goal.
Consequently, for the current graph $G$ with $m$ nodes, we take its concept sequence $c_{1}, c_{2}, \ldots, c_{m}$ as input. Similar to the sequence encoder, we insert a special token BOG at the beginning of the concept sequence. Each concept is first transformed into the concatenation of a feature vector learned by a char-CNN and a randomly initialized embedding. Then, a multi-layer Transformer encoder with masked self-attention and source-attention is applied, resulting in vector representations $\{s_{0}, s_{1}, \ldots, s_{m}\} \in \mathbb{R}^{d}$, where $s_{0}$ represents the special concept BOG and serves as a dummy node, while the others are considered contextualized node representations.

# 4.4 Concept Solver

At each sequence reasoning step $t$, the concept solver receives a state vector $y_{t}$ that carries the latest graph decision and the input sequence memories $h_1, \ldots, h_n$ from the sequence encoder, and aims to locate the proper parts of the input sequence to abstract and generate a new concept. We employ the scaled dot-product attention proposed in Vaswani et al. (2017) to solve this problem. Concretely, we first calculate an attention distribution over all input tokens:

$$
\alpha_t = \mathrm{softmax}\left(\frac{(W^Q y_t)^{\mathrm{T}} W^K h_{1:n}}{\sqrt{d_k}}\right),
$$

where $\{W^Q,W^K\} \in \mathbb{R}^{d_k\times d}$ denote learnable linear projections that transform the input vectors into the query and key subspaces respectively, and $d_{k}$ represents the dimensionality of the subspaces.

The attention weights $\alpha_{t} \in \mathbb{R}^{n}$ provide a soft alignment between the new concept and the tokens in the input sequence. We then compute the probability distribution of the new concept label through a hybrid of three channels.
First, $\alpha_{t}$ is fed through an MLP and softmax to obtain a probability distribution over a pre-defined vocabulary:

$$
\mathrm{MLP}(\alpha_t) = (W^V h_{1:n})\,\alpha_t + y_t \tag{1}
$$

$$
P^{(\mathrm{vocab})} = \mathrm{softmax}\left(W^{(\mathrm{vocab})}\,\mathrm{MLP}(\alpha_t) + b^{(\mathrm{vocab})}\right),
$$

where $W^{V} \in \mathbb{R}^{d \times d}$ denotes the learnable linear projection that transforms the text memories into the value subspace, and the value vectors are averaged according to $\alpha_{t}$ for concept label prediction. Second, the attention weights $\alpha_{t}$ directly serve as a copy mechanism (Gu et al., 2016; See et al., 2017), i.e., the probabilities of copying a token lemma from the input text as a node label. Third, to handle attribute values such as person names or numerical strings, we also use $\alpha_{t}$ for another copy mechanism that directly copies the original strings of input tokens. The above three channels are combined via a soft switch to control the production of the concept label from the different sources:

$$
[p_0, p_1, p_2] = \mathrm{softmax}\left(W^{(\mathrm{switch})}\,\mathrm{MLP}(\alpha_t)\right),
$$

where MLP is the same as in Eq. 1, and $p_0, p_1$, and $p_2$ are the probabilities of the three prediction channels respectively. Hence, the final prediction probability of a concept $c$ is given by:

$$
P(c) = p_0 \cdot P^{(\mathrm{vocab})}(c) + p_1 \cdot \Big(\sum_{i \in L(c)} \alpha_t[i]\Big) + p_2 \cdot \Big(\sum_{i \in T(c)} \alpha_t[i]\Big),
$$

where $[i]$ indexes the $i$-th element, and $L(c)$ and $T(c)$ are the index sets of lemmas and tokens respectively whose surface form is $c$.
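The three-channel mixture above can be illustrated with a minimal NumPy sketch. Everything here is a toy stand-in: the dimensions, the random matrices in place of the learned parameters ($W^Q$, $W^K$, $W^V$, $W^{(\mathrm{vocab})}$, $W^{(\mathrm{switch})}$), and the tiny vocabulary and token list are hypothetical, not the trained model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d = dk = 8                                                # hidden and query/key sizes (toy)
tokens = ["the", "boys", "must", "not", "go"]             # hypothetical input tokens
lemmas = ["the", "boy", "must", "not", "go"]              # their lemmas
vocab = ["go-02", "boy", "-", "person", "thing", "must"]  # toy concept vocabulary
n, V = len(tokens), len(vocab)

h = rng.normal(size=(n, d))                  # text memories h_{1:n}
y = rng.normal(size=d)                       # state vector y_t (latest graph decision)
Wq, Wk = rng.normal(size=(dk, d)), rng.normal(size=(dk, d))
Wv = rng.normal(size=(d, d))                 # value projection W^V
Wvocab = rng.normal(size=(V, d))             # W^(vocab)
Wswitch = rng.normal(size=(3, d))            # W^(switch)

# Soft alignment of the new concept over the input tokens
alpha = softmax((Wq @ y) @ (Wk @ h.T) / np.sqrt(dk))

# Eq. 1: alpha-weighted value vectors plus the state
mlp = (h @ Wv.T).T @ alpha + y

p_vocab = softmax(Wvocab @ mlp)              # channel 1: vocabulary generation
p0, p1, p2 = softmax(Wswitch @ mlp)          # soft switch over the three channels

def concept_prob(c):
    """P(c) = p0 * P_vocab(c) + p1 * (lemma copy) + p2 * (token copy)."""
    pv = p_vocab[vocab.index(c)] if c in vocab else 0.0
    pl = sum(alpha[i] for i, l in enumerate(lemmas) if l == c)  # i in L(c)
    pt = sum(alpha[i] for i, w in enumerate(tokens) if w == c)  # i in T(c)
    return p0 * pv + p1 * pl + p2 * pt

print(concept_prob("boy"))  # reachable via both vocabulary generation and lemma copy
```

Note how the copy channels reuse the same attention weights $\alpha_t$, so a concept like "boy" can receive probability mass even when its surface token ("boys") differs from the vocabulary entry.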
# 4.5 Relation Solver

At each graph reasoning step $t$, the relation solver receives a state vector $x_{t}$ that carries the latest concept decision and the output graph memories $s_0, s_1, \ldots, s_m$ from the graph encoder, and aims to point out the nodes in the current graph that have an immediate relation to the new concept (source nodes) and to generate the corresponding edges. Similar to Cai and Lam (2019); Zhang et al. (2019b), we factorize the task into two stages: first, a relation identification module points to some preceding nodes as source nodes; then, a relation classification module predicts the relation type between the new concept and the predicted source nodes. We leave the latter to be determined after iterative inference.

AMR is a rooted, directed, and acyclic graph. The reason for AMR being a graph instead of a tree is that it allows reentrancies, where a concept participates in multiple semantic relations with different semantic roles. Following Cai and Lam (2019), we use multi-head attention for a more compact parsing procedure where multiple source nodes are simultaneously determined. Formally, our relation identification module employs $H$ different attention heads; for each head $h$, we calculate an attention distribution over all existing nodes (including the dummy node $s_0$):

$$
\beta_t^h = \mathrm{softmax}\left(\frac{(W_h^Q x_t)^{\mathrm{T}} W_h^K s_{0:m}}{\sqrt{d_k}}\right).
$$

Then, we take the maximum over the different heads as the final edge probabilities:

$$
\beta_t[i] = \max_{h=1}^{H} \beta_t^h[i].
$$

Therefore, different heads may point to different nodes at the same time. Intuitively, each head represents a distinct relation detector for a particular

![](images/693fb116a1ed3cf8e049101ca2c95c94e19a23f3de6ef154d9436bf59ebbe51c.jpg)
Figure 3: Multi-head attention for relation identification.
set of relation types. For each attention head, it will point to a source node if certain relations exist between the new node and the existing graph; otherwise, it will point to the dummy node. An example with four attention heads and three existing nodes (excluding the dummy node) is illustrated in Figure 3; in the attention matrix, each column corresponds to a unique attention head and each row to an existing node.

# 4.6 Iterative Inference

As described above, the concept solver and the relation solver are conceptually two attention mechanisms over the sequence and the graph respectively, addressing concept prediction and relation prediction separately. The key is to pass the decisions between the solvers so that they can examine each other's answer and make harmonious decisions. Specifically, at each spanning step $i$, we start the iterative inference by setting $x_0 = h_0$ and solving $f(G^i,x_0)$. After the $t$-th graph reasoning, we compute the state vector $y_{t}$, which will be handed over to the concept solver as $g(W,y_{t})$, as:

$$
y_t = \mathrm{FFN}^{(y)}\Big(x_t + \sum_{h=1}^{H} (W_h^V s_{0:m})\,\beta_t^h\Big),
$$

where $\mathrm{FFN}^{(y)}$ is a feed-forward network and $W_h^V$ projects the graph memories into a value space for each head $h$. Similarly, after the $t$-th sequence reasoning, we update the state vector from $y_{t}$ to $x_{t+1}$ as:

$$
x_{t+1} = \mathrm{FFN}^{(x)}\big(y_t + (W^V h_{1:n})\,\alpha_t\big),
$$

where $\mathrm{FFN}^{(x)}$ is a feed-forward network and $W^V$ projects the text memories into a value space.
After $N$ steps of iterative inference, i.e.,

$$
x_0 \rightarrow f(G^i, x_0) \rightarrow y_1 \rightarrow g(W, y_1) \rightarrow x_1 \rightarrow \dots \rightarrow f(G^i, x_{N-1}) \rightarrow y_N \rightarrow g(W, y_N) \rightarrow x_N,
$$

we finally employ a deep biaffine classifier (Dozat and Manning, 2016) for edge label prediction. The classifier uses a biaffine function to score each label, given the final concept representation $x_{N}$ and the node vectors $s_{1:m}$ as input. The resulting concept, edge, and edge label predictions are added to the new graph $G^{i+1}$ if the concept prediction is not EOG, a special concept that we add to indicate termination. Otherwise, the whole parsing process terminates and the current graph is returned as the final result. The complete parsing process adopting the iterative inference is described in Algorithm 1.

Algorithm 1 AMR Parsing via Graph $\leftrightarrows$ Sequence Iterative Inference

Input: the input sentence $W = (w_{1}, w_{2}, \ldots, w_{n})$
Output: the corresponding AMR graph $G$

1: $h_0, h_1, \ldots, h_n = \mathrm{SequenceEncoder}((\mathrm{BOS}, w_1, \ldots, w_n))$ // compute text memories
2: $G^0 = (\mathrm{nodes} = \{\mathrm{BOG}\}, \mathrm{edges} = \emptyset)$ // initialize graph
3: $i = 0$ // start graph expansion
4: while True do
5: $s_0,\ldots ,s_i = \mathrm{GraphEncoder}(G^i)$ // graph memories can be computed incrementally
6: $x_0 = h_0$
7: for $t \gets 1$ to $N$ do // iterative inference
8: $y_{t} = f(G^{i}, x_{t-1})$ // Seq. $\rightarrow$ Graph
9: $x_{t} = g(W, y_{t})$ // Graph $\rightarrow$ Seq.
10: end for
11: if concept prediction is EOG then
12: break
13: end if
14: update $G^{i+1}$ based on $G^i$, $x_N$ and $y_N$
15: $i = i + 1$
16: end while
17: return $G^{i}$
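The inner loop of Algorithm 1 can be sketched with stub solvers in NumPy. This is a loose sketch under simplifying assumptions: random projections replace the learned parameters, a `tanh` update stands in for $\mathrm{FFN}^{(x)}$/$\mathrm{FFN}^{(y)}$, and mean-pooled per-head values replace the model's learned value projections; only the attention shapes and the alternation between the two solvers follow Sections 4.4-4.6.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, n, m, H, N = 8, 6, 4, 2, 4                  # hidden size, tokens, graph nodes, heads, steps

h = rng.normal(size=(n + 1, d))                # text memories (h[0] = BOS)
s = rng.normal(size=(m + 1, d))                # graph memories (s[0] = dummy node BOG)
Wg = rng.normal(size=(H, d, d)) / np.sqrt(d)   # per-head projections (graph side, toy)
Ws = rng.normal(size=(d, d)) / np.sqrt(d)      # projection (sequence side, toy)

def f(x):
    """Graph solver stub: per-head attention over nodes -> (beta, next state y)."""
    beta = np.stack([softmax((Wg[k] @ x) @ s.T / np.sqrt(d)) for k in range(H)])
    y = np.tanh(x + beta.mean(axis=0) @ s)     # stand-in for FFN^(y) with graph values
    return beta, y

def g(y):
    """Concept solver stub: attention over tokens -> (alpha, next state x)."""
    alpha = softmax((Ws @ y) @ h[1:].T / np.sqrt(d))
    x = np.tanh(y + alpha @ h[1:])             # stand-in for FFN^(x) with text values
    return alpha, x

# Lines 6-10 of Algorithm 1: x_0 = h_0, then N rounds of dual reasoning
x = h[0]
for t in range(N):
    beta, y = f(x)        # where to construct
    alpha, x = g(y)       # what to abstract

edge_prob = beta.max(axis=0)   # max over heads -> per-node source probabilities
```

After the loop, `alpha` plays the role of the final concept alignment and `edge_prob` the per-node source probabilities that would feed graph expansion; in the real model the same alternation runs inside every spanning step of the outer while-loop.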
# 5 Training & Prediction

Our model is trained with standard maximum likelihood estimation. The optimization objective is to maximize the sum of the decomposed step-wise log-likelihoods, where each step's log-likelihood is the sum of the concept, edge, and edge label log-probabilities. To facilitate training, we create a reference generation order of nodes by running a breadth-first traversal over the target AMR graphs, as it is cognitively appealing (core-semantic-first principle, Cai and Lam, 2019), and the effectiveness of pre-order traversal was also empirically verified by Zhang et al. (2019a) in a depth-first setting. For the generation order of sibling nodes, we first mix a uniformly random order and a deterministic order sorted by relation frequency in a $1:1$ ratio, then switch to the deterministic order only in the final training steps. We empirically find that this deterministic-after-random strategy slightly improves performance.

During testing, our model searches for the best output graph through beam search based on the log-likelihood at each spanning step. The time complexity of our model is $O(k|V|)$, where $k$ is the beam size and $|V|$ is the number of nodes.

# 6 Experiments

# 6.1 Experimental Setup

Datasets Our evaluation is conducted on two AMR public releases: AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12). AMR 2.0 is the latest and largest AMR sembank and has been extensively used in recent works. AMR 1.0 shares the same development and test set with AMR 2.0, while the size of its training set is only about one-third of AMR 2.0, making it a good testbed to evaluate our model's sensitivity to data size.$^{6}$

Implementation Details We use Stanford CoreNLP (Manning et al., 2014) for tokenization, lemmatization, part-of-speech, and named entity tagging. The hyper-parameters of our models are chosen on the development set of AMR 2.0. Unless specified otherwise, we perform $N = 4$ steps of iterative inference.
Other hyper-parameter settings can be found in the Appendix. Our models are trained using ADAM (Kingma and Ba, 2014) for up to 60K steps (the first 50K with the random sibling order and the last 10K with the deterministic order), with early stopping based on development set performance. We fix the BERT parameters, following Zhang et al. (2019a,b), due to the GPU memory limit. During testing, we use a beam size of 8 to approximate the highest-scored graph.$^{7}$

AMR Pre- and Post-processing We remove senses as done in Lyu and Titov (2018); Zhang et al. (2019a,b) and simply assign the most frequent sense to nodes in post-processing. Notably,
We accordingly implement three ablated models by removing either one of them or removing both. The ablation study not only reveals the individual effect of two model components but also helps facilitate fair comparisons with prior works. + +# 6.2 Experimental Results + +Main Results The performance of AMR parsing is conventionally evaluated by SMATCH (F1) metric (Cai and Knight, 2013). The left block of Table 1 shows the SMATCH scores on the AMR 2.0 test set of our models against the previous best approaches and recent competitors. On AMR 2.0, we outperform the latest push from Zhang et al. (2019b) by $3.2\%$ and, for the first time, obtain a parser with over $80\%$ SMATCH score. Note that even without BERT, our model still outperforms the previous state-of-the-art approaches using BERT (Zhang et al., 2019b,a) with $77.3\%$ . This is particularly remarkable since running BERT is computationally expensive. As shown in Table 2, on AMR 1.0 where the training instances are only around 10K, we improve the best-reported results by $4.1\%$ and reach at $75.4\%$ , which is already higher than + +
| Model | G.R. | BERT | SMATCH | Unlabeled | No WSD | Concept | SRL | Reent. | Neg. | NER | Wiki |
|---|---|---|---|---|---|---|---|---|---|---|---|
| van Noord and Bos (2017) | × | × | 71.0 | 74 | 72 | 82 | 66 | 52 | 62 | 79 | 65 |
| Groschwitz et al. (2018) | ✓ | × | 71.0 | 74 | 72 | 84 | 64 | 49 | 57 | 78 | 71 |
| Lyu and Titov (2018) | ✓ | × | 74.4 | 77.1 | 75.5 | 85.9 | 69.8 | 52.3 | 58.4 | 86.0 | 75.7 |
| Cai and Lam (2019) | × | × | 73.2 | 77.0 | 74.2 | 84.4 | 66.7 | 55.3 | 62.9 | 82.0 | 73.2 |
| Lindemann et al. (2019) | ✓ | ✓ | 75.3 | - | - | - | - | - | - | - | - |
| Naseem et al. (2019) | ✓ | ✓ | 75.5 | 80 | 76 | 86 | 72 | 56 | 67 | 83 | 80 |
| Zhang et al. (2019a) | ✓ | × | 74.6 | - | - | - | - | - | - | - | - |
| Zhang et al. (2019a) | ✓ | ✓ | 76.3 | 79.0 | 76.8 | 84.8 | 69.7 | 60.0 | 75.2 | 77.9 | 85.8 |
| Zhang et al. (2019b) | ✓ | ✓ | 77.0 | 80 | 78 | 86 | 71 | 61 | 77 | 79 | 86 |
| Ours | × | × | 74.5 | 77.8 | 75.1 | 85.9 | 68.5 | 57.7 | 65.0 | 82.9 | 81.1 |
| Ours | ✓ | × | 77.3 | 80.1 | 77.9 | 86.4 | 69.4 | 58.5 | 75.6 | 78.4 | 86.1 |
| Ours | × | ✓ | 78.7 | 81.5 | 79.2 | 88.1 | 74.5 | 63.8 | 66.1 | 87.1 | 81.3 |
| Ours | ✓ | ✓ | 80.2 | 82.8 | 80.8 | 88.1 | 74.2 | 64.6 | 78.9 | 81.1 | 86.3 |

Table 1: SMATCH scores (%) (left) and fine-grained evaluations (%) (right) on the test set of AMR 2.0. G.R./BERT indicates whether or not the results use Graph Re-categorization/BERT respectively.
| Model | G.R. | BERT | SMATCH |
|---|---|---|---|
| Flanigan et al. (2016) | × | × | 66.0 |
| Pust et al. (2015) | × | × | 67.1 |
| Wang and Xue (2017) | ✓ | × | 68.1 |
| Guo and Lu (2018) | ✓ | × | 68.3 |
| Zhang et al. (2019a) | ✓ | ✓ | 70.2 |
| Zhang et al. (2019b) | ✓ | ✓ | 71.3 |
| Ours | × | × | 68.8 |
| Ours | ✓ | × | 71.2 |
| Ours | × | ✓ | 74.0 |
| Ours | ✓ | ✓ | 75.4 |
Table 2: SMATCH scores on the test set of AMR 1.0.

most models trained on AMR 2.0. The even more substantial performance gain on the smaller dataset suggests that our method is both effective and data-efficient. Again, our model without BERT already surpasses previous state-of-the-art results using BERT. For the ablated models, it can be observed that our models yield the best results in all settings where competitors exist, indicating that BERT and graph re-categorization are not the sole keys to our superior performance.

Fine-grained Results In order to investigate how our parser performs on individual sub-tasks, we also use the fine-grained evaluation tool (Damonte et al., 2017) and compare to systems that reported these scores. As shown in the right block of Table 1, our best model obtains the highest scores on almost all sub-tasks. The improvements on all sub-tasks are consistent and uniform (around $2\% \sim 3\%$) compared to the previous state-of-the-art performance (Zhang et al., 2019b), partly confirming that our model boosts performance via consolidated and harmonious decisions rather than by fixing particular phenomena. From our ablation study, it is worth noting that the NER scores are much lower when using graph re-categorization. This is because the rule-based system for NER in graph re-categorization does not generalize well to unseen entities, which suggests a potential improvement from adopting better NER taggers.

![](images/3513dd0a03a1ab83d6a3eea4b9ba27162164f2671bc4a72587fc007efc74c2ab.jpg)
Figure 4: SMATCH scores with different numbers of inference steps. Sentences are grouped by length.

# 6.3 More Analysis

Effect of Iterative Inference We then turn to study the effect of our key idea, namely the iterative inference design. To this end, we run a set of experiments with different numbers of inference steps $N$. The results on AMR 2.0 are shown in Figure 4 (solid line).
As seen, the performance generally goes up as the number of inference steps increases. The difference is most noticeable between 1 (no iterative reasoning is performed) and 2, while later improvements gradually diminish. One important point here is that the model size, in terms of the number of parameters, is constant regardless of the number of inference steps, which distinguishes this effect from generic over-parameterization.

![](images/b353d390adb0b64e1693b1f5e5a924c04e8a92b96fd1e5911ad3299aca7b37bf.jpg)
$\alpha_{1}$: "I have little or no pity for you."

![](images/5fee0b298d775fa3b3b65199f746db990e4e59fac94c5b2bd7e595e04b0b75fa.jpg)
$\alpha_{2}$: "I have little or no pity for you."

![](images/cbb0af6ead23b0d9cd28bd1edf282a77f95e7a43fb779078d897b9353080d46e.jpg)

![](images/a708f41ed0c1aaa7b55ce88df55a040c579458211e35ef2fb08e648e897cd092.jpg)
$\alpha_{3}$: "I have little or no pity for you."

![](images/8be89dbc942ade90e44520f348e7ffb7e5d50731de099674e3db5e2b7d93209c.jpg)
$\alpha_{4}$: "I have little or no pity for you."

![](images/7236b471e8b19840d7789dc1b707eee9ab34ae6da9aec91eaa0c42fd9c01356e.jpg)
Predicted Expansion

![](images/fbfa1083f95db5e90591c9c81bfe904eb94e575a53539e3f48b1f8a3fe2a03e1.jpg)
Golden AMR

For a closer study of the effect of the inference steps with respect to the length of the input sentences, we group sentences into three classes by length and also show the individual results in Figure 4 (dashed lines). As seen, the iterative inference helps more for longer sentences, which confirms our intuition that longer and more complex input needs more reasoning. Another interesting observation is that the performance on shorter sentences reaches its peak earlier. This observation suggests that the number of inference steps can be adjusted according to the input sentence, which we leave as future work.

Effect of Beam Size We are also interested in the effect of beam size during testing.
Ideally, if a model is able to make accurate predictions in the first place, it should rely less on the search algorithm. We vary the beam size and plot the curve in Figure 6. The results show that the performance generally gets better with larger beam sizes. However, a small beam size of 2 already captures most of the benefit, which suggests that our model is robust enough for time-constrained environments.

Visualization We visualize the iterative reasoning process with a case study in Figure 5. We illustrate the values of $\alpha_{t},\beta_{t}$ as the iterative inference progresses. As seen, the parser makes mistakes in the first step, but gradually corrects its decisions and finally makes the right predictions. Later reasoning steps typically provide a sharper attention distribution than earlier steps, narrowing down the most likely answer with more confidence.

Speed We also report the parsing speed of our non-optimized code: with BERT, the parsing speed of our system is about 300 tokens/s, while without BERT, it is about 330 tokens/s on a single Nvidia P4 GPU. The absolute speed depends on various implementation choices and hardware performance.

![](images/ac6e3ccf23b290f22b4fce2518f89e09776b7b02b4b471fc9616b2e85f6c061b.jpg)
Figure 5: Case study (viewed in color). Color shading intensity represents the value of the attention score.

Figure 6: SMATCH scores with different beam sizes.

In theory, the time complexity of our parsing algorithm is $O(kbn)$ , where $k$ is the number of iterative steps, $b$ is the beam size, and $n$ is the graph size (number of nodes). It is important to note that our algorithm is linear in the graph size.

# 7 Conclusion

We presented the dual graph-sequence iterative inference method for AMR parsing. Our method constructs an AMR graph incrementally in a node-by-node fashion.
Each parsing step is explicitly characterized as answering two questions: which parts of the sequence to abstract, and where in the graph to construct. We leverage the mutual causalities between the two and design an iterative inference algorithm. Our model significantly advances the state-of-the-art results on two AMR corpora. An interesting direction for future work is to make the number of inference steps adaptive to the input sentence. The idea proposed in this paper may also be applied to a broad range of structured prediction tasks (not restricted to other semantic parsing tasks) where the complex output space can be divided into two interdependent parts, using a similar iterative inference process to achieve harmonious predictions and better performance.

# References

Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699-1710.
Miguel Ballesteros and Yaser Al-Onaizan. 2017. AMR parsing using stack-LSTMs. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1269-1275.
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186.
Guntis Barzdins and Didzis Gosko. 2016. RIGA at SemEval-2016 task 8: Impact of Smatch extensions and character-level neural translation on AMR parsing accuracy. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1143-1147.
Deng Cai and Wai Lam. 2019. Core semantic first: A top-down approach for AMR parsing.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3797-3807.
Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 748-752.
Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. In Proceedings of the 9th International Conference on Semantic Systems, pages 121-124.
Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. An incremental parser for Abstract Meaning Representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536-546.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Timothy Dozat and Christopher D Manning. 2016. Deep biaffine attention for neural dependency parsing. arXiv preprint arXiv:1611.01734.
Jeffrey Flanigan, Chris Dyer, Noah A Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202-1206.
Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A Smith. 2014. A discriminative graph-based parser for the Abstract Meaning Representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1426-1436.
Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, and Alexander Koller. 2018. AMR dependency parsing with a typed semantic algebra. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1831-1841.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640.
Zhijiang Guo and Wei Lu. 2018. Better transition-based AMR parsing with refined search space. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1712-1722.
Yoon Kim, Yacine Jernite, David Sontag, and Alexander M Rush. 2016. Character-aware neural language models. In Thirtieth AAAI Conference on Artificial Intelligence, pages 2741-2749.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157.
Matthias Lindemann, Jonas Groschwitz, and Alexander Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4576-4585.
Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, and Ting Liu. 2018. An AMR aligner tuned by transition-based parser. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2422-2430.
Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 397-407.
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.
Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, and Miguel Ballesteros. 2019. Rewarding Smatch: Transition-based AMR parsing with reinforcement learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4586-4592.
Rik van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. arXiv preprint arXiv:1705.09980.
Xiaochang Peng, Linfeng Song, and Daniel Gildea. 2015. A synchronous hyperedge replacement grammar based approach for AMR parsing. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 32-41.
Xiaochang Peng, Linfeng Song, Daniel Gildea, and Giorgio Satta. 2018. Sequence-to-sequence models for cache transition systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1842-1852.
Xiaochang Peng, Chuan Wang, Daniel Gildea, and Nianwen Xue. 2017. Addressing the data sparsity issue in neural AMR parsing. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 366-375.
Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into Abstract Meaning Representation using syntax-based machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1143-1154.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, and Andrew McCallum. 2018. Linguistically-informed self-attention for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5027-5038.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. CAMR at SemEval-2016 task 8: An extended transition-based AMR parser. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1173-1178.
Chuan Wang and Nianwen Xue. 2017. Getting the most out of AMR parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257-1268.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.
Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-to-graph transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 80-94.
Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3784-3796.

# A Hyper-parameter Settings

Table 3 lists the hyper-parameters used in our full models. Char-level CNNs and Transformer layers in the sentence encoder and the graph encoder share the same hyper-parameter settings. The BERT model (Devlin et al., 2019) we used is HuggingFace's implementation (Wolf et al., 2019) (bert-base-cased). To mitigate overfitting, we apply dropout (Srivastava et al., 2014) with a drop rate of 0.2 between different layers. We randomly mask (replacing inputs with a special UNK token) the input lemmas, POS tags, and NER tags at a rate of 0.33. Parameter optimization is performed with the ADAM optimizer (Kingma and Ba, 2014) with $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$ . The learning rate schedule is similar to that of Vaswani et al. (2017), with warm-up steps set to 2K. We use early stopping on the development set to choose the best model.

# B AMR Pre- and Post-processing

We follow exactly the same pre- and post-processing steps as Zhang et al. (2019a,b) for graph re-categorization. In preprocessing, we anonymize entities, remove wiki links and polarity attributes, and convert the resulting AMR graphs into a compact format by compressing certain subgraphs. In post-processing, we recover the original AMR format from the compact format, restore Wikipedia links using the DBpedia Spotlight API (Daiber et al., 2013), and add polarity attributes based on rules observed from the training data. More details can be found in Zhang et al. (2019a).
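The learning-rate schedule described in Appendix A (Transformer-style inverse-square-root decay with 2K warm-up steps, following Vaswani et al. (2017)) can be sketched as follows; the scaling `factor` is an assumed constant, not a value from the paper:

```python
def noam_lr(step, d_model=512, warmup=2000, factor=1.0):
    """Inverse-sqrt learning-rate schedule with linear warm-up.

    Rises linearly for `warmup` steps, then decays as step**-0.5;
    `factor` and `d_model` scaling follow Vaswani et al. (2017).
    """
    step = max(step, 1)  # avoid 0**-0.5 at the first step
    return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```

The two branches of the `min` intersect exactly at `step == warmup`, so the schedule peaks at the end of warm-up and decays afterwards.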
| Hyper-parameter | Value |
| --- | --- |
| **Embeddings** | |
| lemma | 300 |
| POS tag | 32 |
| NER tag | 16 |
| concept | 300 |
| char | 32 |
| **Char-level CNN** | |
| #filters | 256 |
| ngram filter size | [3] |
| output size | 128 |
| **Sentence Encoder** | |
| #transformer layers | 4 |
| **Graph Encoder** | |
| #transformer layers | 2 |
| **Transformer Layer** | |
| #heads | 8 |
| hidden size | 512 |
| feed-forward hidden size | 1024 |
| **Concept Solver** | |
| feed-forward hidden size | 1024 |
| **Relation Solver** | |
| #heads | 8 |
| feed-forward hidden size | 1024 |
| **Deep biaffine classifier** | |
| hidden size | 100 |
Table 3: Hyper-parameter settings.
# AMR Parsing with Latent Structural Information

Qiji Zhou $^{1}$ , Yue Zhang $^{2,3}$ , Donghong Ji $^{1*}$ , Hao Tang $^{1}$

$^{1}$ Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China

{qiji.zhou, dhji, tanghaopro}@whu.edu.cn

$^{2}$ School of Engineering, Westlake University

$^{3}$ Institute of Advanced Technology, Westlake Institute for Advanced Study

yue.zhang@wias.org.cn

# Abstract

Abstract Meaning Representations (AMRs) capture sentence-level semantics as structural representations of broad-coverage natural sentences. We investigate parsing AMR with explicit dependency structures and interpretable latent structures. We generate the latent soft structure without additional annotations, and fuse both dependency and latent structure via an extended graph neural network.
The fused structural information helps our model achieve the best reported results on both AMR 2.0 (77.5% Smatch F1 on LDC2017T10) and AMR 1.0 (71.8% Smatch F1 on LDC2014T12).

# 1 Introduction

Abstract Meaning Representations (AMRs) (Banarescu et al., 2013) model sentence-level semantics as rooted, directed, acyclic graphs. Nodes in the graph are concepts, which represent the events, objects and features of the input sentence, and edges between nodes represent semantic relations. AMR introduces re-entrance relations to depict node reuse in the graphs. It has been adopted in downstream NLP tasks, including text summarization (Liu et al., 2015; Dohare and Karnick, 2017), question answering (Mitra and Baral, 2016) and machine translation (Jones et al., 2012; Song et al., 2019).

AMR parsing aims to transform natural language sentences into AMR semantic graphs. Similar to constituent parsing and dependency parsing (Nivre, 2008; Dozat and Manning, 2017), AMR parsers mainly employ two parsing techniques: transition-based parsing (Wang et al., 2016; Damonte et al., 2017; Wang and Xue, 2017; Liu et al., 2018; Guo and Lu, 2018) uses a sequence of transition actions

![](images/f83bb4162088f02eba7a6acbd80abfc547a0f5d76ec53b51e4f436c1810cc133.jpg)
Figure 1: An example of an AMR graph and its corresponding syntactic dependencies for the sentence "The boy came and left". The dashed lines denote relations that appear in the syntactic dependencies but not in the AMR graph.

to incrementally construct the graph, while graph-based parsing (Flanigan et al., 2014; Lyu and Titov, 2018; Zhang et al., 2019a; Cai and Lam, 2019) divides the task into concept identification and relation extraction stages and then generates a full AMR graph with decoding algorithms such as greedy search and maximum spanning tree (MST).
Additionally, reinforcement learning (Naseem et al., 2019) and sequence-to-sequence models (Konstas et al., 2017) have been exploited for AMR parsing as well.

Previous work (Wang et al., 2016; Artzi et al., 2015) shows that structural information can benefit AMR parsing. As illustrated by Figure 1, for example, syntactic dependencies can convey the main predicate-argument structure. However, dependency structural information may be noisy due to the error propagation of external parsers. Moreover, AMR concentrates on semantic relations, which can be different from syntactic dependencies. For instance, in Figure 1, AMR prefers to select the coordination (i.e. "and") as the root, which is different from the syntactic dependencies (i.e. "came").

Given the above observations, we investigate the effectiveness of latent syntactic dependencies for AMR parsing. Different from existing work (Wang et al., 2016), which uses a dependency parser to provide explicit syntactic structures, we make use of a two-parameter distribution (Bastings et al., 2019) to induce latent graphs, which is differentiable under reparameterization (Kingma and Welling, 2014). We thus build an end-to-end model for AMR parsing with induced latent dependency structures as a middle layer, which is tuned during AMR training and can thus be better aligned with the needs of the AMR structure.

To better investigate the correlation between induced and gold syntax, and to better combine their strengths, we additionally consider fusing gold and induced structural dependencies into an align-free AMR parser (Zhang et al., 2019a). Specifically, we first obtain the input sentence's syntactic dependencies$^1$ and treat the input sentence as a prior of the probabilistic graph generator for inferring the latent graph. Second, we propose an extended graph neural network (GNN) for encoding the above structural information.
Subsequently, we feed the encoded structural information into a two-stage align-free AMR parser (Zhang et al., 2019a) to improve AMR parsing.

To our knowledge, we are the first to incorporate latent syntactic structure in AMR parsing. Experimental results show that our model achieves $77.5\%$ and $71.8\%$ SMATCH F1 on the standard AMR benchmarks LDC2017T10 and LDC2014T12, respectively, outperforming all previous best reported results. Beyond that, to some extent, our model can interpret the probabilistic relations between the input words in AMR parsing by generating the latent graph$^2$.

# 2 Baseline: Align-Free AMR Parsing

We adopt the parser of Zhang et al. (2019a) as our baseline, which treats AMR parsing as sequence-to-graph transduction.

# 2.1 Task Formalization

Our baseline splits AMR parsing into a two-stage procedure: concept identification and edge prediction. The first task aims to identify the concepts (nodes) of the AMR graph from the input tokens, and the second task is designed to predict the semantic relations between the identified concepts.

Formally, for a given input sequence of words $\pmb{w} = \langle w_1,\dots,w_n\rangle$ , the goal of concept identification in our baseline is to sequentially predict the concept nodes $\pmb{u} = \langle u_{1},\dots,u_{m}\rangle$ of the output AMR graph, and deterministically assign corresponding indices $\pmb{d} = \langle d_1,\dots,d_m\rangle$ :

$$
P (\boldsymbol {u}) = \prod_ {i = 1} ^ {m} P (u _ {i} \mid u _ {< i}, d _ {< i}, \boldsymbol {w}),
$$

After identifying the concept nodes $\pmb{u}$ and their corresponding indices $\pmb{d}$ , we predict the semantic relations in the search space $\mathcal{R}(u)$ :

$$
\operatorname {Predict}(\boldsymbol {u}) = \underset {r\in \mathcal{R}(\boldsymbol {u})}{\arg \max}\sum_{(u_{i},u_{j})\in r}\operatorname {score}(u_{i},u_{j}),
$$

where $r = \{(u_i, u_j) \mid 1 \leq i, j \leq m\}$ is a set of directed relations between concept nodes.
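The factorized concept probability $P(\boldsymbol{u})$ above is, in practice, accumulated in log space for numerical stability. A minimal sketch, where the per-step conditional probabilities are toy values rather than model outputs:

```python
import math

def sequence_log_prob(step_probs):
    """log P(u) = sum_i log P(u_i | u_<i, d_<i, w).

    `step_probs` holds the per-step conditional probabilities, one per
    generated concept node (toy values here, not model outputs).
    """
    return sum(math.log(p) for p in step_probs)
```

Exponentiating the result recovers the product form of the equation above.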
# 2.2 Align-Free Concept Identification

Our baseline extends the pointer-generator network with a self-copy mechanism for concept identification (See et al., 2017; Zhang et al., 2018a). The extended model can copy nodes not only from the source text, but also from the previously generated list of nodes on the target side.

The concept identifier first encodes the input sentence into concatenated vector embeddings built from GloVe (Pennington et al., 2014), BERT (Devlin et al., 2019), POS (part-of-speech) and character-level (Kim et al., 2016) embeddings. Subsequently, we encode the embedded sentence with a two-layer bidirectional LSTM (Schuster and Paliwal, 1997; Hochreiter and Schmidhuber, 1997):

$$
h _ {i} ^ {l} = [ \overrightarrow {f} ^ {l} (h _ {i} ^ {l - 1}, h _ {i - 1} ^ {l}); \overleftarrow {f} ^ {l} (h _ {i} ^ {l - 1}, h _ {i + 1} ^ {l}) ],
$$

where $h_i^l$ is the $l$ -th layer encoded hidden state at time step $i$ and $h_i^0$ is the embedded token $w_i$ .

Different from the encoding stage, the decoder does not use pre-trained BERT embeddings, but employs a two-layer LSTM to generate the decoding hidden state $s_t^l$ at each time step:

$$
s _ {t} ^ {l} = f ^ {l} (s _ {t} ^ {l - 1}, s _ {t - 1} ^ {l}),
$$

where $s_t^{l-1}$ and $s_{t-1}^l$ are the hidden states from the last layer and the previous time step respectively, and $s_0^l$ is the concatenation of the last bi-directional encoding hidden states. In addition, $s_t^0$ is generated from the concatenation of the previous node $u_{t-1}$ embedding and the attention vector $\widetilde{s}_{t-1}$ , which combines both source and target information:

$$
\widetilde {s} _ {t} = \tanh (W _ {c} [ c _ {t}; s _ {t} ^ {l} ] + b _ {c}),
$$

where $W_{c}$ and $b_{c}$ are trainable parameters, and $c_{t}$ is the context vector calculated from the attention-weighted encoding hidden states and the source attention distribution $a_{\mathrm{src}}^{t}$ following Bahdanau et al.
(2015).

The produced attention vector $\widetilde{s}$ is used to generate the vocabulary distribution:

$$
P _ {\text {vocab}} = \operatorname {softmax} (W _ {\text {vocab}} \widetilde {s} _ {t} + b _ {\text {vocab}}),
$$

as well as the target attention distribution:

$$
\begin{array}{l} e _ {\text {tgt}} ^ {t} = v _ {\text {tgt}} ^ {\top} \tanh \left(W _ {\text {tgt}} \widetilde {s} _ {1: t - 1} + U _ {\text {tgt}} \widetilde {s} _ {t} + b _ {\text {tgt}}\right), \\ a _ {\text {tgt}} ^ {t} = \operatorname {softmax} \left(e _ {\text {tgt}} ^ {t}\right), \\ \end{array}
$$

The source-side copy probability $p_{\mathrm{src}}$ , target-side copy probability $p_{\mathrm{tgt}}$ and generation probability $p_{\mathrm{gen}}$ are calculated from $\widetilde{s}$ and can be treated as generation switches:

$$
\left[ p _ {\text {src}}, p _ {\text {tgt}}, p _ {\text {gen}} \right] = \operatorname {softmax} \left(W _ {\text {switch}} \widetilde {s} _ {t} + b _ {\text {switch}}\right),
$$

The final distribution is defined as follows. If $u_{t}$ is copied from existing nodes:

$$
P ^ {(\text {node})} (u _ {t}) = p _ {\text {tgt}} \sum_ {i: u _ {i} = u _ {t}} ^ {t - 1} a _ {\text {tgt}} ^ {t} [ i ],
$$

otherwise:

$$
P ^ {(\text {node})} (u _ {t}) = p _ {\text {gen}} P _ {\text {vocab}} (u _ {t}) + p _ {\text {src}} \sum_ {i: w _ {i} = u _ {t}} ^ {n} a _ {\text {src}} ^ {t} [ i ],
$$

where $a^t [i]$ is the $i$ -th element of $a^t$ . The existing indices are then deterministically assigned to the identified nodes, depending on whether the node is generated from the target-side distribution.
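The three-way switch and the final node distribution above can be sketched as follows. All weights, attention values and vocabulary entries are toy stand-ins, and the numpy implementation is illustrative rather than the baseline's actual code:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def node_distribution(s_tilde, W_switch, b_switch,
                      p_vocab, a_src, a_tgt,
                      src_tokens, prev_nodes, candidate):
    """P^(node)(u_t) for one candidate concept `candidate`.

    s_tilde: attention vector; W_switch, b_switch: switch weights;
    p_vocab: dict mapping concepts to P_vocab; a_src, a_tgt: source-
    and target-side attention distributions. All shapes/values are toy.
    """
    # [p_src, p_tgt, p_gen] = softmax(W_switch @ s_tilde + b_switch)
    p_src, p_tgt, p_gen = softmax(W_switch @ s_tilde + b_switch)
    if candidate in prev_nodes:
        # copy from previously generated nodes (target side)
        return p_tgt * sum(a_tgt[i] for i, u in enumerate(prev_nodes)
                           if u == candidate)
    # otherwise: generate from the vocabulary or copy from the source
    gen = p_gen * p_vocab.get(candidate, 0.0)
    src_copy = p_src * sum(a_src[i] for i, w in enumerate(src_tokens)
                           if w == candidate)
    return gen + src_copy
```

With zero switch weights the three probabilities each equal 1/3, which makes the mixture easy to check by hand.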
# 2.3 Edge Prediction

Our baseline employs a deep biaffine attention classifier for semantic edge prediction (Dozat and Manning, 2017), which has been widely used in graph-based structure parsing (Peng et al., 2017; Lyu and Titov, 2018; Zhang et al., 2019a).

For a node $u_{t}$ , the probability of $u_{k}$ being the head node of $u_{t}$ and the probability of the label of edge $(u_{k}, u_{t})$ are defined below:

$$
\begin{array}{l} P _ {t} ^ {\left(\text {head}\right)} \left(u _ {k}\right) = \frac {\exp \left(\operatorname {score} _ {k , t} ^ {\left(\text {edge}\right)}\right)}{\sum_ {j = 1} ^ {m} \exp \left(\operatorname {score} _ {j , t} ^ {\left(\text {edge}\right)}\right)}, \\ P _ {k, t} ^ {\left(\text {label}\right)} (l) = \frac {\exp \left(\operatorname {score} _ {k , t} ^ {\left(\text {label}\right)} [ l ]\right)}{\sum_ {l ^ {\prime}} \exp \left(\operatorname {score} _ {k , t} ^ {\left(\text {label}\right)} [ l ^ {\prime} ]\right)}, \\ \end{array}
$$

where $\operatorname{score}^{(\text{edge})}$ and $\operatorname{score}^{(\text{label})}$ are calculated via biaffine attention.

# 3 Model

The overall structure of our model is shown in Figure 2. First, we use an external dependency parser (Manning et al., 2014) to obtain the explicit structural information, and obtain the latent structural information via a probabilistic latent graph generator. We then combine both explicit and latent structural information by encoding the input sentence through an extended graph neural network. Finally, we incorporate our model with an align-free AMR parser for parsing AMR graphs with the benefit of structural information.

# 3.1 Latent Graph Generator

We generate the latent graph of the input sentence via the HardKuma distribution (Bastings et al., 2019), which has both continuous and discrete behaviours. HardKuma can generate samples from the closed interval [0, 1] probabilistically.
This feature allows us to predict soft connection probabilities between input words, which can be seen as a latent graph. Specifically, we treat the embedded input words as a prior of a two-parameter distribution, and then sample a soft adjacency matrix between input words to represent a dependency structure.

HardKuma Distribution The HardKuma distribution is derived from the Kumaraswamy distribution (Kuma) (Kumaraswamy, 1980), which is a two-parameter distribution over the open interval $(0,1)$ , i.e., $K \sim \mathrm{Kuma}(a,b)$ , where $a \in \mathbb{R}_{>0}$ and $b \in \mathbb{R}_{>0}$ . The Kuma distribution is similar to the Beta distribution, but its CDF has a simpler analytical form, with inverse:

$$
C _ {K} ^ {- 1} (u; \boldsymbol {a}, \boldsymbol {b}) = \left(1 - (1 - u) ^ {1 / b}\right) ^ {1 / a},
$$

We can generate samples by:

$$
C _ {K} ^ {- 1} (U; \boldsymbol {a}, \boldsymbol {b}) \sim \operatorname {Kuma} (\boldsymbol {a}, \boldsymbol {b}),
$$

where $U \sim \mathcal{U}(0,1)$ is the uniform distribution, and we can reconstruct this inverse CDF function in a reparameterized fashion (Kingma and Welling, 2014; Nalisnick and Smyth, 2017).

In order to include the two discrete points 0 and 1 in the support, HardKuma employs a stretch-and-rectify method (Louizos et al., 2017), which leads

![](images/45a3ae3c16d3ff350d0ea6c0f5a3d291a10752dfb11628207702fa871ccda08c.jpg)
Figure 2: Sketch of the model, which has four main components: (1) a latent graph generator for producing the soft-connected latent graph ( $\S\S3.1$ ); (2) an extended syntactic graph convolutional network for encoding the structural information ( $\S\S3.2$ ); (3) an align-free concept identifier for concept node generation ( $\S\S2.2$ ); (4) a deep biaffine classifier for relation edge prediction ( $\S\S2.3$ ).
the variable $T\sim \mathrm{Kuma}(\pmb {a},\pmb {b},l,r)$ to be sampled from a Kuma distribution stretched to the open interval $(l,r)$ where $l < 0$ and $r > 1$ . The new CDF is:

$$
C _ {T} (t; \boldsymbol {a}, \boldsymbol {b}, l, r) = C _ {K} \left((t - l) / (r - l); \boldsymbol {a}, \boldsymbol {b}\right),
$$

We pass the stretched variable $T \sim \mathrm{Kuma}(a, b, l, r)$ through a hard-sigmoid function (i.e., $h = \min(1, \max(0, t))$ ) to obtain the rectified variable $H \sim \mathrm{HardKuma}(a, b, l, r)$ . Therefore, the rectified variable covers the closed interval [0, 1]. Note that all negative values of $t$ are deterministically mapped to 0, while all samples $t > 1$ are mapped to $1^3$ . Because the rectified variable is based on the Kuma distribution, HardKuma first samples a uniform variable over the open interval (0, 1) from the uniform distribution $U \sim \mathcal{U}(0, 1)$ , and then generates a Kuma variable through the inverse CDF:

$$
k = C _ {K} ^ {- 1} (u; \pmb {a}, \pmb {b}),
$$

Second, we transform the Kuma variable to cover the stretched support:

$$
t = l + (r - l) k,
$$

Finally, we rectify the stretched variable to the closed interval [0, 1] via the hard-sigmoid function:

$$
h = \min (1, \max (0, t)).
$$

Latent Graph We generate the latent graph of the input words $w$ by sampling from the HardKuma distribution with trained parameters $a$ and $b$ . We first calculate the prior $c$ of $(a, b)$ by employing multi-head self-attention (Vaswani et al., 2017):

$$
\boldsymbol {c} _ {a} = \operatorname {Transformer} _ {a} (\boldsymbol {v}),
$$

$$
\boldsymbol {c} _ {b} = \operatorname {Transformer} _ {b} (\boldsymbol {v}),
$$

where $\pmb{v} = \langle v_1, \dots, v_n \rangle$ are the embedded input words.
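The three-step sampling procedure above (inverse-CDF Kuma sample, stretch, rectify) can be sketched as follows; the support bounds `l = -0.1` and `r = 1.1` are illustrative defaults, not values from the paper:

```python
import numpy as np

def sample_hardkuma(a, b, l=-0.1, r=1.1, rng=None):
    """HardKuma sampling: k = C_K^{-1}(u; a, b), t = l + (r-l)k,
    h = min(1, max(0, t)). `a`, `b` are elementwise shape parameters."""
    rng = rng if rng is not None else np.random.default_rng(0)
    u = rng.uniform(size=np.shape(a))                        # u ~ U(0, 1)
    a, b = np.asarray(a), np.asarray(b)
    k = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)          # inverse CDF
    t = l + (r - l) * k                                      # stretch to (l, r)
    return np.minimum(1.0, np.maximum(0.0, t))               # hard-sigmoid rectify
```

Because `l < 0` and `r > 1`, the rectified sample hits exactly 0 or 1 with non-zero probability, giving discrete on/off edges alongside soft edge weights.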
Subsequently, we compute $\pmb{a}$ and $\pmb{b}$ as:

$$
\boldsymbol{a} = \operatorname{Norm}\left(\boldsymbol{c}_{a}\boldsymbol{c}_{a}^{T}\right),
$$

$$
\boldsymbol{b} = \operatorname{Norm}\left(\boldsymbol{c}_{b}\boldsymbol{c}_{b}^{T}\right),
$$

where $\mathbf{a}_i = \langle a_{i1},\dots,a_{in}\rangle$ and $\pmb{b}_i = \langle b_{i1},\dots,b_{in}\rangle$, $\pmb{a},\pmb{b}\in \mathbb{R}^{n\times n}$, and $\mathrm{Norm}(x)$ is a normalization function. The latent graph $L$ is then sampled via the learned parameters $\pmb{a}$ and $\pmb{b}$:

$$
l_{ij} \sim \mathrm{HardKuma}(a_{ij}, b_{ij}, l, r).
$$

# 3.2 Graph Encoder

For a syntactic graph with $n$ nodes, the cell $A_{ij} = 1$ in the corresponding adjacency matrix indicates an edge from word $w_i$ to word $w_j$. An $L$-layer syntactic GCN can be used to encode $A$, where the hidden vector for each word $w_i$ at the $l$-th layer is:

$$
h_{i}^{(l)} = \sigma\Big(\sum_{j=1}^{n} \tilde{A}_{ij} W^{(l)} h_{j}^{(l-1)} / d_{i} + b^{(l)}\Big),
$$

where $\tilde{A} = A + I$ with the $n\times n$ identity matrix $I$; $d_{i} = \sum_{j=1}^{n}\tilde{A}_{ij}$ is the degree of word $w_{i}$ in the graph, which normalizes the activations so that word representations do not have significantly different magnitudes (Marcheggiani and Titov, 2017; Kipf and Welling, 2017); and $\sigma$ is a nonlinear activation function.

To benefit from both explicit and latent structural information in AMR parsing, we extend the Syntactic-GCN (Marcheggiani and Titov, 2017; Zhang et al., 2018b) with a graph fusion layer and omit labels in the graph (i.e., we only consider connectivity in the GCN).
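As a minimal sketch of the degree-normalized propagation rule above (assuming a ReLU activation for $\sigma$, which the text leaves unspecified), one GCN layer over an adjacency matrix can be written as:

```python
import numpy as np

def gcn_layer(A, H, W, b):
    """One syntactic-GCN layer: h_i = sigma(sum_j Ã_ij W h_j / d_i + b)."""
    A_tilde = A + np.eye(A.shape[0])            # Ã = A + I (add self-loops)
    d = A_tilde.sum(axis=1, keepdims=True)      # d_i = Σ_j Ã_ij (node degree)
    # Row i of (Ã @ H W) is the degree-weighted sum over neighbors of i.
    return np.maximum(0.0, (A_tilde @ (H @ W)) / d + b)  # ReLU stands in for σ

# Toy 3-word graph: word 0 -> word 1, word 1 -> word 2
A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
H = np.random.default_rng(0).normal(size=(3, 4))   # 3 words, dim-4 hidden states
out = gcn_layer(A, H, np.eye(4), np.zeros(4))      # identity weights for the demo
```

The degree division is the normalization the text mentions: without it, high-degree words would accumulate much larger activations than leaves.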
Specifically, we propose to merge the parsed syntactic dependencies and the sampled latent graph through a graph fusion layer:

$$
\boldsymbol{F} = \pi \boldsymbol{L} + (1 - \pi)\boldsymbol{D},
$$

where $\pi$ is a trainable gate variable calculated via the sigmoid function, $D$ and $L$ are the parsed syntactic dependencies and the generated latent graph respectively, and $F$ represents the fused soft graph. $F$ is an $n\times n$ adjacency matrix over the input words $\boldsymbol{w}$; unlike the sparse adjacency matrix $A$, $F_{ij}$ denotes a soft connection degree from word $w_{i}$ to word $w_{j}$. We adapt the syntactic-GCN to the fused adjacency matrix $F$, and employ a gate mechanism:

$$
h_{i}^{(l)} = \mathrm{GELU}\Big(L_{norm}\Big(\sum_{j=1}^{n} G_{j}\big(F_{ij} W^{(l)} h_{j}^{(l-1)} + b^{(l)}\big)\Big)\Big),
$$

We use GELU (Hendrycks and Gimpel, 2016) as the activation function, and apply layer normalization $L_{norm}$ (Ba et al., 2016) before passing the result to GELU. The scalar gate $G_{j}$ is calculated for each edge-node pair:

$$
G_{j} = \mu(h_{j}^{(l-1)} \cdot \hat{v}^{(l-1)} + \hat{b}^{(l-1)}),
$$

where $\mu$ is the logistic sigmoid function, and $\hat{v}$ and $\hat{b}$ are trainable parameters.

# 3.3 Training

Similar to our baseline (Zhang et al., 2019a), we linearize the AMR concept nodes by a pre-order traversal over the training dataset.
We obtain gradient estimates of $\mathcal{E}(\phi, \theta)$ through Monte Carlo sampling from:

$$
\begin{aligned}
\mathcal{E}(\phi, \theta) = \mathbb{E}_{\mathcal{U}(0, I)}[ &\log P(\mathrm{node} \mid u_{t}, g_{\phi}(\boldsymbol{u}, \boldsymbol{w}), \theta) \\
&+ \log P_{t}(\mathrm{head} \mid u_{k}, g_{\phi}(\boldsymbol{u}, \boldsymbol{w}), \theta) \\
&+ \log P_{k,t}(\mathrm{label} \mid l, g_{\phi}(\boldsymbol{u}, \boldsymbol{w}), \theta)] \\
&+ \lambda\, \mathrm{covloss}_{t}
\end{aligned}
$$

where $u_{t}$ is the reference node at time step $t$ with reference head $u_{k}$, and $l$ is the reference edge label between $u_{k}$ and $u_{j}$. The form $g_{\phi}(\mathbf{u},\mathbf{w})$ is shorthand for the latent graph sampled by transforming uniform noise into HardKuma variables (§§3.1).

Unlike Bastings et al. (2019), we do not limit the sparsity of the sampled latent graphs, i.e., we do not control the proportion of zeros in the latent graph, because we prefer to retain the probabilistic connection information for each word in $w$. Finally, we introduce a coverage loss into our estimation to reduce duplication in node generation (See et al., 2017).

# 3.4 Parsing

We directly generate the latent graph from the PDF of the HardKuma distribution with trained parameters $\mathbf{a}$ and $\mathbf{b}$. In the concept identification stage, we decode the node from the final probability distribution $P^{(\mathrm{node})}(u_t)$ at each time step, and apply beam search to sequentially generate the concept nodes $\mathbf{u}$ and deterministically assign the corresponding indices $\mathbf{d}$. For edge prediction, we use a biaffine classifier to calculate the edge scores over the generated nodes $\mathbf{u}$ and indices $\mathbf{d}$:

$$
\boldsymbol{S} = \left\{ \mathrm{score}_{i,j}^{(\mathrm{edge})} \mid 0 \leq i, j \leq m \right\}.
$$

Similar to Zhang et al.
(2019a), we apply a maximum spanning tree (MST) algorithm (Chu, 1965; Edmonds, 1967) to generate the complete AMR graph, and restore the re-entrance relations by merging the repeated nodes via their indices.

# 4 Experiments

# 4.1 Setup

We use two standard AMR corpora: AMR 1.0 (LDC2014T12) and AMR 2.0 (LDC2017T10). AMR 1.0 contains 13051 sentences in total. AMR
| Data | Parser | F1 (%) |
| --- | --- | --- |
| AMR 2.0 | Cai and Lam (2019) | 73.2 |
| | Lyu and Titov (2018) | 74.4±0.2 |
| | Lindemann et al. (2019) | 75.3±0.1 |
| | Naseem et al. (2019) | 75.5 |
| | Zhang et al. (2019a) | 76.3±0.1 |
| | - w/o BERT | 74.6 |
| | Zhang et al. (2019b) | 77.0±0.1 |
| | Ours | 77.5±0.2 |
| | - w/o BERT | 75.5±0.2 |
| AMR 1.0 | Flanigan et al. (2016) | 66.0 |
| | Pust et al. (2015) | 67.1 |
| | Wang and Xue (2017) | 68.1 |
| | Guo and Lu (2018) | 68.3±0.4 |
| | Zhang et al. (2019a) | 70.2±0.1 |
| | - w/o BERT | 68.8 |
| | Zhang et al. (2019b) | 71.3±0.1 |
| | Ours | 71.8±0.2 |
| | - w/o BERT | 70.0±0.2 |
2.0 is larger, and is split into 36521, 1368, and 1371 sentences for the training, development, and test sets respectively. We treat AMR 2.0 as the main dataset in our experiments since it is larger.

We tune hyperparameters on the development set, and store the checkpoints with the best development results for evaluation. We employ the pre-processing and post-processing methods of Zhang et al. (2019a), and obtain the syntactic dependencies via Stanford CoreNLP (Manning et al., 2014). We train our model jointly with the Adam optimizer (Kingma and Ba, 2015). The learning rate is decayed based on development-set results during training. Training takes approximately 22 hours on two Nvidia GeForce RTX 2080 Ti GPUs.

# 4.2 Results

Main Results We compare SMATCH F1 scores (Cai and Knight, 2013) against the previous best reported models and other recent AMR parsers. Table 1 summarizes the results on both the AMR 1.0 and AMR 2.0 datasets. For AMR 2.0, benefiting from the fused structural information, we improve over our baseline (Zhang et al., 2019a) by $1.2\%$ F1 with the full model, and $0.9\%$ F1

Table 1: Main results of SMATCH F1 on AMR 2.0 (LDC2017T10) and 1.0 (LDC2014T12) test sets. Results are evaluated over 3 runs.
| Metric | N'19 | Z'19a | Z'19b | Ours |
| --- | --- | --- | --- | --- |
| SMATCH | 75.5 | 76.3 | 77 | 77.5 |
| Unlabeled | 80 | 79 | 80 | 80.4 |
| No WSD | 76 | 77 | 78 | 78.2 |
| Reentrancies | 56 | 60 | 61 | 61.1 |
| Concepts | 86 | 85 | 86 | 85.9 |
| Named Ent. | 83 | 78 | 79 | 78.8 |
| Wikification | 80 | 86 | 86 | 86.5 |
| Negation | 67 | 75 | 77 | 76.1 |
| SRL | 72 | 70 | 71 | 71.0 |
Table 2: Fine-grained F1 scores on the AMR 2.0 (LDC2017T10) test set. N'19 is Naseem et al. (2019); $Z^{\prime}19_{a}$ is Zhang et al. (2019a); $Z^{\prime}19_{b}$ is Zhang et al. (2019b).

is gained without pre-trained BERT embeddings. In addition, our model outperforms the best reported model (Zhang et al., 2019b) by $0.5\%$ F1. On AMR 1.0, there are only about 10k sentences for training. We outperform the best results by $0.5\%$ SMATCH F1. We observe that our model yields a greater improvement over our baseline on the smaller dataset ($1.6\%$ F1) than on the larger one ($1.2\%$ F1).

Fine-grained Results Table 2 shows fine-grained parsing results for each sub-task on AMR 2.0, evaluated with the enhanced AMR evaluation tools (Damonte et al., 2017). Our model brings more than $1\%$ average improvement over our baseline (Zhang et al., 2019a) on most sub-tasks; in particular, the unlabeled score gains $1.4\%$ F1 from the structural information, and the no-WSD, reentrancies, negation, and SRL sub-tasks all improve by more than $1.0\%$ under our graph encoder. In addition, our model achieves results comparable to the best reported method (Zhang et al., 2019b) on each sub-task.

Ablation Study We investigate the impact of the different kinds of structural information in our model on AMR 2.0 with the main sub-tasks. Table 3 shows that the fused structure performs better on most sub-tasks than the explicit and latent structures. In particular, the models with explicit structure (i.e., both explicit and fused)
| Metric | Explicit | Latent | Fused |
| --- | --- | --- | --- |
| SMATCH | 77.4 | 77.4 | 77.5 |
| Unlabeled | 80.2 | 80.1 | 80.4 |
| Reentrancies | 61.1 | 60.6 | 61.1 |
| Concepts | 85.6 | 86.0 | 85.7 |
| Negation | 75.6 | 75.1 | 76.1 |
| SRL | 70.8 | 70.9 | 71.0 |
Table 3: Ablation study on AMR 2.0 (LDC2017T10) over the different kinds of structural information in our model.
| Graph Type | UAS |
| --- | --- |
| Fused | 84.9% |
| Latent | 64.1% |
Table 4: The UAS of the fused and latent graphs, computed against the corresponding explicit dependencies in the test set (we calculate UAS by predicting the maximum-probability head of each word in the graph).

outperform the model with only the latent structure by $0.5\%$ F1 on the reentrancies sub-task, which demonstrates that the explicit dependency information improves this sub-task. The latent structure performs better on the concepts sub-task, and the fused structure brings more information to the negation sub-task, obtaining $0.5\%$ and $1.0\%$ improvements over the explicit and latent structures respectively.

Additionally, we note that both the latent and explicit models outperform the previous best reported SMATCH F1 score, and the fused model reaches the best results. This shows that different types of structural information can help AMR parsing; we discuss the connection tendencies of each structure in §§4.3.

# 4.3 Discussion

Experimental results show that both the explicit structure and the latent structure can improve the performance of AMR parsing, and the latent structural information reduces errors in sub-tasks such as concepts and SRL. Unlike the discrete relations of explicit structures, the internal latent structure holds soft connection probabilities between words in the input sentence, so that each fully-connected word receives information from all the other words.

Figure 3 depicts the latent and fused soft adjacency matrices of the input sentence "The boy came and left". It can be seen that the latent matrix (Figure 3a) tries to retain information from most word pairs, and the AMR root "and" holds high connection probabilities to each word in the sentence. In addition, the main predicates and arguments in the sentence tend to be connected with high probabilities. The fused matrix (Figure 3b) holds similar connection probabilities for the predicates and arguments in the sentence, and it reduces the connection degree of the determiner "The", which does not appear in the corresponding AMR graph. Moreover, the syntactic root "came" and the semantic root "and" retain most of the connection probability to the other words.

![](images/795dce90bfa649d4af4f1729faebd69850569c4fd59959f81a63d8cf01a0e9a6.jpg)
(a) Latent Matrix

![](images/8e6f2ec8e3e88767c6fa7fe0a1dbfe6a4a8c3a7e18061ca84a3e1f0e7c617a9d.jpg)
(b) Fused Matrix

Figure 3: The latent soft adjacency matrix (a) and fused soft adjacency matrix (b) of the input sentence "The boy came and left".

We compare the connections in the different structures in Figure 4. The latent graph (Figure 4a) prefers to connect most words, and the main predicates and arguments in the graph have higher connection probabilities. The fused graph (Figure 4c) shows that our model provides core structural information between interpretable relations. Specifically, it holds potential relations similar to the annotated AMR graph, and tries to attenuate the connection information for words that are not aligned to AMR concept nodes.

Beyond that, we calculate the Unlabeled Attachment Score (UAS) of the fused and latent graphs in Table 4. The unsupervised latent graph captures fewer explicit edges than the fused graph, and both the fused and latent graphs ignore some arcs of the explicit graph. This shows that a lower UAS does not imply a lower AMR parsing score, and that some arcs are useful for AMR parsing yet absent from the explicit gold trees. Consequently, we preserve the explicit and latent structural information simultaneously. The latent structure not only improves AMR parsing, but also has the ability to interpret the latent connections between the input words.
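As a minimal illustration of the UAS computation described above (with a hypothetical toy matrix, not the paper's data), each word's head is predicted as the argmax over its soft connection probabilities and compared against the gold dependency heads:

```python
import numpy as np

def uas_from_soft_graph(soft_adj, gold_heads):
    """UAS: fraction of words whose maximum-probability head matches the gold head."""
    pred_heads = np.asarray(soft_adj).argmax(axis=1)   # max-probability head per word
    return float((pred_heads == np.asarray(gold_heads)).mean())

# Toy 3-word soft adjacency matrix (rows: dependents, columns: candidate heads)
soft = [[0.1, 0.8, 0.1],
        [0.2, 0.1, 0.7],
        [0.3, 0.6, 0.1]]
print(uas_from_soft_graph(soft, gold_heads=[1, 2, 2]))  # prints 0.6666666666666666
```

Here two of the three argmax heads match the gold heads, so the score is 2/3; the same procedure applied to the fused and latent matrices yields the values in Table 4.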
![](images/4e998e8bb809add0a985a23fb6b7c43fb9642b25c2306ecfc455a7b77e47be37.jpg)

![](images/db451a83bf56074013a3901a35b7b2949a82400a041a10d9248cb3eab9e07128.jpg)
(a) Latent Graph
(c) Fused Graph

![](images/c17430a455fa29f9e01b87012e7f2b1f64b7ed6cdc15c4f1f2666a2e3071992d.jpg)
(b) Syntactic Graph

![](images/d0b0b17d8e3b287362255140a0f94222d298633721635040001711f035f9c961.jpg)
(d) AMR Graph
Figure 4: Different structures for the sentence "The boy came and left". (a): the latent graph; (b): the syntactic graph; (c): the fused graph; (d): the AMR graph. (We construct the latent and fused graphs by selecting the top-2 soft connections for each word; in addition, we ignore edges whose connection probabilities are less than 0.5.)

# 5 Related Work

Transition-based AMR parsers (Wang et al., 2016; Damonte et al., 2017; Wang and Xue, 2017; Liu et al., 2018; Guo and Lu, 2018; Naseem et al., 2019) suffer from the lack of annotated alignments between words and concept nodes, which are crucial for these models. Lyu and Titov (2018) treat the alignments as a latent variable in their probabilistic model, which jointly obtains the concept, relation, and alignment variables. Sequence-to-sequence AMR parsers transform AMR graphs into serialized sequences via external traversal rules, and then restore the generated AMR sequence, avoiding the alignment issue (Konstas et al., 2017; van Noord and Bos, 2017). Moreover, Zhang et al. (2019a) extend a pointer generator (See et al., 2017), which can generate a node multiple times without alignments through the copy mechanism.

With regard to latent structure, Naradowsky et al. (2012) couple syntactically-oriented NLP tasks to combinatorially constrained hidden syntactic representations. Bowman et al. (2016), Yogatama et al. (2017) and Choi et al. (2018) generate unsupervised constituent trees for text classification.
The latent constituent trees are shallower than human-annotated ones, yet they can boost the performance of downstream NLP tasks (e.g., text classification). Guo et al. (2019) and Ji et al. (2019) employ self-attention and biaffine attention mechanisms, respectively, to generate softly connected graphs, and then adopt GNNs to encode the soft structure and exploit the structural information in their tasks.

GCNs and their variants are increasingly applied to embedding syntactic and semantic structures in NLP tasks (Kipf and Welling, 2017; Marcheggiani and Titov, 2017; Damonte and Cohen, 2019). The Syntactic-GCN tries to alleviate error propagation from external parsers with a gating mechanism: it encodes both relations and labels with the gates, and filters the output of each GCN layer over the dependencies (Marcheggiani and Titov, 2017; Bastings et al., 2017). Damonte and Cohen (2019) encode AMR graphs via GCNs to improve the AMR-to-text generation task.

# 6 Conclusion

We investigate latent structure for AMR parsing, and show that the inferred latent graph can interpret the connection probabilities between input words. Experimental results show that the latent structural information improves on the best reported parsing performance on both AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12). We also propose to incorporate the latent graph into other multi-task learning problems (Chen et al., 2019; Kurita and Søgaard, 2019).

# Acknowledgments

We thank the anonymous reviewers for their detailed comments. We are grateful to Zhiyang Teng for his discussions and suggestions. This work is supported by the National Natural Science Foundation of China (NSFC-61772378), the National Key Research and Development Program of China (No. 2017YFC1200500) and the Major Projects of the National Social Science Foundation of China (No. 11&ZD189). We also would like to acknowledge funding support from Westlake University and the Bright Dream Joint Institute for Intelligent Robotics.
+ +# References + +Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1699-1710, Lisbon, Portugal. Association for Computational Linguistics. +Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. CoRR, abs/1607.06450. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffith, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria. Association for Computational Linguistics. +Joost Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Florence, Italy. Association for Computational Linguistics. +Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Sima'an. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 1957-1967. + +Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, and Christopher Potts. 2016. A fast unified model for parsing and sentence understanding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. 
+Deng Cai and Wai Lam. 2019. Core semantic first: A top-down approach for AMR parsing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3797-3807, Hong Kong, China. Association for Computational Linguistics. +Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Association for Computational Linguistics. +Mingda Chen, Qingming Tang, Sam Wiseman, and Kevin Gimpel. 2019. A multi-task approach for disentangling syntax and semantics in sentence representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2453-2464, Minneapolis, Minnesota. Association for Computational Linguistics. +Jihun Choi, Kang Min Yoo, and Sang-goo Lee. 2018. Learning to compose task-specific tree structures. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5094-5101. +Yoeng-Jin Chu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396-1400. +Marco Damonte and Shay B. Cohen. 2019. Structural neural encoders for AMR-to-text generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3649-3658, Minneapolis, Minnesota. Association for Computational Linguistics. +Marco Damonte, Shay B. Cohen, and Giorgio Satta. 2017. 
An incremental parser for abstract meaning representation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 536-546, Valencia, Spain. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of + +deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. +Shibhansh Dohare and Harish Karnick. 2017. Text summarization using abstract meaning representation. CoRR, abs/1706.01678. +Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. +Jack Edmonds. 1967. Optimum branchings. Journal of Research of the national Bureau of Standards B, 71(4):233-240. +Jeffrey Flanigan, Chris Dyer, Noah A. Smith, and Jaime Carbonell. 2016. CMU at SemEval-2016 task 8: Graph-based AMR parsing with infinite ramp loss. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1202–1206, San Diego, California. Association for Computational Linguistics. +Jeffrey Flanigan, Sam Thomson, Jaime Carbonell, Chris Dyer, and Noah A. Smith. 2014. A discriminative graph-based parser for the abstract meaning representation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1426-1436, Baltimore, Maryland. Association for Computational Linguistics. +Zhijiang Guo and Wei Lu. 2018. Better transition-based AMR parsing with a refined search space. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1712-1722, Brussels, Belgium. Association for Computational Linguistics. +Zhijiang Guo, Yan Zhang, and Wei Lu. 2019. Attention guided graph convolutional networks for relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 241-251, Florence, Italy. Association for Computational Linguistics. +Dan Hendrycks and Kevin Gimpel. 2016. Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780. +Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2475-2485, Florence, Italy. Association for Computational Linguistics. + +Bevan Jones, Jacob Andreas, Daniel Bauer, Karl Moritz Hermann, and Kevin Knight. 2012. Semantics-based machine translation with hyperedge replacement grammars. In Proceedings of COLING 2012, pages 1359-1376, Mumbai, India. The COLING 2012 Organizing Committee. +Yoon Kim, Yacine Jernite, David A. Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2741-2749. +Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. +Diederik P. Kingma and Max Welling. 2014. Autoencoding variational bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings. +Thomas N. Kipf and Max Welling. 2017. 
Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, and Luke Zettlemoyer. 2017. Neural AMR: Sequence-to-sequence models for parsing and generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 146-157, Vancouver, Canada. Association for Computational Linguistics.
Ponnambalam Kumaraswamy. 1980. A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46(1-2):79-88.
Shuhei Kurita and Anders Søgaard. 2019. Multi-task semantic dependency parsing with policy gradient for learning easy-first strategies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2420-2430, Florence, Italy. Association for Computational Linguistics.
Matthias Lindemann, Jonas Groschwitz, and Alexander Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4576-4585, Florence, Italy. Association for Computational Linguistics.
Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. 2015. Toward abstractive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies,
2017. Learning sparse neural networks through $l_0$ regularization. CoRR, abs/1712.01312. +Chunchuan Lyu and Ivan Titov. 2018. AMR parsing as graph prediction with latent alignment. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 397-407, Melbourne, Australia. Association for Computational Linguistics. +Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55-60. +Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1506-1515, Copenhagen, Denmark. Association for Computational Linguistics. +Arindam Mitra and Chitta Baral. 2016. Addressing a question answering challenge by combining statistical methods with inductive rule learning and reasoning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2779-2785. +Eric T. Nalisnick and Padhraic Smyth. 2017. Stick-breaking variational autoencoders. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. +Jason Naradowsky, Sebastian Riedel, and David Smith. 2012. Improving NLP through marginalization of hidden syntactic structure. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 810-820, Jeju Island, Korea. Association for Computational Linguistics. +Tahira Naseem, Abhishek Shah, Hui Wan, Radu Florian, Salim Roukos, and Miguel Ballesteros. 2019. Rewarding Smatch: Transition-based AMR parsing with reinforcement learning. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4586-4592, Florence, Italy. Association for Computational Linguistics. + +Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-553. +Rik van Noord and Johan Bos. 2017. Neural semantic parsing by character-based translation: Experiments with abstract meaning representations. CoRR, abs/1705.09980. +Hao Peng, Sam Thomson, and Noah A. Smith. 2017. Deep multitask learning for semantic dependency parsing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2037-2048, Vancouver, Canada. Association for Computational Linguistics. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Michael Pust, Ulf Hermjakob, Kevin Knight, Daniel Marcu, and Jonathan May. 2015. Parsing English into abstract meaning representation using syntax-based machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1143-1154, Lisbon, Portugal. Association for Computational Linguistics. +Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Trans. Signal Processing, 45(11):2673-2681. +Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics. +Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using AMR. 
Transactions of the Association for Computational Linguistics, 7:19-31. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008. +Chuan Wang, Sameer Pradhan, Xiaoman Pan, Heng Ji, and Nianwen Xue. 2016. CAMR at semeval-2016 task 8: An extended transition-based AMR parser. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, June 16-17, 2016, pages 1173-1178. + +Chuan Wang and Nianwen Xue. 2017. Getting the most out of AMR parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1257-1268, Copenhagen, Denmark. Association for Computational Linguistics. +Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2017. Learning to compose words into sentences with reinforcement learning. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. +Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019a. AMR parsing as sequence-tograph transduction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 80-94, Florence, Italy. Association for Computational Linguistics. +Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019b. Broad-coverage semantic parsing as transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3784-3796, Hong Kong, China. Association for Computational Linguistics. +Sheng Zhang, Xutai Ma, Rachel Rudinger, Kevin Duh, and Benjamin Van Durme. 2018a. 
Cross-lingual decompositional semantic parsing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1664-1675, Brussels, Belgium. Association for Computational Linguistics. +Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018b. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205-2215, Brussels, Belgium. Association for Computational Linguistics. + +# A Appendix + +# A.1 Details of Model Structures and Parameters + +
| Module | Hyper-parameter | Value |
| --- | --- | --- |
| GloVe embeddings | dim | 300 |
| BERT embeddings | source | BERT-base-cased |
| | dim | 768 |
| POS tag embeddings | dim | 100 |
| CharCNN | num_filters | 100 |
| | ngram_filter_sizes | [3] |
| Graph Encoder | gcn_hidden_dim | 512 |
| | gcn_layers | 1 |
| Latent Graph Generator | HardKuma_support | [-0.1, 1.1] |
| | k_dim | 64 |
| | v_dim | 64 |
| | n_heads | 8 |
| Encoder | hidden_size | 512 |
| | num_layers | 2 |
| Decoder | hidden_size | 1024 |
| | num_layers | 2 |
| Deep Biaffine Classifier | edge_hidden_size | 256 |
| | label_hidden_size | 128 |
| Optimizer | type | Adam |
| | learning_rate | 0.001 |
| | max_grad_norm | 5.0 |
| Coverage loss | weight λ | 1.0 |
| Beam search | beam size | 5 |
| Dropout | | 0.33 |
| Batch size | train_batch_size | 64 |
| | test_batch_size | 32 |
Table 5: Hyper-parameter settings

We select the best hyper-parameters based on the results on the development set, and we fix the hyper-parameters at the test stage. We use a two-layer highway LSTM as the encoder and a two-layer LSTM as the decoder for the align-free node generator. Table 5 shows the details.

# A.2 More Examples

To discuss the generated latent graph in different situations, we provide two examples from the test set on the next page.

Figure 5 gives the analysis of an interrogative sentence: "What advice could you give me?" It shows that the latent graph of the sentence holds the most information between predicates and arguments. Both the AMR root "advice" and the dependency root "give" receive more attention from other words, and the fused graph retains more information about the predicates and arguments of the original sentence as well.

For a longer sentence with multiple predicate-argument structures, Figure 6 depicts the latent and fused graphs of the sentence "You could go to the library on Saturdays and do a good 8 hours of studying there". In this case, the corresponding latent graph becomes shallower, and the AMR root "and" holds most of the information from other words. Besides, the fused graph indicates that predicates receive more information from other words, and to some extent, phrases tend to be connected by the fused graph generator.

![](images/447d83a7387946f0f2b6c495b120affda3b0e517f3fe290e73e48be4e15bd1b6.jpg)

![](images/07e26844a501ed770f56c2239247fc7aa6f635770e172b802074122afb995656.jpg)

![](images/121f1cb3c3b8a20b3cedbd9960b558d88ebb7e493d08fe5fabb62c388babe066.jpg)

![](images/c9fcf5438b50510f91ed9a94a3dd36781a1ff8671f605fc49600fd417dd91f05.jpg)
Figure 5: An analysis of the sentence "What advice could you give me?" (panels: the sentence with its dependency parse and latent matrix, and its AMR with the fused matrix)
+ +![](images/1a6b22d16c7e59909d822527468b5edf5ef1369578ba4e3da7a082abb379802f.jpg) +Sentence: +You could go to the library on Saturdays and do a good 8 hours of studying there. +Dependency: +Latent Matrix: + +![](images/01e3c20778319980740a825e5738749e3e7be385e0d2af76203c509c52199348.jpg) +AMR: +Fused Matrix: + +![](images/7a0ca55fbf5f81aa296b40c11036f3dc8a576061e929b11b00a9d1e6ff2a6fd3.jpg) +Figure 6: An analysis of the sentence "You could go to the library on Saturdays and do a good 8 hours of studying there." + +![](images/13841320381c3f181b9ed8e62108e1973ce9cd6da75a31bbc3b605418dafc82c.jpg) \ No newline at end of file diff --git a/amrparsingwithlatentstructuralinformation/images.zip b/amrparsingwithlatentstructuralinformation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b5ba411e41819f31843d73fafcdf68f54b985cd5 --- /dev/null +++ b/amrparsingwithlatentstructuralinformation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e1d71251788632e118cd164710197bbf1e7f06254c336bffe379c2372dd213c4 +size 639061 diff --git a/amrparsingwithlatentstructuralinformation/layout.json b/amrparsingwithlatentstructuralinformation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..5672da51f5b5ce113a69b21e47ee42bee480a140 --- /dev/null +++ b/amrparsingwithlatentstructuralinformation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:806a5169a74b37e5c8cd3ca68851eeb7a3595ab0e817debab2ccb8f27b4e807f +size 528335 diff --git a/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_content_list.json b/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6c13c54d63bc5ae9e51271bebc426bc249fb23ce --- /dev/null +++ 
b/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3dd5ffd17d18c15de65d6edd4f7c10dcef490c38a188d673077bdbfefe12c6b6 +size 40789 diff --git a/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_model.json b/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_model.json new file mode 100644 index 0000000000000000000000000000000000000000..79f20604fd68bf74f635af16ba8c4df71288a9fb --- /dev/null +++ b/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4c6e28840d7609485a4a8a312687876a2c80d0654dca83ef4b9033e2b02c095 +size 45854 diff --git a/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_origin.pdf b/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..82dfc7d2f6a963508709ba63ed685bc052fe676e --- /dev/null +++ b/amultiperspectivearchitectureforsemanticcodesearch/71143fc7-c388-4dd7-a8a9-fd93804eb623_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82bd640591467405144cbaa60a3460a6f1f592c9dd5f05a485f943b93da5904d +size 531243 diff --git a/amultiperspectivearchitectureforsemanticcodesearch/full.md b/amultiperspectivearchitectureforsemanticcodesearch/full.md new file mode 100644 index 0000000000000000000000000000000000000000..af1526d8f486c90acc17cfc3d7de375dd618a9ab --- /dev/null +++ b/amultiperspectivearchitectureforsemanticcodesearch/full.md @@ -0,0 +1,191 @@ +# A Multi-Perspective Architecture for Semantic Code Search + +Rajarshi Haldar†, Lingfei Wu‡, Jinjun Xiong‡, Julia Hockenmaier† + +†University of Illinois at Urbana-Champaign, Champaign, IL, USA +‡IBM Thomas J. 
Watson Research Center, Yorktown Heights, NY, USA

{rhaldar2, juliahmr}@illinois.edu

{wuli, jinjun}@us.ibm.com

# Abstract

The ability to match pieces of code to their corresponding natural language descriptions and vice versa is fundamental for natural language search interfaces to software repositories. In this paper, we propose a novel multi-perspective cross-lingual neural framework for code-text matching, inspired in part by a previous model for monolingual text-to-text matching, to capture both global and local similarities. Our experiments on the CoNaLa dataset show that our proposed model yields better performance on this cross-lingual text-to-code matching task than previous approaches that map code and text to a single joint embedding space.

# 1 Introduction

In semantic code search or retrieval, the user provides a natural language query, and the system returns a ranked list of relevant code snippets from a database or repository for that query. This task is usually performed using a matching model that computes the similarity between code snippets and natural language descriptions by mapping code and natural language embeddings into a common space where the distance between a piece of code and its corresponding description is small (Gu et al., 2018; Yao et al., 2019).

But current models do not explicitly model any interactions between the code and the description until the final step, when their global similarity is calculated.

In this paper, we propose a novel multi-perspective neural framework for code-text matching that captures both global and local similarities. We show that it yields improved results on semantic code search.

We apply our model to the CoNaLa benchmark dataset (Yin et al., 2018), which consists of Python code snippets and their corresponding annotations in English. We believe that our model could be applied to other programming languages as well. We have made our code publicly available for research purposes$^{1}$.
# 2 Background

Semantic code search is a cross-modal ranking problem where items in one modality (code) need to be ranked according to how well they match queries in another (natural language). One standard way to compute the similarity of items drawn from two different modalities or languages is to map each modality into a common "semantic" vector space such that matching pairs are mapped to vectors that are close to each other.

Gu et al. (2018) propose a code retrieval framework that jointly embeds code snippets and NL descriptions into a high-dimensional embedding space such that the vectors representing a code snippet and its corresponding description have high similarity.

A variety of different approaches for learning embeddings for code have been proposed. Because source code is less ambiguous than natural language, there are ways to exploit the underlying structure of code to obtain better representations. Wan et al. (2019) and LeClair et al. (2020) show that using features extracted from Abstract Syntax Trees (ASTs) and Control Flow Graphs (CFGs) leads to better representations of code. Hu et al. (2018) and Haque et al. (2020) show that ASTs represented as compact strings can be used to represent code. Following these approaches, we developed a multi-modal framework that generates embeddings for code using both the code tokens and an AST representation.

# 3 Models

We compare four models: a baseline model (CT) that only considers text and source code, a model (CAT) that also includes embeddings of Abstract Syntax Trees, a multi-perspective model (MP) that leverages multi-perspective matching operations as defined in a bilateral multi-perspective model (Wang et al., 2017), and our MP-CAT model that combines both the MP and CAT architectures.

# 3.1 CT: A Baseline Code and Text Model

Our baseline model (CT) is based on Gu et al. (2018)'s CODEnn model.
It maps both code and natural language descriptions to vectors in the same embedding space and then computes the similarity between these vectors using the L2 distance metric. These vectors are computed by two sets of three layers (one set per modality):

The Word Embedding Module consists of two independently pre-trained lookup tables that map code tokens or natural language tokens to embeddings. We use FastText (Bojanowski et al., 2017) for all embeddings in this paper.

The Context Representation Module consists of bi-directional LSTM layers (one for code, one for text) that map the word embedding sequences into another pair of sequences of embeddings that contain contextual information.

The Maxpool Layer performs max pooling (separately per dimension) over the Context Representation embedding sequences to obtain a single vector.

The Similarity Module computes the similarity of the two vectors $v_{c}$ and $v_{d}$ produced by the Maxpool Layers as

$$
d(v_{1}, v_{2}) = \sum_{i=1}^{d} (v_{1i} - v_{2i})^{2}
$$

$$
\mathrm{sim}(v_{c}, v_{d}) = 1 - d\left(\frac{v_{c}}{\|v_{c}\|_{2}}, \frac{v_{d}}{\|v_{d}\|_{2}}\right)
$$

where $d$ returns the L2 distance between $d$-dimensional vectors $v_{c}$ and $v_{d}$.

# 3.2 CAT: An AST-Based Model

To capture both syntactic and semantic features, we augment our baseline CT model with embeddings based on the Abstract Syntax Tree (AST) representation of the code. Most programming languages, including Python, come with a deterministic parser that outputs the AST representation of a code snippet. Python has a library module called ast that generates AST representations of code. We convert this AST representation to a string using structure-based traversal (SBT) (Hu et al., 2018). The CAT model is similar to the CT model, except that it extracts features from both the source code tokens and its corresponding AST representation.
So the Word Embedding Module now contains three lookup tables: for code, AST, and natural language, respectively. Similarly, the Context Representation Module has 3 bi-directional LSTM layers, which are followed by 3 Maxpool Layers. Before the output is passed to the similarity module, the output vectors of the two max pool layers representing code and AST are concatenated to form a single representation of the source code. Because of this, the hidden dimension in the bidirectional LSTMs of the Context Representation Module for the natural language sequence is double that of the code and AST sequences' LSTM hidden dimensions. This ensures that, after concatenation, the vectors representing the candidate code snippet and the natural language description are of the same dimension. After that, the Similarity Module computes the similarity of these vectors via the same L2-distance-based operation as in CT.

# 3.3 MP: A Multi-Perspective Model

The CT and CAT models learn to map source code and natural language tokens into a joint embedding space such that semantically similar code-natural language pairs are projected to vectors that are close to each other. However, these two representations interact only in the final step, when the global similarity of the sequence embeddings is calculated, but not during the first step, when each sequence is encoded into its corresponding embedding. Wang et al. (2017) show that, for tasks such as paraphrase identification and natural language inference that require comparing two pieces of text from the same language, it is beneficial to include a number of different (i.e., multi-perspective) local matching operations between the two input sequences when computing their vector representations. Given contextual sequence encodings $P$ and $Q$ (computed, e.g., by biLSTMs) for the two sequences to be compared, Wang et al.
(2017)'s Bilateral Multi-Perspective Matching (BiMPM) model includes a matching mechanism that compares $P$ and $Q$ by matching each position in $P$ with all positions in $Q$, and by matching each position in $Q$ with all positions in $P$, under four different matching strategies. We will discuss these strategies in more detail under the Bilateral Multi-Perspective Matching (BiMPM) Module.

We apply the MP model to our cross-modal code-text matching task as follows: The Word Embedding Layer takes as input the code sequence, AST sequence, and description sequence. The output of this layer is three independent sequences of token embeddings, one for each input sequence.

The Context Representation Module consists of three sets of BiLSTM layers that each computes a contextual representation of each token in the corresponding input sequence. We concatenate the hidden states of the sequences representing the code and AST, respectively, to get one set of sequence embeddings representing the source code input.

The Bilateral Multi-Perspective Matching (BiMPM) Module compares the two sequences, say $P$ and $Q$, by matching each position in $P$ with all positions in $Q$, and by matching each position in $Q$ with all positions in $P$, under four different matching strategies $m$ that each produce new embedding sequences $P_{m}^{\prime}$ and $Q_{m}^{\prime}$ of the same length as the original $P$ and $Q$. Each matching strategy is parameterized by a feedforward network (e.g. $P^{\prime}[i]_{m} = f_{m}^{P\rightarrow Q}(P[i],Q_{m};W_{m}^{P\rightarrow Q})$) that takes in a token embedding $P[i]$ and a strategy-specific single-vector representation $Q_{m}$, and returns a new vector $P^{\prime}[i]_{m}$ for $P[i]$. For each token $P[i]\in P$ (and conversely for any $Q[j]\in Q$), $Q_{m}$ (resp. $P_{m}$) is defined as follows:

Full matching sets $Q_{m}$ (resp. $P_{m}$) to be the final hidden state of $Q$ (and vice versa for $P$).
Maxpool matching obtains $Q_{m}$ by performing maximum pooling (per dimension) across the elements of $Q$.

Attentive matching computes $Q_{m}$ as a weighted average of all $Q[j] \in Q$, where $Q[j]$'s weight is the cosine similarity of $P[i]$ and $Q[j]$.

Max-Attentive matching sets $Q_{m}$ to be the $Q[j]$ with the highest cosine similarity to $P[i]$.

We concatenate the four $P'[i]_m$ ($Q'[i]_m$) for each token $i$ to get two new sequences $P'$ and $Q'$.

The Local Aggregation Module aggregates these sequence embeddings into two fixed-length multi-perspective hidden representations by passing them through two different bi-LSTM layers (one for each sequence). For each sequence, we concatenate the final hidden states of both the forward and reverse directions to get a vector representation of that sequence.

The Similarity Module computes the similarity of the two vectors returned by the Aggregation Module as before.

# 3.4 MP-CAT: A Combined Model

Our final model combines the MP and the CAT models. It contains the following components:

The CAT module reads in the code sequence, the AST sequence, and the natural language sequence and outputs two vectors, one jointly representing the code and the AST and the other representing the natural language description.

The MP module also reads in the code sequence, the AST sequence, and the natural language sequence. It returns two vectors, one for code and AST, and the other for the natural language description. The difference between this module and the previous one is that MP contains local information that is ignored in the global CAT embeddings.

The Global and Local Fusion Module concatenates the two CAT and MP vectors representing the code to get the final code representation, and does the same for the CAT and MP vectors representing the natural language description, before computing their L2 distance in the same manner as the other similarity modules.
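The attentive matching strategy above and the normalized-L2 similarity of Section 3.1 can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's released code; the function names and toy shapes are ours, and the sketch assumes non-zero vectors and non-zero cosine row sums.

```python
import numpy as np

def sim(v_c, v_d):
    """Section 3.1 similarity: 1 minus the (squared) L2 distance
    between the L2-normalized sequence embeddings."""
    v_c = v_c / np.linalg.norm(v_c)
    v_d = v_d / np.linalg.norm(v_d)
    return 1.0 - np.sum((v_c - v_d) ** 2)

def attentive_matching(P, Q):
    """Attentive matching: for each position P[i], build Q_m as the
    cosine-weighted average of all Q[j].  P: (m, d), Q: (n, d);
    returns one Q_m row per P[i], shape (m, d)."""
    Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    cos = Pn @ Qn.T                          # (m, n) cosine similarities
    weights = cos / cos.sum(axis=1, keepdims=True)
    return weights @ Q                       # weighted average of Q rows
```

Each `Q_m` row would then be paired with `P[i]` and fed to the strategy's feedforward network $f_m^{P\rightarrow Q}$.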
Figure 1 shows the pipeline of the MP-CAT framework.

# 4 Experiments

The CoNaLa Dataset The CoNaLa dataset (Yin et al., 2018) has two parts, a manually curated parallel corpus of 2,379 training and 500 test examples, and a large automatically-mined dataset with 600k examples (which we ignore here). Each example consists of a snippet of Python code and its corresponding English description.

Pre-processing We pre-process the text representing both the source code and the natural language descriptions using sub-word regularization based on unigram language modeling (Kudo, 2018), which transforms the original tokens into sequences of shorter (and hence more common) substrings. We use the sentencepiece library (Kudo and Richardson, 2018) and follow the same approach as used by Yin et al. (2018) for the CoNaLa dataset.

Training procedure During training, we use triplets consisting of a code snippet, a correct description, and an incorrect description (obtained by

![](images/28b2e545290dd2354f0aa9c42c3b280503bd657a0971321bbb34b0ba354adddd.jpg)
Figure 1: The MP-CAT framework that contains both global-level and local-level features for code-text matching
| Framework | Training Time (s) | Evaluation Time (s) |
| --- | --- | --- |
| CT | 4663.10 | 6755.62 |
| CAT | 6702.69 | 11050.68 |
| MP | 183393.47 | 17374.14 |
| MP-CAT | 240062.38 | 25306.97 |
+ +Table 1: Training and Evaluation times for all our models. The models were trained for 100 epochs and the evaluation time was computed on 500 test queries. + +
| Frameworks | MRR | R@1 | R@5 | R@10 |
| --- | --- | --- | --- | --- |
| CT | 0.172 | 7.4 | 24.0 | 39.6 |
| CAT | 0.207 | 9.0 | 32.2 | 45.0 |
| MP | 0.154 | 6.4 | 21.6 | 33.6 |
| MP-CAT | 0.220 | 11.0 | 32.2 | 47.4 |
Table 2: Code Search Results

random sampling from the training set). We sample 5 incorrect descriptions for each code-text pair, giving us five triplets for each training example. During the evaluation phase, for every natural language query $\mathcal{D}$, we calculate the rank of its corresponding code snippet $\mathcal{C}$ among all 500 candidates in the test set.

# 4.1 Experimental Setup

We train our models on triplets $\langle C, D^{+}, D^{-} \rangle$ consisting of a snippet of code $C$, a natural language description $D^{+}$ that correctly describes what the code does (a positive example), and a description $D^{-}$ that does not describe what the code does (a negative example). We minimize the ranking loss with margin $\epsilon$, following Gu et al. (2018):

$$
\mathcal{L}(\theta) = \sum_{\langle C, D^{+}, D^{-} \rangle} \max\left(0, \epsilon - \cos(C, D^{+}) + \cos(C, D^{-})\right)
$$

In the CAT model, since we first concatenate the vectors for the code and AST before comparing them with the vector for the natural language description, the first two vectors are each half the dimension size of the third one. Our models are implemented in PyTorch (Paszke et al., 2017) and trained using Adam (Kingma and Ba, 2014).

Each model is trained for 100 epochs, and during the evaluation step, we use a set of 500 natural language queries from the test set. The training and evaluation times are shown in Table 1.

# 4.2 Results

Table 2 shows our test set results for code search. We report Recall@K (K=1,5,10) and mean reciprocal rank (MRR) of the correct answer.

The Impact of Modeling ASTs: In going from the first (CT) row to the second (CAT) row in Table 2, we see that the AST features alone increase MRR from 0.172 to 0.207. There is also an increase in R@k for all values of k. In fact, its R@5 values are competitive with our best model.
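The training setup described in Section 4.1 (five sampled negatives per pair, margin ranking loss over cosine similarities) can be sketched as follows on precomputed embeddings. This is an illustrative sketch, not the released code; `make_triplets` and `ranking_loss` are our own names, and the margin value is a placeholder, not the paper's setting.

```python
import random
import numpy as np

def cosine(u, v):
    """Cosine similarity of two non-zero vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def make_triplets(pairs, k=5, rng=random):
    """For each (code, correct description) pair, sample k incorrect
    descriptions from the other training pairs."""
    triplets = []
    for i, (code, pos) in enumerate(pairs):
        others = [d for j, (_, d) in enumerate(pairs) if j != i]
        for neg in rng.sample(others, k):
            triplets.append((code, pos, neg))
    return triplets

def ranking_loss(c, d_pos, d_neg, eps=0.05):
    """max(0, eps - cos(C, D+) + cos(C, D-)) for one triplet;
    eps is a placeholder margin."""
    return max(0.0, eps - cosine(c, d_pos) + cosine(c, d_neg))
```

The loss is zero whenever the correct description already beats the incorrect one by at least the margin, so training only pushes on triplets the model still ranks poorly.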
+ +Multi-Perspective Results: The results for the multi-perspective models are both surprising and interesting. Row 3 of Table 2 shows that the MP model on its own under-performs and actually has the worst results out of all the models we tested. On the other hand, we see that combining the MP and the CAT models into one framework gives the best performance across the board. This shows that even if we use a multi-perspective framework to model local features, we still need encoders to capture the global features of code and text in addition to the local features; otherwise, we end up missing the forest for the trees. + +
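For reference, the two metrics reported in Table 2 are computed from the 1-based rank of each query's correct snippet among the 500 candidates; a minimal sketch (function names are ours):

```python
def mrr(ranks):
    """Mean reciprocal rank over 1-based ranks of the correct snippet."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def recall_at_k(ranks, k):
    """Fraction of queries whose correct snippet is ranked in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)
```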
| Query | MP-CAT | CAT |
| --- | --- | --- |
| Sort dictionary 'x' by value in ascending order | `sorted(list(x.items()), key=operator.itemgetter(1))` | `for k in sorted(foo.keys()): pass` |
| Run a command 'echo hello world' in bash instead of shell | `os.system('/bin/bash -c "echo hello world"')` | `os.system('GREPDB="echo 123"; /bin/bash -c "$GREPDB"')` |
| Select records of dataframe 'df' where the sum of column 'X' for each value in column 'User' is 0 | `df.groupby('User')['X'].filter(lambda x: x.sum() == 0)` | `print(df.loc[df['B'].isin(['one', 'three'])])` |
+ +Table 3: The top hits returned by the MP-CAT and CAT models for a natural language query. + +
| Query | MP-CAT | MP |
| --- | --- | --- |
| Concatenate elements of a list 'x' of multiple integers to a single integer | `sum(d*10**i for i, d in enumerate(x[::-1]))` | `[float(i) for i in lst]` |
| Convert pandas DataFrame 'df' to a dictionary using 'id' field as the key | `df.set_index('id').to_dict()` | `data[data['Value'] == True]` |
| Replace repeated instances of a character '*' with a single instance in a string 'text' | `re.sub('\*\*\*+', '*', text)` | `re.sub(['*(?!cat).)*cat(?!?;cat).)*cat', '\\\|\Bull', s)` |
Table 4: The top hits returned by the MP-CAT and MP models for a natural language query.

Comparison of MP-CAT, MP and CAT Models In Table 3, we present the retrieval results for select natural language queries from the development set returned by the MP-CAT and CAT models. We do the same for the MP-CAT and MP models in Table 4. Comparing MP-CAT and CAT, we observe that while CAT correctly identifies the data structures and libraries required to solve the user's problem, it ends up returning the wrong command. MP, on the other hand, sometimes fails to identify even the correct libraries required. In the second example in Table 4, it fails to understand that there is also a dictionary involved and ends up returning the wrong command. MP-CAT successfully finds the required code snippet when the user queries are longer and have multiple data structures involved.

# 5 Conclusions

In this paper, we consider the task of semantic code search or retrieval using a code-text similarity model. We propose MP-CAT, a novel multi-perspective deep neural network framework for this task. In contrast to previous approaches, the multi-perspective nature of our model allows it to capture richer similarities between the two sequences.

# Acknowledgement

This work is supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network.

# References

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.

Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. In Proceedings of the 2018 40th International Conference on Software Engineering (ICSE 2018). ACM.

Sakib Haque, Alexander LeClair, Lingfei Wu, and Collin McMillan. 2020. Improved automatic summarization of subroutines via attention to file context. ArXiv, abs/2004.04881.
Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, ICPC '18, pages 200-210, New York, NY, USA. ACM.

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. International Conference on Learning Representations.

Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 66-75, Melbourne, Australia. Association for Computational Linguistics.

Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.

Alexander LeClair, Sakib Haque, Lingfei Wu, and Collin McMillan. 2020. Improved code summarization via a graph neural network. ArXiv, abs/2004.02843.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS-W.

Yao Wan, Jingdong Shu, Yulei Sui, Guandong Xu, Zhou Zhao, Jian Wu, and Philip S. Yu. 2019. Multi-modal attention network learning for semantic source code retrieval. 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 13-25.

Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pages 4144-4150.

Ziyu Yao, Jayavardhan Reddy Peddamail, and Huan Sun. 2019. CoaCor: Code annotation for code retrieval with reinforcement learning.
In The World Wide Web Conference, WWW '19, pages 2203-2214, New York, NY, USA. ACM. +Pengcheng Yin, Bowen Deng, Edgar Chen, Bogdan Vasilescu, and Graham Neubig. 2018. Learning to mine aligned code and natural language pairs from stack overflow. In International Conference on Mining Software Repositories, MSR, pages 476-486. ACM. \ No newline at end of file diff --git a/amultiperspectivearchitectureforsemanticcodesearch/images.zip b/amultiperspectivearchitectureforsemanticcodesearch/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..db9a6433ce307413f5e8ab41e1ef1ac99d6e3dbc --- /dev/null +++ b/amultiperspectivearchitectureforsemanticcodesearch/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eeb8b2fe6c7ac79ad5528ed2980bb1be8ce5b3269cd0c6ea0f614b7e115a6d90 +size 165120 diff --git a/amultiperspectivearchitectureforsemanticcodesearch/layout.json b/amultiperspectivearchitectureforsemanticcodesearch/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a021eb97c2412786f46fac504864097164fa983f --- /dev/null +++ b/amultiperspectivearchitectureforsemanticcodesearch/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d5904325fe95263289ab1217550cd607bea4a0dcad295f345f1e22207f6c3841 +size 193769 diff --git a/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_content_list.json b/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..5fe9978b7e6803b701c21b165879ffb779c181df --- /dev/null +++ b/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:354e455ccaf35cab82fef2b518c3a62540b8df251112525256dcd3fd6cd10cd7 +size 66932 diff --git 
a/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_model.json b/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..bb814959a143e6039df13a5f3899a0da6d6c664d --- /dev/null +++ b/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea7ccbdb1517447d8fbe0aa2c8d4dd5d09865e49a4a497bf9fabebd743ae695a +size 81684 diff --git a/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_origin.pdf b/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..6f1722bf24525d6aa625d1694681ebb2a4064ea3 --- /dev/null +++ b/amultitasklearningapproachfordiacriticrestoration/9908e425-990c-4c86-bd11-8425809123a9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bfc01566e747b70e60fb539135fcb9e8f729dd47a51d671a763cfd631d43a303 +size 433525 diff --git a/amultitasklearningapproachfordiacriticrestoration/full.md b/amultitasklearningapproachfordiacriticrestoration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..31cc7465cd65db65cbb3dac3d7eab695e8c346ae --- /dev/null +++ b/amultitasklearningapproachfordiacriticrestoration/full.md @@ -0,0 +1,229 @@ +# A Multitask Learning Approach for Diacritic Restoration + +Sawsan Alqahtani $^{1,2}$ and Ajay Mishra $^{1}$ and Mona Diab $^{2*}$ + +$^{1}$ AWS, Amazon AI + +2The George Washington University + +sawsa@amazon.com, misaja@amazon.com, mtdiab@gwu.edu + +# Abstract + +In many languages like Arabic, diacritics are used to specify pronunciations as well as meanings. Such diacritics are often omitted in written text, increasing the number of possible pronunciations and meanings for a word. 
This results in more ambiguous text, making computational processing of such text more difficult. Diacritic restoration is the task of restoring missing diacritics in the written text. Most state-of-the-art diacritic restoration models are built on character-level information, which helps generalize the model to unseen data but presumably loses useful information at the word level. Thus, to compensate for this loss, we investigate the use of multi-task learning to jointly optimize diacritic restoration with related NLP problems, namely word segmentation, part-of-speech tagging, and syntactic diacritization. We use Arabic as a case study since it has sufficient data resources for the tasks that we consider in our joint modeling. Our joint models significantly outperform the baselines and are comparable to state-of-the-art models that are more complex, relying on morphological analyzers and/or a lot more data (e.g. dialectal data).

# 1 Introduction

In contrast to English, some vowels in languages such as Arabic and Hebrew are not part of the alphabet, and diacritics are used for vowel specification. In addition to specifying vowels, diacritics can also represent other features such as case marking and phonological gemination in Arabic. Not including diacritics in the written text in such languages increases the number of possible meanings as well as pronunciations. Humans rely on the surrounding context and their previous knowledge to infer the meanings and/or pronunciations of words. Computational models, on the other hand, are inherently limited in dealing with missing diacritics, which pose a challenge for such models due to increased ambiguity.

Diacritic restoration (or diacritization) is the process of restoring these missing diacritics for every character in the written text. It can specify pronunciation and can be viewed as a relaxed variant of word sense disambiguation.
For example, the Arabic word $\text{Elm}$ can mean "flag" or "knowledge", but the meaning as well as the pronunciation is specified when the word is diacritized ($\text{Ealamu}$ means "flag" while $\text{Eilomo}$ means "knowledge"). As an illustrative example in English, if we omit the vowels in the word $pn$, the word can be read as $pan$, $pin$, $pun$, or $pen$; each of these variants has a different pronunciation and meaning if it forms a valid word in the language. + +State-of-the-art diacritic restoration models have reached decent performance over the years using recurrent or convolutional neural networks, in terms of accuracy (Zalmout and Habash, 2017; Alqahtani et al., 2019; Orife, 2018) and/or efficiency (Alqahtani et al., 2019; Orife, 2018); yet, there is still room for further improvement. Most of these models are built on character-level information, which helps the model generalize to unseen data but presumably loses some useful information at the word level. Since word-level resources are insufficient to be relied upon for training diacritic restoration models, we integrate additional linguistic information that considers word morphology as well as word relationships within a sentence to partially compensate for this loss. + +In this paper, we improve the performance of diacritic restoration by building a multitask learning model (i.e. joint modeling). Multitask learning refers to models that learn more than one task at the same time, and has recently been shown to provide good solutions for a number of NLP tasks (Hashimoto et al., 2016; Kendall et al., 2018). + +The use of a multitask learning approach provides an end-to-end solution, in contrast to generating the linguistic features for diacritic restoration as a preprocessing step. In addition, it alleviates the reliance on other computational and/or data resources to generate these features.
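The $pn$ example can be made concrete: dropping vowels is a many-to-one mapping, and restoration must choose among the words that collapse onto the same form. A toy sketch in plain Python (the vocabulary and vowel set are illustrative English stand-ins, not the Arabic diacritic inventory):

```python
VOWELS = set("aeiou")

def strip_vowels(word):
    """Simulate writing a word without its vowels/diacritics."""
    return "".join(ch for ch in word if ch not in VOWELS)

def candidates(skeleton, vocabulary):
    """All vocabulary words that collapse onto the same undiacritized
    form: the ambiguity a restoration model must resolve."""
    return sorted(w for w in vocabulary if strip_vowels(w) == skeleton)

vocab = {"pan", "pin", "pun", "pen", "nap", "spin"}
print(candidates("pn", vocab))  # ['pan', 'pen', 'pin', 'pun']
```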
Furthermore, the proposed model is flexible such that a task can be added or removed depending on data availability. This makes the model adaptable to other languages and dialects. + +We consider the following auxiliary tasks to boost the performance of diacritic restoration: word segmentation, part-of-speech (POS) tagging, and syntactic diacritization. We use Arabic as a case study for our approach since it has sufficient data resources for the tasks that we consider in our joint modeling. + +The contributions of this paper are twofold: + +1. We investigate the benefits of automatically learning related tasks to boost the performance of diacritic restoration; +2. In doing so, we devise a state-of-the-art model for Arabic diacritic restoration as well as a framework for improving diacritic restoration in other languages that include diacritics. + +# 2 Diacritization and Auxiliary Tasks + +We formulate the problem of (full) diacritic restoration (DIAC) as follows: given a sequence of characters, we identify the diacritic corresponding to each character in that sequence from the set of diacritics $\{\mathrm{a}, \mathrm{u}, \mathrm{i}, \mathrm{o}, \mathrm{K}, \mathrm{F}, \mathrm{N}, \sim, \sim\mathrm{a}, \sim\mathrm{u}, \sim\mathrm{i}, \sim\mathrm{F}, \sim\mathrm{K}, \sim\mathrm{N}\}$. We additionally consider three auxiliary tasks: syntactic diacritization, part-of-speech tagging, and word segmentation. Two of these (syntactic diacritization and POS tagging) operate at the word level, while the remaining tasks (diacritic restoration and word segmentation) operate at the character level. This helps diacritic restoration utilize information from both the character and word levels, bridging the gap between the two. + +Syntactic Diacritization (SYN): This refers to the task of retrieving the diacritics related to the syntactic position of each word in the sentence, which is a sub-task of full diacritic restoration. Arabic is a templatic language in which words comprise roots and patterns, and patterns are typically reflective of diacritic distributions. Verb patterns are more or less predictable; nouns, however, tend to be more complex. Arabic diacritics can be divided into lexical and inflectional (or syntactic) diacritics. Lexical diacritics change the meanings of words as well as their pronunciations, and their distribution is bound by patterns/templates. In contrast, inflectional diacritics are related to the syntactic positions of words in the sentence and are added to the last letter of the main morphemes of words (word finally), changing their pronunciations. Inflectional diacritics are also affected by a word's root (e.g. weak roots) and semantic or morphological properties (e.g. with the same grammatical case, masculine and feminine plurals take different diacritics). + +Thus, the same word can be assigned a different syntactic diacritic reflecting syntactic case, i.e. depending on its relations to the remaining words in the sentence (e.g. subject or object). For example, the diacritized variants Ealama and Ealamu, which both mean "flag", have the corresponding syntactic diacritics a and u, respectively. That being said, the main trigger for accurate syntactic prediction is the relationships between words, capturing semantic and, most importantly, syntactic information. + +Because Arabic has a unique set of diacritics, this study formulates syntactic diacritization in the following way: each word in the input is tagged with a single diacritic representing its syntactic position in the sentence. The set of diacritics in syntactic diacritization is the same as the set of diacritics for full diacritic restoration. Other languages that include diacritics may have syntax-related diacritics, but of a different manner and complexity compared to Arabic. + +Word segmentation (SEG): This refers to the process of separating affixes from the main unit of the word.
Word segmentation is commonly used as a preprocessing step for different NLP applications, and its usefulness is apparent in morphologically rich languages. For example, the undiacritized word whm might be diacritized as waham~a "and concerned" or waham "illusion", where the first diacritized word consists of two segments "wa ham~a" while the second is composed of one word. Word segmentation can be formulated in the following way: each character in the input is tagged following the IOB tagging scheme ($B$: beginning of a segment; $I$: inside a segment; $O$: out of the segment) (Diab et al., 2004). + +Part-Of-Speech Tagging (POS): This refers to the task of determining the syntactic role of a word (i.e. part of speech) within a sentence. POS tags are highly correlated with diacritics (both syntactic and lexical): knowing one helps determine or reduce the possible choices of the other. For instance, the word ktb in the sentence ktb [someone] means "books" if we know it to be a noun, whereas the word would be either katab "someone wrote" or kat~ab "made someone write" if it is known to be a verb. + +POS tagging can be formulated in the following way: each word in the input is assigned a POS tag from the Universal Dependencies tagset (Taji et al., 2017). + +# 3 Approach + +We built a joint diacritic restoration model and studied the extent to which sharing information across tasks improves diacritic restoration performance. Our joint model is motivated by the recent success of the hierarchical modeling proposed in (Hashimoto et al., 2016), such that information learned from an auxiliary task is passed as input to the diacritic restoration related layers. + +# 3.1 Input Representation + +Since our joint model may involve both character and word level tasks, we began our investigation by asking the following question: how can we integrate information between these two levels?
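As a toy preview of the two representations this section develops, with short Python lists standing in for embedding vectors (all names, sizes, and values are illustrative; the real model uses a BiLSTM over learned character embeddings and pretrained word embeddings rather than raw concatenation):

```python
# Toy stand-ins: one pretrained word embedding (2-dim) and randomly
# initialized character embeddings (1-dim each).
word_emb = {"cat": [0.5, 0.1]}
char_emb = {"c": [0.2], "a": [0.9], "t": [0.4]}

def char_based_word(word):
    """Character-based word representation: concatenate the word's
    character embeddings (a BiLSTM would then compress this)."""
    vec = []
    for ch in word:
        vec.extend(char_emb[ch])
    return vec

def word_to_char(word):
    """WordToChar representation: concatenate the word's embedding
    onto each of its characters' embeddings."""
    return [word_emb[word] + char_emb[ch] for ch in word]

print(char_based_word("cat"))  # one vector for the whole word
print(word_to_char("cat"))     # one contextualized vector per character
```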
Starting from the randomly initialized character embeddings as well as a pretrained set of embeddings for words, we follow two approaches (Figure 1 visually illustrates the two approaches with an example). + +![](images/70a2ad59a80f74656e353be067869c1d41cb897a8594bda367e4bafa6e68df07.jpg) +Figure 1: An example of embedding vectors for the word $cat$ and its individual characters: c, a, and t. (i) A character-based representation for the word $cat$ from its individual characters; (ii) A concatenation of the word embedding with each of its individual characters. + +(1) Character Based Representation: We pass information learned by character level tasks into word level tasks by composing a word embedding from the word's characters. We first concatenate the individual embeddings of the characters in that word, and then apply a Bidirectional Long Short Term Memory (BiLSTM) layer to generate denser vectors. This helps encode morphology and word composition into the model. +(2) Word-To-Character Representation: To pass information learned by word level tasks into character level tasks, we concatenate each word with each of its composing characters during each pass, similar to what is described in Watson et al. (2018)'s study. This helps distinguish the individual characters based on the surrounding context, implicitly capturing additional semantic and syntactic information. + +![](images/7f3ba7885e4a422af0c9f0864f2c5b1d0a54bf31b300649d014455ab341ecc2a.jpg) +(i) Input representation + +![](images/36504e11ca700692f8437393a51ef416523e45d61af4101707b28c3a9e207abf.jpg) +(ii) Diacritic Restoration +Figure 2: The diacritic restoration joint model. All Char Embed entities refer to the same randomly initialized character embedding learned during the training process. Pretrained embeddings refer to fixed word embeddings obtained from fastText (Bojanowski et al., 2017). (i) shows the input representation for CharToWord and WordToChar embedding, which is the same as in Figure 1.
(ii) represents the diacritic restoration joint model; output labels from each task are concatenated with the WordToChar embedding and optionally with the last hidden layer of SEG. + +# 3.2 The Joint Model + +For all architectures, the main component is the BiLSTM (Hochreiter and Schmidhuber, 1997; Schuster and Paliwal, 1997), which preserves the temporal order of the sequence and has been shown to provide state-of-the-art performance in terms of accuracy (Zalmout and Habash, 2017; Alqahtani et al., 2019). After representing characters through random initialization and representing words using pretrained embeddings obtained from fastText (Bojanowski et al., 2017), the learning process for each batch runs as follows: + +1. We extract the two additional input representations described in Section 3.1; +2. We apply a BiLSTM to each of the different tasks separately to obtain their corresponding outputs; +3. We pass all outputs from all tasks as well as the WordToChar embedding vectors as input to the diacritic restoration model and obtain our diacritic outputs. + +Figure 2 illustrates the diacritic restoration joint model. As can be seen, SYN as well as POS tagging are trained on top of the CharToWord representation, which is basically the concatenation of the pretrained embedding for each word with the character-based representations described in Figure 1. SEG is trained separately on top of the character embeddings. We pass the outputs of all these tasks along with the WordToChar representation to train the BiLSTM diacritic restoration model. Omitting a task is rather easy: we just remove the related components for that task to yield the appropriate model. We optionally pass the last hidden layer for SEG along with the remaining input to the diacritic restoration model. + +# 4 Experimental Setups + +Dataset: We use the Arabic Treebank (ATB) dataset, parts 1, 2, and 3, and follow the same data division as Diab et al. (2013). Table 1 illustrates the data statistics.
For word-based tasks, we segment each sentence into space-tokenized words. For character-based tasks, we additionally add a special boundary symbol between these words, and then each word is further segmented into its characters, similar to (Alqahtani et al., 2019). We pass each word through the model along with a specific number of previous and future words (+/- 10 words). + +Parameter Settings: For all tasks, we use 250 hidden units in each direction (500 units in both directions combined) and an embedding size of 300. We use 3 hidden layers for all tasks except in SEG, in
| Train | Test | Dev | OOV |
| --- | --- | --- | --- |
| 502,938 | 63,168 | 63,126 | 7.3% |
+ +Table 1: Number of words and out of vocabulary (OOV) rate for Arabic. OOV rate indicates the percentage of undiacritized words in the test set that have not been observed during training. + +which we use only one layer. We use the Adam optimizer with a learning rate of 0.001. We train for 20 epochs with a batch size of 16, a hidden dropout of 0.3, and an embedding dropout of 0.5. We initialize the embeddings with a uniform distribution [-0.1,0.1] and the hidden layers with a normal distribution. The loss scores for all considered tasks are combined and then normalized by the number of tasks in the model. + +Evaluation metrics: We use accuracy for all tasks except diacritic restoration. For diacritic restoration, the two most typically used metrics are Word Error Rate (WER) and Diacritic Error Rate (DER), the percentages of incorrectly diacritized words and characters, respectively. In order to approximate errors in the syntactic diacritics, we use Last Diacritic Error Rate (LER), the percentage of words that have incorrect diacritics in the last positions of words. To evaluate the models' ability to generalize beyond observed data, we compute WER on OOV (out-of-vocabulary) words. + +Significance testing: We ran each experiment three times and reported the mean score. We used the t-test with $p = 0.05$ to evaluate whether the difference between each model's performance and that of the baseline diacritic restoration model is significant (Dror et al., 2018). + +# 5 Results and Analysis + +Table 2 shows the performance of joint diacritic restoration models when different tasks are considered. When we consider WordToChar as input to the diacritic restoration model, we observe statistically significant improvements for all evaluation metrics. This is justified by the ability of word embeddings to capture syntactic and semantic information at the sentence level. The same character is disambiguated in terms of the surrounding context
the character $t$ in the word cat would be represented slightly differently from $t$ in a related word like cats or even a different word like table). We consider both the character-based model and the WordToChar-based model as our baselines (BASE). + +We use the WordToChar representation rather than characters for all remaining models that jointly learn more than one task. For all experiments, we observe improvements compared to both baselines across all evaluation metrics. Furthermore, all models except DIAC+SEG outperform the WordToChar diacritic restoration model in terms of WER, showing the benefits of considering output distributions for the other tasks. Despite leveraging tasks focused on syntax (SYN/POS) or morpheme boundaries (SEG), the improvements extend to lexical diacritics as well. Thus, the proposed joint diacritic restoration model is also helpful in settings beyond word-final syntactic diacritics. The best performance is achieved when we consider all auxiliary tasks within the diacritic restoration model. + +Impact of Auxiliary Tasks: We discuss the impact of adding each investigated task on the performance of the diacritic restoration model. + +Word segmentation (DIAC+SEG): When morpheme boundaries as well as diacritics are learned jointly, the WER performance is slightly reduced on all and OOV words. This reduction is attributed mostly to lexical diacritics. As Arabic exhibits a non-concatenative fusional morphology, reducing its complexity to a segmentation task might inherently obscure morphological processes for each form. + +Observing only slight improvement is surprising; we believe that this is due to our experimental setup and does not negate the importance of having morphemes that assign the appropriate diacritics. We speculate that the reason for this is that we do not capture the interaction between morphemes as an entity, losing some level of morphological information.
+ +For instance, the words $\text{waham} \sim a$ versus $\text{wahum}$ for the undiacritized word $\text{whm}$ (bold letters refer to consonants, distinguishing them from diacritics) would benefit from morpheme boundary identification to tease apart $\text{wa}$ from $\text{hum}$ in the second variant $(\text{wahum})$, emphasizing that these are two words. But on the other hand, it adds an
| Task | WER | DER | LER/Lex | OOV WER |
| --- | --- | --- | --- | --- |
| Zalmout and Habash (2017) | 8.21 | - | - | 20.2 |
| Zalmout and Habash (2019a) | 7.50 | - | - | - |
| Alqahtani and Diab (2019a) | 7.6 | 2.7 | - | 32.1 |
| BASE (Char) | 8.51 (±0.01) | 2.80 | 5.20/5.54 | 34.56 |
| BASE (WordToChar) | 8.09 (±0.05) | 2.73 | 5.00/5.30 | 32.10 |
| DIAC+SEG | 8.35 (±0.02) | 2.82 | 5.20/5.46 | 33.97 |
| DIAC+SYN | 7.70* (±0.02) | 2.60 | 4.72/5.08 | 30.94 |
| DIAC+POS | 7.86* (±0.14) | 2.65 | 4.72/5.20 | 32.28 |
| DIAC+SEG+SYN | 7.70* (±0.05) | 2.59 | 4.65/5.03 | 31.33 |
| DIAC+SEG+POS | 7.73* (±0.08) | 2.62 | 4.73/5.01 | 31.31 |
| DIAC+SYN+POS | 7.72* (±0.06) | 2.61 | 4.62/5.06 | 31.05 |
| ALL | 7.51* (±0.09) | 2.54 | 4.54/4.91 | 31.07 |
+ +Table 2: Performance of the joint diacritic restoration model when different related tasks are considered. Bold numbers represent the best score per column. Almost all scores are better than the base model BASE (Char). * denotes statistically significant improvements compared to the baselines. Lex refers to the percentage of words that have incorrect lexical diacritics only, excluding syntactic diacritics. + +additional layer of ambiguity for other cases like the morpheme ktb in the diacritic variants kataba, kutubu, sayakotubo (note that the underlined segment has the same consonants as the other variants), in which identifying morphemes increased the number of possible diacritic variants without learning the interactions between adjacent morphemes. + +Furthermore, we found inconsistencies in the dataset for morphemes, which might cause the drop in performance when we only consider SEG. When we consider all tasks together, these inconsistencies are reduced because of the combined information from different linguistic signals towards improving the performance of the diacritic restoration model. + +Syntactic diacritization (DIAC+SYN): By enforcing inflectional diacritics through an additional focused layer within the diacritic restoration model, we observe improvements in WER compared to the baselines. We notice improvements on syntax-related diacritics (LER score), which is expected given that syntactic diacritization learns the underlying syntactic structure to assign the appropriate syntactic diacritics for each word. Improvements also extend to lexical diacritics, because word relationships are captured while learning syntactic diacritics, for which BiLSTM modeling over words is integrated. + +POS tagging (DIAC+POS): When we jointly train POS tagging with full diacritic restoration, we notice improvements compared to both baselines.
Compared to syntactic diacritization, we obtain similar findings across all evaluation metrics except for WER on OOV words, where POS tagging drops. Including POS tagging within diacritic restoration also captures important information about the words; the idea of POS tagging is to learn the underlying syntax of the sentence. In comparison to syntactic diacritization, it involves different types of information, like passivization, which could be essential in learning correct diacritics. + +Ablation Analysis: Incorporating all the auxiliary tasks under study within the diacritic restoration model (ALL) provides the best performance across all measures except WER on OOV words, where the best performance is given by DIAC+SYN. We discuss the impact of removing one task at a time from ALL and examine whether its exclusion significantly impacts the performance. Excluding SEG from the process drops the performance of diacritic restoration. This shows that even though SEG did not help greatly when it was combined solely with diacritic restoration, the combinations of SEG and the other word-based tasks filled in the gaps that were missing from just identifying morpheme boundaries. Excluding either POS tagging or syntactic diacritization also hurts the performance, which shows that these tasks complement each other and, taken together, they improve the performance of the diacritic restoration model. + +# Input Representation: + +Impact of output labels: Table 3 shows the different models when we do not pass the labels of the investigated tasks (the input is only the WordToChar representation) against the same models when we do. We noticed a drop in performance across all models. Note that all models, even when we do not consider the labels, perform better than the baselines. This also supports the benefits of the WordToChar representation.
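The WER and LER figures reported in these tables can be sketched as toy computations (DER is analogous at the character level; the toy word pairs below are illustrative):

```python
def wer(predicted, gold):
    """Word Error Rate: fraction of words whose predicted
    diacritization differs from the gold diacritization."""
    errors = sum(p != g for p, g in zip(predicted, gold))
    return errors / len(gold)

def ler(predicted, gold):
    """Last Diacritic Error Rate: only the word-final diacritic
    (a proxy for the syntactic diacritic) is compared."""
    errors = sum(p[-1] != g[-1] for p, g in zip(predicted, gold))
    return errors / len(gold)

# Toy example: one of two words is fully wrong, and its final
# (syntactic) diacritic is wrong as well.
pred = ["kataba", "Ealamu"]
gold = ["kataba", "Ealami"]
print(wer(pred, gold))  # 0.5
print(ler(pred, gold))  # 0.5
```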
| Tasks | With Labels | Without Labels |
| --- | --- | --- |
| DIAC+SYN | 7.70 | 7.99 |
| DIAC+POS | 7.86 | 7.93 |
| DIAC+SEG+SYN | 7.70 | 7.93 |
| DIAC+SEG+POS | 7.73 | 7.99 |
| DIAC+SYN+POS | 7.72 | 7.97 |
| ALL | 7.51 | 7.91 |
+ +Last hidden layer of SEG: Identifying morpheme boundaries did not increase accuracy as we expected. Therefore, we examined whether information learned from the BiLSTM layer would help us learn morpheme interactions, by passing the output of the last BiLSTM layer to the diacritic restoration model along with the segmentation labels. We did not observe any improvement in predicting accurate diacritics when passing this information. For ALL, the WER score increased by $0.22\%$. Thus, it is sufficient to only utilize the segment labels for diacritic restoration. + +Passive and active verbs: Passivation in Arabic is denoted through diacritics, and missing such diacritics can cause ambiguity in some cases (Hermena et al., 2015; Diab et al., 2007). To examine its impact, we further divide verbs in the POS tagset into passive and active, increasing the size by one. Table 4 shows the diacritic restoration performance with and without considering passivation. We notice improvements, in some combinations of tasks, across all evaluation metrics compared to pure POS tagging, showing its importance in diacritic restoration models. + +Table 3: WER performance when we do not consider the output labels for the investigated tasks. Bold numbers represent the best score per row.
| Task | With Pass | Without Pass |
| --- | --- | --- |
| DIAC+POS | 7.65 | 7.86 |
| DIAC+SEG+POS | 7.65 | 7.73 |
| DIAC+SYN+POS | 7.78 | 7.72 |
| ALL | 7.62 | 7.51 |
+ +Table 4: WER performance for different diacritic restoration models when passivation is considered. Bold numbers represent the best score per row. + +Level of linguistic information: The joint diacritic restoration models were built empirically and tested against the development set. We noticed that soft parameter sharing in a hierarchical fashion performs better for diacritic restoration. We experimented with building a joint model that learns segmentation and diacritics through hard parameter sharing. To learn segmentation with diacritic restoration, we shared the embedding layer between the two tasks as well as some or all layers of the BiLSTM. We obtained WER scores on all words in the range 8.53~9.35, showing no improvement over character-based diacritic restoration. To learn word-based tasks with diacritic restoration, we pass the WordToChar representation to diacritic restoration and/or the CharToWord representation to the word-based tasks. The best we could obtain for both tasks is 8.23%~9.6%; no statistically significant improvements were found. This shows the importance of hierarchical structure for appropriate diacritic assignments. + +Qualitative analysis: We compared random errors between DIAC (character-based diacritic restoration) and ALL, in which we consider all investigated tasks. Although ALL provides accurate results for more words, it introduces errors in other words that have been correctly diacritized by DIAC. The patterns of such words are not clear. We did not find a particular category that occurs in one model but not the other. Rather, the types and quantity of errors differ in each of these categories. + +State-of-the-art Comparison: Table 2 also shows the performance of the state-of-the-art models. The ALL model surpasses the performance of Zalmout and Habash (2017). However, Zalmout and Habash (2017)'s model performs significantly better on OOV words.
Zalmout and Habash (2019a) provides comparable performance to the ALL model. The difference between their work and that in (Zalmout and Habash, 2017) is the use of a joint model to learn morphological features other than diacritics (or features at the word level), rather than learning these features individually. Zalmout and Habash (2019a) obtained an additional boost in performance (0.3% improvement over ours) when they add a dialectal variant of Arabic to the learning process, sharing information between both languages. + +Alqahtani and Diab (2019a) provides comparable performance to ALL, and better performance on some task combinations in terms of WER on all and OOV words. The difference between their model and our BASE model is the addition of a
+ +CRF (Conditional Random Fields) layer, which incorporates dependencies in the output space at the cost of the model's computational efficiency (memory and speed). + +Zalmout and Habash (2019b) provide the current state-of-the-art performance, building a morphological disambiguation framework for Arabic similar to (Zalmout and Habash, 2017, 2019a). They reported their scores on the development set, which was not used for tuning. On the development set, they obtained $93.9\%$, which significantly outperforms our best model (ALL) by $1.4\%$. Our approach is similar to (Zalmout and Habash, 2019b). We both follow the WordToChar as well as CharToWord input representations discussed in Section 3.1, regardless of the specifics. Furthermore, we both consider the morphological outputs as features in our diacritic restoration model. In Zalmout and Habash (2019b), the morphological feature space considered is larger, making use of all morphological features in Arabic. Furthermore, Zalmout and Habash (2019b) use sequence-to-sequence modeling rather than sequence classification as we do. Unlike Zalmout and Habash (2019b), our model is more flexible, allowing additional tasks to be added when sufficient resources are available. + +We believe that neither the underlying architecture nor the consideration of all possible features was the crucial factor that led to the significant reduction in WER. Rather, the use of morphological analyzers is crucial to this significant improvement. As a matter of fact, in Zalmout and Habash (2019b), the performance significantly drops to 7.2 when they, similar to our approach, take the highest-probability value as the solution. Thus, we believe that the use of morphological analyzers enforces valid word composition in the language and filters out invalid words (a side effect of using characters as input representation). This also justifies the significant improvement on OOV words obtained by (Zalmout and Habash, 2017). Thus, we believe that a global knowledge of words and internal constraints within words are captured. + +Auxiliary tasks: We compared the base models of the auxiliary tasks to the state-of-the-art (SOTA). For SEG, the BiLSTM model has comparable performance to that in (Zalmout and Habash, 2017) (SEG yields $99.88\%$ F1 compared to SOTA $99.6\%$). For POS, we use a shallower tag set (16 tags compared to $\sim 70$) than typically used in previous models; hence, we do not have a valid comparison set. For SYN, we compare our results with (Hifny, 2018), which uses a hybrid network of BiLSTM and Maximum Entropy to solve syntactic diacritization. SYN yields results comparable to SOTA (our model achieves 94.22 vs. SOTA 94.70). + +# 6 Related Work + +The problem of diacritization has been addressed using classical machine learning approaches (e.g. Maximum Entropy and Support Vector Machines) (Zitouni and Sarikaya, 2009; Pasha et al., 2014) or neural approaches for different languages that include diacritics, such as Arabic, Vietnamese, and Yoruba.
Neural approaches yield state-of-the-art performance for diacritic restoration by using Bidirectional LSTMs or temporal convolutional networks (Zalmout and Habash, 2017; Orife, 2018; Alqahtani et al., 2019; Alqahtani and Diab, 2019a). + +Arabic syntactic diacritization has been consistently reported to be difficult, degrading the performance of full diacritic restoration (Zitouni et al., 2006; Habash et al., 2007; Said et al., 2013; Shaalan et al., 2009; Shahrour et al., 2015; Darwish et al., 2017). To improve the performance of syntactic diacritization, or full diacritic restoration in general, previous studies followed different approaches. Some studies separate lexical from syntactic diacritization (Shaalan et al., 2009; Darwish et al., 2017). Other studies consider additional linguistic features such as POS tags and word segmentation (i.e. tokens or morphemes) (Ananthakrishnan et al., 2005; Zitouni et al., 2006; Zitouni and Sarikaya, 2009; Shaalan et al., 2009). + +Hifny (2018) addresses syntactic diacritization by building a BiLSTM model whose input embeddings are augmented with manually generated features of context, POS tags, and word segments. Rashwan et al. (2015) use a deep belief network to build a diacritization model for Arabic that focuses on improving syntactic diacritization, and build sub-classifiers based on the analysis of a confusion matrix and POS tags. + +Regarding incorporating linguistic features into the model, previous studies have used morphological features either as a preprocessing step or as a ranking step for building diacritic restoration models. As a preprocessing step, the words are converted to their constituents (e.g. morphemes, lemmas, or $n$-grams) and then diacritic restoration models are built on top of that (Ananthakrishnan et al., 2005; Alqahtani and Diab, 2019b). Ananthakrishnan et al. (2005) use POS tags to improve diacritic restoration at the syntax level, assuming that POS tags are known at inference time.
+ +As a ranking procedure, all possible analyses of words are generated and then the most probable analysis is chosen (Pasha et al., 2014; Zalmout and Habash, 2017, 2019a,b). Zalmout and Habash (2017) develop a morphological disambiguation model to determine Arabic morphological features including diacritization. They train the model using a BiLSTM and consult an LSTM-based language model as well as other morphological features to rank and score the output analyses. A similar methodology can be found in (Pasha et al., 2014), but using Support Vector Machines. This methodology shows better performance on out-of-vocabulary (OOV) words compared to pure character models. + +# 7 Discussion & Conclusion + +We present a joint diacritic restoration model that considers the output distributions of different related tasks to improve the performance of diacritic restoration. Our results show statistically significant improvements across all evaluation metrics. This shows the importance of considering additional linguistic information at the morphological and/or sentence levels. Including semantic information through pretrained word embeddings within the diacritic restoration model also helped boost diacritic restoration performance. Although we apply our joint model to Arabic, this model provides a framework for other languages that include diacritics, whenever resources become available. Although we observed improvements in terms of generalizing beyond observed data when using the proposed linguistic features, OOV performance remains an issue for diacritic restoration. + +# References + +Sawsan Alqahtani and Mona Diab. 2019a. Investigating input and output units in diacritic restoration. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). IEEE. +Sawsan Alqahtani and Mona Diab. 2019b. Investigating input and output units in diacritic restoration.
In 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA). +Sawsan Alqahtani, Ajay Mishra, and Mona Diab. 2019. + +Convolutional neural networks for diacritic restoration. In EMNLP. +Sankaranarayanan Ananthakrishnan, Shrikanth Narayanan, and Srinivas Bangalore. 2005. Automatic diacritization of arabic transcripts for automatic speech recognition. In Proceedings of the 4th International Conference on Natural Language Processing, pages 47-54. +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics. +Kareem Darwish, Hamdy Mubarak, and Ahmed Abdelali. 2017. Arabic diacritization: Stats, rules, and hacks. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 9-17. +Mona Diab, Mahmoud Ghoneim, and Nizar Habash. 2007. Arabic diacritization in the context of statistical machine translation. In Proceedings of MTSummit. +Mona Diab, Nizar Habash, Owen Rambow, and Ryan Roth. 2013. Ldc arabic treebanks and associated corpora: Data divisions manual. arXiv preprint arXiv:1309.5652. +Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic tagging of arabic text: From raw text to base phrase chunks. In Proceedings of HLT-NAACL 2004: Short papers. Association for Computational Linguistics. +Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhikers guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 1383-1392. +Nizar Habash, Ryan Gabbard, Owen Rambow, Seth Kulick, and Mitch Marcus. 2007. Determining case in arabic: Learning complex linguistic behavior requires complex linguistic features. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). 
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2016. A joint many-task model: Growing a neural network for multiple nlp tasks. arXiv preprint arXiv:1611.01587.
Ehab W Hermena, Denis Drieghe, Sam Hellmuth, and Simon P Liversedge. 2015. Processing of arabic diacritical marks: Phonological-syntactic disambiguation of homographic verbs and visual crowding effects. Journal of Experimental Psychology: Human Perception and Performance, 41(2):494.
Yasser Hifny. 2018. Hybrid lstm/maxent networks for arabic syntactic diacritics restoration. IEEE Signal Processing Letters, 25(10):1515-1519.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.
Alex Kendall, Yarin Gal, and Roberto Cipolla. 2018. Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7482-7491.
Iroro Orife. 2018. Attentive sequence-to-sequence learning for diacritic restoration of Yorùbá language text. arXiv preprint arXiv:1804.00832.
Arfath Pasha, Mohamed Al-Badrashiny, Mona T Diab, Ahmed El Kholy, Ramy Eskander, Nizar Habash, Manoj Pooleery, Owen Rambow, and Ryan Roth. 2014. Madamira: A fast, comprehensive tool for morphological analysis and disambiguation of arabic. In LREC, volume 14, pages 1094-1101.
Mohsen AA Rashwan, Ahmad A Al Sallab, Hazem M Raafat, and Ahmed Rafea. 2015. Deep learning framework with confused sub-set resolution architecture for automatic arabic diacritization. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 23(3):505-516.
Ahmed Said, Mohamed El-Sharqwi, Achraf Chalabi, and Eslam Kamal. 2013. A hybrid approach for arabic diacritization. In International Conference on Application of Natural Language to Information Systems, pages 53-64. Springer.
Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks.
IEEE Transactions on Signal Processing, 45(11):2673-2681. +Khaled Shaalan, Hitham M Abo Bakr, and Ibrahim Ziedan. 2009. A hybrid approach for building arabic diacritizer. In Proceedings of the EACL 2009 workshop on computational approaches to semitic languages, pages 27-35. Association for Computational Linguistics. +Anas Shahrour, Salam Khalifa, and Nizar Habash. 2015. Improving arabic diacritization through syntactic analysis. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1309-1315. +Dima Taji, Nizar Habash, and Daniel Zeman. 2017. Universal dependencies for arabic. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 166-176. +Daniel Watson, Nasser Zalmout, and Nizar Habash. 2018. Utilizing character and word embeddings for text normalization with sequence-to-sequence models. arXiv preprint arXiv:1809.01534. +JC Wells. 2000. Orthographic diacritics and multilingual computing. Language problems and language planning, 24(3):249-272. + +Nasser Zalmout and Nizar Habash. 2017. Don't throw those morphological analyzers away just yet: Neural morphological disambiguation for arabic. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 704-713. +Nasser Zalmout and Nizar Habash. 2019a. Adversarial multitask learning for joint multi-feature and multialect morphological modeling. arXiv preprint arXiv:1910.12702. +Nasser Zalmout and Nizar Habash. 2019b. Joint diacritization, lemmatization, normalization, and fine-grained morphological tagging. arXiv preprint arXiv:1910.02267. +Imed Zitouni and Ruhi Sarikaya. 2009. Arabic diacritic restoration approach based on maximum entropy models. Computer Speech & Language, 23(3):257-276. +Imed Zitouni, Jeffrey S Sorensen, and Ruhi Sarikaya. 2006. Maximum entropy based restoration of arabic diacritics. 
In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 577-584. Association for Computational Linguistics. \ No newline at end of file diff --git a/amultitasklearningapproachfordiacriticrestoration/images.zip b/amultitasklearningapproachfordiacriticrestoration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..df1df55b5b39f4115194cb891c069100f9a9e0cb --- /dev/null +++ b/amultitasklearningapproachfordiacriticrestoration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a55fc8443df50c1573ae65a01f275567b75f0af0c202a1984c81ef7ed236c0cc +size 173244 diff --git a/amultitasklearningapproachfordiacriticrestoration/layout.json b/amultitasklearningapproachfordiacriticrestoration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..dc5344981c3c5ca5e232e7b603bf9c2c20cd893d --- /dev/null +++ b/amultitasklearningapproachfordiacriticrestoration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:201952b9b52ca78e4e7d21fa32d96800f2e731fb374e9d7a3e062d8c26c023ce +size 273582 diff --git a/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_content_list.json b/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..85bcbd395aa2462d8955b8bdae2753ad748165e2 --- /dev/null +++ b/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0d41bd446e256ee17a267b1405cff7655237df3520d2d314f4b484eca052155 +size 72423 diff --git a/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_model.json 
b/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..2fcb92777f53985c14800577ae36d4acf8bc19d2 --- /dev/null +++ b/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bf31c9cb3199e3c37efb3725c2ff7250381d546edf8f43a10000be7414749afa +size 83978 diff --git a/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_origin.pdf b/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..07e32d3bfae3efa6653a5fd4447575a63748d1df --- /dev/null +++ b/anegativecaseanalysisofvisualgroundingmethodsforvqa/53733ea5-5d68-4d04-9ee0-ed5626b9838f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:aad228f4960540ddd65bdc600be9e07561851320b7797779a9321773660bb6a1 +size 776024 diff --git a/anegativecaseanalysisofvisualgroundingmethodsforvqa/full.md b/anegativecaseanalysisofvisualgroundingmethodsforvqa/full.md new file mode 100644 index 0000000000000000000000000000000000000000..23a3ae56b9a50992fa6f9fe45c7dec75a901bc7d --- /dev/null +++ b/anegativecaseanalysisofvisualgroundingmethodsforvqa/full.md @@ -0,0 +1,335 @@ +# A Negative Case Analysis of Visual Grounding Methods for VQA + +Robik Shrestha $^{1}$ Kushal Kafle $^{1,2}$ Christopher Kanan $^{1,3,4}$ + +Rochester Institute of Technology $^{1}$ Adobe Research $^{2}$ Paige $^{3}$ Cornell Tech $^{4}$ + +{rss9369, kk6055, kanan}@rit.edu + +# Abstract + +Existing Visual Question Answering (VQA) methods tend to exploit dataset biases and spurious statistical correlations, instead of producing right answers for the right reasons. 
To address this issue, recent bias mitigation methods for VQA propose to incorporate visual cues (e.g., human attention maps) to better ground the VQA models, showcasing impressive gains. However, we show that the performance improvements are not a result of improved visual grounding, but a regularization effect which prevents over-fitting to linguistic priors. For instance, we find that it is not actually necessary to provide proper, human-based cues; random, insensible cues also result in similar improvements. Based on this observation, we propose a simpler regularization scheme that does not require any external annotations and yet achieves near state-of-the-art performance on VQA-CPv2$^1$.

# 1 Introduction

Visual Question Answering (VQA) (Antol et al., 2015), the task of answering questions about visual content, was proposed to facilitate the development of models with human-like visual and linguistic understanding. However, existing VQA models often exploit superficial statistical biases to produce responses, instead of producing the right answers for the right reasons (Kafle et al., 2019).

The VQA-CP dataset (Agrawal et al., 2018) showcases this phenomenon by incorporating different question type/answer distributions in the train and test sets. Since the linguistic priors in the train and test sets differ, models that exploit these priors fail on the test set. To tackle this issue, recent works have endeavored to enforce proper visual grounding, where the goal is to make models produce answers by looking at relevant visual regions (Gan et al., 2017; Selvaraju et al., 2019; Wu and Mooney, 2019), instead of exploiting linguistic priors. These approaches rely on additional annotations/cues such as human-based attention maps (Das et al., 2017), textual explanations (Huk Park et al., 2018) and object label predictions (Ren et al., 2015) to identify relevant regions, and train the model to base its predictions on those regions, showing large improvements (8-10% accuracy) on the VQA-CPv2 dataset.

![](images/afe6aeb89507fba588d631b9cbe8d42cd7ab2e89b3abbb623498e4b8e7384587.jpg)
Figure 1: We find that existing visual sensitivity enhancement methods improve performance on VQA-CPv2 through regularization as opposed to proper visual grounding. (Panels show a VQA-CP example, "Q: What color is the couch? A: Green", with its answer distribution; baseline methods affected by language priors; recent methods that improve by grounding on relevant regions, $+9\%$ over baselines; and our finding that irrelevant/random regions result in similar gains, $+9\%$ over baselines.)

Here, we study these methods. We find that their improved accuracy does not actually emerge from proper visual grounding, but from regularization effects, where the model forgets the linguistic priors in the train set, thereby performing better on the test set. To support these claims, we first show that it is possible to achieve such gains even when the model is trained to look at: a) irrelevant visual regions, and b) random visual regions. Second, we show that differences in the predictions from the variants trained with relevant, irrelevant and random visual regions are not statistically significant.
Third, we show that these methods degrade performance when the priors remain intact, and instead work on VQA-CPv2 by hurting its train accuracy.

Based on these observations, we hypothesize that controlled degradation on the train set allows models to forget the training priors and thereby improve test accuracy. To test this hypothesis, we introduce a simple regularization scheme that zeros out the ground truth answers, thereby always penalizing the model, whether its predictions are correct or incorrect. We find that this approach also achieves near state-of-the-art performance (48.9% on VQA-CPv2), providing further support for our claims.

While we agree that visual grounding is a useful direction to pursue, our experiments show that the community requires better ways to test whether systems are actually visually grounded. We make some recommendations in the discussion section.

# 2 Related Work

# 2.1 Biases in VQA

As expected of any real-world dataset, VQA datasets also contain dataset biases (Goyal et al., 2017). The VQA-CP dataset (Agrawal et al., 2018) was introduced to study the robustness of VQA methods against linguistic biases. Since it contains different answer distributions in the train and test sets, VQA-CP makes it nearly impossible for models that rely upon linguistic correlations to perform well on the test set (Agrawal et al., 2018; Shrestha et al., 2019).

# 2.2 Bias Mitigation for VQA

VQA algorithms without explicit bias mitigation mechanisms fail on VQA-CP, so recent works have focused on the following solutions:

# 2.2.1 Reducing Reliance on Questions

Some recent approaches employ a question-only branch as a control model to discover the questions most affected by linguistic correlations. The question-only model is either used to perform adversarial regularization (Grand and Belinkov, 2019; Ramakrishnan et al., 2018) or to re-scale the loss based on the difficulty of the question (Cadene et al., 2019).
However, when these ideas are applied to the UpDn model (Anderson et al., 2018), which attempts to learn correct visual grounding, these approaches achieve $4 - 7\%$ lower accuracy compared to the state-of-the-art methods.

# 2.2.2 Enhancing Visual Sensitivities

Both Human Importance Aware Network Tuning (HINT) (Selvaraju et al., 2019) and Self Critical Reasoning (SCR) (Wu and Mooney, 2019) train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity scores. HINT proposes a ranking loss between human-based importance scores (Das et al., 2016) and the gradient-based sensitivities. In contrast, SCR does not require exact saliency ranks. Instead, it penalizes the model if correct answers are more sensitive towards non-important regions as compared to important regions, and if incorrect answers are more sensitive to important regions than correct answers.

# 3 Existing VQA Methods

Given a question $\mathcal{Q}$ and an image $\mathcal{I}$, e.g., represented by bottom-up region proposals $v$ (Anderson et al., 2018), a VQA model is tasked with predicting the answer $a$:

$$
P(a \mid \mathcal{Q}, \mathcal{I}) = f_{VQA}(v, \mathcal{Q}). \tag{1}
$$

# 3.1 Baseline VQA Methods

Without additional regularization, existing VQA models such as the baseline model used in this work, UpDn (Anderson et al., 2018), tend to rely on the linguistic priors $P(a \mid \mathcal{Q})$ to answer questions. Such models fail on VQA-CP, because the priors in the test set differ from those in the train set.

# 3.2 Visual Sensitivity Enhancement Methods

To reduce the reliance on linguistic priors, visual sensitivity enhancement methods attempt to train the model to be more sensitive to relevant visual regions when answering questions.
Following Wu and Mooney (2019), we define the sensitivity of an answer $a$ with respect to a visual region $v_i$ as:

$$
\mathcal{S}(a, v_i) := \left(\nabla_{v_i} P(a \mid \mathcal{I}, \mathcal{Q})\right)^{T} \mathbf{1}. \tag{2}
$$

Existing methods propose the following training objectives to improve grounding using $\mathcal{S}$:

- HINT uses a ranking loss, which penalizes the model if the pair-wise rankings of the sensitivities of visual regions towards the ground truth answer $a_{gt}$ differ from the ranks computed from the human-based attention maps.

- SCR divides the region proposals into influential and non-influential regions and penalizes the model if: 1) $\mathcal{S}(a_{gt})$ of a non-influential region is higher than that of an influential region, and 2) the region most influential for the correct answer has even higher sensitivity for incorrect answers.

Both methods improve baseline accuracy by $8 - 10\%$. Is this actually due to better visual grounding?

# 4 Why Did the Performance Improve?

We probe the reasons behind the performance improvements of HINT and SCR. We first analyze if the results improve even when the visual cues are irrelevant (Sec. 4.2) or random (Sec. 4.3), and examine if their differences are statistically significant (Sec. 4.4). Then, we analyze the regularization effects by evaluating the performance on VQA-CPv2's train split (Sec. 4.5) and the behavior on a dataset whose priors do not change (Sec. 4.6). We present a new metric to assess visual grounding in Sec. 4.7 and describe our regularization method in Sec. 5.

# 4.1 Experimental Setup

We compare the baseline UpDn model with the HINT and SCR variants trained on VQAv2 or VQA-CPv2 to study the causes behind the improvements. We report mean accuracies across 5 runs, where a pretrained UpDn model is fine-tuned on the subsets with human attention maps and textual explanations for HINT and SCR, respectively. Further training details are provided in the Appendix.
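The sensitivity score of Eq. (2) above is simply the gradient of an answer's probability with respect to a region's features, summed over the feature dimension. A minimal numerical sketch of what it computes, using a toy attention-pooled linear scorer in place of a real VQA model (all function and variable names here are illustrative assumptions, not the authors' code):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def answer_probs(regions, W, alpha):
    """Toy stand-in for P(a | I, Q): attention-weighted sum pooling of
    region features, then a linear scorer and a softmax over answers."""
    pooled = (alpha[:, None] * regions).sum(axis=0)
    return softmax(W @ pooled)

def sensitivity(regions, W, alpha, answer_idx, eps=1e-6):
    """Finite-difference rendering of Eq. (2):
    S(a, v_i) = (grad_{v_i} P(a | I, Q))^T 1, i.e. the gradient of the
    answer's probability w.r.t. region v_i, summed over feature dims."""
    base = answer_probs(regions, W, alpha)[answer_idx]
    scores = np.zeros(regions.shape[0])
    for i in range(regions.shape[0]):
        for j in range(regions.shape[1]):
            pert = regions.copy()
            pert[i, j] += eps
            scores[i] += (answer_probs(pert, W, alpha)[answer_idx] - base) / eps
    return scores

rng = np.random.default_rng(0)
regions = rng.normal(size=(4, 8))       # 4 region proposals, 8-dim features
W = rng.normal(size=(3, 8))             # 3 candidate answers
alpha = np.array([0.1, 0.2, 0.3, 0.4])  # fixed attention weights
S = sensitivity(regions, W, alpha, answer_idx=0)  # one score per region
```

A real implementation would obtain these gradients from the network's autograd engine rather than finite differences; the sketch only illustrates the per-region quantity that HINT and SCR compare against the importance cues.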
# 4.2 Training on Irrelevant Visual Cues

In our first experiment, we studied how irrelevant visual cues performed compared to relevant ones. We fine-tune the model with irrelevant cues defined as $S_{irrelevant} := (1 - S_h)$, where $S_h$ represents the human-based importance scores. As shown in the 'Grounding using irrelevant cues' section of Table 1, both HINT and SCR are within $0.3\%$ of the results obtained from looking at relevant regions, which indicates that the gains for HINT and SCR are not necessarily from looking at relevant regions.

# 4.3 Training on Random Visual Cues

In our next experiment, we studied how random visual cues performed with HINT and SCR. We assign random importance scores to the visual regions: $S_{rand} \sim \mathrm{uniform}(0,1)$. We test two variants of randomness: Fixed random regions, where

Table 1: Results on VQA-CPv2 and VQAv2 datasets for the baseline UpDn, the visual sensitivity enhancement methods (HINT and SCR) and our own regularization method, including the published (pub.) numbers.
| Method | VQA-CPv2 Train | VQA-CPv2 Test | VQAv2 Train | VQAv2 Val |
| --- | --- | --- | --- | --- |
| *Baseline - without visual grounding* | | | | |
| UpDn | 84.0 | 40.1 | 83.4 | 64.4 |
| *Grounding using human-based cues* | | | | |
| HINT (pub.) | N/A | 46.7 | N/A | 63.4$^1$ |
| SCR (pub.) | N/A | 49.5 | N/A | 62.2 |
| HINT | 73.9 | 48.2 | 75.7 | 61.3 |
| SCR | 75.9 | 49.1 | 77.9 | 61.3 |
| *Grounding using irrelevant cues* | | | | |
| HINT | 71.2 | 48.0 | 73.5 | 60.3 |
| SCR | 75.7 | 49.2 | 74.1 | 59.1 |
| *Grounding using fixed random cues* | | | | |
| HINT | 72.0 | 48.1 | 73.0 | 59.5 |
| SCR | 70.0 | 49.1 | 78.0 | 61.4 |
| *Grounding using variable random cues* | | | | |
| HINT | 71.9 | 48.1 | 72.9 | 59.4 |
| SCR | 69.6 | 49.2 | 78.1 | 61.5 |
| *Regularization by zeroing out answers* | | | | |
| Ours (1% fixed) | 78.0 | 48.9 | 80.1 | 62.6 |
| Ours (1% var.) | 77.6 | 48.5 | 80.0 | 62.6 |
| Ours (100%) | 75.7 | 48.2 | 79.9 | 62.4 |
$^1$ The published number results from fine-tuning HINT on the entire training set; as described in Sec. 4.6, the other published numbers and our experiments fine-tune only on the instances with cues.

$S_{rand}$ are fixed once chosen, and Variable random regions, where $S_{rand}$ are regenerated every epoch. As shown in Table 1, both of these variants obtain similar results to the model trained with human-based importance scores. The performance improves even when the importance scores are changed every epoch, indicating that it is not even necessary to look at the same visual regions.

# 4.4 Significance of Statistical Differences

To test if the changes in results were statistically significant, we performed Welch's t-tests (Welch, 1938) on the predictions of the variants trained on relevant, irrelevant and random cues. We pick Welch's t-test over the Student's t-test, because the latter assumes equal variances for the predictions from different variants. To perform the tests, we first randomly sample 5000 subsets of non-overlapping test instances. We then average the accuracy of each subset across 5 runs, obtaining 5000 values. Next, we run the t-tests for HINT and SCR separately on the subset accuracies. As shown in Table 2, the $p$-values across the variants of HINT and SCR are

Table 2: $p$-values from the Welch's t-tests and the percentage of overlap between the predictions (Ovp.) of different variants of HINT and SCR.
| Methods | $p$ | Ovp. (%) |
| --- | --- | --- |
| *HINT variants against Baseline* | | |
| Default vs. Baseline | 0.0 | 83.6 |
| Irrelevant vs. Baseline | 0.0 | 82.4 |
| Fixed Random vs. Baseline | 0.0 | 82.0 |
| Variable Random vs. Baseline | 0.0 | 81.5 |
| *Among HINT variants* | | |
| Default vs. Irrelevant | 0.3 | 89.7 |
| Default vs. Fixed random | 0.7 | 90.9 |
| Default vs. Variable random | 0.6 | 91.9 |
| Irrelevant vs. Fixed random | 0.5 | 95.6 |
| Irrelevant vs. Variable random | 0.7 | 93.9 |
| Fixed random vs. Variable random | 0.9 | 96.9 |
| *SCR variants against Baseline* | | |
| Default vs. Baseline | 0.0 | 85.6 |
| Irrelevant vs. Baseline | 0.0 | 84.2 |
| Fixed Random vs. Baseline | 0.0 | 80.7 |
| Variable Random vs. Baseline | 0.0 | 80.6 |
| *Among SCR variants* | | |
| Default vs. Irrelevant | 0.6 | 92.0 |
| Default vs. Fixed random | 0.8 | 89.3 |
| Default vs. Variable random | 0.6 | 89.5 |
| Irrelevant vs. Fixed random | 0.4 | 91.7 |
| Irrelevant vs. Variable random | 1.0 | 91.6 |
| Fixed random vs. Variable random | 0.4 | 96.7 |
greater than or equal to 0.3. Using a confidence level of $95\%$ ($\alpha = 0.05$), we fail to reject the null hypothesis that the mean difference between the paired values is 0, showing that the variants are not statistically significantly different from each other. We also compare the predictions of HINT/SCR against the baseline, and find that the $p$-values are all zero, showing that these differences are statistically significant.

Percentage of Overlaps: To further check if the variants trained on irrelevant or random regions gain performance in a manner similar to the models trained on relevant regions, we compute the overlap between their predictions on VQA-CPv2's test set. The percentage of overlap is defined as:

$$
\% Overlap = \frac{n_{same}}{n_{total}} \times 100\%,
$$

where $n_{same}$ denotes the number of instances where either both variants were correct or both were incorrect, and $n_{total}$ denotes the total number of test instances. As shown in Table 2, we compare %Overlap between different variants of HINT/SCR and the baseline, and against each other.

![](images/3220aa17567ca2233d96230510e6e2e337c3fbfc3c106b9d6d4b2a235e91ffe7.jpg)
Figure 2: Accuracies for HINT and SCR on VQAv2's val set, when fine-tuned either on the full train set or on the subset containing visual cues.

We find $89.7 - 91.9\%$ and $89.5 - 92.0\%$ overlaps for the different variants of HINT and SCR, respectively. These high overlaps suggest that the variants are not working in fundamentally different manners.

# 4.5 Drops in Training Accuracy

We compare the training accuracies to analyze the regularization effects. As shown in Table 1, the baseline method has the highest training results, while the other methods cause $6.0 - 14.0\%$ and $3.3 - 10.5\%$ drops in the training accuracy on VQA-CPv2 and VQAv2, respectively.
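The subset-level significance test and overlap metric of Sec. 4.4 can be sketched as follows. This is a minimal sketch: the accuracy samples and prediction arrays are synthetic stand-ins, and with ~5000 subsets the degrees of freedom are large, so a normal approximation of the Welch $p$-value is adequate:

```python
import numpy as np
from math import erf, sqrt

def welch_t(a, b):
    """Welch's t statistic (unequal variances), as in Sec. 4.4, with a
    large-sample normal approximation for the two-sided p-value."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / sqrt(va + vb)
    p = 2 * (1 - 0.5 * (1 + erf(abs(t) / sqrt(2))))
    return t, p

def pct_overlap(pred_a, pred_b, gold):
    """%Overlap: share of instances where the two variants are either
    both correct or both incorrect."""
    same = (pred_a == gold) == (pred_b == gold)
    return 100.0 * same.mean()

# Illustrative per-subset accuracies for two hypothetical variants.
rng = np.random.default_rng(0)
acc_a = rng.normal(48.0, 1.0, size=5000)
acc_b = rng.normal(48.1, 1.0, size=5000)
t, p = welch_t(acc_a, acc_b)

print(pct_overlap(np.array([1, 2, 3, 4]),
                  np.array([1, 0, 3, 0]),
                  np.array([1, 2, 3, 0])))  # → 50.0
```

For an exact Welch $p$-value (small-sample t-distribution rather than the normal approximation), one would typically call `scipy.stats.ttest_ind(a, b, equal_var=False)` instead.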
We hypothesize that degrading performance on the train set helps the model forget linguistic biases, which in turn helps accuracy on VQA-CPv2's test set but hurts accuracy on VQAv2's val set.

# 4.6 Drops in VQAv2 Accuracy

As observed by Selvaraju et al. (2019) and as shown in Fig. 2, we observe small improvements on VQAv2 when the models are fine-tuned on the entire train set. However, if we compare against the improvements on VQA-CPv2 in a fair manner, i.e., only use the instances with visual cues while fine-tuning, then the performance on VQAv2 drops continuously over the course of training. This indicates that HINT and SCR help forget linguistic priors, which is beneficial for VQA-CPv2 but not for VQAv2.

# 4.7 Assessment of Proper Grounding

In order to quantitatively assess visual grounding, we propose a new metric, Correctly Predicted but Improperly Grounded (CPIG):

$$
\% CPIG = \frac{N_{\text{correct ans, improper grounding}}}{N_{\text{correct ans}}} \times 100\%,
$$

which is the number of instances for which the most sensitive visual region used to correctly predict the answer is not within the top-3 most relevant ground truth regions, normalized by the total number of correct predictions. HINT and SCR trained on relevant regions obtained lower CPIG values than the other variants (70.24% and 80.22%, respectively), indicating they are better than the other variants at finding relevant regions. However, these numbers are still high, and show that only 29.76% and 19.78% of the correct predictions for HINT and SCR were properly grounded. Further analysis is presented in the Appendix.

# 5 Embarrassingly Simple Regularizer

The use of visual cues and sensitivities in existing methods is superfluous, because the results indicate that performance improves through degradation of training accuracy.
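As a concrete illustration, the zero-answer loss defined in this section can be sketched in a few lines (a minimal sketch; the array names are illustrative, and a real model would emit a score vector over all candidate answers):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross entropy over per-answer scores."""
    p = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def zero_out_loss(pred, answer_vec, lam=1.0):
    """L = BCE(P(A), A_gt) + lambda * BCE(P(A), 0).
    The second term trains toward the all-zeros vector, so it penalizes
    the model even when its predictions are correct."""
    return bce(pred, answer_vec) + lam * bce(pred, np.zeros_like(pred))

pred = np.array([0.9, 0.1, 0.05])   # predicted answer scores P(A)
answer = np.array([1.0, 0.0, 0.0])  # ground-truth answer vector A_gt
loss = zero_out_loss(pred, answer)
```

The design point is that the regularizer never lets the training loss reach zero, continuously degrading train accuracy without any external annotations.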
We hypothesize that simple regularization that does not rely on cues or sensitivities can also achieve large performance gains for VQA-CP. To test this hypothesis, we devise a simple loss function which continuously degrades the training accuracy by training the network to always predict a score of zero for all possible answers i.e. produce a zero vector $(\mathbf{0})$ . The overall loss function can be written as: + +$$ +L := B C E (P (\mathcal {A}), \mathcal {A} _ {g t}) + \lambda B C E (P (\mathcal {A}), \mathbf {0}), +$$ + +where, BCE refers to the binary cross entropy loss and $P(\mathcal{A})$ is a vector consisting of predicted scores for all possible answers. The first term is the binary cross entropy loss between model predictions and ground truth answer vector $(\mathcal{A}_{gt})$ , and the second term is our regularizer with a coefficient of $\lambda = 1$ . Note that this regularizer continually penalizes the model during the course of the training, whether its predictions are correct or incorrect. + +As shown in Table 1, we present results when this loss is used on: a) Fixed subset covering $1\%$ of the dataset, b) Varying subset covering $1\%$ of the dataset, where a new random subset is sampled every epoch and c) $100\%$ of the dataset. Confirming our hypothesis, all variants of our model achieve near state-of-the-art results, solidifying our claim that the performance gains for recent methods come from regularization effects. + +It is also interesting to note that the drop in training accuracy is lower with this regularization scheme as compared to the state-of-the-art methods. Of course, if any model was actually visually grounded, then we would expect it to improve performances on both train and test sets. We do not + +observe such behavior in any of the methods, indicating that they are not producing right answers for the right reasons. 
+ +# 6 Discussion on Proper Grounding + +While our results indicate that current visual grounding based bias mitigation approaches do not suffice, we believe this is still a good research direction. However, future methods must seek to verify that performance gains are not stemming from spurious sources by using an experimental setup similar to that presented in this paper. We recommend that both train and test accuracy be reported, because a model truly capable of visual grounding would not cause drastic drops in training accuracy to do well on the test sets. Finally, we advocate for creating a dataset with ground truth grounding available for $100\%$ of the instances using synthetically generated datasets (Kafle et al., 2017; Kafle and Kanan, 2017; Kafle et al., 2018; Acharya et al., 2019b; Hudson and Manning, 2019; Johnson et al., 2017), enabling the community to evaluate if their methods are able to focus on relevant information. Another alternative is to use tasks that explicitly test grounding, e.g., in visual query detection an agent must output boxes around any regions of a scene that match the natural language query (Acharya et al., 2019a). + +# 7 Conclusion + +Here, we showed that existing visual grounding based bias mitigation methods for VQA are not working as intended. We found that the accuracy improvements stem from a regularization effect rather than proper visual grounding. We proposed a simple regularization scheme which, despite not requiring additional annotations, rivals state-of-the-art accuracy. Future visual grounding methods should be tested with a more comprehensive experimental setup and datasets for proper evaluation. + +Acknowledgement. This work was supported in part by AFOSR grant [FA9550-18-1-0121], NSF award #1909696, and a gift from Adobe Research. We thank NVIDIA for the GPU donation. 
The views and conclusions contained herein are those of the authors and should not be interpreted as representing the official policies or endorsements of any sponsor. We are grateful to Tyler Hayes for agreeing to review the paper at short notice and suggesting valuable edits and corrections for the paper. + +# References + +Manoj Acharya, Karan Jariwala, and Christopher Kanan. 2019a. VQD: Visual query detection in natural scenes. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1955-1961, Minneapolis, Minnesota. Association for Computational Linguistics. +Manoj Acharya, Kushal Kafle, and Christopher Kanan. 2019b. Tallyqa: Answering complex counting questions. In Association for the Advancement of Artificial Intelligence (AAAI). +Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Dont just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4971-4980. +Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). +Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In The IEEE International Conference on Computer Vision (ICCV). +Remi Cadene, Corentin Dancette, Matthieu Cord, Devi Parikh, et al. 2019. Rubi: Reducing unimodal biases for visual question answering. In Advances in Neural Information Processing Systems (NeurIPS), pages 839-850. +Abhishek Das, Harsh Agrawal, C Lawrence Zitnick, Devi Parikh, and Dhruv Batra. 2016. 
Human attention in visual question answering: Do humans and deep networks look at the same regions? In Conference on Empirical Methods in Natural Language Processing (EMNLP).

Abhishek Das, Harsh Agrawal, Larry Zitnick, Devi Parikh, and Dhruv Batra. 2017. Human attention in visual question answering: Do humans and deep networks look at the same regions? Computer Vision and Image Understanding (CVIU), 163:90-100.

Chuang Gan, Yandong Li, Haoxiang Li, Chen Sun, and Boqing Gong. 2017. VQS: Linking segmentations to questions and answers for supervised attention in VQA and question-focused semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1811-1820.

Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, page 3.

Gabriel Grand and Yonatan Belinkov. 2019. Adversarial regularization for visual question answering: Strengths, shortcomings, and side effects. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, pages 1-13, Minneapolis, Minnesota. Association for Computational Linguistics (ACL).

Drew A Hudson and Christopher D Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6700-6709.

Dong Huk Park, Lisa Anne Hendricks, Zeynep Akata, Anna Rohrbach, Bernt Schiele, Trevor Darrell, and Marcus Rohrbach. 2018. Multimodal explanations: Justifying decisions and pointing to the evidence. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8779-8788.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017.
Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988-1997. IEEE. +Kushal Kafle and Christopher Kanan. 2017. An analysis of visual question answering algorithms. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1983-1991. IEEE. +Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. DVQA: Understanding data visualizations via question answering. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5648-5656. +Kushal Kafle, Robik Shrestha, and Christopher Kanan. 2019. Challenges and prospects in vision and language research. Frontiers in Artificial Intelligence. +Kushal Kafle, Mohammed Yousefhussien, and Christopher Kanan. 2017. Data augmentation for visual question answering. In Proceedings of the 10th International Conference on Natural Language Generation (INLG), pages 198-202. +Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. 2018. Overcoming language priors in visual question answering with adversarial regularization. In Advances in Neural Information Processing Systems (NeurIPS), pages 1541-1551. +Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NeurIPS). + +Ramprasaath R Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Shalini Ghosh, Larry Heck, Dhruv Batra, and Devi Parikh. 2019. Taking a hint: Leveraging explanations to make vision and language models more grounded. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2591-2600. + +Robik Shrestha, Kushal Kafle, and Christopher Kanan. 2019. Answer them all! toward universal visual question answering models. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). + +Bernard L Welch. 1938. 
The significance of the difference between two means when the population variances are unequal. Biometrika, 29(3/4):350-362. + +Jialin Wu and Raymond Mooney. 2019. Self-critical reasoning for robust visual question answering. In Advances in Neural Information Processing Systems (NeurIPS), pages 8601-8611. + +# A Appendix + +# A.1 Training Details + +We compare four different variants of HINT and SCR to study the causes behind the improvements including the models that are fine-tuned on: 1) relevant regions (state-of-the-art methods) 2) irrelevant regions 3) fixed random regions and 4) variable random regions. For all variants, we fine-tune a pretrained UpDn, which was trained on either VQACpv2 or VQAv2 for 40 epochs with a learning rate of $10^{-3}$ . When fine-tuning with HINT, SCR or our method, we also use the main binary cross entropy VQA loss, whose weight is set to 1. The batch size is set to 384 for all of the experiments. + +# HINT + +Following (Selvaraju et al., 2019), we train HINT on the subset with human-based attention maps (Das et al., 2017), which are available for $9\%$ of the VQA-CPv2 train and test sets. The same subset is used for VQAv2 too. The learning rate is set to $2 \times 10^{-5}$ and the weight for the HINT loss is set to 2. + +# SCR + +Since (Wu and Mooney, 2019) reported that human-based textual explanations (Huk Park et al., 2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. SCR is trained in two phases. For the first phase, which strengthens the influential objects, we use a learning rate of $5 \times 10^{-5}$ , loss weight of 3 + +Table A3: Results on VQA-CPv2 and VQAv2 datasets for the baseline UpDn, visual sensitivity enhancement methods (HINT and SCR) and our own regularization method, including the published (pub.) numbers. + +
| | VQA-CPv2 | VQAv2 |
| --- | --- | --- |
| **Baseline - Without visual grounding** | | |
| UpDn | 0.0110 | 0.0155 |
| **Grounding using human-based cues** | | |
| HINT | 0.1020 | 0.1350 |
| SCR | 0.0340 | -0.0670 |
| **Grounding using irrelevant cues** | | |
| HINT | -0.0048 | -0.0200 |
| SCR | 0.0580 | -0.0100 |
| **Grounding using fixed random cues** | | |
| HINT | 0.0510 | 0.0620 |
| SCR | -0.0250 | -0.0350 |
| **Grounding using variable random cues** | | |
| HINT | 0.0570 | 0.0623 |
| SCR | -0.0380 | 0.0246 |
| **Regularization by zeroing out answers** | | |
| Ours (1% fixed) | -0.1050 | -0.1200 |
| Ours (100%) | -0.0750 | -0.0100 |
and train the model for a maximum of 12 epochs. Then, following (Wu and Mooney, 2019), we initialize the second phase, which criticizes incorrect dominant answers, from the best-performing model of the first phase. In this phase, we use a learning rate of $10^{-4}$ and a loss weight of 1000, applied alongside the loss term used in the first phase. The specified hyperparameters worked better for us than the values provided in the original paper.

# Our Zero-Out Regularizer

Our regularization method, which is a binary cross entropy loss between the model predictions and a zero vector, does not use additional cues or sensitivities and yet achieves near state-of-the-art performance on VQA-CPv2. We set the learning rate to $\frac{2 \times 10^{-6}}{r}$, where $r$ is the ratio of the training instances used for fine-tuning. The weight for the loss is set to 2. We report the performance obtained at the $8^{th}$ epoch.

# A.2 Results

# Correlation with Ground Truth Visual Cues

Following (Selvaraju et al., 2019), we report Spearman's rank correlation between the network's sensitivity scores and human-based scores in Table A3. For HINT and our zero-out regularizer, we use human-based attention maps. For SCR, we use textual explanation-based scores. We find that HINT trained on human attention maps has the highest correlation coefficients for both datasets. However, compared to the baseline, HINT variants trained on random visual cues also show improved correlations. For SCR, we obtain surprising results, with the model trained on irrelevant cues obtaining higher correlation than the one trained on relevant visual cues. As expected, applying our regularizer does not improve rank correlation. Since HINT trained on relevant cues obtains the highest correlation values, it does indicate improvement in visual grounding.
However, as we have seen, the improvements in performance cannot necessarily be attributed to better overlap with ground truth localizations.

# A Note on Qualitative Examples

Presentation of qualitative examples in visual grounding models for VQA suffers from confirmation bias, i.e., while it is possible to find qualitative samples that look at relevant regions to answer questions properly, it is also possible to find samples that produce correct answers without looking at relevant regions. We present examples for such cases in Fig. A3. We next present a quantitative assessment of visual grounding, which does not suffer from confirmation bias.

# Quantitative Assessment of Grounding

In order to truly assess if existing methods are using relevant regions to produce correct answers, we use our proposed metric: Correctly Predicted but Improperly Grounded (CPIG). If the CPIG values are large, then a large portion of correctly predicted samples were not properly grounded. Fig. A4 shows $\%$ CPIG for different variants of HINT trained on human attention-based cues, whereas Fig. A5 shows the metric for different variants of SCR trained on textual explanation-based cues. We observe that HINT and SCR trained on relevant regions have the lowest $\%$ CPIG values (70.24% and 80.22% respectively), indicating that they are better than other variants in finding relevant regions. However, only a small percentage of correctly predicted samples were properly grounded (29.76% and 19.78% for HINT and SCR respectively), even when trained on relevant cues.

# Breakdown by Answer Types

Table A4 shows VQA accuracy for each answer type on the VQA-CPv2 test set. HINT/SCR and our regularizer show large gains in 'Yes/No' questions.

Table A4: VQA accuracy per answer type on the VQA-CPv2 test set.
| | Overall | Yes/No | Num | Other |
| --- | --- | --- | --- | --- |
| **Baseline - Without visual grounding** | | | | |
| UpDn | 40.1 | 41.1 | 12.0 | 47.2 |
| **Grounding using human-based cues** | | | | |
| HINT | 48.2 | 65.2 | 13.8 | 47.5 |
| SCR | 49.1 | 70.3 | 11.5 | 48.0 |
| **Grounding using irrelevant cues** | | | | |
| HINT | 48.0 | 67.2 | 13.5 | 47.1 |
| SCR | 49.2 | 73.4 | 11.5 | 46.4 |
| **Grounding using fixed random cues** | | | | |
| HINT | 48.1 | 66.9 | 13.8 | 46.9 |
| SCR | 49.1 | 74.7 | 12.2 | 45.1 |
| **Grounding using variable random cues** | | | | |
| HINT | 48.1 | 67.1 | 13.9 | 46.9 |
| SCR | 49.2 | 74.7 | 12.2 | 45.1 |
| **Regularization by zeroing out answers** | | | | |
| Ours (1% fixed) | 48.9 | 69.8 | 11.3 | 47.8 |
| Ours (100%) | 48.2 | 66.7 | 11.7 | 47.9 |
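Concretely, the zero-out regularizer described in A.1 is just a binary cross entropy between the predicted answer scores and an all-zero target, with the learning rate scaled by the fraction $r$ of training instances it is applied to. A minimal NumPy sketch; the function names are ours, not from the released code:

```python
import numpy as np

def zero_out_loss(pred_probs, eps=1e-12):
    """BCE between sigmoid answer scores and an all-zero target:
    -mean(log(1 - p)). Minimized when every answer score is 0."""
    p = np.clip(np.asarray(pred_probs, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(np.log(1.0 - p)))

def scaled_lr(base_lr=2e-6, ratio=1.0):
    """Learning rate 2e-6 / r, where r is the fraction of training
    instances the loss is applied to (r = 0.01 for the 1% variant)."""
    return base_lr / ratio

confident = zero_out_loss([0.9, 0.8])  # scores near 1 -> large penalty
hedged = zero_out_loss([0.1, 0.2])     # scores near 0 -> small penalty
lr_1pct = scaled_lr(ratio=0.01)        # 2e-4 for the 1% variant
```

In training this loss would be added (with weight 2) to the main VQA binary cross entropy, so it only pulls answer scores toward zero rather than replacing the task loss.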
We hypothesize that the methods help the model forget linguistic priors, which improves test accuracy on such questions. In the train set of VQA-CPv2, the answer 'no' is more frequent than the answer 'yes', tempting the baseline model to answer 'yes/no' questions with 'no'. However, in the test set, the answer 'yes' is more frequent. Regularization effects caused by HINT/SCR and our method cause the models to weaken this prior, i.e., reduce the tendency to just predict 'no', which increases test accuracy because 'yes' is more frequent in the test set. Next, all of the methods perform poorly on the 'Number (Num)' answer type, showing that the methods find it difficult to answer questions that are most reliant on correct visual grounding, such as localizing and counting objects. Finally, we do not observe large improvements in the 'Other' question type, most likely due to the large number of answers present under this answer type.

# Accuracy versus Size of Train Set

We test our regularization method on random subsets of varying sizes. Fig. A6 shows the results when we apply our loss to $1 - 100\%$ of the training instances. Clearly, the ability to regularize the model does not vary much with respect to the size of the train subset, with the best performance occurring when our loss is applied to $1\%$ of the training instances. These results support our claims that it is possible to improve performance without actually performing visual grounding.

![](images/ca3e9a4978dc3f483b5293dabf13b8bf0cc4988659ebeb071beed64beefa2791.jpg)
Ground Truth Localization

![](images/2757fefb6b456761eb7dd7add8f8b3b2724ccd44e0c6d46a0372188d858935f7.jpg)
HINT trained on relevant cues

![](images/80b92ddea93818ad3715e750f5e330ce3834f8eea8c420ab817a5734d8dee54c.jpg)
HINT trained on irrelevant cues

![](images/912363b4f5d6375c645c95d144c5b28f3c89721ba6b8d0a16f68a4193c3e0b42.jpg)
HINT trained on random cues

# Q: Is this food sweet?
A: yes + +Remarks: The most sensitive regions for irrelevant/random variants do not contain food, yet their answers are correct. + +![](images/9665168496235f2e22ea7f5d8414b42421a6c98f21ae30d2d74dbf93f39b33f6.jpg) + +![](images/5053094c35efed3ee70108a04e730bc00bdbf6d38fc59239600fbe596b87ae86.jpg) + +![](images/547f69e777a018e296c3443e0b905e7efc3c999d5e97280bd3d6b75af71af84e.jpg) + +![](images/b894f3c5273ea867461e9eb4cd8d491cf8c76e362976057a06b1bc2ba5b00b16.jpg) + +# Q: What is the swimmer doing? A: surfing + +Remarks: Models trained on irrelevant/random cues do not look at the swimmer at all, yet produce correct answer. + +![](images/2ce22e147ce6d7426d4109825baef047f49ad2379e1b8776aee3e92af3c1c2c7.jpg) + +![](images/186b5b00c11dbe2e24530771d5f6a90a22e65ed525355844f016d031eb537131.jpg) + +![](images/ff8091fbd0a9b69d591336c481455d4ef4baf987d24a50aee59b8b68004cf2f2.jpg) + +![](images/87c4e4ccba788e3bdcfac3979f9620c35a47f15662d8447de48c21a5b23e6ab4.jpg) + +# Q: Is the sport being played tennis or volleyball? A: tennis + +Remarks: None of the variants look at relevant regions, and yet produce correct answer. + +![](images/8b2260249e4b18f1107c854e314de21fd42b9d526367f1c7d54376e551033792.jpg) +Figure A3: Visualizations of most sensitive visual regions used by different variants of HINT to make predictions. We pick samples where all variants produce correct response to the question. The first column shows ground truth regions and columns 2-4 show visualizations from HINT trained on relevant, irrelevant and fixed random regions respectively. + +![](images/ab723b063031ff84aca2a06efd4fb6119fdd69073d59ee9b6aac823e075ce33e.jpg) + +![](images/0126f367ac4ad392b576a8c9faf56cba04d6c7c9f2f3ed92feb3c0bc10bff090.jpg) + +![](images/07778583c9d9567d29f612618ef5f8f6b0b28cbd75e4d8353fd1cadfc01767da.jpg) + +# Q: Has the boy worn out his jeans? A: yes + +Remarks: All of the variants look at both relevant and irrelevant regions to produce correct answer. 
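Numerically, the CPIG metric used in the quantitative assessment above reduces to a ratio over correctly answered samples. A simplified sketch; the boolean `grounded` flag stands in for the paper's region-ranking test, which we have not reproduced here:

```python
def percent_cpig(correct, grounded):
    """% of Correctly Predicted but Improperly Grounded samples:
    among correctly answered questions, the share whose most
    sensitive region was NOT a ground-truth relevant region."""
    correct_idx = [i for i, c in enumerate(correct) if c]
    if not correct_idx:
        return 0.0
    bad = sum(1 for i in correct_idx if not grounded[i])
    return 100.0 * bad / len(correct_idx)

# toy run: 4 correct answers, only 1 of them properly grounded
correct = [True, True, False, True, True]
grounded = [True, False, True, False, False]
cpig = percent_cpig(correct, grounded)
print(cpig)  # → 75.0
```

A lower value is better: it means fewer correct answers were obtained while attending to irrelevant regions.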
+ +![](images/8a9cfeba8c28b31105e6ab110d8b9a59ae7ba7da9bca6db1cfade1ab01f8c638.jpg) +Figure A4: $\%$ CPIG for baseline and different variants of HINT and our method, computed using ground truth relevant regions taken from human attention maps (lower is better). + +![](images/ba0ec8c310859bf0216b4b42c91ae6ca6acf466aaae7ab7a72d11cf60f7f7f58.jpg) +Figure A5: $\%$ CPIG for baseline and different variants of SCR and our method, computed using ground truth relevant regions taken from textual explanations (txt). + +![](images/135b1bbd9c8545bd64971bd5a54ed819686d5a0e3f61daee2c30ad1c74e056df.jpg) +Figure A6: The regularization effect of our loss is invariant with respect to the dataset size. \ No newline at end of file diff --git a/anegativecaseanalysisofvisualgroundingmethodsforvqa/images.zip b/anegativecaseanalysisofvisualgroundingmethodsforvqa/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e7b3d27c534b6b4a90f054871ebea5843032df43 --- /dev/null +++ b/anegativecaseanalysisofvisualgroundingmethodsforvqa/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c932e593d295c1eb44acc9447b1137233fde0a600f3e82126a932453137de2ae +size 549598 diff --git a/anegativecaseanalysisofvisualgroundingmethodsforvqa/layout.json b/anegativecaseanalysisofvisualgroundingmethodsforvqa/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..eddd3bf50b1263ad36c8b16c4f01af6a97c42e93 --- /dev/null +++ b/anegativecaseanalysisofvisualgroundingmethodsforvqa/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75b3893fe6d670b29973cbb8f0d983f5c213e35e0789f8cbac823bf07a51671a +size 344497 diff --git a/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_content_list.json b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_content_list.json new file mode 100644 index 
0000000000000000000000000000000000000000..b71837897df3f254b87e7c566af00b4b20f60e15 --- /dev/null +++ b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f4ac8a1b3e338465298d1fba713a885e579f99926015fe43beb74b0fcc29949a +size 92866 diff --git a/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_model.json b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_model.json new file mode 100644 index 0000000000000000000000000000000000000000..50aa269d3660e04de11ad09ecade915e90117abe --- /dev/null +++ b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:71f07dc63b3aa3a918d8dcd87fd2dbeec8ff5c75ae466ec3e03de5767b4eda86 +size 108914 diff --git a/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_origin.pdf b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2d1743696c33d6703bcfa5636431c498d0e584aa --- /dev/null +++ b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/59b7003a-0a7f-448b-a204-dab3ab75b123_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:54625d31be5a6f3eb5408be482fc454fb37140d1c1aac2541a33324fe73afe00 +size 718876 diff --git a/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/full.md b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a892e4886a1e502707df543918f8eab09e829415 --- /dev/null +++ 
b/anovelcascadebinarytaggingframeworkforrelationaltripleextraction/full.md @@ -0,0 +1,328 @@ +# A Novel Cascade Binary Tagging Framework for Relational Triple Extraction

Zhepei Wei $^{1,2}$ , Jianlin Su $^{4}$ , Yue Wang $^{5}$ , Yuan Tian $^{1,2*}$ , Yi Chang $^{1,2,3*}$

$^{1}$ School of Artificial Intelligence, Jilin University

$^{2}$ Key Laboratory of Symbolic Computation and Knowledge Engineering, Jilin University

$^{3}$ International Center of Future Science, Jilin University

$^{4}$ Shenzhen Zhuiyi Technology Co., Ltd.

$^{5}$ School of Information and Library Science, University of North Carolina at Chapel Hill

weizp19@mails.jlu.edu.cn, bojonesu@wezhuiyi.com, wangyue@email.unc.edu,

yuantian@jlu.edu.cn, yichang@jlu.edu.cn

# Abstract

Extracting relational triples from unstructured text is crucial for large-scale knowledge graph construction. However, few existing works excel in solving the overlapping triple problem where multiple relational triples in the same sentence share the same entities. In this work, we introduce a fresh perspective to revisit the relational triple extraction task and propose a novel cascade binary tagging framework (CASREL) derived from a principled problem formulation. Instead of treating relations as discrete labels as in previous works, our new framework models relations as functions that map subjects to objects in a sentence, which naturally handles the overlapping problem. Experiments show that the CASREL framework already outperforms state-of-the-art methods even when its encoder module uses a randomly initialized BERT encoder, showing the power of the new tagging framework. It enjoys a further performance boost when employing a pre-trained BERT encoder, outperforming the strongest baseline by 17.5 and 30.2 absolute gain in F1-score on two public datasets NYT and WebNLG, respectively.
In-depth analysis on different scenarios of overlapping triples shows that the method delivers consistent performance gain across all these scenarios. The source code and data are released online$^{1}$.

# 1 Introduction

The key ingredient of a knowledge graph is relational facts, most of which consist of two entities connected by a semantic relation. These facts are in the form of (subject, relation, object), or $(s, r, o)$, referred to as relational triples. Extracting relational triples from natural language text is a crucial step towards constructing large-scale knowledge graphs.
This overlapping triple problem directly challenges conventional sequence tagging schemes that assume each token bears only one tag (Zheng et al., 2017). It also brings significant difficulty to relation classification approaches where an entity pair is assumed to hold at most one relation (Miwa and Bansal, 2016). Zeng et al. (2018) were among the first to consider the overlapping triple problem in relational triple extraction. They introduced the categories for different overlapping patterns as shown in Figure 1 and proposed a sequence-to-sequence (Seq2Seq) model with a copy mechanism to extract triples. Based on the Seq2Seq model, they further investigated the impact of extraction order (Zeng et al., 2019) and gained considerable improvement with reinforcement learning. Fu et al. (2019) also studied the overlapping triple problem by modeling text as relational graphs with a graph convolutional network (GCN) based model.
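The three overlapping categories can be made concrete with toy triples; the entities below are our own illustrations, not the examples from Figure 1:

```python
# Normal: the two triples share no entities.
normal = [("Obama", "born_in", "Honolulu"),
          ("Trump", "president_of", "USA")]

# EntityPairOverlap (EPO): the same (subject, object) pair holds
# under more than one relation.
epo = [("Obama", "born_in", "USA"),
       ("Obama", "president_of", "USA")]

# SingleEntityOverlap (SEO): the two triples share exactly one entity.
seo = [("Obama", "born_in", "Honolulu"),
       ("Honolulu", "located_in", "USA")]

def shared_entities(t1, t2):
    """Entities common to two (s, r, o) triples."""
    return {t1[0], t1[2]} & {t2[0], t2[2]}

assert shared_entities(*normal) == set()       # Normal: no overlap
assert shared_entities(*epo) == {"Obama", "USA"}  # EPO: both shared
assert shared_entities(*seo) == {"Honolulu"}   # SEO: one shared entity
```

The EPO case is exactly the one that breaks a single-tag-per-token scheme: "Obama" and "USA" each need to participate in two triples at once.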
At the core of the framework is the fresh perspective that instead of treating relations as discrete labels on entity pairs, we can model relations as functions that map subjects to objects. More precisely, instead of learning relation classifiers $f(s,o) \to r$, we learn relation-specific taggers $f_{r}(s) \to o$, each of which recognizes the possible object(s) of a given subject under a specific relation, or returns no object, indicating that there is no triple with the given subject and relation. Under this framework, triple extraction is a two-step process: first we identify all possible subjects in a sentence; then for each subject, we apply relation-specific taggers to simultaneously identify all possible relations and the corresponding objects.

We implement the above idea in CASREL, an end-to-end cascade binary tagging framework. It consists of a BERT-based encoder module, a subject tagging module, and a relation-specific object tagging module. Empirical experiments show that the proposed framework outperforms state-of-the-art methods by a large margin even when the BERT encoder is not pre-trained, showing the superiority of the new framework itself. The framework enjoys a further large performance gain after adopting a pre-trained BERT encoder, showing the importance of rich prior knowledge in the triple extraction task.

This work has the following main contributions:

1. We introduce a fresh perspective to revisit the relational triple extraction task with a principled problem formulation, which implies a general algorithmic framework that addresses the overlapping triple problem by design.
2. We instantiate the above framework as a novel cascade binary tagging model on top of a Transformer encoder. This allows the model to combine the power of the novel tagging framework with the prior knowledge in pretrained large-scale language models.
3.
Extensive experiments on two public datasets show that the proposed framework overwhelmingly outperforms state-of-the-art methods, achieving 17.5 and 30.2 absolute gain in F1-score on the two datasets respectively. Detailed analyses show that our model gains consistent improvement in all scenarios.

# 2 Related Work

Extracting relational triples from unstructured natural language texts is a well-studied task in information extraction (IE). It is also an important step for the construction of large-scale knowledge graphs (KGs) such as DBpedia (Auer et al., 2007), Freebase (Bollacker et al., 2008) and Knowledge Vault (Dong et al., 2014).

Early works (Mintz et al., 2009; Gormley et al., 2015) address the task in a pipelined manner. They extract relational triples in two separate steps: 1) first run named entity recognition (NER) on the input sentence to identify all entities and 2) then run relation classification (RC) on pairs of extracted entities. The pipelined methods usually suffer from the error propagation problem and neglect the relevance between the two steps. To ease these issues, many joint models that aim to learn entities and relations jointly have been proposed. Traditional joint models (Yu and Lam, 2010; Li and Ji,
The separated decoding setting leads to a separated training objective for entity and relation, which brings a drawback that the triple-level dependencies between predicted entities and relations cannot be fully exploited. Different from those works, Zheng et al. (2017) achieves joint decoding by introducing a unified tagging scheme and convert the task of relational triple extraction to an end-to-end sequence tagging problem without need of NER or RC. The proposed method can directly model relational triples as a whole at the triple level since the information of entities and relations is integrated into the unified tagging scheme. + +Though joint models (with or without joint decoding) have been well studied, most previous works ignore the problem of overlapping relational triples. Zeng et al. (2018) introduced three patterns of overlapping triples and try to address the problem via a sequence-to-sequence model with copy mechanism. Recently, Fu et al. (2019) also study the problem and propose a graph convolutional networks (GCNs) based method. Despite their initial success, both methods still treat the relations as discrete labels of entity pairs, making it quite hard for the model to learn overlapping triples. + +Our framework is based on a training objective that is carefully designed to directly model the relational triples as a whole like (Zheng et al., 2017), i.e., to learn both entities and relations through joint decoding. Moreover, we model the relations as functions that map subjects to objects, which makes it crucially different from previous works. + +# 3 The CASREL Framework + +The goal of relational triple extraction is to identify all possible (subject, relation, object) triples in a sentence, where some triples may share the same entities as subjects or objects. Towards this goal, we directly model the triples and design a training + +objective right at the triple level. 
This is in contrast to previous approaches like (Fu et al., 2019) where the training objective is defined separately for entities and relations without explicitly modeling their integration at the triple level. + +Formally, given annotated sentence $x_{j}$ from the training set $D$ and a set of potentially overlapping triples $T_{j} = \{(s,r,o)\}$ in $x_{j}$ , we aim to maximize the data likelihood of the training set $D$ : + +$$ +\begin{array}{l} \prod_ {j = 1} ^ {| D |} \left[ \prod_ {(s, r, o) \in T _ {j}} p ((s, r, o) | x _ {j}) \right] (1) \\ = \prod_ {j = 1} ^ {| D |} \left[ \prod_ {s \in T _ {j}} p (s | x _ {j}) \prod_ {(r, o) \in T _ {j} | s} p ((r, o) | s, x _ {j}) \right] (2) \\ = \prod_ {j = 1} ^ {| D |} \left[ \prod_ {s \in T _ {j}} p (s \mid x _ {j}) \prod_ {r \in T _ {j} \mid s} p _ {r} (o \mid s, x _ {j}) \prod_ {r \in R \backslash T _ {j} \mid s} p _ {r} \left(o _ {\varnothing} \mid s, x _ {j}\right) \right]. (3) \\ \end{array} +$$ + +Here we slightly abuse the notation $T_{j}$ . $s \in T_{j}$ denotes a subject appearing in the triples in $T_{j}$ . $T_{j}|s$ is the set of triples led by subject $s$ in $T_{j}$ . $(r,o) \in T_{j}|s$ is a $(r,o)$ pair in the triples led by subject $s$ in $T_{j}$ . $R$ is the set of all possible relations. $R\backslash T_j|s$ denotes all relations except those led by $s$ in $T_{j}$ . $o_{\emptyset}$ denotes a "null" object (explained below). + +Eq. (2) applies the chain rule of probability. Eq. (3) exploits the crucial fact that for a given subject $s$ , any relation relevant to $s$ (those in $T_{j}|s$ ) would lead to corresponding objects in the sentence, and all other relations would necessarily have no object in the sentence, i.e. a "null" object. + +This formulation provides several benefits. First, since the data likelihood starts at the triple level, optimizing this likelihood corresponds to directly optimizing the final evaluation criteria at the triple level. 
Second, by making no assumption on how multiple triples may share entities in a sentence, it handles the overlapping triple problem by design. Third, the decomposition in Eq. (3) inspires a novel tagging scheme for triple extraction: we learn a subject tagger $p(s|x_j)$ that recognizes subject entities in a sentence; and for each relation $r$, we learn an object tagger $p_r(o|s,x_j)$ that recognizes relation-specific objects for a given subject. In this way we can model each relation as a function that maps subjects to objects, as opposed to classifying relations for (subject, object) pairs.

Indeed, this novel tagging scheme allows us to extract multiple triples at once: we first run the subject tagger to find all possible subjects in the sentence, and then for each subject found, apply relation-specific object taggers to find all relevant relations and the corresponding objects.

The key components in the above general framework, i.e., the subject tagger and relation-specific object taggers, can be instantiated in many ways. In this paper, we instantiate them as binary taggers on top of a deep bidirectional Transformer, BERT (Devlin et al., 2019). We describe the details below.

# 3.1 BERT Encoder

The encoder module extracts feature information $\mathbf{x}_j$ from sentence $x_j$, which feeds into the subsequent tagging modules$^{2}$. We employ a pre-trained BERT model (Devlin et al., 2019) to encode the context information.

Here we briefly review BERT, a multi-layer bidirectional Transformer-based language representation model. It is designed to learn deep representations by jointly conditioning on both the left and right context of each word, and it has recently been proven surprisingly effective in many downstream tasks (Zhong et al., 2019). Specifically, it is composed of a stack of $N$ identical Transformer blocks. We denote the Transformer block as $\operatorname{Trans}(\mathbf{x})$, in which $\mathbf{x}$ represents the input vector.
The detailed operations are as follows:

$$
\mathbf{h}_{0} = \mathbf{S}\mathbf{W}_{s} + \mathbf{W}_{p} \tag{4}
$$

$$
\mathbf{h}_{\alpha} = \operatorname{Trans}(\mathbf{h}_{\alpha-1}), \quad \alpha \in [1, N] \tag{5}
$$

where $\mathbf{S}$ is the matrix of one-hot vectors of subword$^3$ indices in the input sentence, $\mathbf{W}_s$ is the subword embedding matrix, $\mathbf{W}_p$ is the positional embedding matrix, where $p$ represents the position index in the input sequence, $\mathbf{h}_{\alpha}$ is the hidden state vector, i.e., the context representation of the input sentence at the $\alpha$-th layer, and $N$ is the number of Transformer blocks. Note that in our work the input is a single text sentence instead of a sentence pair; hence the segmentation embedding described in the original BERT paper is not taken into account in Eq. (4). For a more comprehensive description of the Transformer structure, we refer readers to Vaswani et al. (2017).

# 3.2 Cascade Decoder

Now we describe our instantiation of the novel cascade binary tagging scheme inspired by the previous formulation. The basic idea is to extract triples in two cascade steps. First, we detect subjects from the input sentence. Then for each candidate subject, we check all possible relations to see if a relation can associate objects in the sentence with that subject. Corresponding to the two steps, the cascade decoder consists of two modules as illustrated in Figure 2: a subject tagger; and a set of relation-specific object taggers.

Subject Tagger The low level tagging module is designed to recognize all possible subjects in the input sentence by directly decoding the encoded vector $\mathbf{h}_N$ produced by the $N$-layer BERT encoder.
More precisely, it adopts two identical binary classifiers to detect the start and end positions of subjects, respectively, by assigning each token a binary tag (0/1) that indicates whether the current token corresponds to a start or end position of a subject. The detailed operations of the subject tagger on each token are as follows:

$$
p_{i}^{start\_s} = \sigma\big(\mathbf{W}_{start}\mathbf{x}_{i} + \mathbf{b}_{start}\big) \tag{6}
$$

$$
p_{i}^{end\_s} = \sigma\big(\mathbf{W}_{end}\mathbf{x}_{i} + \mathbf{b}_{end}\big) \tag{7}
$$

where $p_i^{start\_s}$ and $p_i^{end\_s}$ represent the probability of identifying the $i$-th token in the input sequence as the start and end position of a subject, respectively. The corresponding token is assigned a tag 1 if the probability exceeds a certain threshold, and a tag 0 otherwise. $\mathbf{x}_i$ is the encoded representation of the $i$-th token in the input sequence, i.e., $\mathbf{x}_i = \mathbf{h}_N[i]$; $\mathbf{W}_{(\cdot)}$ represents a trainable weight matrix, $\mathbf{b}_{(\cdot)}$ the corresponding bias, and $\sigma$ the sigmoid activation function.

The subject tagger optimizes the following likelihood function to identify the span of subject $s$ given a sentence representation $\mathbf{x}$:

$$
p_{\theta}(s\mid\mathbf{x}) = \prod_{t\in\{start\_s,\,end\_s\}}\prod_{i=1}^{L}\left(p_{i}^{t}\right)^{\mathbf{I}\{y_{i}^{t}=1\}}\left(1-p_{i}^{t}\right)^{\mathbf{I}\{y_{i}^{t}=0\}} \tag{8}
$$

where $L$ is the length of the sentence, $\mathbf{I}\{z\} = 1$ if $z$ is true and 0 otherwise, $y_{i}^{start\_s}$ is the binary tag marking the subject start position for the $i$-th token in $\mathbf{x}$, and $y_{i}^{end\_s}$ likewise marks the subject end position.
The parameters $\theta = \{\mathbf{W}_{start}, \mathbf{b}_{start}, \mathbf{W}_{end}, \mathbf{b}_{end}\}$.

![](images/0e0a40ac26198ecdb46559aa390728b15380f9935022f569c7b3287be8a008f7.jpg)
Figure 2: An overview of the proposed CASREL framework. In this example, there are three candidate subjects detected at the low level, while the presented 0/1 tags at the high level are specific to the first subject Jackie R. Brown, i.e., a snapshot of the iteration state when $k = 1$ is shown above. For the subsequent iterations ($k = 2, 3$), the results at the high level will change, reflecting the different triples detected. For instance, when $k = 2$, the high-level orange (green) blocks will change to 0 (1), respectively, reflecting the relational triple (Washington, Capital_of, United States Of America) led by the second candidate subject Washington.

For the detection of multiple subjects, we adopt the nearest start-end pair matching principle to decide the span of any subject based on the results of the start and end position taggers. For example, as shown in Figure 2, the nearest end token to the first start token "Jackie" is "Brown"; hence the detected span of the first subject will be "Jackie R. Brown". Notably, when matching an end token to a given start token, we do not consider tokens whose position is prior to the position of the given token. Such a matching strategy is able to maintain the integrity of any entity span, as long as the start and end positions are both correctly detected, owing to the natural continuity of any entity span in a given sentence.

Relation-specific Object Taggers The high level tagging module simultaneously identifies the objects as well as the involved relations with respect to the subjects obtained at the low level. As Figure 2 shows, it consists of a set of relation-specific object taggers, one for each possible relation, each with the same structure as the subject tagger in the low level module.
All object taggers identify the corresponding object(s) for each detected subject at the same time.

Unlike the subject tagger, which directly decodes the encoded vector $\mathbf{h}_N$, the relation-specific object tagger takes the subject features into account as well. The detailed operations of the relation-specific object tagger on each token are as follows:

$$
p_{i}^{start\_o} = \sigma\big(\mathbf{W}_{start}^{r}(\mathbf{x}_{i} + \mathbf{v}_{sub}^{k}) + \mathbf{b}_{start}^{r}\big) \tag{9}
$$

$$
p_{i}^{end\_o} = \sigma\big(\mathbf{W}_{end}^{r}(\mathbf{x}_{i} + \mathbf{v}_{sub}^{k}) + \mathbf{b}_{end}^{r}\big) \tag{10}
$$

where $p_i^{start\_o}$ and $p_i^{end\_o}$ represent the probability of identifying the $i$-th token in the input sequence as the start and end position of an object, respectively, and $\mathbf{v}_{sub}^k$ represents the encoded representation vector of the $k$-th subject detected by the low level module.

We iteratively apply the same decoding process to each subject. Note that since a subject is usually composed of multiple tokens, we need to keep the dimensions of $\mathbf{x}_i$ and $\mathbf{v}_{sub}^k$ consistent to make their additions in Eq. (9) and Eq. (10) possible. To do so, we take the averaged vector representation of the tokens between the start and end positions of the $k$-th subject as $\mathbf{v}_{sub}^k$.
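The cascade decoding just described (threshold the start/end probabilities, pair each start with the nearest following end, average the subject's token vectors into $\mathbf{v}_{sub}^k$, and re-run per-relation taggers) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the token representations and all weights are random stand-ins for the BERT outputs and trained parameters, and biases are omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode_spans(p_start, p_end, threshold=0.5):
    """Nearest start-end pair matching: pair each start token with the
    closest end token at or after it (earlier tokens are ignored)."""
    starts = [i for i, p in enumerate(p_start) if p >= threshold]
    ends = [i for i, p in enumerate(p_end) if p >= threshold]
    spans = []
    for s in starts:
        candidates = [e for e in ends if e >= s]
        if candidates:
            spans.append((s, min(candidates)))
    return spans

rng = np.random.default_rng(0)
L, d = 7, 16                        # toy sentence length and hidden size
h = rng.normal(size=(L, d))         # stand-in for the BERT output h_N

# Subject tagger (Eqs. 6-7, biases omitted): per-token start/end probs.
W_start, W_end = rng.normal(size=d), rng.normal(size=d)
p_start = sigmoid(h @ W_start)
p_end = sigmoid(h @ W_end)
subjects = decode_spans(p_start, p_end)

# Relation-specific object tagger (Eqs. 9-10) for one relation r:
# v_sub is the average of the subject's token vectors, added to every x_i.
Wr_start, Wr_end = rng.normal(size=d), rng.normal(size=d)
for s, e in subjects:
    v_sub = h[s:e + 1].mean(axis=0)
    x = h + v_sub                   # broadcasts v_sub over all tokens
    objects = decode_spans(sigmoid(x @ Wr_start), sigmoid(x @ Wr_end))
    # an empty `objects` list plays the role of the "null" object for r
```

With trained parameters, `subjects` and `objects` would hold the detected token spans; with the random weights above they are arbitrary, so the sketch only exercises the control flow.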
The object tagger for relation $r$ optimizes the following likelihood function to identify the span of object $o$ given a sentence representation $\mathbf{x}$ and a subject $s$:

$$
p_{\phi_{r}}(o\mid s,\mathbf{x}) = \prod_{t\in\{start\_o,\,end\_o\}}\prod_{i=1}^{L}\left(p_{i}^{t}\right)^{\mathbf{I}\{y_{i}^{t}=1\}}\left(1-p_{i}^{t}\right)^{\mathbf{I}\{y_{i}^{t}=0\}} \tag{11}
$$

where $y_{i}^{start\_o}$ is the binary tag marking the object start position for the $i$-th token in $\mathbf{x}$, and $y_{i}^{end\_o}$ is the tag marking the object end position for the $i$-th token. For a "null" object $o_{\varnothing}$, the tags are $y_{i}^{start\_o} = y_{i}^{end\_o} = 0$ for all $i$. The parameters $\phi_r = \{\mathbf{W}_{start}^r, \mathbf{b}_{start}^r, \mathbf{W}_{end}^r, \mathbf{b}_{end}^r\}$.

Note that in the high level tagging module, the relation is also decided by the output of the object taggers. For example, the relation "Work_in" does not hold between the detected subject "Jackie R. Brown" and the candidate object "Washington". Therefore, the object tagger for the relation "Work_in" will not identify the span of "Washington", i.e., the outputs of both the start and end position taggers are all zeros, as shown in Figure 2. In contrast, the relation "Birth_place" holds between "Jackie R. Brown" and "Washington", so the corresponding object tagger outputs the span of the candidate object "Washington". In this setting, the high level module is capable of simultaneously identifying the relations and objects with regard to the subjects detected by the low level module.

# 3.3 Data Log-likelihood Objective

Taking the log of Eq.
(3), the objective $J(\Theta)$ is:

$$
\sum_{j=1}^{|D|}\left[\sum_{s\in T_{j}}\log p_{\theta}(s\mid\mathbf{x}_{j}) + \sum_{r\in T_{j}|s}\log p_{\phi_{r}}(o\mid s,\mathbf{x}_{j}) + \sum_{r\in R\backslash T_{j}|s}\log p_{\phi_{r}}\big(o_{\varnothing}\mid s,\mathbf{x}_{j}\big)\right] \tag{12}
$$

where the parameters $\Theta = \{\theta, \{\phi_r\}_{r\in R}\}$, $p_{\theta}(s|\mathbf{x})$ is defined in Eq. (8), and $p_{\phi_r}(o|s,\mathbf{x})$ is defined in Eq. (11). We train the model by maximizing $J(\Theta)$ with the Adam stochastic gradient descent optimizer (Kingma and Ba, 2014) over shuffled mini-batches.

# 4 Experiments

# 4.1 Experimental Setting

Datasets and Evaluation Metrics We evaluate the framework on two public datasets, NYT (Riedel
| Category | NYT Train | NYT Test | WebNLG Train | WebNLG Test |
| --- | --- | --- | --- | --- |
| Normal | 37013 | 3266 | 1596 | 246 |
| EPO | 9782 | 978 | 227 | 26 |
| SEO | 14735 | 1297 | 3406 | 457 |
| ALL | 56195 | 5000 | 5019 | 703 |
Table 1: Statistics of the datasets. Note that a sentence can belong to both the EPO class and the SEO class.

et al., 2010) and WebNLG (Gardent et al., 2017). The NYT dataset was originally produced by the distant supervision method. It consists of 1.18M sentences with 24 predefined relation types. The WebNLG dataset was originally created for Natural Language Generation (NLG) tasks and adapted by Zeng et al. (2018) for the relational triple extraction task. It contains 246 predefined relation types. The sentences in both datasets commonly contain multiple relational triples, so the NYT and WebNLG datasets are well suited as a testbed for evaluating models on extracting overlapping relational triples$^4$. We use the datasets released by Zeng et al. (2018), in which NYT contains 56195 sentences for training, 5000 sentences for validation, and 5000 sentences for testing, and WebNLG contains 5019 sentences for training, 500 sentences for validation, and 703 sentences for testing. According to the different overlapping patterns of relational triples, we split the sentences into three categories, namely Normal, EntityPairOverlap (EPO) and SingleEntityOverlap (SEO), for detailed experiments on the different types of overlapping relational triples. The statistics of the two datasets are given in Table 1.

Following previous work (Fu et al., 2019), an extracted relational triple (subject, relation, object) is regarded as correct only if the relation and the heads of both the subject and the object are all correct. For a fair comparison, we report the standard micro Precision (Prec.), Recall (Rec.) and F1-score, in line with the baselines.

Implementation Details The hyper-parameters are determined on the validation set. More implementation details are described in Appendix A.
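The micro-averaged scores used throughout can be computed as follows (a minimal sketch; the triples in `pred` and `gold` are invented and assumed to be already reduced to the compared form, e.g. relation plus entity heads under the matching criterion above):

```python
def micro_prf(pred, gold):
    """Micro precision/recall/F1 over per-sentence sets of triples.
    A predicted triple counts as a true positive only if it matches a
    gold triple exactly under the chosen comparison form."""
    tp = sum(len(p & g) for p, g in zip(pred, gold))
    n_pred = sum(len(p) for p in pred)
    n_gold = sum(len(g) for g in gold)
    prec = tp / n_pred if n_pred else 0.0
    rec = tp / n_gold if n_gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Toy example with invented triples: one of two predictions is correct.
gold = [{("Brown", "Birth_place", "Washington"),
         ("Washington", "Capital_of", "USA")}]
pred = [{("Brown", "Birth_place", "Washington"),
         ("Brown", "Work_in", "Washington")}]
print(micro_prf(pred, gold))  # (0.5, 0.5, 0.5)
```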
| Method | NYT Prec. | NYT Rec. | NYT F1 | WebNLG Prec. | WebNLG Rec. | WebNLG F1 |
| --- | --- | --- | --- | --- | --- | --- |
| NovelTagging (Zheng et al., 2017) | 62.4 | 31.7 | 42.0 | 52.5 | 19.3 | 28.3 |
| CopyR$_{OneDecoder}$ (Zeng et al., 2018) | 59.4 | 53.1 | 56.0 | 32.2 | 28.9 | 30.5 |
| CopyR$_{MultiDecoder}$ (Zeng et al., 2018) | 61.0 | 56.6 | 58.7 | 37.7 | 36.4 | 37.1 |
| GraphRel$_{1p}$ (Fu et al., 2019) | 62.9 | 57.3 | 60.0 | 42.3 | 39.2 | 40.7 |
| GraphRel$_{2p}$ (Fu et al., 2019) | 63.9 | 60.0 | 61.9 | 44.7 | 41.1 | 42.9 |
| CopyR$_{RL}$ (Zeng et al., 2019) | 77.9 | 67.2 | 72.1 | 63.3 | 59.9 | 61.6 |
| CopyR$^{*}_{RL}$ | 72.8 | 69.4 | 71.1 | 60.9 | 61.1 | 61.0 |
| CASREL$_{random}$ | 81.5 | 75.7 | 78.5 | 84.7 | 79.5 | 82.0 |
| CASREL$_{LSTM}$ | 84.2 | 83.0 | 83.6 | 86.9 | 80.6 | 83.7 |
| CASREL | 89.7 | 89.5 | 89.6 | 93.4 | 90.1 | 91.8 |
Table 2: Results of different methods on the NYT and WebNLG datasets. Our re-implementation is marked by *.

# 4.2 Experimental Results

Compared Methods We compare our model with several strong state-of-the-art models, namely NovelTagging (Zheng et al., 2017), CopyR (Zeng et al., 2018), GraphRel (Fu et al., 2019) and CopyR$_{RL}$ (Zeng et al., 2019). The reported results for the above baselines are directly copied from the originally published literature.

Note that we instantiate the CASREL framework on top of a pre-trained BERT model to combine the power of the proposed novel tagging scheme with the pre-learned prior knowledge for better performance. To evaluate the impact of introducing the Transformer-based BERT model, we conduct a set of ablation tests. CASREL$_{random}$ is the framework where all parameters of BERT are randomly initialized; CASREL$_{LSTM}$ is the framework instantiated on an LSTM-based structure as in (Zheng et al., 2017) with pre-trained GloVe embeddings (Pennington et al., 2014); CASREL is the full-fledged framework using pre-trained BERT weights.

Main Results Table 2 shows the results of the different baselines for relational triple extraction on the two datasets. The CASREL model overwhelmingly outperforms all the baselines in terms of all three evaluation metrics and achieves encouraging $17.5\%$ and $30.2\%$ improvements in F1-score over the best state-of-the-art method (Zeng et al., 2019) on the NYT and WebNLG datasets, respectively. Even without taking advantage of pre-trained BERT, CASREL$_{random}$ and CASREL$_{LSTM}$ are still competitive with existing state-of-the-art models. This validates the utility of the proposed cascade decoder, which adopts a novel binary tagging scheme. The performance improvement from CASREL$_{random}$ to CASREL highlights the importance of the prior knowledge in a pre-trained language model.
We can also observe from the table that there is a significant gap between the performance on the NYT and WebNLG datasets for existing models, and we believe this gap is due to their drawbacks in dealing with overlapping triples. More precisely, as presented in Table 1, the NYT dataset is mainly comprised of Normal class sentences, while the majority of sentences in the WebNLG dataset belong to the EPO and SEO classes. This inconsistent data distribution of the two datasets leads to a comparatively better performance on NYT and a worse performance on WebNLG for all the baselines, exposing their drawbacks in extracting overlapping relational triples. In contrast, the CASREL model and its variants (i.e., CASREL$_{random}$ and CASREL$_{LSTM}$) all achieve a stable and competitive performance on both the NYT and WebNLG datasets, demonstrating the effectiveness of the proposed framework in solving the overlapping problem.

Detailed Results on Different Types of Sentences To further study the capability of the proposed CASREL framework in extracting overlapping relational triples, we conduct two extended experiments on different types of sentences and compare the performance with previous works.

The detailed results on the three different overlapping patterns are presented in Figure 3. It can be seen that the performance of most baselines on Normal, EPO and SEO presents a decreasing trend, reflecting the increasing difficulty of extracting relational triples from sentences with different overlapping patterns. That is, among the three overlapping patterns, the Normal class is the easiest pattern, while the EPO and SEO classes are the relatively harder ones for baseline models to extract.
In contrast, the proposed CASREL model attains consistently strong performance over all three overlapping patterns, especially for those hard patterns.

![](images/0d89f72c2801dd32f2151f66e6e3a3c734302f69cea3b243676e862365777fd9.jpg)
(a) F1 of Normal Class

![](images/f243b338240c52f5b3869c6a3e258e65e9e5ed4809580828357283b63218c.jpg)
(b) F1 of EntityPairOverlap Class

![](images/47f0d3da279e1fdfbf79ca2c71b58cc4369ee1aeb3b11ae6456bab1550571bf8.jpg)
(c) F1 of SingleEntityOverlap Class
Figure 3: F1-score of extracting relational triples from sentences with different overlapping patterns.

| Method | NYT N=1 | N=2 | N=3 | N=4 | N≥5 | WebNLG N=1 | N=2 | N=3 | N=4 | N≥5 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CopyR$_{OneDecoder}$ | 66.6 | 52.6 | 49.7 | 48.7 | 20.3 | 65.2 | 33.0 | 22.2 | 14.2 | 13.2 |
| CopyR$_{MultiDecoder}$ | 67.1 | 58.6 | 52.0 | 53.6 | 30.0 | 59.2 | 42.5 | 31.7 | 24.2 | 30.0 |
| GraphRel$_{1p}$ | 69.1 | 59.5 | 54.4 | 53.9 | 37.5 | 63.8 | 46.3 | 34.7 | 30.8 | 29.4 |
| GraphRel$_{2p}$ | 71.0 | 61.5 | 57.4 | 55.1 | 41.1 | 66.0 | 48.3 | 37.0 | 32.1 | 32.1 |
| CopyR$^{*}_{RL}$ | 71.7 | 72.6 | 72.5 | 77.9 | 45.9 | 63.4 | 62.2 | 64.4 | 57.2 | 55.7 |
| CASREL | 88.2 | 90.3 | 91.9 | 94.2 | 83.7 (+37.8) | 89.3 | 90.8 | 94.2 | 92.4 | 90.9 (+35.2) |

Table 3: F1-score of extracting relational triples from sentences with different numbers (denoted as N) of triples.

We also validate CASREL's capability in extracting relational triples from sentences with different numbers of triples. We split the sentences into five classes; Table 3 shows the results. Again, the CASREL model achieves excellent performance over all five classes. Though it is not surprising that the performance of most baselines decreases with the increasing number of relational triples a sentence contains, some patterns can still be observed from the performance changes of the different models. Compared to previous works devoted to solving the overlapping problem in relational triple extraction, our model suffers the least from the increasing complexity of the input sentence. Though the CASREL model gains considerable improvements on all five classes compared to the best state-of-the-art method, CopyR$_{RL}$ (Zeng et al., 2019), the greatest improvements in F1-score on the two datasets both come from the most difficult class ($N \geq 5$), indicating that our model is more suitable for complicated scenarios than the baselines.

Both of these experiments validate the superiority of the proposed cascade binary tagging framework in extracting multiple (possibly overlapping) relational triples from complicated sentences compared to existing methods. Previous works have to explicitly predict all possible relation types contained in a given sentence, which is quite a challenging task, and thus many relations are missing in their extracted results. In contrast, our CASREL model side-steps the prediction of relation types and tends to extract as many relational triples as possible from a given sentence. We attribute this to the relation-specific object tagger setting in the high level tagging module of the cascade decoder, which considers all the relation types simultaneously.
# 5 Conclusion

In this paper, we introduce a novel cascade binary tagging framework (CASREL) derived from a principled problem formulation for relational triple extraction. Instead of modeling relations as discrete labels of entity pairs, we model the relations as functions that map subjects to objects, which provides a fresh perspective for revisiting the relational triple extraction task. As a consequence, our model can simultaneously extract multiple relational triples from sentences, without suffering from the overlapping problem. We conduct extensive experiments on two widely used datasets to validate the effectiveness of the proposed CASREL framework. Experimental results show that our model overwhelmingly outperforms state-of-the-art baselines over different scenarios, especially on the extraction of overlapping relational triples.

# Acknowledgments

The authors would like to thank the anonymous referees for their valuable comments. This work is supported by the National Natural Science Foundation of China (No.61976102, No.U19A2065).

# References

Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In Proceedings of the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference, pages 722-735.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247-1250.
Yee Seng Chan and Dan Roth. 2011. Exploiting syntactico-semantic structures for relation extraction. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 551-560. Association for Computational Linguistics.
Dai Dai, Xinyan Xiao, Yajuan Lyu, Shan Dou, Qiao-qiao She, and Haifeng Wang.
2019. Joint extraction of entities and overlapping relations using position-attentive sequence labeling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6300-6308.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 601-610.
Tsu-Jui Fu, Peng-Hsuan Li, and Wei-Yun Ma. 2019. GraphRel: Modeling text as relational graphs for joint entity and relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1409-1418.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 179-188.
Matthew R Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1774-1784.
Pankaj Gupta, Hinrich Schütze, and Bernt Andrassy. 2016. Table filling multi-task recurrent neural network for joint entity and relation extraction. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2537-2547.
Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld.
2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 541-550.
Arzoo Katiyar and Claire Cardie. 2017. Going out on a limb: Joint extraction of entity mentions and relations without dependency trees. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 917-928.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 402-412.
Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Annual Meeting of the Association for Computational Linguistics (ACL).
Liyuan Liu, Xiang Ren, Qi Zhu, Shi Zhi, Huan Gui, Heng Ji, and Jiawei Han. 2017. Heterogeneous supervision for relation extraction: A representation learning approach. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 46-56.
Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003-1011.
Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1105-1116.
Makoto Miwa and Yutaka Sasaki. 2014.
Modeling joint entity and relation extraction with table representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1858-1869.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Xiang Ren, Zeqiu Wu, Wenqi He, Meng Qu, Clare R Voss, Heng Ji, Tarek F Abdelzaher, and Jiawei Han. 2017. CoType: Joint extraction of typed entities and relations with knowledge bases. In Proceedings of the 26th International Conference on World Wide Web, pages 1015-1024.
Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 148-163.
Ryuichi Takanobu, Tianyang Zhang, Jiexi Liu, and Minlie Huang. 2019. A hierarchical framework for relation extraction with reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7072-7079.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Xiaofeng Yu and Wai Lam. 2010. Jointly identifying entities and extracting relations in encyclopedia text via a graphical model approach. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1399-1407. Association for Computational Linguistics.
Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3(Feb):1083-1106.
Xiangrong Zeng, Shizhu He, Daojian Zeng, Kang Liu, Shengping Liu, and Jun Zhao. 2019. Learning the extraction order of multiple relational facts in a sentence with reinforcement learning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 367-377.
Xiangrong Zeng, Daojian Zeng, Shizhu He, Kang Liu, and Jun Zhao. 2018. Extracting relational facts by an end-to-end neural model with copy mechanism. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 506-514.
Suncong Zheng, Feng Wang, Hongyun Bao, Yuexing Hao, Peng Zhou, and Bo Xu. 2017. Joint extraction of entities and relations based on a novel tagging scheme. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1227-1236.
Peixiang Zhong, Di Wang, and Chunyan Miao. 2019. Knowledge-enriched transformer for emotion detection in textual conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 165-176.
GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL05), pages 427-434.

# Appendix

# A Implementation Details

We adopt a mini-batch mechanism to train our model with a batch size of 6; the learning rate is set to $1\mathrm{e}^{-5}$; the hyper-parameters are determined on the validation set. We also adopt an early stopping mechanism to prevent the model from over-fitting.
Specifically, we stop the training process when the performance on the validation set does not gain any improvement for at least 7 consecutive epochs. The number of stacked bidirectional Transformer blocks $N$ is 12 and the size of the hidden state $\mathbf{h}_N$ is 768. The pre-trained BERT model we used is [BERT-Base, Cased]$^5$, which contains 110M parameters.

For a fair comparison, the max length of an input sentence to our model is set to 100 words, as previous works (Zeng et al., 2018; Fu et al., 2019) suggest. We did not tune the threshold for the start and end position taggers to predict tag 1, but heuristically set it to 0.5 as the default. The performance might be better after carefully tuning the threshold; however, that is beyond the research scope of this paper.

# B Error Analysis

To explore the factors that affect the extracted relational triples of the CASREL model, we analyze the performance on predicting different elements of the triple $(E1, R, E2)$, where $E1$ represents the subject entity, $E2$ represents the object entity and $R$ represents the relation between them. An element like $(E1, R)$ is regarded as correct only if the subject and the relation in the predicted triple $(E1, R, E2)$ are both correct, regardless of the correctness of the predicted object. Similarly, we say an instance of $E1$ is correct as long as the subject in the extracted triple is correct, and likewise for $E2$ and $R$.

Table 4 shows the results on the different relational triple elements. For NYT, the performance on $E1$ and $E2$ is consistent with that on $(E1, R)$ and $(R, E2)$, demonstrating the effectiveness of our proposed framework in identifying both subject and object entity mentions. We also find that there is only a trivial gap between the F1-scores on $(E1, E2)$ and $(E1, R, E2)$, but an obvious gap between $(E1, R, E2)$ and $(E1, R)/(R, E2)$.
It reveals that most relations for the entity pairs in extracted triples are correctly identified while some extracted entities + +
| Element | NYT Prec. | NYT Rec. | NYT F1 | WebNLG Prec. | WebNLG Rec. | WebNLG F1 |
| --- | --- | --- | --- | --- | --- | --- |
| E1 | 94.6 | 92.4 | 93.5 | 98.7 | 92.8 | 95.7 |
| E2 | 94.1 | 93.0 | 93.5 | 97.7 | 93.0 | 95.3 |
| R | 96.0 | 93.8 | 94.9 | 96.6 | 91.5 | 94.0 |
| (E1, R) | 93.6 | 90.9 | 92.2 | 94.8 | 90.3 | 92.5 |
| (R, E2) | 93.1 | 91.3 | 92.2 | 95.4 | 91.1 | 93.2 |
| (E1, E2) | 89.2 | 90.1 | 89.7 | 95.3 | 91.7 | 93.5 |
| (E1, R, E2) | 89.7 | 89.5 | 89.6 | 93.4 | 90.1 | 91.8 |
Table 4: Results on relational triple elements.

fail to form a valid relational triple. In other words, it implies that identifying relations is somewhat easier than identifying entities for our model.

In contrast to NYT, for WebNLG, the performance gap between $(E1,E2)$ and $(E1,R,E2)$ is comparatively larger than that between $(E1,R,E2)$ and $(E1,R)/(R,E2)$. It shows that misidentifying the relations brings more performance degradation than misidentifying the entities. This observation also indicates that it is more challenging for the proposed CASREL model to identify the relations than the entities in WebNLG, as opposed to what we observed in NYT. We attribute this difference to the different numbers of relations contained in the two datasets (i.e., 24 in NYT and 246 in WebNLG), which makes the identification of relations much harder in WebNLG.

# C Supplemental Experiments

In addition to validating the effectiveness of the proposed CASREL framework in handling the overlapping triple problem, we also conduct a set of supplemental experiments to show its generalization capability in more general cases on four widely used datasets, namely ACE04, NYT10-HRL, NYT11-HRL and Wiki-KBP. Unlike the datasets we adopt in the main experiments, most test sentences in these datasets belong to the Normal class, where no triples overlap with each other. Table 5 shows the results of a comprehensive comparison with recent state-of-the-art methods.

Notably, there are two different evaluation metrics selectively adopted among previous works: (1) The widely used one is Partial Match, as described in Section 4.1, i.e., an extracted relational triple (subject, relation, object) is regarded as correct only if the relation and the heads of both the subject and the object are all correct (Li and Ji, 2014; Miwa and Bansal, 2016; Katiyar and Cardie, 2017; Zheng et al., 2017; Zeng et al., 2018; Takanobu
MethodPartial MatchExact Match
ACE04NYT10-HRLNYT11-HRLWiki-KBP
Prec.Rec.F1Prec.Rec.F1Prec.Rec.F1Prec.Rec.F1
Chan and Roth (2011)42.938.940.8---------
MultiR (Hoffmann et al., 2011)------32.830.631.730.153.038.0
DS-Joint (Li and Ji, 2014)64.738.548.3---------
FCM (Gormley et al., 2015)------43.229.435.0---
SPTree (Miwa and Bansal, 2016)---49.255.752.252.254.153.1---
CoType (Ren et al., 2017)------48.638.643.031.153.738.8
Katiyar and Cardie (2017)50.248.849.3---------
NovelTagging (Zheng et al., 2017)---59.338.146.446.948.947.953.630.338.7
ReHession (Liu et al., 2017)---------36.749.342.1
CopyR (Zeng et al., 2018)---56.945.250.434.753.442.1---
HRL (Takanobu et al., 2019)---71.458.664.453.853.853.8---
PA-LSTM-CRF (Dai et al., 2019)---------51.139.344.4
CASREL57.247.652.077.768.873.050.158.453.949.842.745.9
+ +Table 5: Relational triple extraction results of different methods under Partial Match and Exact Match metrics. + +et al., 2019; Li et al., 2019; Fu et al., 2019); (2) The stricter but less popular one is Exact Match adopted by Dai et al. (2019), where an extracted relational triple (subject, relation, object) is regarded as correct only if the relation and the heads and tails of both subject and object are all correct. + +Since some works like (Zeng et al., 2018) can't handle multi-word entities and can only be evaluated under the Partial Match metric and some works like (Dai et al., 2019) are not open-source, it's hard to use a unified metric to compare our model with existing models. To properly compare our model with various baselines, we adopt the Partial Match metric for ACE04, NYT10-HRL and NYT11-HRL, and adopt the Exact Match metric for Wiki-KBP. + +ACE04 We follow the same 5-fold cross-validation setting as adopted in previous works (Li and Ji, 2014; Miwa and Bansal, 2016; Li et al., 2019) and use the code released by (Miwa and Bansal, 2016) to preprocess the raw XML-style data for fair comparison. Eventually, it has 2,171 valid sentences in total and each sentence contains at least one relational triple. + +NYT10-HRL & NYT11-HRL NYT corpus has two versions: (1) the original version of which both the training set and test set are produced via distant supervision by Riedel et al. (2010) and (2) a smaller version with fewer relation types, where the training set is produced by distant supervision while the test set is manually annotated by Hoffmann et al. (2011). Here we denote the original one and the smaller one as NYT10 and NYT11, respectively. + +These two versions have been selectively adopted and preprocessed in many different ways among various previous works, which may be confusing sometimes and lead to incomparable results if not specifying the version. 
To fairly compare these models, HRL (Takanobu et al., 2019) adopted a unified preprocessing for both NYT10 and NYT11, and provided a comprehensive comparison with previous works using the same datasets. Here we denote the two preprocessed versions as NYT10-HRL and NYT11-HRL.

For fair comparison, we use the preprocessed datasets released by Takanobu et al. (2019), where NYT10-HRL contains 70,339 sentences for training and 4,006 sentences for test, and NYT11-HRL contains 62,648 sentences for training and 369 sentences for test. We also create a validation set for each dataset by randomly sampling 0.5% of the data from the training set, as in Takanobu et al. (2019).

**Wiki-KBP** We use the same version as Dai et al. (2019), where the training set is from Liu et al. (2017) and the test set is from Ren et al. (2017). It has 79,934 sentences for training and 289 sentences for test. We also create a validation set by randomly sampling 10% of the data from the test set, as Dai et al. (2019) suggested.

**Dataset Study** As stated above, these datasets are not suitable for testing the overlapping triple problem. To further support this argument, we analyze the datasets in detail; the statistics are shown in Table 6. We find that the test data in these datasets suffer little from the so-called overlapping triple problem, since the sentences contain few overlapping triples.

| Category | ACE04 (All) | NYT10-HRL Train | NYT10-HRL Test | NYT11-HRL Train | NYT11-HRL Test | Wiki-KBP Train | Wiki-KBP Test |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Normal | 1604 | 59396 | 2963 | 53395 | 368 | 57020 | 265 |
| EPO | 8 | 5376 | 715 | 2100 | 0 | 3217 | 4 |
| SEO | 561 | 8772 | 742 | 7365 | 1 | 21238 | 20 |
| ALL | 2171 | 70339 | 4006 | 62648 | 369 | 79934 | 289 |

Table 6: Statistics of datasets. Note that a sentence can belong to both the EPO class and the SEO class.

Even worse, we also find that the annotated triples are not always the ground truth, which may affect the evaluation of our model. Take a real instance from the test set of NYT11-HRL for example: for the sentence "Mr. Gates, who arrived in Paris on Tuesday, was the first American defense secretary to visit France in nearly a decade . . .", the annotated triple set only contains one relational triple: $\{(Paris, /location/administrative\_division/country, France)\}$. However, the output of our model contains three triples: $\{(Paris, /location/administrative\_division/country, France), (France, /location/country/administrative\_divisions, Paris), (France, /location/location/contains, Paris)\}$. The last two triples should have been annotated in the sentence but were omitted, which will significantly affect the values of both precision and recall when quantifying the performance of our model.

This observation demonstrates that our CASREL model can extract more relational triples than the manually annotated ones in some cases due to the imperfect annotation. For this reason, the performance on a dataset like NYT11-HRL can only partially reflect the potential of the proposed model and probably underestimates its real value. Nonetheless, the CASREL model still achieves a competitive performance, showing the effectiveness of the proposed novel cascade binary tagging framework in relational triple extraction.

We also note that there is a significant gap (from 53.9 to 89.6 in terms of F1-score) between the performance on NYT11-HRL as preprocessed by Takanobu et al. (2019) and on NYT11-CopyR as preprocessed by Zeng et al. (2018). Though both versions are adapted from the original NYT11 dataset (Hoffmann et al., 2011), there are two key differences in the NYT11-CopyR version, as Dai et al. (2019) pointed out.
First, instead of using the manually annotated test set, Zeng et al. (2018) randomly select 5000 sentences from the training data as the test data. The reason is that the manually annotated data contains few overlapping triples and is thus not suitable for testing the overlapping triple problem. Second, Zeng et al. (2018) only annotated the last word of an entity in both the training and test sets because their model cannot handle multi-word entities. Hence, any entity in their dataset is taken as a single-word entity, so the Partial Match and Exact Match evaluation metrics make no difference. Moreover, such a setting makes it much easier for our CASREL model to detect the span of an entity, since the start and end positions are actually the same. We attribute the significant gap to these different settings between the two versions. Noticeably, multi-word entities are common in real-world scenarios, so evaluating on a more proper dataset like NYT10-HRL (which also contains overlapping triples in its test set) may better reveal the model's real value in relational triple extraction than on the ad-hoc one.
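The distinction between the two metrics can be made concrete with a small scoring sketch. The helper below is illustrative only (the function names and the representation of an entity as a (head word, tail word) pair are our own assumptions, not the released CASREL code): under Partial Match only the relation and the entity heads must agree, while Exact Match additionally requires the entity tails.

```python
def match(pred, gold, exact):
    """Compare two (subject, relation, object) triples where each entity
    is a (head_word, tail_word) pair. Partial Match checks the relation
    plus the entity heads; Exact Match also checks the entity tails."""
    (s_p, r_p, o_p), (s_g, r_g, o_g) = pred, gold
    if r_p != r_g:
        return False
    if exact:
        return s_p == s_g and o_p == o_g           # heads and tails
    return s_p[0] == s_g[0] and o_p[0] == o_g[0]   # heads only

def prf1(preds, golds, exact=False):
    """Micro-averaged precision, recall and F1 over triple lists."""
    tp = sum(any(match(p, g, exact) for g in golds) for p in preds)
    prec = tp / len(preds) if preds else 0.0
    rec = tp / len(golds) if golds else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# A multi-word object entity: the prediction truncates "United States".
gold = [(("Obama", "Obama"), "born_in", ("United", "States"))]
pred = [(("Obama", "Obama"), "born_in", ("United", "United"))]

print(prf1(pred, gold, exact=False))  # counted as correct under Partial Match
print(prf1(pred, gold, exact=True))   # rejected under Exact Match
```

This also illustrates why the two metrics coincide on single-word entities: when head and tail are the same word, the extra tail check can never fail independently.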
# A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation

Yongjing Yin $^{1*}$ , Fandong Meng $^{2}$ , Jinsong Su $^{1\dagger}$ , Chulun Zhou $^{1}$ , Zhengyuan Yang $^{3}$ , Jie Zhou $^{2}$ , Jiebo Luo $^{3}$

$^{1}$ Xiamen University, Xiamen, China

$^{2}$ Pattern Recognition Center, WeChat AI, Tencent Inc, Beijing, China

$^{3}$ Department of Computer Science, University of Rochester, Rochester NY 14627, USA

yinyongjing@stu.xmu.edu.cn fandongmeng@tencent.com jssu@xmu.edu.cn

# Abstract

Multi-modal neural machine translation (NMT) aims to
translate source sentences into a target language paired with images. However, dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have the potential to refine multi-modal representation learning. To deal with this issue, in this paper we propose a novel graph-based multi-modal fusion encoder for NMT. Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects). We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations. Finally, these representations provide an attention-based context vector for the decoder. We evaluate our proposed encoder on the Multi30K datasets. Experimental results and in-depth analysis show the superiority of our multi-modal NMT model.

# 1 Introduction

Multi-modal neural machine translation (NMT) (Huang et al., 2016; Calixto et al., 2017) has become an important research direction in machine translation, due to its research significance in multi-modal deep learning and its wide applications, such as translating multimedia news and web product information (Zhou et al., 2018). It significantly extends conventional text-based machine translation by taking images as additional inputs. The assumption behind this is that the translation is expected to be more accurate compared to purely text-based translation, since the visual context helps to resolve ambiguous multi-sense words (Ive et al., 2019).

Apparently, how to fully exploit visual information is one of the core issues in multi-modal NMT, which directly impacts the model performance.
To this end, many efforts have been made, roughly consisting of: (1) encoding each input image into a global feature vector, which can be used to initialize different components of multi-modal NMT models, or as additional source tokens (Huang et al., 2016; Calixto et al., 2017), or to learn the joint multi-modal representation (Zhou et al., 2018; Calixto et al., 2019); (2) extracting object-based image features to initialize the model, or supplement source sequences, or generate attention-based visual context (Huang et al., 2016; Ive et al., 2019); and (3) representing each image as spatial features, which can be exploited as extra context (Calixto et al., 2017; Delbrouck and Dupont, 2017a; Ive et al., 2019), or as a supplement to source semantics (Delbrouck and Dupont, 2017b) via an attention mechanism.

Despite their success, the above studies do not fully exploit the fine-grained semantic correspondences between semantic units within an input sentence-image pair. For example, as shown in Figure 1, the noun phrase "a toy car" semantically corresponds to the blue dashed region. The neglect of this important clue may be due to two key challenges: 1) how to construct a unified representation to bridge the semantic gap between two different modalities, and 2) how to achieve semantic interactions based on that unified representation. However, we believe that such semantic correspondences can be exploited to refine multi-modal representation learning, since they enable the representations within one modality to incorporate cross-modal information as a supplement during multi-modal semantic interactions (Lee et al., 2018; Tan and Bansal, 2019).

![](images/480f47efab49a8ebee2cb80f6755efcdaca94dd0a55b18b131182d86827898a8.jpg)
Figure 1: The multi-modal graph for an input sentence-image pair. The blue and green solid circles denote textual nodes and visual nodes respectively.
An intra-modal edge (dotted line) connects two nodes in the same modality, and an inter-modal edge (solid line) links two nodes in different modalities. Note that we only display edges connecting the textual node "playing" and other textual ones for simplicity.

In this paper, we propose a novel graph-based multi-modal fusion encoder for NMT. We first represent the input sentence and image with a unified multi-modal graph. In this graph, each node indicates a semantic unit (a textual word or a visual object), and two types of edges are introduced to model semantic relationships between semantic units within the same modality (intra-modal edges) and semantic correspondences between semantic units of different modalities (inter-modal edges), respectively. Based on this graph, we then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions among the nodes to conduct graph encoding. Particularly, during this process, we distinguish the parameters of the two modalities, and sequentially conduct intra- and inter-modal fusions to learn multi-modal node representations. Finally, these representations can be exploited by the decoder via an attention mechanism.

Compared with previous models, ours is able to fully exploit semantic interactions among multi-modal semantic units for NMT. Overall, the major contributions of our work are as follows:

- We propose a unified graph to represent the input sentence and image, where various semantic relationships between multi-modal semantic units can be captured for NMT.
- We propose a graph-based multi-modal fusion encoder to conduct graph encoding based on the above graph. To the best of our knowledge, our work is the first attempt to explore a multi-modal graph neural network (GNN) for NMT.
- We conduct extensive experiments on Multi30k datasets of two language pairs.
Experimental results and in-depth analysis indicate that our encoder is effective in fusing multi-modal information for NMT. In particular, our multi-modal NMT model significantly outperforms several competitive baselines.
- We release the code at https://github.com/DeepLearnXMU/GMNMT.

# 2 NMT with Graph-based Multi-modal Fusion Encoder

Our multi-modal NMT model is based on the attentional encoder-decoder framework, with maximizing the log likelihood of the training data as the objective function.

# 2.1 Encoder

Essentially, our encoder can be regarded as a multi-modal extension of GNN. To construct our encoder, we first represent the input sentence-image pair as a unified multi-modal graph. Then, based on this graph, we stack multiple multi-modal fusion layers to learn node representations, which provide the attention-based context vector to the decoder.

# 2.1.1 Multi-modal Graph

In this section, we take the sentence and the image shown in Figure 1 as an example, and describe how to use a multi-modal graph to represent them. Formally, our graph is undirected and can be formalized as $G = (V, E)$, which is constructed as follows:

In the node set $V$, each node represents either a textual word or a visual object. Specifically, we adopt the following strategies to construct these two kinds of nodes: (1) We include all words as separate textual nodes in order to fully exploit textual information.

![](images/81fbf17e6693abaabe8f21cf1f3b8667044baf039c5603475551b9600d520e47.jpg)
Figure 2: The architecture of our NMT model with the graph-based multi-modal fusion encoder. Note that we actually do not apply a Visual FFN to the last layer in the encoder.
For example, in Figure 1, the multi-modal graph contains eight textual nodes in total, each of which corresponds to a word in the input sentence; (2) We employ the Stanford parser to identify all noun phrases in the input sentence, and then apply a visual grounding toolkit (Yang et al., 2019) to detect bounding boxes (visual objects) for each noun phrase. Subsequently, all detected visual objects are included as independent visual nodes. In this way, we can effectively reduce the negative impact of abundant unrelated visual objects. Let us revisit the example in Figure 1, where we can identify two noun phrases, "Two boys" and "a toy car", from the input sentence, and then include three visual objects in the multi-modal graph.

To capture various semantic relationships between multi-modal semantic units for NMT, we consider two kinds of edges in the edge set $E$: (1) any two nodes in the same modality are connected by an intra-modal edge; and (2) each textual node representing a noun phrase and the corresponding visual node are connected by an inter-modal edge. Back in Figure 1, we can observe that all visual nodes are connected to each other, and all textual nodes are fully connected. However, only the node pairs $(v_{o_1}, v_{x_1})$, $(v_{o_1}, v_{x_2})$, $(v_{o_2}, v_{x_1})$, $(v_{o_2}, v_{x_2})$, $(v_{o_3}, v_{x_6})$, $(v_{o_3}, v_{x_7})$ and $(v_{o_3}, v_{x_8})$ are connected by inter-modal edges.

# 2.1.2 Embedding Layer

Before inputting the multi-modal graph into the stacked fusion layers, we introduce an embedding layer to initialize the node states. Specifically, for each textual node $v_{x_i}$, we define its initial state $H_{x_i}^{(0)}$ as the sum of its word embedding and position encoding (Vaswani et al., 2017).
To obtain the initial state $H_{o_j}^{(0)}$ of the visual node $v_{o_j}$, we first extract visual features from the fully-connected layer that follows the ROI pooling layer in Faster-RCNN (Ren et al., 2015), and then employ a multi-layer perceptron with a ReLU activation function to project these features onto the same space as the textual representations.

# 2.1.3 Graph-based Multi-modal Fusion Layers

As shown in the left part of Figure 2, on top of the embedding layer, we stack $L_{e}$ graph-based multi-modal fusion layers to encode the above-mentioned multi-modal graph. At each fusion layer, we sequentially conduct intra- and inter-modal fusions to update all node states. In this way, the final node states simultaneously encode both the context within the same modality and the cross-modal semantic information. Particularly, since visual nodes and textual nodes are two types of semantic units containing the information of different modalities, we apply similar operations but with different parameters to model their respective state update processes.

Specifically, in the $l$-th fusion layer, the updates of both the textual node states $\mathbf{H}_x^{(l)} = \{H_{x_i}^{(l)}\}$ and the visual node states $\mathbf{H}_o^{(l)} = \{H_{o_j}^{(l)}\}$ mainly involve the following steps:

Step 1: Intra-modal fusion. At this step, we employ self-attention to generate the contextual representation of each node by collecting messages from its neighbors in the same modality. Formally, the contextual representations $\mathbf{C}_x^{(l)}$ of all textual nodes are calculated as follows:

$$
\mathbf{C}_x^{(l)} = \mathrm{MultiHead}\left(\mathbf{H}_x^{(l-1)}, \mathbf{H}_x^{(l-1)}, \mathbf{H}_x^{(l-1)}\right), \tag{1}
$$

where $\mathrm{MultiHead}(\mathbf{Q}, \mathbf{K}, \mathbf{V})$ is a multi-head self-attention function taking a query matrix $\mathbf{Q}$, a key matrix $\mathbf{K}$, and a value matrix $\mathbf{V}$ as inputs.
Similarly, we generate the contextual representations $\mathbf{C}_o^{(l)}$ of all visual nodes as

$$
\mathbf{C}_o^{(l)} = \mathrm{MultiHead}\left(\mathbf{H}_o^{(l-1)}, \mathbf{H}_o^{(l-1)}, \mathbf{H}_o^{(l-1)}\right). \tag{2}
$$

In particular, since the initial representations of visual objects are extracted from deep CNNs, we apply a simplified multi-head self-attention to preserve the initial representations of visual objects, where the learned linear projections of values and final outputs are removed.

Step 2: Inter-modal fusion. Inspired by studies in multi-modal feature fusion (Teney et al., 2018; Kim et al., 2018), we apply a cross-modal gating mechanism with an element-wise operation to gather the semantic information from the cross-modal neighbours of each node.

Concretely, we generate the representation $M_{x_i}^{(l)}$ of a textual node $v_{x_i}$ in the following way:

$$
M_{x_i}^{(l)} = \sum_{j \in A(v_{x_i})} \alpha_{i,j} \odot C_{o_j}^{(l)}, \tag{3}
$$

$$
\alpha_{i,j} = \mathrm{Sigmoid}\left(\mathbf{W}_1^{(l)} C_{x_i}^{(l)} + \mathbf{W}_2^{(l)} C_{o_j}^{(l)}\right), \tag{4}
$$

where $A(v_{x_i})$ is the set of neighboring visual nodes of $v_{x_i}$, and $\mathbf{W}_1^{(l)}$ and $\mathbf{W}_2^{(l)}$ are parameter matrices. Likewise, we produce the representation $M_{o_j}^{(l)}$ of a visual node $v_{o_j}$ as follows:

$$
M_{o_j}^{(l)} = \sum_{i \in A(v_{o_j})} \beta_{j,i} \odot C_{x_i}^{(l)}, \tag{5}
$$

$$
\beta_{j,i} = \mathrm{Sigmoid}\left(\mathbf{W}_3^{(l)} C_{o_j}^{(l)} + \mathbf{W}_4^{(l)} C_{x_i}^{(l)}\right), \tag{6}
$$

where $A(v_{o_j})$ is the set of adjacent textual nodes of $v_{o_j}$, and $\mathbf{W}_3^{(l)}$ and $\mathbf{W}_4^{(l)}$ are also parameter matrices.
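As a concrete numerical illustration of Equations (3)-(4), the NumPy sketch below applies the cross-modal gate with randomly initialized weights standing in for the learned $\mathbf{W}_1^{(l)}$ and $\mathbf{W}_2^{(l)}$; the toy dimensions and the assumption that every textual node is adjacent to both visual nodes are ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_text, n_vis = 4, 3, 2                 # toy hidden size and node counts
C_x = rng.standard_normal((n_text, d))     # contextual textual states, Eq. (1)
C_o = rng.standard_normal((n_vis, d))      # contextual visual states, Eq. (2)
W1 = rng.standard_normal((d, d))           # stands in for learned W1
W2 = rng.standard_normal((d, d))           # stands in for learned W2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Eq. (3)-(4): each textual node gates and sums its visual neighbours.
M_x = np.zeros_like(C_x)
for i in range(n_text):
    for j in range(n_vis):
        alpha = sigmoid(W1 @ C_x[i] + W2 @ C_o[j])  # element-wise gate in (0, 1)
        M_x[i] += alpha * C_o[j]                     # gated message from the visual node

print(M_x.shape)  # one fused representation per textual node: (3, 4)
```

Because the gate is a per-dimension sigmoid rather than a normalized attention weight, each textual node can absorb a different amount of visual information in each feature dimension.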
The advantage is that the above fusion approach can better determine the degree of inter-modal fusion according to the contextual representations of each modality. Finally, we adopt position-wise feed-forward networks $\mathrm{FFN}(\cdot)$ to generate the textual node states $\mathbf{H}_x^{(l)}$ and the visual node states $\mathbf{H}_o^{(l)}$:

$$
\mathbf{H}_x^{(l)} = \mathrm{FFN}\left(\mathbf{M}_x^{(l)}\right), \tag{7}
$$

$$
\mathbf{H}_o^{(l)} = \mathrm{FFN}\left(\mathbf{M}_o^{(l)}\right), \tag{8}
$$

where $\mathbf{M}_x^{(l)} = \{M_{x_i}^{(l)}\}$ and $\mathbf{M}_o^{(l)} = \{M_{o_j}^{(l)}\}$ denote the above updated representations of all textual nodes and visual nodes, respectively.

# 2.2 Decoder

Our decoder is similar to the conventional Transformer decoder. Since visual information has been incorporated into all textual nodes via multiple graph-based multi-modal fusion layers, we allow the decoder to dynamically exploit the multi-modal context by attending only to the textual node states.

As shown in the right part of Figure 2, we follow Vaswani et al. (2017) and stack $L_{d}$ identical layers to generate target-side hidden states, where each layer $l$ is composed of three sub-layers. Concretely, the first two sub-layers are a masked self-attention and an encoder-decoder attention that integrate target- and source-side contexts, respectively:

$$
\mathbf{E}^{(l)} = \mathrm{MultiHead}\left(\mathbf{S}^{(l-1)}, \mathbf{S}^{(l-1)}, \mathbf{S}^{(l-1)}\right), \tag{9}
$$

$$
\mathbf{T}^{(l)} = \mathrm{MultiHead}\left(\mathbf{E}^{(l)}, \mathbf{H}_x^{(L_e)}, \mathbf{H}_x^{(L_e)}\right), \tag{10}
$$

where $\mathbf{S}^{(l-1)}$ denotes the target-side hidden states of the $(l-1)$-th layer. In particular, $\mathbf{S}^{(0)}$ are the embeddings of input target words.
Then, a position-wise fully-connected feed-forward neural network is used to produce $\mathbf{S}^{(l)}$ as follows:

$$
\mathbf{S}^{(l)} = \mathrm{FFN}\left(\mathbf{T}^{(l)}\right). \tag{11}
$$

Finally, the probability distribution over target sentences is defined by a softmax layer, which takes the hidden states in the top layer as input:

$$
P(Y \mid X, I) = \prod_{t} \mathrm{Softmax}\left(\mathbf{W}\mathbf{S}_t^{(L_d)} + b\right), \tag{12}
$$

where $X$ is the input sentence, $I$ is the input image, $Y$ is the target sentence, and $\mathbf{W}$ and $b$ are the parameters of the softmax layer.

# 3 Experiment

We carry out experiments on multi-modal English $\Rightarrow$ German (En $\Rightarrow$ De) and English $\Rightarrow$ French (En $\Rightarrow$ Fr) translation tasks.

# 3.1 Setup

**Datasets** We use the Multi30K dataset (Elliott et al., 2016), where each image is paired with one English description and human translations into German and French. The training, validation and test sets contain 29,000, 1,014 and 1,000 instances, respectively. In addition, we evaluate various models on the WMT17 test set and the ambiguous MSCOCO test set, which contain 1,000 and 461 instances, respectively. Here, we directly use the preprocessed sentences and segment words into subwords via byte pair encoding (Sennrich et al., 2016) with 10,000 merge operations.

**Visual Features** We first apply the Stanford parser to identify noun phrases in each source sentence, and then employ the visual grounding toolkit released by Yang et al. (2019) to detect the visual objects associated with the identified noun phrases. For each phrase, we keep the visual object with the highest prediction probability, so as to reduce the negative effects of abundant visual objects. In each sentence, the average numbers of objects and words are around 3.5 and 15.0 respectively.
Finally, we compute 2,048-dimensional features for these objects with the pre-trained ResNet-101 Faster-RCNN (Ren et al., 2015).

**Settings** We use Transformer (Vaswani et al., 2017) as our baseline. Since the size of the training corpus is small and the trained model tends to overfit, we first perform a small grid search to obtain a set of hyper-parameters on the En $\Rightarrow$ De validation set. Specifically, the word embedding dimension and hidden size are 128 and 256, respectively. The decoder has $L_{d} = 4$ layers and the number of attention heads is 4. The dropout is set to 0.5. Each batch consists of approximately 2,000 source and target tokens. We apply the Adam optimizer with a scheduled learning rate to optimize the various models, with all other settings the same as in Vaswani et al. (2017). Finally, we use the metrics BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) to evaluate the quality of translations. Particularly, we run all models three times for each experiment and report the average results.

![](images/2e7822ce7379e602ce49a9cfb05fd6af453efb2bdb4e4e05cfc88abb86f2f54b.jpg)
Figure 3: Results on the En $\Rightarrow$ De validation set regarding the number $L_{e}$ of graph-based multi-modal fusion layers.

**Baseline Models** In addition to the text-based Transformer (Vaswani et al., 2017), we adapt several effective approaches to Transformer using our visual features, and compare our model with them:

- ObjectAsToken(TF) (Huang et al., 2016). It is a variant of the Transformer, where all visual objects are regarded as extra source tokens and placed at the front of the input sentence.
- Enc-att(TF) (Delbrouck and Dupont, 2017b). An encoder-based image attention mechanism is incorporated into Transformer, which augments each source annotation with an attention-based visual feature vector.
- Doubly-att(TF) (Helcl et al., 2018). It is a doubly attentive Transformer.
In each decoder layer, a cross-modal multi-head attention sublayer is inserted before the fully connected feed-forward layer to generate the visual context vector from visual features. + +We also display the performance of several dominant multi-modal NMT models such as Doubly-att(RNN) (Calixto et al., 2017), Soft-att(RNN) (Delbrouck and Dupont, 2017a), Stochastic-att(RNN) (Delbrouck and Dupont, 2017a), Fusion-conv(RNN) (Caglayan et al., 2017), Trg-mul(RNN) (Caglayan et al., 2017), VMMT(RNN) (Calixto et al., 2019) and Deliberation Network(TF) (Ive et al., 2019) on the same datasets. + +# 3.2 Effect of Graph-based Multi-modal Fusion Layer Number $L_{e}$ + +The number $L_{e}$ of multi-modal fusion layer is an important hyper-parameter that directly determines + +
ModelEn→De
Test2016Test2017MSCOCO
BLEUMETEORBLEUMETEORBLEUMETEOR
Existing Multi-modal NMT Systems
Doubly-att(RNN) (Calixto et al., 2017)36.555.0----
Soft-att(RNN) (Delbrouck and Dupont, 2017a)37.655.3----
Stochastic-att(RNN) (Delbrouck and Dupont, 2017a)38.255.4----
Fusion-conv(RNN) (Caglayan et al., 2017)37.057.029.851.225.146.0
Trg-mul(RNN)(Caglayan et al., 2017)37.857.730.752.226.447.4
VMMT(RNN) (Calixto et al., 2019)37.756.030.149.925.544.8
Deliberation Network(TF) (Ive et al., 2019)38.055.6----
Our Multi-modal NMT Systems
Transformer (Vaswani et al., 2017)38.456.530.650.427.346.2
ObjectAsToken(TF) (Huang et al., 2016)39.057.231.751.328.447.0
Enc-att(TF) (Delbrouck and Dupont, 2017b)38.756.631.350.628.046.6
Doubly-att(TF) (Helcl et al., 2018)38.856.831.450.527.446.5
Our model39.857.632.251.928.747.6

Table 1: Experimental results on the En⇒De translation task.

the degree of fine-grained semantic fusion in our encoder. Thus, we first inspect its impact on the En⇒De validation set.

Figure 3 provides the experimental results for different values of $L_{e}$; our model achieves the best performance when $L_{e}$ is 3. Hence, we use $L_{e} = 3$ in all subsequent experiments.

# 3.3 Results on the En⇒De Translation Task

Table 1 shows the main results on the En⇒De translation task. Our model outperforms most of the existing models and all baselines, and is comparable to Fusion-conv(RNN) and Trg-mul(RNN) on METEOR. Those two results come from the state-of-the-art system on the WMT2017 test set, which was selected based on METEOR. Comparing against the baseline models, we draw the following conclusions:

First, our model outperforms ObjectAsToken(TF), which concatenates regional visual features with text to form attendable sequences and employs a self-attention mechanism to conduct inter-modal fusion. The underlying reasons consist of two aspects: explicitly modeling semantic correspondences between semantic units of different modalities, and distinguishing model parameters for different modalities.

Second, our model also significantly outperforms Enc-att(TF). Note that Enc-att(TF) can be considered a single-layer semantic fusion encoder. In addition to the advantage of explicitly modeling semantic correspondences, we conjecture that multi-layer multi-modal semantic interactions are also beneficial to NMT.

Third, compared with Doubly-att(TF), which simply uses an attention mechanism to exploit visual information, our model achieves a significant improvement because of the sufficient multi-modal fusion in our encoder.

![](images/67f7c790ad8c09d462b6feac38ce476cc00e9bcdfbe354d332abaacae6d705b6.jpg)
Figure 4: BLEU scores on different translation groups divided according to source sentence lengths.

![](images/c575921384d0db7a4628fa00cd5f2dbceb5e415ead4c6ee822c716c294abe8f4.jpg)
Figure 5: BLEU scores on different translation groups divided according to source phrase numbers.

Besides, we divide our test sets into different groups based on the lengths of source sentences and the numbers of noun phrases, and then compare the performance of different models in each group. Figures 4 and 5 report the BLEU scores on these groups. Overall, our model still consistently achieves the best performance in all groups, which again confirms the effectiveness and generality of our proposed model. Note that in the sentences with more phrases, which are usually long sentences, the improvements of our model over the baselines are more significant. We speculate that long sentences often contain more ambiguous words; thus, compared with short sentences, long sentences may require visual information to be better exploited as supplementary information, which can be achieved by the multi-modal semantic interaction of our model.

We also show the training and decoding speed of our model and the baselines in Table 4. During training, our model can process approximately 1.1K tokens per second, which is comparable to the other multi-modal baselines. During decoding, our model translates about 16.7 sentences per second, a slight drop compared to Transformer. Moreover, our model introduces only a small number of extra parameters and achieves better performance.

| Model | Test2016 BLEU | Test2016 METEOR | Test2017 BLEU | Test2017 METEOR | MSCOCO BLEU | MSCOCO METEOR |
| --- | --- | --- | --- | --- | --- | --- |
| Our model | 39.8 | 57.6 | 32.2 | 51.9 | 28.7 | 47.6 |
| w/o inter-modal fusion | 38.7 | 56.7 | 30.7 | 50.6 | 27.0 | 46.7 |
| visual grounding ⇒ fully-connected | 36.4 | 53.4 | 28.3 | 47.0 | 24.4 | 42.9 |
| different parameters ⇒ unified parameters | 39.2 | 57.3 | 31.9 | 51.4 | 27.7 | 47.4 |
| w/ attending to visual nodes | 39.6 | 57.3 | 32.0 | 51.3 | 27.9 | 46.8 |
| attending to textual nodes ⇒ attending to visual nodes | 30.9 | 48.6 | 22.3 | 41.5 | 20.4 | 38.7 |

Table 2: Ablation study of our model on the En⇒De translation task.

| Model | Test2016 BLEU | Test2016 METEOR | Test2017 BLEU | Test2017 METEOR |
| --- | --- | --- | --- | --- |
| Existing Multi-modal NMT Systems | | | | |
| Fusion-conv(RNN) (Caglayan et al., 2017) | 53.5 | 70.4 | 51.6 | 68.6 |
| Trg-mul(RNN) (Caglayan et al., 2017) | 54.7 | 71.3 | 52.7 | 69.5 |
| Deliberation Network(TF) (Ive et al., 2019) | 59.8 | 74.4 | - | - |
| Our Multi-modal NMT Systems | | | | |
| Transformer (Vaswani et al., 2017) | 59.5 | 73.7 | 52.0 | 68.0 |
| ObjectAsToken(TF) (Huang et al., 2016) | 60.0 | 74.3 | 52.9 | 68.6 |
| Enc-att(TF) (Delbrouck and Dupont, 2017b) | 60.0 | 74.3 | 52.8 | 68.3 |
| Doubly-att(TF) (Helcl et al., 2018) | 59.9 | 74.1 | 52.4 | 68.1 |
| Our model | 60.9 | 74.9 | 53.9 | 69.3 |

Table 3: Experimental results on the En⇒Fr translation task.

| Model | Training (tokens/s) | Decoding (sentences/s) | Parameters |
| --- | --- | --- | --- |
| Transformer | 2.6K | 17.8 | 3.4M |
| ObjectAsToken(TF) | 1.6K | 17.2 | 3.7M |
| Enc-att(TF) | 1.3K | 16.9 | 3.6M |
| Doubly-att(TF) | 1.0K | 12.9 | 3.8M |
| Our model | 1.1K | 16.7 | 4.0M |

Table 4: Training speed (tokens/second), decoding speed (sentences/second), and the number of parameters of different models on the En⇒De translation task.

# 3.4 Ablation Study

To investigate the effectiveness of different components, we further conduct experiments comparing our model with the following variants in Table 2:

(1) w/o inter-modal fusion. In this variant, we apply two separate Transformer encoders to learn the semantic representations of words and visual objects, respectively, and then use the doubly-attentive decoder (Helcl et al., 2018) to incorporate textual and visual contexts into the decoder. The result in line 3 indicates that removing the inter-modal fusion leads to a significant performance drop. It suggests that semantic interactions among multi-modal semantic units are indeed useful for multi-modal representation learning.

(2) visual grounding ⇒ fully-connected. We make the words and visual objects fully-connected to establish the inter-modal correspondences.
The result in line 4 shows that this change causes a significant performance decline. The underlying reason is that fully-connected semantic correspondences introduce much noise into our model.

(3) different parameters ⇒ unified parameters. In this variant, we assign unified parameters to update node states in both modalities. The performance drop reported in line 5 likewise demonstrates the validity of using different parameters for different modalities.

(4) w/ attending to visual nodes. Different from our model, which attends to only textual nodes, we allow the decoder of this variant to consider both types of nodes using the doubly-attentive decoder. From line 6, we observe that considering all nodes does not bring further improvement. This result confirms our previous assumption that visual information has already been fully incorporated into the textual nodes in our encoder.

(5) attending to textual nodes ⇒ attending to visual nodes. When only considering visual nodes, however, the model performance drops drastically (line 7). This is because the number of visual nodes is far smaller than that of textual nodes, so they cannot provide sufficient context for translation.

# 3.5 Case Study

Figure 6 displays the 1-best translations of a sampled test sentence generated by the different models. The phrase "a skateboard ramp" is not translated correctly by any of the baselines, while our model translates it correctly. This suggests that our encoder is able to learn more accurate representations.

# 3.6 Results on the En⇒Fr Translation Task

We also conduct experiments on the En⇒Fr dataset. As Table 3 shows, our model again achieves better performance than all baselines, demonstrating that our model is effective and generalizes across language pairs in multi-modal NMT.

# 4 Related Work

Multi-modal NMT. Huang et al. (2016) first incorporate global or regional visual features into attention-based NMT. Calixto and Liu (2017) also study the effects of incorporating global visual features into different NMT components. Elliott and Kádár (2017) share an encoder between a translation model and an image prediction model to learn visually grounded representations. Besides, the most common practice is to use attention mechanisms to extract visual contexts for multi-modal NMT (Caglayan et al., 2016; Calixto et al., 2017; Delbrouck and Dupont, 2017a,b; Barrault et al., 2018). Recently, Ive et al. (2019) propose a translate-and-refine approach, and Calixto et al. (2019) employ a latent variable model to capture multi-modal interactions for multi-modal NMT.

Apart from model design, Elliott (2018) reveals that visual information seems to be ignored by multi-modal NMT models. Caglayan et al. (2019) conduct a systematic analysis and show that visual information can be better leveraged under limited textual context.

Different from the above-mentioned studies, we first represent the input sentence-image pair as a unified graph, in which various semantic relationships between multi-modal semantic units can be effectively captured for multi-modal NMT. Benefiting from the multi-modal graph, we further introduce an extended GNN to conduct graph encoding via multi-modal semantic interactions.

Note that if we directly adapt the approach proposed by Huang et al. (2016) to Transformer, the resulting model (ObjectAsToken(TF)) also involves multi-modal fusion. However, ours differs from it in the following aspects: (1) we first learn the contextual representation of each node within the same modality, so that it can better determine the degree of inter-modal fusion according to its own context; (2) we assign different encoding parameters to different modalities, which has been shown effective in our experiments.

Additionally, the recent study LXMERT (Tan and Bansal, 2019) also models relationships between vision and language, but differs from ours in the following aspects: (1) Tan and Bansal (2019) first apply two Transformer encoders to the two modalities, and then stack two cross-modality encoders to conduct multi-modal fusion, whereas we sequentially conduct self-attention and cross-modal gating at each layer; (2) Tan and Bansal (2019) leverage an attention mechanism to implicitly establish cross-modal relationships via large-scale pretraining, while we utilize visual grounding to capture explicit cross-modal correspondences; (3) we focus on multi-modal NMT rather than the vision-and-language reasoning tasks of Tan and Bansal (2019).

Graph Neural Networks. Recently, GNNs (Gori et al., 2005), including the gated graph neural network (Li et al., 2016), the graph convolutional network (Duvenaud et al., 2015; Kipf and Welling, 2017), and the graph attention network (Velickovic et al., 2018), have been shown effective in many tasks such as VQA (Teney et al., 2017; Norcliffe-Brown et al., 2018; Li et al., 2019), text generation (Gildea et al., 2018; Beck et al., 2018; Song et al., 2018b, 2019), and text representation (Zhang et al., 2018; Yin et al., 2019; Song et al., 2018a; Xue et al., 2019).

![](images/9c63dc96f768dee23135362213b1ea3c2114958748f54e1782292a0baa34f25a.jpg)
Figure 6: A translation example of different multi-modal NMT models. The baseline models do not accurately understand the phrase "a skateboard ramp" (orange), while our model correctly translates it.

Source: A boy riding a skateboard on a skateboard ramp.

Reference: Ein junge fahr skateboard auf einer skateboardrampe.

Transformer: Ein junge fahrt auf einem skateboard auf einer rampe.

Doubly-att(TF): Ein junge fahrmit einem skateboard auf einer rampe.

Enc-att(TF): Ein junge führ ein skateboard auf einer rampe.

ObjectAsToken(TF): Ein junge fahrct auf einem skateboard auf einer rampe.

Our model: Ein junge führ auf einem skateboard auf einer skateboardrampe.

In this work, we mainly focus on how to extend GNNs to fuse multi-modal information in NMT. Close to our work, Teney et al. (2017) introduce GNNs for VQA. The main difference between their work and ours is that they build an individual graph for each modality, while we use a unified multi-modal graph.

# 5 Conclusion

In this paper, we have proposed a novel graph-based multi-modal fusion encoder, which exploits various semantic relationships between multi-modal semantic units for NMT. Experimental results and analysis on the Multi30K dataset demonstrate the effectiveness of our model.

In the future, we plan to incorporate attributes of visual objects and dependency trees to enrich the multi-modal graphs. Besides, how to introduce scene graphs into multi-modal NMT is a problem worth exploring. Finally, we will apply our model to other multi-modal tasks such as multi-modal sentiment analysis.

# Acknowledgments

This work was supported by the Beijing Advanced Innovation Center for Language Resources (No. TYR17002), the National Natural Science Foundation of China (No. 61672440), and the Scientific Research Project of the National Language Committee of China (No. YB135-49).

# References

Loic Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the third shared task on multimodal machine translation. In Proceedings of WMT 2018, pages 304-323.

Daniel Beck, Gholamreza Haffari, and Trevor Cohn. 2018. Graph-to-sequence learning using gated graph neural networks. In Proceedings of ACL 2018, pages 273-283.

Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes García-Martínez, Fethi Bougares, Loic Barrault, Marc Masana, Luis Herranz, and Joost van de Weijer. 2017. LIUM-CVC submissions for WMT17 multimodal translation task. In Proceedings of WMT 2017, pages 432-439.
Ozan Caglayan, Loic Barrault, and Fethi Bougares. 2016. Multimodal attention for neural machine translation. CoRR, abs/1609.03976.

Ozan Caglayan, Pranava Madhyastha, Lucia Specia, and Loic Barrault. 2019. Probing the need for visual context in multimodal machine translation. In Proceedings of NAACL-HLT 2019, pages 4159-4170.

Iacer Calixto and Qun Liu. 2017. Incorporating global visual features into attention-based neural machine translation. In Proceedings of ACL 2017, pages 992-1003.

Iacer Calixto, Qun Liu, and Nick Campbell. 2017. Doubly-attentive decoder for multi-modal neural machine translation. In Proceedings of ACL 2017, pages 1913-1924.

Iacer Calixto, Miguel Rios, and Wilker Aziz. 2019. Latent variable model for multi-modal translation. In Proceedings of ACL 2019, pages 6392-6405.

Jean-Benoit Delbrouck and Stéphane Dupont. 2017a. An empirical study on the effectiveness of images in multimodal neural machine translation. In Proceedings of EMNLP 2017, pages 910-919.

Jean-Benoit Delbrouck and Stéphane Dupont. 2017b. Modulating and attending the source image during encoding improves multimodal translation. CoRR, abs/1712.03449.

Michael J. Denkowski and Alon Lavie. 2014. Meteor universal: Language specific translation evaluation for any target language. In Proceedings of WMT 2014, pages 376-380.

David K. Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. 2015. Convolutional networks on graphs for learning molecular fingerprints. In Proceedings of NeurIPS 2015, pages 2224-2232.

Desmond Elliott. 2018. Adversarial evaluation of multimodal machine translation. In Proceedings of EMNLP 2018, pages 2974-2978.

Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30k: Multilingual english-german image descriptions. In Proceedings of ACL 2016, pages 70-74.

Desmond Elliott and Ákos Kádár. 2017. Imagination improves multimodal translation.
In Proceedings of IJCNLP 2017, pages 130-141.

Daniel Gildea, Zhiguo Wang, Yue Zhang, and Linfeng Song. 2018. A graph-to-sequence model for amr-to-text generation. In Proceedings of ACL 2018, pages 1616-1626.

Jindrich Helcl, Jindrich Libovický, and Dusan Varis. 2018. CUNI system for the WMT18 multimodal translation task. In Proceedings of WMT 2018, pages 616-623.

Po-Yao Huang, Frederick Liu, Sz-Rung Shiang, Jean Oh, and Chris Dyer. 2016. Attention-based multimodal neural machine translation. In Proceedings of WMT 2016, pages 639-645.

Julia Ive, Pranava Madhyastha, and Lucia Specia. 2019. Distilling translations with visual awareness. In Proceedings of ACL 2019, pages 6525-6538.

Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In Proceedings of NeurIPS 2018, pages 1571-1581.

Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In Proceedings of ICLR 2017.

Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. In Proceedings of ECCV 2018, pages 212-228.

Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. 2019. Relation-aware graph attention network for visual question answering. In Proceedings of ICCV 2019, pages 10312-10321.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. 2016. Gated graph sequence neural networks. In Proceedings of ICLR 2016.

Marco Gori, Gabriele Monfardini, and Franco Scarselli. 2005. A new model for learning in graph domains. In Proceedings of IJCNN 2005, pages 729-734.

Will Norcliffe-Brown, Stathis Vafeias, and Sarah Parisot. 2018. Learning conditioned graph structures for interpretable visual question answering. In Proceedings of NeurIPS 2018, pages 8344-8353.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL 2002, pages 311-318.

Shaoqing Ren, Kaiming He, Ross B.
Girshick, and Jian Sun. 2015. Faster R-CNN: towards real-time object detection with region proposal networks. In Proceedings of NeurIPS 2015, pages 91-99.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of ACL 2016, pages 1715-1725.

Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, and Jinsong Su. 2019. Semantic neural machine translation using amr. Transactions of the Association for Computational Linguistics, 7:19-31.

Linfeng Song, Zhiguo Wang, Mo Yu, Yue Zhang, Radu Florian, and Daniel Gildea. 2018a. Exploring graph-structured passage representation for multi-hop reading comprehension with graph neural networks. arXiv preprint arXiv:1809.02040.

Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018b. A graph-to-sequence model for amr-to-text generation. In Proceedings of ACL 2018, pages 1616-1626.

Hao Tan and Mohit Bansal. 2019. LXMERT: learning cross-modality encoder representations from transformers. In Proceedings of EMNLP 2019, pages 5099-5110.

Damien Teney, Peter Anderson, Xiaodong He, and Anton van den Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In Proceedings of CVPR 2018, pages 4223-4232.

Damien Teney, Lingqiao Liu, and Anton van den Hengel. 2017. Graph-structured representations for visual question answering. In Proceedings of CVPR 2017, pages 3233-3241.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NeurIPS 2017, pages 4831-4836.

Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In Proceedings of ICLR 2018.

Mengge Xue, Weiming Cai, Jinsong Su, Linfeng Song, Yubin Ge, Yubao Liu, and Bin Wang. 2019. Neural collective entity linking based on recurrent random walk network learning.
In Proceedings of IJCAI 2019, pages 5327-5333.

Zhengyuan Yang, Boqing Gong, Liwei Wang, Wenbing Huang, Dong Yu, and Jiebo Luo. 2019. A fast and accurate one-stage approach to visual grounding. In Proceedings of ICCV 2019, pages 4682-4692.

Yongjing Yin, Linfeng Song, Jinsong Su, Jiali Zeng, Chulun Zhou, and Jiebo Luo. 2019. Graph-based neural sentence ordering. In Proceedings of IJCAI 2019, pages 5387-5393.

Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentence-state LSTM for text representation. In Proceedings of ACL 2018, pages 317-327.

Mingyang Zhou, Runxiang Cheng, Yong Jae Lee, and Zhou Yu. 2018. A visual attention grounding neural model for multimodal machine translation. In Proceedings of EMNLP 2018, pages 3643-3653.
# A Prioritization Model for Suicidality Risk Assessment

Han-Chin Shing
Computer Science, University of Maryland
College Park, MD
shing@cs.umd.edu

Philip Resnik
Linguistics/UMIACS, University of Maryland
College Park, MD
resnik@umd.edu

Douglas W. Oard
iSchool/UMIACS, University of Maryland
College Park, MD
oard@umd.edu

# Abstract

We reframe suicide risk assessment from social media as a ranking problem whose goal is maximizing detection of severely at-risk individuals given the time available. Building on measures developed for resource-bounded document retrieval, we introduce a well-founded evaluation paradigm, and demonstrate using an expert-annotated test collection that meaningful improvements over plausible cascade model baselines can be achieved using an approach that jointly ranks individuals and their social media posts.

# 1 Introduction

Mental illness is one of the most significant problems in healthcare: in economic terms alone, by 2030 mental illness worldwide is projected to cost more than cardiovascular disease, and more than cancer, chronic respiratory diseases, and diabetes combined (Bloom et al., 2012). Suicide takes a terrible toll: in 2016 it became the second leading cause of death in the U.S. among those aged 10-34, and the fourth among those aged 35-54 (Hedegaard et al., 2018). Prevalence statistics suggest that roughly 141 of the 3,283 people who attended ACL 2019 have since had serious thoughts of suicide, 42 have made a plan, and 19 have actually made attempts.1

The good news is that NLP and machine learning are showing strong promise for impact in mental health, just as they are having large impacts everywhere else.
Traditional methods for predicting suicidal thoughts and behaviors have failed to make progress for fifty years (Franklin et al., 2017), but with the advent of machine learning approaches (Linthicum et al., 2019), including text analysis methods for psychology (Chung and Pennebaker, 2007) and the rise of research on mental health using social media (Choudhury, 2013), algorithmic classification has reached the point where it can now dramatically outstrip the performance of prior, more traditional prediction methods (Linthicum et al., 2019; Coppersmith et al., 2018). Further progress is on the way as the community shows increasing awareness and enthusiasm in this problem space (e.g., Milne et al., 2016; Losada et al., 2020; Zirikly et al., 2019).

The bad news is that moving these methods from the lab into practice will create a major new challenge: identifying larger numbers of people who may require clinical assessment and intervention will increase stress on a severely resource-limited mental health ecosystem that cannot easily scale up. This motivates a reformulation of the technological problem from classification to prioritization of individuals who might be at risk, for clinicians or other suitably trained staff as downstream users.

Perhaps the most basic way to do prioritization is with a single priority queue that the user scans from top to bottom. This "ranked retrieval" paradigm is common for Information Retrieval (IR) tasks such as document retrieval. The same approach has been applied to ranking people based on their expertise (Balog et al., 2012), or more generally to ranking entities based on their characteristics (Balog, 2018). Rather than evaluating categorical accuracy, ranked retrieval systems are typically evaluated by some measure of search quality that rewards placing desired items closer to the top (Voorhees, 2001).
Most such measures use only item position, but we find it important to also model the time it takes to recognize desired items, since in our setting the time of qualified users is the most limited resource.

In this paper, we do so by building on Time-Biased Gain (TBG, Smucker and Clarke, 2012), an IR evaluation measure that models the expected number of relevant items a user can find in a ranked list given a time budget. We observe that in many risk assessment settings (e.g., Yates et al. (2017); Coppersmith et al. (2018); Zirikly et al. (2019)), the available information comprises a (possibly large and/or longitudinal) set of documents, e.g. social media posts, associated with each individual, of which possibly only a small number contain a relevant signal.$^{3}$ This gives rise to a formulation of our scenario as a nested, or hierarchical, ranking problem, in which individuals are ordered by priority, but each individual's documents must also be ranked (Figure 1). Accordingly, we introduce hierarchical Time-Biased Gain (hTBG), a variant of TBG in which individuals are the top-level ranked items, and expected reading time is modeled for the ranked list of documents that provides evidence for each individual's assessment.

![](images/5421329620a673129977358e84692948716c3511bd0facff0691c00724d22d68.jpg)
Figure 1: Illustration of an assessment framework in which individuals are ranked by predicted suicide risk based on social media posts, posts are ranked by expected usefulness for downstream review by a clinician, and word-attention highlighting helps foreground important information for risk assessment. Real Reddit posts, obfuscated and altered for privacy.
In addition, we introduce a prioritization model that uses a three-level hierarchical attention network to jointly optimize the nested ranking task; this model also addresses the fact that in our scenario, as in many other healthcare-related scenarios, relevance obtains at the level of individuals rather than individual documents (cf. Shing et al., 2019). Using a test collection of Reddit-posting individuals who have been assessed for suicide risk by clinicians based on their posts (Shing et al., 2018), we use hTBG to model prioritization of individuals and demonstrate that our joint model substantially outperforms cascade model baselines in which the nested rankings are produced independently.

# 2 Related Work

NLP for Risk Assessment. Calvo et al. (2017) survey NLP for mental health applications using non-clinical texts such as social media. Several recent studies and shared tasks focus on risk assessment of individuals in social media using a multi-level scale (Milne et al., 2016; Yates et al., 2017; Losada et al., 2020). Shing et al. (2018) introduce the dataset we use, and Zirikly et al. (2019) describe a shared task in which 11 teams tackled the individual-level classification that feeds into our prioritization model (their Task B). Our work contributes by modeling the downstream users' prioritization task, taking a key step closer to the real-world problem.

Hierarchical Attention. Attention, especially in the context of NLP, has two main advantages: it allows the network to attend to likely-relevant parts of the input (either words or sentences), often leading to improved performance, and it provides insight into which parts of the input are being used to make the prediction. These characteristics have made attention mechanisms a popular choice for deep learning that requires human investigation, such as automatic clinical coding (Baumel et al., 2018; Mullenbach et al., 2018; Shing et al., 2019).
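As a concrete illustration of this kind of attention pooling, here is a minimal sketch that stacks softmax attention over words, sentences, and documents to build a single representation of an individual. Random vectors stand in for learned encoders and context vectors, so every name here is a hypothetical stand-in for illustration, not the paper's actual model:

```python
import numpy as np

def attention_pool(units, query):
    """Softmax-attention pooling: score each row of `units` (n x d) against
    `query` (d,) and return the weighted average plus the attention weights."""
    scores = units @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ units, weights

rng = np.random.default_rng(0)
d = 16
q_word, q_sent, q_doc = rng.normal(size=(3, d))  # stand-ins for learned context vectors

# One individual: 4 documents, each with 3 sentences of 5 word embeddings.
docs = [[rng.normal(size=(5, d)) for _ in range(3)] for _ in range(4)]

doc_vecs = []
for doc in docs:
    sent_vecs = np.stack([attention_pool(words, q_word)[0] for words in doc])
    doc_vecs.append(attention_pool(sent_vecs, q_sent)[0])

person_vec, doc_weights = attention_pool(np.stack(doc_vecs), q_doc)
# `doc_weights` induces a ranking over the individual's documents as evidence,
# even though no document-level labels were involved.
```

The document-level attention weights are what allow a nested ranking of posts to fall out of an individual-level training signal, with no per-document annotations.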
Although concerns about using attention for interpretation exist (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Wallace, 2019), Shing et al. (2019) show that hierarchical document attention can align well with human-provided ground truth.

Our prediction model, 3HAN, is a variant of Hierarchical Attention Networks (HAN, Yang et al., 2016). Yang et al. use a two-level attention mechanism that learns to pay attention to specific words in a sentence to form a sentence representation, and at the next higher level to weight specific sentences in a document in forming a document representation. Adapting this approach to suicide assessment of at-risk individuals, our model moves a level up the representational hierarchy, learning also to weight documents to form representations of individuals. This allows us to jointly model ranking individuals and ranking their documents as potentially relevant evidence, without document-level annotations.

Evaluating rankings. There is an extensive IR literature on quality measures for ranked lists (Järvelin and Kekäläinen, 2002; Chapelle et al., 2009; Smucker and Clarke, 2012; Sakai, 2019), which generally reward placing highly relevant items near the top of the list and are often relatively insensitive to mistakes made near the bottom.

In the setting of suicidality risk assessment, we care about how much gain (the number of at-risk individuals found) can be achieved for a given time budget. Time-biased gain (TBG, Smucker and Clarke, 2012) measures this by assuming a determined user working down a ranked list, with the discount being a function of the time it takes to reach each position. However, neither TBG nor other ranking measures, to the best of our knowledge, can measure the hierarchical ranking found in the scenario that motivates our work: ranking items (i.e., individuals) when each item itself contains a ranked list of potential evidence (their posts).
In this paper, we design a new metric, hierarchical time-biased gain (hTBG), to measure the hierarchical ranking by incorporating the cascading user model found in Expected Reciprocal Rank (ERR, Chapelle et al., 2009) into TBG.

# 3 A Measure for Risk Prioritization

Section 1 argued for formulating risk assessment as a prioritization process where the assessor has a limited time budget. This leads to four desired properties in an evaluation measure:4

- Risk-based: Individuals with high risk should be ranked above others.
- Head-weighted: Ranking quality near the top of the list, where assessors are more likely to assess, should matter more than near the bottom.
- Speed-biased: For equally at-risk individuals, the measure should reward ranking the one who can be assessed more quickly closer to the top, so that more people at risk can be identified within a given time budget.
- Interpretable: The evaluation score assigned to a system should be meaningful to assessors.

![](images/401d29d0ee47d6f0b0613bd0455cf893c2ecccb5f924b2d9903fd127dfb893b5.jpg)
Figure 2: User model for Time-Biased Gain (TBG)

Among many rank-based measures that satisfy the risk-based and head-weighted criteria, TBG directly accounts for assessment time in a way that also satisfies the speed-biased criterion (see Theorem 3.1). Furthermore, the numeric value of TBG is a lower bound on the expected number of relevant items, in our case high-risk individuals, found in a given time budget (Smucker and Clarke, 2012), making it interpretable. After introducing TBG, in Section 3.2 we develop hierarchical Time-Biased Gain (hTBG), an extension of TBG, to account for specific properties of risk assessment using social media posts.[5]

# 3.1 Time-Biased Gain

TBG was originally developed in IR for the case of a user seeking to find a relevant document, but here we frame it in the context of risk assessment (Figure 2).
TBG assumes a determined user (say, a clinician) examining a ranked list of individuals in the order presented by the system. For each individual, the clinician first examines a summary and then decides whether to check relevance via more detailed examination, or to move on. Checking requires more time to make an assessment of whether the individual is indeed at risk. TBG is a weighted sum of gain, $g_{k}$, and discount, $D(\cdot)$, a function of time:

$$
\mathrm{TBG} = \sum_{k=1}^{\infty} g_{k} D(T(k)). \tag{1}
$$
| Parameter | Description | Value |
| --- | --- | --- |
| $P_{\text{check}}(\mathrm{rel}_i)$ | Prob. to check, given the relevance of the summary | 0.64 if $\mathrm{rel}_i = 1$; 0.39 if $\mathrm{rel}_i = 0$ |
| $P_{\text{flag}}(\mathrm{rel}_i)$ | Prob. to flag, given the relevance of the individual | 0.77 if $\mathrm{rel}_i = 1$; 0.27 if $\mathrm{rel}_i = 0$ |
| $T_s$ | Seconds to evaluate a summary | 4.4 |
| $T_\alpha W + T_\beta$ | Seconds to judge $W$ words | $0.018W + 7.8$ |
Table 1: Parameters used for TBG and hierarchical TBG.

$T(k)$ is the expected amount of time it takes a user to reach position $k$:

$$
T(k) = \sum_{i=1}^{k-1} t(i) \tag{2}
$$

$$
t(i) = T_{s} + P_{\text{check}}(\mathrm{rel}_{i}) E_{i} \tag{3}
$$

where $t(i)$ is the expected time spent at position $i$. Breaking down $t(i)$: $T_{s}$ is the time it takes to read a summary and decide whether to check the individual; if yes (with probability $P_{\text{check}}(\mathrm{rel}_i)$), $E_{i}$ is the expected time for detailed assessment, calculated as a function of the individual's total word count $W_{i}$:

$$
E_{i} = T_{\alpha} W_{i} + T_{\beta} \tag{4}
$$

where $T_{\alpha}$ and $T_{\beta}$ scale words to time. The discount function $D(t)$ decays exponentially with half-life $h$:

$$
D(t) = 2^{-\frac{t}{h}} \tag{5}
$$

where $h$ is the time at which half of the clinicians will stop, on average. The expected stop time (or mean-life) is $\frac{h}{\ln(2)}$. Finally, the gain $g_{k}$ is:

$$
g_{k} = P_{\text{check}}(\mathrm{rel}_{k}) \, P_{\text{flag}}(\mathrm{rel}_{k}) \, \mathbb{1}_{[\mathrm{rel}_{k} = 1]} \tag{6}
$$

where $P_{\text{check}}(\mathrm{rel}_k)$ is the probability of checking the individual after reading the summary at position $k$, and $P_{\text{flag}}(\mathrm{rel}_k)$ is the probability of then flagging that individual as high risk. Gain thus accrues only if a clinician actually finds a high-risk individual.

The decay function in Equation 5 monotonically decreases with increasing time (and thus rank), so TBG satisfies the head-weighted criterion. Table 1 shows the parameters used in Smucker and Clarke (2012), which were estimated from user studies using data from the TREC 2005 Robust track.
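For concreteness, Equations 1–6 with the Table 1 parameters fit in a few lines of Python. This is a minimal illustration, not the implementation used in our experiments; the function signature, the dictionary encoding of the parameters, and the toy ranking are ours:

```python
# Table 1 parameters (Smucker and Clarke, 2012), keyed by binary relevance
P_CHECK = {1: 0.64, 0: 0.39}   # prob. of checking after reading the summary
P_FLAG = {1: 0.77, 0: 0.27}    # prob. of flagging after a detailed check
T_S, T_ALPHA, T_BETA = 4.4, 0.018, 7.8  # seconds

def tbg(ranking, half_life):
    """Equations 1-6. ranking: (rel, word_count) pairs, rel in {0, 1};
    half_life in seconds. Returns expected at-risk individuals found."""
    total, elapsed = 0.0, 0.0
    for rel, words in ranking:
        gain = P_CHECK[rel] * P_FLAG[rel] * (1 if rel == 1 else 0)  # Eq. 6
        total += gain * 2.0 ** (-elapsed / half_life)               # Eqs. 1, 5
        elapsed += T_S + P_CHECK[rel] * (T_ALPHA * words + T_BETA)  # Eqs. 2-4
    return total
```

Note that the discount applied at each position uses the time accumulated over the positions above it, so placing a faster-to-assess at-risk individual higher raises the score, consistent with the speed-biased criterion.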
Particularly of interest in a time-limited assessment, we can prove that TBG is speed-biased:

Theorem 3.1 (TBG satisfies the speed-biased criterion). Swapping an at-risk individual of longer assessment time ranked at $k$ with an equally at-risk individual of shorter assessment time ranked at $k + r$, where $r > 0$, always increases TBG.

Proof. See Appendix B.1.

![](images/bd91536d1a0ccbf65edae4652089538f309b5d4f6a21708d818c4a6e1c7f7070.jpg)

![](images/b83fb5d11e23fd5c6316d52cffff2fd101b89bca6a10ef225dff7c266b3612ce.jpg)
Figure 3: hTBG's model for calculating expected assessment time for an individual, replacing the shaded box in Figure 2.

# 3.2 Hierarchical Time-Biased Gain

TBG assumes that detailed assessment involves looking at all available evidence (Equation 4). However, in our setting, an individual may have a large or even overwhelming number of social media posts. One severe-risk individual in the SuicideWatch dataset, for example, has 1,326 posts on Reddit, the vast majority of which would provide the assessor with no useful information. We therefore need to prioritize the documents to be read, and a way of estimating when the user will have read enough to make a decision.

In general, clinicians engage in a sensemaking process as they examine evidence, and modeling the full complexity of that process would be difficult. We therefore make two simplifying assumptions: (1) that there is a high-signal document that suffices, once read, to support a positive relevance judgment, and (2) that the clinician will not read more than some maximum number of documents.
These assumptions align well with those of Expected Reciprocal Rank (ERR), whose cascading user model assumes that as the user works down a ranked list (in our case, the ranked documents posted by a single individual), they are more likely to stop after viewing a highly relevant document than after viewing an irrelevant one, as their information need is more likely to have been satisfied (Chapelle et al., 2009). This results in a cascade model of user behavior: $\mathrm{ERR} = \sum_{k=1}^{\infty} \frac{1}{k} P(\text{stop at } k)$, in which $P(\text{stop at } k) = R_k \prod_{i=1}^{k-1} (1 - R_i)$, where $R_k = f(\mathrm{rel}_k)$ is the probability of stopping at position $k$ as a function of relevance.

This suggests replacing Equation 4 with the following expected time estimate for detailed assessment of an individual:

$$
E_{i} = T_{\alpha} \sum_{l=1}^{L} \left( W_{i,l} \prod_{m=1}^{l-1} (1 - R_{i,m}) \right) + T_{\beta} \tag{7}
$$

where $R_{i,l}$ is the probability of stopping at the $l$-th document for individual $i$, and $W_{i,l} > 0$ is the cost (in our case, word count) of reading the $l$-th document for individual $i$. Note that in the special case where $R_{i,l} = 0$ for all $i$ and $l$, hTBG reduces to TBG. See Figure 3 for an illustration of $E_i$ in hTBG. For the derivation of Equation 7 from ERR's cascading user model, see Appendix B.3.

# 3.3 Optimal Values for TBG and hTBG

Calculation of the optimal value for a measure is often important for normalization, though not always easy; in some cases it can be NP-hard (Agrawal et al., 2009, ERR-IA). Another popular approach is to normalize by calculating the metric on an ideal collection. For example, Smucker and Clarke (2012) calculate the normalization factor of TBG by assuming a collection with an infinite number of relevant documents, each of which lacks any content.
In our case, however, we are actually interested in the optimal value achievable for a given test collection: the optimal values of TBG and hTBG are properties of the bottleneck created by the user's limited time budget. We find that:

Theorem 3.2 (Optimal TBG). The optimal value of TBG under binary relevance is obtained if and only if (1) all at-risk individuals are ranked above not-at-risk individuals, and (2) the at-risk individuals are sorted by assessment time in ascending order.

Proof. See Appendix B.1.

![](images/86622017e294bdd6220a243d47c9b175453f556e5586e2dc4ed5720beb53f7d2.jpg)

Theorem 3.2 makes sense, as any time spent assessing a not-at-risk individual is time not spent assessing other potentially at-risk individuals. Preferring individuals with shorter assessment time also increases the chance of assessing more individuals within the given time budget.

Minimum Individual Assessment Time. To calculate optimal hTBG, we need to minimize individual assessment time. A natural question to ask, then, is whether a result similar to Theorem 3.2 holds for the individual assessment time of hTBG in Equation 7. By swapping paired documents, we can use proof by contradiction to show that:

Theorem 3.3. Minimum individual assessment time is obtained if the documents are sorted in descending order by $\frac{R_{i,l}}{W_{i,l}}$.

Proof. See Appendix B.2.

![](images/1e61aaccbfc65cd89f04d708ccdc213e3e868ed39cfd9e58437442c36fda0d2f.jpg)

Theorem 3.3 shows a surprisingly intuitive tradeoff between how relevant a document might be and how much time (proportional to its word count) the expert needs to read it: highly relevant documents with short reading times are preferred.

Observe that Theorem 3.1 (the speed-biased criterion) and Theorem 3.2 both apply to hTBG, as the two theorems only concern the ranking of individuals, not documents, and hTBG extends TBG to also measure the document ranking.
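Concretely, the orderings established by Theorems 3.2 and 3.3, together with the expected assessment time of Equation 7 that they minimize, can be sketched as follows. This is a minimal illustration, not the experimental implementation; the tuple encodings, names, and defaults (the Table 1 time constants and a 50-document cutoff reflecting Section 3.2's maximum-documents assumption) are ours:

```python
def expected_assess_time(docs, t_alpha=0.018, t_beta=7.8, max_docs=50):
    """Equation 7: expected time to assess one individual.
    docs: (word_count, stop_prob) pairs in reading order; the cascade
    is truncated after max_docs documents."""
    expected_words, p_reach = 0.0, 1.0
    for words, stop_prob in docs[:max_docs]:
        expected_words += words * p_reach  # this doc is reached w.p. p_reach
        p_reach *= 1.0 - stop_prob         # cascade: reader may stop here
    return t_alpha * expected_words + t_beta

def optimal_document_order(docs):
    """Theorem 3.3: sort documents by stop_prob / word_count, descending."""
    return sorted(docs, key=lambda d: d[1] / d[0], reverse=True)

def optimal_individual_order(individuals):
    """Theorem 3.2: at-risk individuals first, then ascending assessment
    time. individuals: (at_risk, assess_time) pairs."""
    return sorted(individuals, key=lambda x: (not x[0], x[1]))
```

With all stopping probabilities zero, `expected_assess_time` reduces to Equation 4, recovering TBG's behavior.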
Using Theorem 3.3 and Theorem 3.2, calculation of optimal TBG and hTBG values is simply a matter of sorting. For TBG, time complexity is $O(n \log(n))$, where $n \leq K$ is the number of at-risk individuals in the test collection. For hTBG, worst-case time complexity is $O(n \log(n) + nm \log(m))$, where $m \leq L$ is the maximum number of relevant documents per individual.

# 4 Classification Model

We began by motivating risk assessment via social media as a person-centered, time-limited prioritization problem, in which the technological goal is to support downstream clinicians or other assessors in identifying as many people at risk as possible. This led to the conclusion that systems should not only rank individuals but, for each individual, rank their posts, and we introduced an evaluation framework that involves an abstraction of the user's process of identifying people at risk given a nested ranking.

Next, we need a system that can produce such nested rankings of individuals and their posts. Ideally such a system should be able to train on only individual-level, not document-level, labels, since suicide risk is a property of individuals, not documents, and document labels are more difficult to obtain. In addition, such a system should ideally produce additional information to help the downstream user — if not justification of its output, then at least highlighting potentially useful information.

To address this need, we introduce 3HAN, a hierarchical attention network (Yang et al., 2016) that extends up to the level of individuals, who are represented as sequences of documents. This architecture is similar to the network we proposed in Shing et al. (2019) for coding clinical encounters; it obtained good predictive performance and we also showed that, despite concerns about the interpretation of network attention (Jain and Wallace, 2019), hierarchical document-level attention succeeded in identifying documents containing relevant evidence.
The architecture here differs in that it builds representations hierarchically from the word level, as opposed to pre-extracted conceptual features, and takes document ordering into account using a bi-directional GRU (Bahdanau et al., 2015).

Specifically, our model has five layers (Figure 4). The first is a word-embedding layer that turns a one-hot word vector into a dense vector. The second to fourth layers are three Seq2Vec layers with attention that learn to aggregate, respectively, a sequence of word vectors into a sentence vector, a sequence of sentence vectors into a document vector, and a sequence of document vectors into an individual vector (hence 3HAN). The final layer is a fully connected layer followed by a softmax.

We detail our Seq2Vec layer in the context of aggregating a sequence of document vectors into an individual's vector; the three Seq2Vec layers are otherwise the same. See Figure 4b for an illustration. Document vectors $\{d_{i,j}\}_{j=1}^{m}$ are first passed through a bi-directional GRU layer. The outputs, after passing through a fully-connected layer and a non-linearity, are then compared to a learnable attention vector, $v_{\text{attention}}$. Specifically,

$$
g_{i,j} = \operatorname{Bi-GRU}(d_{i,j}) \tag{8}
$$

$$
r_{i,j} = \tanh\left(W g_{i,j} + b\right) \tag{9}
$$

$$
a_{i,j} = \frac{e^{r_{i,j}^{\top} v_{\text{attention}}}}{\sum_{j'=1}^{m} e^{r_{i,j'}^{\top} v_{\text{attention}}}} \tag{10}
$$

$$
u_{i} = \sum_{j=1}^{m} a_{i,j} g_{i,j} \tag{11}
$$

where $a_{i,j}$ is the normalized document attention score for the $j$-th vector, and $u_i$ is the final aggregated individual vector. As shown in Equation 10, the transformed vector $r_{i,j}$ is compared with the learnable attention vector $v_{\text{attention}}$ using a dot product, and the result is normalized for the weighted averaging step in Equation 11.
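The aggregation in Equations 9–11 can be sketched in NumPy, taking the Bi-GRU outputs $g_{i,j}$ as given. This is an illustrative rendering, not our training code; the array shapes and names are ours:

```python
import numpy as np

def attention_pool(g, W, b, v_attention):
    """Aggregate a sequence of Bi-GRU outputs g (m x d) into one vector:
    tanh projection (Eq. 9), softmax attention against the learnable
    vector v_attention (Eq. 10), and a weighted sum (Eq. 11)."""
    r = np.tanh(g @ W + b)        # Eq. 9
    scores = r @ v_attention      # r^T v_attention, one score per vector
    a = np.exp(scores - scores.max())
    a /= a.sum()                  # Eq. 10 (numerically stable softmax)
    return a @ g                  # Eq. 11: the aggregated vector u_i
```

Fixing the attention weights to $\frac{1}{m}$ instead of Equation 10 yields a plain averaging variant of this layer.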
Once we have the individual vector $u_{i}$, we can predict the risk label of the individual by passing it through a fully-connected layer and a softmax. Specifically,

$$
P\left(\hat{y}_{i}\right) = \operatorname{softmax}\left(W_{FC} u_{i} + b_{FC}\right) \tag{12}
$$

Finally, we compare with the ground truth label $y_{i}$ of individual $i$ using the negative log-likelihood to calculate a loss:

$$
\operatorname{loss}_{i} = -\log\left(P\left(\hat{y}_{i} = y_{i}\right)\right). \tag{13}
$$

# 5 Experimentation

We first introduce the test collection, and then show how we can evaluate 3HAN and the cascade model baselines on the test collection using hTBG.

To demonstrate the effectiveness of the 3HAN model, which jointly learns to rank individuals and, within each individual, their posts as evidence, we compare it with different combinations of individual-level rankers and document-level rankers. Training details for all the models can be found in Appendix C.

# 5.1 Test Collection

In our experimentation, we use the University of Maryland Reddit Suicidality Dataset, v.2 (Shing et al., 2018; Zirikly et al., 2019). This English-language dataset, derived from the 2015 Full Reddit Submission Corpus (2006-2015), includes 11,129 potentially at-risk individuals who posted on r/SuicideWatch (a subreddit dense in self-reports about suicidality, henceforth SW), as well as 11,129 control individuals who never posted on any mental-health-related subreddit. Entire posting histories (not just from SW, but from all Reddit forums) were collected. An individual's number of posts can range from 10 to 1,326. See Table 2 for a detailed breakdown of the number of posts per individual across datasets and risk categories.

The full dataset has three subsets with disjoint individuals.
The first, which we term the WEAK SUPERVISION dataset, includes 10,263 individuals who posted in SW and 10,263 control individuals who did not; they are respectively considered to be indirectly positively and negatively labeled, very noisily so, since posting on SW does not necessarily imply suicidal ideation.[8] The second is the CROWDSOURCE dataset, including 621 individuals annotated by crowdsource annotators with four risk levels: No Risk, Low Risk, Moderate Risk, and Severe Risk.

![](images/ab9b5b728064c9d9d60079465bc59b77d5961e355099b2fea43fd4620f1fe894.jpg)
(a) 3HAN

![](images/ab46f07369651bc8fddaf8f16cea8d9060d416d059fa6e7df39175a7aab313d7.jpg)
(b) Seq2Vec with Attention

Figure 4: An illustration of the three-level Hierarchical Attention Network (3HAN) model
| Dataset | Risk Level | 10-20 | 20-40 | 40-60 | 60-100 | 100-200 | 200-500 | 500-1,000 | 1,000-1,500 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CrowdSource | No Risk | 31 | 42 | 25 | 27 | 18 | 12 | 4 | 0 |
| CrowdSource | Low Risk | 19 | 22 | 5 | 11 | 2 | 4 | 0 | 0 |
| CrowdSource | Moderate Risk | 46 | 45 | 19 | 14 | 9 | 7 | 1 | 0 |
| CrowdSource | Severe Risk | 80 | 79 | 37 | 19 | 28 | 12 | 3 | 0 |
| Expert | No Risk | 3 | 7 | 2 | 5 | 7 | 8 | 3 | 0 |
| Expert | Low Risk | 6 | 11 | 5 | 11 | 8 | 7 | 1 | 1 |
| Expert | Moderate Risk | 23 | 19 | 12 | 26 | 13 | 14 | 5 | 3 |
| Expert | Severe Risk | 7 | 2 | 5 | 9 | 10 | 4 | 4 | 1 |
Table 2: Number of individuals with the number (range) of posts, by dataset and risk category.

The last is the EXPERT dataset, including 242 individuals with the same four-level annotation, by four suicide risk assessment experts. Along with the level of risk for each individual, the expert annotators also designated the single post that most strongly supported each of their Low, Moderate, or Severe Risk labels.

# 5.2 Evaluating with hTBG

As TBG and hTBG are measures designed for binary relevance judgements, we map the Severe Risk category to at-risk, and everything else to not-at-risk. For word counts, we directly use the token counts in documents. We use the parameters that Smucker and Clarke (2012) estimated for TBG in user studies (Table 1). As discussed in Section 3.2, we assume there exists a maximum number of documents the clinician can read for each individual. We set that number to 50 for the calculation of hTBG; if no relevant document exists in the top 50 documents, we consider that individual a miss and set the gain to zero.$^{11}$

To rank individuals using our classification models, we use a standard conversion method to convert four-class probabilities into a single score:

$$
\sum_{\mathrm{rel}_{i} \in R} P\left(\hat{y}_{i} = \mathrm{rel}_{i}\right) \operatorname{score}_{\mathrm{rel}_{i}} \tag{14}
$$

where $R$ is $\{\mathrm{No, Low, Moderate, Severe}\}$, and $\operatorname{score}_{\mathrm{rel}_i}$ is the real number that the risk level of individual $i$ maps to. We use $\{\mathrm{No} = 0, \mathrm{Low} = 1, \mathrm{Moderate} = 2, \mathrm{Severe} = 4\}$ as our mapping: No Risk can plausibly be treated the same as a post with no annotation (e.g. a control individual), and exponential scaling also seems plausible, although it is just one of many possibilities, which we leave for future work.

The hTBG metric also requires a stopping probability for each document, $R_{i,l}$.
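Both mappings, the score conversion of Equation 14 and the per-document stopping probability we estimate next (Equation 15), are small enough to sketch directly. This is an illustrative rendering; the names and dictionary encoding are ours:

```python
SCORE = {"No": 0, "Low": 1, "Moderate": 2, "Severe": 4}  # risk-level mapping
SCORE_MAX = 4

def individual_score(probs):
    """Equation 14: expected risk score from four-class probabilities,
    given as a dict mapping risk level to probability."""
    return sum(probs[level] * s for level, s in SCORE.items())

def stop_prob(doc_labels):
    """Equation 15: stopping probability for one document, from the risk
    labels of the C annotators who flagged it as strongest evidence."""
    p_pass = 1.0
    for level in doc_labels:
        p_pass *= 1.0 - SCORE[level] / SCORE_MAX
    return 1.0 - p_pass
```

Under this mapping, a document flagged Severe by any annotator gets stopping probability 1, while additional lower-severity flags push the probability toward 1 without reaching it.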
We assume that the more severe the risk associated with a document is, the more likely the assessor is to stop and flag the individual. On the EXPERT dataset, where we have document-level annotations, we can therefore estimate the expected stopping probability as:

$$
R_{i,l} = 1 - \prod_{c=1}^{C}\left(1 - \frac{\operatorname{score}_{\mathrm{rel}_{i,l,c}}}{\operatorname{score}_{\max}}\right) \tag{15}
$$

where $C$ annotators annotated the post as most strongly supporting their judgment. $\operatorname{score}_{\mathrm{rel}_{i,l,c}}$ is a mapping from the document-level risk assigned by annotator $c$ to a real number, with the same mapping used in Equation 14. $\operatorname{score}_{\max} = 4$ is the maximum in that mapping.

To reflect different time budgets, we report results with the half-life parameter ranging from 1 to 6 hours, which corresponds to expected reading time budgets from 1.4 to 8.7 hours.

# 5.3 Models for Ranking Individuals

3HAN. 3HAN is first pretrained on the binary WEAK SUPERVISION dataset. The model is then further tuned on the four-class CROWDSOURCE dataset by transferring the weights (except the last fully-connected prediction layer) over. We initialized and fixed the word embeddings using the 200-dimensional GloVe embeddings trained on Twitter (Pennington et al., 2014).

3HAN_AV. 3HAN Average is trained the same way as 3HAN, except that the last Seq2Vec layer (the layer that aggregates a sequence of document vectors into an individual vector) averages instead of using attention, which can be achieved by fixing $a_{i,j} = \frac{1}{m}$ in Equation 10. This is similar to the HN-AVE baseline in Yang et al. (2016). Note that 3HAN_AV cannot rank documents, as it lacks document attention.

LR. A logistic regression model is trained on the CROWDSOURCE dataset.
The feature vector for an individual is computed by converting documents into document-level feature vectors, and then averaging them to obtain an individual-level feature vector. For each document, we concatenate four feature sets: (1) bag-of-words, restricted to vocabulary with count larger than three, (2) GloVe embeddings summed over words, (3) 194 features representing emotional topics from Empath (Fast et al., 2016), and (4) seven scores measuring document readability.$^{13}$ This model is included as a conventional baseline in suicide risk assessment, similar to the baseline found in Shing et al. (2018).

# 5.4 Models for Ranking Documents

3HAN_ATT. Document attention learned jointly with 3HAN. As a side effect of training our 3HAN model, we learn document attention scores (Equation 10). These scores can then be used to rank documents in terms of their relevance to the judgement. This availability of document ranking, despite a lack of document annotations, is a significant advantage of hierarchical attention networks, since fine-grained document annotations are difficult to obtain at scale. Sentence- and word-level attention are a further advantage, in terms of potentially facilitating user review (see Figure 1), although exploring that awaits future work.

FORWARD and BACKWARD. Ranking an individual's documents in either chronological or reverse chronological order is an obvious default in the absence of a trained model for document ranking; these are important baselines for testing whether a document ranking model actually adds value.

# 6 Results and Discussion

Our model, 3HAN+3HAN_ATT, the only joint model, achieves the best performance on hTBG compared to all other combinations of individual rankers and document rankers, across three different time budgets (Table 3). The result is significant except when compared to 3HAN_AV+3HAN_ATT. However, using 3HAN_ATT to rank documents implies that one has already trained 3HAN.
Therefore, a more reasonable combination to compare with is 3HAN_AV+BACKWARD, which we outperform by a significant margin.

| Individual Ranker | Document Ranker | h = 1 hr | h = 3 hrs | h = 6 hrs |
| --- | --- | --- | --- | --- |
| LR | FORWARD | 7.51 | 10.05 | 10.89 |
| 3HAN_AV | FORWARD | 7.76 | 10.15 | 10.94 |
| 3HAN | FORWARD | 7.40 | 9.98 | 10.84 |
| LR | BACKWARD | 8.75 | 11.70 | 12.68 |
| 3HAN_AV | BACKWARD | 9.65 | 12.09 | 12.89 |
| 3HAN | BACKWARD | 9.73 | 12.17 | 12.95 |
| LR | 3HAN_ATT | 9.44 | 12.05 | 12.88 |
| 3HAN_AV | 3HAN_ATT | 10.16 | 12.35 | 13.04 |
| 3HAN | 3HAN_ATT | 10.39 | 12.49 | 13.12 |
| Optimal hTBG | | 19.78 | 20.39 | 20.54 |

Table 3: hTBG scores with three different time budgets (half-life h), for all combinations of individual and document rankers.

Overall, the effect of document ranking is larger than the effect of individual ranking. Notably, the FORWARD document ranker always yields the worst performance. BACKWARD, on the other hand, is surprisingly competitive. We hypothesize that this may be an indication that suicidal ideation worsens over time, or perhaps of the unfortunate event of suicide attempts following the posting of a Severe Risk document. This underscores the importance of prioritizing the reading order of documents: finding evidence early in suicide assessment leaves more time for other individuals and reduces the probability of misses.

Document ranking alone does not decide everything, as 3HAN+BACKWARD outperforms LR+3HAN_ATT. It is the combination of 3HAN and its document attention that produces our best model. This makes sense, as 3HAN, while learning to predict the level of risk, also learns which documents are important to making that prediction.

Figure 1 shows the top 3 documents in a summary-style view for each of the 3 highest-ranked individuals, with word-level attention shown using shading. Words without attention are obfuscated; others are altered to preserve privacy.

Previously Existing Measures. Under previously existing measures, e.g. TBG and NDCG@20, document ranking has no effect, so these are not suitable measures in our scenario. We nevertheless include results for reference (Table 4). Since 3HAN_AV and LR cannot rank documents, hTBG cannot be calculated for them directly, so for those models we report hTBG under the chronologically backward document ranking strategy. NDCG@20 is the NDCG score cut off at rank 20, chosen based on the optimal hTBG value.

# 7 Conclusions and Future Work

We introduced hTBG, a new evaluation measure, as a step toward moving beyond risk classification to a paradigm in which prioritization is the focus, and where time matters. Like TBG, the hTBG score is interpretable as a lower bound on the expected
number of relevant items found in a ranking, given a time budget. In our experiment, a "relevant item" is a person classified by experts as being at risk of attempting suicide in the near future.

| Ranker | hTBG | TBG | NDCG@20 |
| --- | --- | --- | --- |
| 3HAN+3HAN_ATT | 12.49 | 11.46 | 70.90 |
| 3HAN_AV+BACKWARD | 12.09 | 11.40 | 68.28 |
| LR+BACKWARD | 11.70 | 10.98 | 69.44 |
| Optimal | 20.39 | 19.75 | 100.00 |

Table 4: TBG and NDCG@20 listed to compare with hTBG. Both hTBG's and TBG's half-lives are set to 3 hrs, and the maximum document cutoff is set to 50.

Measured at an expected reading time budget of about half a day (4 hr 20 min, half-life 3 hrs), our joint ranking approach achieved an hTBG of 12.49, compared with 11.70 for a plausible baseline from prior art: using logistic regression to rank individuals, and then reading an individual's posts in backward chronological order. That increase is just a bit short of identifying one more person in need of immediate help in the experiment's population of 242 individuals. There are certainly limitations in our study, and miles to go before validating our approach in the real world, but our framework should make it easy to integrate and explore other individual rankers, document rankers, and explanation mechanisms, and to actually build user interfaces like the schematic in Figure 1.

# Acknowledgments

This work has been supported in part by a University of Maryland Strategic Partnership (MPower) seed grant, an AWS Machine Learning Research Award, and an AI + Medicine for High Impact (AIM-HI) Challenge Award. We are immensely grateful to Glen Coppersmith, Michelle Colder Carras, April Foreman, Michelle Kuchuk, Beau Pinkham, Rebecca Resnik, Katherine Musacchio Schafer, Jonathan Singer, Raymond Tucker, Tony Wood, Ayah Zirikly, members of the UMIACS CLIP lab, and participants at the Workshops on Computational Linguistics and Clinical Psychology for valuable discussions related to this work.

# References

Rakesh Agrawal, Sreenivas Gollapudi, Alan Halverson, and Samuel Ieong. 2009. Diversifying search results.
In Proceedings of the Second ACM International Conference on Web Search and Data Mining, WSDM '09, pages 5-14, New York, NY, USA. Association for Computing Machinery.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Krisztian Balog. 2018. Entity-Oriented Search, volume 39 of The Information Retrieval Series. Springer.
Krisztian Balog, Yi Fang, Maarten de Rijke, Pavel Serdyukov, and Luo Si. 2012. Expertise retrieval. Foundations and Trends in Information Retrieval, 6(2-3):127-256.
Tal Baumel, Jumana Nassour-Kassis, Raphael Cohen, Michael Elhadad, and Noémie Elhadad. 2018. Multi-label classification of patient notes: Case study on ICD code assignment. In The Workshops of the Thirty-Second AAAI Conference on Artificial Intelligence, pages 409-416. AAAI Press.
Adrian Benton, Glen Coppersmith, and Mark Dredze. 2017. Ethical research protocols for social media health research. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, EthNLP@EACL, pages 94-102. Association for Computational Linguistics.
David E. Bloom, Elizabeth Cafiero, Eva Jané-Llopis, Shafika Abrahams-Gessel, Lakshmi Reddy Bloom, Sana Fathima, Andrea B. Feigl, Tom Gaziano, Ali Hamandi, Mona Mowafi, Danny O'Farrell, and Emre. 2012. The Global Economic Burden of Noncommunicable Diseases. PGDA Working Papers 8712, Program on the Global Demography of Aging.
Bureau of Health Workforce. 2020. Designated health professional shortage areas: Statistics, second quarter of fiscal year 2020, designated HPSA quarterly summary.
Rafael A. Calvo, David N. Milne, M. Sazzad Hussain, and Helen Christensen. 2017. Natural language processing in mental health applications using non-clinical texts. Nat. Lang. Eng., 23(5):649-685.
Stevie Chancellor, Michael L.
Birnbaum, Eric D. Caine, Vincent M. B. Silenzio, and Munmun De Choudhury. 2019. A Taxonomy of Ethical Tensions in Inferring Mental Health States from Social Media. In Proceedings of the Conference on Fairness, Accountability, and Transparency.
Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. Expected reciprocal rank for graded relevance. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM 2009, pages 621-630. ACM.
Munmun De Choudhury. 2013. Role of social media in tackling challenges in mental health. In Proceedings of the 2nd International Workshop on Socially-Aware Multimedia, SAM@ACM Multimedia 2013, pages 49-52. ACM.
Cindy Chung and James W. Pennebaker. 2007. The psychological functions of function words. Social Communication, 1:343-359.
Glen Coppersmith, Ryan Leary, Patrick Crutchley, and Alex Fine. 2018. Natural Language Processing of Social Media as Screening for Suicide Risk. Biomedical Informatics Insights, 10:117822261879286.
Ethan Fast, Binbin Chen, and Michael S. Bernstein. 2016. Empath: Understanding topic signals in large-scale text. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 4647-4657. ACM.
Joseph C. Franklin, Jessica D. Ribeiro, Kathryn R. Fox, Kate H. Bentley, Evan M. Kleiman, Xieyining Huang, Katherine M. Musacchio, Adam C. Jaroszewski, Bernard P. Chang, and Matthew K. Nock. 2017. Risk factors for suicidal thoughts and behaviors: A meta-analysis of 50 years of research. Psychological Bulletin, 143(2):187-232.
Devin Gaffney and J. Nathan Matias. 2018. Caveat emptor, computational social science: Large-scale missing data in a widely-published Reddit corpus. PLOS ONE, 13(7):1-13.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform.
In Proceedings of Workshop for NLP Open Source Software (NLP-OSS), pages 1-6. Association for Computational Linguistics.
Holly Hedegaard, Sally C. Curtin, and Margaret Warner. 2018. Suicide rates in the United States continue to increase. National Center for Health Statistics.
Matthew Honnibal and Mark Johnson. 2015. An improved non-monotonic transition system for dependency parsing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, pages 1373-1378. The Association for Computational Linguistics.
Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, pages 3543-3556. Association for Computational Linguistics.
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422-446.
Explainable prediction of medical codes from clinical text. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, pages 1101-1111. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, pages 1532-1543. ACL.
Tetsuya Sakai. 2019. Graded relevance assessments and graded relevance measures of NTCIR: A survey of the first twenty years. CoRR, abs/1903.11272.
SAMHSA. 2019. National Survey on Drug Use and Health, 2017 and 2018. Center for Behavioral Health Statistics and Quality. Table 8.58B.
Allison Schuck, Raffaella Calati, Shira Barzilay, Sarah Bloch-Elkouby, and Igor Galynker. 2019. Suicide Crisis Syndrome: A review of supporting evidence for a new suicide-specific diagnosis. Behavioral Sciences & the Law, 37(3):223-239.
Han-Chin Shing, Suraj Nair, Ayah Zirikly, Meir Friedenberg, Hal Daumé III, and Philip Resnik. 2018. Expert, crowdsourced, and machine assessment of suicide risk via online postings. In Proceedings of the Fifth Workshop on Computational Linguistics and Clinical Psychology: From Keyboard to Clinic, CLPsych@NAACL-HLT, pages 25-36. Association for Computational Linguistics.
Han-Chin Shing, Guoli Wang, and Philip Resnik. 2019. Assigning medical codes at the encounter level by paying attention to documents. In ML4H, Machine Learning for Health Workshop at NeurIPS.
Mark D. Smucker and Charles L. A. Clarke. 2012. Time-based calibration of effectiveness measures. In The 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '12, pages 95-104. ACM.
Ellen M. Voorhees. 2001. The philosophy of information retrieval evaluation.
In Evaluation of Cross-Language Information Retrieval Systems, Second Workshop of the Cross-Language Evaluation Forum, CLEF 2001, volume 2406 of Lecture Notes in Computer Science, pages 355-370. Springer. +Byron C. Wallace. 2019. Thoughts on "attention is not not explanation". Medium. Accessed December 2019. +Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 11-20. Association for Computational Linguistics. +Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1480-1489. The Association for Computational Linguistics. +Andrew Yates, Arman Cohan, and Nazli Goharian. 2017. Depression and Self-Harm Risk Assessment in Online Forums. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2968-2978. +Michael Zimmer. 2010. "But the data is already public": on the ethics of research in Facebook. Ethics and Information Technology, 12(4):313-325. +Ayah Zirikly, Philip Resnik, Özlem Uzuner, and Kristy Hollingshead. 2019. CLPsych 2019 shared task: Predicting the degree of suicide risk in Reddit posts. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 24-33. Association for Computational Linguistics. + +# A Appendix: Ethical Considerations + +Our research involving the University of Maryland Reddit Suicide Dataset has undergone review by the University of Maryland Institutional Review Board with a determination of Category 4 Exempt status under U.S. federal regulations. 
For this dataset, (a) the original data are publicly available, and (b) the originating site (Reddit) is intended for anonymous posting. In addition, although Reddit is officially anonymous, anonymity is not enforced on the site; the dataset has therefore undergone automatic de-identification, using named entity recognition aggressively to identify and mask potential personally identifiable information such as personal names and organizations, in order to create an additional layer of protection (Zirikly et al., 2019). In an assessment of de-identification quality, we manually reviewed a sample of 200 randomly selected posts (100 from the SuicideWatch subreddit and 100 from other subreddits), revealing zero instances of personally identifiable information. + +Following Benton et al. (2017), we treat the data (even though de-identified) as sensitive and restrict access to it; we use obfuscated and minimal examples in papers and presentations; and we do not engage in linkage with other datasets. + +The dataset is available to other researchers via an application process put in place with the American Association of Suicidology that requires IRB or equivalent ethical review, a commitment to appropriate data management, and, since ethical research practice is not just a matter of publicly available data or even IRB approval (Zimmer, 2010; Benton et al., 2017; Chancellor et al., 2019), a commitment to following additional ethical guidelines. Interested researchers can find information at http://umiacs.umd.edu/~resnik/umd_redit_suicidality_dataset.html. 
+ +# B Appendix: Proofs + +# B.1 Time-Biased Gain + +In order to prove that TBG satisfies the speed-biased criterion, consider two individuals ranked at consecutive positions $k$ and $k + 1$; if we swap the two individuals, the change in TBG score is: + +$$ +\begin{aligned} \Delta \mathrm{TBG} = {} & (g_{k+1} - g_{k})\, D(T(k)) \\ & + g_{k}\, D(T(k) + t(k+1)) \\ & - g_{k+1}\, D(T(k) + t(k)) \end{aligned} \tag{16} +$$ + +This leads to Lemmas B.1-B.3: + +Lemma B.1. Swapping a not-at-risk individual ranked at $k$ with an at-risk individual ranked at $k + 1$ always increases TBG. + +Proof. Let $g_{k} = 0$ and $g_{k + 1} > 0$. Equation 16 simplifies to + +$$ +\Delta \mathrm{TBG} = g_{k + 1} \left( D(T(k)) - D(T(k) + t(k)) \right) \tag{17} +$$ + +which is always positive because the decay function monotonically decreases, and each assessment of an individual requires at least $T_{s}$ seconds. + +Lemma B.2 (Risk-based Criterion). The optimal value of TBG under binary relevance is obtained only if all not-at-risk individuals are ranked below all at-risk individuals. + +Proof. Let $\pi$ be a ranking of individuals that yields the optimal value of TBG. Assume that in $\pi$ there exist not-at-risk individuals ranked before at-risk individuals. Let position $k$ hold the lowest-ranked not-at-risk individual that is ranked above at least one at-risk individual; we can then apply Lemma B.1 to increase TBG. This leads to a contradiction. + +Lemma B.3. Swapping an at-risk individual of longer assessment time ranked at $k$ with an at-risk individual of shorter assessment time ranked at $k + n$, where $k + n$ is the closest at-risk individual ranked lower than $k$, always increases TBG. + +Proof. Let $g_{k} = g_{k + n} > 0$, and $\forall i \in \{i \mid k < i < k + n\}$, $g_{i} = 0$. 
We have + +$$ +\begin{aligned} \Delta \mathrm{TBG} = {} & g_{k} \big( D(T(k+n) + t(k+n) - t(k)) \\ & - D(T(k+n)) \big) \end{aligned} \tag{18} +$$ + +which is always positive because the decay function monotonically decreases, and $t(k + n) < t(k)$ by the assumption that the individual at $k + n$ has the shorter assessment time. + +Lemma B.3 naturally leads to a proof of the speed-biased property of TBG: + +Proof for Theorem 3.1. Applying Lemma B.3, we know that swapping $k$ and $k + r$ leads to a positive gain between the two. Now, consider all at-risk individuals ranked between $k$ and $k + r$: $\forall u$ s.t. $k < u < k + r$, the difference is: + +$$ +g_{u} \left( D(T(u) + t(k + r) - t(k)) - D(T(u)) \right) \tag{19} +$$ + +which is always greater than or equal to zero because the decay function monotonically decreases and $t(k + r) < t(k)$. Thus, the net difference is always larger than zero, satisfying the speed-biased criterion. + +Finally, combining the previous results, we can easily show: + +Proof for Theorem 3.2. A direct consequence of Theorem 3.1 is that if the at-risk individuals are sorted by assessment time in ascending order, no swap between any two individuals can increase TBG. This, combined with Lemma B.2, which places all at-risk individuals above all not-at-risk individuals, establishes the necessary condition. Because any swap within the not-at-risk individuals does not change TBG when no at-risk individuals are ranked lower, ranking according to Theorem 3.2 gives a unique and optimal value, which satisfies the sufficient condition of Theorem 3.2. $\square$ + +# B.2 Hierarchical Time-Biased Gain + +The assessment time of an individual ranked at $k$, $t(k)$, is monotonic in $E_{i}$, so it suffices to show that $E_{i}$ is minimized. 
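The TBG swap arguments above (Equations 16-19) are easy to check numerically. A minimal sketch in Python, assuming an exponential decay $D(t) = 2^{-t/h}$ with an illustrative half-life; the proofs only require $D$ to be monotonically decreasing:

```python
def tbg(gains, times, halflife=224.0):
    """Time-Biased Gain: sum_k g_k * D(T(k)), with D(t) = 2**(-t / halflife)
    and T(k) the total assessment time spent before reaching rank k."""
    total, elapsed = 0.0, 0.0
    for g, t in zip(gains, times):
        total += g * 2 ** (-elapsed / halflife)  # gain discounted at time T(k)
        elapsed += t                             # T(k+1) = T(k) + t(k)
    return total

# Lemma B.1: moving an at-risk individual (gain > 0) above a not-at-risk one
# (gain 0) always increases TBG; assessment times move with the individuals.
before = tbg(gains=[0.0, 1.0], times=[30.0, 20.0])
after = tbg(gains=[1.0, 0.0], times=[20.0, 30.0])
assert after > before
```

The half-life value is a hypothetical parameter choice for illustration, not one prescribed by the proofs.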
Recall that $E_{i}$ is calculated as: + +$$ +E_{i} = T_{\alpha} \sum_{l=1}^{L} \left( W_{i,l} \prod_{m=1}^{l-1} \left(1 - R_{i,m}\right) \right) + T_{\beta} \tag{20} +$$ + +Consider, again, swapping a document at rank $l$ with a document at rank $l + 1$ belonging to the same individual $i$. The change in $E_{i}$ is: + +$$ +\Delta E_{i} = \kappa_{i,l} \left( W_{i,l+1} R_{i,l} - W_{i,l} R_{i,l+1} \right) \tag{21} +$$ + +where $\kappa_{i,l} = T_{\alpha} \prod_{j=1}^{l-1} (1 - R_{i,j}) \geq 0$ is a fixed term that is not affected by the swap. + +Equation 21 also points to an important observation: + +Lemma B.4. If $W_{i,l+1} R_{i,l} - W_{i,l} R_{i,l+1} < 0$ and $R_{i,j} < 1$ for all $j < l$, then swapping document $l$ with document $l + 1$ will decrease $E_{i}$. + +Proof. This follows directly from Equation 21. $\square$ + +Lemma B.5. If $R_{i,j} < 1$ for all $j$, then the minimum individual assessment time is obtained if and only if the documents are sorted in descending order by + +$$ +\frac{R_{i,l}}{W_{i,l}}. \tag{22} +$$ + +Proof. Let $\tau$ be a document ranking that yields the minimum individual assessment time and, for the sake of contradiction, not a ranking that can be obtained by ranking according to $\frac{R_{i,l}}{W_{i,l}}$. We can thus find two neighboring documents, without loss of generality $l$ and $l + 1$, such that: + +$$ +\frac{R_{i,l}}{W_{i,l}} < \frac{R_{i,l+1}}{W_{i,l+1}} \tag{23} +$$ + +This leads to: + +$$ +R_{i,l} W_{i,l+1} - R_{i,l+1} W_{i,l} < 0 \tag{24} +$$ + +since all $W > 0$. Lemma B.4, together with the prerequisite that $R_{i,j} < 1$ for all $j$, then suggests that swapping the two leads to a decrease of $E_{i}$. This contradicts the assumption that $\tau$ is an optimal ranking. This proves that to achieve minimum individual assessment time, it is necessary to sort by $\frac{R_{i,l}}{W_{i,l}}$. 
The sufficient condition follows from the fact that swapping tied documents does not change $E_{i}$, as shown in Equation 21. + +Proof for Theorem 3.3. Let $\tau$ be a document ranking according to $\frac{R_{i,l}}{W_{i,l}}$. Let $m$ be the document such that $R_{i,m} = 1$ that is ranked closer to the top than any other document with $R_{i,:} = 1$ (i.e., the one with the shortest $W_{i,:}$). Now use $m$ to cut the documents into two partitions: the first partition contains the documents ranked before $m$. By Lemma B.5, this partition is already in optimal sorted order, since it contains no document with $R_{i,:} = 1$. For the second partition, the documents ranked lower than $m$, the ranking simply does not matter: as Equation 20 shows, the $(1 - R_{i,m})$ term zeroes out every later contribution. + +Now consider moving a document from the second partition to the first. Since any document in the second partition has a $\frac{R_{i,j}}{W_{i,j}}$ smaller than that of any document in the first partition, the optimal ranking for the first partition will place the moved document at the bottom, right next to $m$. And since $\frac{R_{i,m}}{W_{i,m}} \geq \frac{R_{i,j}}{W_{i,j}}$ by the original ordering, we can apply Lemma B.4 to swap the document back below $m$. Next, consider moving the lowest-ranked document of the first partition (the one ranked at $m - 1$) to the second partition. This always increases $E_{i}$, as shown by Lemma B.4. Moving any other document of the first partition increases $E_{i}$ at least as much, since the process is equivalent to swapping it past (and thus potentially increasing $E_{i}$ at) each intermediate document in between. + +Combining these two cases, we have shown that $E_{i}$ attains its minimum value when the documents are sorted in descending order of $\frac{R_{i,l}}{W_{i,l}}$. 
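Equation 20 and the sorting rule of Lemma B.5 can be checked with a small brute-force experiment; a minimal sketch in pure Python, using hypothetical per-document word counts $W$ and stopping probabilities $R$ (all $R < 1$, as the lemma requires):

```python
import itertools

def expected_assessment_time(W, R, t_alpha=1.0, t_beta=0.0):
    """E_i = T_alpha * sum_l W[l] * prod_{m<l} (1 - R[m]) + T_beta  (Eq. 20)."""
    total, keep_reading = 0.0, 1.0
    for w, r in zip(W, R):
        total += w * keep_reading   # expected words read at this document
        keep_reading *= 1.0 - r     # probability of continuing past it
    return t_alpha * total + t_beta

# Hypothetical word counts and stopping probabilities for three documents.
W, R = [100.0, 50.0, 200.0], [0.2, 0.6, 0.1]

# Lemma B.5: descending R/W order minimizes E_i; confirm by brute force.
by_ratio = sorted(range(len(W)), key=lambda l: R[l] / W[l], reverse=True)
best = min(itertools.permutations(range(len(W))),
           key=lambda p: expected_assessment_time([W[l] for l in p],
                                                  [R[l] for l in p]))
assert list(best) == by_ratio
```

The specific numbers are illustrative only; the brute-force check enumerates all orderings and confirms the closed-form rule.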
+ +# B.3 Relationship between ERR and hTBG + +Here we show the derivation from the cascading user model in ERR to the individual assessment time estimation $(E_{i})$ in hTBG. ERR assumes a stopping probability (written in hTBG terms): + +$$ +P(\text{stop at } l) = R_{i,l} \prod_{j=1}^{l-1} \left(1 - R_{i,j}\right) \tag{25} +$$ + +The expected number of words read can then be calculated as: + +$$ +\sum_{l=1}^{L} \left( P(\text{stop at } l) \sum_{d=1}^{l} W_{i,d} \right) = \sum_{l=1}^{L} \left( R_{i,l} \prod_{j=1}^{l-1} \left(1 - R_{i,j}\right) \left( \sum_{d=1}^{l} W_{i,d} \right) \right) \tag{26} +$$ + +This can be rearranged to the formula we used in hTBG: + +$$ +\sum_{l=1}^{L} \left( W_{i,l} \prod_{m=1}^{l-1} \left(1 - R_{i,m}\right) \right) \tag{27} +$$ + +by letting $R_{i,L} = 1$ (the user has to stop reading at the last document). To see this, observe that $W_{i,1}$ appears in all $L$ terms of the summation, so the coefficient of $W_{i,1}$ is simply $\sum_{l=1}^{L} \left( R_{i,l} \prod_{j=1}^{l-1} (1 - R_{i,j}) \right) = 1$, which follows from simple manipulation and the fact that we are summing over a probability distribution. Similarly, $W_{i,2}$ appears in all terms except the one with $l = 1$, so its coefficient is $(1 - R_{i,1})$. For $W_{i,3}$ it is $(1 - R_{i,1}) - R_{i,2}(1 - R_{i,1}) = \prod_{j=1}^{2}(1 - R_{i,j})$. The rest follows. + +# C Appendix: Training Details + +All models are built using AllenNLP (Gardner et al., 2018). Tokenization and sentence splitting are done using spaCy (Honnibal and Johnson, 2015). + +The CROWDSOURCE dataset is split into a training set $(80\%)$ and a validation set $(20\%)$ during model development. We did not test on the Expert dataset until all parameters of the models were fixed. Cross validation on the training set is used for hyperparameter tuning. 
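The equivalence between Equations 26 and 27 in Appendix B.3 can also be verified numerically; a minimal sketch with hypothetical word counts and stopping probabilities, forcing $R_{i,L} = 1$ as the derivation requires:

```python
def expected_words_cascade(W, R):
    """Eq. 26: sum over stopping ranks l of P(stop at l) * words read up to l."""
    total, keep_going = 0.0, 1.0
    for l in range(len(W)):
        p_stop = R[l] * keep_going       # P(stop at l), Eq. 25
        total += p_stop * sum(W[: l + 1])
        keep_going *= 1.0 - R[l]
    return total

def expected_words_htbg(W, R):
    """Eq. 27: sum_l W[l] * prod_{m<l} (1 - R[m])."""
    total, keep_going = 0.0, 1.0
    for w, r in zip(W, R):
        total += w * keep_going
        keep_going *= 1.0 - r
    return total

W = [100.0, 50.0, 200.0]
R = [0.2, 0.6, 1.0]  # last stopping probability forced to 1
assert abs(expected_words_cascade(W, R) - expected_words_htbg(W, R)) < 1e-9
```

The example values are illustrative; any $W$, $R$ with $R_{L} = 1$ make the two computations agree.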
For 3HAN, we used ADAM with learning rate 0.003, trained for 100 epochs with early stopping on the validation dataset, with patience set to 30. For 3HAN_Av, the same hyperparameters are used. For LR, we used SGD with learning rate 0.003, trained for 100 epochs with early stopping on the validation dataset, with patience set to 30. + +Both 3HAN and 3HAN_Av's Seq2Vec layers use bi-directional GRU with attention. The word-to-sentence layer has input dimension of 200, hidden dimension of 50, and output dimension of 100, since the GRU is bi-directional. The sentence-to-document and document-to-individual layer, similarly, has input dimension of 100, hidden dimension of 50, and output dimension of 100. Hyperparameters were selected using cross validation on the training set split of the CROWDSOURCE dataset. \ No newline at end of file diff --git a/aprioritizationmodelforsuicidalityriskassessment/images.zip b/aprioritizationmodelforsuicidalityriskassessment/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b50bbbaba9a655032dd6a841a3130a432fe0532e --- /dev/null +++ b/aprioritizationmodelforsuicidalityriskassessment/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf76d50e828e12f41604e7cdabbbfb273be754b627e872df14dafa2385bf1cdd +size 405936 diff --git a/aprioritizationmodelforsuicidalityriskassessment/layout.json b/aprioritizationmodelforsuicidalityriskassessment/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..45f2201e17fe239cc0039ff9eef374909d5620d9 --- /dev/null +++ b/aprioritizationmodelforsuicidalityriskassessment/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d9d2ee4944184f4da44dc90b58623011bd94a13a21758b667e2952021007b7b6 +size 550060 diff --git a/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_content_list.json 
b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..cef21bb5d7c250b616fb252a0758781496911358 --- /dev/null +++ b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83e13967cf1f0b9a24798e14f628353b574f13b520adf29139ccd11d3753a270 +size 53117 diff --git a/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_model.json b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_model.json new file mode 100644 index 0000000000000000000000000000000000000000..046534f84ef03393951771d244c9f4efe9a2cc79 --- /dev/null +++ b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:750b7608328d76d0bc10756581a85de393bbb68d46caa0bd27a8c7e78f32353c +size 62246 diff --git a/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_origin.pdf b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2a27b468c73e8f6cafb7e9205417f81d504e1366 --- /dev/null +++ b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/d14a3751-56eb-4369-bc80-47acc26ceac9_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ea3cbbb87051dc942e78081069c9a69590594dd8891888583af59f9c38aad997 +size 2877837 diff --git a/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/full.md 
b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/full.md new file mode 100644 index 0000000000000000000000000000000000000000..87268369d6ec4d868d25cfd48bc751cbcd892413 --- /dev/null +++ b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/full.md @@ -0,0 +1,241 @@ +# A Probabilistic Generative Model for Typographical Analysis of Early Modern Printing + +Kartik Goyal$^{1}$ Chris Dyer$^{2}$ Christopher Warren$^{3}$ Max G'Sell$^{4}$ Taylor Berg-Kirkpatrick$^{5}$ + +$^{1}$ Language Technologies Institute, Carnegie Mellon University $^{2}$ DeepMind + +$^{3}$ Department of English, Carnegie Mellon University + +$^{4}$ Department of Statistics, Carnegie Mellon University + +$^{5}$ Computer Science and Engineering, University of California, San Diego + +{kartikgo,cnwarren,mgsell}@andrew.cmu.edu + +cdyer@google.com tberg@eng.ucsd.edu + +# Abstract + +We propose a deep and interpretable probabilistic generative model to analyze glyph shapes in printed Early Modern documents. We focus on clustering extracted glyph images into underlying templates in the presence of multiple confounding sources of variance. Our approach introduces a neural editor model that first generates well-understood printing phenomena like spatial perturbations from template parameters via interpretable latent variables, and then modifies the result by generating a non-interpretable latent vector responsible for inking variations, jitter, noise from the archiving process, and other unforeseen phenomena associated with Early Modern printing. Critically, by introducing an inference network whose input is restricted to the visual residual between the observation and the interpretably-modified template, we are able to control and isolate what the vector-valued latent variable captures. 
We show that our approach outperforms rigid interpretable clustering baselines (Ocular) and overly-flexible deep generative models (VAE) alike on the task of completely unsupervised discovery of typefaces in mixed-font documents. + +# 1 Introduction + +Scholars interested in understanding details related to production and provenance of historical documents rely on methods of analysis ranging from the study of orthographic differences and stylometrics, to visual analysis of layout, font, and printed characters. Recently developed tools like Ocular (Berg-Kirkpatrick et al., 2013) for OCR of historical documents have helped automate and scale some textual analysis methods for tasks like compositor attribution (Ryskina et al., 2017) and digitization of historical documents (Garrette et al., 2015). However, researchers often find the need to go beyond + +![](images/48d1c2f59c5cd96e7d54ba97b171ed207f14f46a0788357f53fc410b4726d0d1.jpg) +Figure 1: We desire a generative model that can be biased to cluster according to typeface characteristics (e.g. the length of the middle arm) rather than other more visually salient sources of variation like inking. + +textual analysis for establishing provenance of historical documents. For example, Hinman (1963)'s study of typesetting in Shakespeare's First Folio relied on the discovery of pieces of damaged or distinctive type through manual inspection of every glyph in the document. More recently, Warren et al. (2020) examine pieces of distinctive types across several printers of the early modern period to posit the identity of clandestine printers of John Milton's Areopagitica (1644). In such work, researchers frequently aim to determine whether a book was produced by a single or multiple printers (Weiss (1992); Malcolm (2014); Takano (2016)). 
Hence, in order to aid these visual methods of analysis, we propose here a novel probabilistic generative model for analyzing extracted images of individual printed characters in historical documents. We draw from work on both deep generative modeling and interpretable models of the printing press to develop an approach that is both flexible and controllable – the latter being a critical requirement for such analysis tools. + +As depicted in Figure 1, we are interested in identifying clusters of subtly distinctive glyph shapes as these correspond to distinct metal stamps in the type-cases used by printers. However, other + +sources of variation (inking, for example, as depicted in Figure 1) are likely to dominate conventional clustering methods. For example, powerful models like the variational autoencoder (VAE) (Kingma and Welling, 2014) capture the more visually salient variance in inking rather than typeface, while more rigid models (e.g. the emission model of Ocular (Berg-Kirkpatrick et al., 2013)) fail to fit the data. The goal of our approach is to account for these confounding sources of variance, while isolating the variables pertinent to clustering. + +Hence, we propose a generative clustering model that introduces a neural editing process to add expressivity, but includes interpretable latent variables that model well-understood variance in the printing process: bi-axial translation, shear, and rotation of canonical type shapes. In order to make our model controllable and prevent deep latent variables from explaining all variance in the data, we introduce a restricted inference network. By only allowing the inference network to observe the visual residual of the observation after interpretable modifications have been applied, we bias the posterior approximation on the neural editor (and thus the model itself) to capture residual sources of variance in the editor – for example, inking levels, ink bleeds, and imaging noise. 
This approach is related to recently introduced neural editor models for text generation (Guu et al., 2018). + +In experiments, we compare our model with rigid interpretable models (Ocular) and powerful generative models (VAE) at the task of unsupervised clustering of subtly distinct typefaces in scanned images of early modern documents sourced from Early English Books Online (EEBO). + +# 2 Model + +Our model reasons about the printed appearances of a symbol (say majuscule F) in a document via a mixture model whose $K$ components correspond to different metal stamps used by a printer for the document. During various stages of printing, random transformations result in varying printed manifestations of a metal cast on the paper. Figure 2 depicts our model. We denote an observed image of the extracted character by $X$. We denote the choice of typeface by latent variable $c$ (the mixture component) with prior $\pi$. We represent the shape of the $k$-th stamp by template $T_{k}$, a square matrix of parameters. We denote the interpretable latent variables corresponding to spatial adjustment of + +![](images/6b094b71eef7ca505d4d9c9a10bbd43eee31327526937e02bfe77ce330730359.jpg) +Figure 2: Proposed generative model for clustering images of a symbol by typeface. Each mixture component $c$ corresponds to a learnable template $T_{k}$. The $\lambda$ variables warp (spatially adjust) the original template $T$ to $\tilde{T}$. This warped template is then further transformed via the $z$ variables to $\hat{T}$ via an expressive neural filter function parametrized by $\theta$. + +the metal stamp by $\lambda$, and the editor latent variable responsible for residual sources of variation by $z$. As illustrated in Fig. 2, after a cluster component $c = k$ is selected, the corresponding template $T_{k}$ undergoes a transformation to yield $\hat{T}_k$. 
This transformation occurs in two stages: first, the interpretable spatial adjustment variables $(\lambda)$ produce an adjusted template ($\S 2.1$), $\tilde{T}_k = \mathrm{warp}(T_k,\lambda)$, and then the neural latent variable transforms the adjusted template ($\S 2.2$), $\hat{T}_k = \mathrm{filter}(\tilde{T}_k,z)$. The marginal probability under our model is + +$$ +p(X) = \sum_{k} \pi_{k} \int p(X \mid \lambda, z; T_{k})\, p(\lambda)\, p(z)\, dz\, d\lambda, +$$ + +where $p(X \mid \lambda, z; T_k)$ refers to the distribution over the binary pixels of $X$ in which each pixel has a Bernoulli distribution parametrized by the value of the corresponding pixel-entry in $\hat{T}_k$. + +# 2.1 Interpretable spatial adjustment + +Early typesetting was noisy, and the metal pieces were often arranged with slight variations which resulted in the printed characters being positioned with small amounts of offset, rotation and shear. These real-valued spatial adjustment variables are denoted by $\lambda = (r,o,s,a)$, where $r$ represents the rotation variable, $o = (o_h,o_v)$ represents offsets along the horizontal and vertical axes, $s = (s_h,s_v)$ 
This approach allows for strong inductive bias which contrasts with related work on spatial-VAE (Bepler et al., 2019) that learns arbitrary transformations. + +![](images/545240168ef8e889cc6b22d811e9e846b28e81e2684e7a2c7a634c541b29998c.jpg) +Figure 3: Translation operation: The mode of the attention map is shifted by the offset values for every output pixel in $\tilde{T}$ . Similar operations account for shear, rotation, and scale. + +# 2.2 Residual sources of variations + +Apart from spatial perturbations, other major sources of deviation in early printing include random inking perturbations caused by inconsistent application of the stamps, unpredictable ink bleeds, and noise associated with digital archiving of the documents. Unlike in the case of spatial perturbations which could be handled by deterministic affine transformation operators, it is not possible to analytically define a transformation operator due to these variables. Hence we propose to introduce a non-interpretable real-valued latent vector $z$ , with a Gaussian prior $\mathcal{N}(\mathbf{0},\mathbf{I})$ , that transforms $\tilde{T}$ into a final template $\hat{T}$ via neurally-parametrized function filter $(\tilde{T},z;\theta)$ with neural network parameters $\theta$ . This function is a convolution over $\tilde{T}$ whose kernel is parametrized by $z$ , followed by non-linear operations. Intuitively, parametrizing the filter by $z$ results in the latent variable accounting for variations like ink ing appropriately because convolution filters capture local variations in appearance. Srivatsan et al. (2019) also observed the effectiveness of using $z$ to define a deconvolutional kernel for + +![](images/66de3962bfa93ce47ed8a6d537dfa97febec6f834c15877868871d3b94dba673.jpg) +Figure 4: Inference network for $z$ conditions on the mixture component and only the residual image left after subtracting the $\lambda$ -transformed template from the image. 
This encourages $z$ to model variance due to sources other than spatial adjustments. + +font generation. + +# 2.3 Learning and Inference + +Our aim is to maximize the log likelihood of the observed data $\{X_{d} \mid d \in \mathbb{N}, d < n\}$ of $n$ images with respect to the model parameters: + +$$ +\mathrm{LL}(T_{1,\dots,K}, \theta) = \max_{T, \theta} \sum_{d} \log \Big[ \sum_{k} \pi_{k} \int p(X_{d} \mid \lambda_{d}, z_{d}; T_{k}, \theta)\, p(\lambda_{d})\, p(z_{d})\, dz_{d}\, d\lambda_{d} \Big] +$$ + +During training, we maximize the likelihood with respect to $\lambda$ instead of marginalizing, which is an approximation inspired by iterated conditional modes (Besag, 1986): + +$$ +\max_{T, \theta} \sum_{d} \log \sum_{k} \max_{\gamma_{k,d}} \pi_{k} \int p(X_{d} \mid \lambda_{d} = \gamma_{k,d}, z_{d}; T_{k}, \theta)\, p(\lambda_{d} = \gamma_{k,d})\, p(z_{d})\, dz_{d} +$$ + +However, marginalizing over $z$ remains intractable. Therefore we perform amortized variational inference to define and maximize a lower bound on the above objective (Kingma and Welling, 2014). We use a convolutional inference neural network parametrized by $\phi$ (Fig. 4) that takes as input the mixture component $k$ and the residual image $R_{k} = X - \tilde{T}_{k}$, and produces mean and variance parameters for an isotropic Gaussian proposal distribution $q(z \mid R_{k}, k; \phi)$. This results in the final training objective: + +$$ +\max_{T, \theta, \phi} \sum_{d} \log \sum_{k} \mathrm{E}_{q(z_{d} \mid R_{d,k}, k; \phi)} \Big[ \max_{\gamma_{k,d}} \big( \pi_{k}\, p(X_{d} \mid \lambda = \gamma_{k,d}, z_{d}; T_{k}, \theta)\, p(\lambda = \gamma_{k,d}) \big) \Big] - \mathrm{KL}\big( q(z_{d} \mid R_{d,k}, k; \phi) \,\|\, p(z) \big) +$$ + +We use stochastic gradient ascent to maximize this objective with respect to $T$, $\gamma$, $\theta$ and $\phi$. + +# 3 Experiments + +We train our models on printed occurrences of 10 different uppercase character classes that scholars have found useful for bibliographic analysis (Warren et al., 2020) because of their distinctiveness. As a preprocessing step, we ran Ocular (Berg-Kirkpatrick et al., 2013) on the grayscale scanned images of historical books in the EEBO dataset and extracted the estimated image segments for the letters of interest. + +# 3.1 Quantitative analysis + +We show that our model is superior to strong baselines at clustering subtly distinct typefaces (using realistic synthetic data), as well as in terms of fitting the real data from historical books. + +# 3.1.1 Baselines for comparison + +Ocular: Based on the emission model of Ocular, which uses discrete latent variables for the vertical/horizontal offset and inking variables, and hence has limited expressivity. + +$\lambda$-only: This model has only the interpretable continuous latent variables pertaining to spatial adjustment. + +VAE-only: This model is expressive but does not have any interpretable latent variables for explicit control. It is an extension of Kingma et al. (2014)'s model for semi-supervised learning with a continuous latent variable vector, in which we obtain tighter bounds by marginalizing over the cluster identities explicitly. For fair comparison, the encoder and decoder convolutional architectures are the same as the ones in our full model. 
The corresponding training objective for this baseline is:

$$
\begin{array}{l} \max_{T, \theta, \phi} \sum_{d} \log \sum_{k} \operatorname{E}_{q(z_{d} \mid X_{d}, k; \phi)}\left[\pi_{k}\, p(X_{d} \mid z_{d}; T_{k}, \theta)\right] \\ - \mathrm{KL}\left(q\left(z_{d} \mid X_{d}, k; \phi\right) \,\|\, p(z)\right) \\ \end{array}
$$

**No-residual:** The only difference from the full model is that the encoder of the inference network conditions the variational distribution $q(z)$ on the entire input image $X$ instead of just the residual image $X - \tilde{T}$.

# 3.1.2 Font discovery in Synthetic Data

Early modern books were frequently composed from two or more type cases, resulting in documents with mixed fonts. We aim to learn the different shapes of the metal stamps that were used as templates for each cluster component in our model.

| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| Ocular | 0.42 | 0.45 | 0.61 | 379.21 |
| λ-only | 0.49 | 0.51 | 0.70 | 322.04 |
| VAE-only | 0.22 | 0.29 | 0.38 | 263.45 |
| No-residual | 0.54 | 0.58 | 0.73 | 264.27 |
| Our Model | 0.73 | 0.74 | 0.85 | 257.92 |

Table 1: (a) Clustering results on synthetic data (V-measure, Mutual Info, F&M). (b) Test negative log likelihood (NLL) on real data from historical documents, or negative ELBO bound for intractable models.

**Data:** In order to quantitatively evaluate our model's performance, we experiment with a synthetically generated, realistic dataset for which we know the ground-truth cluster identities, constructed in the following manner: for each character of interest, we pick three distinct images from scanned, segmented EEBO images, corresponding to three different metal casts. We then randomly add spatial perturbations related to scale, offset, rotation, and shear. To incorporate varying inking levels and other distortions, we randomly perform erosion, dilation, or a combination of these warpings using OpenCV (Bradski, 2000) with randomly selected kernel sizes. Finally, we add small Gaussian noise to the pixel intensities and generate 300 perturbed examples per character class.

**Results:** We report macro-averaged results across all the character classes on three different clustering measures: V-measure (Rosenberg and Hirschberg, 2007), Mutual Information, and the Fowlkes–Mallows index (Fowlkes and Mallows, 1983). In Table 1, we see that our model significantly outperforms all other baselines on every metric. The Ocular and $\lambda$-only models fail because they lack the expressiveness to explain the variations due to random jitter, erosions, and dilations. The VAE-only model, while very expressive, performs poorly because it lacks the inductive bias needed for successful clustering. The No-residual model performs decently, but our model's superior performance emphasizes the importance of designing a restrictive inference network such that $z$ focuses only on extraneous sources of variation.
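As an illustration of how such clustering measures are computed from predicted vs. ground-truth labels, here is a minimal pure-Python sketch of the Fowlkes–Mallows index (our own illustrative implementation, not code from the paper): it is the geometric mean of pairwise precision and recall over all pairs of examples.

```python
from itertools import combinations
from math import sqrt

def fowlkes_mallows(true_labels, pred_labels):
    """Fowlkes-Mallows index: geometric mean of pairwise precision and recall."""
    tp = fp = fn = 0
    for (t1, p1), (t2, p2) in combinations(zip(true_labels, pred_labels), 2):
        if t1 == t2 and p1 == p2:
            tp += 1   # pair grouped together in both clusterings
        elif p1 == p2:
            fp += 1   # clustered together, but truly apart
        elif t1 == t2:
            fn += 1   # truly together, but clustered apart
    return tp / sqrt((tp + fp) * (tp + fn)) if tp else 0.0

# A clustering that matches the ground truth up to label renaming scores 1.0:
print(fowlkes_mallows([0, 0, 1, 1], [1, 1, 0, 0]))  # -> 1.0
```

V-measure and mutual information can be computed analogously from the label contingency table, and all three metrics are also available in standard machine-learning libraries.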
# 3.1.3 Fitting Real Data from Historical Books

For the analysis of real books, we selected three books from the EEBO dataset printed by different printers. We modeled each character class for each book separately and report the macro-aggregated upper bounds on the negative log likelihood (NLL) in Table 1. We observe that adding a small amount of expressiveness makes our $\lambda$-only model better than Ocular. The upper bounds of the other inference-network-based models are much better than the tight bounds of both interpretable models. Our model has the lowest upper bound of all the models while retaining interpretability and control.

# 3.2 Qualitative analysis

We provide visual evidence of the desirable behavior of our model on collections of character extractions from historical books with mixed fonts. Specifically, we discuss the performance of our model on the mysterious edition of Thomas Hobbes' Leviathan known as "the 25 Ornaments" edition (Hobbes, 1651 [really 1700?]). The 25 Ornaments Leviathan is an interesting test case for several reasons. While its title page indicates a publisher and year of publication, both are fabricated (Malcolm, 2014). The identities of its printer(s) remain speculative, and the actual year of publication is uncertain. Further, the 25 Ornaments exhibits two distinct fonts.

# 3.2.1 Quality of learned templates

![](images/52896efb2c5c06cc15ea792b020d6bb51d1a51b4afc6de30406f2c990cb184bb.jpg)
Figure 5: The learned templates for $\mathbf{F}$ and $\mathbf{R}$ and the transformed templates $\hat{T}$ for four examples of $\mathbf{F}$ are shown. Our model is able to learn desirable templates based on underlying glyph structure.

Our model is successful in discovering distinctly shaped typefaces in the 25 Ornaments Leviathan. We focus on the case study of the majuscule letters $\mathbf{F}$ and $\mathbf{R}$, each of which has two different typefaces mixed in throughout.
The two typefaces for $\mathbf{F}$ differ in the length of the middle arm (Fig. 1), and the two typefaces for $\mathbf{R}$ have differently shaped legs. In Fig. 5, we show that our model successfully learns the two desired templates $T_{1}$ and $T_{2}$ for both characters, which indicates that the clusters in our model mainly focus on subtle differences in the underlying glyph shapes. We also illustrate how the latent variables transform the model templates $T$ to $\hat{T}$ for four example $\mathbf{F}$ images. The model learns complex functions to transform the templates that go beyond simple affine and morphological transformations, in order to account for inking differences, random jitter, contrast variations, etc.

# 3.2.2 Interpretable variables $(\lambda)$ and Control

![](images/fb423e4b9f64aa36e1f318a5c440f29785ec7152f44cf4199ffdedbfff02b7a9.jpg)
Figure 6: Result of alignment on Leviathan extractions using the interpretable $\lambda$ variables, along with their pixelwise average images (panels: unaligned raw images vs. aligned images). The aligned average image is much sharper than the unaligned average image.

Finally, we visualize the ability of our model to appropriately separate responsibility for modelling variation among the interpretable and non-interpretable variables. We use the inferred values of the interpretable $(\lambda)$ variable for each image in the dataset to adjust the corresponding image. Since the templates represent the canonical shape of the letters, the $\lambda$ variables which shift the templates to explain the images can be reverse-applied to the input images themselves in order to align them, accounting for offset, rotation, shear, and minor size variations. In Fig. 6, we see that the input images (top row) are uneven and vary in size and orientation.
By reverse-applying the inferred $\lambda$ values, we are able to project the images to a fixed size such that they are aligned and any remaining variations in the data are caused by other sources of variation. Moreover, this alignment method could be crucial for automating certain aspects of bibliographic studies that focus on comparing specific imprints.

# 4 Conclusion

Beyond applications to typeface clustering, the general approach we take might apply more broadly to other clustering problems, and the model we developed might be incorporated into OCR models for historical text.

# 5 Acknowledgements

This project is funded in part by the NSF under grants 1618044 and 1936155, and by the NEH under grant HAA256044-17.

# References

Tristan Bepler, Ellen Zhong, Kotaro Kelley, Edward Brignole, and Bonnie Berger. 2019. Explicitly disentangling image content from translation and rotation with spatial-VAE. In Advances in Neural Information Processing Systems, pages 15409-15419.

Taylor Berg-Kirkpatrick, Greg Durrett, and Dan Klein. 2013. Unsupervised transcription of historical documents. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 207-217, Sofia, Bulgaria. Association for Computational Linguistics.

Julian Besag. 1986. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society: Series B (Methodological), 48(3):259-279.

G. Bradski. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.

Edward B. Fowlkes and Colin L. Mallows. 1983. A method for comparing two hierarchical clusterings. Journal of the American Statistical Association, 78(383):553-569.

Dan Garrette, Hannah Alpert-Abrams, Taylor Berg-Kirkpatrick, and Dan Klein. 2015. Unsupervised code-switching for multilingual historical document transcription. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1036-1041, Denver, Colorado. Association for Computational Linguistics.

Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437-450.

Charlton Hinman. 1963. The printing and proof-reading of the first folio of Shakespeare, volume 1. Oxford: Clarendon Press.

Thomas Hobbes. 1651 [really 1700?]. Leviathan, or, the matter, form, and power of a common-wealth ecclesiastical and civil. By Thomas Hobbes of Malmesbury. Number R13935 in ESTC. [false imprint] printed for Andrew Crooke, at the Green Dragon in St. Pauls Church-yard, London.

Diederik P. Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings.

Durk P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. 2014. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581-3589.

Noel Malcolm. 2014. Editorial Introduction. In Leviathan, volume 1. Clarendon Press, Oxford.

Andrew Rosenberg and Julia Hirschberg. 2007. V-measure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410-420.

Maria Ryskina, Hannah Alpert-Abrams, Dan Garrette, and Taylor Berg-Kirkpatrick. 2017. Automatic compositor attribution in the First Folio of Shakespeare. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 411-416, Vancouver, Canada. Association for Computational Linguistics.
Nikita Srivatsan, Jonathan Barron, Dan Klein, and Taylor Berg-Kirkpatrick. 2019. A deep factorization of style and structure in fonts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2195-2205, Hong Kong, China. Association for Computational Linguistics.

Akira Takano. 2016. Thomas Warren: A Printer of Leviathan (head edition). Annals of Nagoya University Library Studies, 13:1-17.

Christopher N. Warren, Pierce Williams, Shruti Rijhwani, and Max G'Sell. 2020. Damaged type and Areopagitica's clandestine printers. Milton Studies, 62.1.

Adrian Weiss. 1992. Shared Printing, Printer's Copy, and the Text(s) of Gascoigne's "A Hundredth Sundrie Flowres". Studies in Bibliography, 45:71-104.

# A Character-wise quantitative analysis

The quantitative experiments were performed on the following character classes: A, B, E, F, G, H, M, N, R, W.
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.77 | 0.82 | 0.89 | 264.90 |
| VAE-only | 0.33 | 0.38 | 0.52 | 230.45 |
| No-residual | 0.79 | 0.85 | 0.90 | 231.45 |
| Our Model | 0.78 | 0.86 | 0.89 | 226.25 |

Table 2: Results for character A
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.37 | 0.39 | 0.59 | 261.1 |
| VAE-only | 0.15 | 0.2 | 0.32 | 229.1 |
| No-residual | 0.37 | 0.39 | 0.58 | 228.1 |
| Our Model | 0.68 | 0.73 | 0.81 | 226.25 |

Table 3: Results for character B
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.33 | 0.36 | 0.55 | 282.4 |
| VAE-only | 0.17 | 0.19 | 0.30 | 253.2 |
| No-residual | 0.33 | 0.35 | 0.56 | 251.45 |
| Our Model | 0.65 | 0.70 | 0.76 | 234.05 |

Table 4: Results for character E
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.09 | 0.10 | 0.55 | 258.40 |
| VAE-only | 0.03 | 0.05 | 0.31 | 218.2 |
| No-residual | 0.12 | 0.09 | 0.59 | 208.1 |
| Our Model | 0.81 | 0.56 | 0.94 | 204.48 |

Table 5: Results for character F
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.60 | 0.62 | 0.73 | 268.40 |
| VAE-only | 0.28 | 0.38 | 0.40 | 250.8 |
| No-residual | 0.64 | 0.66 | 0.77 | 244.5 |
| Our Model | 0.60 | 0.62 | 0.73 | 240.84 |

Table 6: Results for character G
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.72 | 0.71 | 0.79 | 313.75 |
| VAE-only | 0.32 | 0.32 | 0.40 | 254.2 |
| No-residual | 0.90 | 0.97 | 0.94 | 258.8 |
| Our Model | 0.92 | 1.01 | 0.96 | 249.81 |

Table 7: Results for character H
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.62 | 0.64 | 0.78 | 392.06 |
| VAE-only | 0.29 | 0.38 | 0.40 | 323.5 |
| No-residual | 0.70 | 0.83 | 0.74 | 329.25 |
| Our Model | 0.75 | 0.84 | 0.87 | 323.04 |

Table 8: Results for character M
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.65 | 0.70 | 0.73 | 331.6 |
| VAE-only | 0.30 | 0.45 | 0.40 | 265.2 |
| No-residual | 0.74 | 0.81 | 0.82 | 270.11 |
| Our Model | 0.69 | 0.75 | 0.75 | 264.23 |

Table 9: Results for character N
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.07 | 0.08 | 0.55 | 330.6 |
| VAE-only | 0.03 | 0.04 | 0.34 | 247.1 |
| No-residual | 0.06 | 0.07 | 0.53 | 251.32 |
| Our Model | 0.46 | 0.32 | 0.78 | 246.02 |

Table 10: Results for character R
| Method | V-measure | Mutual Info | F&M | NLL |
| --- | --- | --- | --- | --- |
| λ-only | 0.65 | 0.71 | 0.79 | 418.01 |
| VAE-only | 0.31 | 0.45 | 0.42 | 364.2 |
| No-residual | 0.72 | 0.78 | 0.82 | 369.5 |
| Our Model | 0.72 | 0.79 | 0.84 | 364.21 |
+ +Table 11: Results for character W \ No newline at end of file diff --git a/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/images.zip b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..6fc4bfdc6246e11f269490e44ae03fbe1c0527a9 --- /dev/null +++ b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fe510aff25aa3f23634a58f83b89d30e68292f0ed5111d73f30fd9fb730a3e0a +size 370115 diff --git a/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/layout.json b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..716d03aa1109c2726d309cd7c81ff320595d5555 --- /dev/null +++ b/aprobabilisticgenerativemodelfortypographicalanalysisofearlymodernprinting/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f0a29e7b5ece74413107d747d1d82e925dac4e4aa730dc43162c064a60cf1ca1 +size 286276 diff --git a/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_content_list.json b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..89bbf51678cf778a878462f66ba7b0a921c65a99 --- /dev/null +++ b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbda779a83884510277b5d9a4a34d78fcd64a7e9bb01480b01a9ec683e91b284 +size 89291 diff --git a/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_model.json 
b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_model.json new file mode 100644 index 0000000000000000000000000000000000000000..424b09c103f0f3eb9c730e40ac294b3cbfa770b5 --- /dev/null +++ b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b30c1eab05e4122db04ac8687b190a8143701e678b553473884e57449efa7784 +size 106556 diff --git a/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_origin.pdf b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7c7d45d8461a8442966ef082fd3bf8ab9cbe10ed --- /dev/null +++ b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/07894a62-3bbc-44f3-9959-8d08ad891e43_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5fbaf9159000bc310a6d5abd6e22452bc1040fb6f8565375288bb1ded3145aef +size 1366150 diff --git a/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/full.md b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/full.md new file mode 100644 index 0000000000000000000000000000000000000000..a51246c3dd3ca59c82fd8db1f1cb2b7cd8f68e05 --- /dev/null +++ b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/full.md @@ -0,0 +1,328 @@ +# A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks + +Angela S. 
Lin\* Sudha Rao\* Asli Celikyilmaz\* Elnaz Nouri\* Chris Brockett\* Debadeepta Dey\* Bill Dolan\*

\*Salesforce Research, Palo Alto, CA, USA

Microsoft Research, Redmond, WA, USA

angela.lin@salesforce.com, {sudhra, aslicel, elnouri}@microsoft.com

{chrisbkt, dedey, billdol}@microsoft.com

# Abstract

Many high-level procedural tasks can be decomposed into sequences of instructions that vary in their order and choice of tools. In the cooking domain, the web offers many partially-overlapping text and video recipes (i.e. procedures) that describe how to make the same dish (i.e. high-level task). Aligning instructions for the same dish across different sources can yield descriptive visual explanations that are far richer semantically than conventional textual instructions, providing commonsense insight into how real-world procedures are structured. Learning to align these different instruction sets is challenging because: a) different recipes vary in their order of instructions and use of ingredients; and b) video instructions can be noisy and tend to contain far more information than text instructions. To address these challenges, we first use an unsupervised alignment algorithm that learns pairwise alignments between instructions of different recipes for the same dish. We then use a graph algorithm to derive a joint alignment between multiple text and multiple video recipes for the same dish. We release the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS containing $\sim 150\mathrm{K}$ pairwise alignments between recipes across 4,262 dishes with rich commonsense information.

# 1 Introduction

Although machine learning has seen tremendous recent success in challenging game environments such as Go (Schrittwieser et al., 2019), DOTA (OpenAI, 2019), and StarCraft (DeepMind, 2019), we have not seen similar progress toward algorithms that might one day help humans perform everyday tasks like assembling furniture, applying makeup, repairing an electrical problem, or cooking a particular dish. In part this is because the relevant large-scale multimodal (language, video, audio) datasets are difficult to acquire, even with extensive crowdsourcing (Salvador et al., 2017; Sanabria et al., 2018). Unimodal data, though, is abundant on the web (e.g. instructional videos or textual instructions for tasks). Using language as the link between these modalities, we present an approach for learning large-scale alignment between multimodal procedural data. We hope our work, and the resulting released dataset, will help spur research on real-world procedural tasks.

![](images/81182b74ff42f81d5b831da651f4e981d023577a2a2a3f359518f410766018a2.jpg)
Figure 1: Text recipe (left) and transcript of video recipe (right) for shrimp fried rice. Aligned instructions are highlighted in the same color. Ingredients that can be substituted are encircled in the same color.

Recipes in the cooking domain provide procedural instruction sets that are captured – in large volume – both in video and text-only forms. Instruction sets in these two modalities overlap sufficiently to allow for an alignment that reveals interestingly different information in the linguistic and visual realms. In Figure 1, for instance, the text recipe (left) and the transcribed video recipe (right) for shrimp fried rice vary in word usage, order of instructions, and use of ingredients. Knowing that the highlighted instructions correspond to the same step is useful in understanding potential ingredient substitutions, how the same step can be linguistically described and physically realized in different ways, and how instruction order can be varied without affecting the outcome.

![](images/eb524ea8a6545188b93ef15fa89465c9d842f9a72433ae38471fee7c61d6d52d.jpg)
Figure 2: Dish-level alignment between three text recipes and two video recipes for fried rice. Same-colored text boxes (in text recipes) and image borders (in video recipes) indicate instructions that are aligned to each other.

Motivated by the idea that aligned procedural data can be a powerful source of practical commonsense knowledge, we describe our approach for constructing the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS. We first extract a large number of text and video recipes from the web. Our goal is to find joint alignments between multiple text recipes and multiple video recipes for the same dish (see Figure 2). The task is challenging, as different recipes vary in their order of instructions and use of ingredients. Moreover, video instructions can be noisy, and text and video instructions include different levels of specificity in their descriptions. Most previous alignment approaches (Munteanu and Marcu, 2005) deal with pairwise alignments. Since our goal is to align multiple instruction sets, we introduce a novel two-stage unsupervised algorithm. In the first stage, we learn pairwise alignments between two text recipes, two video recipes, and between a text and a video recipe using an unsupervised alignment algorithm (§3.1). In the second stage, we use the pairwise alignments between all recipes within a dish to construct a graph for each dish and find a maximum spanning tree of this graph to derive joint alignments across multiple recipes (§3.2).

We train our unsupervised algorithm on 4,262 dishes consisting of multiple text and video recipes per dish.
We release the resulting pairwise and joint alignments between multiple recipes within a dish for all 4,262 dishes, along with commonsense information such as textual and visual paraphrases, and single-step to multi-step breakdowns (§5).

We evaluate our pairwise alignment algorithm on two datasets: 1,625 text-video recipe pairs across 90 dishes from the YouCook2 dataset (Zhou et al., 2018a), and a small set of 200 human-aligned text-text recipe pairs across 5 dishes from Common Crawl. We compare our algorithm to several textual similarity baselines and perform ablations over our trained model (§4). Finally, we discuss how this data release will help with research at the intersection of language, vision, and robotics (§6).

# 2 Recipe Data Collection

We describe our approach for collecting large-scale text and video recipes, and for constructing recipe pairs for training our unsupervised alignment algorithm.

# 2.1 Common Crawl Text Recipes

We extract text recipes from Common Crawl, one of the largest web sources of text. We heuristically filter the extracted recipes to obtain a total of 48,852 recipes across 4,262 dishes. The number of recipes per dish ranges from 3 to 100 (with an average of 6.54 and a standard deviation of 7.22). The average recipe length is 8 instructions.

![](images/f0fdc48a1a79f2692332729e4cdeeb4ab0fea9e96dbc6270bd0b9bcdfa66295f.jpg)
Figure 3: An example transcript of a video recipe with sentences marked as "chat" (non-instructional) or "content" (instructional).

# 2.2 YouTube Video Recipes

For each dish in the text recipes, we use the dish name with 'recipe' appended, e.g. 'chocolate chip cookie recipe', as a query on YouTube and extract the top N videos, where N is proportional to the number of text recipes for that dish, to obtain a total of 77,550 video recipes.
We transcribe these videos using the Microsoft Speech-to-Text Cognitive Service.

Video recipes, unlike text recipes, contain non-instructional ("chat") information. For instance, the presenter may give an introduction, either of themselves or of the dish, at the beginning of the video before diving into the steps of the recipe. Figure 3 contains an example transcript with "chat" and "content" information marked. We hypothesize that it is useful to remove such chat information from the transcripts before aligning them to text recipes. We build a supervised chat/content classifier using the YouCook2 dataset (Zhou et al., 2018a), an existing instructional cooking video dataset in which the parts of each video that correspond to instructions are annotated by humans. We assume that these parts correspond to content, whereas the rest of the video corresponds to chat. We preprocess the transcriptions of all 77,550 videos using this chat/content classifier to remove all sentences classified as chat.
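The paper's chat/content classifier is supervised, trained on the YouCook2 annotations described above; its architecture is not detailed in this section. As an illustrative stand-in only (our own toy example, not the authors' model), a tiny multinomial Naive Bayes classifier over transcript sentences could be sketched as:

```python
from collections import Counter, defaultdict
from math import log

class NaiveBayes:
    """Multinomial Naive Bayes with add-one smoothing; an illustrative
    stand-in for a supervised chat/content sentence classifier."""

    def fit(self, sentences, labels):
        self.word_counts = defaultdict(Counter)   # label -> word frequencies
        self.label_counts = Counter(labels)
        for sent, lab in zip(sentences, labels):
            self.word_counts[lab].update(sent.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, sentence):
        def score(lab):
            total = sum(self.word_counts[lab].values())
            s = log(self.label_counts[lab] / sum(self.label_counts.values()))
            for w in sentence.lower().split():
                s += log((self.word_counts[lab][w] + 1) / (total + len(self.vocab)))
            return s
        return max(self.label_counts, key=score)

# Hypothetical training sentences labeled chat vs. content:
train = [("hey guys welcome back to my channel", "chat"),
         ("don't forget to subscribe below", "chat"),
         ("add the rice and stir fry for two minutes", "content"),
         ("heat two tablespoons of oil in the pan", "content")]
clf = NaiveBayes().fit([s for s, _ in train], [l for _, l in train])
print(clf.predict("stir the rice in the pan"))  # -> content
```

In practice one would train a much stronger classifier on the full YouCook2 annotations, but the interface (label each transcript sentence, then drop sentences predicted as chat) is the same.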
| | Train | Val | Test |
| --- | --- | --- | --- |
| No. of dishes | 4,065 | 94 | 103 |
| Text-Text Pairs | 46,054 | 5,822 | 11,652 |
| Text-Video Pairs | 56,291 | 3,800 | 5,341 |
| Video-Video Pairs | 19,200 | 274 | 514 |

Table 1: Statistics of our recipe pairs data (§2.3)

# 2.3 Recipe Pairs for Training

Given $N$ text recipes and $M$ video recipes for a dish, we pair each text recipe with every other text recipe to get $O(N^2)$ text-text recipe pairs. Similarly, we pair each text recipe with every video recipe to get $O(N \cdot M)$ text-video recipe pairs, and pair each video recipe with every other video recipe to get $O(M^2)$ video-video recipe pairs. On closer inspection, we find that some of these pairs describe recipes that are very different from one another, making a reasonable alignment almost impossible. For example, one black bean soup recipe might require the use of a slow cooker, while another describes using a stove. We therefore prune these recipe pairs based on the match of ingredients and length, finally yielding a set of 63,528 text-text recipe pairs, 65,432 text-video recipe pairs, and 19,988 video-video recipe pairs. We split this into training, validation, and test sets at the dish level. Table 1 shows the number of dishes and pairs in each split.

# 3 Recipe Alignment Algorithm

We first describe our unsupervised pairwise alignment model, trained to learn alignments between text-text, text-video, and video-video recipe pairs. We then describe our graph algorithm, which derives joint alignments between multiple text and video recipes given the pairwise alignments.

# 3.1 Pairwise Alignments between Recipes

Our alignment algorithm is based on prior work (Naim et al., 2014) that learns to align a sequence of natural language instructions to segments of a video recording of the same wet lab protocol. They first identify the nouns in the text sentences and the blobs (i.e. objects) in video segments. Given the blobs from $M$ video segments $F = [\mathbf{f}^{(1)},\dots,\mathbf{f}^{(M)}]$ and the nouns from $N$ sentences $E = [\mathbf{e}^{(1)},\dots,\mathbf{e}^{(N)}]$, the task is to learn alignments between video segments and text sentences.
They propose a hierarchical generative model which first uses a Hidden Markov Model (HMM) (Rabiner, 1989; Vogel et al., 1996) to generate each video segment $\mathbf{f}^{(m)}$ from one of the text sentences $\mathbf{e}^{(n)}$. They then use IBM Model 1 (Brown et al., 1993) emission probabilities to generate the blobs $\{f_1^{(m)},\dots,f_J^{(m)}\}$ in $\mathbf{f}^{(m)}$ from the nouns $\{e_1^{(n)},\dots,e_I^{(n)}\}$ in $\mathbf{e}^{(n)}$ as follows:

$$
P\left(\mathbf{f}^{(m)} \mid \mathbf{e}^{(n)}\right) = \frac{\epsilon}{I^{J}} \prod_{j=1}^{J} \sum_{i=1}^{I} p\left(f_{j}^{(m)} \mid e_{i}^{(n)}\right) \tag{1}
$$

The hidden state in the HMM corresponds to the alignment between a video segment and a text sentence, and the state transition probabilities correspond to the jump between adjacent alignments. For computational tractability, a video segment can be aligned to only one sentence (though multiple sentences can align to the same video segment).

![](images/03a52be8037a803b21fbc75c21a7d8989cac6a0b4722b4e7f6804f75871b89e2.jpg)
Figure 4: A maximum spanning tree for the fried rice dish with text instructions and transcript segments as nodes, alignments as edges, and alignment probabilities as edge weights. Nodes representing text instructions are labeled "T". Nodes representing transcript segments are labeled "V". Each color indicates a different recipe. The bounding box shows a magnified section of the tree with edge weights and the instruction/transcript associated with each node.

We use this algorithm to learn pairwise alignments between text-text, text-video and video-video recipes. Given two recipes (source and target) of the same dish, we define our alignment task as mapping each text instruction (or video transcript sentence) in the source recipe to one or more text instructions (or video transcript sentences) in the target recipe.
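Given a word-translation table, Equation 1 can be evaluated directly. The following sketch (the table entries and word choices are made-up illustrations, not learned values) computes the IBM Model 1 likelihood of a target-side sentence given a source-side sentence:

```python
from math import prod

def ibm1_likelihood(f_words, e_words, t, epsilon=1.0):
    """Eq. 1: P(f | e) = (epsilon / I^J) * prod_j sum_i t(f_j | e_i),
    where I = len(e_words) and J = len(f_words)."""
    I, J = len(e_words), len(f_words)
    return (epsilon / I ** J) * prod(
        sum(t.get((f, e), 0.0) for e in e_words) for f in f_words
    )

# Toy translation table with illustrative probabilities:
t = {("rice", "rice"): 0.9, ("fried", "fry"): 0.6, ("fried", "rice"): 0.05}
print(ibm1_likelihood(["fried", "rice"], ["fry", "rice"], t))  # ~0.14625
```

The table `t` would normally be estimated with EM over many recipe pairs; here the point is only how the emission probability of Equation 1 composes per-word translation probabilities.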
We make two modifications to the alignment algorithm described above. First, our recipe pairs, unlike the wet lab protocol data, do not follow the same temporal sequence. The alignment algorithm must thus learn to jump within a longer range; we set the window of jump probabilities to $[-2, 2]$. Second, we use transcriptions to learn alignments rather than the objects detected in videos. We hypothesize that the richness of language used in instructional videos may facilitate better alignment with transcripts (as others have observed (Malmaud et al., 2015; Sener et al., 2015)). We use all words (except stop words) in video transcript sentences and all words in text instructions while learning the IBM Model 1 word-level probabilities. An instruction in one recipe can be aligned to multiple instructions in the other recipe.

# 3.2 Joint Alignment among Multiple Recipes

We use the pairwise alignments to derive a joint alignment at the dish level between multiple text and video recipes. For each dish, we construct a graph where each node represents an instruction from a text recipe or a transcript sentence from a video recipe. We use the pairwise alignments to draw edges between nodes, with alignment probabilities as the edge weights. We include only those edges that have an alignment probability greater than 0.5. The pairwise alignments are directed, since they go from the source recipe to the target recipe.

We first convert the directed graph into an undirected graph by averaging the edge weights between two nodes and converting directed edges into undirected edges. Note that the resultant graph can have multiple connected components, as some recipe pairs may not have any instructions aligned with probability greater than the threshold of 0.5.

Our goal is to find a set of jointly-alignable instructions across different recipes. We therefore convert the graph (with cycles) into a forest by running the maximum spanning tree algorithm on the graph.
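The maximum-spanning-forest step can be implemented with Kruskal's algorithm: visit edges in decreasing weight order and keep an edge only when it joins two different components. A minimal sketch (with hypothetical instruction-node names; this is not the authors' code):

```python
def max_spanning_forest(edges):
    """Kruskal's algorithm over (u, v, weight) edges, keeping the
    highest-weight edges that do not close a cycle."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        root = x
        while parent[root] != root:
            root = parent[root]
        parent[x] = root  # path compression
        return root

    forest = []
    for u, v, w in sorted(edges, key=lambda e: e[2], reverse=True):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            forest.append((u, v, w))
    return forest

# "T" nodes are text instructions, "V" nodes are transcript segments;
# weights are (hypothetical) alignment probabilities.
edges = [("T1", "V1", 0.9), ("T1", "T2", 0.8), ("T2", "V1", 0.7), ("V2", "T3", 0.6)]
print(max_spanning_forest(edges))
# -> [('T1', 'V1', 0.9), ('T1', 'T2', 0.8), ('V2', 'T3', 0.6)]
```

Because the thresholded alignment graph may already be disconnected, the same procedure naturally yields a spanning forest with one tree per connected component.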
Figure 4 shows an example tree derived for one of the dishes. A path in this tree that has at most one node from each recipe constitutes a set of jointly-alignable instructions. For example, in the magnified section of the tree in Figure 4, all uniquely colored nodes on the path from the yellow node to the green node constitute a set of jointly-alignable instructions.

# 4 Experimental Results

We describe how we evaluate our pairwise alignment algorithm (from §3.1). We answer the following research questions using our experimentation:

1. How does our alignment model perform when evaluated on human-aligned recipe pairs?
2. Does our unsupervised alignment model outperform simpler non-learning baselines?
3. How does performance differ when we use only nouns, or nouns and verbs, instead of all words to learn alignments?

# 4.1 Human-Aligned Evaluation Set

We evaluate our pairwise alignment algorithm on the following two human-annotated datasets:

**YouCook2 text-video recipe pairs:** The YouCook2 dataset (Zhou et al., 2018a) consists of 1,625 cooking videos paired with human-written descriptions for each video segment. These span 90 different dishes. We transcribe all videos using the Microsoft Speech-to-Text Cognitive Service and separate the transcripts into sentences using a sentence tokenizer. Given a sequence of human-written descriptions and a sequence of transcript sentences, the alignment task is to align each transcript sentence to one of the human-written descriptions. We train our pairwise alignment model on the train split of our text-video recipe pairs (from
CommonCrawl text-text recipe pairs We randomly choose 200 text-text recipe pairs (spanning 5 dishes) from the test split of our data ($\S 2.3$) and collect alignment annotations for them using six human experts. We show annotators a numbered list of the instructions for the target recipe (along with its title and ingredients). We display the instructions for the source recipe with input boxes beside them and ask annotators to write in the number(s) (i.e., labels) of the one or more target instruction(s) with which each source instruction most closely aligns. Each recipe pair is annotated by three annotators. For $65\%$ of the instructions, two or more annotators agree on a label. For only $42\%$ of the instructions do all three annotators agree, suggesting that this annotation task is difficult. We train our pairwise alignment model on the train split of our text-text recipe pairs ($\S 2.3$) and evaluate on the 200 human-aligned pairs.

# 4.2 Baselines

The baselines described below align each instruction in the source recipe to one or more instructions in the target recipe.

Random We align each instruction in the source recipe to a random instruction in the target recipe.

Uniform alignment Given $N$ instructions in the target recipe, we divide the instructions in the source recipe into $N$ equal chunks and align each instruction in the $i^{th}$ chunk of the source recipe to the $i^{th}$ instruction in the target recipe. For instance, given a source recipe $[S1, S2, S3, S4]$ and a target recipe $[T1, T2]$, uniform alignment would align $S1$ and $S2$ to $T1$, and $S3$ and $S4$ to $T2$. More generally, with $M$ source instructions, we align the $i^{th}$ instruction in the source recipe to the instructions in the half-open range $\left[\left(\frac{N}{M} i\right)^{th}, \left(\frac{N}{M} (i + 1)\right)^{th}\right)$ of the target recipe.

BM25 retrieval We use BM25 (Robertson et al., 2009) as our information retrieval baseline. Given
| Methods | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Random | 18.53 | 14.47 | 14.49 |
| Uniform alignment | 63.44 | 50.81 | 53.10 |
| BM25 retrieval | 48.86 | 39.85 | 38.91 |
| **Textual Similarity** | | | |
| Exact word match | 46.75 | 40.70 | 40.06 |
| TF-IDF | 46.82 | 39.23 | 38.55 |
| GloVe | 46.13 | 38.74 | 37.14 |
| BERT | 48.83 | 41.48 | 40.89 |
| RoBERTa | 50.21 | 42.43 | 42.28 |
| **HMM+IBM1** | | | |
| Nouns | 78.63 | 63.83 | 65.29 |
| Nouns+Verbs | 80.56 | 67.90 | 69.00 |
| All words | 81.39 | 69.27 | 70.30 |
a source and a target recipe pair, we construct a corpus using all instructions in the target recipe. We then use each source instruction as a query, retrieve the top-ranked instruction from the target instruction corpus, and align the source instruction to the retrieved target instruction.

Textual similarity Given a source recipe instruction and a target recipe instruction, we define a measure of textual similarity between the two instructions using the following five methods. For each source instruction, we compute its similarity score with every target instruction and align it to the target instruction with the highest score.

a. Exact word match: Given two instructions, we define exact word match as the number of words common to both divided by the number of words in the longer of the two. This gives us a measure of word overlap that is comparable across instructions of different lengths.

b. TF-IDF: We use all the recipes in our training set to create a term frequency (TF)-inverse document frequency (IDF) vectorizer. Given an instruction from the evaluation set, we compute its TF-IDF vector using this vectorizer. Given two instructions, we define their TF-IDF similarity as the cosine similarity between their TF-IDF vectors.

c. GloVe: We train GloVe embeddings (Pennington et al., 2014) on an in-domain corpus of 3 million words put together by combining text recipes and video transcriptions. Given an instruction, we average the GloVe embeddings (Pennington

Table 2: Results for text-video recipe alignments on YouCook2 dataset.
| Methods | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Random | 14.26 | 14.00 | 12.69 |
| Uniform alignment | 41.38 | 31.85 | 33.22 |
| BM25 retrieval | 50.06 | 55.27 | 49.30 |
| **Textual Similarity** | | | |
| Exact word match | 53.90 | 48.39 | 46.98 |
| TF-IDF | 52.78 | 46.82 | 45.12 |
| GloVe | 56.04 | 51.89 | 50.30 |
| BERT | 50.72 | 55.07 | 49.10 |
| RoBERTa | 52.49 | 55.86 | 50.44 |
| **HMM+IBM1** | | | |
| Nouns | 62.11 | 48.99 | 50.73 |
| Nouns+Verbs | 64.72 | 50.76 | 52.97 |
| All words | 66.21 | 52.42 | 54.55 |
Table 3: Results for text-text recipe alignment on Common Crawl dataset.

et al., 2014) of nouns and verbs$^{12}$ to obtain its embedding vector. Given two instructions, we define their embedding similarity as the cosine similarity of their embedding vectors.

d. BERT: Given an instruction, we compute its embedding vector using BERT-based sentence embeddings (Reimers and Gurevych, 2019). We experiment with different variants and find that the BERT-base model trained on AllNLI and then on the STS benchmark training set$^{13}$ performs best for us. Given two instructions, we define their BERT similarity as the cosine similarity between their sentence embedding vectors.

e. RoBERTa: We also experiment with a variant of the above baseline in which we use RoBERTa (Liu et al., 2019) instead of BERT to compute the sentence embeddings. We use RoBERTa-large trained on AllNLI and then on the STS benchmark training set.

# 4.3 Model Ablations

We experiment with the following ablations of our unsupervised pairwise alignment model (§3.1):

HMM+IBM1 (nouns) We use the NLTK$^{14}$ part-of-speech tagger to identify all the nouns in an instruction and use only those to learn the IBM1 word-level alignments. This ablation is similar to the model proposed by Naim et al. (2014), which aligns objects in videos to nouns in text.

HMM+IBM1 (nouns and verbs) We use both nouns and verbs to learn the IBM1 word-level alignments. This ablation is similar to the method of Song et al. (2016), which aligns objects and actions in videos to nouns and verbs in text.

HMM+IBM1 (all words) We use all words (except stop words) in the source and the target recipe instructions to learn the word-level alignments.$^{15}$

# 4.4 Evaluation Metrics

Given $M$ source recipe instructions and $N$ target recipe instructions, the alignment task is to label each of the $M$ source instructions with a label from $[0,\dots,(N - 1)]$.
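This labeling formulation can be scored as in the sketch below: a minimal hand-rolled version of the weighted-average metrics used in the results tables, assuming scikit-learn-style "weighted" averaging, where per-label scores are weighted by their support in the reference:

```python
def weighted_prf(reference, predicted):
    # Weighted-average precision/recall/F1: per-label scores are weighted by
    # the label's support (count) in the reference, then summed.
    labels = set(reference)
    n = len(reference)
    precision = recall = f1 = 0.0
    for lab in labels:
        tp = sum(1 for r, p in zip(reference, predicted) if r == lab and p == lab)
        n_pred = sum(1 for p in predicted if p == lab)
        n_true = sum(1 for r in reference if r == lab)
        p_lab = tp / n_pred if n_pred else 0.0
        r_lab = tp / n_true  # n_true > 0 since lab comes from the reference
        f_lab = 2 * p_lab * r_lab / (p_lab + r_lab) if (p_lab + r_lab) else 0.0
        weight = n_true / n
        precision += weight * p_lab
        recall += weight * r_lab
        f1 += weight * f_lab
    return precision, recall, f1

# Four source instructions aligned to target labels; one prediction is wrong.
p, r, f = weighted_prf(reference=[0, 0, 1, 2], predicted=[0, 1, 1, 2])
# p = 0.875, r = 0.75, f = 0.75
```

Averaging these per-pair scores over all recipe pairs then gives the aggregate test-set numbers.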
Given a predicted sequence of labels (from a baseline or the proposed model) and a reference sequence of labels (from the human annotations) for a recipe pair, we calculate the weighted-average$^{16}$ precision, recall and F1 score. We average these scores across all alignment pairs to compute aggregate scores on the test set.

# 4.5 Results

On text-video alignments Table 2 shows the results of our pairwise alignment algorithm compared with the baselines on 1,625 human-aligned text-video recipe pairs from YouCook2. The BM25 baseline outperforms two of the textual similarity baselines. Within the textual similarity baselines, RoBERTa outperforms all others, suggesting that pretrained sentence-level embeddings act as a good textual similarity measure for this alignment task. The uniform alignment baseline, interestingly, outperforms all other baselines. This is mainly because in the YouCook2 dataset, the text instructions and the transcript sentences follow the same order, making uniform alignment a strong baseline. Our unsupervised HMM+IBM1 alignment model significantly outperforms (with $p < 0.001$) all baselines. In particular, it achieves much higher precision scores than all baselines. Among the ablations of the HMM+IBM1 model, using all words to learn alignments works best.

On text-text alignments Table 3 shows the results of our pairwise alignment algorithm compared with the baselines on 200 human-aligned text-text recipe pairs from Common Crawl. Unlike for text-video alignments, we find that the uniform alignment baseline does not outperform the textual similarity baselines, suggesting that the different re-orderings between text-text recipe pairs make alignment more challenging. Within the textual similarity baselines, as for text-video alignment, RoBERTa outperforms all others. We believe this is because text recipes tend to share similar vocabulary, making it easier to find similar words between two textual instructions.
Video narrators tend to use more colloquial language than the authors of text recipes, making it more difficult to learn alignments using word similarities. Interestingly, both BM25 and RoBERTa achieve higher recall than our best HMM+IBM1 model, but they lose out on precision. This suggests that retrieval models are good at identifying more alignments, albeit with lower precision. Our unsupervised HMM+IBM1 model again significantly outperforms ($p < 0.001$) all baselines on F1 score. Among the ablations of the HMM+IBM1 model, we again find that using all words to learn alignments performs best.

Comparing text-video and text-text alignment results Comparing Table 2 and Table 3, we find that the textual similarity baselines have overall higher scores on the text-text alignments than on the text-video alignments. Our HMM+IBM1 model, on the other hand, has overall higher scores on text-video alignments than on text-text alignments. We attribute this contrast to the fact that two text recipes have higher vocabulary similarity than a text and a video recipe, which helps the textual similarity baselines perform well on text-text alignments. Our unsupervised HMM+IBM1 model does better on text-video pairs, where word usage differs more. Furthermore, the text-video pairs from YouCook2 are temporally aligned, whereas the text-text pairs from Common Crawl have several re-orderings, making the text-text evaluation set comparatively harder. The supplementary material includes an analysis of alignment outputs.

# 5 Data Release

We describe the data released in our MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS. In all our released data, for text recipes, we include the actual text of the instructions. For video recipes, we release the URL to the YouTube video with timestamps corresponding to the aligned video segments.
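As a minimal illustration of how the released pairwise alignments can be consumed, the sketch below filters instruction pairs by alignment probability. The record format shown here is hypothetical; the probabilities are taken from the worked example in Table 5:

```python
# Hypothetical record format: (source instruction, target instruction, probability).
alignments = [
    ("Preheat the oven to 350 degrees F.",
     "Preheat your oven to 350 degrees F.", 0.9999),
    ("Add this to the butter mixture and mix until well combined.",
     "Add in the vanilla and mix.", 0.6889),
    ("Stir in the chocolate chips.",
     "Fold in your chocolate until evenly added throughout the dough.", 0.9820),
]

def extract_paraphrases(alignments, threshold=0.5):
    # Keep only instruction pairs aligned with probability above the threshold.
    return [(src, tgt) for src, tgt, prob in alignments if prob > threshold]

all_pairs = extract_paraphrases(alignments)                  # all 3 pairs survive
high_conf = extract_paraphrases(alignments, threshold=0.95)  # 2 pairs survive
```

Raising the threshold trades recall for precision; the corpus uses a threshold of 0.5 for paraphrase extraction.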
| Single Step | Multiple Steps |
| --- | --- |
| Beat eggs, oil vanilla and sugar together in a large bowl. | 1. Beat eggs in large bowl until foamy. 2. Add sugar, oil and vanilla mix well. |
| Butter 2 loaf pans and bake 1 hour at 325 degrees. | 1. Pour into greased muffin tins or loaf pans. 2. Yields about 4 small loaves or 2 large. 3. Bake for 25 minutes. |
| Mix the zucchini, sugar, oil, yogurt and egg in a bowl. | 1. Beat eggs, sugar, oil and vanilla. 2. Add zucchini. |
Table 4: Three examples of single-step to multi-step breakdown from the pairwise alignments.

![](images/3716685e421c328a82835fdade9ce43098179f40938796443d66e47432b4a8e0.jpg)
Figure 5: We plot the trade-off between the percentage of paraphrases extracted and the precision, recall and F1 score (as measured by human annotators) with increasing alignment probability threshold on 200 human-aligned text-text recipe pairs.

# 5.1 Pairwise and Joint Alignments

We release the pairwise alignments between recipes of the same dish (derived from §3.1) for 4,262 dishes. This includes 63,528 alignments between text recipes, 65,432 alignments between text and video recipes, and 19,988 alignments between video recipes. We also release the joint alignments between multiple text and multiple video recipes within a dish (derived from §3.2) for 4,262 dishes.

# 5.2 Textual and Visual Paraphrases

The pairwise alignment algorithm described in §3.1 gives alignment probabilities for each pair of instructions it aligns. We threshold on these alignment probabilities to retrieve textual and visual paraphrases. Since our goal is to extract large numbers of high-quality paraphrases, we decide on the threshold value by looking at the trade-off between the percentage of paraphrases extracted and their quality as measured by human annotators on 200 human-aligned text-text recipe pairs from our evaluation set (§4.1).

Figure 5 shows the trade-off between the precision, recall and F1 score and the percentage of paraphrases extracted with increasing threshold on the instruction-level alignment probability. At the 0.5 threshold, we extract $60\%$ of the total alignments as paraphrases from our evaluation set. We use this threshold value of 0.5 on the pairwise alignments in the training, validation and test sets to extract a total of 358,516 textual paraphrases and 211,703 text-to-video paraphrases from 4,262 dishes, and include them in our corpus.

# 5.3 Single-step to Multi-step breakdown

The pairwise alignments between text recipes include many instances where one instruction in one recipe is aligned to multiple instructions in another recipe with high alignment probability (greater than 0.9). Table 4 shows three such single-step to multi-step breakdowns. We extract a total of 5,592 such instances from 1,662 dishes across the training, validation and test sets, and include them in our corpus.

# 6 Applications of Our Corpus

We believe that our data release will help advance research at the intersection of language, vision and robotics. The pairwise alignment between recipes within a dish could be useful in training models that learn to rewrite recipes given ingredient- or cooking-method-based constraints. The joint alignment over multiple text recipes within a dish should prove useful for learning the types of ingredient substitutions and instruction reordering that come naturally to expert cooks. The textual and visual paraphrases will, we believe, have implications for tasks like textual similarity, image and video captioning, dense video captioning and action recognition. The single-step to multi-step breakdown derived from our pairwise alignments may also prove useful for understanding task simplification, an important problem for agents performing complex actions.

Such multimodal data at scale is a crucial ingredient for robots to learn from demonstrations of procedural tasks in a variety of environments. Collecting such large-scale data is prohibitively expensive in robotics, since it requires extensive instrumentation of many different environments. Other example applications are learning to ground natural language to physical objects in the environment, and catching when humans are about to commit critical errors in a complicated task and offering to help with corrective instructions.

# 7 Related Work

Alignment Algorithms Our unsupervised alignment algorithm is based on Naim et al.
(2014), who propose a hierarchical alignment model using nouns and objects to align text instructions to videos. Song et al. (2016) further build on this work to make use of action codewords and verbs. Bojanowski et al. (2015) view the alignment task as a temporal assignment problem and solve it using an efficient conditional gradient algorithm. Malmaud et al. (2015) use an HMM-based method to align recipe instructions to cooking video transcriptions that follow the same order. Our work contrasts with these works in two ways: we learn alignments between instructions that do not necessarily follow the same order; and our algorithm is trained on a much larger scale dataset. + +Multi-modal Instructional Datasets Marin et al. (2019) introduce a corpus of 1 million cooking recipes paired with 13 million food images for the task of retrieving a recipe given an image. YouCook2 dataset (Zhou et al., 2018a) consists of 2,000 recipe videos with human written descriptions for each video segment. The How2 dataset (Sanabria et al., 2018) consists of 79,114 instructional videos with English subtitles and crowdsourced Portuguese translations. The COIN dataset (Tang et al., 2019) consists of 11,827 videos of 180 tasks in 12 daily life domains. YouMakeup (Wang et al., 2019) consists of 2,800 YouTube videos, annotated with natural language descriptions for instructional steps, grounded in temporal video range and spatial facial areas. + +Leveraging Document Level Alignments Our work relies on the assumption that text recipes and instructional cooking videos of the same dish are comparable. This idea has been used to extract parallel sentences from comparable corpora to increase the number of training examples for machine translation (Munteanu and Marcu, 2005; + +Abdul-Rauf and Schwenk, 2009; Smith et al., 2010; Grégoire and Langlais, 2018). Likewise, Talk-Summ (Lev et al., 2019) use the transcripts of scientific conference talks to automatically extract summaries. Zhu et al. 
(2015) use books and movie adaptations of the books to extract descriptive explanations of movie scenes.

Related Tasks A related task is localizing and classifying steps in instructional videos (Alayrac et al., 2016; Zhukov et al., 2019); these works detect when an action is performed in the video, whereas we focus on describing actions. Dense event captioning of instructional videos (Zhou et al., 2018b; Li et al., 2018; Hessel et al., 2019) relies on human-curated, densely labeled datasets, whereas we extract descriptions of videos automatically through our alignments.

# 8 Conclusion

We introduce a novel two-stage unsupervised algorithm for aligning multiple text and multiple video recipes. We use an existing algorithm to first learn pairwise alignments and then use a graph-based algorithm to derive the joint alignments across multiple recipes describing the same dish. We release a large-scale dataset constructed using this algorithm, consisting of joint alignments between multiple text and video recipes along with useful commonsense information such as textual and visual paraphrases, and single-step to multi-step breakdowns.

Although our dataset focuses on the cooking domain, our framework should generalize to any domain with abundant volumes of unstructured-but-alignable multi-modal data. DIY (Do-It-Yourself) videos and websites, for instance, are an obvious next target. We also envision extending this work by including audio and video features to enhance the quality of our alignment algorithm. Ultimately, we believe this work will further the goal of building agents that can work with human collaborators to carry out complex tasks in the real world.

# Acknowledgments

We would like to thank Harpreet Sawhney, Roshan Rao, Prasoon Goyal, Dilip Arumugam and Raymond J. Mooney for all their help. We would also like to thank the four anonymous reviewers for their useful comments and suggestions.

# References

Sadaf Abdul-Rauf and Holger Schwenk. 2009.
On the use of comparable corpora to improve SMT performance. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 16-23, Athens, Greece. Association for Computational Linguistics. +Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2016. Unsupervised Learning from Narrated Instruction Videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4575-4583. +Piotr Bojanowski, Rémi Lajugie, Edouard Grave, Francis Bach, Ivan Laptev, Jean Ponce, and Cordelia Schmid. 2015. Weakly-Supervised Alignment of Video with Text. In Proceedings of the IEEE international conference on computer vision, pages 4462-4470. +Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311. +DeepMind. 2019. AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii. +Francis Grégoire and Philippe Langlais. 2018. Extracting parallel sentences with bidirectional recurrent neural networks to improve machine translation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1442-1453, Santa Fe, New Mexico, USA. Association for Computational Linguistics. +Jack Hessel, Bo Pang, Zhenhai Zhu, and Radu Soricut. 2019. A case study on combining ASR and visual features for generating instructional video captions. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 419-429, Hong Kong, China. Association for Computational Linguistics. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Guy Lev, Michal Shmueli-Scheuer, Jonathan Herzig, Achiya Jerbi, and David Konopnicki. 2019. 
Talk-Summ: A dataset and scalable annotation method for scientific paper summarization based on conference talks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2125-2131, Florence, Italy. Association for Computational Linguistics. +Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei. 2018. Jointly Localizing and Describing Events for Dense Video Captioning. In Proceed + +ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7492-7500. +Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692. +Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics. +Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nicholas Johnston, Andrew Rabinovich, and Kevin Murphy. 2015. What's cookin'? interpreting cooking videos using text, speech and vision. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 143-152, Denver, Colorado. Association for Computational Linguistics. +Javier Marin, Aritro Biswas, Ferda Ofli, Nicholas Hynes, Amaia Salvador, Yusuf Aytar, Ingmar Weber, and Antonio Torralba. 2019. Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images. IEEE Transactions on Pattern Analysis and Machine Intelligence. +Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4):477-504. 
+Iftekhar Naim, Young Chol Song, Qiguang Liu, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2014. Unsupervised Alignment of Natural Language Instructions with Video Segments. In Twenty-Eighth AAAI Conference on Artificial Intelligence. +OpenAI. 2019. Dota 2 with Large Scale Deep Reinforcement Learning. arXiv preprint arXiv:1912.06680. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics. +Lawrence R Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286. +Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing + +and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics. +Stephen Robertson, Hugo Zaragoza, et al. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends in Information Retrieval, 3(4):333-389. +Amaia Salvador, Nicholas Hynes, Yusuf Aytar, Javier Marin, Ferda Ofli, Ingmar Weber, and Antonio Torralba. 2017. Learning Cross-modal Embeddings for Cooking Recipes and Food Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. +Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Loic Barrault, Lucia Specia, and Florian Metze. 2018. How2: A Large-scale Dataset for Multimodal Language Understanding. In NeurIPS. +Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, and David Silver. 2019. 
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. +Ozan Sener, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. 2015. Unsupervised Semantic Parsing of Video Collections. In Proceedings of the IEEE International Conference on Computer Vision, pages 4480-4488. +Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 403-411, Los Angeles, California. Association for Computational Linguistics. +Young Chol Song, Iftekhar Naim, Abdullah Al Mamun, Kaustubh Kulkarni, Parag Singla, Jiebo Luo, Daniel Gildea, and Henry A Kautz. 2016. Unsupervised Alignment of Actions in Video with Text Descriptions. In IJCAI, pages 2025-2031. +Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. 2019. COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1207-1216. +Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In COLING 1996 Volume 2: The 16th International Conference on Computational Linguistics. + +Weiying Wang, Yongcheng Wang, Shizhe Chen, and Qin Jin. 2019. YouMakeup: A large-scale domain-specific multimodal dataset for fine-grained semantic comprehension. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5133-5143, Hong Kong, China. Association for Computational Linguistics. +Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018a. Towards Automatic Learning of Procedures from Web Instructional Videos. In Thirty-Second AAAI Conference on Artificial Intelligence, pages 7590-7598. 
+Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018b. End-to-End Dense Video Captioning with Masked Transformer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8739-8748. +Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. In Proceedings of the IEEE International Conference on Computer Vision, pages 19-27. +Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gokberg Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. Cross-task weakly supervised learning from instructional videos. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3537-3545. + +# A Supplemental Material + +In this supplementary, we describe the details of our data collection process (§A.1), experimental details of our algorithm (§A.2) and provide analysis of our alignment outputs (§A.3). + +# A.1 Details of Data Collection + +# A.1.1 Common crawl Text Recipes + +We use recipe data from Common Crawl $^{17}$ that has metadata formatted according to the Schema.org Recipe schema $^{18}$ including title, ingredients, instructions, and a URL to the recipe source. There were originally 3.2 million recipes extracted from Common Crawl. We filter the data by limiting the data to recipes with instructions written in English, removing recipes with titles that are longer than 5 words, removing duplicate recipes, removing recipes where the recipe title contains words that are not in the top $50\%$ most common words + +that occur in the recipe titles, and removing recipes with fewer than 2 steps. After filtering the data, we clustered the recipes into dishes using exact match on the recipe titles. We only retain recipes from dishes that have at least three recipes. 
The final dataset has a total of 4,262 dishes and 48,852 recipes, with an average of 8 instructions per recipe.

# A.1.2 YouTube Video Recipes

Given the dish names from the text recipes, we extract YouTube video recipes for each of the dishes. The number of videos extracted for each dish is proportional to the number of text recipes found for that dish. For instance, for a more popular dish like chocolate chip cookies, we would extract more text and video recipes than for a less popular dish like crème brûlée. The number of videos extracted ranges from 3 to 100.

# A.1.3 Chat/Content Classifier

Instructional cooking videos can contain a lot of non-instructional content ("chat"). For example, the person cooking the dish often introduces themselves (or their video channel) at the beginning of the video. They sometimes also introduce the dish they are going to prepare and suggest pairings for the dish. Non-instructional content is often found at the beginning and towards the end of the video, but there are several instances of "chat" interspersed with instructional content as well. Since we wish to align these videos to text recipe instructions that do not contain non-instructional information, we need a way to remove non-instructional content. We train a supervised neural-network-based classifier for this task.

We train our classifier using the YouCook2 dataset (Zhou et al., 2018a) of 1,500 videos across 90 dishes. This dataset was created by asking humans to identify segments of a video that correspond to an instruction and to annotate each segment with an imperative statement describing the action being executed in the video segment. We assume that the transcript sentences included within an annotated video segment are instructional, whereas those that are not included within an annotated video segment are non-instructional. We first transcribe all 1,500 videos in the dataset using a commercial transcription web service.
We split the transcriptions into sentences using a sentence tokenizer. We label a transcript sentence with the label 1 if the corresponding video segment was annotated and with the label 0 if it was not. We get a total of 90,927 labelled transcript sentences, which we split by dish into training (73,728 examples), validation (7,767 examples) and test (9,432 examples) sets.

We use an LSTM (long short-term memory) model (Hochreiter and Schmidhuber, 1997) with attention (Luong et al., 2015) to train a binary classifier on this data. We initialize (and freeze) our 300-dimensional word embeddings using GloVe (Pennington et al., 2014) vectors trained on 330 million tokens that we obtain by combining all text recipes and transcript sentences. We use the validation set to tune the hyperparameters of our LSTM classifier (hidden size: 64, learning rate: 0.00001, batch size: 64, number of layers: 1). Our chat/content classifier achieves 86.76 precision, 84.26 recall and 85.01 F1 score on the held-out test set.

# A.1.4 Recipe Pair Pruning Strategy

We define the following two pruning strategies to reduce the number of extracted recipe pairs:

Ingredient match: Each of our text recipes from Common Crawl contains an ingredients list. Video recipes from YouTube, however, do not contain ingredient lists. We therefore estimate the ingredients for video recipes using text recipes of the same dish. We construct a set of ingredients at the dish level by combining all ingredients of the text recipes within that dish. We then use this dish-level ingredient information to identify ingredient words among the words of the video transcriptions. Given a recipe pair, we compare the ingredients of the two recipes and, if the percentage of ingredients that match is below a threshold, we remove the pair.
For text-text and text-video recipe pairs, we set this threshold to $70\%$, whereas for video-video recipe pairs, we set this threshold to $90\%$ (since video-video recipe pairs tend to be noisier).

Instruction length match: For text-text recipe pairs, if the number of instructions in one recipe is more than double the number of instructions in the other, we remove the pair. For video recipes, if there are more than 100 sentences in the transcript after removing the background sentences, we remove that video recipe.

# A.2 Details of HMM+IBM1 Model

We train the HMM+IBM1 pairwise alignment model on three kinds of recipe pairs: text-text, text-video and video-video. The lower-level IBM1 model works on the words of text instructions or transcript sentences. The vocabulary size of all the text recipes from 4,262 dishes put together totals 48,609 words. Since most words do not appear very frequently across the text recipe corpus, we reduce the vocabulary size to 13,061 by removing words that occur fewer than 5 times in the training set. Likewise, we reduce the vocabulary size of the video recipe transcriptions to 16,733 words (from 88,744 words) by removing words that occur fewer than 15 times in the training set. We first train the HMM+IBM1 model for 3 iterations with a jump range of $[-1,0,+1]$ and further train it for 2 iterations with a jump range of $[-2,0,+2]$. We find that warm-starting the model with a shorter range helps it learn better alignments.

# A.3 Alignment Output Analysis

Table 5 shows the alignment between two text recipes for chocolate chip cookies obtained by our pairwise algorithm. The alignment task here is to align each instruction in the source recipe to one of the instructions in the target recipe. The table displays all the instructions of the source recipe in the second column. The first column displays the instructions from the target recipe that align to the source recipe instruction in the same row.
The sentence-level probabilities are shown in the last column. + +We can see the reordering between the two recipes by comparing the instruction indices. Instructions 0 to 2 from the source are aligned to target instructions with very high probabilities, suggesting they are close paraphrases. Instructions 3 and 8 from the source, on the other hand, are aligned to the target with comparatively lower probabilities, and in these two cases the aligned instructions do differ in meaning. Instructions 6, 7, and 8 (in the source) aligning to instruction 11 (in the target) is an example of a single-step to multi-step breakdown. + +
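The frequency-based vocabulary pruning described in Appendix A.2 (cutoffs of 5 and 15 occurrences) can be sketched as follows; the function names and the `<unk>` replacement token are illustrative assumptions, not details from the released code:

```python
from collections import Counter

def prune_vocabulary(corpus_tokens, min_count):
    """Keep only words that occur at least `min_count` times in the training set."""
    counts = Counter(corpus_tokens)
    return {w for w, c in counts.items() if c >= min_count}

def replace_rare(tokens, vocab, unk="<unk>"):
    """Map out-of-vocabulary words to a shared <unk> token."""
    return [t if t in vocab else unk for t in tokens]

tokens = ["mix", "mix", "mix", "stir", "whisk", "whisk"]
vocab = prune_vocabulary(tokens, min_count=2)
print(replace_rare(tokens, vocab))  # → ['mix', 'mix', 'mix', '<unk>', 'whisk', 'whisk']
```

The same two passes, with cutoffs 5 and 15 respectively, would shrink the text-recipe vocabulary from 48,609 to 13,061 words and the transcript vocabulary from 88,744 to 16,733 words.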
| Target recipe instruction | Source recipe instruction | Probability |
| --- | --- | --- |
| 0: Preheat your oven to 350 degrees F. | 0: Preheat the oven to 350 degrees F. | 0.9999 |
| 2: In the bowl of your mixer cream together your butter and sugars until light and fluffy about 3-5 minutes. | 1: In a large bowl or the bowl of a stand mixer cream the butter sugar brown sugar eggs & vanilla together until smooth & fluffy. | 0.9998 |
| 1: Sift together the flour baking soda baking powder and salt into a medium sized bowl and set aside. | 2: In another bowl whisk together the flour salt baking powder and baking soda. | 0.9997 |
| 4: Add in the vanilla and mix. | 3: Add this to the butter mixture and mix until well combined. | 0.6889 |
| 6: Fold in your chocolate until evenly added throughout the dough. | 4: Stir in the chocolate chips. | 0.9820 |
| 8: Scoop your dough out onto the sheets. | 5: Form the dough into golf-ball sized balls and place them about 2 inches apart on a baking sheet. | 0.9997 |
| 11: Bake 10-12 minutes for smaller cookies or 18-20 minutes for larger cookies. | 6: Bake for 9-10 minutes just until the edges start to brown lightly. | 0.9912 |
| 11: Bake 10-12 minutes for smaller cookies or 18-20 minutes for larger cookies. | 7: Do not overbake them or they will be crispy rather than chewy. | 0.9528 |
| 11: Bake 10-12 minutes for smaller cookies or 18-20 minutes for larger cookies. | 8: They still look underbaked when you take them out but will firm up as they cool. | 0.6465 |
| 12: Allow the cookies to cool slightly on your baking sheet then move them to another surface to cool completely. | 9: Let them cool on the pan for about 5 minutes and them move to a wire rack to cool completely. | 0.9973 |
| 14: Store in an air-tight container at room temperature for up to 3 days or freeze for up to 2 months. | 10: Cookies will keep for 7 days in a sealed container at room temperature. | 0.8309 |
+ +Table 5: Alignment between two text recipes for chocolate chip cookies, with their sentence-level probabilities. \ No newline at end of file diff --git a/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/images.zip b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..7254de413294a69553718c9d51d74d80bce8900a --- /dev/null +++ b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:076ec7c97e021ae17a598a94569199a257e2b58b0f33fb665634763cdbf605e2 +size 608954 diff --git a/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/layout.json b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..090a36e87f15852da497ea4debfbe527ef37b800 --- /dev/null +++ b/arecipeforcreatingmultimodalaligneddatasetsforsequentialtasks/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fa66219ce2f3140c5452ee149a271a4f94ef80c894bdb2869523b1abac212e85 +size 370265 diff --git a/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_content_list.json b/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..6f2ace5c7616b628b57e72b301de43977eed183a --- /dev/null +++ b/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2ca5b76205e900c804532b59ae283b02b3a2b3e021f9e230181e86d3ac62016b +size 43877 diff --git a/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_model.json b/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_model.json new file mode 100644
index 0000000000000000000000000000000000000000..4ea7a07f270b33f9a53513f17ed048e82b1c1902 --- /dev/null +++ b/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6b460aa4528f5ae50d4a4074b81f10027f0ae7686b085627dca3b7126df5066c +size 54149 diff --git a/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_origin.pdf b/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..5b7363492f967497dfc5dc5dfdb04f274d3cad59 --- /dev/null +++ b/areevaluationofknowledgegraphcompletionmethods/93caf657-78e3-465a-a17b-87ddb51bf54c_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:863157b8c8ad1909f932019ec42b5a67ca7554a5a6abb69545dcbf6c488379be +size 333851 diff --git a/areevaluationofknowledgegraphcompletionmethods/full.md b/areevaluationofknowledgegraphcompletionmethods/full.md new file mode 100644 index 0000000000000000000000000000000000000000..522450a201fbbcab2ca7953108e9fb185ca34d79 --- /dev/null +++ b/areevaluationofknowledgegraphcompletionmethods/full.md @@ -0,0 +1,163 @@ +# A Re-evaluation of Knowledge Graph Completion Methods + +Zhiqing Sun $^{1*}$ Shikhar Vashishth $^{1,2*}$ Soumya Sanyal $^{2*}$ Partha Talukdar $^{2}$ Yiming Yang $^{1}$ + +$^{1}$ Carnegie Mellon University, $^{2}$ Indian Institute of Science + +{zhiqings,svashish,yiming}@cs.cmu.edu + +{soumyasanyal,ppt}@iisc.ac.in + +# Abstract + +Knowledge Graph Completion (KGC) aims at automatically predicting missing links for large-scale knowledge graphs. A vast number of state-of-the-art KGC techniques have been published at top conferences in several research fields, including data mining, machine learning, and natural language processing.
However, we notice that several recent papers report very high performance, largely outperforming previous state-of-the-art methods. In this paper, we find that this can be attributed to the inappropriate evaluation protocol used by these methods, and we propose a simple evaluation protocol to address the problem. The proposed protocol is robust to bias in the model, which can substantially affect the final results. We conduct extensive experiments and report the performance of several existing methods using our protocol. The reproducible code has been made publicly available. + +# 1 Introduction + +Real-world knowledge bases are usually expressed as multi-relational graphs, which are collections of factual triplets, where each triplet $(h,r,t)$ represents a relation $r$ between a head entity $h$ and a tail entity $t$ . However, real-world knowledge bases are usually incomplete (Dong et al., 2014), which motivates the research of automatically predicting missing links. A popular approach for Knowledge Graph Completion (KGC) is to embed entities and relations into a continuous vector or matrix space, and use a well-designed score function $f(h,r,t)$ to measure the plausibility of the triplet $(h,r,t)$ . Most of the previous methods use translation-distance-based (Bordes et al., 2013; Wang et al., 2014; Xiao et al., 2016; Sun et al., 2019) and semantic-matching-based (Nickel and Tresp, 2013; Yang et al., 2014; Nickel et al., 2016; Trouillon et al., 2016; Liu et al., 2017) scoring functions, which are easy to analyze. + +However, recently, a vast number of neural network-based methods have been proposed. They have complex score functions which utilize black-box neural networks, including Convolutional Neural Networks (CNNs) (Dettmers et al., 2018; Nguyen et al., 2018), Recurrent Neural Networks (RNNs) (Lin et al., 2015; Wang et al., 2018), Graph Neural Networks (GNNs) (Schlichtkrull et al., 2017; Shang et al., 2019), and Capsule Networks (Nguyen et al., 2019).
While some of them report state-of-the-art performance on several benchmark datasets, competitive with previous embedding-based approaches, a considerable portion of recent neural network-based papers report very high performance gains which are not consistent across different datasets. Moreover, most of these unusual behaviors are not analyzed at all. Such a pattern has become prominent and is misleading the whole community. + +In this paper, we investigate this problem and find that it can be attributed to the inappropriate evaluation protocol used by these approaches. We demonstrate that their evaluation protocol gives a perfect score to a model that always outputs a constant irrespective of the input. This has led to an artificial inflation of the performance of several models. To address this, we propose a simple evaluation protocol that creates a fair comparison environment for all types of score functions. We conduct extensive experiments to re-examine some recent methods and fairly compare them with existing approaches. The source code of the paper has been made publicly available at http://github.com/svjan5/kg-reeval. + +# 2 Background + +Knowledge Graph Completion Given a Knowledge Graph $\mathcal{G} = (\mathcal{E},\mathcal{R},\mathcal{T})$ , where $\mathcal{E}$ and $\mathcal{R}$ denote the sets of entities and relations and $\mathcal{T} = \{(h,r,t)\mid h,t\in \mathcal{E},r\in \mathcal{R}\}$ is the set of triplets (facts), the task of Knowledge Graph Completion (KGC) involves inferring missing facts based on the known facts. Most of the existing methods define an embedding for each entity and relation in $\mathcal{G}$ , i.e., $e_h, e_r$ for all $h\in \mathcal{E}, r\in \mathcal{R}$ , and a score function $f(h,r,t):\mathcal{E}\times \mathcal{R}\times \mathcal{E}\to \mathbb{R}$ which assigns higher scores to valid triplets than to invalid ones. + +| Method | FB15k-237 | WN18RR |
| --- | --- | --- |
| ConvE | .325 | .430 |
| RotatE | .338 (+4.0%) | .476 (+10.6%) |
| TuckER | .358 (+10.2%) | .470 (+9.3%) |
| ConvKB | .396 (+21.8%) | .248 (-42.3%) |
| CapsE | .523 (+60.9%) | .415 (-3.4%) |
| KBAT | .518 (+59.4%) | .440 (+2.3%) |
| TransGate | .404 (+24.3%) | .409 (-4.9%) |

 + +Table 1: Changes in MRR for different methods on the FB15k-237 and WN18RR datasets with respect to ConvE show inconsistent improvements. + +KGC Evaluation During KGC evaluation, for predicting $t$ in a given triplet $(h,r,t)$ , a KGC model scores all the triplets in the set $\mathcal{T}' = \{(h,r,t') \mid t' \in \mathcal{E}\}$ . Based on the scores, the model first sorts all the triplets and subsequently finds the rank of the valid triplet $(h,r,t)$ in the list. In a more relaxed setting, called the filtered setting, all the known correct triplets (from the train, valid, and test sets) are removed from $\mathcal{T}'$ except the one being evaluated (Bordes et al., 2013). The triplets in $\mathcal{T}' - \{(h,r,t)\}$ are called negative samples. + +Related Work Prior to our work, Kadlec et al. (2017) cast doubt on the claim that the performance improvements of several models are due to architectural changes as opposed to hyperparameter tuning or a different training objective. In our work, we raise similar concerns but from a different angle, by highlighting issues with the evaluation procedure used by several recent methods. Chandrahas et al. (2018) analyze the geometry of KG embeddings and its correlation with task performance, while Nayyeri et al. (2019) examine the effect of different loss functions on performance. However, their analysis is restricted to non-neural approaches.
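A minimal sketch of the filtered-setting ranking just described; the function name and the optimistic tie-breaking convention (counting only strictly higher scores) are illustrative assumptions rather than a prescribed standard, and Section 4 revisits how ties should be handled:

```python
def filtered_rank(scores, target, known_tails):
    """Rank of the valid tail `target` among candidate tails in the filtered
    setting: other known-correct tails are dropped from the candidate list
    before ranking. Higher score = more plausible."""
    candidates = [s for t, s in enumerate(scores)
                  if t == target or t not in known_tails]
    # count candidates scoring strictly higher than the target, then add 1
    return sum(1 for s in candidates if s > scores[target]) + 1

# Toy scores over 5 candidate tails; tails 0 and 2 are both known-correct.
scores = [0.9, 0.1, 0.7, 0.8, 0.3]
print(filtered_rank(scores, target=2, known_tails={0, 2}))  # → 2 (tail 0 is filtered out)
```

The reciprocal of this rank, averaged over all test triplets, gives the MRR metric used throughout the paper; without the filtering step, tail 0 would also outrank the target and the rank would be 3.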
+ +![](images/340af16a3845eaa4227f90213b5ac0c2d87ea5e04d24b76deac8f572218818de.jpg) +Figure 1: Sorted score distribution of ConvKB for an example valid triplet and its negative samples. The score is normalized into [0, 1] (lower is better). The dotted line indicates the score of the valid triplet. We find that in this example, around $58.5\%$ of the negatively sampled triplets obtain the exact same score as the valid triplet. + +# 3 Observations + +In this section, we first describe our observations and concerns and then investigate the reasons behind them. + +# 3.1 Inconsistent Improvements over Benchmark Datasets + +Several recently proposed methods report high performance gains on a particular dataset. However, their performance on another dataset is not consistently improved. In Table 1, we report the change in MRR score on the FB15k-237 (Toutanova and Chen, 2015) and WN18RR (Dettmers et al., 2018) datasets with respect to ConvE (Dettmers et al., 2018) for different methods, including RotatE (Sun et al., 2019), TuckER (Balažević et al., 2019), ConvKB (Nguyen et al., 2018), CapsE (Nguyen et al., 2019), KBAT (Nathani et al., 2019), and TransGate (Yuan et al., 2019). Overall, we find that for a few recent NN-based methods, there are inconsistent gains on these two datasets. For instance, ConvKB shows a $21.8\%$ improvement over ConvE on FB15k-237, but a degradation of $42.3\%$ on WN18RR, which is surprising given that the method is claimed to be better than ConvE. On the other hand, methods like RotatE and TuckER give consistent improvements across both benchmark datasets.
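The percentage gains in Table 1 are simple relative changes in MRR with respect to ConvE; a one-function sketch (the variable names are illustrative, the MRR values are taken from Table 1):

```python
def relative_change(mrr, baseline):
    """Percentage change in MRR relative to a baseline model's MRR."""
    return 100.0 * (mrr - baseline) / baseline

conve_fb15k237 = 0.325  # ConvE MRR on FB15k-237 (Table 1)
for name, mrr in [("RotatE", 0.338), ("ConvKB", 0.396)]:
    print(f"{name}: {relative_change(mrr, conve_fb15k237):+.1f}%")
# RotatE: +4.0%
# ConvKB: +21.8%
```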
+ +# 3.2 Observations on Score Functions + +Score distribution When evaluating KGC methods, for a given triplet $(h,r,t)$ , the ranking of $t$ given $h$ and $r$ is computed by scoring all the triplets of the form $\{(h,r,t') \mid t' \in \mathcal{E}\}$ , where $\mathcal{E}$ is the set of all entities. On investigating a few recent NN-based approaches, we find that they have unusual score distributions, where some negatively sampled triplets have the same score as the valid triplet. An instance from the FB15k-237 dataset is presented in Figure 1. Here, out of 14,541 negatively sampled triplets, 8,520 have the exact same score as the valid triplet. + +![](images/51e8f910db13ab5b8c166c81db6e9ea6a3956c98d6e627833e499d7643f1c6be.jpg) +Figure 2: Plot shows the frequency of the number of negative triplets with the same assigned score as the valid triplet during evaluation on the FB15k-237 dataset. The results show that for methods like ConvKB and CapsE, a large number of negative triplets get the same score as the valid triplet, whereas for methods like ConvE such occurrences are rare. + +Statistics on the whole dataset In Figure 2, we report the total number of triplets with the exact same score over the entire dataset for ConvKB (Nguyen et al., 2018) and CapsE (Nguyen et al., 2019) and compare them with ConvE (Dettmers et al., 2018), which does not suffer from this issue. We find that both ConvKB and CapsE have multiple occurrences of such unusual score distributions. On average, ConvKB and CapsE have 125 and 197 entities with exactly the same score as the valid triplet over the entire evaluation dataset of FB15k-237, whereas ConvE has around 0.002, which is almost negligible. In Section 4, we demonstrate how this leads to massive performance gains for methods like ConvKB and CapsE. + +Root of the problem Further, we investigate the cause behind such unusual score distributions. In Figure 3, we plot the ratio of neurons becoming zero after ReLU activation for the valid triplets vs.
their normalized frequency on the FB15k-237 dataset. The results show that in ConvKB and CapsE, a large fraction (87.3% and 92.2% respectively) of the neurons become zero after applying ReLU activation. However, with ConvE, this count is substantially smaller (around $41.1\%$ ). Because of the zeroing of nearly all neurons (at least $14.2\%$ for ConvKB and $22.0\%$ for CapsE), the representations of several triplets become very similar during the forward pass, thus leading to the exact same score. + +![](images/4fa13b23ab9a1ce649064958c1a2babcbed94d2db3636f1db7108ebd852d9255.jpg) +Figure 3: Distribution of the ratio of neurons becoming zero after ReLU activation in different methods for the valid triplets in the FB15k-237 dataset. We find that for ConvKB and CapsE an unusually large fraction of neurons become zero after ReLU activation, whereas this does not hold for ConvE. + +# 4 Evaluation Protocols for KGC + +In this section, we present different evaluation protocols that can be adopted in knowledge graph completion. We further show that an inappropriate evaluation protocol is the key reason behind the unusual behavior of some recent NN-based methods. + +How to deal with the same scores? An essential aspect of the evaluation method is to decide how to break ties for triplets with the same score. More concretely, while scoring the candidate set $\mathcal{T}'$ , if multiple triplets obtain the same score from the model, one should decide which triplet to pick. Assuming that the triplets are sorted in a stable manner, we design a general evaluation scheme for KGC, which consists of the following three protocols: + +- TOP: In this setting, the correct triplet is inserted at the beginning of $\mathcal{T}'$ . +- BOTTOM: Here, the correct triplet is inserted at the end of $\mathcal{T}'$ . +- RANDOM: In this setting, the correct triplet is placed randomly in $\mathcal{T}'$ .
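The three protocols differ only in where the valid triplet is placed among its ties. The following sketch (illustrative code, not the released implementation) also shows why TOP hands a perfect score to a degenerate model that assigns every candidate the same score:

```python
import random

def rank_with_protocol(scores, target, protocol, rng=random.Random(0)):
    """Rank of the valid triplet `target` among candidates under the three
    tie-breaking protocols. Higher score = more plausible."""
    s = scores[target]
    higher = sum(1 for x in scores if x > s)       # strictly better candidates
    ties = sum(1 for x in scores if x == s) - 1    # other candidates tied with target
    if protocol == "TOP":
        return higher + 1                          # placed before all its ties
    if protocol == "BOTTOM":
        return higher + ties + 1                   # placed after all its ties
    if protocol == "RANDOM":
        return higher + rng.randint(0, ties) + 1   # placed uniformly among ties
    raise ValueError(protocol)

# A constant model: every one of 100 candidates gets the same score.
scores = [0.5] * 100
print(rank_with_protocol(scores, 0, "TOP"))     # → 1   (a "perfect" rank)
print(rank_with_protocol(scores, 0, "BOTTOM"))  # → 100 (the worst possible rank)
```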
| Method | Reported MRR ↑ | Reported MR ↓ | Reported H@10 ↑ | RANDOM MRR ↑ | RANDOM MR ↓ | RANDOM H@10 ↑ | TOP MRR ↑ | TOP MR ↓ | TOP H@10 ↑ | BOTTOM MRR ↑ | BOTTOM MR ↓ | BOTTOM H@10 ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ConvE | .325 | 244 | .501 | .324 ± .0 | 285 ± 0 | .501 ± .0 | .324 | 285 | .501 | .324 | 285 | .501 |
| RotatE | .338 | 177 | .533 | .336 ± .0 | 178 ± 0 | .530 ± .0 | .336 | 178 | .530 | .336 | 178 | .530 |
| TuckER | .358 | - | .544 | .353 ± .0 | 162 ± 0 | .536 ± .0 | .353 | 162 | .536 | .353 | 162 | .536 |
| ConvKB | .396 | 257 | .517 | .243 ± .0 | 309 ± 2 | .421 ± .0 | .407 (+.164) | 246 (-63) | .527 (+.106) | .130 (-.113) | 373 (+64) | .383 (-.038) |
| CapsE | .523 | 303 | .593 | .150 ± .0 | 403 ± 2 | .356 ± .0 | .511 (+.361) | 305 (-99) | .586 (+.229) | .134 (-.016) | 502 (+99) | .297 (-.059) |
| KBAT | .518† | 210† | .626† | .157 ± .0 | 270 ± 0 | .331 ± .0 | .157 | 270 | .331 | .157 | 270 | .331 |
+ +Table 2: Effect of different evaluation protocols on recent KG embedding methods on the FB15k-237 dataset. For TOP and BOTTOM, we report changes in performance with respect to the RANDOM protocol. Please refer to Section 5.4 for details. $\dagger$ : KBAT has test data leakage in its original implementation, which is fixed in our experiments. + +Discussion Based on the definition of the three evaluation protocols, it is clear that the TOP protocol does not evaluate the model rigorously: it gives an inappropriate advantage to models that are biased to assign the same score to different triplets. On the other hand, the BOTTOM protocol can be unfair to the model at inference time because it penalizes the model for giving the same score to multiple triplets, i.e., if many triplets have the same score as the correct triplet, the correct triplet gets the worst possible rank. + +As a result, RANDOM is the best evaluation technique, as it is both rigorous and fair to the model. It is in line with the situation we meet in the real world: given several candidates with the same score, the only option is to select one of them randomly. Hence, we propose to use the RANDOM evaluation scheme for all model performance comparisons. + +# 5 Experiments + +In this section, we conduct extensive experiments using our proposed evaluation protocols and make a fair comparison of several existing methods. + +# 5.1 Datasets + +We evaluate the proposed protocols on the FB15k-237 (Toutanova and Chen, 2015) dataset, which is a subset of FB15k (Bordes et al., 2013) with inverse relations deleted to prevent direct inference of test triples from training triples. + +# 5.2 Methods Analyzed + +In our experiments, we categorize existing KGC methods into the following two categories: + +- Non-Affected: This includes methods which give consistent performance under different evaluation protocols. For the experiments in this paper, we consider three such methods - ConvE, RotatE, and TuckER.
+- Affected: This category consists of recently proposed neural network-based methods whose performance is affected by different evaluation protocols. ConvKB, CapsE, TransGate $^2$ , and KBAT are methods in this category. + +# 5.3 Evaluation Metrics + +For all the methods, we use the code and the hyperparameters provided by the authors in their respective papers. Model performance is evaluated by Mean Reciprocal Rank (MRR), Mean Rank (MR), and Hits@10 (H@10) in the filtered setting (Bordes et al., 2013). + +# 5.4 Evaluation Results + +To analyze the effect of the different evaluation protocols described in Section 4, we study the performance variation of the models listed in Section 5.2. We study the effect of using the TOP and BOTTOM protocols and compare them to the RANDOM protocol. In their original papers, ConvE, RotatE, and TuckER use a strategy similar to the proposed RANDOM protocol, while ConvKB, CapsE, and KBAT use the TOP protocol. We also study the random error in the RANDOM protocol with multiple runs, where we report the average and standard deviation over 5 runs with different random seeds. The results are presented in Table 2. + +We observe that for Non-Affected methods like ConvE, RotatE, and TuckER, the performance remains consistent across different evaluation protocols. However, with Affected methods, there is considerable variation in performance. Specifically, we can observe that these models perform best when evaluated using TOP and worst when evaluated using BOTTOM. Finally, we find that the proposed RANDOM protocol is very robust to different random seeds. Although the theoretical upper and lower bounds of a RANDOM score are the TOP and BOTTOM scores respectively, when we evaluate knowledge graph completion on real-world large-scale knowledge graphs, the randomness does not affect the evaluation results much. + +# 6 Conclusion + +In this paper, we performed an extensive re-examination study of recent neural network-based KGC techniques.
+We find that many such models have issues with their score functions. Combined with an inappropriate evaluation protocol, such methods reported inflated performance. Based on our observations, we propose the RANDOM evaluation protocol, which can clearly distinguish these affected methods from the others. We also strongly encourage the research community to follow the RANDOM evaluation protocol for all KGC evaluation purposes. + +# Acknowledgements + +We thank the reviewers for their helpful comments. This work is supported in part by the National Science Foundation (NSF) under grant IIS-1546329 and a Google PhD Fellowship. + +# References + +Ivana Balažević, Carl Allen, and Timothy M. Hospedales. 2019. Tucker: Tensor factorization for knowledge graph completion. In Empirical Methods in Natural Language Processing. +Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2787-2795. Curran Associates, Inc. +Chandrahas, Aditya Sharma, and Partha Talukdar. 2018. Towards understanding the geometry of knowledge graph embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 122-131, Melbourne, Australia. Association for Computational Linguistics. +Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811-1818. +Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion.
In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, pages 601-610, New York, NY, USA. ACM. +Rudolf Kadlec, Ondrej Bajgar, and Jan Kleindienst. 2017. Knowledge base completion: Baselines strike back. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 69-74, Vancouver, Canada. Association for Computational Linguistics. +Yankai Lin, Zhiyuan Liu, Huanbo Luan, Maosong Sun, Siwei Rao, and Song Liu. 2015. Modeling relation paths for representation learning of knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 705-714, Lisbon, Portugal. Association for Computational Linguistics. +Hanxiao Liu, Yuexin Wu, and Yiming Yang. 2017. Analogical inference for multi-relational embeddings. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 2168-2178, International Convention Centre, Sydney, Australia. PMLR. +Deepak Nathani, Jatin Chauhan, Charu Sharma, and Manohar Kaul. 2019. Learning attention-based embeddings for relation prediction in knowledge graphs. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. +Mojtaba Nayyeri, Chengjin Xu, Yadollah Yaghoobzadeh, Hamed Shariat Yazdi, and Jens Lehmann. 2019. Toward Understanding The Effect Of Loss function On Then Performance Of Knowledge Graph Embedding. arXiv e-prints, page arXiv:1909.00519. +Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A novel embedding model for knowledge base completion based + +on convolutional neural network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327-333. Association for Computational Linguistics. 
+Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019. A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2180-2189. +Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 1955-1961. AAAI Press. +Maximilian Nickel and Volker Tresp. 2013. Tensor factorization for multi-relational learning. In Machine Learning and Knowledge Discovery in Databases, pages 617-621, Berlin, Heidelberg. Springer Berlin Heidelberg. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543. +Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103. +Chao Shang, Yun Tang, Jing Huang, Jinbo Bi, Xiaodong He, and Bowen Zhou. 2019. End-to-end structure-aware convolutional networks for knowledge base completion. +Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In International Conference on Learning Representations. +Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 57-66. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. 
In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML'16, pages 2071-2080. JMLR.org. +Haoyu Wang, Vivek Kulkarni, and William Yang Wang. 2018. DOLORES: deep contextualized knowledge graph embeddings. CoRR, abs/1811.00147. + +Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI'14, pages 1112-1119. AAAI Press. +Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016. Transg: A generative model for knowledge graph embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2316-2325. Association for Computational Linguistics. +Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. CoRR, abs/1412.6575. +Jun Wen Yuan, Neng Gao, and Ji Xiang. 2019. Transgate: Knowledge graph embedding with shared gate structure. In AAAI. + +# Appendix + +# A Results on WN18RR dataset + +Besides FB15k-237, we also evaluate the proposed protocols on WN18RR (Dettmers et al., 2018) dataset, which is a subset of WN18 (Bordes et al., 2013) containing lexical relations between words. Similar to FB15k-237, inverse relations are removed in WN18RR. The results on WN18RR are shown in Table 3. From these results, we can draw similar conclusions as in Section 5. We also show the total number of triplets with the exact same score over the entire WN18RR dataset for ConvKB, CapsE and ConvE in Figure 4. + +
| Method | Reported MRR ↑ | Reported MR ↓ | Reported H@10 ↑ | RANDOM MRR ↑ | RANDOM MR ↓ | RANDOM H@10 ↑ | TOP MRR ↑ | TOP MR ↓ | TOP H@10 ↑ | BOTTOM MRR ↑ | BOTTOM MR ↓ | BOTTOM H@10 ↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ConvE | .43 | 4187 | .52 | .444 ± .0 | 4950 ± 0 | .503 ± .0 | .444 | 4950 | .503 | .444 | 4950 | .503 |
| RotatE | .476 | 3340 | .571 | .473 ± .0 | 3343 ± 0 | .571 ± .0 | .473 | 3343 | .571 | .473 | 3343 | .571 |
| TuckER | .470 | - | .526 | .461 ± .0 | 6324 ± 0 | .516 ± .0 | .461 | 6324 | .516 | .461 | 6324 | .516 |
| ConvKB | .248 | 2554 | .525 | .249 ± .0 | 3433 ± 42 | .524 ± .0 | .251 (+.002) | 1696 (-1737) | .529 (+.005) | .164 (-.085) | 5168 (+1735) | .516 (-.008) |
| CapsE‡ | .415 | 719 | .560 | .415 ± .0 | 718 ± 0 | .559 ± .0 | .415 | 718 | .559 | .323 (-.092) | 719 (+1) | .555 (-.004) |
| KBAT | .440† | 1940† | .581† | .412 ± .0 | 1921 ± 0 | .554 ± .0 | .412 | 1921 | .554 | .412 | 1921 | .554 |
+ +Table 3: Performance comparison under different evaluation protocols on the WN18RR dataset. For TOP and BOTTOM, we report changes in performance with respect to the RANDOM protocol. $\ddagger$ : CapsE uses the pre-trained 100-dimensional Glove (Pennington et al., 2014) word embeddings for initialization on the WN18RR dataset, which makes the comparison on WN18RR still unfair. $\dagger$ : KBAT has test data leakage in its original implementation, which is fixed in our experiments. + +![](images/8639171ff292ea6d3113f53d169179055215fa56082fb260f2b484e0f14ea2dd.jpg) +Figure 4: Plot shows the frequency of the number of negative triplets with the same assigned score as the valid triplet during evaluation on the WN18RR dataset. The results show that, unlike FB15k-237, in this dataset only ConvKB has a large number of negative triplets that get the same score as the valid triplets. \ No newline at end of file diff --git a/areevaluationofknowledgegraphcompletionmethods/images.zip b/areevaluationofknowledgegraphcompletionmethods/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..dfc8ab182f0cec1ec380216e3f5a4b37b082f1b1 --- /dev/null +++ b/areevaluationofknowledgegraphcompletionmethods/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:690f6c398fb30b375eeea769f522d0740a2a07970d0887e7d5363a5b753765f3 +size 249849 diff --git a/areevaluationofknowledgegraphcompletionmethods/layout.json b/areevaluationofknowledgegraphcompletionmethods/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bc89d7cd0b940578b583e61fde67dd10c9ba0154 --- /dev/null +++ b/areevaluationofknowledgegraphcompletionmethods/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e59633511f794764f8f3655a9600c8838044e0c4f7c8f9d0fc4d2ce552d76425 +size 209812 diff --git a/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_content_list.json
b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..fa105a31e47b54b47feccbe78f16c1f6dbeee1bb --- /dev/null +++ b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb0a563bf56bbbf8b24187246ef2a9380efcdb56ec8118b969629845c2257aea +size 86742 diff --git a/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_model.json b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..f5e1fb20305b18dfed200ed1077a89aaa2e5ec7c --- /dev/null +++ b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:63c2c5707feb37782b3fc18d2fc4c455666a74ffdc5f4d495fe4e2848cc92343 +size 100734 diff --git a/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_origin.pdf b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f8f481defe92fa07ef7fc62e5beadc1c78cca619 --- /dev/null +++ b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/71eb7d67-e7b9-431d-93e5-0708eb3047a2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2369d7f75c1102a951b85be1c83e6d58a3f67a1a058b62560264cff4a09edb39 +size 1564128 diff --git a/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/full.md 
# A Reinforced Generation of Adversarial Examples for Neural Machine Translation

Wei Zou$^{1}$, Shujian Huang$^{1}$, Jun Xie$^{2}$, Xinyu Dai$^{1}$, Jiajun Chen$^{1}$

$^{1}$ National Key Laboratory for Novel Software Technology, Nanjing University, China
$^{2}$ Tencent Technology Co., China

zouw@smail.nju.edu.cn, {huangsj, daixinyu, chenjj}@nju.edu.cn, stiffxie@tencent.com

# Abstract

Despite their significant efficacy, neural machine translation systems tend to fail on less clean inputs, which may significantly harm their credibility; fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance. Instead of collecting and analyzing bad cases using limited handcrafted error features, here we investigate this issue by generating adversarial examples via a new paradigm based on reinforcement learning. Our paradigm can expose pitfalls for a given performance metric, e.g., BLEU, and can target any given neural machine translation architecture. We conduct adversarial attack experiments on two mainstream neural machine translation architectures, RNN-search and Transformer. The results show that our method efficiently produces stable attacks with meaning-preserving adversarial examples. We also present a qualitative and quantitative analysis of the attack's preference patterns, demonstrating its capability for pitfall exposure.
# 1 Introduction

Neural machine translation (NMT) based on the encoder-decoder framework, such as RNN-Search (Bahdanau et al., 2014; Luong et al., 2015) or the Transformer (Vaswani et al., 2017), has achieved remarkable progress and become the de facto approach in various machine translation applications. However, there are still pitfalls for a well-trained neural translation system, especially when it is applied to real-world inputs that are less clean than the training data (Belinkov and Bisk, 2017). For example, typos may severely deteriorate system outputs (Table 1). Moreover, recent studies show that a neural machine translation system can also be broken by noisy synthetic inputs (Belinkov and Bisk, 2017; Lee et al., 2018). Due to the black-box nature of a neural system, it has
| | |
|---|---|
| in | 耶路撒冷发生自杀爆炸事件 |
| out | suicide bombing in jerusalem |
| in | 耶路撒冷发生自杀爆事件 |
| out | eastern jerusalem explores a case of eastern europe |
Table 1: Fragility of neural machine translation. A typo leaving out the Chinese character "炸" leads to significant errors (noted by italics) in the English translation. Both "爆" and "爆炸" mean "bombing" in English.

been a challenge to fathom when and how the system tends to fail.

Intuitively, researchers seek to apprehend such failures through the analysis of handcrafted error-indicating features (Zhao et al., 2018; Karpukhin et al., 2019). This strategy is costly because it requires expert knowledge of both linguistics and the target neural architecture. Such features are also less applicable because some common errors of deep learning systems are hard to formulate or very specific to certain architectures.

Instead of designing error features, recent researchers adopt ideas from adversarial learning (Goodfellow et al., 2014) to generate adversarial examples for mining pitfalls of NLP systems (Cheng et al., 2018a; Ebrahimi et al., 2018; Zhao et al., 2017). Adversarial examples are minimally perturbed inputs that keep the semantic meaning yet yield degraded outputs. The generation of valid adversarial examples provides tools for error analysis that are interpretable for ordinary users, which can contribute to system maintenance. Though this approach has achieved success for continuous inputs, e.g., images, it faces the following major issues for NLP tasks.

First, it is non-trivial to generate valid discrete tokens for natural language, e.g., words or characters. Cheng et al. (2018a) follow Goodfellow et al. (2014) to learn a noised representation and then sample tokens accordingly. However, there is no guaranteed correspondence between an arbitrary representation and valid tokens. Therefore, it may generate tokens departing from the learned representation, which undermines the generation. Ebrahimi et al. (2018) turn to a search paradigm with a brute-force search for direct perturbations on the token level. To guide the search, a gradient-based surrogate loss must be designed upon every token modification given target annotations. However, this paradigm is inefficient due to the formidable computation of gradients. Furthermore, surrogate losses defined upon each token by targets require high-quality targets and risk being invalidated by any perturbation that changes the tokenization.

| | |
|---|---|
| in | Two man are playing on the street corner. |
| perturbed in | Two man are playing frisbee in the park. |
| out | Zwei Männer spielen an einer Straßenecke. |
| perturbed out | Zwei Männer spielen frisbee im park. |

Table 2: Example of an undesirable perturbation in adversarial examples for machine translation from Zhao et al. (2017): though it yields a very different output compared to the original, it does not indicate a system malfunction.

Another issue is keeping the semantics of the original inputs. Unlike for images, where minor noise does not change the semantics, sampling discrete tokens from an arbitrarily perturbed representation (Cheng et al., 2018a) may generate tokens with different semantics and lead to ill-perturbed samples (Table 2). Searching for the perturbed input also requires a semantic constraint on the search space, for which handcrafted constraints are employed (Ebrahimi et al., 2018). Though constraints can also be introduced by multitask modeling with additional annotations (Zhao et al., 2017), this is still not sufficient for tasks requiring strict semantic equivalence, such as machine translation.

In this paper, we adopt a novel paradigm that generates more reasonable tokens and secures semantic constraints as much as possible. We summarize our contributions as follows:

- We introduce a reinforcement learning (Sutton and Barto, 2018, RL) paradigm with a discriminator as the terminal signal in its environment to further constrain semantics.
This paradigm learns to apply discrete perturbations on the token level, aiming for direct degradation of the translation metric. Experiments show that our approach not only achieves semantically constrained adversarial examples but also leads to effective attacks on machine translation.

- Our paradigm achieves adversarial example generation with superior efficiency, given only source data. Since our method is model-agnostic and free of handcrafted error features targeting specific architectures, it is also viable across different machine translation models.
- We also present an analysis of the state-of-the-art Transformer based on attacks against it, showing our method's competence in exposing system pitfalls.

# 2 Preliminaries

# 2.1 Neural Machine Translation

The most popular architectures for neural machine translation are RNN-search (Bahdanau et al., 2014) and the Transformer (Vaswani et al., 2017). They share the paradigm of learning the conditional probability $P(Y|X)$ of a target translation $Y = [y_{1},y_{2},\dots,y_{m}]$ given a source input $X = [x_{1},x_{2},\dots,x_{n}]$. A typical NMT architecture consists of an encoder, a decoder, and attention networks. The encoder encodes the source embeddings $X_{emb} = [emb_{1},emb_{2},\dots,emb_{n}]$ into hidden representations $H = [h_1,h_2,\dots,h_n]$. Then a decoder $f_{dec}$ with an attention network attentively accesses $H$ for an auto-regressive generation of each $y_{i}$ until the end-of-sequence symbol (EOS) is generated:

$$
P(y_{i} \mid y_{<i}, X) = \operatorname{softmax}\left(f_{dec}(y_{i-1}, s_{t}, c_{t}; \theta_{dec})\right) \tag{1}
$$

where $c_{t}$ is the attentive result for the current decoder state $s_{t}$ given $H$.
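The auto-regressive generation of Eq. 1 can be sketched as a greedy decoding loop. This is a minimal illustration, not the paper's implementation; `f_dec` and `attend` are hypothetical stand-ins for the decoder step and the attention network.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax, the outer operation of Eq. 1."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def greedy_decode(f_dec, y0, s0, attend, H, eos_id, max_len=50):
    """Repeatedly apply Eq. 1 and pick the argmax token until EOS appears."""
    y_prev, s_t, out = y0, s0, []
    for _ in range(max_len):
        c_t = attend(s_t, H)                   # attentive context c_t over H
        logits, s_t = f_dec(y_prev, s_t, c_t)  # one decoder step
        p = softmax(logits)                    # P(y_i | y_<i, X)
        y_prev = int(np.argmax(p))
        out.append(y_prev)
        if y_prev == eos_id:
            break
    return out
```

In practice the decoder step would be a trained network; here it is any callable returning logits and an updated state.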
# 2.2 Actor-Critic for Reinforcement Learning

Reinforcement learning (Sutton and Barto, 2018, RL) is a widely used machine learning technique following the paradigm of **explore and exploit**, which is apt for unsupervised policy learning in many challenging tasks, e.g., games (Mnih et al., 2015). It is also used for direct optimization of non-differentiable learning objectives (Wu et al., 2018; Bahdanau et al., 2016) in NLP.

Actor-critic (Konda and Tsitsiklis, 2000) is one of the most popular RL architectures, in which the agent consists of separate policy and value networks called the actor and the critic. Both take the environment state $s_t$ at each time step as input, while

![](images/95da735abf6882ecf1b1c3214686782a1f0c7707cb7e7cd97d9fb8e8b5a74255.jpg)
Figure 1: [a] Overview of our RL architecture. ① Environment states are processed as inputs for the agent; ② the agent yields a modification upon SRC in the environment; ③ determine survival and the step reward of the environment; ④ determine degradation with the victim NMT as the episodic reward; ⑤ update the agent with the total rewards. During a training episode, we loop ① to ③ and accumulate step rewards until the environment terminates. Dashed lines indicate execution at the end of an episode. [b] Architecture of the discriminator. [c] Architecture of the agent.

![](images/014d9a12e32b703b44424c0922a19fb806c42e23996ef25ab336c587be2832f6.jpg)

![](images/53e635bab7445b96a8149018de6d563e81732916c30a3090729b42fe40ccfee8.jpg)
Thus the actor policy loss $L^{\pi}$ on step $t$ is: + +$$ +L _ {t} ^ {\pi} \left(\theta_ {\pi}\right) = \log P \left(a _ {t} \mid s _ {t}\right) A _ {t} \left(s _ {t}, a _ {t}\right); a _ {t} \in \mathcal {A} \tag {2} +$$ + +where $\theta_{\pi}$ denotes actor parameters, $A_{t}(s_{t},a_{t})$ denotes general advantage function (Schulman et al., 2015) on state $s_t$ for action $a_{t}$ given by $\sum_{i = 0}^{k - 1}\gamma^i r_{t + i} + \gamma^k V(s_{t + k}) - V(s_t)$ , which can be further derived as: + +$$ +\begin{array}{l} A _ {t} \left(s _ {t}, a _ {t}\right) = \gamma A _ {t + 1} \left(s _ {t + 1}, a _ {t + 1}\right) + r _ {t} \\ + \gamma V _ {t + 1} \left(s _ {t + 1}\right) - V _ {t} \left(s _ {t}\right) \tag {3} \\ \end{array} +$$ + +Meanwhile, critic learns to estimate $R_{t}$ via minimizing a temporal difference loss $L^{v}$ on each step $t$ : + +$$ +L _ {t} ^ {v} \left(\theta_ {v}\right) = \frac {1}{2} \left(r _ {t} + \gamma R _ {t + 1} - V _ {t} \left(s _ {t}\right)\right) ^ {2} \tag {4} +$$ + +where $\theta_v$ denotes critic parameter. + +Usually, the training is regularized by maximizing policy entropy $H^{\pi}$ to avoid exploration failure before exploiting optimum policy (Ziebart, 2010). Thus the total loss becomes: + +$$ +L (\theta) = \sum_ {t} \left(\alpha L _ {t} ^ {v} - L _ {t} ^ {\pi} - \beta H ^ {\pi} (\cdot | s _ {t})\right) \tag {5} +$$ + +where $\alpha$ and $\beta$ are hyperparameters for value loss and entropy coefficients. + +# 2.3 adversarial examples in NLP + +A general adversarial example generation can be described as the learning process to find a perturbation $\delta$ on input $X$ that maximize system degradation $L_{adv}$ within a certain constraint $C(\delta)$ : + +$$ +\underset {\delta} {\operatorname {a r g m a x}} L _ {a d v} (X + \delta) - \lambda C (\delta) \tag {6} +$$ + +where $\lambda$ denotes the constraint coefficient, $L_{adv}$ is determined by the goal of the attack. 
However, the currently effective adversarial generation approach for NLP is to search by maximizing a surrogate gradient-based loss:

$$
\operatorname*{argmax}_{1 \leq i \leq n,\; x^{\prime} \in vocab} L_{adv}\left(x_{0}, x_{1}, \dots, x_{i}^{\prime}, \dots, x_{n}\right) \tag{7}
$$

where $L_{adv}$ is a differentiable function indicating the adversarial objective. Due to the formidable search space, this paradigm simply perturbs a small ratio of the token positions and greedily searches by brute force among candidates. Note that adversarial example generation is fundamentally different from noising hidden representations in adversarial training (Cheng et al., 2019; Sano et al., 2019), which is not the concern of this work.

# 3 Approach

In this section, we describe our reinforced learning and generation of adversarial examples (Figure 1) in detail. Overall, the victim model is part of the environment (denoted as $Env$), which yields rewards indicating overall degradation based on the modified inputs. A reinforced agent learns to modify every source position from left to right sequentially. Meanwhile, a discriminator in Env provides every-step survival signals by determining whether SRC is ill-perturbed.

# 3.1 Environment

We encapsulate the victim translation model with a discriminative reward process as an Env for a reinforced agent to interact with.

# 3.1.1 Environment State

The state of the Env is described as $s_t = (SRC, t)$, where $SRC = [src_{0}, src_{1}, \dots, src_{N}]$ are $N$ sequences processed by the victim model's vocabulary and tokenization. Each sequence $src_i = [x_1, x_2, \dots, x_n]$ is concatenated with $BOS, EOS$, which indicate the beginning and end of the sequence, and then padded to the same length. The time step $t \in [1, n]$ also indicates the token position to be perturbed by the agent. Env will consecutively loop over all token positions and update $s_t$ based on the agent's modification.
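The state/step loop of Sec. 3.1 might be skeletonized as follows. This is a hypothetical sketch, not the released implementation: `discriminator` stands in for $D$ (returning $P(\text{positive})$ for a perturbed source) and `degradation` for the episodic degradation $r_d$, with the survival/termination logic following the reward process of Sec. 3.1.2.

```python
class AttackEnv:
    """Minimal sketch of Env: loop positions t = 1..n, gate on D, reward."""

    def __init__(self, discriminator, degradation, a=1.0, b=1.0):
        self.D, self.degrade, self.a, self.b = discriminator, degradation, a, b

    def reset(self, src_batch):
        self.src = [list(s) for s in src_batch]   # batch of token sequences
        self.alive = [True] * len(self.src)
        self.t = 1                                # position 0 is BOS
        return (self.src, self.t)

    def step(self, edits):
        """Apply the agent's per-sequence edit at position t, then score."""
        for i, tok in enumerate(edits):
            if self.alive[i] and tok is not None:
                self.src[i][self.t] = tok
        surv = [self.D(s) if ok else 0.0 for s, ok in zip(self.src, self.alive)]
        self.alive = [ok and p > 0.5 for ok, p in zip(self.alive, surv)]
        if not any(self.alive):                   # all sequences ill-perturbed
            return (self.src, self.t), -1.0, True
        reward = sum(self.a * p for p in surv) / len(self.src)
        done = self.t == len(self.src[0]) - 1     # reached the last position
        if done:                                  # add episodic degradation r_d
            reward += sum(self.b * self.degrade(s) for s in self.src) / len(self.src)
        self.t += 1
        return (self.src, self.t), reward, done
```

A real environment would call the victim NMT model inside `degradation`; here any scoring callable works.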
Env also yields reward signals until it ends or is terminated intermediately, that is, when all sequences in SRC are determined by $D$ to be ill-perturbed during the reward process. Once the Env terminates, it finishes the current episode and resets its state with a new batch of sequences as SRC.

# 3.1.2 Reward Process with Discriminator

The reward process is only used during training. It consists of a survival reward $r_{\mathrm{s}}$ at every step and a final degradation $r_{\mathrm{d}}$ concerning an overall metric if the agent survives until the end. Overall, we have:

$$
r_{t} = \begin{cases} -1, & \text{terminated} \\ \frac{1}{N} \sum_{N} a \cdot r_{\mathrm{s}}, & \text{survive},\ t \in [1, n) \\ \frac{1}{N} \sum_{N} \left(a \cdot r_{\mathrm{s}} + b \cdot r_{\mathrm{d}}\right), & \text{survive},\ t = n \end{cases} \tag{8}
$$

where $a, b$ are hyperparameters that keep the overall $r_{\mathrm{s}}$ and $r_{\mathrm{d}}$ within similar magnitudes.

Instead of directly optimizing the constrained adversarial loss in Eq. 6, we model discriminator $D$'s output as survival rewards, similar to those in gaming (Mnih et al., 2015). That is, the agent must survive for its goal by also fooling $D$, which attempts to terminate ill-perturbed modifications. We define an ill-perturbed source by determining whether it still matches the original target $tgt$.

**Discriminator** As shown in Figure 1(b), the discriminator $D$ consists of bi-directional GRU encoders for both the source and target sequences. Their corresponding representations are averaged and concatenated before being passed to a feedforward layer with dropout. Finally, the output distribution is calculated by a softmax layer.
Once $D$ determines the pair to be positive, the corresponding probability is regarded as the reward; otherwise the reward is 0:

$$
r_{\mathrm{s}} = \begin{cases} P\left(\text{positive} \mid (src^{\prime}, tgt); \theta_{d}\right), & \text{positive} \\ 0, & \text{otherwise} \end{cases} \tag{9}
$$

As long as the environment survives, it yields the reward averaged over the samples in SRC (Eq. 8) to mitigate reward fluctuations that destabilize training.

**Discriminator Training** Similar to GAN training, the environment's $D$ must update as the agent updates. During $D$'s training, the agent's parameters are frozen to provide training samples. For every training epoch of $D$, we randomly choose half of the batch and perturb its source using the current agent to obtain negative samples. During $D$'s updates, we likewise randomly generate a new batch of pairs from the parallel data to test its accuracy. $D$ is updated for at most $step_{D}$ epochs, or until its test accuracy reaches acc_bound.

Env only yields -1 as the overall terminal reward when all sequences in SRC are terminated intermediately. For samples classified as negative during survival, their follow-up rewards and actions are masked as 0. If the agent survives until the end, Env yields the additional averaged $r_{\mathrm{d}}$ as the final reward for the episode. We follow Michel et al. (2019) in adopting relative degradation:

$$
r_{\mathrm{d}} = \frac{\operatorname{score}(y, refs) - \operatorname{score}(y^{\prime}, refs)}{\operatorname{score}(y, refs)} \tag{10}
$$

where $y$ and $y^\prime$ denote the original and perturbed outputs, $refs$ are the references, and score is a translation metric. If $\operatorname{score}(y, refs)$ is zero, we return zero as $r_{\mathrm{d}}$. To calculate the score, we retokenize the perturbed SRC with the victim model's vocabulary and tokenizer before translation.
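Eq. 10 in code form, with `score` as any pluggable translation metric (the paper uses BLEU; the usage below substitutes a toy unigram-overlap scorer purely for illustration):

```python
def relative_degradation(score, y, y_perturbed, refs):
    """Eq. 10: relative drop in metric score caused by the perturbation;
    returns 0 when the original output already scores 0."""
    base = score(y, refs)
    if base == 0:
        return 0.0
    return (base - score(y_perturbed, refs)) / base
```

For example, with a unigram-overlap scorer, a perturbed output that loses two of three matched unigrams yields $r_{\mathrm{d}} = 2/3$.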
# 3.2 Agent

As shown in Figure 1(c), the agent's actor and critic share the same input layers and encoder, but are then processed by individual feedforward and output layers. The actor takes in $SRC$ and the current token with its surrounding context $(x_{t-1}, x_t, x_{t+1})$, then yields a binary distribution determining whether to attack the token at step $t$, while the critic emits a value $V(s_t)$ for every state. Once the actor decides to perturb a specific token, the token is replaced by another token from its candidate set.

**Candidate Set** We collect at most $K$ candidates for each token in the victim's vocabulary within a distance of $\epsilon$, where $\epsilon$ is the averaged Euclidean distance of the $K$-nearest embeddings over all tokens in the victim vocabulary. We note that there must always be candidates for tokens in test scenarios, including tokens beyond the victim's vocabulary; for those without a nearby candidate, we assign UNK as the candidate. Once the agent chooses to replace a token with UNK, we follow Michel et al. (2019) and present a valid token that is also UNK to the victim's vocabulary.

**Agent Training** The agent is trained by the algorithm in Appendix A. Since the agent is required to explore with a stochastic policy during training, it first samples from its actor's output distribution on whether to perturb the current position, then randomly chooses among the candidates. The agent and discriminator take turns to update. We assume training has converged when the test accuracy of $D$ does not exceed a certain value within a certain number of consecutive learning rounds of the agent and discriminator.

**Agent Inference** To generate adversarial examples, the agent takes in source sequences and perturbs each position based on the actor's output from left to right, then chooses the nearest candidate.
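The candidate-set construction above can be sketched as follows. This is an illustrative brute-force version (fine for small vocabularies); `unk_id` is an assumed index for the UNK token.

```python
import numpy as np

def build_candidate_sets(emb, K=5, unk_id=0):
    """For each vocabulary entry, keep up to K Euclidean-nearest neighbours
    within the threshold epsilon, the mean K-NN distance over the whole
    vocabulary; tokens with no near neighbour fall back to [unk_id]."""
    V = emb.shape[0]
    # pairwise Euclidean distances; self-distance masked out
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.sort(d, axis=1)[:, :K]        # K nearest distances per token
    eps = knn[np.isfinite(knn)].mean()     # averaged K-NN distance = epsilon
    cands = {}
    for i in range(V):
        order = np.argsort(d[i])[:K]
        near = [int(j) for j in order if d[i, j] <= eps]
        cands[i] = near if near else [unk_id]
    return cands, eps
```

An isolated embedding (far from everything else) thus gets `[unk_id]` as its only candidate, matching the UNK fallback described above.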
Since the agent's critic learns to estimate the expected future reward of a step, the agent perturbs only when the critic yields a positive value; otherwise the value indicates an undesirable perturbation, and the agent is muted.

# 4 Experiments

# 4.1 Data Sets

We test our adversarial example generation on $\mathrm{Zh}\rightarrow \mathrm{En}$, $\mathrm{En}\rightarrow \mathrm{Fr}$, and $\mathrm{En}\rightarrow \mathrm{De}$ translation tasks, which provide relatively strong baselines for victim models and mass test samples.

We train our agent using only the parallel data used for the victims' training: LDC $\mathrm{Zh}\rightarrow \mathrm{En}^{2}$ (1.3M pairs), WMT14 $\mathrm{En}\rightarrow \mathrm{De}^{3}$ (4.5M pairs), and WMT15 $\mathrm{En}\rightarrow \mathrm{Fr}^{4}$ (2.2M pairs) for the respective victim models. For subword-level translation, we apply byte pair encoding (Sennrich et al., 2015, BPE) to both source and target languages with a vocabulary size of 37k. We also use joint BPE for the En-De and En-Fr experiments, with vocabulary sizes of 34k and 33k, respectively. For word-level translation, we use the NLPIR-ICTCLAS and Moses tokenizers for Chinese and English tokenization, respectively, and adopt a 30k vocabulary for both source and target languages. We adopt the NIST test sets$^{5}$ for $\mathrm{Zh}\rightarrow \mathrm{En}$ and the WMT test sets for $\mathrm{En}\rightarrow \mathrm{De}$ and $\mathrm{En}\rightarrow \mathrm{Fr}$, then generate adversarial examples from these sources for analysis.

# 4.2 Victim Models

We choose the state-of-the-art RNN-search and Transformer as victim translation models. For RNN-search, we train subword-level models and strictly follow the architecture in Bahdanau et al. (2014).
As for the Transformer, we train both word-level and subword-level models for $\mathrm{Zh}\rightarrow \mathrm{En}$ and only subword-level models for $\mathrm{En}\rightarrow \mathrm{De}$ and $\mathrm{En}\rightarrow \mathrm{Fr}$, with the architecture and base parameter settings of Vaswani et al. (2017). For the above models, we apply the same batch scheme and Adam optimizer following Vaswani et al. (2017). We choose MT03, newsdiscuss2015, and newstest2013 as the validation sets for $\mathrm{Zh}\rightarrow \mathrm{En}$, $\mathrm{En}\rightarrow \mathrm{Fr}$, and $\mathrm{En}\rightarrow \mathrm{De}$, respectively.

# 4.3 Metrics

We report attack results both in terms of the character-level BLEU (chrBLEU) of the perturbed source with respect to the original, which indicates the modification rate, and the relative decrease in target BLEU ($RD$):

$$
RD = \frac{\mathrm{BLEU}(y, refs) - \mathrm{BLEU}(y^{\prime}, refs)}{\left(1 - \operatorname{chrBLEU}(x^{\prime}, x)\right) \times \mathrm{BLEU}(y, refs)} \tag{11}
$$

We adopt sacreBLEU (Post, 2018) to compute case-insensitive BLEU on detokenized targets.

$^{2}$ LDC2002E18, LDC2003E14, LDC2004T08, LDC2005T06
$^{3}$ https://nlp.stanford.edu/projects/nmt/
$^{4}$ Europarl-v7, news-commentary-v10
$^{5}$ MT02, 03, 04, 05, 06
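Eq. 11 as a small helper; `bleu` and `chr_bleu` are stand-ins for the corpus-level and character-level BLEU scorers (this sketch only assumes they return floats).

```python
def relative_decrease(bleu, chr_bleu, y, y_pert, x, x_pert, refs):
    """Eq. 11: target-BLEU drop normalised by the source modification
    rate, taken as (1 - chrBLEU of perturbed source vs. original)."""
    base = bleu(y, refs)
    mod_rate = 1.0 - chr_bleu(x_pert, x)
    return (base - bleu(y_pert, refs)) / (mod_rate * base)
```

For instance, a 10-point BLEU drop from a baseline of 40 with chrBLEU 0.8 (a 20% modification rate) gives $RD = 10 / (0.2 \times 40) = 1.25$.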
| | BLEU | chrBLEU | RD↑ | HE↑ |
|---|---|---|---|---|
| **Zh-En MT02-06** | | | | |
| Transformer-word | 41.16 | - | - | - |
| RSNI (0.2)* | 29.68 | 0.892 | 2.580* | 1.39* |
| RSNI (0.3)* | 19.94 | 0.781 | 2.350* | 1.10* |
| GS (0.2) | 33.46 | 0.749 | 0.746 | 3.23 |
| GS (0.3) | 29.86 | 0.676 | 0.847 | 2.49 |
| Ours | 33.72 | 0.804 | 0.952 | 3.73 |
| Transformer-BPE | 44.06 | - | - | - |
| RSNI (0.2)* | 34.44 | 0.892 | 2.019* | 1.45* |
| RSNI (0.4)* | 25.78 | 0.781 | 1.891* | 1.08* |
| GS (0.2) | 35.52 | 0.823 | 1.094 | 3.88 |
| GS (0.4) | 28.18 | 0.675 | 1.004 | 2.90 |
| Ours | 35.48 | 0.807 | 1.009 | 3.79 |
| RNN-search-BPE | 40.90 | - | - | - |
| RSNI (0.2)* | 32.54 | 0.892 | 1.891* | 1.44* |
| RSNI (0.4)* | 25.54 | 0.781 | 1.712* | 1.36* |
| GS (0.2) | 32.94 | 0.823 | 1.102 | 3.79 |
| GS (0.4) | 27.02 | 0.678 | 1.053 | 2.88 |
| Ours | 31.58 | 0.785 | 1.059 | 3.81 |
As Michel et al. (2019) suggest, there is a tradeoff between achieving a high $RD$ and maintaining semantics. One can achieve a rather high $RD$ by testing with mismatched references, making the degradation less meaningful. Therefore, we also test source semantic similarity with a human evaluation ($HE$) ranging from 0 to 5, as used by Michel et al. (2019), by randomly sampling 10% of the total sequences mixed with the baselines for a double-blind test.

# 4.4 Results

We implement the state-of-the-art adversarial example generation by gradient search (Michel et al., 2019) (GS) as a baseline, which can currently be applied to various translation models. We also implement random synthetic noise injection (Karpukhin et al., 2019) (RSNI) as an unconstrained contrast. Both baselines require a ratio of tokens to perturb during an attack, for which we present the best results. Unlike our paradigm, which can generate from monolingual data alone, GS also requires target annotations, where we use one of the references to provide a strong baseline. Note that RSNI can significantly break semantics, trading a distinctly lower $HE$ for a rather high $RD$, which we do not consider legitimate adversarial example generation; such results are noted with "*" for exclusion.

As shown in Tables 3 and 4, our model

Table 3: Experimental results for the $\mathrm{Zh}\rightarrow \mathrm{En}$ MT attack. We list the $BLEU$ of the perturbed test sets generated by each adversarial example generation method, which is expected to deteriorate. An ideal adversarial example should achieve a high $RD$ together with a high $HE$.
| | BLEU | chrBLEU | RD↑ | HE↑ |
|---|---|---|---|---|
| **En-De newstest13-16** | | | | |
| RNN-search-BPE | 25.35 | - | - | - |
| RSNI (0.2)* | 16.70 | 0.949 | 6.691* | 2.32* |
| RSNI (0.4)* | 10.05 | 0.897 | 5.860* | 1.58* |
| GS (0.2) | 19.42 | 0.881 | 1.966 | 3.81 |
| GS (0.4) | 9.27 | 0.680 | 1.982 | 3.01 |
| Ours | 21.27 | 0.921 | 2.037 | 3.95 |
| Transformer-BPE | 29.05 | - | - | - |
| RSNI (0.2)* | 18.775 | 0.949 | 6.935* | 2.39* |
| RSNI (0.4)* | 11.125 | 0.897 | 5.991* | 1.58* |
| GS (0.2) | 18.29 | 0.861 | 2.665 | 3.69 |
| GS (0.4) | 10.03 | 0.751 | 2.629 | 3.33 |
| Ours | 19.29 | 0.875 | 2.688 | 3.79 |
| **En-Fr newstest13-14 + newsdiscuss15** | | | | |
| RNN-search-BPE | 32.6 | - | - | - |
| RSNI (0.2)* | 21.93 | 0.947 | 6.175* | 2.23* |
| RSNI (0.4)* | 14.3 | 0.894 | 5.271* | 1.56* |
| GS (0.2) | 22.7 | 0.833 | 1.818 | 3.80 |
| GS (0.4) | 15.2 | 0.708 | 1.828 | 3.25 |
| Ours | 22.3 | 0.843 | 2.009 | 3.87 |
| Transformer-BPE | 34.7 | - | - | - |
| RSNI (0.2)* | 24.0 | 0.947 | 5.774* | 2.34* |
| RSNI (0.4)* | 15.8 | 0.894 | 5.114* | 1.67* |
| GS (0.2) | 23.01 | 0.830 | 1.982 | 3.74 |
| GS (0.4) | 19.6 | 0.788 | 2.053 | 3.68 |
| Ours | 21.33 | 0.798 | 1.907 | 3.78 |
Table 4: Experimental results for the $\mathrm{En}\rightarrow \mathrm{De}$ and $\mathrm{En}\rightarrow \mathrm{Fr}$ MT attacks.

stably generates adversarial examples without significant changes in semantics, using the same training settings across different models and language pairs, and achieves a stably high $HE$ ($>3.7$) without any handcrafted semantic constraints. Search methods (GS), by contrast, must be tuned for a proper modification ratio and can hardly strike a balance between semantic constraints and degradation. Unlike the search paradigm, which relies on references and victim gradients, our paradigm is model-agnostic yet still achieves a comparable $RD$ with a relatively high $HE$.

# 4.5 Case Study

As shown in Table 5, our method is less likely to perturb easily modified semantics (e.g., numbers are edited into other "forms" but not into different numbers), while search tends to generate semantically different tokens to achieve degradation. Thus our agent can lead to more insightful and plausible analyses of neural machine translation than search by gradient.

# 5 Analysis

# 5.1 Efficiency

As shown in Figure 2, given the same amount of memory, our method is significantly more
| | |
|---|---|
| **(a)** | |
| origin in | 全国4000万选民将在16名候选人中选举法兰西第五共和国第七任总统。 |
| origin out | 40 million voters throughout the country will elect the seventh president of the fifth republic of france among the 16 candidates |
| references | 40 million voters in the nation will elect the 7th president for the french fifth republic from 16 candidates.<br>there are 40 million voters and they have to pick the fifth republic france's seventh president amongst the sixteen candidates.<br>forty million voters across the country are expected to choose the 7th president of the 5th republic of france from among 16 candidates.<br>40 million voters around france are to elect the 7th president of the 5 republic of france from 16 candidates. |
| GS (0.4) in | 全国性4000万市民将在6名候选人中选举法兰西第五国家第七任外交部长。 |
| GS (0.4) out | of the 6 candidates, 40 million people will elect the seventh foreign minister of the five countries. |
| ours in | 全国性4000万选民将在16位候选人中选举法兰西第5共和国第7任总统 |
| ours out | among the 16 candidates, 40 million voters will elect five presidents of France and seven presidents of the republic of France. |
| **(b)** | |
| origin in | 千案者目前被也门当局扣留。 |
| origin out | the persons involved in the case are currently detained by the yemeni authorities. |
| references | the perpetrator is currently in the custody of the yemeni authorities.<br>yemeni authority apprehended the suspect.<br>the suspect is now in custody of yemeni authorities.<br>the ones involved in this case were also detained by the authority. |
| GS (0.4) in | 千案者目前为也门现局留。 |
| GS (0.4) out | the person involved in the case is now detained by the authorities! |
| ours in | 千案方目前被也门当局扣留。 |
| ours out | the victim is currently detained by the yemeni authorities. |
Table 5: (a) An example of a perturbed number and quantifier severely damaging outputs in $\mathrm{Zh}\rightarrow \mathrm{En}$ translation, where we highlight the changes. "五" is the character for 5 and "七" for 7; "名" and "位" are both commonly used quantifiers for people. The search-based attack, however, achieves degradation through significant changes of semantics, where the number "16" is changed to "6" and "外交部长" means "foreign minister". (b) An example of a changed suffix that breaks the result. "方" and "者" are common suffixes (K) with the same meaning, used for people. Our model spots the victim model's fragility under such a perturbation, while search does not.

efficient than the search paradigm. Gradient computation for every modified source sequence can cost considerable time or space for a state-of-the-art system, and can be even worse for systems with recurrent units. When it comes to mass production of adversarial examples for a victim translation system, our method can also generate from monolingual inputs only, whereas search methods must be provided the same amount of well-informed targets.

![](images/cc458a1b88ecf96792378a9eb928019e6ea5f94759a4e80f6c4c0427e139633c.jpg)
Figure 2: Time consumption of different methods: we limit memory usage to 2.5 GB on a single Nvidia 1080 and generate adversarial examples for the same 800 inputs in $\mathrm{Zh}\rightarrow \mathrm{En}$ MT with different methods; our method significantly outclasses the state-of-the-art search paradigm (GS).

# 5.2 Attack Patterns

NMT systems may have different robustness over different parts of the inputs; thus, some researchers implement input preprocessing targeting certain
We choose Chinese, for example, and adopt LTP POS tagger6 to label NIST test sets, then check the modification rate for each POS. To ensure the reliability of our analysis, we run three rounds of experiments on both baselines and our agent with similar modification rate targeting state-of-the-art Transformer with BPE, and collect overall results. We also present random synthetic noise injection (Karpukhin et al., 2019) (RSNI), which is not intended for any preference as an additional baseline. + +As it is shown in Figure 3, our reinforced paradigm shows distinct preference upon certain POS tags, indicating pitfalls of a victim translation system. At the same time, RSNI distributed almost evenly upon different POS tags. Though the search paradigm (GS) does expose some types of pitfall, our method can further expose those omitted by the search. Note that unlike existing work relying on feature engineering to indicate errors, we have no such features implemented for an agent. However, our agent can still spot error patterns by favoring some of the POS, such as + +![](images/35346049a57dc887679fa09189e1110e9771167b5d14959931ca8f2dafdc1e21.jpg) +Figure 3: Attack preferences of different paradigms targeting $\mathrm{Zh}\rightarrow \mathrm{En}$ Transformer-BPE model. All share a similar modification rate. Our agent shows a significant preference for some POS (e.g., Ni, Nh, Nz, I), which are commonly regarded as hard-to-translate phrases among industrial implementations, while some (e.g., K) are less noticed. Preference among different choices. + +
| | Attack by | BLEU (Δ) |
|---|---|---|
| **Zh-En MT02-06** | | |
| RNN-search-BPE | - | 40.90 |
| | agent-RNN | 31.58 (-9.32) |
| | agent-TF | 32.14 (-8.76) |
| Transformer-BPE | - | 44.06 |
| | agent-TF | 35.48 (-8.58) |
| | agent-RNN | 33.14 (-10.92) |
| **En-De newstest13-16** | | |
| RNN-search-BPE | - | 25.35 |
| | agent-RNN | 21.27 (-4.08) |
| | agent-TF | 17.18 (-8.18) |
| Transformer-BPE | - | 29.05 |
| | agent-TF | 19.29 (-9.76) |
| | agent-RNN | 24.2 (-4.85) |
| **En-Fr newstest13-14 + newsdiscuss15** | | |
| RNN-search-BPE | - | 32.60 |
| | agent-RNN | 22.3 (-10.30) |
| | agent-TF | 19.83 (-14.87) |
| Transformer-BPE | - | 34.70 |
| | agent-TF | 21.33 (-13.37) |
| | agent-RNN | 22.35 (-10.25) |
+ +Ni (organization name), Nh (person name), Nl (location name) and M (numbers), which are commonly accepted as hard-to-translate parts. Moreover, the agent also tends to favor K (suffix), which is less noticed. + +# 5.3 Attack Generalization + +We additionally test agents by attacking a model architecture different from the one they were trained against. As shown in Table 6, we perturb the inputs with agents trained to attack a different architecture, then test for degradation. The results show that an agent trained by targeting the Transformer architecture can still achieve degradation on RNN-search, and vice versa. + +Table 6: Attacks targeting a different architecture from the trained one. We label each agent with the architecture it was trained against (e.g., agent-RNN stands for an agent trained by targeting RNN-search).
| | Clean test | Noisy test | IWSLT11-17 |
|---|---|---|---|
| Transformer-BPE | 44.06 | 35.48 | 11.27 |
| Tuned | 43.60 (-0.46) | 40.31 (+4.83) | 11.73 (+0.46) |
+ +Table 7: Tuning the $\mathrm{Zh}\rightarrow \mathrm{En}$ Transformer-BPE model with adversarial examples. We generate adversarial examples for every training source for tuning, achieving overall improvements on noisy tests. + +# 5.4 Tuning with Adversarial Examples + +Since the agent generates meaning-preserving adversarial examples efficiently, we can directly tune the original model with those samples. We take the $\mathrm{Zh} \rightarrow \mathrm{En}$ Transformer-BPE model as an example and generate adversarial examples for all original training sources (1.3M pairs), paired with the initial targets. We mix the augmented pairs with the original pairs for direct tuning. We test the tuned model on the original test data and on noisy test data generated by the attacking agent. We additionally test on the IWSLT11-17 $\mathrm{Zh} \rightarrow \mathrm{En}$ test data, which is not used for training or tuning, to verify the robustness improvement. As Table 7 shows, our agent achieves a substantial improvement (+4.83) on noisy tests with only a minor loss on clean tests (-0.46). The improvement on the IWSLT test also indicates that adversarial tuning contributes not only to defending against the agent's attack, but also to overall robustness. + +# 5.5 Reinforced Examples for Machine Translation + +We additionally switched the episodic rewards in the environment and ignored all modifications that induce UNK tokens when training an agent, hoping to generate slightly perturbed samples that improve the translation metric. Though we failed to achieve overall improvements, we did succeed for quite a portion of the samples, as shown in Table 8. By analogy with adversarial examples, we call them reinforced examples. Such improvement is different from adversarial training, which tunes the model
| | |
|---|---|
| in | 钱其琛同突尼斯外长会谈。 |
| perturbed in | 钱其琛同突尼斯外长会谈− |
| out | Chinese, Tunisian ministers hold talks. |
| perturbed out | qian qichen holds talks with Tunisian foreign minister. |
| in | 中巴及城巴车辆在南区通宵停泊 |
| perturbed in | 中巴及城巴车辆在南区通宵停车 |
| out | overnight parking of cmb and city bus |
| perturbed out | overnight parking of cmb and city bus in southern district |
+ +Table 8: Examples of slightly perturbed samples that improve machine translation for the $\mathrm{Zh}\rightarrow \mathrm{En}$ Transformer-BPE model. The "。" in the first sample is modified to "−", after which the model yields the previously omitted "钱其琛 (qian qichen)". The "停泊" in the second sample is modified to "停车" (both mean "parking"), after which the model yields the previously omitted "in southern district" for "在南区".
Another pitfall of this paradigm is that surrogate losses defined by a fixed tokenization for non-character level systems, risks being invalidated once the attack changes tokenization. Therefore, + +Ebrahimi et al. (2018) simply focused on char-level systems, while Michel et al. (2019) specially noted to exclude scenarios where attack changes tokenization in their paradigm. + +Other works turn to more sophisticated generation paradigms, e.g., Vidnerova and Neruda (2016) adopts a genetic algorithm for an evolutionary generation targeting simple machine learning models. Zang et al. (2019) consider adversarial generation as a word substitution-based combinatorial optimization problem tackled by particle swarm algorithm. Our paradigm shares some common ideology with Miao et al. (2019) and Xiao et al. (2018), which iteratively edit inputs constrained by generative adversarial learning. + +# 7 Conclusion + +We propose a new paradigm to generate adversarial examples for neural machine translation, which is capable of exposing translation pitfalls without handcrafted error features. Experiments show that our method achieves stable degradation with meaning preserving adversarial examples over different victim models. + +It is noticeable that our method can generate adversarial examples efficiently from monolingual data. As a result, the mass production of adversarial examples for the victim model's analysis and further improvement of robustness become convenient. Furthermore, we notice some exceptional cases which we call as "reinforced samples", which we leave as the future work. + +# Acknowledgement + +We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by National Science Foundation of China (No. 61672277, 61772261), the Jiangsu Provincial Research Foundation for Basic Research (No. BK20170074). 
+ +# References + +Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086. +Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. + +Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173. +Akshay Chaturvedi, KP Abijith, and Utpal Garain. 2019. Exploring the robustness of NMT systems to nonsensical inputs. arXiv preprint. +Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, and Cho-Jui Hsieh. 2018a. Seq2sick: Evaluating the robustness of sequence-to-sequence models with adversarial examples. arXiv preprint arXiv:1803.01128. +Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs. arXiv preprint arXiv:1906.02443. +Yong Cheng, Zhaopeng Tu, Fandong Meng, Junjie Zhai, and Yang Liu. 2018b. Towards robust neural machine translation. arXiv preprint arXiv:1805.06130. +Javid Ebrahimi, Daniel Lowd, and Dejing Dou. 2018. On adversarial examples for character-level neural machine translation. arXiv preprint arXiv:1806.09030. +Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. +Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, and Marjan Ghazvininejad. 2019. Training on synthetic noise improves robustness to natural noise in machine translation. arXiv preprint arXiv:1902.01509. +Vijay R Konda and John N Tsitsiklis. 2000. Actor-critic algorithms. In Advances in neural information processing systems, pages 1008-1014. +Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. NIPS 2018 Workshop IRASL.
+Zhongwei Li, Xuancong Wang, Ai Ti Aw, Eng Siong Chng, and Haizhou Li. 2018. Named-entity tagging and domain adaptation for better customized translation. In Proceedings of the Seventh Named Entities Workshop, pages 41-46, Melbourne, Australia. Association for Computational Linguistics. +Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In EMNLP. +Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6834-6842. +Paul Michel, Xian Li, Graham Neubig, and Juan Miguel Pino. 2019. On evaluation of adversarial perturbations for sequence-to-sequence models. arXiv preprint arXiv:1903.06620. + +Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533. +Matt Post. 2018. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771. +Motoki Sano, Jun Suzuki, and Shun Kiyono. 2019. Effective adversarial regularization for neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 204-210. +John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. +Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235. +Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT Press.
+Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS. +Petra Vidnerová and Roman Neruda. 2016. Evolutionary generation of adversarial examples for deep and shallow machine learning models. In Multidisciplinary International Social Networks Conference. +Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. A study of reinforcement learning for neural machine translation. arXiv preprint arXiv:1808.08866. +Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. 2018. Generating adversarial examples with adversarial networks. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 3905-3911. International Joint Conferences on Artificial Intelligence Organization. +Yuan Zang, Chenghao Yang, Fanchao Qi, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2019. Textual adversarial attack as combinatorial optimization. arXiv preprint arXiv:1910.12196v2. +Yang Zhao, Jiajun Zhang, Zhongjun He, Chengqing Zong, and Hua Wu. 2018. Addressing troublesome words in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 391-400. + +Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2017. Generating natural adversarial examples. arXiv preprint arXiv:1710.11342. + +Brian D Ziebart. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Ph.D. thesis, Carnegie Mellon University. + +# A Training Details for Agent + +We adopt the commonly accepted translation metric BLEU as the score in Eq. 9. We use 50 sequence pairs per batch, both in environment initialization and in training the discriminator and the agent. It is essential to train on batches of sequences to stabilize reinforced training. Furthermore, note that $D$ can be so powerful during the early training stage, compared to the agent's actor, that it can quickly terminate an exploration.
Therefore, we must train on batches and determine an overall terminal signal, as aforementioned, to ensure early exploration. The $step_D$ and $step_A$ are set to 80 and 120. The acc_bound for discriminator training is set to 0.85. The $a$ and $b$ in Eq. 8 are set to 0.5 and 10. The dimensions of the feedforward layers in the agent's actor-critic and in the discriminator are all 256. We initialize the embeddings of both the agent and the discriminator with the victim's embedding. + +For reinforcement learning, we adopt asynchronous learning with an additional global agent holding an additional set of parameters $\theta^{\Omega}$. We set the discount factor $\gamma$ to 0.99, and $\alpha$ and $\beta$ in Eq. 5 to 0.5 and 0.05, respectively. As for the stop criterion, we set patience_round to 15 with a convergence boundary for $acc_{D}$ of 0.52. We adopt Adafactor (Shazeer and Stern, 2018), a memory-efficient variant of Adam, for training. The learning rate for the agent's optimizer is initialized to 0.001 and scheduled by rsqrt with 100 steps of warmup. The $K$ for the candidate set is 12. + +Our agent takes around 30 hours to converge on a single Nvidia 1080ti. Note that a higher acc_bound and a lower convergence boundary for $D$ indicate stronger semantic constraints, which increase training time. + +# B Search-based Attack + +Search-based adversarial generation is currently widely applied in building robust machine translation systems. We generally follow the strategy of Ebrahimi et al. (2018) and Michel et al. (2019),
```
Algorithm 1: Reinforced training for the agent
Result: a learned global agent π_θΩ

 1: assume a global agent π_θΩ with parameter set θΩ
 2: assume an agent π_θ with parameter set θ
 3: initialize: Env with D, θΩ, θ
 4: while not StopCriterion do
 5:     for step_D do
 6:         train D with the current agent π_θ
 7:         if acc_D > acc_bound: break
 8:     end
 9:     test current D's accuracy acc_D for the stop criterion
10:     for step_A do
11:         initialize Env state s_0
12:         synchronize π_θ with π_θΩ
13:         t = t_start
14:         while s_t survives and t - t_start < t_max do
15:             get out_t^actor, V_t = π_θ(s_t)
16:             compute entropy H(out_t^actor)
17:             sample a_t based on out_t^actor
18:             perform a_t and receive r_t and s_{t+1}
19:             t ← t + 1
20:         end
21:         R = 0 for terminal s_t, V(s_t) for non-terminal s_t
22:         for i ∈ {t-1, ..., t_start} do
23:             R ← γR + r_i
24:             accumulate the value loss L_v(θ)
25:             accumulate the policy loss L_π(θ)
26:         end
27:         compute the overall loss L(θ)
28:         perform an asynchronous update on θΩ with gradient ∂L(θ)/∂θ
29:     end
30: end
```
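The backward return accumulation in Algorithm 1 (R initialized to 0 for a terminal state or V(s_t) for a non-terminal one, then R ← γR + r_i walked from the last step back to t_start) can be sketched as follows; this is a minimal illustration, with names chosen for readability rather than taken from the released code:

```python
def discounted_returns(rewards, gamma, bootstrap_value=0.0):
    """Walk the episode's rewards backwards, as in Algorithm 1:
    R starts at 0 for a terminal final state (or at the critic's
    estimate V(s_t) for a non-terminal one), then R <- gamma*R + r_i."""
    R = bootstrap_value
    returns = []
    for r in reversed(rewards):
        R = gamma * R + r
        returns.append(R)
    return list(reversed(returns))

# terminal episode with rewards [1, 0, 2] and gamma = 0.5:
# R_3 = 2, R_2 = 0 + 0.5*2 = 1, R_1 = 1 + 0.5*1 = 1.5
```

Each accumulated R is then compared against the critic's value estimate when forming the value and policy losses.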
+ +which is applicable for both RNN-search and Transformer. More specifically, the $L_{adv}$ in Eq. 7 is derived as:

$$
\underset{1 \leq i \leq n,\, emb_i' \in vocab}{\operatorname{argmax}} |emb' - emb_i| \nabla_{emb_i} L_{adv}, \tag{12}
$$

$$
L_{adv}(X', Y) = \sum_{t=1}^{|y|} \log \left(1 - P(y_t | X', y_{<t-1})\right)
$$

where each $P(y_{t}|X)$ is calculated by Eq. 1 given a corresponding target. For every source sequence, a small ratio of positions is sampled for search. We then greedily search by the corresponding loss over those positions with the given candidates. For better comparison, we adopt the candidate set used in our model instead of naive KNN candidates. Both the baseline and our model share the same UNK generation for presentation. We use homophone replacement for Chinese, and the strategy of Michel et al. (2019) for English. \ No newline at end of file diff --git a/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/images.zip b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..5f6eb26cf30a279d752e8d3393a9c7547ada9bad --- /dev/null +++ b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:766b859e196a1ae7feaf831812f4d474b752a6583a732f7eaa00b75b774cf7ee +size 712446 diff --git a/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/layout.json b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..52da7eb86e8ab89a9d0540f7e3791bd521902b8b --- /dev/null +++ b/areinforcedgenerationofadversarialexamplesforneuralmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid
sha256:efd444ccfb6d3105cb38268f1beae6eafacc784216769e8a5a59dc5b155e8f2d +size 424920 diff --git a/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_content_list.json b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..f3851fae972988eb1bd7eef4f4adf8edcdfadaa2 --- /dev/null +++ b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dbbdf8d3517b9c25e0b29555c83f918e6e115d95e03d3c9c897782bf6c0b229f +size 50870 diff --git a/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_model.json b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_model.json new file mode 100644 index 0000000000000000000000000000000000000000..a336074e598b98f0c049fce922721df03c2bb352 --- /dev/null +++ b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ebc1c797f707ac2ce2690335e456ecce3cba6a2b4922477d2bb5240a11554646 +size 61703 diff --git a/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_origin.pdf b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..3c0f0b21f55b91cbd52cd6243635b34c7cfdb667 --- /dev/null +++ 
b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/d8822e8f-0ce1-4535-a746-c7a4caf3d282_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b03adb7f5499210cc0d467e6259aa762bebf21035fff39df0f871b3c71c718ce +size 529712 diff --git a/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/full.md b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1d15c3c8f13d857132d439f8a6ff96f14f931add --- /dev/null +++ b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/full.md @@ -0,0 +1,249 @@ +# A Relational Memory-based Embedding Model for Triple Classification and Search Personalization + +Dai Quoc Nguyen1, Tu Dinh Nguyen2, Dinh Phung1 + +1Monash University, Australia + +$^{2}$ Trusting Social + +$^{1}\{dai.nguyen,dinh.phung\} @monash.edu$ + +$^{2}$ tu@trustingsocial.com + +# Abstract + +Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems. To this end, we introduce a novel embedding model, named R-MeN, that explores a relational memory network to encode potential dependencies in relationship triples. R-MeN considers each triple as a sequence of 3 input vectors that recurrently interact with a memory using a transformer self-attention mechanism. Thus R-MeN encodes new information from interactions between the memory and each input vector to return a corresponding vector. Consequently, R-MeN feeds these 3 returned vectors to a convolutional neural network-based decoder to produce a scalar score for the triple. Experimental results show that our proposed R-MeN obtains state-of-the-art results on SEARCH17 for the search personalization task, and on WN11 and FB13 for the triple classification task. 
+ +# 1 Introduction + +Knowledge graphs (KGs) – representing the genuine relationships among entities in the form of triples (subject, relation, object), denoted as $(s, r, o)$ – are often insufficient for knowledge representation due to the lack of many valid triples (West et al., 2014). Therefore, research has focused on inferring whether a new triple missing from a KG is likely valid or not (Bordes et al., 2011, 2013; Socher et al., 2013). As summarized in (Nickel et al., 2016; Nguyen, 2017), KG embedding models aim to compute a score for each triple, such that valid triples have higher scores than invalid ones. + +Early embedding models such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015), DistMult (Yang et al., 2015) and ComplEx (Trouillon et al., 2016) often employ simple linear operators such as addition, subtraction and multiplication. Recent embedding models such as ConvE (Dettmers et al., 2018) and CapsE (Nguyen et al., 2019b) successfully apply deep neural networks to score the triples. + +Existing embedding models show promising performance mainly for knowledge graph completion, where the goal is to infer a missing entity given a relation and another entity. But in less-mentioned real applications, such as triple classification (Socher et al., 2013), which aims to predict whether a given triple is valid, and search personalization (Vu et al., 2017), which aims to re-rank the relevant documents returned by a user-oriented search system given a query, these models do not effectively capture potential dependencies among entities and relations from existing triples to predict new triples. + +To this end, we leverage the relational memory network (Santoro et al., 2018) to propose R-MeN to infer the validity of new triples. In particular, R-MeN transforms each triple, with added positional embeddings, into a sequence of 3 input vectors.
R-MeN then uses a transformer self-attention mechanism (Vaswani et al., 2017) to guide the memory to interact with each input vector to produce an encoded vector. As a result, R-MeN feeds these 3 encoded vectors to a convolutional neural network (CNN)-based decoder to return a score for the triple. In summary, our main contributions are as follows: + +- We present R-MeN - a novel KG embedding model to memorize and encode the potential dependencies among relations and entities for two real applications of triple classification and search personalization. +- Experimental results show that R-MeN obtains better performance than up-to-date embedding models, in which R-MeN produces new state-of-the-art results on SEARCH17 + +for the search personalization task, and a new highest accuracy on WN11 and the second-highest accuracy on FB13 for the triple classification task. + +# 2 The proposed R-MeN + +![](images/ea765d8e90bfde0cb18add912a09f634f17961ec71dd58e0d754fb18b9a0739b.jpg) +Figure 1: Processes in our proposed R-MeN for an illustration purpose. "M" denotes a memory. "MLP" denotes a multi-layer perceptron. "g" denotes a memory gating. "CNN" denotes a convolutional neural network-based decoder. + +Let $\mathcal{G}$ be a KG database of valid triples in the form of (subject, relation, object) denoted as $(s, r, o)$ . KG embedding models aim to compute a score for each triple, such that valid triples obtain higher scores than invalid triples. + +We denote $\mathbf{v}_s, \mathbf{v}_r$ and $\mathbf{v}_o \in \mathbb{R}^d$ as the embeddings of $s, r$ and $o$ , respectively. Besides, we hypothesize that relative positions among $s, r$ and $o$ are useful to reason instinct relationships; hence we add to each position a positional embedding. 
Given a triple $(s, r, o)$, we obtain a sequence of 3 vectors $\{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3\}$ as:

$$
\mathbf{x}_1 = \mathbf{W}(\mathbf{v}_s + \mathbf{p}_1) + \mathbf{b}
$$

$$
\mathbf{x}_2 = \mathbf{W}(\mathbf{v}_r + \mathbf{p}_2) + \mathbf{b}
$$

$$
\mathbf{x}_3 = \mathbf{W}(\mathbf{v}_o + \mathbf{p}_3) + \mathbf{b}
$$

where $\mathbf{W} \in \mathbb{R}^{k \times d}$ is a weight matrix, $\mathbf{p}_1, \mathbf{p}_2$ and $\mathbf{p}_3 \in \mathbb{R}^d$ are positional embeddings, and $k$ is the memory size.

We assume a memory $M$ consisting of $N$ rows, wherein each row is a memory slot. We use $M^{(t)}$ to denote the memory at timestep $t$, and $M_{i,:}^{(t)} \in \mathbb{R}^k$ to denote the $i$-th memory slot at timestep $t$. We follow Santoro et al. (2018) and take $\mathbf{x}_t$ to update $M_{i,:}^{(t)}$ using the multi-head self-attention mechanism (Vaswani et al., 2017):

$$
\hat{M}_{i,:}^{(t+1)} = \left[ \hat{M}_{i,:}^{(t+1),1} \oplus \hat{M}_{i,:}^{(t+1),2} \oplus \dots \oplus \hat{M}_{i,:}^{(t+1),H} \right]
$$

with

$$
\hat{M}_{i,:}^{(t+1),h} = \alpha_{i,N+1,h}\left(\mathbf{W}^{h,V}\mathbf{x}_t\right) + \sum_{j=1}^{N} \alpha_{i,j,h} \left(\mathbf{W}^{h,V} M_{j,:}^{(t)}\right)
$$

where $H$ is the number of attention heads, and $\oplus$ denotes vector concatenation. Regarding the $h$-th head, $\mathbf{W}^{h,V}\in \mathbb{R}^{n\times k}$ is a value-projection matrix, in which $n$ is the head size and $k = nH$.
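The input construction above ($\mathbf{x}_t = \mathbf{W}(\mathbf{v} + \mathbf{p}_t) + \mathbf{b}$) can be sketched with numpy; the randomly drawn embeddings and parameters below are illustrative stand-ins for the learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 256  # embedding size d and memory size k (illustrative values)

# embeddings of s, r, o and the three positional embeddings (stand-ins)
v = {name: rng.standard_normal(d) for name in ("s", "r", "o")}
p = [rng.standard_normal(d) for _ in range(3)]

W = rng.standard_normal((k, d))  # shared projection W in R^{k x d}
b = rng.standard_normal(k)

# x_t = W(v + p_t) + b, giving the sequence {x1, x2, x3} fed to the memory
xs = [W @ (v[name] + p[t]) + b for t, name in enumerate(("s", "r", "o"))]
assert all(x.shape == (k,) for x in xs)
```

The three resulting $k$-dimensional vectors are then presented to the relational memory one timestep at a time.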
Note that $\{\alpha_{i,j,h}\}_{j = 1}^{N}$ and $\alpha_{i,N + 1,h}$ are attention weights, computed using the softmax function over scaled dot products:

$$
\alpha_{i,j,h} = \frac{\exp(\beta_{i,j,h})}{\sum_{m=1}^{N+1} \exp(\beta_{i,m,h})}
$$

$$
\alpha_{i,N+1,h} = \frac{\exp(\beta_{i,N+1,h})}{\sum_{m=1}^{N+1} \exp(\beta_{i,m,h})}
$$

with

$$
\beta_{i,j,h} = \frac{\left(\mathbf{W}^{h,Q} M_{i,:}^{(t)}\right)^{\top} \left(\mathbf{W}^{h,K} M_{j,:}^{(t)}\right)}{\sqrt{n}}
$$

$$
\beta_{i,N+1,h} = \frac{\left(\mathbf{W}^{h,Q} M_{i,:}^{(t)}\right)^{\top} \left(\mathbf{W}^{h,K} \mathbf{x}_t\right)}{\sqrt{n}}
$$

where $\mathbf{W}^{h,Q}\in \mathbb{R}^{n\times k}$ and $\mathbf{W}^{h,K}\in \mathbb{R}^{n\times k}$ are query-projection and key-projection matrices, respectively. Following Santoro et al. (2018), we feed a residual connection between $\mathbf{x}_t$ and $\hat{M}_{i,:}^{(t + 1)}$ to a multi-layer perceptron followed by a memory gating to produce an encoded vector $\mathbf{y}_t\in \mathbb{R}^k$ for timestep $t$ and the next memory slot $M_{i,:}^{(t + 1)}$ for timestep $(t + 1)$.

As a result, we obtain a sequence of 3 encoded vectors $\{\mathbf{y}_1,\mathbf{y}_2,\mathbf{y}_3\}$ for the triple $(s,r,o)$. We then use a CNN-based decoder to compute a score for the triple:

$$
f(s, r, o) = \max \left(\operatorname{ReLU}([\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3] * \boldsymbol{\Omega})\right)^{\top} \mathbf{w}
$$

where we view $[\mathbf{y}_1,\mathbf{y}_2,\mathbf{y}_3]$ as a matrix in $\mathbb{R}^{k\times 3}$; $\Omega$ denotes a set of filters in $\mathbb{R}^{m\times 3}$, in which $m$ is the window size of the filters; $\mathbf{w}\in \mathbb{R}^{|\Omega|}$ is a weight vector; $*$ denotes the convolution operator; and $\max$ denotes max-pooling. Note that we use the max-pooling operator – instead of the vector concatenation of all feature maps used in ConvKB (Nguyen et al., 2018) – to capture the most important feature from each feature map and to reduce the number of weight parameters. + +We illustrate our proposed R-MeN in Figure 1. In addition, we employ the Adam optimizer (Kingma and Ba, 2014) to train R-MeN by minimizing the following loss function (Trouillon et al., 2016; Nguyen et al., 2018):

$$
\mathcal{L} = \sum_{(s, r, o) \in \{\mathcal{G} \cup \mathcal{G}'\}} \log \left(1 + \exp \left(- t_{(s, r, o)} \cdot f(s, r, o)\right)\right)
$$

in which $t_{(s,r,o)} = 1$ for $(s,r,o)\in \mathcal{G}$ and $t_{(s,r,o)} = -1$ for $(s,r,o)\in \mathcal{G}'$, where $\mathcal{G}$ and $\mathcal{G}'$ are collections of valid and invalid triples, respectively. $\mathcal{G}'$ is generated by corrupting valid triples in $\mathcal{G}$. + +# 3 Experimental setup + +# 3.1 Task description and evaluation + +# 3.1.1 Triple classification + +The triple classification task is to predict whether a given triple $(s, r, o)$ is valid or not (Socher et al., 2013). Following Socher et al. (2013), we use the two benchmark datasets WN11 and FB13, in which each validation and test set contains the same number of valid and invalid triples. Note that, to avoid reversible relation problems, Socher et al. (2013) excluded from the test set triples where either or both of the subject and object entities appear in the training set under a different relation type or order. Table 1 gives statistics of the experimental datasets.
| Dataset | #E | #R | #Triples in train/valid/test |
|---|---|---|---|
| FB13 | 75,043 | 13 | 316,232 / 11,816 / 47,466 |
| WN11 | 38,696 | 11 | 112,581 / 5,218 / 21,088 |
+ +Table 1: Statistics of the experimental datasets. $\#\mathcal{E}$ is the number of entities. $\#\mathcal{R}$ is the number of relations. + +Each relation $r$ has a threshold $\theta_r$ computed by maximizing the micro-averaged classification accuracy on the validation set. If the score of a given triple $(s, r, o)$ is above $\theta_r$, the triple is classified as valid; otherwise, it is classified as invalid. + +# 3.1.2 Search personalization + +In search personalization, given a query submitted by a user, we aim to re-rank the documents returned by a search system, so that the more relevant a returned document is for that query, the higher it is ranked. We follow Vu et al. (2017) and Nguyen et al. (2019a,b) in viewing the relationship between the submitted query, the user and a returned document as a $(s,r,o)$-like triple (query, user, document); therefore, we can adapt our R-MeN to the search personalization task. + +We evaluate our R-MeN on the benchmark dataset SEARCH17 (Vu et al., 2017) as follows: (i) we train our model and use the trained model to compute a score for each (query, user, document) triple; (ii) we sort the scores in descending order to obtain a new ranked list; (iii) we employ two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1. For each metric, a higher value indicates better ranking performance. + +# 3.2 Training protocol + +# 3.2.1 Triple classification + +We use the common Bernoulli strategy (Wang et al., 2014; Lin et al., 2015) when sampling invalid triples. For WN11, we follow Guu et al. (2015) and initialize entity and relation embeddings in our R-MeN by averaging word vectors in the relations and entities, i.e., $\mathbf{v}_{\text{american\_arborvitae}} = \frac{1}{2} (\mathbf{v}_{\text{american}} + \mathbf{v}_{\text{arborvitae}})$, where these word vectors are taken from the Glove 50-dimensional pre-trained embeddings (Pennington et al., 2014) (i.e., $d = 50$).
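The word-vector averaging used for initialization can be sketched as follows; the toy 4-dimensional vectors stand in for the 50-dimensional Glove embeddings:

```python
import numpy as np

# toy 4-dimensional "pre-trained" word vectors standing in for Glove
word_vec = {
    "american":   np.array([1.0, 0.0, 2.0, 0.0]),
    "arborvitae": np.array([0.0, 2.0, 0.0, 4.0]),
}

def init_entity_embedding(entity_name):
    """v_entity = mean of the word vectors of the entity's tokens,
    e.g. v_american_arborvitae = (v_american + v_arborvitae) / 2."""
    words = entity_name.split("_")
    return np.mean([word_vec[w] for w in words], axis=0)

v = init_entity_embedding("american_arborvitae")
# v == [0.5, 1.0, 1.0, 2.0]
```

Single-word entities are simply initialized with their own word vector, since the mean of one vector is itself.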
For FB13, we use entity and relation embeddings produced by TransE to initialize the corresponding embeddings in our R-MeN; we obtain the best result for TransE on the FB13 validation set when using the $l_{2}$-norm, a learning rate of 0.01, margin $\gamma = 2$, and $d = 50$.

Furthermore, on WN11, we provide a new fine-tuned result for TransE under our experimental setting, wherein we use the same GloVe 50-dimensional pre-trained embeddings to initialize entity and relation embeddings in TransE. We obtain the best score for TransE on the WN11 validation set when using the $l_{1}$-norm, a learning rate of 0.01, margin $\gamma = 6$, and $d = 50$.

In preliminary experiments, we observe the highest accuracies on the validation sets of both datasets when using a single memory slot (i.e., $N = 1$); this is consistent with the use of a single memory slot in language modeling (Santoro et al., 2018). Therefore, we set $N = 1$ for the triple classification task. Also based on preliminary experiments, we select the batch size $bs = 16$ for WN11 and $bs = 256$ for FB13, and set the window size $m$ of the filters to 1 (i.e., $m = 1$).

Regarding the other hyper-parameters, we vary the number of attention heads $H$ in $\{1, 2, 3\}$, the head size $n$ in $\{128, 256, 512, 1024\}$, the number of MLP layers $l$ in $\{2, 3, 4\}$, and the number of filters $F = |\Omega|$ in $\{128, 256, 512, 1024\}$. The memory size $k$ is set so that $k = nH$. To learn our model parameters, we train with Adam, choosing the initial learning rate $lr$ from $\{1e^{-6}, 5e^{-6}, 1e^{-5}, 5e^{-5}, 1e^{-4}, 5e^{-4}\}$. We run up to 30 epochs and use grid search to select the optimal hyper-parameters. After each training epoch, we monitor the accuracy, compute the relation-specific thresholds $\theta_r$, select the hyper-parameters achieving the highest accuracy on the validation set, and report the final accuracy on the test set.
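The relation-specific threshold selection described above can be sketched as follows: for one relation, sweep candidate thresholds over the validation scores and keep the one maximizing classification accuracy. This is an illustrative helper with assumed inputs, not the authors' code.

```python
import numpy as np

def best_threshold(scores, labels):
    """Return the threshold maximizing accuracy on validation triples of one
    relation. `scores` are model scores; `labels` are 1 for valid triples,
    0 for invalid ones. A triple is predicted valid iff score > threshold."""
    order = np.sort(np.unique(scores))
    # Candidates: below the minimum, midpoints between scores, above the maximum.
    candidates = np.concatenate(
        ([order[0] - 1.0], (order[:-1] + order[1:]) / 2, [order[-1] + 1.0]))
    best_t, best_acc = candidates[0], -1.0
    for t in candidates:
        acc = np.mean((scores > t) == (labels == 1))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

scores = np.array([0.9, 0.7, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
t = best_threshold(scores, labels)
assert np.mean((scores > t) == (labels == 1)) == 1.0  # separable toy case
```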
# 3.2.2 Search personalization

We use the same initialization of user profile, query, and document embeddings used by Nguyen et al. (2019b) on SEARCH17 to initialize the corresponding embeddings in our R-MeN. From preliminary experiments, we set $N = 1$, $bs = 16$ and $m = 1$. The other hyper-parameters are varied over the same ranges as in the triple classification task. We monitor the MRR score after each training epoch, select the setting with the highest MRR on the validation set, and report the final scores on the test set.

# 4 Main results

# 4.1 Triple classification

Table 2 reports the accuracy results of our R-MeN model together with previously published results on WN11 and FB13. R-MeN sets a new state-of-the-art accuracy of $90.5\%$ on WN11, significantly outperforming the other models, and achieves the second-highest accuracy of $88.9\%$ on FB13. Overall, R-MeN yields the best performance averaged over the two datasets.

Regarding TransE, we obtain the second-best accuracy of $89.2\%$ on WN11 and a competitive accuracy of $88.1\%$ on FB13. Figure 2 shows the accuracy results for TransE and our R-MeN w.r.t each relation. In particular, on WN11, the accuracy for the one-to-one relation "similar_to" increases substantially from $50.0\%$ for TransE to $78.6\%$ for R-MeN. On FB13, R-MeN improves over TransE for the many-to-many relations "institution" and "profession".
| Method | WN11 | FB13 | Avg. |
| --- | --- | --- | --- |
| NTN (Socher et al., 2013) | 86.2 | 87.2 | 86.7 |
| TransH (Wang et al., 2014) | 78.8 | 83.3 | 81.1 |
| TransR (Lin et al., 2015) | 85.9 | 82.5 | 84.2 |
| TransD (Ji et al., 2015) | 86.4 | 89.1 | 87.8 |
| TransR-FT (Feng et al., 2016) | 86.6 | 82.9 | 84.8 |
| TranSparse-S (Ji et al., 2016) | 86.4 | 88.2 | 87.3 |
| TranSparse-US (Ji et al., 2016) | 86.8 | 87.5 | 87.2 |
| ManifoldE (Xiao et al., 2016a) | 87.5 | 87.2 | 87.4 |
| TransG (Xiao et al., 2016b) | 87.4 | 87.3 | 87.4 |
| lppTransD (Yoon et al., 2016) | 86.2 | 88.6 | 87.4 |
| ConvKB (Nguyen et al., 2019a) | 87.6 | 88.8 | 88.2 |
| TransE (Bordes et al., 2013) (ours) | 89.2 | 88.1 | 88.7 |
| Our R-MeN model | 90.5 | 88.9 | 89.7 |
| TransE-NMM (Nguyen et al., 2016) | 86.8 | 88.6 | 87.7 |
| TEKE_H (Wang and Li, 2016) | 84.8 | 84.2 | 84.5 |
| Bilinear-COMP (Guu et al., 2015) | 87.6 | 86.1 | 86.9 |
| TransE-COMP (Guu et al., 2015) | 84.9 | 87.6 | 86.3 |
Table 2: Accuracy results (in %) on the WN11 and FB13 test sets. The last 4 rows report accuracies of models that exploit relation paths or incorporate a large external corpus. The best score is in bold and the second-best score is underlined. "Avg." denotes the accuracy averaged over the two datasets.

![](images/b9fe1a078a2cb16a222b20acaf3f7dc94d5cc6ce5d21ea9fbacf12bcc39b24a5.jpg)

![](images/dc9a4fe5c3b5d58664113bb05fe5a9ed31ed8b1a631ff38366c84a76b9d1ceff.jpg)
Figure 2: Accuracies for R-MeN and TransE w.r.t each relation on WN11 and FB13.

# 4.2 Search personalization

Table 3 presents the experimental results on SEARCH17, where R-MeN outperforms up-to-date embedding models and obtains the highest performance to date on both the MRR and Hits@1 metrics. These results confirm the prospective strategy proposed by Vu et al. (2017) of utilizing KG embedding methods to improve the ranking quality of personalized search systems.
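The two ranking metrics used in this evaluation can be computed as in the following short sketch (illustrative code with toy queries, not the authors' evaluation script):

```python
def mrr_and_hits1(ranked_lists):
    """For each query: a list of documents sorted by model score (descending)
    and the set of relevant documents. Returns (MRR, Hits@1) over all queries."""
    rr, hits = [], []
    for docs, relevant in ranked_lists:
        rank = next(i for i, d in enumerate(docs, start=1) if d in relevant)
        rr.append(1.0 / rank)                 # reciprocal rank of first hit
        hits.append(1.0 if rank == 1 else 0.0)
    n = len(rr)
    return sum(rr) / n, sum(hits) / n

queries = [
    (["d2", "d1", "d3"], {"d1"}),  # first relevant document at rank 2
    (["d5", "d4"], {"d5"}),        # first relevant document at rank 1
]
mrr, h1 = mrr_and_hits1(queries)
assert abs(mrr - 0.75) < 1e-9 and h1 == 0.5
```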
| Method | MRR | H@1 |
| --- | --- | --- |
| SE (Original rank) | 0.559 | 38.5 |
| CI (Teevan et al., 2011) | 0.597 | 41.6 |
| SP (Vu et al., 2015) | 0.631 | 45.2 |
| TransE (Bordes et al., 2013) | 0.669 | 50.9 |
| ConvKB (Nguyen et al., 2019a) | 0.750 | 59.9 |
| CapsE (Nguyen et al., 2019b) | 0.766 | 62.1 |
| Our R-MeN | 0.778 | 63.6 |
Table 3: Experimental results on the SEARCH17 test set. Hits@1 (H@1) is reported in %. Our improvements over all baselines are statistically significant with $p < 0.05$ using the paired t-test.

![](images/f27fc06fe075ea85718d5fd02dc2e832eaffb50a6ca985086fced63f3092e8bf.jpg)

![](images/5cdffacd29706c1ffd7c75da58f966acf05302d1b63c5d60be3b38e0d2bf64b7.jpg)

![](images/cf4ee6e0c1a8180016d2132dbffcbae450937759998e3325e3fbec99cf326e02.jpg)

![](images/3a28f78486f9b7faa4bef8935ccc0e4fb2b4887a1dcf608c2952839aefd0e1b0.jpg)

![](images/bdc88d5952d50b49d52741528cd431aeee88981ae42b604dc12195e5aeb65977.jpg)

![](images/ecb68519df807467756359f15890cce624480340604b6077784c3c3788a78855.jpg)
Figure 3: Effects of the head size $n$ and the number $H$ of attention heads on the validation sets.

# 4.3 Effects of hyper-parameters

Next, Figure 3 presents the effects of two hyper-parameters: the head size $n$ and the number $H$ of attention heads. Using large head sizes (e.g., $n = 1024$) produces better performance on all 3 datasets. Additionally, using multiple heads gives better results on WN11 and FB13, while a single head (i.e., $H = 1$) works best on SEARCH17, as each query usually has a single intention.

# 4.4 Ablation analysis

For the last experiment, we compute and report ablation results over 2 factors in Table 4. In particular, the scores degrade on FB13 and SEARCH17 when not using the positional embeddings. More importantly, the results degrade on
| Model | WN11 | FB13 | SEARCH17 |
| --- | --- | --- | --- |
| Our R-MeN | 91.3 | 88.8 | 0.792 |
| (a) w/o Pos | 91.3 | 88.7 | 0.787 |
| (b) w/o M | 89.6 | 88.4 | 0.771 |
Table 4: Ablation results on the validation sets: (a) without the positional embeddings; (b) without the relational memory network, in which case we define $f(s,r,o) = \max \left( \operatorname{ReLU}\left( \left[ \mathbf{v}_{s}, \mathbf{v}_{r}, \mathbf{v}_{o} \right] * \mathbf{\Omega} \right) \right)^{\top} \mathbf{w}$.

all 3 datasets without using the relational memory network. These results show that the positional embeddings help to exploit the relative positions among $s$, $r$ and $o$; moreover, the relational memory network helps to memorize and encode the potential dependencies among relations and entities.

# 5 Conclusion

We propose a new KG embedding model, named R-MeN, which integrates transformer self-attention-based memory interactions with a CNN decoder to effectively capture the potential dependencies in KG triples. Experimental results show that our proposed R-MeN obtains new state-of-the-art performance on both the triple classification and search personalization tasks. In future work, we plan to extend R-MeN to multi-hop knowledge graph reasoning. Our code is available at: https://github.com/daiquocnguyen/R-MeN.

# Acknowledgements

This research was partially supported by the ARC Discovery Projects DP150100031 and DP160103934.

# References

Antoine Bordes, Nicolas Usunier, Alberto García-Durán, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-relational Data. In Advances in Neural Information Processing Systems 26, pages 2787-2795.
Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning Structured Embeddings of Knowledge Bases. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence, pages 301-306.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D Knowledge Graph Embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811-1818.
Jun Feng, Minlie Huang, Mingdong Wang, Mantong Zhou, Yu Hao, and Xiaoyan Zhu. 2016. Knowledge Graph Embedding by Flexible Translation. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, pages 557-560.
Kelvin Guu, John Miller, and Percy Liang. 2015. Traversing Knowledge Graphs in Vector Space. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 318-327.
Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Knowledge Graph Embedding via Dynamic Mapping Matrix. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 687-696.
Guoliang Ji, Kang Liu, Shizhu He, and Jun Zhao. 2016. Knowledge Graph Completion with Adaptive Sparse Transfer Matrix. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pages 985-991.
Diederik Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 2181-2187.
Dai Quoc Nguyen, Dat Quoc Nguyen, Tu Dinh Nguyen, and Dinh Phung. 2019a. Convolutional Neural Network-based Model for Knowledge Base Completion and Its Application to Search Personalization. Semantic Web, 10(5):947-960.
Dai Quoc Nguyen, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2018. A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 327-333.
+Dai Quoc Nguyen, Thanh Vu, Tu Dinh Nguyen, Dat Quoc Nguyen, and Dinh Phung. 2019b. A Capsule Network-based Embedding Model for Knowledge Graph Completion and Search Personalization. In Proceedings of the 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2180-2189. +Dat Quoc Nguyen. 2017. An Overview of Embedding Models of Entities and Relationships for Knowledge Base Completion. arXiv preprint, arXiv:1703.08098. + +Dat Quoc Nguyen, Kairit Sirts, Lizhen Qu, and Mark Johnson. 2016. Neighborhood Mixture Model for Knowledge Base Completion. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 40-50. +Maximilian Nickel, Kevin Murphy, Volker Tresp, and Evgeniy Gabrilovich. 2016. A Review of Relational Machine Learning for Knowledge Graphs. Proceedings of the IEEE, 104(1):11-33. +Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543. +Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, and Timothy Lillicrap. 2018. Relational Recurrent Neural Networks. In Advances in Neural Information Processing Systems, pages 7299-7310. +Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Advances in Neural Information Processing Systems 26, pages 926-934. +Jaime Teevan, Daniel J. Liebling, and Gayathri Ravichandran Geetha. 2011. Understanding and Predicting Personal Navigation. In Proceedings of the ACM International Conference on Web Search and Data Mining, pages 85-94. +Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. 
Complex Embeddings for Simple Link Prediction. In Proceedings of the 33rd International Conference on Machine Learning, pages 2071-2080.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In NIPS, pages 5998-6008.
Thanh Vu, Dat Quoc Nguyen, Mark Johnson, Dawei Song, and Alistair Willis. 2017. Search Personalization with Embeddings. In Proceedings of the European Conference on Information Retrieval, pages 598-604.
Thanh Vu, Alistair Willis, Son Ngoc Tran, and Dawei Song. 2015. Temporal Latent Topic User Profiles for Search Personalisation. In Proceedings of the European Conference on Information Retrieval, pages 605-616.
Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge Graph Embedding by Translating on Hyperplanes. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, pages 1112-1119.
Zhigang Wang and Juan-Zi Li. 2016. Text-Enhanced Representation Learning for Knowledge Graph. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1293-1299.
Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge Base Completion via Search-based Question Answering. In Proceedings of the 23rd International Conference on World Wide Web, pages 515-526.
Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016a. From One Point to A Manifold: Knowledge Graph Embedding for Precise Link Prediction. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1315-1321.
Han Xiao, Minlie Huang, and Xiaoyan Zhu. 2016b. TransG: A Generative Model for Knowledge Graph Embedding. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2316-2325.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015.
Embedding Entities and Relations for Learning and Inference in Knowledge Bases. In Proceedings of the International Conference on Learning Representations. +Hee-Geun Yoon, Hyun-Je Song, Seong-Bae Park, and Se-Young Park. 2016. A Translation-Based Knowledge Graph Embedding Preserving Logical Property of Relations. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 907-916. \ No newline at end of file diff --git a/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/images.zip b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..fa6d2106deef4be7a9cf6570bd69a222816d44e2 --- /dev/null +++ b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dc0bceffe5662d278c6f00bfd85918e8672a5e1eb2fb54c2f055f4bfb7f5a7ce +size 316970 diff --git a/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/layout.json b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..a369dfabdefa537b10e4ac76cc72d1cf6413bd14 --- /dev/null +++ b/arelationalmemorybasedembeddingmodelfortripleclassificationandsearchpersonalization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d84d4c9607dcbc3b2cd1aecbede60996910229181282495f83bb970f7250784 +size 305715 diff --git a/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_content_list.json b/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c510c7d8b7a1d56763c2a37c2a122aa8f14c6319 --- 
/dev/null +++ b/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:77df25c203ac3348c4f01f7e1e74b0ce0073c3d034004507566c711324e36aa9 +size 44624 diff --git a/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_model.json b/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_model.json new file mode 100644 index 0000000000000000000000000000000000000000..de3e1556c4628a45b9b5278fba545809b135db32 --- /dev/null +++ b/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:989b63f5e666b1e3c736fbfa436660b565f66d9ce5f04966ed7fc0875d44fbfb +size 54591 diff --git a/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_origin.pdf b/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..4e3d246c1388f04abcd8473cf8b2d313ef14e40a --- /dev/null +++ b/arelaxedmatchingprocedureforunsupervisedbli/e1dfeca1-f06a-4192-be6e-6f3799551b92_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2eceab43ed9391e502cd6f1de98a5137afa678bae51954c1f6dd0d2c43b5f8db +size 310409 diff --git a/arelaxedmatchingprocedureforunsupervisedbli/full.md b/arelaxedmatchingprocedureforunsupervisedbli/full.md new file mode 100644 index 0000000000000000000000000000000000000000..0c97489f2d61b2ce4680cbdaeb2fc7d8db001f1f --- /dev/null +++ b/arelaxedmatchingprocedureforunsupervisedbli/full.md @@ -0,0 +1,203 @@ +# A Relaxed Matching Procedure for Unsupervised BLI + +Xu Zhao $^{1}$ , Zihao Wang $^{1}$ , Hao Wu $^{2}$ , Zhang Yong $^{1}$ + +$^{1}$ BNRist, Department of Computer Science and Technology, RIIT, + +Institute of Internet Industry, Tsinghua University, Beijing, China + 
$^{2}$ Department of Mathematical Sciences, Tsinghua University, Beijing, China

{zhaoxul8,wzh17}@mails.tsinghua.edu.cn

{hwu, zhangyong05}@tsinghua.edu.cn

# Abstract

Recently, unsupervised Bilingual Lexicon Induction (BLI) without any parallel corpus has attracted much research interest. One of the crucial components of methods for the BLI task is the matching procedure. Previous works impose an overly strong constraint on the matching, leading to many counterintuitive translation pairings. We therefore propose a relaxed matching procedure to find a more precise matching between two languages. We also find that aligning the source and target embedding spaces bidirectionally brings significant improvement. We follow the previous iterative framework to conduct experiments. Results on a standard benchmark demonstrate the effectiveness of our proposed method, which substantially outperforms previous unsupervised methods.

# 1 Introduction

Pretrained word embeddings (Mikolov et al., 2013b) are the basis of many natural language processing and machine learning systems. Word embeddings of a specific language contain rich syntactic and semantic information. Mikolov et al. (2013a) stated that continuous embedding spaces exhibit similar structures across different languages, and that we can exploit this similarity by learning a linear transformation from the source embedding space to the target embedding space. This similarity gives rise to the Bilingual Lexicon Induction (BLI) task, whose goal is to align two languages' embedding spaces and automatically generate a word translation lexicon. This fundamental problem in natural language processing benefits other research, such as sentence translation (Rapp, 1995; Fung, 1995), unsupervised machine translation (Lample et al., 2017), and cross-lingual information retrieval (Lavrenko et al., 2002).
Recent endeavors (Lample et al., 2018; Alvarez-Melis and Jaakkola, 2018; Grave et al., 2019; Artetxe et al., 2017) have shown that the performance of unsupervised BLI is on par with supervised methods. A crucial part of these approaches is the matching procedure, i.e., how to generate the translation plan. Alvarez-Melis and Jaakkola (2018) used the Gromov-Wasserstein distance to approximate the matching between languages. Grave et al. (2019) regarded it as a classic optimal transport problem and used the Sinkhorn algorithm (Cuturi, 2013) to compute the translation plan.

In this work, we follow the previous iterative framework but use a different matching procedure. Previous iterative algorithms require computing an approximate one-to-one matching at every step, and this one-to-one constraint produces many redundant matchings. To avoid this problem, we relax the constraint and control the degree of relaxation by adding two KL-divergence regularization terms to the original loss function. This relaxation yields a more precise matching and significantly improves performance. We then propose a bidirectional optimization framework that optimizes the mappings from source to target and from target to source simultaneously. In the experiments section, we verify the effectiveness of our method; results show that it outperforms many state-of-the-art (SOTA) methods on the BLI task.

# 2 Background

Early works on the BLI task require a parallel lexicon between languages. Given two embedding matrices $X$ and $Y$ with shape $n \times d$ ($n$: number of words, $d$: vector dimension) for the two languages, where word $x_{i}$ in $X$ is the translation of word $y_{i}$ in $Y$, we have a parallel lexicon $X \rightarrow Y$. Mikolov et al.
(2013a) pointed out that we can exploit the similarities of monolingual embedding spaces by learning a linear transformation $W^{\star}$ such that

$$
W ^ {\star} = \underset {W \in M _ {d} (\mathbb {R})} {\arg \min } \| X W - Y \| _ {F} ^ {2} \tag {1}
$$

where $M_{d}(\mathbb{R})$ is the space of $d \times d$ real matrices. Xing et al. (2015) stated that enforcing an orthogonality constraint on $W$ improves performance. The constrained problem has a closed-form solution, known as the Procrustes solution: $W^{\star} = Q = UV^{T}$, where $USV^{T}$ is the singular value decomposition of $X^{T}Y$.

Under the unsupervised condition without a parallel lexicon, i.e., when the vectors in $X$ and $Y$ are completely unaligned, Lample et al. (2018) proposed a domain-adversarial approach for learning $W^{\star}$. Based on the observation that monolingual embedding spaces of different languages share similar spatial structures, Alvarez-Melis and Jaakkola (2018) applied the Gromov-Wasserstein distance, which is built on intra-space distances, to find the corresponding translation pairings between $X$ and $Y$, and further derived the orthogonal mapping $Q$. Grave et al. (2019) formulated the unsupervised BLI task as

$$
\min _ {Q \in \mathcal {O} _ {d}, P \in \mathcal {P} _ {n}} \| X Q - P Y \| _ {F} ^ {2} \tag {2}
$$

where $\mathcal{O}_d$ is the set of orthogonal matrices and $\mathcal{P}_n$ is the set of permutation matrices. Given $Q$, estimating $P$ in Problem (2) is equivalent to minimizing the 2-Wasserstein distance between the two sets of points $XQ$ and $Y$:

$$
W _ {2} ^ {2} (X Q, Y) = \min _ {P \in \mathcal {P} _ {n}} \langle D, P \rangle \tag {3}
$$

where $D_{ij} = \| x_iQ - y_j\| _2^2$ and $\langle D,P\rangle = \sum_{i,j}P_{ij}D_{ij}$ denotes the matrix inner product. Grave et al. (2019) proposed a stochastic algorithm to estimate $Q$ and $P$ jointly. Problem (3) is a standard optimal transport problem that can be solved exactly as an Earth Mover's Distance linear program with $O(n^{3})$ time complexity.
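As a toy illustration of this alternating estimation of $Q$ and $P$ (a sketch under synthetic data, not the authors' implementation), the following alternates an exact brute-force solution of the assignment in Problem (3) with the Procrustes update for $Q$. Each round solves each subproblem exactly, so the objective is guaranteed not to increase.

```python
import itertools
import numpy as np

def procrustes(X, Y):
    # Closed-form minimizer of ||XQ - Y||_F over orthogonal Q:
    # Q* = U V^T, where U S V^T is the SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def best_permutation(XQ, Y):
    # Exact assignment for Problem (3), brute force (tiny n only); real
    # systems use the Hungarian algorithm or Sinkhorn approximations.
    n = len(XQ)
    D = ((XQ[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # D_ij = ||x_i Q - y_j||^2
    best = min(itertools.permutations(range(n)),
               key=lambda s: D[np.arange(n), list(s)].sum())
    return np.array(best)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
Y = rng.standard_normal((6, 4))

Q = np.eye(4)
costs = []
for _ in range(10):                       # alternate P-step and Q-step
    sigma = best_permutation(X @ Q, Y)    # P given Q
    Q = procrustes(X, Y[sigma])           # Q given P (Y[sigma] plays the role of PY)
    costs.append(((X @ Q - Y[sigma]) ** 2).sum())

assert np.allclose(Q.T @ Q, np.eye(4))                       # Q stays orthogonal
assert all(b <= a + 1e-9 for a, b in zip(costs, costs[1:]))  # cost never increases
```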
Considering the computational cost, Zhang et al. (2017) and Grave et al. (2019) used the Sinkhorn algorithm (Cuturi, 2013) to estimate $P$ by solving the entropy-regularized optimal transport problem (Peyre et al., 2019).

We also take Problem (2) as our loss function, and our model shares a similar alternating framework with Grave et al. (2019). However, we argue that the permutation matrix constraint on $P$ is too strong and leads to many inaccurate and redundant matchings between $X$ and $Y$, so we relax it via unbalanced optimal transport.

Alaux et al. (2019) extended this line of work to aligning multiple languages in a common space. Zhou et al. (2019) estimated $Q$ by a density matching method based on normalizing flows. Artetxe et al. (2018) proposed a multi-step framework of linear transformations that generalizes a substantial body of previous work. Garneau et al. (2019) further investigated the robustness of Artetxe et al. (2018)'s model by introducing four new languages that are less similar to English than the ones used in the original paper. Artetxe et al. (2019) proposed an alternative approach to this problem that builds on recent work on unsupervised machine translation.

# 3 Proposed Method

In this section, we propose a method for the BLI task. As mentioned in the background, we take Problem (2) as our loss function and use an optimization framework similar to Grave et al. (2019) to estimate $P$ and $Q$ alternately. Our method focuses on the estimation of $P$ and tries to find a more precise matching $P$ between $XQ$ and $Y$; $Q$ is estimated by stochastic gradient descent. We also propose a bidirectional optimization framework in Section 3.2.
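A minimal sketch of the Sinkhorn iteration mentioned above (illustrative code, not the authors' implementation): iterating with the damped exponent $\lambda/(\epsilon+\lambda)$ gives the KL-relaxed, unbalanced variant used later in the paper, while a very large $\lambda$ recovers the balanced algorithm of Cuturi (2013). The toy cost matrix and coefficients are our own choices.

```python
import numpy as np

def generalized_sinkhorn(D, mu, nu, eps=0.05, lam=1e9, iters=1000):
    # Scale rows and columns of K = exp(-D/eps). Exponent lam/(eps+lam) -> 1
    # (large lam) gives balanced Sinkhorn; finite lam relaxes the marginals
    # with KL penalties (unbalanced optimal transport).
    K = np.exp(-D / eps)
    u, v = np.ones_like(mu), np.ones_like(nu)
    a = lam / (eps + lam)
    for _ in range(iters):
        u = (mu / (K @ v)) ** a
        v = (nu / (K.T @ u)) ** a
    return u[:, None] * K * v[None, :]

D = np.array([[0.0, 4.0],
              [4.0, 0.0],
              [9.0, 9.0]])       # the third source point is far from every target
mu, nu = np.ones(3) / 3, np.ones(2) / 2

# Balanced case: the plan's marginals match, so the far point is force-matched.
P_bal = generalized_sinkhorn(D, mu, nu)
assert np.allclose(P_bal.sum(0), nu)

# Relaxed case: the far point is left mostly unmatched instead.
P_rel = generalized_sinkhorn(D, mu, nu, lam=0.1)
assert P_rel[2].sum() < 1e-6 and P_rel[0].sum() > 0.1
```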
# 3.1 Relaxed Matching Procedure

We regard the embedding sets $X$ and $Y$ as two discrete distributions $\mu = \sum_{i=1}^{I} u_i \delta_{x_i}$ and $\nu = \sum_{j=1}^{J} v_j \delta_{y_j}$, where $u$ (and similarly $v$) is a column vector satisfying $\sum_{i} u_i = 1$, $u_i > 0$, and $\delta_x$ is the Dirac measure supported at point $x$.

Standard optimal transport forces the optimal transport plan to be a joint distribution $P \in \mathcal{P}_n$. This setting implies that every unit of mass in $\mu$ must be matched to an equal unit of mass in $\nu$. A recent application of unbalanced optimal transport (Wang et al., 2019) shows that relaxing the marginal conditions leads to more flexible and local matchings, which avoids some counterintuitive matchings of source-target mass pairs with high transportation cost.

The formulation of unbalanced optimal transport (Chizat et al., 2018a) differs from balanced optimal transport in two ways. Firstly, the set of transport plans to be optimized is generalized to $\mathbb{R}_+^{I\times J}$. Secondly, the marginal conditions of Problem (3) are relaxed by two KL-divergence terms:

$$
\min _ {P \in \mathbb {R} _ {+} ^ {I \times J}} \langle D, P \rangle + \lambda_ {1} \mathcal {KL} (P \mathbb {1} _ {J} \| u) + \lambda_ {2} \mathcal {KL} (P ^ {T} \mathbb {1} _ {I} \| v) \tag {4}
$$

where $\mathcal{KL}(p \| q) = \sum_{i} p_i \log \left( \frac{p_i}{q_i} \right) - p_i + q_i$ is the generalized KL divergence.

Algorithm 1 Generalized Sinkhorn Algorithm
Require: source and target measures $\mu \in \mathbb{R}_{+}^{m}$, $\nu \in \mathbb{R}_{+}^{n}$, entropy regularizer $\epsilon$, KL relaxation coefficients $\lambda_{1}, \lambda_{2}$, and distance matrix $D_{ij}$.
Ensure: transport plan $P_{ij}$
1: Initialize $u \gets \mathbb{1} \in \mathbb{R}^m$, $v \gets \mathbb{1} \in \mathbb{R}^n$, $K \gets e^{-D / \epsilon} \in \mathbb{R}^{m \times n}$
2: while not converged do
3: $u \gets \left(\frac{\mu}{Kv}\right)^{\frac{\lambda_1}{\epsilon + \lambda_1}}$
4: $v \gets \left(\frac{\nu}{K^{\top}u}\right)^{\frac{\lambda_2}{\epsilon + \lambda_2}}$
5: end while
6: $P \gets \mathrm{diag}(u)\, K\, \mathrm{diag}(v)$

We estimate $P$ by solving the relaxed Problem (4) instead of the original Problem (3) used in Grave et al. (2019). Problem (4) can also be solved with entropy regularization via the generalized Sinkhorn algorithm (Chizat et al., 2018b; Wang et al., 2019; Peyre et al., 2019).

In short, we now have an algorithm to obtain the minimum of Problem (4). To mitigate the hubness phenomenon, we replace the $l_{2}$ distance between embeddings with the RCSLS distance proposed by Joulin et al. (2018), i.e., $D_{ij} = \mathrm{rcsls}(x_iQ, y_j)$. In our evaluation, RCSLS does not give significantly better results than the Euclidean distance; however, previous work suggests that RCSLS is a better similarity measure between words, so we adopt it in our approach. The relaxed matching procedure and the bidirectional optimization we propose contribute most of the improvement.

We call this relaxed estimation of $P$ the Relaxed Matching Procedure (RMP). With RMP, two points may be matched only when they are within a certain radius of each other. Thus we avoid some counterintuitive matchings and obtain a more precise matching $P$. In the experiments section, we verify the effectiveness of RMP.

# 3.2 Bidirectional Optimization

Previous research solved the mapping from $X$ to $Y$ and the mapping from $Y$ to $X$ as two independent problems, i.e., learning two orthogonal matrices $Q_{1}$ and $Q_{2}$ to match $XQ_{1}$ with $Y$ and $YQ_{2}$ with $X$, respectively.
Intuitively, from the perspective of point cloud matching, these two problems in opposite directions are symmetric. Thus we propose an optimization framework that solves a single $Q$ for both directions.

Algorithm 2 Bidirectional Optimization with RMP
Require: word vectors from two languages $X, Y$
Ensure: transformation $Q$
1: for each $e \in [1, E]$ do
2: for each $i \in [1, I]$ do
3: Draw $X_b, Y_b$ of size $b$ from $X$ and $Y$
4: set rand = random()
5: if rand mod 2 = 1 then
6: $Y_b, X_b, Q \Leftarrow X_b, Y_b, Q^T$
7: end if
8: Run RMP by solving Problem (4) and obtain $P^*$
9: Update $Q$ by gradient descent and Procrustes
10: if rand mod 2 = 1 then
11: $Q \Leftarrow Q^T$
12: end if
13: end for
14: end for

In our approach, we match $XQ$ with $Y$ and $YQ^T$ with $X$ simultaneously. Based on the stochastic optimization framework of Grave et al. (2019), we randomly choose one direction to optimize at each iteration.

The entire process of our method is summarized in Algorithm 2. At iteration $i$, we start by sampling batches $X_{b}, Y_{b} \in \mathbb{R}^{b \times d}$. Then we generate a random integer rand and choose to map $X_{b}Q$ to $Y_{b}$ or $Y_{b}Q^T$ to $X_{b}$ according to rand's parity. Given the mapping direction, we run the RMP procedure, solving Problem (4) with the generalized Sinkhorn algorithm, and obtain a matching matrix $P^{*}$ between $X_{b}Q$ and $Y_{b}$ (or $Y_{b}Q^T$ and $X_{b}$). Finally, we update $Q$ by gradient descent and Procrustes given $P^{*}$; the update of $Q$ is detailed in Grave et al. (2019).

# 4 Experiments

In this section, we evaluate our method in two settings. First, we conduct distillation experiments to verify the effectiveness of RMP and bidirectional optimization. Then we compare our method, consisting of both RMP and bidirectional optimization, with various SOTA methods on the BLI task.

**Datasets** We conduct word translation experiments on 6 pairs of languages and use pretrained
| Method | Supervision | EN-ES | ES-EN | EN-FR | FR-EN | EN-DE | DE-EN | EN-RU | RU-EN | EN-IT | IT-EN | Avg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proc. | 5K words | 81.9 | 83.4 | 82.1 | 82.4 | 74.2 | 72.7 | 51.7 | 63.7 | 77.4 | 77.9 | 74.7 |
| RCSLS | 5K words | 84.1 | 86.3 | 83.3 | 84.1 | 79.1 | 76.3 | 57.9 | 67.2 | – | – | 77.3 |
| GW | None | 81.7 | 80.4 | 81.3 | 78.9 | 71.9 | 78.2 | 45.1 | 43.7 | 78.9 | 75.2 | 71.5 |
| Adv. - Refine | None | 81.7 | 83.3 | 82.3 | 82.1 | 74.0 | 72.2 | 44.0 | 59.1 | 77.9 | 77.5 | 73.4 |
| W.Proc. - Refine | None | 82.8 | 84.1 | 82.6 | 82.9 | 75.4 | 73.3 | 43.7 | 59.1 | – | – | 73.0 |
| Dema - Refine | None | 82.8 | 84.9 | 82.6 | 82.4 | 75.3 | 74.9 | 46.9 | 62.4 | – | – | 74.0 |
| Ours - Refine | None | 82.7 | 85.8 | 83.0 | 83.8 | 76.2 | 74.9 | 48.1 | 64.7 | 79.1 | 80.3 | 75.9 |
+ +Table 1: Comparison between SOTA methods on the BLI task. The first two methods are supervised; the remaining ones are unsupervised. Except for the GW method, the unsupervised methods include refinement. In bold, the best among unsupervised approaches. All other numbers are taken from the corresponding papers. ('EN': English, 'ES': Spanish, 'FR': French, 'DE': German, 'RU': Russian, 'IT': Italian). + +word embeddings from fastText. We use the bilingual dictionaries open-sourced by Lample et al. (2018) as our evaluation set. Following Lample et al. (2018), we use the CSLS retrieval method for evaluation in both settings. All reported translation accuracies are precision at 1 under the CSLS criterion. We open-source the code on GitHub. + +# 4.1 Main Results + +Through the experimental evaluation, we seek to demonstrate the effectiveness of our method compared to other SOTA methods. The word embeddings are normalized and centered before entering the model. We start with a batch size of 500 and 2000 iterations per epoch; after each epoch, we double the batch size and quarter the number of iterations. The first 2.5K words are taken for initialization, and samples are drawn only from the first 20K words of the frequency-ranked vocabulary. The coefficients $\lambda_{1}$ and $\lambda_{2}$ of the relaxed terms in Problem (4) are both set to 0.001. + +*Baselines* We take the basic Procrustes method and the RCSLS loss of Joulin et al. (2018) as two supervised baselines. Four unsupervised methods are also taken into account: the Gromov-Wasserstein matching method of Alvarez-Melis and Jaakkola (2018), the adversarial training (Adv.-Refine) of Lample et al. (2018), the Wasserstein Procrustes method (W.Proc.-Refine) of Grave et al. (2019), and the density matching + +method (Dema-Refine) of Zhou et al. (2019). + +Table 1 shows that our approach, leading by an average of 2 percentage points, outperforms the other unsupervised methods in most instances and is on par with the supervised methods on some language pairs.
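The CSLS criterion used for evaluation above can be sketched in a few lines. This is a minimal NumPy sketch of CSLS and precision-at-1; the function names and the toy evaluation helper are our own, not taken from the released code:

```python
import numpy as np

def csls_scores(X, Y, k=10):
    """CSLS(x, y) = 2*cos(x, y) - r_T(x) - r_S(y), where r_T / r_S are the
    mean cosine similarities to the k nearest neighbours on the other side."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = Xn @ Yn.T
    r_t = np.sort(sims, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # per source word
    r_s = np.sort(sims, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # per target word
    return 2 * sims - r_t - r_s

def precision_at_1(X, Y, gold, k=10):
    """gold[i] is the index in Y of the reference translation of source word i."""
    pred = csls_scores(X, Y, k).argmax(axis=1)
    return float((pred == np.asarray(gold)).mean())
```

With identical source and target embeddings and an identity gold dictionary, `precision_at_1` returns 1.0, which makes a convenient sanity check for the retrieval code.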
Surprisingly, we find that our method achieves significant progress on some tough cases, such as English-Russian and English-Italian, which contain a lot of noise. Our method guarantees the precision of the matching computed at every step, which has a noise-reduction effect. + +However, a noticeable gap remains between our method and the supervised RCSLS method, which suggests that further research could bring the advantages of this metric to unsupervised methods. + +We also compare our method with W.Proc. on two non-English pairs, FR-DE and FR-ES, to show how bidirectional relaxed matching improves performance; the results are presented in Table 2. Most recent studies do not report results on non-English pairs, which makes a fair comparison difficult. Nevertheless, the results in Table 2 show that our method keeps an advantage over W.Proc. Note that the W.Proc. results here are from our implementation rather than those reported in the original paper.
| Method | FR-DE | DE-FR | FR-ES | ES-FR |
| --- | --- | --- | --- | --- |
| W.Proc. | 65.8 | 73.5 | 82.0 | 84.9 |
| Ours-Refine | 67.7 | 74.0 | 83.3 | 84.9 |
+ +Table 2: Comparison between W.Proc. and our method on non-English language pairs.

![](images/b998285fe5c831bec11053a57b7ccbaee8172a3783ef0709dbfae5767189bdbd.jpg)
+Figure 1: Ablation study of our methods' effectiveness. 'WP' refers to the original Wasserstein Procrustes method proposed by Grave et al. (2019). 'WP-RMP' applies RMP to 'WP'. 'WP-RMP-bidirection' applies the bidirectional optimization framework to 'WP-RMP'. 'WP-RMP-bidirection-refine' applies the refinement procedure to 'WP-RMP-bidirection'. ('EN': English, 'ES': Spanish, 'FR': French, 'DE': German, 'RU': Russian, 'IT': Italian). + +# 4.2 Ablation Study + +Algorithms for BLI can be roughly divided into three parts: 1. initialization, 2. iterative optimization, and 3. a refinement procedure, as in Lample et al. (2017). W.Proc. (Grave et al., 2019) covers only the first two parts. Our contributions, i.e., relaxed matching and bidirectional optimization, belong to the second part. To ensure a fair comparison, W.Proc.-Refine is compared to Ours-Refine (Table 1). To verify the effectiveness of RMP and bidirectional optimization directly, we apply them to the method of Grave et al. (2019) one by one. We use the same implementation and hyperparameters reported in their paper and code, but use RMP to solve for $P$ instead of ordinary 2-Wasserstein matching. + +On four language pairs, we apply RMP, bidirectional optimization, and the refinement procedure to the original W.Proc. incrementally and evaluate the change in performance. Figure 1 clearly shows that applying bidirectional RMP improves translation accuracy by 3 percentage points on average. The results of 'WP-RMP' are worse than 'WP-RMP-bidirection' but better than the original 'WP'. Moreover, we find that with RMP, a more precise $P$ not only eliminates many unnecessary matchings but also leads to faster convergence of the optimization procedure.
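The bidirectional loop evaluated in this ablation (Algorithm 2) can be sketched as runnable code. This is our own simplified stand-in: plain Sinkhorn with uniform marginals plays the role of the relaxed-matching solver, and a closed-form Procrustes projection replaces the full gradient-descent-plus-Procrustes update of Grave et al. (2019):

```python
import numpy as np

def sinkhorn_matching(C, reg=0.05, n_iter=100):
    """Entropy-regularized soft matching for cost matrix C. Stand-in for the
    RMP solver: the paper relaxes the marginal constraints, which this
    uniform-marginal version does not."""
    K = np.exp(-C / reg)
    u = np.ones(C.shape[0])
    v = np.ones(C.shape[1])
    for _ in range(n_iter):
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    return u[:, None] * K * v[None, :]

def procrustes(A, B):
    """Orthogonal Q minimizing ||AQ - B||_F (closed form via SVD)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def bidirectional_step(X, Y, Q, rng, b=64):
    """One inner iteration of Algorithm 2: pick a direction by parity of a
    random draw, solve a soft matching P*, then update Q."""
    Xb = X[rng.choice(len(X), size=b, replace=False)]
    Yb = Y[rng.choice(len(Y), size=b, replace=False)]
    flipped = int(rng.integers(2)) == 1
    if flipped:                       # optimize the Y -> X direction with Q^T
        Xb, Yb, Q = Yb, Xb, Q.T
    C = -Xb @ Q @ Yb.T                # negated similarity as transport cost
    C = (C - C.min()) / (C.max() - C.min() + 1e-9)
    P = sinkhorn_matching(C)
    Q = procrustes(Xb, P @ Yb)        # update Q given the matching P*
    return Q.T if flipped else Q
```

Because each update is a Procrustes projection, $Q$ stays orthogonal throughout, mirroring the orthogonality constraint of the Wasserstein Procrustes framework.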
Furthermore, the effect of the refinement procedure is quite significant. + +To summarize, we consider the average of the scores (from EN-ES to RU-EN). By mitigating the counter-intuitive pairs caused by polysemy and obscure words, the relaxed matching procedure improves the average score by about 2 points, and the bidirectional optimization improves it by about 0.6 points. These results suggest that our ideas of relaxed matching and bidirectional optimization can also be applied to other frameworks, such as the adversarial training of Lample et al. (2017) and the Gromov-Wasserstein matching of Alvarez-Melis and Jaakkola (2018). + +# 5 Conclusion + +This paper focuses on the matching procedure of the BLI task. Our key insight is that relaxed matching mitigates the counter-intuitive pairs caused by polysemy and obscure words, which is supported by comparing WP-RMP with WP in Figure 1. The strict optimal transport constraint considered by W.Proc. is not appropriate for BLI tasks. Moreover, our approach optimizes the translation mapping $Q$ in a bidirectional way and, with refinement, has been shown to outperform all other unsupervised SOTA models in Table 1. + +# 6 Acknowledgement + +This work was supported by the National Natural Science Foundation of China (11871297, 91646202), the National Key R&D Program of China (2018YFB1404401, 2018YFB1402701), and the Tsinghua University Initiative Scientific Research Program. + +# References + +Jean Alaux, Edouard Grave, Marco Cuturi, and Armand Joulin. 2019. Unsupervised hyperalignment for multilingual word embeddings. CoRR, abs/1811.01124.
+David Alvarez-Melis and Tommi S. Jaakkola. 2018. Gromov-Wasserstein alignment of word embedding spaces. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1881-1890. + +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2017. Learning bilingual word embeddings with (almost) no bilingual data.
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451-462. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. Generalizing and improving bilingual word embedding mappings with a multi-step framework of linear transformations. In Thirty-Second AAAI Conference on Artificial Intelligence. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. Bilingual lexicon induction through unsupervised machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5002-5007. +Lenaic Chizat, Gabriel Peyre, Bernhard Schmitzer, and François-Xavier Vialard. 2018a. An interpolating distance between optimal transport and fisher-rao metrics. Foundations of Computational Mathematics, 18(1):1-44. +Lenaic Chizat, Gabriel Peyre, Bernhard Schmitzer, and François-Xavier Vialard. 2018b. Scaling algorithms for unbalanced optimal transport problems. Mathematics of Computation, 87(314):2563-2609. +Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 2292-2300. +Pascale Fung. 1995. Compiling bilingual lexicon entries from a non-parallel english-chinese corpus. In Third Workshop on Very Large Corpora. +Nicolas Garneau, Mathieu Godbout, David Beauchemin, Audrey Durand, and Luc Lamontagne. 2019. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings: Making the method robustly reproducible as well. CoRR, abs/1912.01706. +Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with Wasserstein procrustes. 
In The 22nd International Conference on Artificial Intelligence and Statistics, AISTATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 1880-1890. +Armand Joulin, Piotr Bojanowski, Tomas Mikolov, Hervé Jégou, and Edouard Grave. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2979-2984. + +Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. +Guillaume Lample, Alexis Conneau, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018. Word translation without parallel data. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. +Victor Lavrenko, Martin Choquette, and W Bruce Croft. 2002. Cross-lingual relevance models. In Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 175–182. ACM. +Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119. +Gabriel Peyre, Marco Cuturi, et al. 2019. Computational optimal transport. Foundations and Trends in Machine Learning, 11(5-6):355-607. +Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. arXiv preprint cmp-lg/9505037. +Zihao Wang, Datong Zhou, Yong Zhang, Hao Wu, and Chenglong Bao. 2019. Wasserstein-fisher-rao document distance. CoRR, abs/1904.10294. +Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. 2015. 
Normalized word embedding and orthogonal transform for bilingual word translation. In NAACL HLT 2015, The 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Denver, Colorado, USA, May 31 - June 5, 2015, pages 1006-1011. +Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth movers distance minimization for unsupervised bilingual lexicon induction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1934-1945. +Chunting Zhou, Xuezhe Ma, Di Wang, and Graham Neubig. 2019. Density matching for bilingual word embedding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 1588-1598. \ No newline at end of file diff --git a/arelaxedmatchingprocedureforunsupervisedbli/images.zip b/arelaxedmatchingprocedureforunsupervisedbli/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..b5892d59d1391e39a3075aceaf5ec2cd53f4f695 --- /dev/null +++ b/arelaxedmatchingprocedureforunsupervisedbli/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1682bd5642bb1fe1a656f174e0526c571bafd2c2485aab2e008d0ec656444980 +size 132635 diff --git a/arelaxedmatchingprocedureforunsupervisedbli/layout.json b/arelaxedmatchingprocedureforunsupervisedbli/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..55b5bcada0a318de65d17356becf6f224af6f665 --- /dev/null +++ b/arelaxedmatchingprocedureforunsupervisedbli/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:623187625beb42603c2fde4a5e83a83a2fc55a81afe0ce6f54766b767066b7b2 +size 284292 diff --git a/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_content_list.json 
b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..a687d6d506238f0d757a8c64b2d20078dc9c5ff1 --- /dev/null +++ b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:be4de472be5a88e0a65309cf046ba5d0af84740d4296222ffbb8f4ce09039ac8 +size 46598 diff --git a/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_model.json b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_model.json new file mode 100644 index 0000000000000000000000000000000000000000..00d5e2e38972564994d8d1731d2175fe7a4d8d1d --- /dev/null +++ b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a689920c28b85a6b4477226c2e50d04791a7adf46c021d073c3485957c92537b +size 55407 diff --git a/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_origin.pdf b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..77f3b77796282c65ea508269fe18105e9d7d89b3 --- /dev/null +++ b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/32adc7ac-7312-4aaa-b677-ca1d56301673_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:031a8ae49e8e84d5a59f17b1a286bbc6ea32e06efc94dd709f57854704c050b0 +size 1369108 diff --git a/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/full.md 
b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d521b12f3712f65be5addbca279c8122b3aa9277 --- /dev/null +++ b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/full.md @@ -0,0 +1,179 @@ +# A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation + +Shuo Ren†‡*, Yu Wu§, Shujie Liu§, Ming Zhou§, Shuai Ma†‡ + +†SKLSDE Lab, Beihang University, Beijing, China + +$^{\ddagger}$ Beijing Advanced Innovation Center for Big Data and Brain Computing, China + +§Microsoft Research Asia, Beijing, China + +†{shuoren,mashuai}@buaa.edu.cn §{Wu.Yu,shujliu,mingzhou}@microsoft.com + +# Abstract + +The commonly used framework for unsupervised machine translation builds initial translation models for both translation directions and then performs iterative back-translation to jointly boost their translation performance. The initialization stage is very important, since bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance. In this paper, we propose a novel retrieval-and-rewriting-based method to better initialize unsupervised translation models. We first retrieve semantically comparable sentences from monolingual corpora of two languages and then rewrite the target side with a designed rewriting model to minimize the semantic gap between the source and retrieved targets. The rewritten sentence pairs are used to initialize SMT models, which in turn generate pseudo data for two NMT models, followed by iterative back-translation. Experiments show that our method builds better initial unsupervised translation models and improves the final translation performance by over 4 BLEU points.
+ +# 1 Introduction + +Recent work has shown successful practices of unsupervised machine translation (UMT) (Artetxe et al., 2017; Lample et al., 2017, 2018; Artetxe et al., 2018b; Marie and Fujita, 2018; Ren et al., 2019; Lample and Conneau, 2019). The common framework is to build two initial translation models (i.e., source-to-target and target-to-source) and then do iterative back-translation (Sennrich et al., 2016a; Zhang et al., 2018) with pseudo data generated by each other. The initialization stage is important because bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance. + +Previous methods for UMT (Lample et al., 2018; Artetxe et al., 2018b; Marie and Fujita, 2018; Ren et al., 2019) usually use the following n-gram-embedding-based initialization. They first build phrase translation tables with the help of unsupervised cross-lingual n-gram embeddings (Conneau et al., 2017; Artetxe et al., 2018a), and then use them, together with two language models, to build two initial Phrase-Based Statistical Machine Translation (PBSMT) (Koehn et al., 2003) models. However, there are two problems with these initialization methods. (1) Some complex sentence structures of the original training sentences are hard to recover with the n-gram translation tables. (2) The initial translation tables inevitably contain much noise, which is amplified in the subsequent process. + +In this paper, we propose a novel retrieve-and-rewrite initialization method for UMT. Specifically, we first retrieve semantically similar sentence pairs from the monolingual corpora of two languages with the help of unsupervised cross-lingual sentence embeddings. Next, with those retrieved similar sentence pairs, we run GIZA++ (Och and Ney, 2003) to get word alignments, which are used to delete unaligned words in the target side of the retrieved sentences.
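For illustration, the alignment-based deletion step just described could look as follows. This is a hypothetical helper of our own: the paper decides which words are unaligned from GIZA++ lexical translation probabilities, which we abstract away as a given set of alignment index pairs:

```python
def mask_unaligned(src_tokens, tgt_tokens, alignments):
    """Replace unaligned target words with <DEL> and collect the unaligned
    source words, mirroring the preprocessing described in the paper.
    alignments: iterable of (i, j) source/target index pairs."""
    aligned_src = {i for i, _ in alignments}
    aligned_tgt = {j for _, j in alignments}
    y_prime = ["<DEL>" if j not in aligned_tgt else w
               for j, w in enumerate(tgt_tokens)]
    x_unaligned = [w for i, w in enumerate(src_tokens) if i not in aligned_src]
    return y_prime, x_unaligned
```

Here `y_prime` plays the role of the incomplete target sentence fed to the rewriting model, and `x_unaligned` the auxiliary source-word input.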
The modified target sentences are then rewritten with a designed sequence-to-sequence rewriting model to minimize the semantic gap between the source and target sides. Taking the pairs of source sentences and corresponding rewritten targets as pseudo parallel data, we then build two initial PBSMT models (source-to-target and target-to-source), which are used to generate pseudo parallel data to warm up NMT models, followed by an iterative back-translation training process. Our code is released at https://github.com/Imagist-Shuo/RRforUNMT.git. + +Our contributions are threefold. (1) We propose a novel method to initialize unsupervised MT models with a retrieve-and-rewrite schema, which can preserve rich sentence structure and provide high-quality phrases. (2) We design an effective seq-to-seq architecture based on the Transformer to rewrite sentences with semantic constraints. (3) Our method significantly outperforms previous non-pre-training-based UMT results on $en-fr$ and $en-de$ translation tasks, and gives the first unsupervised $en-zh$ translation results on WMT17.

![](images/ab68fd75a382808c5adbc61cffb379bb2441f36dc5e963059eda8190108f0213.jpg)
+Figure 1: Method overview. (In the figure, "embs" means "embeddings" and "x-lingual" means "cross-lingual".)

# 2 Method + +Our method can be divided into three steps, as shown in Figure 1. First, we do similar sentence retrieval (§2.1) from two monolingual corpora with the help of unsupervised cross-lingual sentence embeddings. Next, to minimize the semantic gap between the source and retrieved targets, we do target sentence rewriting (§2.2): we delete unaligned words in the target side and generate complete, better-aligned targets via our rewriting model with the help of the missing information provided by the source. After that, we treat the rewritten pairs as pseudo parallel data for translation model initialization and training (§2.3).
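Assuming precomputed cross-lingual sentence embeddings, the retrieval step (§2.1) can be sketched with the ratio-margin criterion of Artetxe and Schwenk (2018). This is our own NumPy rendering, not the paper's code, and `retrieve_pairs` with its threshold is an illustrative simplification:

```python
import numpy as np

def margin_scores(X, Y, k=4):
    """Ratio-margin score: cos(x, y) divided by the average of the mean
    cosine similarities to each side's k nearest neighbours."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    sims = Xn @ Yn.T
    nn_x = np.sort(sims, axis=1)[:, -k:].mean(axis=1)  # per source sentence
    nn_y = np.sort(sims, axis=0)[-k:, :].mean(axis=0)  # per target sentence
    return sims / ((nn_x[:, None] + nn_y[None, :]) / 2.0)

def retrieve_pairs(X, Y, k=4, threshold=1.0):
    """Keep each source sentence's best-scoring target above a threshold."""
    scores = margin_scores(X, Y, k)
    best = scores.argmax(axis=1)
    return [(i, int(j)) for i, j in enumerate(best) if scores[i, j] > threshold]
```

The margin denominator penalizes "hub" sentences that are close to everything, which plain cosine retrieval does not.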
+ +# 2.1 Similar Sentences Retrieval + +Given two monolingual corpora $D_{x}$ and $D_{y}$ of two languages $X$ and $Y$ respectively, we first build unsupervised cross-lingual word embeddings of $X$ and $Y$ using fastText (Bojanowski et al., 2017) and vecmap (Artetxe et al., 2018a), and then obtain cross-lingual sentence embeddings from the cross-lingual word embeddings via SIF (Arora et al., 2017). After that, we use margin-based scoring (Artetxe and Schwenk, 2018) to retrieve similar sentences from the two corpora. Examples retrieved from monolingual English and Chinese corpora are shown in Figure 1 in Appendix A.

![](images/25133538cc151e2f8b7d0ef60feb90a00e41ec6c3ae438a037b24ca84e926af7.jpg)
+Figure 2: Example of rewriting. The unaligned words, i.e., 250 and 建议 (suggestion), proposed by GIZA++ have been removed in $y^\prime$, which is then rewritten by the model to the right target $\hat{y}$ (40 and 反馈 (responses)). More examples of sentences before and after rewriting are shown in Appendix B.

# 2.2 Target Sentences Rewriting + +As shown in Figure 2, having retrieved similar sentence pairs $\{x,y\}$, we first run GIZA++ (Och and Ney, 2003) on these pairs to obtain word alignment information. Then, for each target sentence $y$, we remove the unaligned words from it according to the lexical translation probabilities of the GIZA++ output. We replace each deleted word with $\langle \mathrm{DEL} \rangle$ in $y$ to get the incomplete target sentence $y'$. Meanwhile, we record the unaligned words in the source as $x_1^m$, where $m$ is the number of unaligned source words. Next, we feed $y'$ and $x_1^m$ into a sequence-to-sequence model to generate the refined target sentence $\hat{y}$. The rewritten pairs $\{x,\hat{y}\}$ are used as training data to train the initial UMT systems.

![](images/1ab2db42c2ae270e4a307a9eb4a7433c80b8190a1abe5027a2fee16d3720b684.jpg)
+Figure 3: The architecture of the rewriting model.
We modify the input of the Transformer encoder into two parts. The first part is the incomplete target sentence $y'$, which is the same as the original Transformer input, and the second part is the sequence of unaligned source words $x_1^m$, for which we remove the positional encoding because the order of these words is not a concern. + +Our rewriting model is a modification of the Transformer (Vaswani et al., 2017), shown in Figure 3. We initialize the embedding layer of the second input part with pre-trained cross-lingual word embeddings, because its content should be independent of languages, and keep it fixed during training. The second part thus acts as a memory recording the semantic information of the words. We concatenate the readout embeddings of both parts with a separator and feed them to the Transformer encoder, so that the attention mechanism takes effect on both parts together. For model training, due to the lack of references, we need to build training data for the rewriting model from the monolingual corpus $D_y$. First, we remove 20 to 30 percent of the words from a given sentence $y \in D_y$ and replace them with $\langle \mathrm{DEL} \rangle$ to get $y'$. Next, we randomly swap contiguous words in $y'$ with probability 0.2 to introduce some noise. Then we record the removed words as the set $s_1^m$ and randomly drop/add some words from/to this set. We then treat $y'$ and $s_1^m$ as the inputs, and $y$ as the output, to train the model. For model inference, we feed the incomplete sentence $y'$ and the unaligned source words $x_1^m$ into the trained model and generate the refined sentence $\hat{y}$. Note that there seems to be a bias between training and inference: $s_1^m$ during training is in the same language as $y$, while during inference the auxiliary words come from the source language $X$. This bias is eliminated because the second input part of the encoder is the readout cross-lingual embeddings, which are independent of languages.
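The synthetic training-data construction just described (delete 20-30% of the words, swap neighbours with probability 0.2) can be sketched as below. This is our own minimal version; it omits the paper's additional step of randomly dropping/adding words in the removed-word set:

```python
import random

def make_rewriting_example(sentence, rng, drop_lo=0.2, drop_hi=0.3, swap_p=0.2):
    """Corrupt a monolingual sentence y into (y', s): y' has 20-30% of its
    words replaced by <DEL> plus some adjacent swaps; s is the removed words."""
    words = sentence.split()
    n_drop = max(1, round(len(words) * rng.uniform(drop_lo, drop_hi)))
    drop_idx = set(rng.sample(range(len(words)), n_drop))
    removed = [words[i] for i in sorted(drop_idx)]
    corrupted = ["<DEL>" if i in drop_idx else w for i, w in enumerate(words)]
    i = 0
    while i < len(corrupted) - 1:  # noise: swap adjacent tokens with prob swap_p
        if rng.random() < swap_p:
            corrupted[i], corrupted[i + 1] = corrupted[i + 1], corrupted[i]
            i += 2
        else:
            i += 1
    return " ".join(corrupted), removed
```

At training time `(corrupted, removed)` are the two encoder inputs and the original sentence is the decoder target.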
+ +# 2.3 Translation Models Initialization and Training + +Once we get the $\{x, \hat{y}\}$ pairs generated above, we use them to train the initial PBSMT models, and use the SMT models to produce pseudo data to set up two NMT models, followed by iterative back-translation. + +# 3 Experiments + +# 3.1 Setup + +# Dataset + +In our experiments, we consider three language pairs: English-French (en-fr), English-German (en-de) and English-Chinese (en-zh). For en, fr and de, we use 50 million monolingual sentences in NewsCrawl from 2007 to 2017. As for zh, we use the Chinese side of the WMT17 en-zh parallel data. For ease of comparison, we use newstest2014 as the test set for en-fr, newstest2016 for en-de, and newstest2017 for en-zh. The data preprocessing is described in Appendix D. + +# Baselines + +Our method is compared with eight baseline unsupervised MT systems, listed in the upper area of Table 1. The first three baselines are unsupervised NMT models, and the fourth baseline is an unsupervised PBSMT model. The fifth baseline is an extract-and-edit schema for unsupervised neural machine translation. The sixth and seventh baselines are hybrid models of NMT and PBSMT. The last baseline is a pre-training-based method. + +# 3.2 Results + +# Overall Results + +The comparison results are reported in Table 1. From the table, we find that our method significantly outperforms the best non-pre-training-based baseline by an average of 4.63 BLEU points on all pairs. Note that Lample and Conneau (2019) is based on pre-training, which uses much more monolingual data than our method. Even so, we reach comparable results on the en-fr pair. + +# Comparison of Initial SMT Models + +We compare the performance of SMT models initialized with different methods in Table 2. All
| Method | fr2en | en2fr | de2en | en2de | zh2en | en2zh |
| --- | --- | --- | --- | --- | --- | --- |
| (Artetxe et al., 2017) | 15.6 | 15.1 | - | - | - | - |
| (Lample et al., 2017) | 14.3 | 15.1 | 13.3 | 9.6 | - | - |
| (Yang et al., 2018) | 15.6 | 17.0 | 14.6 | 10.9 | - | - |
| (Artetxe et al., 2018b) | 25.9 | 26.2 | 23.1 | 18.2 | - | - |
| (Wu et al., 2019) | 26.9 | 27.6 | 23.3 | 19.6 | - | - |
| (Lample et al., 2018) | 27.7 | 28.1 | 25.2 | 20.2 | - | - |
| (Ren et al., 2019) | 28.9 | 29.5 | 26.3 | 21.7 | 11.2 | 18.7 |
| (Lample et al., 2019)\* | 33.3 | 33.4 | 34.3 | 26.4 | - | - |
| Ours | 33.3 | 34.0 | 31.6 | 26.0 | 15.3 | 23.9 |
+ +three baselines initialize their SMT models with phrase tables inferred from n-gram embeddings and language models. From the table, we find that our proposed method gives a better initialization for the SMT models. Even the SMT models trained with only the retrieved sentences achieve higher performance than previous methods, which verifies that the noise within the retrieved sentences is largely random and can be easily eliminated by SMT models, consistent with Khayrallah and Koehn (2018). With the target sentences rewritten by our rewriting model, the quality of the extracted phrases can be further improved. We also tried to directly train NMT models with the rewritten pseudo data, but only obtained BLEU scores under 10, which means there is still much noise in the pseudo pairs for SMT to eliminate. + +Table 1: Comparison of the final test BLEU. en2zh: character-level BLEU. \*: pre-training-based method.
| Initialization Method | fr2en | en2fr | de2en | en2de |
| --- | --- | --- | --- | --- |
| (Ren et al., 2019) | 15.34 | 11.74 | 11.03 | 8.14 |
| (Lample et al., 2018) | 17.50 | - | 15.63 | - |
| (Artetxe et al., 2018b) | 21.16 | 20.13 | 13.86 | 10.59 |
| Only retrieval | 21.36 | 20.23 | 15.96 | 12.03 |
| + target rewriting | 25.21 | 23.58 | 20.41 | 15.98 |
+ +# Discussion of Rewriting Model + +We build two test sets to quantify the performance of our rewriting models. The first test set, denoted "in-domain", is from our synthetic training data. As described before, we build training samples from monolingual data according to the rules in §2.2. We select 8M sentences from the monolingual corpus of a given language for model training and randomly sample 8k sentences as development and test sets respectively. In addition, we also test our rewriting model on newstest2014 (en-fr), which is denoted "out-domain". We first run GIZA++ on the parallel sentences in the original test set to find the gold alignments between source and target words. Next, we randomly delete up to $30\%$ of the words in the target side and record their aligned source words. Then we feed the incomplete target sentence and the recorded source words into our model to recover the original target. The BLEU scores on both test sets are listed in Table 3, which shows that our rewriting model performs well. + +Table 2: BLEU of different initial SMT models.
| Test set | en as target | fr as target |
| --- | --- | --- |
| In-domain | 59.87 | 58.71 |
| Out-domain | 48.52 | 47.63 |
+ +Table 3: Test BLEU scores of the rewriting models. + +# 4 Related Work + +Unsupervised machine translation has become an active research topic in recent years. The pioneering methods are based on NMT models (Transformer) (Artetxe et al., 2017; Lample et al., 2017; Yang et al., 2018) trained with denoising auto-encoders (Vincent et al., 2010) and iterative back-translation. Follow-up work shows that SMT methods and hybrids of NMT and SMT can be more effective (Artetxe et al., 2018b; Lample et al., 2018; Marie and Fujita, 2018; Ren et al., 2019; Artetxe et al., 2019). They build the initial PBSMT models with language models and phrase tables inferred from unsupervised cross-lingual n-gram embeddings. Recently, Lample and Conneau (2019) proposed a pre-training method and achieved state-of-the-art performance on unsupervised en-fr and en-de translation tasks, but they use much more monolingual data from Wikipedia than previous work and this paper. We must also mention the work of Wu et al. (2019), who similarly use a retrieval-and-rewriting framework for unsupervised MT. However, ours differs from theirs in two aspects. First, we efficiently calculate the cross-lingual sentence embeddings via a training-free method, SIF, rather than a pre-trained language model. Second, our rewriting method is based on word alignment information, which is more explicit than their max pooling, and our rewriting model is simpler but effective, so that the rewriting results can be used directly without extra training techniques. + +# 5 Conclusion + +In this paper, we propose a novel method for unsupervised machine translation with a retrieve-and-rewrite schema. We first retrieve similar sentences
+ +# Acknowledgments + +This work is supported in part by National Key R&D Program of China AAA0102301, and NSFC 61925203 & U1636210 & 61421003. + +# References + +Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789-798. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018b. Unsupervised statistical machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics. +Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2019. An effective approach to unsupervised machine translation. In Proceedings of the 57th Annual Meeting of ACL. +Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2017. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041. +Mikel Artetxe and Holger Schwenk. 2018. Margin-based parallel corpus mining with multilingual sentence embeddings. arXiv preprint arXiv:1811.01136. +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146. +Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2017. Word translation without parallel data. arXiv preprint arXiv:1710.04087. +Howard Johnson, Joel Martin, George Foster, and Roland Kuhn. 2007. Improving translation quality by discarding most of the phrasetable. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-ConNLL). + +Huda Khayrallah and Philipp Koehn. 2018. 
On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83. +Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 48-54. Association for Computational Linguistics. +Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. arXiv:1901.07291. +Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043. +Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, et al. 2018. Phrase-based & neural unsupervised machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5039-5049. +Yury A Malkov and Dmitry A Yashunin. 2018. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence. +Benjamin Marie and Atsushi Fujita. 2018. Unsupervised neural machine translation initialized by unsupervised statistical machine translation. arXiv preprint arXiv:1810.12703. +Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19-51. +Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, and Shuai Ma. 2019. Unsupervised neural machine translation with smt as posterior regularization. arXiv preprint arXiv:1901.04112. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 86-96. 
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1715-1725.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.

Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. 2010. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371-3408.

Jiawei Wu, Xin Wang, and William Yang Wang. 2019. Extract and edit: An alternative to back-translation for unsupervised neural machine translation. arXiv preprint arXiv:1904.02331.

Zhen Yang, Wei Chen, Feng Wang, and Bo Xu. 2018. Unsupervised neural machine translation with weight sharing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 46-55.

Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, and Enhong Chen. 2018. Joint training for neural machine translation models with monolingual data. In Thirty-Second AAAI Conference on Artificial Intelligence.

# A Examples of Retrieval

Examples retrieved from monolingual English and Chinese corpora are shown in Figure 4. With this method, we can retrieve not only highly similar sentences like the first case, but also sentence pairs with rich sentence structures like the second one. The remaining retrieved pairs, though containing some noise, also provide high-quality alignments after rewriting, according to our observation.
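The retrieval step behind these examples, reduced to its core, is nearest-neighbor search in a shared sentence-embedding space. A brute-force cosine-similarity sketch follows; the function name and toy inputs are ours, and at corpus scale the paper relies on approximate search (e.g. HNSW) rather than a dense similarity matrix.

```python
import numpy as np

def retrieve_nearest(src_emb, tgt_emb, k=1):
    """For each source sentence embedding (rows of src_emb), return the
    indices of the k most cosine-similar target embeddings."""
    s = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    t = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = s @ t.T                         # cosine similarity matrix
    return np.argsort(-sim, axis=1)[:, :k]
```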
1. [src]: This comment received more than 40 reader responses. [trg]: 这项评论收到了250多项读者建议。
2. [src]: Cholera is contracted by consuming food or water contaminated with the fecal bacteria Vibrio cholerae. [trg]: 霍乱是由霍乱弧菌引起的急性肠道传染病,主要经由不洁净的水和食物传播。
3. [src]: The audience of this meeting is from officials and non-profit organizations. [trg]: 这次会议吸引了来自政府和非政府部门的众多听众。 [Note]: A from B and C -> 来自B和C的A
4. [src]: The Sydney Opera House is one of the most iconic landmarks in the world. [trg]: 山顶是世界上最具特色的景点之一。 [Note]: A is one of B in the world -> A是世界上的B之一
+ +Figure 4: Examples of similar sentences retrieved by our method. The underlined words are already aligned. The note is a hierarchical translation rule, which belongs to a rich sentence structure. + +# B Examples of Rewriting + +We list some rewriting cases from $en$ to $zh$ in this section. Figure 6 shows some retrieved sentence + +pairs before and after being rewritten, to demonstrate the effectiveness of our retrieval method and rewriting model. From the first case, we see that the unaligned word "CPSC" is replaced with the right one "她" (she); unrelated words "锂离子" (lithium-ion) and "消费者" (consumer) are removed; "设备" (device) and "爆炸" (explosion) are added into the rewritten sentence. From the second case, we see that the unaligned word "小组" (group) is replaced with the right one "科学家们" (scientists); unrelated words "迎来" (welcome) and "天文学" (astronomy) are removed; "最大" (biggest) and "突破" (breakthrough) are added in the rewritten sentence. The two cases show that our rewriting model can produce the target sentences that are better aligned with the given sources. + +# C Examples of Translation + +Figure 5 shows some translation results generated by our unsupervised MT models to exemplify the final performance. The cases verify that our method empowers the models to learn rich sentence structure such as the hierarchical translation rules of "be A that B" $\rightarrow$ "是B的A" in the first case and "act as if A" $\rightarrow$ "表现的好像A一样" in the second one. This means that our initialization method can preserve the rich sentence structures of the original monolingual sentences, thus giving better initialization for initial UMT models. + +# D Data Preprocessing + +We use Moses scripts3 for tokenization and truecasing. For Chinese tokenization, we use our in-house tool. For SMT, we use the Moses implementation of hierarchical PBSMT systems with Salm (Johnson et al., 2007). 
For the rewriting and NMT models, we use the modified version of the public implementation4 of the Transformer (Vaswani et al., 2017) base model. The rewriting model is based on word level with the vocabulary size of 200,000, while the unsupervised NMT model is based on BPE (Sennrich et al., 2016b) level with the vocabulary size of 60,000. The BPE vocabulary space is shared for each language pair. + +
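As a toy illustration of the BPE preprocessing mentioned above, a sketch of merge-rule learning in the spirit of Sennrich et al. (2016b); real pipelines use the released subword tools with vocabularies in the tens of thousands, and the tiny corpus below is invented.

```python
import re
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merge operations from a toy corpus.
    words: dict mapping space-separated symbol sequences to frequencies."""
    vocab = dict(words)
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            syms = word.split()
            for i in range(len(syms) - 1):
                pairs[(syms[i], syms[i + 1])] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Merge the best pair everywhere it occurs as whole symbols.
        pattern = re.compile(r'(?<!\S)' + re.escape(' '.join(best)) + r'(?!\S)')
        vocab = {pattern.sub(''.join(best), w): f for w, f in vocab.items()}
    return merges
```

Applying the learned merges to both languages over a shared symbol inventory is what makes the BPE vocabulary space shared, as described above.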
| | |
| --- | --- |
| Source | Batteries in some of the devices are overheating, causing a fire or an explosion, she said. |
| Retrieved target | CPSC说:“锂离子电池包会过热,给消费者造成燃烧和火灾危害。” |
| Rewritten target | 她说:“设备电池会过热,爆炸造成燃烧和火灾危害。” |
| Human reference | 她说:“一些设备中的电池会过热,从而造成火灾或爆炸。” |
| Source | Scientists have spotted gravitational waves in a historic discovery hailed as “the biggest scientific breakthrough of the century”. |
| Retrieved target | 国际研究小组说,对这些引力波的首次探测将迎来天文学的新纪元。 |
| Rewritten target | 研究科学家们,对这些引力波的首次探测是最大历史性突破的新纪元。 |
| Human reference | 科学家们探测到了引力波,这一历史性发现被誉为“本世纪最大的科学突破”。 |

Figure 6: Cases of the retrieved and rewritten sentences. The bold words are unaligned source words while the strikethrough words are unaligned target words. Human references are given by a translation expert.

| | |
| --- | --- |
| Source | He was the brother that went with the flow. |
| Output | 他是一个跟随流动的兄弟。 |
| Reference | 他是一个随从大家意见的人。 |
| Output | 在未来的日子里,这里有个报称埃尔多安先生表现的好像没有不好的事情发生过一样。 |
| Reference | 次日,此间一家报纸写到,埃尔多安先生则表现的好像什么都没发生一样。 |

Figure 5: Cases of the WMT17 English-Chinese translation results. The underlined words are in hierarchical rules. \ No newline at end of file diff --git a/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/images.zip b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..4b3f84b1976d455e1e33c439b268322a45d87fd0 --- /dev/null +++ b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8696d5cca15a0a5e0a6fde259457afbd1dd44726f7f7b3b7ce3b06134658a18d +size 410895 diff --git a/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/layout.json b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..df2b5b34adb34a88b07ded362b6f5d41c0941de3 --- /dev/null +++ b/aretrieveandrewriteinitializationmethodforunsupervisedmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1234e4c7eecd1eb0a983f922d83fdbe7517f91df3bb8e7f707b3b9d9a801cdf3 +size 214463 diff --git a/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_content_list.json b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..7b0fadc0b1c8bb9d61082ec196e619e12ad4ffbb --- /dev/null +++ b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:24572e74b74b083bffc73112958723f6f6f9092f2ba3f93a81b1c42703055924 +size 88325 diff --git a/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_model.json b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6eb99d8796b151216049cbfbc9af170fb310d9f1 --- /dev/null +++ b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:99f471f833ed336be3f3d0a6d225cd0f493f55d8bd49c096cb8d91d18afce6ce +size 112376 diff --git a/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_origin.pdf b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2658ec00cf04542de5ff9e2211da35277d7564ac --- /dev/null +++ b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/d2596ffb-9828-4eb1-82a8-cf7cc5b2dab2_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9bc46416f48c7183ff8a0eb8ef5ccdf02ad173049cc46058195fc46fdd2ec2f7 +size 667747 diff --git a/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/full.md b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f81ce4639fb6ee6e837a758a7bdaaef3f2c1045b --- /dev/null +++ b/aselftrainingmethodformachinereadingcomprehensionwithsoftevidenceextraction/full.md @@ -0,0 +1,439 @@ +# A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction + +Yilin Niu $^{1*}$ , Fangkai Jiao $^{2*}$ , Mantong Zhou $^{1}$ , Ting Yao 
$^{3}$, Jingfang Xu $^{3}$, Minlie Huang $^{1\dagger}$

$^{1}$ Department of Computer Science and Technology, Institute for Artificial Intelligence, State Key Lab of Intelligent Technology and Systems, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China

$^{2}$ School of Computer Science and Technology, Shandong University

Sogou Inc., Beijing, China

niuy1l4@tsinghua.org.cn jiaofangkai@hotmail.com zmt.keke@gmail.com

{yaoting, jingfang}@sogou-inc.com aihuang@tsinghua.edu.cn

# Abstract

Neural models have achieved great success on machine reading comprehension (MRC); many of them consist of two components: an evidence extractor and an answer predictor. The former seeks the most relevant information in a reference text, while the latter locates or generates answers based on the extracted evidence. Despite the importance of evidence labels for training the evidence extractor, they are not cheaply accessible, particularly in many non-extractive MRC tasks such as Yes/No question answering and multiple-choice MRC.

To address this problem, we present a Self-Training method (STM), which supervises the evidence extractor with auto-generated evidence labels in an iterative process. At each iteration, a base MRC model is trained with golden answers and noisy evidence labels. The trained model will predict pseudo evidence labels as extra supervision in the next iteration. We evaluate STM on seven datasets over three MRC tasks. Experimental results demonstrate improvements over existing MRC models, and we also analyze how and why such a self-training method works in MRC.

# 1 Introduction

Machine reading comprehension (MRC) has received increasing attention recently; it can be roughly divided into two categories: extractive and non-extractive MRC. 
Extractive MRC requires a model to extract an answer span to a question from reference documents, such as the tasks in SQuAD (Rajpurkar et al., 2016) and CoQA (Reddy et al., 2019). In contrast, non-extractive MRC infers answers based on some evidence in reference + +documents, including Yes/No question answering (Clark et al., 2019), multiple-choice MRC (Lai et al., 2017; Khashabi et al., 2018; Sun et al., 2019), and open domain question answering (Dhingra et al., 2017b). As shown in Table 1, evidence plays a vital role in MRC (Zhou et al., 2019; Ding et al., 2019; Min et al., 2018), and the coarse-to-fine paradigm has been widely adopted in multiple models (Choi et al., 2017; Li et al., 2018; Wang et al., 2018) where an evidence extractor first seeks the evidence from given documents and then an answer predictor infers the answer based on the evidence. However, it is challenging to learn a good evidence extractor due to the lack of evidence labels for supervision. + +Manually annotating the golden evidence is expensive. Therefore, some recent efforts have been dedicated to improving MRC by leveraging noisy evidence labels when training the evidence extractor. Some works (Lin et al., 2018; Min et al., 2018) generate distant labels using hand-crafted rules and external resources. Some studies (Wang et al., 2018; Choi et al., 2017) adopt reinforcement learning (RL) to decide the labels of evidence. However, such RL methods suffer from unstable training. More distant supervision techniques are also used to refine noisy labels, such as deep probability logic (Wang et al., 2019), but they are hard to transfer to other tasks. Nevertheless, improving the evidence extractor remains challenging when golden evidence labels are not available. + +In this paper, we present a general and effective method based on Self-Training (Scudder, 1965) to improve MRC with soft evidence extraction when golden evidence labels are not available. 
Following the Self-Training paradigm, a base MRC model is iteratively trained. At each iteration, the base model is trained with golden answers, as well as noisy evidence labels obtained at the preceding iteration. Then, the trained model generates noisy evidence labels, which will be used to supervise evidence extraction at the next iteration. The overview of our method is shown in Figure 1. Through this iterative process, the evidence is labeled automatically to guide the MRC model to find answers, and a better MRC model then benefits the evidence labeling process in return. Our method works without any manual effort or external information, and can therefore be applied to any MRC task. Besides, the Self-Training algorithm converges more stably than RL.

Q: Did a little boy write the note?
D: ...This note is from a little girl. She wants to be your friend. If you want to be her friend, ...
A: No

Q: Is she carrying something?
D: ...On the step, I find the elderly Chinese lady, small and slight, holding the hand of a little boy. In her other hand, she holds a paper carrier bag. ...
A: Yes

Table 1: Examples of Yes/No question answering. Evidential sentences in bold in reference documents are crucial to answer the questions.

The two main contributions of this paper are summarized as follows:

1. We propose a self-training method to improve machine reading comprehension by soft evidence labeling. Compared with existing methods, our method is more effective and general.
2. We verify the generalization and effectiveness of STM on several MRC tasks, including Yes/No question answering (YNQA), multiple-choice machine reading comprehension (MMRC), and open-domain question answering (ODQA). Our method is applicable to different base models, including BERT and DSQA (Lin et al., 2018). Experimental results demonstrate that our proposed method improves base models on three MRC tasks remarkably.
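A schematic of the iterative loop just described; all callables (`train`, `predict_evidence`, `confidence`) are placeholders for the base MRC model's operations, not the paper's actual interfaces.

```python
def self_training(U, iterations, train, predict_evidence, confidence, threshold):
    """Skeleton of the self-training loop: train on gold answers plus pseudo
    evidence labels, then move confident predictions from U into L."""
    L = []          # instances paired with pseudo evidence labels
    model = None
    for _ in range(iterations):
        # Train on all instances; L additionally supervises the extractor.
        model = train(U + [ex for ex, _ in L], L)
        # Label the still-unlabeled pool; keep only confident predictions.
        confident = []
        for ex in U:
            ev = predict_evidence(model, ex)
            if confidence(model, ex, ev) >= threshold:
                confident.append((ex, ev))
        for ex, ev in confident:
            U.remove(ex)
            L.append((ex, ev))
    return model, L
```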
# 2 Related Work

Early MRC studies focus on modeling semantic matching between a question and a reference document (Seo et al., 2017; Huang et al., 2018; Zhu et al., 2018; Mihaylov and Frank, 2018). In order to mimic the reading mode of humans, hierarchical coarse-to-fine methods have been proposed (Choi et al., 2017; Li et al., 2018). Such models first read the full text to select relevant text spans, and then infer answers from these relevant spans. Extracting such spans in MRC is drawing more and more attention, though it remains quite challenging (Wang et al., 2019).

Evidence extraction aims at finding evidential and relevant information for the downstream processes of a task, which arguably improves the overall performance of the task. Not surprisingly, evidence extraction is useful and has become an important component in fact verification (Zhou et al., 2019; Yin and Roth, 2018; Hanselowski et al., 2018; Ma et al., 2019), multiple-choice reading comprehension (Wang et al., 2019; Bax, 2013; Yu et al., 2019), open-domain question answering (Lin et al., 2018; Wang et al., 2018), multi-hop reading comprehension (Nishida et al., 2019; Ding et al., 2019), natural language inference (Wang et al., 2017; Chen et al., 2017), and a wide range of other tasks (Nguyen and Nguyen, 2018; Chen and Bansal, 2018).

In general, evidence extraction in MRC can be classified into four types according to the training method. First, unsupervised methods provide no guidance for evidence extraction (Seo et al., 2017; Huang et al., 2019). Second, supervised methods train evidence extraction with golden evidence labels, which sometimes can be generated automatically in extractive MRC settings (Lin et al., 2018; Yin and Roth, 2018; Hanselowski et al., 2018). Third, weakly supervised methods rely on noisy evidence labels, where the labels can be obtained by heuristic rules (Min et al., 2018).
Moreover, some data programming techniques, such as deep probability logic, have been proposed to refine noisy labels (Wang et al., 2019). Last, if a weak extractor is obtained via unsupervised or weakly supervised pre-training, reinforcement learning can be utilized to learn a better policy of evidence extraction (Wang et al., 2018; Choi et al., 2017).

For non-extractive MRC tasks, such as YNQA and MMRC, it is cumbersome and inefficient to annotate evidence labels (Ma et al., 2019). Although various methods for evidence extraction have been proposed, training an effective extractor is still a challenging problem when golden evidence labels are unavailable. Weakly supervised methods either suffer from low performance or rely on too many external resources, which makes them difficult to transfer to other tasks. RL methods can indeed train a better extractor without evidence labels. However, they are much more complicated and unstable to train, and highly dependent on model pre-training.

Our method is based on Self-Training, a widely used semi-supervised method. Most related studies follow the framework of traditional Self-Training (Scudder, 1965) and Co-Training (Blum and Mitchell, 1998), and focus on designing better policies for selecting confident samples. CoTrade (Zhang and Zhou, 2011) evaluates the confidence of whether a sample has been correctly labeled via a statistic-based data editing technique (Zighed et al., 2002). Self-paced Co-Training (Ma et al., 2017) adjusts labeled data dynamically according to the consistency between the two models trained on different views. A reinforcement learning method (Wu et al., 2018) designs an additional Q-agent as a sample selector.
+ +# 3 Methods + +# 3.1 Task Definition and Model Overview + +The task of machine reading comprehension can be formalized as follows: given a reference document composed of a number of sentences $D = \{S_{1}, S_{2}, \dots, S_{m}\}$ and a question $Q$ , the model should extract or generate an answer $\hat{A}$ to this question conditioned on the document, formally as + +$$ +\hat{A} = \operatorname *{argmax}_{A^{\prime}}P(A^{\prime}|Q,D). +$$ + +The process can be decomposed into two components, i.e., an evidence extractor and an answer predictor. The golden answer $A$ is given for training the entire model, including the evidence extractor and the answer predictor. Denote $E_{i}$ as a binary evidence label $\{0,1\}$ for the $i$ -th sentence $S_{i}$ , where $0/1$ corresponds to the non-evidence/evidence sentence, respectively. An auxiliary loss on the evidence labels can help the training of the evidence extractor. + +The overview of our method is shown in Figure 1, which is an iterative process. During training, two data pools are maintained and denoted as $U$ (unlabeled data) and $L$ (labeled data). In addition to golden answers, examples in $L$ are annotated with pseudo evidence labels. In contrast, there are only golden answers provided in $U$ . At each iteration, the base model is trained on both data pools (two training arrows). After training, the + +model makes evidence predictions on unlabeled instances (the labeling arrow), and then Selector chooses the most confident instances from $U$ to provide noisy evidence labels. In particular, the instances with newly generated evidence labels are moved from $U$ to $L$ (the moving arrow), which are used to supervise evidence extraction in the next iteration. This process will iterate several times. + +![](images/4269f24021f220b4fc50fdd8af0a205a0388a5b8b5887b9281ec8ba4acc15a6a.jpg) +Figure 1: Overview of Self-Training MRC (STM). The base model is trained on both $L$ and $U$ . 
After training, the base model is used to generate evidence labels for the data from $U$ , and then Selector chooses the most confident samples, which will be used to supervise the evidence extractor at the next iteration. The selected data is moved from $U$ to $L$ at each iteration. + +# 3.2 Base Model + +As shown in Figure 2, the overall structure of a base model consists of an encoder layer, an evidence extractor, and an answer predictor. + +![](images/3ed102603ea81de0e731c167837fbf3692efdca72760148bd0d94f0e3437fd35.jpg) +Figure 2: Overall structure of a base model that consists of an encoder layer, an evidence extractor, and an answer predictor. The encoders will obtain $\pmb{h}^{Q}$ for the question, and $\pmb{h}_{i}^{D}$ for each sentence in a document. The summary vector $\pmb{h}^{D}$ will be used to predict the answer. + +The encoder layer takes document $D$ and question $Q$ as input to obtain contextual representation for each word. Denote $h_{i,j}^{D}$ as the representation of the $j$ -th word in $S_{i}$ , and $h_{i}^{Q}$ as the representation of the $i$ -th word in question $Q$ . Our framework is agnostic to the architecture of the encoder, and we + +show improvements on two widely used encoding models, i.e., Transformer (with BERT, Devlin et al., 2019) and LSTM (with DSQA, Lin et al., 2018) in the experiments. + +The evidence extractor employs hierarchical attention, including token- and sentence-level attention, to obtain the document representation $h^{D}$ . 
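Concretely, the hierarchical attention of the evidence extractor can be sketched in numpy. This is a simplified single-pooling variant (the paper keeps two pooled vectors per sentence and uses bilinear scoring functions); dimensions, random weights, and names are placeholders, and the exact equations follow below.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

d = 8                                          # hidden size (placeholder)
W_tok, W_sent = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def hierarchical_attention(h_q, sentences):
    """h_q: (d,) question vector; sentences: list of (len_i, d) word matrices.
    Token-level attention pools each sentence into a vector; sentence-level
    attention pools sentence vectors into the document summary h_D."""
    h_sent = []
    for H in sentences:                        # token-level attention
        alpha = softmax(H @ W_tok @ h_q)       # bilinear scores vs. question
        h_sent.append(alpha @ H)
    h_sent = np.stack(h_sent)
    gamma = softmax(h_sent @ W_sent @ h_q)     # sentence-level attention
    return gamma @ h_sent                      # document summary h_D

h_q = rng.normal(size=d)
doc = [rng.normal(size=(5, d)), rng.normal(size=(7, d))]
h_D = hierarchical_attention(h_q, doc)
```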
+ +Token-level attention obtains a sentence vector by self-attention (Vaswani et al., 2017) within the words in a sentence, as follows: + +$$ +\boldsymbol {h} _ {\boldsymbol {i}} ^ {\boldsymbol {D}} = \sum_ {j} ^ {| S _ {i} |} \alpha_ {i, j} \boldsymbol {h} _ {\boldsymbol {i}, \boldsymbol {j}} ^ {\boldsymbol {D}}, \alpha_ {i, j} \propto \exp (F ^ {S} (\boldsymbol {h} ^ {\boldsymbol {Q}}, \boldsymbol {h} _ {\boldsymbol {i}, \boldsymbol {j}} ^ {\boldsymbol {D}})), +$$ + +$$ +\boldsymbol {s} _ {\boldsymbol {i}} ^ {D} = \sum_ {j} ^ {| S _ {i} |} \beta_ {i, j} \boldsymbol {h} _ {i, j} ^ {D}, \beta_ {i, j} \propto \exp (\boldsymbol {w} _ {\boldsymbol {s}} \boldsymbol {h} _ {i, j} ^ {D} + b _ {s}), +$$ + +where $h^Q$ is the sentence representation of the question. $\alpha_{i,j}$ refers to the importance of word $j$ in sentence $i$ , and so on for $\beta_{i,j}$ . $\mathbf{w}_s$ and $\mathbf{b}_s$ are learnable parameters. The attention function $F^S$ follows the bilinear form (Kim et al., 2018). + +Sentence-level attention identifies important sentences conditioned on the question in a soft way to get the summary vector $(h^{D})$ , as follows: + +$$ +\pmb {h} ^ {D} = \sum_ {i} ^ {m} \gamma_ {i} \pmb {h} _ {i} ^ {D}, \gamma_ {i} \propto \exp (F ^ {D} (\pmb {h} ^ {\pmb {Q}}, \pmb {s} _ {i} ^ {D})), +$$ + +where $F^D$ has the same bilinear form as $F^S$ with different parameters. $\gamma_{i}$ refers to the importance of the corresponding sentence. + +The answer predictor adopts different structures for different MRC tasks. For Yes/No question answering, we use a simple linear classifier to infer answers. For multiple-choice MRC, we use a Multiple Layer Perceptron (MLP) with Softmax to obtain the score of each choice. And for open-domain question answering, one MLP is used to predict the answer start, and another MLP is used to predict the end. + +# 3.3 Loss Function + +We adopt two loss functions, one for task-specific loss and the other for evidence loss. 
+ +The task-specific loss is defined as the negative log-likelihood (NLL) of predicting golden answers, formally as follows: + +$$ +L _ {A} (D, Q, A) = - \log P (\hat {A} = A | D, Q), +$$ + +where $\hat{A}$ denotes the predicted answer and $A$ is the golden answer. + +When the evidence label $E$ is provided, we can impose supervision on the evidence extractor. For the most general case, we assume that a variable number of evidence sentences exist in each sample $(Q, A, D)$ . Inspired by the previous work (Nishida et al., 2019) that used multiple pieces of evidence, we calculate the evidence loss step by step. Suppose we will extract $K$ evidence sentences. In the first step, we compute the loss of selecting the most plausible evidence sentence. In the second step, we compute the loss in the remaining sentences, where the previously selected sentence is masked and not counted in computing the loss at the second step. The overall loss is the average of all the step-by-step loss until we select out $K$ evidence sentences. In this manner, we devise a BP-able surrogate loss function for choosing the top $K$ evidence sentences. + +Formally, we have + +$$ +L _ {E} (D, Q, E) = \frac {1}{K} \sum_ {k = 1} ^ {K} H (D, Q, E, M ^ {k}), +$$ + +where $K$ is the number of evidence sentences, a pre-specified hyperparameter. $M^{k} = \{M_{1}^{k}, M_{2}^{k}, \dots, M_{m}^{k}\}$ and each $M_{i}^{k} \in \{0, -\infty\}$ is a sentence mask, where 0 means sentence $i$ is not selected before step $k$ , and $-\infty$ means selected. + +At each step, the model will compute an attention distribution over the unselected sentences, as follows: + +$$ +\lambda_ {i} ^ {k} = \frac {\exp (F ^ {D} (\pmb {h} ^ {\pmb {Q}} , \pmb {s _ {i}}) + M _ {i} ^ {k})}{\sum_ {j} (\exp (F ^ {D} (\pmb {h} ^ {\pmb {Q}}, \pmb {s _ {j}}) + M _ {j} ^ {k}))}. +$$ + +As $M_i^k = -\infty$ for the previously selected sentences, the attention weight on those sentences will be zero, in other words, they are masked out. 
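The masking trick can be checked numerically; the scores below are made up for illustration.

```python
import numpy as np

def masked_sentence_attention(scores, mask):
    """lambda_i^k: softmax over sentences not yet selected.
    scores[i] plays the role of F^D(h^Q, s_i); mask[i] is 0 or -inf."""
    z = scores + mask
    z = z - z[np.isfinite(z)].max()   # stabilize; -inf entries stay -inf
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5])
mask = np.array([-np.inf, 0.0, 0.0])  # sentence 0 selected at an earlier step
lam = masked_sentence_attention(scores, mask)
```

As the text says, the weight on the previously selected sentence comes out exactly zero (`exp(-inf) == 0`), while the remaining weights still sum to one.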
Then, the step-wise loss can be computed as follows:

$$
H(D, Q, E, M^{k}) = -\log \max_{i} (\lambda_{i}^{k} \cdot E_{i}),
$$

where $\lambda_i^k$ indicates the attention weight for sentence $i$, and $E_{i} \in \{0,1\}$ is the evidence label for sentence $i$. The sentence with the largest attention weight will be chosen as the $k$-th evidence sentence.

For each sentence $i$, $M_i^1$ is initialized to 0. At each step $k$ ($k > 1$), the mask $M_i^k$ will be set to $-\infty$ if sentence $i$ was chosen as an evidence sentence at the preceding step $k - 1$, and the mask remains unchanged otherwise. Formally, the mask is updated as follows:

$$
M_{i}^{k} = \begin{cases} -\infty & i = \operatorname*{argmax}_{j} (\lambda_{j}^{k-1} E_{j}) \\ M_{i}^{k-1} & \text{otherwise} \end{cases}.
$$

During training, the total loss $L$ is the combination of the task-specific loss and the evidence loss:

$$
L = \sum_{(D, Q, A) \in U \cup L} L_{A}(D, Q, A) + \eta \sum_{(D, Q, E) \in L} L_{E}(D, Q, E), \tag{1}
$$

where $\eta$ is a factor to balance the two loss terms. $L$ and $U$ denote the sets of instances with and without evidence labels, respectively. Note that the evidence labels in $L$ are automatically obtained by our self-training method.

# 3.4 Self-Training MRC (STM)

STM is designed to improve base MRC models by generating pseudo evidence labels for evidence extraction when golden labels are unavailable. STM works in an iterative manner, and each iteration consists of two stages. One is to learn a better base model for answer prediction and evidence labeling. The other is to obtain more precise evidence labels for the next iteration using the updated model.

At each iteration, STM first trains the base model with golden answers and pseudo evidence labels from the preceding iteration, using the total loss as defined in Equation 1.
Then the trained model can predict a distribution of pseudo evidence labels for each unlabelled instance $(D, Q, A)$, and decides $\hat{E}$ as

$$
\hat{E} = \operatorname*{argmin}_{E'} L_{E}(D, Q, E'). \tag{2}
$$

Define the confidence of a labelled instance $(D, Q, A, \hat{E})$ as

$$
c(D, Q, A, \hat{E}) = \exp(-L_{A}(D, Q, A)) \cdot \exp(-L_{E}(D, Q, \hat{E})).
$$

Selector selects the instances with the largest confidence scores whose $L_{A}(D, Q, A)$ and $L_{E}(D, Q, \hat{E})$ are smaller than the pre-specified thresholds. These labelled instances will be moved from $U$ to $L$ for the next iteration.

In the first iteration (iteration 0), the initial labeled set $L$ is set to an empty set. Thus the base model is supervised only by golden answers. In this case, the evidence extractor is trained in a distantly supervised manner.

The procedure of one iteration of STM is illustrated in Algorithm 1. $\delta$ and $\epsilon$ are two thresholds (hyper-parameters). The sort operation ranks the candidate samples according to their confidence scores $s$ and returns the top-$n$ samples. $n$ varies across datasets, and details are presented in the appendix.
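The confidence-based selection can be sketched directly from the formulas above; the tuple layout and the function name are ours.

```python
import math

def select_confident(candidates, delta, epsilon, n):
    """Selector step: keep instances with l_A <= delta and l_E <= epsilon,
    score them by c = exp(-l_A) * exp(-l_E), and return the top-n.
    candidates: iterable of (instance, l_A, l_E) tuples."""
    kept = [(math.exp(-la) * math.exp(-le), item)
            for item, la, le in candidates
            if la <= delta and le <= epsilon]
    kept.sort(key=lambda t: -t[0])             # highest confidence first
    return [item for _, item in kept[:n]]
```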
+ +# Algorithm 1 One iteration of STM + +Input: Training sets $U, L$ ; Thresholds $\delta$ and $\epsilon$ ; Number of generated labels $n$ ; Weight of evidence loss $\eta$ ; + +Output: Trained MRC model $M$ ; Updated training sets $U, L$ ; + +1: Randomly initialize $M$ ; +2: Train $M$ on $U$ and $L$ ; +3: Initialize $L' = \varnothing$ +4: for each $(D, Q, A) \in U$ do +5: $l_{A} = L_{A}(D, Q, A)$ ; +6: Generate $\hat{E}$ via Equation 2; +7: $l_{\hat{E}} = L_E(D, Q, \hat{E})$ ; +8: if $l_{A} \leq \delta, l_{\hat{E}} \leq \epsilon$ then +9: $s = c(D, Q, A, \hat{E})$ ; +10: Add $(D, Q, A, \hat{E}, s)$ to $L'$ ; +11: end if +12: end for +13: $L^{\prime} = \text{sort}(L^{\prime}, n)$ ; +14: $L = L\cup L^{\prime},U = U\backslash L^{\prime}$ +15: return $M, U, L$ + +# 3.5 Analysis + +To understand why STM can improve evidence extraction and the performance of MRC, we revisit the training process and present a theoretical explanation, as inspired by (Anonymous, 2020). + +In Section 3.4, we introduce the simple labeling strategy used in STM. If there is no sample selection, the evidence loss can be formulated as + +$$ +\mathcal {L} _ {\theta^ {t}} = - \mathbb {E} _ {x \sim p (x)} \mathbb {E} _ {E \sim p _ {\theta^ {t - 1}} (E | x)} \log p _ {\theta^ {t}} (E | x), +$$ + +where $x$ represents $(D,Q,A)$ , and $\theta^t$ is the parameter of the $t$ -th iteration. In this case, pseudo evidence labels $E$ are randomly sampled from $p_{\theta^{t-1}}(E|x)$ to guide $p_{\theta^t}(E|x)$ , and therefore minimizing $\mathcal{L}_{\theta^t}$ will lead to $\theta^t = \theta^{t-1}$ . As a matter of fact, the sample selection strategy in STM is to filter out the low-quality pseudo labels with two + +
| Model / Dataset | CoQA | MARCO | BoolQ |
| --- | --- | --- | --- |
| BERT-MLP | 78.0 | 70.8 | 71.6 |
| BERT-HA | 78.8 | 71.3 | 72.9 |
| BERT-HA+RL | 79.3 | 70.3 | 70.4 |
| BERT-HA+Rule | 78.1 | 70.4 | 73.8 |
| BERT-HA+STM | 80.5† | 72.3‡ | 75.2† |
| BERT-HA+Gold | 82.0 | N/A | N/A |
Table 2: Classification accuracy on three Yes/No question answering datasets. N/A means there is no golden evidence label. Significance tests were conducted between BERT-HA+STM and the best baseline of each column (t-test). $\ddagger$ means $p$-value $< 0.01$, and $\dagger$ means $p$-value $< 0.05$.

distribution mappings, $f$ and $g$. The optimizing target becomes

$$
\mathcal{L}'_{\theta^t} = -\mathbb{E}_{x \sim f(p(x))} \mathbb{E}_{E \sim g(p_{\theta^{t-1}}(E|x))} \log p_{\theta^t}(E|x).
$$

In STM, $f$ is a filter function with two pre-specified thresholds, $\delta$ and $\epsilon$. $g$ is defined as argmax (Equation 2). Compared with random sampling, our strategy tends to prevent $\theta^t$ from learning wrong knowledge from $\theta^{t-1}$, and the subsequent training might benefit from implicitly learning the strategy. In general, the strategy of STM imposes naive prior knowledge on the base models via the two distribution mappings, which may partly explain the performance gains.

# 4 Experiments

# 4.1 Datasets

# 4.1.1 Yes/No Question Answering (YNQA)

CoQA (Reddy et al., 2019) is a multi-turn conversational question answering dataset where questions may be incomplete and need historical context to get the answers. We extracted the Yes/No questions from CoQA, along with their histories, to form a YNQA dataset.

BoolQ (Clark et al., 2019) consists of Yes/No questions from the Google search engine. Each question is accompanied by a related paragraph. We expanded each short paragraph by concatenating some randomly sampled sentences.

MS MARCO (Nguyen et al., 2016) is a large MRC dataset. Each question is paired with a set of reference documents, and the answer may not exist in the documents. We extracted all Yes/No questions, and randomly picked some reference documents containing evidence$^1$.
To balance the ratio of Yes and No questions, we randomly removed some questions whose answers are Yes.

# 4.1.2 Multiple-choice MRC

RACE (Lai et al., 2017) consists of about 28,000 passages and 100,000 questions from English exams for middle (RACE-M) and high (RACE-H) schools of China. The average number of sentences per passage in RACE-M and RACE-H is about 16 and 17, respectively.

DREAM (Sun et al., 2019) contains 10,197 multiple-choice questions with 6,444 dialogues, collected from English examinations. In DREAM, $85\%$ of the questions require reasoning with multiple evidential sentences.

MultiRC (Khashabi et al., 2018) is an MMRC dataset where the number of correct options for each question varies from 1 to 10. Each question in MultiRC is annotated with evidence from its reference document. The average number of annotated evidence sentences for each question is 2.3.

# 4.1.3 Open-domain QA (ODQA)

Quasar-T (Dhingra et al., 2017b) consists of 43,000 open-domain trivia questions, whose answers were extracted from ClueWeb09. For a fair comparison, we retrieved 50 reference sentences from ClueWeb09 for each question, the same as DSQA (Lin et al., 2018).

# 4.2 Baselines

We compared several methods in our experiments, including some powerful base models without evidence supervision and some existing methods (*+Rule/RL/DPL/STM) which improve MRC with noisy evidence labels. Experimental details are shown in the appendix.

YNQA and MMRC: (1) BERT-MLP utilizes a BERT encoder and an MLP answer predictor. The predictor makes its classification based on the BERT representation at the position of [CLS]. The parameters of the BERT module were initialized from BERT-base. (2) BERT-HA refers to the base model introduced in Section 3.2, which applies hierarchical attention over words and sentences. (3) Based on BERT-HA, BERT-HA+Rule supervises the evidence extractor with noisy evidence labels, which are derived from hand-crafted rules.
We have explored three types of rules based on Jaccard similarity, integer linear programming + +
| Model / Dataset | RACE-M Dev Acc | RACE-M Test Acc | RACE-H Dev Acc | RACE-H Test Acc | MultiRC F1m | MultiRC F1a | MultiRC EM0 | DREAM Dev Acc | DREAM Test Acc |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT+DPL | 64.2 | 62.4 | 58.5 | 60.2 | 70.5 | 67.8 | 13.3 | 57.3 | 57.7 |
| BERT-MLP | 66.2 | 65.5 | 61.6 | 59.5 | 71.8 | 69.1 | 21.2 | 63.9 | 63.2 |
| BERT-HA | 67.8 | 68.2 | 62.6 | 60.4 | 70.1 | 68.1 | 19.9 | 64.2 | 62.8 |
| BERT-HA+RL | 68.5 | 66.9 | 62.5 | 60.0 | 72.1 | 69.5 | 21.1 | 63.1 | 63.4 |
| BERT-HA+Rule | 66.6 | 66.4 | 61.6 | 59.0 | 69.5 | 66.7 | 17.9 | 62.5 | 63.0 |
| BERT-HA+STM | 69.3‡ | 69.2† | 64.7‡ | 62.6‡ | 74.0‡ | 70.9‡ | 22.0† | 65.3‡ | 65.8† |
| BERT-HA+Gold | N/A | N/A | N/A | N/A | 73.7 | 70.9 | 27.2 | N/A | N/A |
(ILP) (Boudin et al., 2015), and inverse term frequency (ITF) (Wang et al., 2019), among which ITF performed best in most cases. For simplicity, we only provided experimental results with the ITF rule. (4) Based on BERT-HA, BERT-HA+RL trains the evidence extractor via reinforcement learning, similar to (Choi et al., 2017). (5) Another method, GPT+DPL (Wang et al., 2019), based on deep probabilistic logic (DPL), is complicated and its source code is not provided; thus we directly used the results from the original paper and did not evaluate it on BERT.

ODQA: (1) For each question, DSQA (Lin et al., 2018) aggregates multiple relevant paragraphs from ClueWeb09, and then infers an answer from these paragraphs. (2) GA (Dhingra et al., 2017a) and BiDAF (Seo et al., 2017) perform semantic matching between questions and paragraphs with attention mechanisms. (3) $\mathbf{R}^3$ (Wang et al., 2018) is a reinforcement learning method that explicitly selects the paragraph most relevant to a given question for the subsequent reading comprehension module.

# 4.3 Main Results

# 4.3.1 Yes/No Question Answering

Table 2 shows the results on the three YNQA datasets. We only report the classification accuracy on the development sets since the test sets are unavailable.

BERT-HA+STM outperformed all the baselines, which demonstrates the effectiveness of our method. Compared with BERT-MLP, BERT-HA achieved better performance on all the three

Table 3: Results on three multiple-choice reading comprehension datasets. ($\mathrm{F1_a}$: F1 score on all answer options; $\mathrm{F1}_m$: macro-average F1 score over all questions; $\mathrm{EM_0}$: exact match.) Note that there is no golden evidence label on RACE and DREAM. The results for DPL (deep probabilistic logic) are copied from (Wang et al., 2019). Significance tests were conducted between BERT-HA+STM and the best baseline of each column (t-test).
$\ddagger$ means $p$ -value $< 0.01$ , and $\dagger$ means $p$ -value $< 0.05$ . + +
| Model | EM | F1 |
| --- | --- | --- |
| GA (Dhingra et al., 2017a) | 26.4 | 26.4 |
| BiDAF (Seo et al., 2017) | 25.9 | 28.5 |
| $\mathbf{R}^3$ (Wang et al., 2018) | 35.3 | 41.7 |
| DSQA (Lin et al., 2018) | 40.7 | 47.6 |
| +distant supervision | 41.7 | 48.7 |
| +STM | 41.8† | 49.2† |
Table 4: Experimental results on the test set of Quasar-T. $\mathbf{R}^3$ is an RL-based method. Results of GA, BiDAF and $\mathbf{R}^3$ are copied from (Lin et al., 2018). DSQA+STM outperforms the best baseline (DSQA+DS) significantly (t-test, $p$-value $< 0.05$; DS = distant supervision).

datasets, indicating that distant supervision on evidence extraction can benefit Yes/No question answering. However, compared with BERT-HA, BERT-HA+RL made no improvement on MARCO and BoolQ, possibly due to the high variance in training. Similarly, BERT-HA+Rule performed worse than BERT-HA on CoQA and MARCO, implying that it is more difficult for the rule-based method (inverse term frequency) to find correct evidence in these two datasets. In contrast, our method BERT-HA+STM is more general and performed the best on all datasets. BERT-HA+STM achieved performance comparable with BERT-HA+Gold, which stands for the upper bound obtained by providing golden evidence labels, indicating the effectiveness of the noisy labels in our method.

# 4.3.2 Multiple-choice MRC

Table 3 shows the experimental results on the three MMRC datasets. We adopt the metrics from the referred papers. STM improved BERT-HA consistently on RACE-H, MultiRC and DREAM in terms of all the metrics. However, the improvement on RACE-M is limited (a 1.0-point gain on the test set). The reason may be that RACE-M is much simpler than RACE-H, and thus it is not challenging for the evidence extractor of BERT-HA to find the correct evidence on RACE-M.

# 4.3.3 Open-domain Question Answering

Table 4 shows the exact match scores and F1 scores on Quasar-T. Distant evidence supervision (DS) indicates whether a passage contains the answer text. Compared with the base models DSQA and DSQA+DS, DSQA+STM achieved better performance in both metrics, which verifies that DSQA can also benefit from self-training. Our method is general and can improve both lightweight and heavyweight models, such as LSTM-based and BERT-based models, on different tasks.

# 4.4 Performance of Evidence Extraction

To evaluate the performance of STM on evidence extraction, we validated the evidence labels generated by several methods on the development sets of CoQA and MultiRC. Considering that the evidence of each question in MultiRC is a set of sentences, we adopted precision@k and recall@k as the metrics for MultiRC, which represent the precision and recall of the generated evidence labels, respectively, when $k$ sentences are predicted as evidence. We adopted only precision@1 as the metric for CoQA, as this dataset provides each question with one golden evidence sentence.

| Model / Dataset | CoQA P@1 | MultiRC R@1 | MultiRC R@2 | MultiRC R@3 | MultiRC P@1 | MultiRC P@2 | MultiRC P@3 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-HA | 20.0 | 28.2 | 49.8 | 62.5 | 62.3 | 55.2 | 46.6 |
| +RL | 5.2 | 10.5 | 22.3 | 32.9 | 24.0 | 25.3 | 24.7 |
| +Rule | 38.4 | 32.4 | 53.6 | 65.1 | 71.8 | 59.6 | 48.7 |
| +STM (iter 1) | 32.7 | 32.8 | 57.1 | 70.1 | 72.2 | 63.3 | 52.5 |
| +STM (iter 2) | 37.3 | 32.9 | 58.0 | 71.3 | 72.7 | 64.4 | 53.5 |
| +STM (iter 3) | 39.9 | 31.4 | 55.3 | 68.8 | 69.5 | 61.6 | 51.6 |
| BERT-HA+Gold | 53.6 | 33.7 | 59.5 | 73.4 | 74.5 | 65.9 | 54.8 |

Table 5: Evidence extraction evaluation on the development sets of CoQA and MultiRC. P@k / R@k represent the precision / recall of the generated evidence labels, respectively, for the top $k$ predicted evidence sentences.

Table 5 shows the performance of five methods for evidence labeling on the CoQA and MultiRC development sets. It can be seen that BERT-HA+STM outperformed the base model BERT-HA by a large margin in terms of all the metrics.
As a result, the evidence extractor augmented with STM provided more evidential information for the answer predictor, which may explain the improvements of BERT-HA+STM on the two datasets.

# 4.5 Analysis on Error Propagation

To examine whether error propagation exists and how severe it is in STM, we visualized the evolution of evidence predictions on the development set of CoQA (Figure 3). From the inside to the outside, the four rings show the statistics of the evidence predicted by BERT-HA (iteration 0) and BERT-HA+STM (iterations 1, 2, 3). Each ring is composed of all the instances from the development set of CoQA, and each radius corresponds to one sample. If the evidence of an instance is predicted correctly, the corresponding radius is marked in green, otherwise in purple. Two examples are shown in the appendix due to the space limit.

Self-correction. As the innermost ring shows, about $80\%$ of the evidence predicted by BERT-HA (iter 0) was incorrect. However, the proportion of wrong instances was reduced to $60\%$ after self-training (iter 3). More concretely, $27\%$ of the wrong predictions were gradually corrected with high confidence within three self-training iterations, as exemplified by instance $A$ in Figure 3.

Error propagation. We observed that $4\%$ of the evidence was mistakenly revised by STM, as exemplified by instance $B$ in Figure 3. In such a case, the incorrect predictions are likely to be retained in the next iteration. However, almost $50\%$ of such mistakes were finally corrected during the subsequent iterations, like instance $C$. This observation shows that STM can keep error propagation from causing catastrophic failure.

![](images/8fe33480e4c1bd8442b3a0f93d6c77970439a5faaaca792d53450456fc1a015f.jpg)
Figure 3: Evolution of evidence predictions on the development set of CoQA. From the inside to the outside, the four rings correspond to BERT-HA (iteration 0) and BERT-HA+STM (iterations 1, 2, 3), respectively.
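The precision@k and recall@k measures used to evaluate evidence extraction in Section 4.4 can be sketched as follows (a minimal illustration; `predicted` is assumed to be a list of sentence indices ranked by the extractor's scores, and the function names are ours):

```python
def precision_at_k(predicted, gold, k):
    """Fraction of the top-k predicted evidence sentences
    that appear in the gold evidence set."""
    hits = sum(1 for sent in predicted[:k] if sent in gold)
    return hits / k

def recall_at_k(predicted, gold, k):
    """Fraction of the gold evidence sentences recovered
    among the top-k predictions (gold is assumed non-empty)."""
    hits = sum(1 for sent in predicted[:k] if sent in gold)
    return hits / len(gold)
```

For CoQA, where each question has exactly one golden evidence sentence, only precision@1 is meaningful; for MultiRC, both measures are reported for several values of $k$.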
| Model / Metric | Ans. Acc | Evi. Acc |
| --- | --- | --- |
| RoBERTa-HA | 92.6 | 13.8 |
| RoBERTa-HA+STM | 92.7 | 19.3 (+40%) |
Table 6: Answer prediction accuracy (Ans. Acc) and evidence extraction accuracy (Evi. Acc) on the development set of CoQA.

# 4.6 Improvement Over Stronger Pretrained Models

To evaluate the improvement of STM over stronger pre-trained models, we employed RoBERTa-large (Liu et al., 2019) as the encoder in the base model. Table 6 shows the results on CoQA. STM significantly improved the evidence extraction (Evi. Acc) of the base model. However, the improvement on answer prediction (Ans. Acc) is marginal. One reason is that RoBERTa-HA achieved such high performance that there was limited room for improvement. Another possible explanation is that evidence information is not important for such stronger models to generate answers. In other words, they may be more adept at exploiting data bias to make answer predictions. In comparison, weaker pre-trained models, such as BERT-base, can benefit from evidence information due to their weaker ability to exploit data bias.

# 5 Conclusion and Future Work

We present an iterative self-training method (STM) to improve MRC models with soft evidence extraction when golden evidence labels are unavailable. In this iterative method, we train the base model with golden answers and pseudo evidence labels. The updated model then generates new pseudo evidence labels, which are used as additional supervision in the next iteration. Experimental results show that our proposed method consistently improves the base models on seven datasets for three MRC tasks, and that better evidence extraction indeed enhances the final performance of MRC.

As future work, we plan to extend our method to other NLP tasks which rely on evidence finding, such as natural language inference.

# Acknowledgments

This work was jointly supported by the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096), and the National Key R&D Program of China (Grant No. 2018YFC0830200).
We thank THUNUS NExT Joint-Lab for the support.

# References

Anonymous. 2020. Revisiting self-training for neural sequence generation. ICLR under review.
Stephen Bax. 2013. The cognitive processing of candidates during reading tests: Evidence from eye-tracking. Language Testing, 30:441-465.
Avrim Blum and Tom M. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT, pages 92-100.
Florian Boudin, Hugo Mougard, and Benoit Favre. 2015. Concept-based summarization using integer linear programming: From concept pruning to multiple optimal solutions. In EMNLP, pages 1914-1918.
Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In ACL, pages 1657-1668.
Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In ACL, pages 675-686.
Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. 2017. Coarse-to-fine question answering for long documents. In ACL, pages 209-220.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In NAACL, pages 2924-2936.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171-4186.
Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. 2017a. Gated-attention readers for text comprehension. In ACL, pages 1832-1846.
Bhuwan Dhingra, Kathryn Mazaitis, and William W. Cohen. 2017b. Quasar: Datasets for question answering by search and reading. CoRR, abs/1707.03904.
Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. 2019. Cognitive graph for multi-hop reading comprehension at scale. In ACL, pages 2694-2703.
Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. UKP-Athene: Multi-sentence textual entailment for claim verification. CoRR, abs/1809.01479.
Hsin-Yuan Huang, Eunsol Choi, and Wen-tau Yih. 2019. FlowQA: Grasping flow in history for conversational machine comprehension. In ICLR.
Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, and Weizhu Chen. 2018. FusionNet: Fusing via fully-aware attention with application to machine comprehension. In ICLR.
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In NAACL, pages 252-262.
Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. 2018. Bilinear attention networks. In NIPS, pages 1571-1581.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In EMNLP, pages 785-794.
Weikang Li, Wei Li, and Yunfang Wu. 2018. A unified model for document-based question answering based on human-like reading strategy. In AAAI, pages 604-611.
Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In ACL, pages 1736-1745.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
Fan Ma, Deyu Meng, Qi Xie, Zina Li, and Xuanyi Dong. 2017. Self-paced co-training. In ICML, pages 2275-2284.
Jing Ma, Wei Gao, Shafiq R. Joty, and Kam-Fai Wong. 2019. Sentence-level evidence embedding for claim verification with hierarchical attention networks. In ACL, pages 2561-2571.
Todor Mihaylov and Anette Frank. 2018. Knowledgeable reader: Enhancing cloze-style reading comprehension with external commonsense knowledge. In ACL, pages 821-832.
Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In ACL, pages 1725-1735.
Minh Nguyen and Thien Nguyen. 2018. Who is killed by police: Introducing supervised attention for hierarchical LSTMs. In COLING, pages 2277-2287.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In NIPS.
Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, and Junji Tomita. 2019. Answering while summarizing: Multi-task learning for multi-hop QA with evidence extraction. In ACL, pages 2335-2345.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pages 2383-2392.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. TACL, 7:249-266.
H. J. Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Trans. Information Theory, 11.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In ICLR.
Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension. TACL, 7:217-231.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS, pages 6000-6010.
Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, Dan Roth, and David A. McAllester. 2019. Evidence sentence extraction for machine reading comprehension. CoRR, abs/1902.08852.
Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. 2018.
$\mathbf{R}^{3}$: Reinforced ranker-reader for open-domain question answering. In AAAI, pages 5981-5988.
Zhiguo Wang, Wael Hamza, and Radu Florian. 2017. Bilateral multi-perspective matching for natural language sentences. In IJCAI, pages 4144-4150.
Jiawei Wu, Lei Li, and William Yang Wang. 2018. Reinforced co-training. In NAACL, pages 1252-1262.
Wenpeng Yin and Dan Roth. 2018. TwoWingOS: A two-wing optimization strategy for evidential claim verification. In EMNLP, pages 105-114.
Jianxing Yu, Zhengjun Zha, and Jian Yin. 2019. Inferential machine comprehension: Answering questions by recursively deducing the evidence chain from text. In ACL, pages 2241-2251.
Min-Ling Zhang and Zhi-Hua Zhou. 2011. CoTrade: Confident co-training with data editing. TSMCB, 41:1612-1626.
Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and reasoning for fact verification. In ACL, pages 892-901.
Chenguang Zhu, Michael Zeng, and Xuedong Huang. 2018. SDNet: Contextualized attention-based deep network for conversational question answering. CoRR, abs/1812.03593.
Djamel A. Zighed, Stéphane Lallich, and Fabrice Muhlenbach. 2002. Separability index in supervised learning. In PKDD, pages 475-487.

# A Case Study

![](images/7098b526a8b94c4fc9ee5ceae276c8c671d5e22a44b8fb90cd6fffff55935c11.jpg)
Case 1

![](images/74e6f7ec678e64af7b26ed07c56c90e5cca58a53417a368bb2254f3230da2b95.jpg)
Case 2

Figure 4: Weight distribution of the two cases from the sentence-level attention.

In Section 4.5 of the main paper, we provide a quantitative analysis of the evolution of evidence predictions, and draw two conclusions: (1) STM can help the base model to correct itself; (2) error propagation, though it exists, will not result in catastrophic failure.

To help understand these two conclusions, we provide two corresponding cases from the development set of CoQA (Reddy et al., 2019).
The original instances are shown in Table 7, and the weight distribution from the sentence-level attention is shown in Figure 4. In case 1, BERT-HA made a wrong evidence prediction, while STM revised it subsequently, which shows the ability of self-correction. In case 2, BERT-HA first selected the correct evidence with high confidence. However, in iteration 1, BERT-HA with STM was distracted by another plausible sentence. Instead of insisting on the incorrect prediction, STM led BERT-HA back to the right way, which shows that error propagation is not catastrophic.

# B Hyper-Parameters for Self-Training

We implemented BERT-HA with BERT-base from a commonly used library$^2$, and directly used the original source code of $\mathrm{DSQA}^3$ (Lin et al., 2018). All the code and datasets will be released after the review period. The hyper-parameters used in BERT-HA and BERT-HA+STM are shown in Table 8.

# Case 1)

# Passage:

...(3) "Why don't you tackle Indian River, Daylight?" (4) Harper advised, at parting. (5) There's whole slathers of creeks and draws draining in up there, and somewhere gold just crying to be found. (6) That's my hunch. (7) There's a big strike coming, and Indian River ain't going to be a million miles away. (8) "And the place is swarming with moose," Joe Ladue added. (9) "Bob Henderson's up there somewhere, been there three years now, swearing something big is going to happen, living off'n straight moose and prospecting around like a crazy man." (10) Daylight decided to go Indian River a flutter, as he expressed it; but Elijah could not be persuaded into accompanying him. Elijah's soul had been seared by famine, and he was obsessed by fear of repeating the experience. (11) "I jest can't bear to separate from grub," he explained. (12) "I know it's downright foolishness, but I jest can't help it..."

Question: Are there many bodies of water there?
Answer: No

# Case 2)

# Passage:

(1) If you live in the United States, you can't have a full-time job until you are 16 years old. (2) At 14 or 15, you work part-time after school or on weekends, and during summer vacation you can work 40 hours each week. (3) Does all that mean that if you are younger than 14, you can't make your own money? (4) Of course not! (5) Kids from 10-13 years of age can make money by doing lots of things. (6) Valerie, 11, told us that she made money by cleaning up other people's yards. ...(11) Kids can learn lots of things from making money. (12) By working to make your own money, you are learning the skills you will need in life. (13) These skills can include things like how to get along with others, how to use technology and how to use your time wisely. (14) Some people think that asking for money is a lot easier than making it; however, if you can make your own money, you don't have to depend on anyone else...

Question: Can they learn time management?

Answer: No

Table 7: Examples from the development set of CoQA. Evidential sentences in red in reference passages are crucial to answer the questions. Sentences in blue are distracting as Figure 4 shows.
| Dataset | RACE-H | RACE-M | DREAM | MultiRC | CoQA | MARCO | BoolQ |
| --- | --- | --- | --- | --- | --- | --- | --- |
| $L_{\max}$ | 380 | 380 | 512 | 512 | 512 | 480 | 512 |
| learning rate | 5e-5♣/4e-5♠ | 5e-5♣/4e-5♠ | 2e-5 | 2e-5 | 2e-5 | 2e-5 | 3e-5 |
| epoch | 3 | 3 | 5 | 8 | 3 | 2♣/3♠ | 4 |
| $\eta$ | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 |
| batch size | 32 | 32 | 32 | 32 | 6 | 8 | 6 |
| $\epsilon$ | 0.5 | 0.5 | 0.5 | 0.5 | 0.6 | 0.5 | 0.5 |
| $\delta$ | 0.9 | 0.9 | 0.8 | 0.8 | 0.9 | 0.9 | 0.7 |
| $n$ | 40000 | 10000 | 3000 | 2000 | 1500 | 1000 | 500 |
| $K_{\max}$ | 2 | 3 | 4 | 3 | 1 | 1 | 1 |
Table 8: Hyper-parameters marked with $\clubsuit/\spadesuit$ are used in BERT-HA/BERT-HA+STM, respectively. Other unmarked hyper-parameters are shared by these two models.
# A Simple and Effective Unified Encoder for Document-Level Machine Translation

Shuming Ma, Dongdong Zhang, Ming Zhou

Microsoft Research Asia

{shumma, dozhang, mingzhou}@microsoft.com

# Abstract

Most of the existing models for document-level machine translation adopt dual-encoder structures.
The representations of the source sentences and the document-level contexts$^1$ are modeled with two separate encoders. Although these models can make use of the document-level contexts, they do not fully model the interaction between the contexts and the source sentences, and cannot directly adapt to the recent pre-training models (e.g., BERT), which encode multiple sentences with a single encoder. In this work, we propose a simple and effective unified encoder that can outperform baseline models with dual-encoder structures in terms of BLEU and METEOR scores. Moreover, pre-training models can further boost the performance of our proposed model.

# 1 Introduction

Thanks to the development of deep learning methods, machine translation systems have achieved good performance that is even comparable with human translation in the news domain (Hassan et al., 2018). However, there are still some problems with machine translation in the document-level context (Läubli et al., 2018). Therefore, more recent work (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Bawden et al., 2018; Voita et al., 2019a; Junczys-Dowmunt, 2019) is focusing on document-level machine translation.

Most of the existing models (Zhang et al., 2018; Maruf et al., 2019; Werlen et al., 2018) for document-level machine translation use two encoders to model the source sentences and the document-level contexts. Figure 1a illustrates the structure of these models. They extend the standard

![](images/e3557dab2bcd14abad2f48ab710a76c67c3d9648457129d47d4b9c11e888d019.jpg)
(a) Dual-Encoder Structure

![](images/f57bf3123c53331601c96e6667387e370ddd23b1fac2db93ef78919a3c24e1fd.jpg)
(b) Uni-Encoder Structure

Figure 1: The overview of the dual-encoder structure and the uni-encoder structure for document-level machine translation.
Transformer model with a new context encoder, and the encoder for source sentences is conditioned on this context encoder. However, they do not fully model the interaction between the contexts and the source sentences, because the self-attention layers are performed inside each encoder separately. Moreover, this structure cannot be directly adapted to the recent pre-training models (Devlin et al., 2019; Peters et al., 2018; Radford et al., 2019; Dong et al., 2019; Song et al., 2019; Lample and Conneau, 2019), which encode multiple sentences with a single encoder.

Different from the dual-encoder structure, the uni-encoder structure takes the concatenation of contexts and source sentences as the input (as shown in Figure 1b). Therefore, when modeling the contexts, it can make full use of the interaction between the source sentences and the contexts, while the dual-encoder model fails to exploit this information. Moreover, the uni-encoder structure is identical to the recent pre-training models (e.g., BERT). However, the previous uni-encoder structure suffers from two problems for document-level machine translation. First, the attention is distracted due to longer sequences. Second, the source sentences and the contexts are modeled equally, which is contrary to the fact that the translation is more related to the current source sentences.

To address these problems, we propose a novel flat structure with a unified encoder, called Flat-Transformer. It separates the encoder of the standard Transformer into two parts so that the attention can concentrate at both the global level and the local level. At the bottom of the encoder blocks, the self-attention is applied to the whole sequence. At the top of the blocks, it is only applied at the positions of the source sentences. We evaluate this model on three document-level machine translation datasets.
Experiments show that it achieves better performance than the dual-encoder baseline models in terms of BLEU and METEOR scores. Moreover, pre-training models can further boost the performance of the proposed structure.

# 2 Flat-Transformer

In this section, we introduce our proposed flat structured model, which we denote as Flat-Transformer.

# 2.1 Document-Level Translation

Formally, we denote $X = \{x_{1}, x_{2}, \dots, x_{N}\}$ as the source document with $N$ sentences, and $Y = \{y_{1}, y_{2}, \dots, y_{M}\}$ as the target document with $M$ sentences. We assume that $N = M$ because sentence mismatches can be fixed by merging sentences with sentence alignment algorithms (Sennrich and Volk, 2011). Therefore, we can assume that $(x_{i}, y_{i})$ is a parallel sentence pair.

Following Zhang et al. (2018), $y_{<i}$ can be omitted because $x_{<i}$ and $y_{<i}$ convey the same information. As a result, the probability can be approximated as:

$$
P(Y \mid X) \approx \prod_{i=1}^{N} P\left(y_{i} \mid x_{i}; x_{<i}; x_{>i}\right) \tag{1}
$$

where $x_{i}$ is the source sentence aligned to $y_{i}$, and $(x_{<i}; x_{>i})$ is the document-level context used to translate $y_{i}$.

![](images/310ac5ebb009f574f188d1c27852c8379588b712e878dceeb7102da11527634b.jpg)
Figure 2: The architecture of the proposed Flat-Transformer model.

# 2.2 Segment Embedding

The flat structure adopts a unified encoder that does not distinguish the context sentences from the source sentences. Therefore, we introduce a segment embedding to identify these two types of inputs. Formally, given the source input of the surrounding context $c$ and the current sentence $x$, we project them into word embeddings and segment embeddings.
Then, we perform a concatenation operation to unify them into a single input:

$$
\boldsymbol{e} = \left[ E(\boldsymbol{c}); E(\boldsymbol{x}) \right] \tag{2}
$$

$$
\boldsymbol{s} = \left[ S(\boldsymbol{c}); S(\boldsymbol{x}) \right] \tag{3}
$$

where $[;]$ denotes the concatenation operation, $E$ is the word embedding matrix, and $S$ is the segment embedding matrix. Finally, we add $e$ and $s$ to form the input of the encoder.

# 2.3 Unified Flat Encoder

Given the document context, the input sequences of Flat-Transformer are much longer than those of the standard Transformer, which brings additional challenges. First, the attention is distracted, and its weights become much smaller after the normalization operation. Second, the memory consumption and the computation cost increase, so it is difficult to enlarge the model size, which hinders the adaptation to pre-training models.

To address these problems, we introduce a unified flat encoder. As shown in Figure 2, at the bottom of the encoder blocks, we apply the self-attention and feed-forward layers to the concatenated sequence of the contexts and the current sentence:

$$
h_{1} = \operatorname{Transformer}(e + s; \theta) \tag{4}
$$

where $\theta$ denotes the parameters of the Transformer blocks. At the top of the encoder blocks, each self-attention and feed-forward layer is only applied at the positions of the current sentences:

$$
h_{2} = \operatorname{Transformer}\left(h_{1}[s:t]; \theta\right) \tag{5}
$$

where $s$ and $t$ are the starting and ending positions of the source sentences in the concatenated sequence. In this way, the attention can focus more on the current sentences, while the contexts serve as supplemental semantics for the current sentences. Note that the total number of bottom and top blocks equals the number of blocks in the standard Transformer, so the model has no more parameters than the standard Transformer.
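The bottom/top split can be sketched as follows. This is a toy illustration, not the authors' implementation: the `block` function stands in for a full self-attention plus feed-forward Transformer layer (here it simply mixes each position with the sequence mean), and the segment embedding is reduced to adding a scalar offset per segment.

```python
# Toy sketch of the unified flat encoder: bottom blocks see the whole
# concatenated sequence, top blocks see only the source-sentence slice.

def block(states):
    # Stand-in for one Transformer layer: mix every position with the
    # mean of all positions it is allowed to attend to.
    mean = [sum(col) / len(states) for col in zip(*states)]
    return [[h + m for h, m in zip(row, mean)] for row in states]

def flat_encoder(ctx_emb, src_emb, n_bottom=1, n_top=5):
    # "Segment embedding" simplified to a scalar offset: 0.0 for context
    # positions, 1.0 for source positions.
    seq = [[v + 0.0 for v in e] for e in ctx_emb] + \
          [[v + 1.0 for v in e] for e in src_emb]
    s, t = len(ctx_emb), len(seq)      # source span [s:t] in the concatenation
    for _ in range(n_bottom):          # bottom blocks: full sequence
        seq = block(seq)
    src = seq[s:t]                     # keep only the source positions
    for _ in range(n_top):             # top blocks: source positions only
        src = block(src)
    return src                         # representation fed to the decoder

ctx = [[0.1, 0.2], [0.3, 0.4]]         # two context "word embeddings"
src = [[0.5, 0.6]]                     # one source word
out = flat_encoder(ctx, src)
print(len(out))                        # prints 1: only source positions remain
```

Only the source positions survive the slice, which mirrors why the decoder attention in the full model is not distracted by context tokens.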
# 2.4 Training and Decoding

The training of Flat-Transformer is consistent with that of the standard Transformer, using the cross-entropy loss:

$$
L = -\sum_{i=1}^{n} \log P\left(\mathbf{Y}_{i} \mid \mathbf{X}_{i}\right) \tag{6}
$$

At the decoding step, the model translates the document sentence by sentence. When translating each sentence, it predicts the target sequence with the highest probability given the current sentence $x_{i}$ and the surrounding contexts $x_{<i}$ and $x_{>i}$:

$$
\hat{y}_{i} = \underset{y_{i} \in \mathcal{V}}{\arg\max}\ P\left(y_{i} \mid x_{i}; x_{<i}; x_{>i}\right) \tag{7}
$$

# 2.5 Comparison with Existing Models

Here, we summarize the main differences between our model and existing models for document-level machine translation:

1. Compared with the dual-encoder models, our model uses a unified encoder. To combine the representations of the two encoders for the decoder, dual-encoder models must add extra layers inside the encoders. Flat-Transformer does not put any layer on top of the standard Transformer, so it is consistent with recent pre-training models.
2. Compared with the previous uni-encoder models, our model restricts the top Transformer layers to modeling only the source sentences. In this way, our model has an inductive bias towards the current sentences over the contexts, because the translation is more related to the current sentences.

| Dataset | #Sent | Avg. #Sent |
| --- | --- | --- |
| TED | 0.21M/9K/2.3K | 121/96/99 |
| News | 0.24M/2K/3K | 39/27/19 |
| Europarl | 1.67M/3.6K/5.1K | 14/15/14 |
Table 1: Statistics of the three document-level machine translation datasets (train/dev/test).

3. There are also some alternative approaches to limiting the use of context vectors. For example, we can restrict only the top attention layers to attend to the source sentence while keeping the feed-forward layers the same. Compared with this approach, our model does not feed the output vectors of the context encoder to the decoder, so the decoder attention is not distracted by the contexts. The context vectors in our model are only used to help encode a better representation of the current source sentences.

# 3 Experiments

We evaluate the proposed model and several state-of-the-art models on three document-level machine translation benchmarks. We denote the proposed model as Flat-Transformer.

# 3.1 Datasets

Following previous work (Maruf et al., 2019), we use three English-German datasets as benchmarks: TED, News, and Europarl. The statistics of these datasets can be found in Table 1. We obtain the processed datasets from Maruf et al. $(2019)^{2}$, so that our results are directly comparable with those reported in Maruf et al. (2019). We use the scripts of the Moses toolkit to tokenize the sentences. We also split the words into subword units (Sennrich et al., 2016) with $30\mathrm{K}$ merge operations. The evaluation metrics are BLEU (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005).

# 3.2 Implementation Details

The batch size is limited to 4,000 tokens for all models. We set the hidden units of the multi-head component and the feed-forward layer to 512 and 1024, respectively. The embedding size is 512, the number of heads is 4, and the dropout rate (Srivastava et al., 2014) is 0.3. The number of Transformer blocks

| | Model | TED BLEU | TED METEOR | News BLEU | News METEOR | Europarl BLEU | Europarl METEOR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dual | HAN (Werlen et al., 2018) | 24.58 | 45.48 | 25.03 | 44.02 | 29.58 | 46.91 |
| | SAN (Maruf et al., 2019) | 24.62 | 45.32 | 24.84 | 44.27 | 29.90 | 47.11 |
| | QCN (Yang et al., 2019) | 25.19 | 45.91 | 22.37 | 41.88 | 29.82 | 47.86 |
| | Transformer (Zhang et al., 2018) | 24.01 | 45.30 | 22.42 | 42.30 | 29.93 | 48.16 |
| | +BERT | 23.19 | 45.25 | 22.06 | 42.25 | 30.72 | 48.62 |
| Uni | RNN (Bahdanau et al., 2015) | 19.24 | 40.81 | 16.51 | 36.79 | 26.26 | 44.14 |
| | Transformer (Vaswani et al., 2017) | 23.28 | 44.17 | 22.78 | 42.19 | 28.72 | 46.22 |
| | Our Flat-Transformer | 24.87 | 47.05 | 23.55 | 43.97 | 30.09 | 48.56 |
| | +BERT | 26.61 | 48.53 | 24.52 | 45.40 | 31.99 | 49.76 |
+ +Table 2: Results on three document-level machine translation benchmarks ("Dual" denotes dual-encoder, while "Uni" means uni-encoder). + +
| TED | BLEU | METEOR |
| --- | --- | --- |
| Flat-Transformer | 24.87 | 47.05 |
| w/o Segment | 24.36 | 46.20 |
| w/o Unified | 23.28 | 44.17 |
Table 3: Ablation study on the TED dataset.

for the top encoder is 5, while that for the bottom encoder is 1. When fine-tuning with the pre-trained BERT, we adopt the base setting: the hidden size, the feed-forward dimension, and the number of heads are 768, 3072, and 12, respectively. To balance accuracy and computation cost, we use one previous sentence and one next sentence as the surrounding contexts.

We use the Adam optimizer (Kingma and Ba, 2014) to train the models. For the hyper-parameters of the Adam optimizer, we set the two momentum parameters to $\beta_{1} = 0.9$ and $\beta_{2} = 0.98$, and $\epsilon = 1 \times 10^{-8}$. The learning rate increases linearly from 0 to $5 \times 10^{-4}$ over the first 4,000 warm-up steps and then decreases proportionally to the inverse square root of the update number. We also apply label smoothing to the cross-entropy loss, with a smoothing rate of 0.1. We use early stopping with a patience of 10 epochs: training stops if the loss on the validation set does not decrease for 10 epochs.

# 3.3 Baselines

We compare our models with two categories of baseline models: dual-encoder models and uni-encoder models.

Uni-encoder: RNNSearch (Bahdanau et al., 2015) is an RNN-based sequence-to-sequence model with the attention mechanism. Transformer (Vaswani et al., 2017) is a popular model for machine translation, based solely on attention mechanisms. For a fair comparison, we use the same hyper-parameters as our model's, described in Section 3.2.

Dual-encoder: Zhang et al. (2018) extends the Transformer model with a new context encoder to represent the contexts. HAN (Werlen et al., 2018) is the first to use a hierarchical attention model to capture the context in a structured and dynamic manner. SAN (Maruf et al., 2019) proposes a selective attention model that uses sparse attention to focus on relevant sentences in the document context.
QCN (Yang et al., 2019) proposes a query-guided capsule network to cluster context information into different perspectives.

# 3.4 Results

We compare our Flat-Transformer model with the above baselines. Table 2 summarizes the results. It shows that our Flat-Transformer obtains scores of 24.87/23.55/30.09 on the three datasets in terms of BLEU, and 47.05/43.97/48.56 in terms of METEOR, significantly outperforming the previous flat models (RNNSearch and Transformer).

By fine-tuning with BERT, Flat-Transformer achieves improvements of $+1.74/+0.97/+1.90$ BLEU scores as well as $+1.48/+1.43/+1.20$ METEOR scores. This demonstrates that Flat-Transformer is compatible with the pre-trained BERT model. Except for the BLEU score on the News dataset, Flat-Transformer significantly outperforms the dual-encoder models, achieving state-of-the-art performance in terms of both BLEU and METEOR scores. On the contrary, the dual-encoder Transformer is not compatible with BERT: it gets slightly worse performance on two datasets, mainly because the model size becomes larger to match the setting of BERT, and BERT does not provide a good prior initialization for modeling the uni-directional relationship from contexts to source sentences.

# 3.5 Ablation Study

To analyze the effect of each component of Flat-Transformer, we conduct an ablation study by removing each of them on the TED dataset. Table 3 summarizes the results. We first remove the segment embedding but keep the unified structure. The results show that the segment embedding contributes an improvement of 0.51 BLEU score and 0.85 METEOR score, showing the importance of explicitly identifying the contexts and the source sentences. After further removing the unified structure of Flat-Transformer, the model becomes a standard Transformer. This shows that the unified structure contributes a gain of 1.08 in terms of BLEU and 2.03 in terms of METEOR.
The reason is that the unified structure encourages the model to focus more on the source sentences, while the contexts can be regarded as semantic supplements.

# 4 Related Work

Here we summarize recent advances in document-level neural machine translation. Some work focuses on improving the architectures of document-level machine translation models. Tiedemann and Scherrer (2017) and Wang et al. (2017) explore possible solutions to exploit cross-sentence contexts for neural machine translation. Zhang et al. (2018) extends the Transformer model with a new context encoder to represent document-level context. Werlen et al. (2018) and Maruf et al. (2019) propose two different hierarchical attention models to model the contexts. Yang et al. (2019) introduces a capsule network to improve these hierarchical structures. There is also some work analyzing contextual errors (Voita et al., 2018, 2019b; Bawden et al., 2018) and providing test suites (Müller et al., 2018). More recently, Voita et al. (2019a) explores approaches to incorporating monolingual data to augment the document-level bilingual dataset. Different from these works, this paper mainly discusses the comparison between dual-encoder models and uni-encoder models and proposes a novel method to improve the uni-encoder structure.

# 5 Conclusions

In this work, we explore solutions to improve the uni-encoder structure for document-level machine translation. We propose the Flat-Transformer model with a unified encoder, which is simple and can model the bi-directional relationship between the contexts and the source sentences. Besides, our Flat-Transformer is compatible with pre-training models, yielding better performance than both the existing uni-encoder models and the dual-encoder models on two datasets.

# Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable suggestions and comments.
We thank Sameen Maruf for providing the same processed document data as used in their work.

# References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization@ACL 2005, Ann Arbor, Michigan, USA, June 29, 2005, pages 65-72.
Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1304-1313, New Orleans, Louisiana. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL 2019, pages 4171-4186.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. CoRR, abs/1905.03197.
Hany Hassan, Anthony Aue, Chang Chen, Vishal Chowdhary, Jonathan Clark, Christian Federmann, Xuedong Huang, Marcin Junczys-Dowmunt, William Lewis, Mu Li, Shujie Liu, Tie-Yan Liu, Renqian Luo, Arul Menezes, Tao Qin, Frank Seide, Xu Tan, Fei Tian, Lijun Wu, Shuangzhi Wu, Yingce Xia, Dongdong Zhang, Zhirui Zhang, and Ming Zhou. 2018. Achieving human parity on automatic Chinese to English news translation. CoRR, abs/1803.05567.
Sébastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho.
2017. Does neural machine translation benefit from larger context? arXiv preprint arXiv:1704.05135. +Marcin Junczys-Dowmunt. 2019. Microsoft translator at WMT 2019: Towards large-scale document-level neural machine translation. In Proceedings of the Fourth Conference on Machine Translation, WMT 2019, Florence, Italy, August 1-2, 2019 - Volume 2: Shared Task Papers, Day 1, pages 225-233. +Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. +Guillaume Lample and Alexis Conneau. 2019. Crosslingual language model pretraining. CoRR, abs/1901.07291. +Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? A case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4791-4796. +Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1275-1284, Melbourne, Australia. Association for Computational Linguistics. +Sameen Maruf, André F. T. Martins, and Gholamreza Haffari. 2019. Selective attention for context-aware neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3092-3102. +Mathias Müller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 61-72, Brussels, Belgium. Association for Computational Linguistics. + +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. 
BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL 2018, pages 2227-2237.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.
Rico Sennrich and Martin Volk. 2011. Iterative, MT-based sentence alignment of parallel texts. In Proceedings of the 18th Nordic Conference of Computational Linguistics, NODALIDA 2011, May 11-13, 2011, Riga, Latvia, pages 175-182.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: masked sequence to sequence pre-training for language generation. In ICML 2019, pages 5926-5936.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929-1958.
Jörg Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82-92, Copenhagen, Denmark. Association for Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.
+Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. Context-aware monolingual repair for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 876-885. Association for Computational Linguistics. +Elena Voita, Rico Sennrich, and Ivan Titov. 2019b. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proceedings + +of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1198-1212, Florence, Italy. Association for Computational Linguistics. +Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264-1274, Melbourne, Australia. Association for Computational Linguistics. +Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence context for neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2826-2831, Copenhagen, Denmark. Association for Computational Linguistics. +Lesly Miculicich Werlen, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2947-2954. +Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng, and Jie Zhou. 2019. Enhancing context modeling with a query-guided capsule network for document-level translation. CoRR, abs/1909.00564. +Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. 
Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 533-542. \ No newline at end of file diff --git a/asimpleandeffectiveunifiedencoderfordocumentlevelmachinetranslation/images.zip b/asimpleandeffectiveunifiedencoderfordocumentlevelmachinetranslation/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..db2be559b80a41bcb9a18c635cb53863c14ff5e7 --- /dev/null +++ b/asimpleandeffectiveunifiedencoderfordocumentlevelmachinetranslation/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f17f7247f0d77e639f6e8a90febcd2f086ed3095acd17356626da98cd835a34e +size 209103 diff --git a/asimpleandeffectiveunifiedencoderfordocumentlevelmachinetranslation/layout.json b/asimpleandeffectiveunifiedencoderfordocumentlevelmachinetranslation/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..939729baa7083a84721d9d3b00c81edacfee0d07 --- /dev/null +++ b/asimpleandeffectiveunifiedencoderfordocumentlevelmachinetranslation/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:28b6b4b325bff507d622b4387f603a6e93189780de8276eadd121fecb1640748 +size 215614 diff --git a/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_content_list.json b/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..9c3ba613a050454ec32e44bd73406a776fda3027 --- /dev/null +++ b/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7e9ad07522ceda7f9aa83796f9fa296ae033218d730c1f2718180fd4a36fc98 +size 79646 diff --git 
a/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_model.json b/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_model.json new file mode 100644 index 0000000000000000000000000000000000000000..3ebcdc2e6340630d85e94c14d34b4c55965b169f --- /dev/null +++ b/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:91f13222396222468923b5af7f2583d75c559fb587f7729cf3930c57d02176d8 +size 96710 diff --git a/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_origin.pdf b/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b7df8dc89f2e23a8a23b031f96ac8ff2cf5d91e1 --- /dev/null +++ b/aspanbasedlinearizationforconstituenttrees/0a7ea8f5-2a11-491f-ba76-07a102ef8d5f_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:350b9d6151509122a68dc30608ad719a9f7bdb66336b0708a1e84891f985f0e2 +size 668544 diff --git a/aspanbasedlinearizationforconstituenttrees/full.md b/aspanbasedlinearizationforconstituenttrees/full.md new file mode 100644 index 0000000000000000000000000000000000000000..4a3604cf0412d8810b019ad1c7ab5099c8c7ef72 --- /dev/null +++ b/aspanbasedlinearizationforconstituenttrees/full.md @@ -0,0 +1,376 @@ +# A Span-based Linearization for Constituent Trees + +Yang Wei, Yuanbin Wu, and Man Lan + +School of Computer Science and Technology + +East China Normal University + +godweiyang@gmail.com {ybwu,mlan}@cs.ecnu.edu.cn + +# Abstract + +We propose a novel linearization of a constituent tree, together with a new locally normalized model. For each split point in a sentence, our model computes the normalizer on all spans ending with that split point, and then predicts a tree span from them. Compared with global models, our model is fast and parallelizable. 
Different from previous local models, our linearization method is tied to the spans directly and considers more local features when performing span prediction, which is more interpretable and effective. Experiments on PTB (95.8 F1) and CTB (92.1 F1) show that our model significantly outperforms existing local models and efficiently achieves competitive results with global models.

# 1 Introduction

Constituent parsers map natural language sentences to hierarchically organized spans (Cross and Huang, 2016). According to the complexity of decoders, two types of parsers have been studied: globally normalized models, which normalize the probability of a constituent tree over the whole candidate tree space (e.g., the chart parser of Stern et al. (2017a)), and locally normalized models, which normalize tree probability over smaller subtrees or spans. It is believed that global models have better parsing performance (Gaddy et al., 2018). But with the fast development of neural-network-based feature representations (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017), local models are able to achieve competitive parsing accuracy while enjoying fast training and testing speed, and have thus become an active research topic in constituent parsing.

Locally normalized parsers usually rely on tree decompositions or linearizations. From the perspective of decomposition, the probability of trees can be factorized, for example, over individual spans. Teng and Zhang (2018) investigates such a model
(2018) linearizes a tree with a sequence of numbers, each of which indicates words' syntactic distance in the tree (i.e., height of the lowest common ancestor of two adjacent words). Similar ideas are also applied in Vinyals et al. (2015), Choe and Charniak (2016) and transition-based systems (Cross and Huang, 2016; Liu and Zhang, 2017a). With tree linearizations, the training time can be further accelerated to $\mathcal{O}(n)$ , but the parsers often sacrifice a clear connection with original spans in trees, which makes both features and supervision signals from spans hard to use. + +In this work, we propose a novel linearization of constituent trees tied on their span representations. Given a sentence $\mathcal{W}$ and its parsing tree $\mathcal{T}$ , for each split point after $w_{i}$ in the sentence, we assign it a parsing target $d_{i}$ , where $(d_i,i)$ is the longest span ending with $i$ in $\mathcal{T}$ . We can show that, for a binary parsing tree, the set $\{(d_i,i)\}$ includes all left child spans in $\mathcal{T}$ . Thus the linearization is actually sufficient to recover a parsing tree of the sentence. + +Compared with prior work, the linearization is directly based on tree spans, which might make estimating model parameters easier. We also build a different local normalization compared with the simple per-span-normalization in Teng and Zhang (2018). Specifically, the probability $P(d_{i}|i)$ is normalized on all candidate split points on the left of $i$ . The more powerful local model can help to further improve parsing performance while retaining the fast learning and inference speed (with a greedy heuristic for handling illegal sequences, we can achieve $\mathcal{O}(n\log n)$ average inference complexity). + +![](images/3e50dca85357d6f46b325bcfe6a767f12df8463ed65571fe13d2b6fb26f4df8e.jpg) +(a) Original parsing tree. + +![](images/b5dd60ee4adc2c98f9a2134de7a17efc8e2492a471e4912a4eb8d3d9af7b04e0.jpg) +(b) Right binary tree. 
+ +![](images/c451f5379a46b73d4801384ef18f4ac6ccf732fc3820e0beb7edc382cbdff29f.jpg) +(c) Span table and linearization. +Figure 1: The process of generating the linearization of the sentence "She loves writing code". Given an original parsing tree (a), we firstly convert it to a right binary tree by recursively combining the rightmost two children (b). Then, we represent the tree as a span table, and divide it into five parts according to the right boundaries of the spans (c). Green and red circles represent left and right child spans respectively. Gray circles represent spans which do not appear in the tree. In each part, there is only one longest span (green circles), thus the corresponding value of that part is just the left boundary of the green circle. + +We perform experiments on PTB and CTB. The proposed parser significantly outperforms existing locally normalized models, and achieves competitive results with state-of-the-art global models (95.8 F1 on PTB and 92.1 F1 on CTB). We also evaluate how the new linearization helps parse spans with different lengths and types. + +To summarize, our main contributions include: + +- Proposing a new linearization which has clear interpretation (Section 2). +- Building a new locally normalized model with constraints on span scores (Section 3). +- Compared with previous local models, the proposed parser achieves better performance (competitive with global models) and has faster parsing speed (Section 4). + +# 2 Tree Linearization + +We first prepare some notations. Let $\mathcal{W} = (w_1, w_2, \ldots, w_n)$ be a sentence, $\mathcal{T}$ be its binary constituent tree and $A_{ij} \to B_{ik} C_{kj}$ be a derivation in $\mathcal{T}$ . Denote $(i,j) (0 \leq i < j \leq n)$ to be a span from $w_{i+1}$ to $w_j$ (for simplicity, we ignore the label of a span). + +Definition 1. 
Given a sentence $\mathcal{W}$ and its tree $\mathcal{T}$, we call $\mathcal{D} = (d_1, d_2, \ldots, d_n)$ a linearization of $\mathcal{T}$, where $d_i \in \{0, 1, \ldots, i-1\}$ and $(d_i, i)$ is the longest span ending with $i$ in $\mathcal{T}$.

Clearly, there is only one such linearization for a tree. We have an equivalent definition of $\mathcal{D}$, which shows that the span $(d_i, i)$ is a left child span.

Proposition 1. Given a tree $\mathcal{T}$, the set of spans $\{(d_i, i) \mid i = 1, 2, \ldots, n\}$ is equal to the set of left child spans

$$
\mathcal{S} = \{(i, j) \mid \exists A_{ik} \rightarrow B_{ij} C_{jk}\} \cup \{(0, n)\}.
$$

Proof. First, for each $j$, there is only one left child span $(i,j)$ ending with $j$: if $(i',j)$ were another left child span with $i' \neq i$ (e.g., $i' < i$), then $(i,j)$ would also have to be a right child span. Therefore $|\mathcal{S}| = n$. Similarly, if $i \neq d_j$, $(i,j)$ must be a right child span of $(d_j,j)$.

Thus we can generate the linearization using Algorithm 1. For a span $(i,j)$ and its gold split $k$, we get $d_{k} = i$. Then we recursively calculate the linearizations of spans $(i,k)$ and $(k,j)$. Note that the returned linearization $\mathcal{D}$ does not contain $d_{n}$, so we append a zero ($d_{n} = 0$ for the root node) to the end as the final linearization. Figure 1 shows the generation process for the sentence "She loves writing code." From the span table, it is clear that for each right boundary there is only one left child span (green circle) ending with it.

In the following discussions, we will use $\mathcal{D}$ and $\mathcal{S}$ interchangeably. Next, we show two properties of a legal $\mathcal{D}$.

Proposition 2. A linearization $\mathcal{D}$ can recover a tree $\mathcal{T}$ iff:

1. $0 \leq d_{i} < i, \ \forall 1 \leq i \leq n$;

The root node is also regarded as a left child span.

Algorithm 1 Tree linearization.
    1: function LINEARIZATION(i, j, T)
    2:   if i + 1 = j then
    3:     D ← []
    4:   else
    5:     k ← the split point of span (i, j) in T
    6:     Dl ← LINEARIZATION(i, k, T)
    7:     Dr ← LINEARIZATION(k, j, T)
    8:     D ← Dl ⊕ [i] ⊕ Dr
    9:   end if
    10:  return D
    11: end function

2. $d_{j}$ is not in the range $(d_i, i), \ \forall j > i$.

Proof. The necessity is obvious. We show the sufficiency by induction on the sentence length. When $n = 1$, the conclusion holds. Assume that for all linearizations of length less than $n$, properties 1 and 2 lead to a well-formed tree, and consider a linearization of length $n$.

Define $k = \max \{k' \mid d_{k'} = 0, k' < n\}$. Since $d_1 = 0$ (by property 1), $k$ exists. We split the sentence into $(0, k)$ and $(k, n)$, and claim that after removing $(0, n)$, the spans in $\mathcal{D}$ lie either in $(0, k)$ or in $(k, n)$; by induction we then obtain the conclusion. To validate the claim, for $k' < k$, by property 1 we have $d_{k'} < k' < k$, so $(d_{k'}, k')$ is in $(0, k)$. For $k' > k$, by property 2, either $d_{k'} \geq k$ or $d_{k'} = 0$. Since $k$ is the largest index with $d_k = 0$, we have $d_{k'} \neq 0$, which means $(d_{k'}, k')$ is in $(k, n)$. Therefore, a tree exists for $\mathcal{D}$. The tree is also unique, because if two trees $\mathcal{T}$ and $\mathcal{T}'$ had the same linearization, then by Proposition 1 we would have $\mathcal{T} = \mathcal{T}'$.

Proposition 2 also suggests a top-down algorithm (Algorithm 2) for performing tree inference given a legal linearization. For span $(i,j)$ (with label $\ell(i,j)$), we find the rightmost split $k$ satisfying $d_k = i$, and then recursively decode the two subtrees rooted at spans $(i,k)$ and $(k,j)$, respectively. When $\mathcal{D}$ does not satisfy property 2 (our model can ensure property 1), one solution is to seek a minimum change of $\mathcal{D}$ that makes it legal.
However, finding this minimum change reduces to a minimum vertex cover problem (regard each span $(d_i,i)$ as a vertex, and connect an edge between two spans that violate property 2). We can instead slightly modify Algorithm 2 to perform approximate inference (Section 3.4).

Algorithm 2 Tree reconstruction.

    1: function TREE(i, j, D)
    2:   if i + 1 = j then
    3:     node ← Leaf(wj, ℓ(i, j))
    4:   else
    5:     k ← max{k' | dk' = i, i < k' < j}
    6:     childl ← TREE(i, k, D)
    7:     childr ← TREE(k, j, D)
    8:     node ← Node(childl, childr, ℓ(i, j))
    9:   end if
    10:  return node
    11: end function

Finally, we need to deal with the linearization of non-binary trees. For spans with more than two children, the middle children are neither left nor right child spans, so Proposition 1 may not hold. We therefore recursively combine two adjacent spans from right to left using an empty label $\varnothing$, so that the tree can be converted to a binary tree (Stern et al., 2017a). For a unary branch, we treat it as a single span with a new label which concatenates all the labels in the branch.

# 3 The Parser

In this section, we introduce our encoder, decoder and inference algorithms in detail. We then compare our normalization method with two alternatives: globally normalized and existing locally normalized methods.

# 3.1 Encoder

We represent each word $w_{i}$ using three pieces of information: a randomly initialized word embedding $e_{i}$, a character-based embedding $c_{i}$ obtained by a character-level LSTM, and a randomly initialized part-of-speech tag embedding $p_{i}$. We concatenate these three embeddings to generate a representation of word $w_{i}$,

$$
\boldsymbol {x} _ {i} = \left[ \boldsymbol {e} _ {i}; \boldsymbol {c} _ {i}; \boldsymbol {p} _ {i} \right].
$$

To obtain the representations of the split points, the word representation matrix $\mathbf{X} = [x_1, x_2, \ldots, x_n]$ is first fed into a bidirectional LSTM or a Transformer (Vaswani et al., 2017). Then we calculate the representation of the split point between $w_i$ and $w_{i+1}$ from the encoder outputs,

$$
\boldsymbol {h} _ {i} = \left[ \overrightarrow{\boldsymbol{h}} _ {i}; \overleftarrow{\boldsymbol{h}} _ {i + 1} \right], \tag {1}
$$

where $\overrightarrow{\boldsymbol{h}}_i$ and $\overleftarrow{\boldsymbol{h}}_{i+1}$ are the forward and backward encoder outputs. Note that for the Transformer encoder, the split point representation is calculated in the same way as in Kitaev and Klein (2018a).

# 3.2 Decoder

Since a split point plays two different roles depending on whether it is the left or the right boundary of a span, we use two different vectors to represent the two roles, inspired by Dozat and Manning (2017). Concretely, we use two multi-layer perceptrons to generate the two representations,

$$
\boldsymbol {l} _ {i} = \operatorname {M L P} _ {l} (\boldsymbol {h} _ {i}), \quad \boldsymbol {r} _ {i} = \operatorname {M L P} _ {r} (\boldsymbol {h} _ {i}). \tag {2}
$$

Then we define the score of span $(i,j)$ using a biaffine attention function (Dozat and Manning, 2017; Li et al., 2019),

$$
\alpha_ {i j} = \boldsymbol {l} _ {i} ^ {\top} \mathbf {W} \boldsymbol {r} _ {j} + \boldsymbol {b} _ {1} ^ {\top} \boldsymbol {l} _ {i} + \boldsymbol {b} _ {2} ^ {\top} \boldsymbol {r} _ {j},
$$

where $\mathbf{W}$, $\boldsymbol{b}_{1}$ and $\boldsymbol{b}_{2}$ are model parameters. $\alpha_{ij}$ measures how likely $(i,j)$ is to be a left child span in the tree.

Different from Stern et al. (2017a), which performs global normalization over the probability of the whole tree, and Teng and Zhang (2018), which performs local normalization on each candidate span, we normalize over all spans with the same right boundary $j$. Thus the probability of span $(i,j)$ being a left child span is defined as

$$
P (i | j) = \operatorname {S o f t m a x} _ {i} \left(\alpha_ {i j}\right), \forall i < j.
\tag {3}
$$

Finally, we predict the linearization using the probabilities $P(i|j)$,

$$
d _ {j} = \underset {i < j} {\arg \max }\, P (i | j). \tag {4}
$$

For label prediction, we first infer the tree structure from the linearization (Section 3.4). Then we use a multi-layer perceptron to calculate the label probability of span $(i,j)$,

$$
P (\ell | i, j) = \operatorname {S o f t m a x} \left(\operatorname {M L P} _ {\text {l a b e l}} \left([ \boldsymbol {l} _ {i}; \boldsymbol {r} _ {j} ]\right)\right) _ {\ell}.
$$

The final predicted label of span $(i,j)$ is $\ell(i,j) = \arg \max_{\ell} P(\ell | i, j)$.

# 3.3 Training Objective

Given a gold parsing tree $\mathcal{T}$ and its linearization $(d_{1},d_{2},\ldots ,d_{n})$, we calculate the loss as the negative log-likelihood:

$$
\mathcal {L} = - \frac {1}{n} \left(\sum_ {i = 1} ^ {n} \log P (d _ {i} | i) + \sum_ {(i, j, \ell) \in \mathcal {T}} \log P (\ell | i, j)\right).
$$

The loss function consists of two parts. One is the structure loss, which is defined only on the left child spans. The other is the label loss, which is defined on all the spans in $\mathcal{T}$.

# 3.4 Tree Inference

To reconstruct the tree structure from the predicted linearization $(d_{1}, d_{2}, \ldots, d_{n})$, we must deal with illegal sequences. One solution is to convert an illegal linearization to a legal one, and then use Algorithm 2 to recover the tree. However, the optimal conversion algorithm is NP-hard, as discussed in Section 2. We propose two approximate reconstruction methods, both of which replace line 5 of Algorithm 2. One is to find the largest $k$ satisfying $d_{k} \leq i$,

$$
k \gets \max \left\{k ^ {\prime} \mid d _ {k ^ {\prime}} \leq i, i < k ^ {\prime} < j \right\}.
$$

The other is to find the index $k$ of the smallest $d_{k}$ (if there are multiple choices, we choose the largest index),

$$
k\leftarrow \operatorname *{arg min}_{k^{\prime}}d_{k^{\prime}}.
$$

Both methods are also applicable to legal linearizations, and they perform similarly in our empirical evaluations. The worst-case inference time complexity is $\mathcal{O}(n^2)$ for unbalanced trees, while on average it is $\mathcal{O}(n\log n)$ (the same as Stern et al. (2017a)).

Finally, instead of reconstructing trees from the linearization sequences $(d_{1}, d_{2}, \ldots, d_{n})$, we could use an exact CKY-style decoding algorithm over the probabilities $P(i|j)$ (Equation 3). Specifically, it maximizes the product of left child span probabilities,

$$
\mathcal {G} (i, j) = \max \left\{P (i | k) \times \mathcal {G} (k, j) \mid i < k < j \right\},
$$

where $\mathcal{G}(i,j)$ represents the highest probability of a subtree with root node $(i,j)$. We can calculate $\mathcal{G}(0,n)$ with a dynamic programming algorithm and back-trace the tree accordingly. The complexity is $\mathcal{O}(n^3)$.

![](images/1fa6f83c7f3ffe19b24185d5bd747dbeb24ad4bdd9477ff1df47c31fc2db8d8a.jpg)

(a) Global normalization.

![](images/de6b079943557804aa866599436a6af9d40453c914008882ffa184bd89b14386.jpg)

(b) Local normalization.

![](images/1772d0a9e9dbbd816f2cb2b967138b6674ea3b4260bd0fdf8ee6dcf8eafa36ee.jpg)

(c) Our normalization.

Figure 2: Factor graphs of three types of normalization. Green circles represent all potential spans in the span table. Red blocks represent scores of the spans. Blue blocks represent normalization operations, and dotted lines connect all the spans involved in a normalization. Global normalization (a) needs to calculate the sum over all span scores in parsing tree $\mathcal{T}$. Existing local normalization (e.g. Teng and Zhang (2018)) (b) only calculates the probability of each candidate span.
Our method (c) does local normalization on all the spans with the same right boundary.

# 3.5 More Discussions on Normalization

We can compare our locally normalized model (Equation 3) with other probability factorizations of constituent trees (Figure 2).

Global normalization (Figure 2(a)) marginalizes over all candidate trees, which requires dynamic programming decoding. As a local model, our parser is a span-level factorization of the tree probability, and each factor only marginalizes over a linear number of items (i.e., the probability of span $(i,j)$ is normalized against the scores of all $(i',j)$, $i' < j$). It is easier to parallelize and enjoys a much faster parsing speed. We will show that its performance is also competitive with global models.

Teng and Zhang (2018) study two locally normalized models over spans, namely the span model and the rule model. The span model simply considers individual spans independently (Figure 2(b)), which may be the finest factorization. Our model lies between it and the global model.

The rule model uses a normalization similar to ours. If it is combined with the top-down decoding of Stern et al. (2017a), the two parsers look similar, so we discuss their differences. The rule model takes all ground-truth spans from the gold trees, and for each span $(i,j)$ it computes a probability $P((i,j) \gets (i,k)(k,j))$ for its ground-truth split $k$. Our parser, on the other hand, factorizes on each word. Therefore, for the same span $(i,j)$, their normalization is constrained within $(i,j)$, while ours is over all $i' < j$. The main advantage of our parser is its simpler span representations (which do not depend on parent spans): this makes the parser easy to batch over sentences with different lengths and tree structures, since each $d_i$ can be calculated offline before training.
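Before turning to experiments, the two procedures of Section 2 can be made concrete with a short sketch. The dict-of-splits tree encoding and the function names below are our own illustration (not the paper's code); the example is the right-binarized tree of "She loves writing code ." from Figure 1.

```python
# Sketch of Algorithm 1 (linearization) and Algorithm 2 (reconstruction).
# A binary tree is encoded as a dict mapping each internal span (i, j) to
# its gold split point k; d is indexed by right boundary (d[0] is unused).

def linearize(i, j, split, d=None):
    """Algorithm 1: set d[k] = i for the gold split k of every span (i, j)."""
    if d is None:
        d = [0] * (j + 1)              # d[n] = 0 covers the root span (0, n)
    if i + 1 < j:
        k = split[(i, j)]
        d[k] = i                       # (i, k) is the longest span ending at k
        linearize(i, k, split, d)
        linearize(k, j, split, d)
    return d

def rebuild(i, j, d):
    """Algorithm 2: recover the splits from a legal linearization."""
    if i + 1 == j:
        return {}
    k = max(k2 for k2 in range(i + 1, j) if d[k2] == i)   # rightmost k with d[k] = i
    return {(i, j): k, **rebuild(i, k, d), **rebuild(k, j, d)}

# "She loves writing code ." (n = 5), right-binarized as in Figure 1:
splits = {(0, 5): 1, (1, 5): 4, (1, 4): 2, (2, 4): 3}
d = linearize(0, 5, splits)
assert d[1:] == [0, 1, 2, 1, 0]        # the linearization D of Figure 1(c)
assert rebuild(0, 5, d) == splits      # D uniquely recovers the tree
```

Property 1 ($d_i < i$) holds by construction here; the approximate variants of Section 3.4 only matter when a predicted $\mathcal{D}$ violates property 2.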
# 4 Experiments

# 4.1 Data and Settings

Datasets and Preprocessing All models are trained on two standard benchmark treebanks, the English Penn Treebank (PTB) (Marcus et al., 1993) and the Chinese Penn Treebank (CTB) 5.1. POS tags are predicted using the Stanford Tagger (Toutanova et al., 2003). To clean the treebanks, we strip the leaf nodes with POS tag -NONE- and delete the root nodes with constituent type ROOT. For evaluating the results, we use the standard evaluation tool.

For words that appear in the test corpus but not in the training corpus, we substitute a unique label $<\mathrm{UNK}>$. We also replace words in the training corpus with $<\mathrm{UNK}>$ with probability $p_{\mathrm{unk}}(w) = \frac{z}{z + c(w)}$, where $c(w)$ is the number of times word $w$ appears in the training corpus, and we set $z = 0.8375$ following Cross and Huang (2016).

Hyperparameters We use 100D GloVe (Pennington et al., 2014) embeddings for PTB and 80D structured-skipgram (Ling et al., 2015) embedding
| Type | NP | VP | S | PP | SBAR | ADVP | ADJP | QP | WHNP |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Count | 18630 | 8743 | 5663 | 5492 | 1797 | 1213 | 893 | 490 | 429 |
| PSN Model | 93.15 | 91.81 | 91.21 | 89.73 | 87.81 | 86.89 | 73.01 | 89.80 | 97.20 |
| Our Model | 93.42 | 92.62 | 91.95 | 89.91 | 88.93 | 87.39 | 75.14 | 91.63 | 97.44 |
| Difference | +0.27 | +0.81 | +0.74 | +0.18 | +1.12 | +0.50 | +2.13 | +1.83 | +0.24 |
Table 1: Comparison on different phrase types. Only the nine most frequent types are listed.

![](images/34cba5ae2f12b619e240b80b201fd3885bb073090836cf6007e7c8d0b01a26e3.jpg)

Figure 3: F1 scores against span length. Here the length $l$ represents lengths between $[l, l + 4]$.

for CTB. For character encoding, we randomly initialize the character embeddings with dimension 64.

We use the Adam optimizer with initial learning rate 1.0 and epsilon $10^{-9}$. For the LSTM encoder, we use a hidden size of 1024, with 0.33 dropout in all the feed-forward and recurrent connections. For the Transformer encoder, we use the same hyperparameters as Kitaev and Klein (2018a). For the split point representations, we apply two feed-forward networks with 1024-dimensional hidden layers. All dropout in the decoder layer is 0.33. We also use BERT (Devlin et al., 2019) (uncased, 24 layers, 16 attention heads per layer and 1024-dimensional hidden vectors), taking the output of the last layer as pre-trained word embeddings.

Training Details We use PyTorch as our neural network toolkit and run the code on an NVIDIA GeForce GTX Titan Xp GPU and an Intel Xeon E5-2603 v4 CPU. All models are trained for up to 150 epochs with batch size 150 (Zhou and Zhao, 2019).

# 4.2 Main Results

Table 2 shows the final results on the PTB test set. Our models (92.6 F1 with LSTM, 93.7 F1 with Transformer) significantly outperform the single locally normalized models. Compared with globally normalized models, our models also outperform the parsers with LSTM encoders and achieve competitive results with the Transformer encoder parsers. With the help of BERT (Devlin et al., 2019), our models with both encoders achieve the same performance (95.8 F1) as the best parser (Zhou and Zhao, 2019). Table 3 shows the final results on the CTB test set. Our models (92.1 F1) also significantly outperform local models and achieve competitive results among global models.
Compared with Teng and Zhang (2018), which does local normalization on a single span, our model gains 0.2 F1 on PTB, which shows that normalizing over more spans is indeed better. Our model also significantly outperforms Shen et al. (2018), which predicts the syntactic distances of a tree. This indicates the superiority of our linearization method, which is tied directly to the spans.

# 4.3 Evaluation

To better understand how our model improves over the locally normalized model of Teng and Zhang (2018), which normalizes each span individually, we conduct several experiments comparing performance across different span lengths and constituent types.

To make the comparison fair, we reimplement their model using the same LSTM encoder as ours. In addition, we omit the LSTM for label prediction and the complex span representations of their model, using simpler settings. Our implementation achieves the same result as they report (92.4 F1). For convenience, we call their model the per-span-normalization (PSN) model in the following.

Influence of Span Length First, we analyse the influence of span length; the results are shown in Figure 3. We find that for spans of lengths in [11, 45], our model significantly outperforms the PSN model. For short
| Model | LR | LP | F1 |
| --- | --- | --- | --- |
| **Global Model** | | | |
| Stern et al. (2017a) | 90.6 | 93.0 | 91.8 |
| Gaddy et al. (2018) | 91.8 | 92.4 | 92.1 |
| Kitaev and Klein (2018a)♣ | 93.2 | 93.9 | 93.6 |
| Zhou and Zhao (2019)♣ | 93.6 | 93.9 | 93.8 |
| **Local Model** | | | |
| Vilares et al. (2019) | - | - | 90.6 |
| Liu et al. (2018) | - | - | 91.2 |
| Ma et al. (2017) | - | - | 91.5 |
| Shen et al. (2018) | 91.7 | 92.0 | 91.8 |
| Liu and Zhang (2017a) | - | - | 91.8 |
| Hong and Huang (2018) | 91.5 | 92.5 | 92.0 |
| Teng and Zhang (2018) | 92.2 | 92.5 | 92.4 |
| Dyer et al. (2016)♥ | - | - | 92.4 |
| Stern et al. (2017b)♥ | 92.6 | 92.6 | 92.6 |
| Our Model | 92.3 | 92.9 | 92.6 |
| Our Model♣ | 93.3 | 94.1 | 93.7 |
| **Pre-training/Ensemble/Re-ranking** | | | |
| Liu et al. (2018) | - | - | 92.3 |
| Choe and Charniak (2016) | - | - | 93.8 |
| Liu and Zhang (2017a) | - | - | 94.2 |
| Fried et al. (2017) | - | - | 94.7 |
| Kitaev and Klein (2018a)♣ | 94.9 | 95.4 | 95.1 |
| Kitaev and Klein (2018b)♣ | 95.5 | 95.7 | 95.6 |
| Zhou and Zhao (2019)♣ | 95.7 | 96.0 | 95.8 |
| Our Model (+BERT) | 95.6 | 96.0 | 95.8 |
| Our Model (+BERT)♣ | 95.5 | 96.1 | 95.8 |
spans, PSN model only needs to consider a few spans; the problem is more local, and per-span normalization is sufficient to handle it. For long spans, our model needs to normalize over more spans, and the state space grows linearly, so accuracy drops quickly and there is no advantage over PSN model, which uses the CKY algorithm for inference. For spans of other lengths, our locally normalized method can take all spans with the same right boundary into consideration and add sum-to-one constraints on their scores. As a result, our model outperforms PSN model even without the help of exact inference.

Influence of Constituent Type Then we compare the accuracy of different constituent types. Table 1 shows the results of the nine most frequent types. Our model performs better

Table 2: Final results on the PTB test set. $\clubsuit$ means the model uses a Transformer encoder. $\heartsuit$ means generative models.
| Model | LR | LP | F1 |
| --- | --- | --- | --- |
| **Global Model** | | | |
| Kitaev and Klein (2018a) | 86.8 | 88.1 | 87.4 |
| Zhou and Zhao (2019) | 89.4 | 90.1 | 89.7 |
| **Local Model** | | | |
| Dyer et al. (2016) | - | - | 84.6 |
| Liu et al. (2018) | - | - | 85.4 |
| Liu and Zhang (2017b) | 85.2 | 85.9 | 85.5 |
| Vilares et al. (2019) | - | - | 85.6 |
| Liu and Zhang (2017a) | - | - | 86.1 |
| Shen et al. (2018) | 86.4 | 86.6 | 86.5 |
| Fried and Klein (2018) | - | - | 87.0 |
| Teng and Zhang (2018) | 87.1 | 87.5 | 87.3 |
| Our Model | 87.9 | 89.3 | 88.6 |
| Our Model♣ | 87.4 | 89.9 | 88.7 |
| **Pre-training/Ensemble/Re-ranking** | | | |
| Kitaev and Klein (2018b) | 91.6 | 92.0 | 91.8 |
| Our Model (+BERT) | 91.7 | 92.4 | 92.0 |
| Our Model (+BERT)♣ | 91.9 | 92.3 | 92.1 |
Table 3: Final results on the CTB test set. $\clubsuit$ means the model uses a Transformer encoder. Note that Zhou and Zhao (2019) use gold POS tags in their code, so we rerun their code with predicted POS tags for a fair comparison.
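As a purely illustrative picture of the decoder that these ablations probe, the biaffine scorer of Equation (2) and the per-right-boundary normalization of Equations (3)-(4) can be sketched in a few lines of NumPy. All sizes, the random weights, and the one-layer ReLU "MLPs" below are stand-ins of ours, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, m = 8, 16, 12                        # split points, encoder dim, MLP dim
H = rng.normal(size=(n, 2 * h))            # split-point vectors h_i (Eq. 1)

relu = lambda x: np.maximum(x, 0.0)
L = relu(H @ rng.normal(size=(2 * h, m)))  # l_i = MLP_l(h_i), left-boundary view
R = relu(H @ rng.normal(size=(2 * h, m)))  # r_i = MLP_r(h_i), right-boundary view

W = rng.normal(size=(m, m))
b1, b2 = rng.normal(size=m), rng.normal(size=m)
# biaffine score alpha[i, j] = l_i^T W r_j + b1^T l_i + b2^T r_j
alpha = L @ W @ R.T + (L @ b1)[:, None] + (R @ b2)[None, :]

# Eq. 3: softmax over all left boundaries i < j sharing the right boundary j
P = np.zeros((n, n))
for j in range(1, n):
    e = np.exp(alpha[:j, j] - alpha[:j, j].max())
    P[:j, j] = e / e.sum()

d = [int(np.argmax(P[:j, j])) for j in range(1, n)]  # Eq. 4: d_j = argmax_i P(i|j)
```

Since column $j$ is normalized only over $i < j$, property 1 of Proposition 2 ($d_j < j$) holds by construction; only property 2 can be violated, which is what the approximate inference of Section 3.4 handles.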
| Model | LR | LP | F1 |
| --- | --- | --- | --- |
| Full model | 92.31 | 92.87 | 92.59 |
| - $\mathrm{MLP}_l$ and $\mathrm{MLP}_r$ | 92.15 | 92.72 | 92.43 |
| - normalization | 91.25 | 92.93 | 92.08 |
| + label linearization | 90.79 | 91.56 | 91.17 |
Table 4: Ablation test on the PTB test set. Here we use the same settings as in Section 4.3.

than PSN model, especially on the types SBAR, ADJP and QP. When optimizing the representation of a split point, our model can consider all of the words before it, which helps predict certain types. For example, when we predict an adjective phrase (ADJP), its representation has already fused information from the preceding words (e.g., a linking verb like "is"), which can narrow the scope of the prediction.

# 4.4 Ablation Study

We perform several ablation experiments by modifying the structure of the decoder layer. The results are shown in Table 4.

First, we remove the two different split point representations described in Equation (2) and directly use the output of the LSTM as the final representation. The final performance slightly decreases, which indi
| Inference Algorithm | LR | LP | F1 |
| --- | --- | --- | --- |
| $\mathcal{G}(i, j)$ | 92.31 | 92.87 | 92.59 |
| $k = \max\{k' \mid d_{k'} \leq i\}$ | 92.39 | 92.75 | 92.57 |
| $k = \arg\min_{k'} d_{k'}$ | 91.93 | 93.21 | 92.57 |

Table 5: Results of the different inference algorithms described in Section 3.4.
| Model | sents/sec |
| --- | --- |
| **Global Model** | |
| Stern et al. (2017a) | 20 |
| Kitaev and Klein (2018a)♣ (w. Cython) | 150 |
| Zhou and Zhao (2019)♣ (w. Cython) | 159 |
| **Local Model** | |
| Teng and Zhang (2018) | 22 |
| Stern et al. (2017a) | 76 |
| Liu and Zhang (2017b) | 79 |
| Shen et al. (2018) | 111 |
| Shen et al. (2018) (w/o tree inference) | 351 |
| Vilares et al. (2019) | 942 |
| Our Model | 220 |
| Our Model♣ | 155 |

Table 6: Parsing speeds on the PTB test set. $\clubsuit$ means the model uses a Transformer encoder. "w. Cython" stands for using Cython to optimize the Python code. "w/o tree inference" stands for evaluating without tree inference. The model of Kitaev and Klein (2018a) is run by ourselves; the other speeds are taken from the original papers.

cates that distinguishing the representations of the left and right boundaries of a span is indeed helpful.

Then we remove the local normalization over partial spans and only calculate the probability of each span being a left child. The inference algorithm is the same as in our full model. The final result decreases by 0.5 F1, despite an improvement in precision. This might be because our normalization method adds constraints on all the spans with the same right boundary, which makes it effective when only one span is correct.

Finally, we try to predict the labels sequentially, which means assigning each split $i$ a tuple $(d_i, \ell_i^{\text{left}}, \ell_i^{\text{right}})$, where $\ell_i^{\text{left}}$ and $\ell_i^{\text{right}}$ represent the labels of the longest spans ending and starting with $i$ in the tree, respectively. This turns our model into a sequence labeling model similar to Gómez-Rodríguez and Vilares (2018). However, the performance is very poor, largely due to the loss of structural information in the label prediction. Therefore, how to balance efficiency and label prediction accuracy remains a research problem for the future.

# 4.5 Inference Algorithms

We compare the three inference algorithms described in Section 3.4. The results are shown in Table 5. We find that the different inference algorithms have no obvious effect on performance, mainly due to the strong learning ability of our model. Thus we use the third method, which is the most convenient to implement.

# 4.6 Parsing Speed

The parsing speeds of our parser and other parsers are shown in Table 6.
+ +Table 6: Parsing speeds on the PTB test set. $\spadesuit$ means the models use Transformer as their encoders. "w. Cython" stands for using Cython to optimize the python code. "w/o tree inference" stands for evaluating without tree inference. The model in Kitaev and Klein (2018a) is ran by ourselves, and other speeds are extracted from their original papers. + +cates that distinguishing the representations of left and right boundaries of a span is really helpful. + +Then we delete the local normalization on partial spans and only calculate the probability of each span to be a left child. The inference algorithm is the same as our full model. Final result decreases by 0.5 F1, despite improvement on precision. This might be because our normalization method can add constraints on all the spans with the same right boundary, which makes it effective when only one span is correct. + +Finally, we try to predict the labels sequentially, which means assigning each split $i$ a tuple $(d_i, \ell_i^{\text{left}}, \ell_i^{\text{right}})$ , where $\ell_i^{\text{left}}$ and $\ell_i^{\text{right}}$ represent the labels of the longest spans ending and starting with $i$ in the tree, respectively. This may make our model become a sequence labeling model similar to Gómez-Rodríguez and Vilares (2018). However, the performance is very poor, and this is largely due to the loss of structural information in the label prediction. Therefore, how to balance efficiency and label prediction accuracy might be a research + +problem in the future. + +# 4.5 Inference Algorithms + +We compare three inference algorithms described in Section 3.4. The results are shown in Table 5. We find that different inference algorithms have no obvious effect on the performance, mainly due to the powerful learning ability of our model. Thus we use the third method which is the most convenient to implement. + +# 4.6 Parsing Speed + +The parsing speeds of our parser and other parsers are shown in Table 6. 
Although our inference complexity is $\mathcal{O}(n\log n)$, our parser is faster than other local models, except Shen et al. (2018), which evaluates without tree inference, and Vilares et al. (2019), which uses a pure sequence tagging framework. This is mainly due to the simplicity of our model and the parallelism of the matrix operations for structure prediction. Compared with globally normalized parsers like Zhou and Zhao (2019) and Kitaev and Klein (2018a), our model is also faster, even though they optimize their Python code (e.g., with Cython). Other global models like Stern et al. (2017a), whose inference takes $\mathcal{O}(n^3)$ time, are much slower than ours, which shows the speed advantage of our linearization.

# 5 Related Work

Globally normalized parsers often achieve high performance on constituent parsing due to their search over the global state space (Stern et al., 2017a; Kitaev and Klein, 2018a; Zhou and Zhao, 2019). However, they suffer from high time complexity and are difficult to parallelize, so many efforts have been made to improve their efficiency (Vieira and Eisner, 2017).

Recently, the rapid development of encoders (Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017) and pre-trained language models (Devlin et al., 2018) has enabled local models to achieve performance similar to global models. Teng and Zhang (2018) propose two local models: one normalizes over each candidate span and the other over each grammar rule. Their models even outperform the global model of Stern et al. (2017a) thanks to better span representations. However, they still need an $\mathcal{O}(n^3)$ inference algorithm to reconstruct the final parsing tree.

Meanwhile, much work has focused on faster sequential models. Transition-based models predict a sequence of actions and achieve $\mathcal{O}(n)$ complexity (Watanabe and Sumita, 2015; Cross and Huang, 2016; Liu and Zhang, 2017a).
However, they suffer from error propagation and cannot be parallelized. Sequence labeling models cast tree prediction as a sequence prediction problem (Gómez-Rodríguez and Vilares, 2018; Shen et al., 2018). These models are highly efficient, but their linearizations have no direct relation to the spans, so their performance is much worse than that of span-based models.

We propose a novel linearization method closely tied to the spans and decode the tree with $\mathcal{O}(n\log n)$ complexity. Compared with Teng and Zhang (2018), we normalize over more spans and thus achieve better performance.

In future work, we will apply graph neural networks (Velickovic et al., 2018; Ji et al., 2019; Sun et al., 2019) to enhance the span representations. Due to the properties of our linearization, we can jointly learn constituent parsing and dependency parsing in one graph-based model. In addition, there is also a right linearization defined on the set of right child spans; we plan to study how to combine the two linear representations to further improve the model.

# 6 Conclusion

In this work, we propose a novel linearization of constituent trees that is tightly tied to the spans. In addition, we build a new normalization method which adds constraints on all the spans with the same right boundary. Compared with previous local normalization methods, our method is more accurate because it considers more span information, and it retains fast parsing speed thanks to the parallelizable linearization model. The experiments show that our model significantly outperforms existing local models and achieves competitive results with global models.

# Acknowledgments

The authors would like to thank the reviewers for their helpful comments and suggestions. The authors would also like to thank Tao Ji and Changzhi Sun for their advice on models and experiments. The corresponding author is Yuanbin Wu.
This research is (partially) supported by STCSM (18ZR1411500), the Foundation of State Key Laboratory of Cognitive Intelligence, iFLYTEK (COGOS-20190003), and an open research fund of KLATASDS-MOE.

# References

Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2331-2336. The Association for Computational Linguistics.
James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 1-11.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net.
Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, USA, June 12-17, 2016, pages 199-209.
+Daniel Fried and Dan Klein. 2018. Policy gradient as a proxy for dynamic oracles in constituency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 469-476. +Daniel Fried, Mitchell Stern, and Dan Klein. 2017. Improving neural parsing by disentangling model combination and reranking effects. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 2: Short Papers, + +pages 161-166. Association for Computational Linguistics. +David Gaddy, Mitchell Stern, and Dan Klein. 2018. What's going on in neural constituency parsers? an analysis. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 999-1010. +Carlos Gómez-Rodríguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1314-1324. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780. +Juneki Hong and Liang Huang. 2018. Linear-time constituency parsing with rnns and dynamic programming. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, July 15-20, 2018, Volume 2: Short Papers, pages 477-483. +Tao Ji, Yuanbin Wu, and Man Lan. 2019. Graph-based dependency parsing with graph neural networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 2475-2485. Association for Computational Linguistics. 
diff --git a/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/full.md b/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/full.md new file mode 100644 index 0000000000000000000000000000000000000000..d962f9ec253f2a6963cc8b3a0bc2772f858765be --- /dev/null +++ b/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/full.md @@ -0,0 +1,316 @@ +# ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations + +Fernando Alva-Manchego $^{1*}$ and Louis Martin $^{2,3*}$ and Antoine Bordes $^{3}$ + +Carolina Scarton $^{1}$ and Benoît Sagot $^{2}$ and Lucia Specia $^{1,4}$ + +$^{1}$ University of Sheffield, $^{2}$ Inria, $^{3}$ Facebook AI Research, $^{4}$ Imperial College London + +f.alva@sheffield.ac.uk, louismartin@fb.com, abordes@fb.com, c.scarton@sheffield.ac.uk, benoit.sagot@inria.fr, l.specia@imperial.ac.uk + +# Abstract + +In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replacing complex words or phrases by simpler synonyms), reorder components, and/or delete information deemed unnecessary. Despite this varied range of possible text alterations, current models for automatic sentence simplification are evaluated using datasets that are focused on a single transformation, such as lexical paraphrasing or splitting. This makes it impossible to understand the ability of simplification models in more realistic settings. To alleviate this limitation, this paper introduces ASSET, a new dataset for assessing sentence simplification in English. ASSET is a crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations.
Through quantitative and qualitative experiments, we show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task. Furthermore, we motivate the need for developing better methods for automatic evaluation using ASSET, since we show that current popular metrics may not be suitable when multiple simplification transformations are performed. + +# 1 Introduction + +Sentence Simplification (SS) consists in modifying the content and structure of a sentence to make it easier to understand, while retaining its main idea and most of its original meaning (Alva-Manchego et al., 2020). Simplified texts can benefit non-native speakers (Paetzold, 2016), people suffering from aphasia (Carroll et al., 1998), dyslexia (Rello et al., 2013) or autism (Evans et al., 2014). They also help language processing tasks, such as parsing (Chandrasekar et al., 1996), summarisation (Silveira and Branco, 2012), and machine translation (Hasler et al., 2017). + +In order to simplify a sentence, several rewriting transformations can be performed: replacing complex words/phrases with simpler synonyms (i.e. lexical paraphrasing), changing the syntactic structure of the sentence (e.g. splitting), or removing superfluous information that makes the sentence more complicated (Petersen, 2007; Aluisio et al., 2008; Bott and Saggion, 2011). However, models for automatic SS are evaluated on datasets whose simplifications are not representative of this variety of transformations. For instance, TurkCorpus (Xu et al., 2016), a standard dataset for assessment in SS, contains simplifications produced mostly by lexical paraphrasing, while reference simplifications in HSplit (Sulem et al., 2018a) focus on splitting sentences.
The Newsela corpus (Xu et al., 2015) contains simplifications produced by professionals applying multiple rewriting transformations, but sentence alignments are automatically computed and thus imperfect, and its data can only be accessed after signing a restrictive public-sharing licence and cannot be redistributed, hampering reproducibility. + +These limitations in evaluation data prevent studying models' capabilities to perform a broad range of simplification transformations. Even though most SS models are trained on simplification instances displaying several text transformations (e.g. WikiLarge (Zhang and Lapata, 2017)), we currently do not measure their performance in more abstractive scenarios, i.e. cases with substantial modifications to the original sentences. + +In this paper we introduce ASSET (Abstractive Sentence Simplification Evaluation and Tuning), a new dataset for tuning and evaluation of automatic SS models. ASSET consists of 23,590 human simplifications associated with the 2,359 original sentences from TurkCorpus (10 simplifications per original sentence). Simplifications in ASSET were collected via crowdsourcing (§ 3), and encompass a variety of rewriting transformations (§ 4), which make them simpler than those in TurkCorpus and HSplit (§ 5), thus providing an additional suitable benchmark for comparing and evaluating automatic SS models. In addition, we study the applicability of standard metrics for evaluating SS using simplifications in ASSET as references (§ 6). We analyse whether BLEU (Papineni et al., 2002) or SARI (Xu et al., 2016) scores correlate with human judgements of fluency, adequacy and simplicity, and find that neither of the metrics shows a strong correlation with simplicity ratings. This motivates the need for developing better metrics for assessing SS when multiple rewriting transformations are performed.
+ +We make the following contributions: + +- A high quality large dataset for tuning and evaluation of SS models containing simplifications produced by applying multiple rewriting transformations. +- An analysis of the characteristics of the dataset that turn it into a new suitable benchmark for evaluation. +- A study questioning the suitability of popular metrics for evaluating automatic simplifications in a multiple-transformation scenario. + +# 2 Related Work + +# 2.1 Studies on Human Simplification + +A few corpus studies have been carried out to analyse how humans simplify sentences, and to attempt to determine the rewriting transformations that are performed. + +Petersen and Ostendorf (2007) analysed a corpus of 104 original and professionally simplified news articles in English. Sentences were manually aligned and each simplification instance was categorised as dropped (1-to-0 alignment), split (1-to-N), total (1-to-1) or merged (2-to-1). Some splits were further sub-categorised as edited (i.e. the sentence was split and some part was dropped) or different (i.e. same information but very different wording). This provides evidence that sentence splitting and deletion of information can be performed simultaneously. + +Aluísio et al. (2008) studied six corpora of simple texts (different genres) and a corpus of complex news texts in Brazilian Portuguese, to produce a manual for Portuguese text simplification (Specia et al., 2008). It contains several rules to perform the task focused on syntactic alterations: to split adverbial/coordinated/subordinated sentences, to reorder clauses to a subject-verb-object structure, to transform passive to active voice, among others. + +Bott and Saggion (2011) worked with a dataset of 200 news articles in Spanish with their corresponding manual simplifications. After automatically aligning the sentences, the authors determined the simplification transformations performed: change (e.g. 
difficult words, pronouns, voice of verb), delete (words, phrases or clauses), insert (word or phrases), split (relative clauses, coordination, etc.), proximisation (add locative phrases, change from third to second person), reorder, select, and join (sentences). + +From all these studies, it can be argued that the scope of rewriting transformations involved in the simplification process goes beyond only replacing words with simpler synonyms. In fact, human perception of complexity is most affected by syntactic features related to sentence structure (Brunato et al., 2018). Therefore, since human editors make several changes to both the lexical content and syntactic structure of sentences when simplifying them, we should expect that models for automatic sentence simplification can also make such changes. + +# 2.2 Evaluation Data for SS + +Most datasets for SS (Zhu et al., 2010; Coster and Kauchak, 2011; Hwang et al., 2015) consist of automatic sentence alignments between related articles in English Wikipedia (EW) and Simple English Wikipedia (SEW). In SEW, contributors are asked to write texts using simpler language, such as by shortening sentences or by using words from Basic English (Ogden, 1930). However, Yasseri et al. (2012) found that the syntactic complexity of sentences in SEW is almost the same as in EW. In addition, Xu et al. (2015) determined that automatically-aligned simple sentences are sometimes just as complex as their original counterparts, with only a few words replaced or dropped and the rest of the sentences left unchanged. + +More diverse simplifications are available in the Newsela corpus (Xu et al., 2015), a dataset of 1,130 news articles that were each manually simplified + +to up to 5 levels of simplicity. The parallel articles can be automatically aligned at the sentence level to train and test simplification models (Alva-Manchego et al., 2017; Stajner et al., 2018). 
However, the Newsela corpus can only be accessed after signing a restrictive license that prevents publicly sharing train/test splits of the dataset, which impedes reproducibility. + +Evaluating models on automatically-aligned sentences is problematic. Even more so if only one (potentially noisy) reference simplification for each original sentence is available. With this concern in mind, Xu et al. (2016) collected the TurkCorpus, a dataset with 2,359 original sentences from EW, each with 8 manual reference simplifications. The dataset is divided into two subsets: 2,000 sentences for validation and 359 for testing of sentence simplification models. TurkCorpus is suitable for automatic evaluation that involves metrics requiring multiple references, such as BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016). However, Xu et al. (2016) focused on simplifications through lexical paraphrasing, instructing annotators to rewrite sentences by reducing the number of difficult words or idioms, but without deleting content or splitting the sentences. This prevents evaluating a model's ability to perform a more diverse set of rewriting transformations when simplifying sentences. HSplit (Sulem et al., 2018a), on the other hand, provides simplifications involving only splitting for sentences in the test set of TurkCorpus. We build on TurkCorpus and HSplit by collecting a dataset that provides several manually-produced simplifications involving multiple types of rewriting transformations. + +# 2.3 Crowdsourcing Manual Simplifications + +A few projects have been carried out to collect manual simplifications through crowdsourcing. Pellow and Eskenazi (2014a) built a corpus of everyday documents (e.g. driving test preparation materials), and analysed the feasibility of crowdsourcing their sentence-level simplifications.
Of all the quality control measures taken, the most successful was providing a training session to workers, since it allowed them to block spammers and those without the skills to perform the task. Additionally, they proposed to use workers' self-reported confidence scores to flag submissions that could be discarded or reviewed. Later on, Pellow and Eskenazi (2014b) presented a preliminary study on producing simplifications through a collaborative process. Groups of four workers were assigned one sentence to simplify, and they had to discuss and agree on the process to perform it. Unfortunately, the data collected in these studies is no longer publicly available. + +Simplifications in TurkCorpus were also collected through crowdsourcing. Regarding the methodology followed, Xu et al. (2016) only report removing bad workers after a manual check of their first several submissions. More recently, Scarton et al. (2018) used volunteers to collect simplifications for SimPA, a dataset with sentences from the Public Administration domain. One particular characteristic of the methodology followed is that lexical and syntactic simplifications were performed independently. + +# 3 Creating ASSET + +We extended TurkCorpus (Xu et al., 2016) by using the same original sentences, but crowdsourcing manual simplifications that encompass a richer set of rewriting transformations. Since TurkCorpus was adopted as the standard dataset for evaluating SS models, several system outputs on this data are already publicly available (Zhang and Lapata, 2017; Zhao et al., 2018; Martin et al., 2020). Therefore, we can now assess the capabilities of these and other systems in scenarios with varying simplification expectations: lexical paraphrasing with TurkCorpus, sentence splitting with HSplit, and multiple transformations with ASSET. + +# 3.1 Data Collection Protocol + +Manual simplifications were collected using Amazon Mechanical Turk (AMT).
AMT allows us to publish HITs (Human Intelligence Tasks), which workers can choose to work on, submit an answer, and collect a reward if the work is approved. This was also the platform used for TurkCorpus. + +Worker Requirements. Participants were workers who: (1) have a HIT approval rate $\geq 95\%$; (2) have a number of HITs approved $> 1000$; (3) are residents of the United States of America, the United Kingdom or Canada; and (4) passed the corresponding Qualification Test designed for our task (more details below). The first two requirements are measured by the AMT platform and ensure that the workers have experience on different tasks and have had most of their work approved by previous requesters. The last two requirements are intended
| | |
| --- | --- |
| Original | Their eyes are quite small, and their visual acuity is poor. |
| TurkCorpus | Their eyes are very little, and their sight is inferior. |
| HSplit | Their eyes are quite small. Their visual acuity is poor as well. |
| ASSET | They have small eyes and poor eyesight. |
| Original | His next work, Saturday, follows an especially eventful day in the life of a successful neurosurgeon. |
| TurkCorpus | His next work at Saturday will be a successful Neurosurgeon. |
| HSplit | His next work was Saturday. It follows an especially eventful day in the life of a successful Neurosurgeon. |
| ASSET | "Saturday" records a very eventful day in the life of a successful neurosurgeon. |
| Original | He settled in London, devoting himself chiefly to practical teaching. |
| TurkCorpus | He rooted in London, devoting himself mainly to practical teaching. |
| HSplit | He settled in London. He devoted himself chiefly to practical teaching. |
| ASSET | He lived in London. He was a teacher. |
+ +Table 1: Examples of simplifications collected for ASSET together with their corresponding version from TurkCorpus and HSplit for the same original sentences. + +to ensure that the workers have a proficient level of English, and are capable of performing the simplification task. + +Qualification Test. We provided a training session to workers in the form of a Qualification Test (QT). Following Pellow and Eskenazi (2014a), we showed them explanations and examples of multiple simplification transformations (see details below). Each HIT consisted of three sentences to simplify, and all submissions were manually checked to filter out spammers and workers who could not perform the task correctly. The sentences used in this stage were extracted from the QATS dataset (Štajner et al., 2016). We had 100 workers take the QT, out of which 42 passed the test $(42\%)$ and worked on the task. + +Annotation Round. Workers who passed the QT had access to this round. Similar to Pellow and Eskenazi (2014a), each HIT now consisted of four original sentences that needed to be simplified. In addition to the simplification of each sentence, workers were asked to submit confidence scores on their simplifications using a 5-point Likert scale (1: Very Low, 5: Very High). We collected 10 simplifications (similar to Pellow and Eskenazi (2014a)) for each of the 2,359 original sentences in TurkCorpus. + +Simplification Instructions. For both the QT and the Annotation Round, workers received the same set of instructions about how to simplify a sentence. We provided examples of lexical paraphrasing (lexical simplification and reordering), sentence splitting, and compression (deleting unimportant information). We also included an example where all transformations were performed.
However, we clarified that it was at their discretion to decide which types of rewriting to execute in any given original sentence.$^{2}$ + +Table 1 presents a few examples of simplifications in ASSET, together with references from TurkCorpus and HSplit, randomly sampled for the same original sentences. It can be noticed that annotators in ASSET had more freedom to change the structure of the original sentences. + +# 3.2 Dataset Statistics + +ASSET contains 23,590 human simplifications associated with the 2,359 original sentences from TurkCorpus (2,000 from the validation set and 359 from the test set). Table 2 presents some general statistics from simplifications in ASSET. We show the same statistics for TurkCorpus and HSplit for comparison. $^{3}$ + +In addition to having more references per original sentence, ASSET's simplifications offer more variability, for example containing many more instances of natural sentence splitting than TurkCorpus. In addition, reference simplifications are shorter on average in ASSET, given that we allowed annotators to delete information that they considered unnecessary. In the next section, we further compare these datasets with more detailed text features. + +# 4 Rewriting Transformations in ASSET + +We study the simplifications collected for ASSET through a series of text features to measure the
| | ASSET | TurkCorpus | HSplit |
| --- | --- | --- | --- |
| Original Sentences | 2,359 | 2,359 | 359 |
| Num. of References | 10 | 8 | 4 |
| Type of Simp. Instances | | | |
| 1-to-1 | 17,245 | 18,499 | 408 |
| 1-to-N | 6,345 | 373 | 1,028 |
| Tokens per Reference | 19.04 | 21.29 | 25.49 |
+ +Table 2: General surface statistics for ASSET compared with TurkCorpus and HSplit. A simplification instance is an original-simplified sentence pair. + +abstractiveness of the rewriting transformations performed by the annotators. From here on, the analysis and statistics reported refer to the test set only (i.e. 359 original sentences), so that we can fairly compare ASSET, TurkCorpus and HSplit. + +# 4.1 Text Features + +In order to quantify the rewriting transformations, we computed several low-level features for all simplification instances using the tseval package (Martin et al., 2018): + +- Number of sentence splits: Corresponds to the difference between the number of sentences in the simplification and the number of sentences in the original sentence. In tseval, the number of sentences is calculated using NLTK (Loper and Bird, 2002). +- Compression level: Number of characters in the simplification divided by the number of characters in the original sentence. +- Replace-only Levenshtein distance: Computed as the normalised character-level Levenshtein distance (Levenshtein, 1966) for replace operations only, between the original sentence and the simplification. Replace-only Levenshtein distance is computed as follows (with $o$ the original sentence and $s$ the simplification): + +$$ +\frac{\mathrm{replace\_ops}(o, s)}{\min(\mathrm{len}(o), \mathrm{len}(s))} +$$ + +We do not consider insertions and deletions in the Levenshtein distance computation so that this feature is independent from the compression level. It therefore serves as a proxy for measuring the lexical paraphrases of the simplification. + +- Proportion of words deleted, added and reordered: Number of words deleted/reordered from the original sentence divided by the number of words in the original sentence; and the number of words that were added to the original sentence divided by the number of words in the simplification.
+- Exact match: Boolean feature that is true when the original sentence and the simplification are exactly the same, to account for unchanged sentences. +- Word deletion only: Boolean feature that is true when the simplification is obtained only by deleting words from the original sentence. This feature captures extractive compression. +- Lexical complexity score ratio: We compute the score as the mean squared log-ranks of content words in a sentence (i.e. without stopwords). We use the 50k most frequent words of the FastText word embeddings vocabulary (Bojanowski et al., 2016). This vocabulary was originally sorted with frequencies of words in the Common Crawl. This score is a proxy to the lexical complexity of the sentence given that word ranks (in a frequency table) have been shown to be the best indicators of word complexity (Paetzold and Specia, 2016). The ratio is then the value of this score on the simplification divided by that of the original sentence. +- Dependency tree depth ratio: We compute the ratio of the depth of the dependency parse tree of the simplification relative to that of the original sentence. When a simplification is composed of more than one sentence, we choose the maximum depth of all dependency trees. Parsing is performed using spaCy. This feature serves as a proxy to measure improvements in structural simplicity. + +Each feature was computed for all simplification instances in the dataset and then aggregated as a histogram (Figure 1) and as a percentage (Table 3).
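To make a few of these definitions concrete, here is a minimal pure-Python sketch of the compression level, the replace-only Levenshtein ratio, and the mean-squared-log-rank lexical score. It is our own illustration, not the released tseval implementation, and the helper names are ours; tseval should be consulted for the exact feature values.

```python
import math

def levenshtein_replace_ops(o: str, s: str) -> int:
    """Count substitutions in one optimal Levenshtein alignment of the
    original o and the simplification s (insertions/deletions excluded)."""
    n, m = len(o), len(s)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if o[i - 1] == s[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # match / substitution
    # Backtrace one optimal alignment, counting substitutions only.
    i, j, replaces = n, m, 0
    while i > 0 and j > 0:
        cost = 0 if o[i - 1] == s[j - 1] else 1
        if dp[i][j] == dp[i - 1][j - 1] + cost:
            replaces += cost
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return replaces

def replace_only_ratio(o: str, s: str) -> float:
    """replace_ops(o, s) / min(len(o), len(s)): a proxy for lexical
    paraphrasing that is independent of the compression level."""
    return levenshtein_replace_ops(o, s) / min(len(o), len(s))

def compression_level(o: str, s: str) -> float:
    """Characters in the simplification over characters in the original."""
    return len(s) / len(o)

def lexical_complexity_score(words, rank) -> float:
    """Mean squared log-rank of the content words found in a word ->
    frequency-rank table (the paper uses the 50k most frequent FastText
    vocabulary words; this toy table is a stand-in)."""
    ranks = [rank[w] for w in words if w in rank]
    if not ranks:
        return 0.0
    return sum(math.log(r) ** 2 for r in ranks) / len(ranks)

# Hypothetical usage on the third example from Table 1:
orig_sent = "He settled in London, devoting himself chiefly to practical teaching."
simp_sent = "He lived in London. He was a teacher."
print(round(compression_level(orig_sent, simp_sent), 2),
      round(replace_only_ratio(orig_sent, simp_sent), 2))
```

The replace-only ratio deliberately reuses the Levenshtein dynamic program and discards insertions and deletions during the backtrace, mirroring the independence-from-compression argument made above.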
+ +# 4.2 Results and Analysis + +Figure 1 shows the density of all features in ASSET, and compares them with those in TurkCorpus and

![](images/2dc091df62d771f772019fb4072b7bfc26423a0ad69232855c16e9e7183d7f25.jpg)
![](images/236874dda291194309085de49fafec8f031606fc2b648092cfe2b9aca0288170.jpg)
![](images/d4c4b1daac0fad2bebdd7dfbd61eed06c23d264ea72f3562e4b749144a074808.jpg)
![](images/eaba5e86053cbd7eabbd086a3058297681404af30063ed9f456d951f27f255df.jpg)
![](images/0ec4606f0470711d728565ff99a8031bc63ed13d3ac7654a33713ced7e48dcb3.jpg)
![](images/7b54bf7024e13f459b2f64b1a6c5254c1af7def16aa5a2db3579c2abdba3e948.jpg)
![](images/7f552267daa5c0a61c8827fd996b85d2273aa5c24feebca7160565a491fe5789.jpg)
![](images/3065eb7fcae176dc2f8a311ebfc5a4e8457998d5bfa1cd8767743fa0e51b87a1.jpg)

Figure 1: Density of text features in simplifications from HSplit, TurkCorpus, and ASSET.
| | ASSET | TurkCorpus | HSplit |
| --- | --- | --- | --- |
| Sentence Splitting | 20.2% | 4.6% | 68.2% |
| Compression (<75%) | 31.2% | 9.9% | 0.1% |
| Word Reordering | 28.3% | 19.4% | 10.1% |
| Exact Match | 0.4% | 16.3% | 26.5% |
| Word Deletion Only | 4.5% | 3.9% | 0.0% |
+ +Table 3: Percentage of simplifications featuring each of the different rewriting transformations in ASSET, TurkCorpus and HSplit. A simplification is considered compressed when its character length is less than $75\%$ of that of the original sentence. + +HSplit. Table 3 highlights some of these statistics. In particular, we report the percentage of sentences that: have at least one sentence split, have a compression level of $75\%$ or lower, have at least one reordered word, are exact copies of the original sentences, and operated word deletion only (e.g. by removing only an adverb). + +Sentence splits are practically non-existent in TurkCorpus (only $4.6\%$ have one split or more), and are more present and distributed in HSplit. In ASSET, annotators tended to not split sentences, and those who did mostly divided the original sentence into just two sentences (1 split). + +Compression is a differentiating feature of ASSET. Both TurkCorpus and HSplit show a high density at a compression ratio of 1.0, which means that no compression was performed. In fact, HSplit has several instances with compression levels greater than 1.0, which could be explained by splitting requiring adding words to preserve fluency. In contrast, ASSET offers more variability, perhaps signalling that annotators consider deleting information an important simplification operation. + +By analysing replace-only Levenshtein distance, we can see that simplifications in ASSET paraphrase the input more. For TurkCorpus and HSplit, most simplifications are similar to their original counterparts (higher densities closer to 0). On the other hand, ASSET's simplifications are distributed across all levels, indicating more diversity in the rewordings performed. This observation is complemented by the distributions of deleted, added and reordered words. Both TurkCorpus and HSplit have high densities of ratios close to 0.0 in all these features, while ASSET's are more distributed.
Moreover, these ratios are rarely equal to 0 (low density), meaning that for most simplifications, at least some effort was put into rewriting the original sentence. This is confirmed by the low percentage of exact matches in ASSET $(0.4\%)$ with respect to TurkCorpus $(16.3\%)$ and HSplit $(26.5\%)$. Once again, this suggests that more rewriting transformations are being performed in ASSET. + +In terms of lexical complexity, HSplit has a high density of ratios close to 1.0 due to its simplifications being structural and not lexical. TurkCorpus offers more variability, as expected, but its simplifications still contain a high number of words that are equally complex, perhaps due to most simplifications just changing a few words. On the other hand, ASSET's simplifications are more distributed across different levels of reductions in lexical complexity. + +Finally, all datasets show high densities of a 1.0 ratio in dependency tree depth. This could mean that significant structural changes were not made, which is indicated by most instances corresponding
For this task, the Qualification Test (QT) consisted of rating the quality of simplifications based on three criteria: fluency (or grammaticality), adequacy (or meaning preservation), and simplicity. Each HIT consisted of six original-simplified sentence pairs, and workers were asked to use a continuous scale (0-100) to submit their level of agreement (0: Strongly disagree, 100: Strongly agree) with the following statements: + +1. The Simplified sentence adequately expresses the meaning of the Original, perhaps omitting the least important information. +2. The Simplified sentence is fluent; there are no grammatical errors. +3. The Simplified sentence is easier to understand than the Original sentence. + +Using continuous scales when crowdsourcing human evaluations is common practice in Machine Translation (Bojar et al., 2018; Barrault et al., 2019), since it results in higher levels of inter-annotator consistency (Graham et al., 2013). The six sentence pairs for the Rating QT consisted of: + +- Three submissions to the Annotation QT, manually selected so that one contains splitting, one has a medium level of compression, and one contains grammatical and spelling mistakes. These allowed us to check that the particular characteristics of each sentence pair affect the corresponding evaluation criteria. + +- One sentence pair extracted from WikiLarge (Zhang and Lapata, 2017) that contains several sentence splits. This instance appeared twice in the HIT and allowed checking for intra-annotator consistency. +- One sentence pair from WikiLarge where the Original and the Simplification had no relation to each other. This served to check the attention level of the worker. + +All submitted ratings were manually reviewed to validate the quality control established and to select the qualified workers for the task. + +Preference Task.
For each of the 359 original sentences in the test set, we randomly sampled one reference simplification from ASSET and one from TurkCorpus, and then asked qualified workers to choose which simplification best answers each of the following questions: + +- Fluency: Which sentence is more fluent? +- Meaning: Which sentence expresses the original meaning the best? +- Simplicity: Which sentence is easier to read and understand? + +Workers were also allowed to judge simplifications as "similar" when they could not determine which one was better. The same process was followed to compare simplifications in ASSET against those in HSplit. Each HIT consisted of 10 sentence pairs. + +# 5.2 Results and Analysis + +Table 4 (top section) presents, for each evaluation dimension, the percentage of times a simplification from ASSET or TurkCorpus was preferred over the other, and the percentage of times they were judged as "similar". In general, judges preferred ASSET's simplifications in terms of fluency and simplicity. However, they found TurkCorpus' simplifications more meaning-preserving. This is expected since they were produced mainly by replacing words/phrases with virtually no deletion of content. + +A similar behaviour was observed when comparing ASSET to HSplit (bottom section of Table 4). In this case, however, the differences in preferences are greater than with TurkCorpus. This could indicate that changes in syntactic structure are not enough for a sentence to be considered simpler.
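The significance marker used for these preference judgments is a two-sided binomial test. As a sanity-check sketch (the counts below are approximate reconstructions from the reported percentages, not the paper's raw data), an exact test needs nothing beyond `math.comb`:

```python
from math import comb

def binom_two_sided(k: int, n: int, p: float = 0.5) -> float:
    """Exact two-sided binomial test: total probability of all outcomes
    no more likely than observing k successes out of n trials."""
    def pmf(i: int) -> float:
        return comb(n, i) * p**i * (1 - p)**(n - i)
    pk = pmf(k)
    return sum(pmf(i) for i in range(n + 1) if pmf(i) <= pk + 1e-12)

# Fluency preferences with "similar" votes excluded: roughly 138 ASSET vs.
# 82 TurkCorpus out of the 359 comparisons (38.4% vs. 22.8%).
p_value = binom_two_sided(138, 138 + 82)
print(p_value < 0.001)  # True: the preference is significant
```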
| | Fluency | Meaning | Simplicity |
| --- | --- | --- | --- |
| ASSET | 38.4%* | 23.7% | 41.2%* |
| TurkCorpus | 22.8% | 37.9%* | 20.1% |
| Similar | 38.7% | 38.4% | 38.7% |
| ASSET | 53.5%* | 17.0% | 59.0%* |
| HSplit | 19.5% | 51.5%* | 14.8% |
| Similar | 27.0% | 31.5% | 26.2% |
+ +Table 4: Percentages of human judges who preferred simplifications in ASSET or TurkCorpus, and ASSET or HSplit, out of 359 comparisons. * indicates a statistically significant difference between the two datasets (binomial test with p-value $< 0.001$). + +# 6 Evaluating Evaluation Metrics + +In this section, we study the behaviour of evaluation metrics for SS when using ASSET's simplifications (test set) as references. In particular, we measure the correlation of standard metrics with human judgements of fluency, adequacy and simplicity, on simplifications produced by automatic systems. + +# 6.1 Experimental Setup + +Evaluation Metrics. We analysed the behaviour of two standard metrics in automatic evaluation of SS outputs: BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016). BLEU is a precision-oriented metric that relies on the number of $n$ -grams in the output that match $n$ -grams in the references, independently of position. SARI measures improvement in the simplicity of a sentence based on the $n$ -grams added, deleted and kept by the simplification system. It does so by comparing the output of the simplification model to multiple references and the original sentence, using both precision and recall. BLEU has shown positive correlation with human judgements of grammaticality and meaning preservation (Štajner et al., 2014; Wubben et al., 2012; Xu et al., 2016), while SARI has high correlation with judgements of simplicity gain (Xu et al., 2016). In our experiments, we used the implementations of these metrics available in the EASSE package for automatic sentence simplification evaluation (Alva-Manchego et al., 2019). We computed all the scores at sentence level, as in the experiment by Xu et al. (2016), where they compared sentence-level correlations of FKGL, BLEU and SARI with human ratings. We used a smoothed sentence-level version of BLEU so that comparison is possible, even though BLEU was designed as a corpus-level metric. + +System Outputs.
We used publicly-available simplifications produced by automatic SS systems: PBSMT-R (Wubben et al., 2012), which is a phrase-based MT model; Hybrid (Narayan and Gardent, 2014), which uses phrase-based MT coupled with semantic analysis; SBSMT-SARI (Xu et al., 2016), which relies on syntax-based MT; NTS-SARI (Nisioi et al., 2017), a neural sequence-to-sequence model with a standard encoder-decoder architecture; and ACCESS (Martin et al., 2020), an encoder-decoder architecture conditioned on explicit attributes of sentence simplification. + +Collection of Human Ratings. We randomly chose 100 original sentences from ASSET and, for each of them, we sampled one system simplification. The automatic simplifications were selected so that the distribution of simplification transformations (e.g. sentence splitting, compression, paraphrases) would match that from human simplifications in ASSET. That was done so that we could obtain a sample that has variability in the types of rewrites performed. For each sentence pair (original and automatic simplification), we crowdsourced 15 human ratings on fluency (i.e. grammaticality), adequacy (i.e. meaning preservation) and simplicity, using the same worker selection criteria and HIT design of the Qualification Test as in § 5.1. + +# 6.2 Inter-Annotator Agreement + +We followed the process suggested in (Graham et al., 2013). First, we normalised the scores of each rater by their individual mean and standard deviation, which helps eliminate individual judge preferences. Then, the normalised continuous scores were converted to five interval categories using equally spaced bins. After that, we followed Pavlick and Tetreault (2016) and computed quadratic weighted Cohen's $\kappa$ (Cohen, 1968) simulating two raters: for each sentence, we chose one worker's rating as the category for annotator A, and selected the rounded average scores for the remaining workers as the category for annotator B. 
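The agreement procedure just described (per-rater score normalisation, five equally spaced bins, and a quadratic weighted Cohen's κ between a held-out rater and the rounded average of the rest) can be sketched as follows; this is an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def normalise_and_bin(scores: np.ndarray, n_cat: int = 5) -> np.ndarray:
    """z-score one rater's continuous (0-100) ratings, then map them
    to n_cat equally spaced interval categories (0 .. n_cat-1)."""
    z = (scores - scores.mean()) / scores.std()
    edges = np.linspace(z.min(), z.max(), n_cat + 1)[1:-1]
    return np.digitize(z, edges)

def quadratic_kappa(a, b, n_cat: int = 5) -> float:
    """Quadratic weighted Cohen's kappa between two raters' categories."""
    O = np.zeros((n_cat, n_cat))          # observed agreement matrix
    for x, y in zip(a, b):
        O[x, y] += 1
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))   # chance agreement
    w = np.subtract.outer(np.arange(n_cat), np.arange(n_cat)) ** 2 \
        / (n_cat - 1) ** 2                       # quadratic disagreement weights
    return 1.0 - (w * O).sum() / (w * E).sum()
```

The paper repeats this two-rater simulation 1,000 times with a randomly held-out rater to report the mean and variance of κ.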
We then computed $\kappa$ for this pair over the whole dataset. We repeated the process 1,000 times to compute the mean and variance of $\kappa$ . The resulting values are: $0.687 \pm 0.028$ for Fluency, $0.686 \pm 0.030$ for Meaning and $0.628 \pm 0.032$ for Simplicity. All values point to a moderate level + +
| Metric | References | Fluency | Meaning | Simplicity |
| --- | --- | --- | --- | --- |
| BLEU | ASSET | 0.42* | 0.61* | 0.31* |
| | TurkCorpus | 0.35* | 0.59* | 0.18 |
| SARI | ASSET | 0.16 | 0.13 | 0.28* |
| | TurkCorpus | 0.14 | 0.10 | 0.17 |
+ +of agreement, which is in line with the subjective nature of the simplification task. + +# 6.3 Correlation with Evaluation Metrics + +We computed the Pearson correlation between the normalised ratings and the evaluation metrics of our interest (BLEU and SARI) using ASSET or TurkCorpus as the set of references. We refrained from experimenting with HSplit since neither BLEU nor SARI correlate with human judgements when calculated using that dataset as references (Sulem et al., 2018a). Results are reported in Table 5. + +BLEU shows a strong positive correlation with Meaning Preservation using either simplifications from ASSET or TurkCorpus as references. There is also some positive correlation with Fluency judgements, but that is not always the case for Simplicity: no correlation when using TurkCorpus and moderate when using ASSET. This is in line with previous studies that have shown that BLEU is not a good estimate for simplicity (Wubben et al., 2012; Xu et al., 2016; Sulem et al., 2018b). + +In the case of SARI, correlations are positive but low with all criteria, and significant only for simplicity with ASSET's references. Xu et al. (2016) showed that SARI correlated with human judgements of simplicity gain when instructing judges to "grade the quality of the variations by identifying the words/phrases that are altered, and counting how many of them are good simplifications". The judgements they requested differ from the ones we collected, since theirs were tailored to rate simplifications produced by lexical paraphrasing only. These results show that SARI might not be suitable for the evaluation of automatic simplifications with multiple rewrite operations. + +In Table 6, we further analyse the human ratings collected, and compute their correlations with similar text features as in § 4. The results shown + +Table 5: Pearson correlation of human ratings with automatic metrics on system simplifications. * indicates a significance level of p-value $< 0.05$.
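Pearson's r, the statistic used for these correlations, reduces to a few lines of plain Python. The sketch below is illustrative and does not reproduce the significance tests reported alongside the correlations.

```python
def pearson(x, y) -> float:
    """Pearson correlation coefficient between two equal-length sequences:
    covariance divided by the product of the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Perfectly linearly related ratings correlate at r ≈ 1.0
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))
```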
+ +
| Feature | Fluency | Meaning | Simplicity |
| --- | --- | --- | --- |
| Length | 0.12 | 0.31* | 0.03 |
| Sentence Splits | -0.13 | -0.06 | -0.08 |
| Compression Level | 0.26* | 0.46* | 0.04 |
| Levenshtein Distance | -0.40* | -0.67* | -0.18 |
| Replace-only Lev. Dist. | -0.04 | -0.17 | -0.06 |
| Prop. Deleted Words | -0.43* | -0.67* | -0.19 |
| Prop. Added Words | -0.19 | -0.38* | -0.12 |
| Prop. Reordered Words | -0.37* | -0.57* | -0.18 |
| Dep. Tree Depth Ratio | 0.20 | 0.24 | 0.06 |
| Word Rank Ratio | 0.04 | 0.08 | -0.05 |
+ +Table 6: Pearson correlation of human ratings with text features on system simplifications. * indicates a significance level of p-value $< 0.01$. + +reinforce our previous observations that judgements on Meaning correlate with making few changes to the sentence: strong negative correlation with Levenshtein distance, and strong negative correlation with the proportion of words added, deleted, and reordered. No conclusions could be drawn with respect to Simplicity. + +# 7 Conclusion + +We have introduced ASSET, a new dataset for tuning and evaluation of SS models. Simplifications in ASSET were crowdsourced, and annotators were instructed to apply multiple rewriting transformations. This improves over current publicly available evaluation datasets, which are focused on only one type of transformation. Through several experiments, we have shown that ASSET contains simplifications that are more abstractive, and that are considered simpler than those in other evaluation corpora. Furthermore, we have motivated the need to develop new metrics for automatic evaluation of SS models, especially when evaluating simplifications with multiple rewriting operations. Finally, we hope that ASSET's multi-transformation features will motivate the development of SS models that benefit a variety of target audiences according to their specific needs, such as people with low literacy or cognitive disabilities. + +# Acknowledgements + +This work was partly supported by Benoit Sagot's chair in the PRAIRIE institute, funded by the French national agency ANR as part of the "Investissements d'avenir" programme under the reference ANR-19-P3IA-0001. + +# References + +Sandra M. Aluisio, Lucia Specia, Thiago A. S. Pardo, Erick G. Maziero, Helena M. Caseli, and Renata P. M. Fortes. 2008. A corpus analysis of simple account texts and the proposal of simplification strategies: First steps towards text simplification systems.
In Proceedings of the 26th Annual ACM International Conference on Design of Communication, SIGDOC '08, pages 15-22, Lisbon, Portugal. ACM. +Fernando Alva-Manchego, Joachim Bingel, Gustavo Paetzold, Carolina Scarton, and Lucia Specia. 2017. Learning how to simplify from explicit labeling of complex-simplified text pairs. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 295-305, Taipei, Taiwan. Asian Federation of Natural Language Processing. +Fernando Alva-Manchego, Louis Martin, Carolina Scarton, and Lucia Specia. 2019. EASSE: Easier automatic sentence simplification evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 49-54, Hong Kong, China. Association for Computational Linguistics. +Fernando Alva-Manchego, Carolina Scarton, and Lucia Specia. 2020. Data-driven sentence simplification: Survey and benchmark. Computational Linguistics, 46(1):135-187. +Loic Barrault, Ondrej Bojar, Marta R. Costa-jussa, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics. +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606. +Ondrej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 conference on machine translation (WMT18). 
In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 272-303, Brussels, Belgium. Association for Computational Linguistics. +Stefan Bott and Horacio Saggion. 2011. Spanish text simplification: An exploratory study. Procesamiento del Lenguaje Natural, 47:87-95. + +Dominique Brunato, Lorenzo De Mattei, Felice Dell'Orletta, Benedetta Iavarone, and Giulia Venturi. 2018. Is this sentence difficult? do you agree? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2690-2699, Brussels, Belgium. Association for Computational Linguistics. +John Carroll, Guido Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of English newspaper text to assist aphasic readers. In Proceedings of AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology, pages 7-10. +R. Chandrasekar, Christine Doran, and B. Srinivas. 1996. Motivations and methods for text simplification. In Proceedings of the 16th Conference on Computational Linguistics, volume 2 of COLING '96, pages 1041-1044, Copenhagen, Denmark. Association for Computational Linguistics. +Jacob Cohen. 1968. Weighted kappa: Nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213-220. +William Coster and David Kauchak. 2011. Simple English Wikipedia: A new text simplification task. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT '11, pages 665-669, Stroudsburg, PA, USA. Association for Computational Linguistics. +Richard Evans, Constantin Orasan, and Iustin Dornescu. 2014. An evaluation of syntactic simplification rules for people with autism. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations, PIT 2014, pages 131-140, Gothenburg, Sweden. Association for Computational Linguistics.
+Yvette Graham, Timothy Baldwin, Alistair Moffat, and Justin Zobel. 2013. Continuous measurement scales in human evaluation of machine translation. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 33-41, Sofia, Bulgaria. Association for Computational Linguistics. +Eva Hasler, Adrià de Gispert, Felix Stahlberg, Aurélien Waite, and Bill Byrne. 2017. Source sentence simplification for statistical machine translation. Computer Speech & Language, 45(C):221-235. +William Hwang, Hannaneh Hajishirzi, Mari Ostendorf, and Wei Wu. 2015. Aligning Sentences from Standard Wikipedia to Simple Wikipedia. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 211-217, Denver, Colorado. Association for Computational Linguistics. + +V. I. Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707. +Edward Loper and Steven Bird. 2002. NLTK: the natural language toolkit. CoRR, cs.CL/0205028. +Louis Martin, Samuel Humeau, Pierre-Emmanuel Mazaré, Éric de La Clergerie, Antoine Bordes, and Benoit Sagot. 2018. Reference-less quality estimation of text simplification systems. In Proceedings of the 1st Workshop on Automatic Text Adaptation (ATA), pages 29-38, Tilburg, the Netherlands. ACL. +Louis Martin, Benoit Sagot, Éric de la Clergerie, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC 2020). +Shashi Narayan and Claire Gardent. 2014. Hybrid simplification using deep semantics and machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 435-445, Baltimore, Maryland. Association for Computational Linguistics. +Sergiu Nisioi, Sanja Štajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017.
Exploring neural text simplification models. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 85-91, Vancouver, Canada. Association for Computational Linguistics. +Charles Kay Ogden. 1930. Basic English: A General Introduction with Rules and Grammar. Kegan Paul, Trench, Trubner & Co. +Gustavo Paetzold and Lucia Specia. 2016. SemEval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560-569, San Diego, California. Association for Computational Linguistics. +Gustavo Henrique Paetzold. 2016. Lexical Simplification for Non-Native English Speakers. Ph.D. thesis, University of Sheffield, Sheffield, UK. +Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL '02, pages 311-318, Philadelphia, Pennsylvania. ACL. +Ellie Pavlick and Joel Tetreault. 2016. An empirical analysis of formality in online communication. Transactions of the Association for Computational Linguistics, 4:61-74. +David Pellow and Maxine Eskenazi. 2014a. An open corpus of everyday documents for simplification tasks. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 84-93, Gothenburg, Sweden. Association for Computational Linguistics. +David Pellow and Maxine Eskenazi. 2014b. Tracking human process using crowd collaboration to enrich data. In Human Computation and Crowdsourcing: Works in Progress and Demonstration Abstracts. An Adjunct to the Proceedings of the Second AAAI Conference on Human Computation and Crowdsourcing, pages 52-53. +Sarah E. Petersen. 2007. Natural Language Processing Tools for Reading Level Assessment and Text Simplification for Bilingual Education. Ph.D.
thesis, University of Washington, Seattle, WA, USA. AAI3275902. +Sarah E. Petersen and Mari Ostendorf. 2007. Text simplification for language learners: a corpus analysis. In Proceedings of the Speech and Language Technology for Education Workshop, SLaTE 2007, pages 69-72. +Luz Rello, Clara Bayarri, Azuki Gorriz, Ricardo Baeza-Yates, Saurabh Gupta, Gaurang Kanvirde, Horacio Saggion, Stefan Bott, Roberto Carlini, and Vasile Topac. 2013. "DysWebxia 2.0!: More accessible text for people with dyslexia". In Proceedings of the 10th International Cross-Disciplinary Conference on Web Accessibility, W4A '13, pages 25:1-25:2, Rio de Janeiro, Brazil. ACM. +Carolina Scarton, Gustavo H. Paetzold, and Lucia Specia. 2018. SimPA: A sentence-level simplification corpus for the public administration domain. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). +Sara Botelho Silveira and Antonio Branco. 2012. Enhancing multi-document summaries with sentence simplification. In Proceedings of the 14th International Conference on Artificial Intelligence, ICAI 2012, pages 742-748, Las Vegas, USA. +Lúcia Specia, Sandra Maria Aluísio, and Thiago A. Salgueiro Pardo. 2008. Manual de simplificação sintática para o português. Technical Report NILC-TR-08-06, NILC-ICMC-USP, São Carlos, SP, Brasil. Available at http://www.nilc.icmc.usp.br/nilc/download/NILC_TR_08_06.pdf. +Elior Sulem, Omri Abend, and Ari Rappoport. 2018a. BLEU is not suitable for the evaluation of text simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 738-744. Association for Computational Linguistics. +Elior Sulem, Omri Abend, and Ari Rappoport. 2018b. Semantic structural evaluation for text simplification.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 685-696, New Orleans, Louisiana. Association for Computational Linguistics. +Sanja Štajner, Marc Franco-Salvador, Paolo Rosso, and Simone Paolo Ponzetto. 2018. CATS: A tool for customized alignment of text simplification corpora. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). +Sanja Štajner, Ruslan Mitkov, and Horacio Saggion. 2014. One step closer to automatic evaluation of text simplification systems. In Proceedings of the 3rd Workshop on Predicting and Improving Text Readability for Target Reader Populations (PITR), pages 1-10, Gothenburg, Sweden. Association for Computational Linguistics. +Sanja Štajner, Maja Popović, Horacio Saggion, Lucia Specia, and Mark Fishel. 2016. Shared task on quality assessment for text simplification. In Proceedings of the Workshop on Quality Assessment for Text Simplification - LREC 2016, QATS 2016, pages 22-31, Portorož, Slovenia. European Language Resources Association (ELRA). +Sander Wubben, Antal van den Bosch, and Emiel Krahmer. 2012. Sentence simplification by monolingual machine translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1, ACL '12, pages 1015-1024, Stroudsburg, PA, USA. Association for Computational Linguistics. +Wei Xu, Chris Callison-Burch, and Courtney Napoles. 2015. Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3:283-297. +Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401-415.
+Taha Yasseri, András Kornai, and János Kertész. 2012. A practical approach to language complexity: A Wikipedia case study. PLOS ONE, 7(11):1-8. +Xingxing Zhang and Mirella Lapata. 2017. Sentence simplification with deep reinforcement learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 595-605, Copenhagen, Denmark. Association for Computational Linguistics. +Sanqiang Zhao, Rui Meng, Daqing He, Andi Saptono, and Bambang Parmanto. 2018. Integrating transformer and paraphrase rules for sentence simplification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3164-3173, Brussels, Belgium. Association for Computational Linguistics. +Zhemin Zhu, Delphine Bernhard, and Iryna Gurevych. 2010. A monolingual tree-based translation model for sentence simplification. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10, pages 1353-1361, Stroudsburg, PA, USA. Association for Computational Linguistics.
\ No newline at end of file diff --git a/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/images.zip b/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..761708f95fee22b02c656d6376aa9a2e432be6e7 --- /dev/null +++ b/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:73e1659c6eed1ce07412ac8f60368ef8753e494ea2183f76afbab655c800a109 +size 306171 diff --git a/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/layout.json b/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..39d0203049243344b6de3d4a59cf62ca8c4c8f4a --- /dev/null +++ b/assetadatasetfortuningandevaluationofsentencesimplificationmodelswithmultiplerewritingtransformations/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3c3e2870f42ade87c551b49dcf561e168ee9a1f1b16a74a831eef7965fc65ce1 +size 347586 diff --git a/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_content_list.json b/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..e4ceec740a290ea21c85bfde230e62ade89a9f39 --- /dev/null +++ b/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:eba1ca112ea48d1642924a9e1dcdcf69e566d025af094dfac39b766f8214624a +size 72451 diff --git 
a/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_model.json b/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_model.json new file mode 100644 index 0000000000000000000000000000000000000000..0654b1dff0b24f57d12f56c359fd920d3aea507e --- /dev/null +++ b/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a6714e4ebdafc8a3cfdf49e6b7e6cb92973915d6ab616ae46ec64e3099ba7b9a +size 87770 diff --git a/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_origin.pdf b/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..c6ca72f4781ec76992cf5d4cbe48cebacaa5b718 --- /dev/null +++ b/astudyofnonautoregressivemodelforsequencegeneration/4eebe3ba-7f79-4f0a-a485-4fe109c9b661_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:006a7689b2b1fb6e2f790443aa1ff68b00d7fef1f59bc4f617ae49f068e70957 +size 545647 diff --git a/astudyofnonautoregressivemodelforsequencegeneration/full.md b/astudyofnonautoregressivemodelforsequencegeneration/full.md new file mode 100644 index 0000000000000000000000000000000000000000..e6afae0f03c5f93706d4c52522fc71e991de3417 --- /dev/null +++ b/astudyofnonautoregressivemodelforsequencegeneration/full.md @@ -0,0 +1,336 @@ +# A Study of Non-autoregressive Model for Sequence Generation + +Yi Ren + +Zhejiang University + +rayeren@zju.edu.cn + +Jinglin Liu * + +Zhejiang University + +jinglinliu@zju.edu.cn + +Xu Tan + +Microsoft Research Asia + +xuta@microsoft.com + +Zhou Zhao† + +Zhejiang University + +zhaozhou@zju.edu.cn + +Sheng Zhao + +Microsoft STC Asia + +Sheng.Zhao@microsoft.com + +Tie-Yan Liu + +Microsoft Research Asia + +tyliu@microsoft.com + +# Abstract + +Non-autoregressive (NAR) 
models generate all the tokens of a sequence in parallel, resulting in faster generation speed compared to their autoregressive (AR) counterparts but at the cost of lower accuracy. Different techniques including knowledge distillation and source-target alignment have been proposed to bridge the gap between AR and NAR models in various tasks such as neural machine translation (NMT), automatic speech recognition (ASR), and text to speech (TTS). With the help of those techniques, NAR models can catch up with the accuracy of AR models in some tasks but not in others. In this work, we conduct a study to understand the difficulty of NAR sequence generation and try to answer: (1) Why can NAR models catch up with AR models in some tasks but not in others? (2) Why do techniques like knowledge distillation and source-target alignment help NAR models? Since the main difference between AR and NAR models is that NAR models do not use dependency among target tokens while AR models do, intuitively the difficulty of NAR sequence generation heavily depends on the strength of dependency among target tokens. To quantify such dependency, we propose an analysis model called CoMMA to characterize the difficulty of different NAR sequence generation tasks. We have several interesting findings: 1) Among the NMT, ASR and TTS tasks, ASR has the most target-token dependency while TTS has the least. 2) Knowledge distillation reduces the target-token dependency in the target sequence and thus improves the accuracy of NAR models. 3) Source-target alignment constraint encourages dependency of a target token on source tokens and thus eases the training of NAR models.
+ +# 1 Introduction + +Non-autoregressive (NAR) models (Oord et al., 2017; Gu et al., 2017; Chen et al., 2019; Ren et al., 2019), which generate all the tokens in a target sequence in parallel and can speed up inference, are widely explored in natural language and speech processing tasks such as neural machine translation (NMT) (Gu et al., 2017; Lee et al., 2018; Guo et al., 2019a; Wang et al., 2019; Li et al., 2019b; Guo et al., 2019b), automatic speech recognition (ASR) (Chen et al., 2019) and text to speech (TTS) synthesis (Oord et al., 2017; Ren et al., 2019). However, NAR models usually lead to lower accuracy than their autoregressive (AR) counterparts since the inner dependencies among the target tokens are explicitly removed. + +Several techniques have been proposed to alleviate the accuracy degradation, including 1) knowledge distillation (Oord et al., 2017; Gu et al., 2017; Guo et al., 2019a,b; Ren et al., 2019), and 2) imposing a source-target alignment constraint with fertility (Gu et al., 2017), word mapping (Guo et al., 2019a), attention distillation (Li et al., 2019b) and duration prediction (Ren et al., 2019). With the help of those techniques, it is observed that NAR models can match the accuracy of AR models for some tasks (Ren et al., 2019), but the gap still exists for some other tasks (Gu et al., 2017; Chen et al., 2019). Therefore, several questions arise naturally: (1) Why does the gap still exist for some tasks? Are some tasks more difficult for NAR generation than others? (2) Why do techniques like knowledge distillation and source-target alignment help NAR generation? + +The main difference between AR and NAR models is that NAR models do not consider the dependency among target tokens, which is also the root cause of the accuracy drop of NAR models.
Thus, to better understand NAR sequence generation and answer the above questions, we need to characterize and quantify the target-token dependency, which turns out to be non-trivial since the sequences can be of different modalities (i.e., speech or text). For this purpose, we design a novel model called Conditional Masked prediction model with Mix-Attention (CoMMA), inspired by the mix-attention in He et al. (2018) and the masked language modeling in Devlin et al. (2018): in CoMMA, (1) the prediction of one target token can attend to all the source and target tokens with mix-attention, and (2) target tokens are randomly masked with varying probabilities. CoMMA can help us measure target-token dependency using the ratio of the attention weights on the target context over those on the full (both source and target) context when predicting a target token: the bigger the ratio, the larger the dependency among target tokens.

We conduct a comprehensive study in this work and obtain several interesting discoveries that answer the questions above. First, we find that the rank of the target-token dependency among the three tasks is $\mathrm{ASR} > \mathrm{NMT} > \mathrm{TTS}$: ASR has the largest dependency while TTS has the smallest. This finding is consistent with the accuracy gap between AR and NAR models and demonstrates the varying difficulty of NAR generation across tasks. Second, we replace the target sequences of the original training data with the sequences generated by an AR model (i.e., through knowledge distillation) and use the new data to train CoMMA; we find that the target-token dependency is reduced. A smaller target-token dependency makes NAR training easier and thus improves the accuracy. Third, source-target alignment constraints such as explicit duration prediction (Ren et al., 2019) or implicit attention distillation (Li et al., 2019b) also reduce the target-token dependency, thus helping the training of NAR models.
The main contributions of this work are as follows:

- We design a novel model, the conditional masked prediction model with mix-attention (CoMMA), to measure the token dependency for sequence generation.
- With CoMMA, we find that: 1) Among the three tasks, ASR is the most difficult and TTS is the easiest for NAR generation; 2) both knowledge distillation and imposing a source-target alignment constraint reduce the target-token dependency, and thus reduce the difficulty of training NAR models.

# 2 CoMMA

In this section, we analyze the token dependency in the target sequence with a novel conditional masked prediction model with mix-attention (CoMMA). We first introduce the design and structure of CoMMA, and then describe how to measure the target token dependency based on CoMMA.

# 2.1 The Design of CoMMA

It is non-trivial to directly measure and compare the target token dependency in different modalities (i.e., speech or text) and under different conditional source modalities (i.e., speech or text). Therefore, we have several considerations in the design of CoMMA: 1) We use masked language modeling as in BERT (Devlin et al., 2018), conditioned on the source, to train CoMMA, which can help measure the dependency on the target context when predicting the current masked token. 2) In order to ensure that the dependencies on source and target tokens are comparable, we use mix-attention (He et al., 2018) to calculate the attention weights on both source and target tokens in a single softmax function.

The model architecture of CoMMA is shown in Figure 1. Specifically, CoMMA differs from the standard Transformer (Vaswani et al., 2017) as follows: 1) Some tokens are randomly replaced by a special mask token $\langle M\rangle$ with probability $p$, and the model is trained to predict the original unmasked tokens.
2) We employ the mix-attention mechanism (He et al., 2018), where layer $i$ in the decoder can attend both to itself and to layer $i$ in the encoder, computing the attention weights in a single softmax function. We share the parameters of the attention and feed-forward layers between the encoder and decoder. 3) Following He et al. (2018), we add a source/target embedding to tell the model whether a token is from the source or the target sequence, and also add position embeddings, with the positions of source and target tokens both starting from zero. 4) The encoder and decoder pre-nets (Shen et al., 2018) vary across tasks: for TTS, the encoder pre-net consists of only an embedding lookup table, and the decoder pre-net consists of a 2-layer dense network with ReLU activation. For ASR, the encoder pre-net consists of a 3-layer 2D convolutional network, and the decoder pre-net consists of only an embedding lookup table. For NMT, both the encoder and decoder pre-nets consist of only an embedding lookup table.

![](images/af54dfe8cb38a130fa4c95d1bd59065217c0ad1cc2f56b116566b6d41e360bc0.jpg)
(a) The main structure of CoMMA.

![](images/ce038286bc768829a72502bf22f1697d74d4d00b359b6a7b4eeaf82b5164564a.jpg)
(b) The input module of CoMMA.
Figure 1: The architecture of the conditional masked prediction model with mix-attention (CoMMA).

CoMMA is designed to measure the target token dependency in a variety of sequence generation settings, including AR (unidirectional) generation, NAR generation, bidirectional generation and even identity copy. To this end, we vary the mask probability $p$ (the ratio of masked tokens among all target tokens) in a uniform distribution $p \sim U(0.0, 1.0)$ when training CoMMA. In this way, $p = 1$ covers NAR generation, $p = 0$ covers identity copy, and in some cases, $p$ can also cover AR generation.
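The masking scheme and the single-softmax mix-attention can be sketched in a few lines of Python (a minimal illustration under our own naming, not the authors' implementation):

```python
import math
import random

random.seed(0)
MASK = "<M>"

def mask_targets(tokens, p):
    """Replace each target token by <M> with probability p. During CoMMA
    training, p is drawn from U(0, 1); p = 1 corresponds to NAR generation
    and p = 0 to identity copy."""
    return [MASK if random.random() < p else t for t in tokens]

def mix_attention_weights(scores_to_target, scores_to_source):
    """Normalize attention scores over target and source positions in a
    single softmax, so the weights on both sides are directly comparable."""
    scores = scores_to_target + scores_to_source
    m = max(scores)                        # for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    n = len(scores_to_target)
    return weights[:n], weights[n:]        # (on target, on source)

tgt_w, src_w = mix_attention_weights([0.5, 1.0], [0.0, 0.2, 0.1])
assert abs(sum(tgt_w) + sum(src_w) - 1.0) < 1e-9
```

Because all weights come out of one softmax, the attention mass on the target side can be compared directly against the mass on the source side, which is what the attention density ratio defined in Section 2.2 exploits.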
# 2.2 How to Measure Target Token Dependency based on CoMMA

To measure the target token dependency, we define a metric called the attention density ratio $R$, which represents the ratio of the attention density (the normalized attention weights) on the target context in mix-attention when predicting a target token with a well-trained CoMMA. We describe the calculation of $R$ in the following steps.

First, we define the attention density ratio $\alpha_i$ for a single target token $i$ as

$$
\alpha_ {i} = \frac {\frac {1}{N} \sum_ {j = 1} ^ {N} A _ {i , j}}{\frac {1}{N} \sum_ {j = 1} ^ {N} A _ {i , j} + \frac {1}{M} \sum_ {j = N + 1} ^ {N + M} A _ {i , j}}, \tag {1}
$$

where $A_{i,j}$ denotes the attention weight from token $i$ to token $j$ in mix-attention, $i \in [1,N]$ indexes the target tokens while $j \in [N + 1,N + M]$ indexes the source tokens, $M$ and $N$ are the lengths of the source and target sequence respectively, and $\sum_{j = 1}^{N + M} A_{i,j} = 1$. $\alpha_i$ represents the ratio of attention density on the target context when predicting target token $i$.

Second, we average the attention density ratio $\alpha_{i}$ over all the predicted tokens (under mask probability $p$) in a sentence and get

$$
\frac {1}{| \mathcal {M} ^ {p} |} \sum_ {i \in \mathcal {M} ^ {p}} \alpha_ {i}, \tag {2}
$$

where $\mathcal{M}^p$ represents the set of masked target tokens under mask probability $p$ and $|\mathcal{M}^p|$ denotes the number of tokens in the set.

Third, for a given $p$, we calculate this quantity over all test data and average it to get the final attention density ratio

$$
R (p) = \operatorname {A v g} \left(\frac {1}{\left| \mathcal {M} ^ {p} \right|} \sum_ {i \in \mathcal {M} ^ {p}} \alpha_ {i}\right). \tag {3}
$$

We vary $p$ and calculate $R(p)$ to measure the density ratio under different conditions, where a small $p$ means more target context can be leveraged and a large $p$ means less.
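Equations (1)-(3) translate directly into code. The sketch below (our own, hypothetical naming; `attn` is an N×(N+M) row-stochastic matrix with target columns first, matching the indexing above) makes the three averaging steps explicit:

```python
def alpha(attn_row, n_target, n_source):
    """Eq. (1): mean attention density on the target context over the sum
    of mean densities on target and source context, for one mix-attention
    row (target columns first, then source columns)."""
    tgt = sum(attn_row[:n_target]) / n_target
    src = sum(attn_row[n_target:n_target + n_source]) / n_source
    return tgt / (tgt + src)

def sentence_ratio(attn, masked, n_target, n_source):
    """Eq. (2): average alpha_i over the masked target positions."""
    return sum(alpha(attn[i], n_target, n_source) for i in masked) / len(masked)

def attention_density_ratio(test_set):
    """Eq. (3): average the per-sentence ratio over the test set for a
    fixed mask probability p."""
    vals = [sentence_ratio(a, m, n, s) for a, m, n, s in test_set]
    return sum(vals) / len(vals)

# Toy sentence with N = 2 target and M = 2 source positions; rows sum to 1.
attn = [[0.3, 0.3, 0.2, 0.2],   # alpha_0 = 0.3 / (0.3 + 0.2) = 0.6
        [0.1, 0.1, 0.4, 0.4]]   # alpha_1 = 0.1 / (0.1 + 0.4) = 0.2
assert abs(attention_density_ratio([(attn, [0, 1], 2, 2)]) - 0.4) < 1e-9
```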
In the extreme cases, $p = 1$ represents NAR generation while $p = 0$ represents identity copy.

Given the proposed attention density ratio $R(p)$ based on CoMMA, we can measure the target token dependency of the NAR model in different tasks,
which can help understand a series of important research questions, as we introduce in the following three sections.

| Task | NMT | ASR | TTS |
| --- | --- | --- | --- |
| AR | Transformer (Vaswani et al., 2017) | Transformer ASR (Karita et al., 2019) | Transformer TTS (Li et al., 2019a) |
| NAR | NAT (Gu et al., 2017) w/ AC | NAR-ASR (Chen et al., 2019) w/ AC | FastSpeech (Ren et al., 2019) |

Table 1: The AR and NAR models we consider in each task. "AC" means the attention constraint described in Section 5.

# 3 Study on the Difficulty of NAR Generation

In this section, we aim to find out why the accuracy gap still exists for the ASR and NMT tasks, while in TTS, NAR models can catch up with the accuracy of AR models. We also analyze the causes of the different difficulties across tasks. We start by evaluating the accuracy gap between AR and NAR models for NMT, ASR and TTS, and then measure the token dependency based on our proposed CoMMA.

# 3.1 The Accuracy Gap

We first train the AR and NAR models in each task and check the accuracy gap between them to measure the difficulty of NAR generation in each task.

Configuration of AR and NAR Models The AR and NAR models we consider are shown in Table 1: we use Transformer as the AR model and a representative NAR model in each task. For a fair comparison, we make some modifications to the NAR models: 1) For ASR, we first train a Transformer ASR as the teacher model and then constrain the attention distributions of NAR-ASR with the alignments converted from the teacher attention weights, which is introduced and discussed in Section 5. 2) For NMT, we constrain the KL-divergence of the encoder-to-decoder attention distributions between the AR and NAR models following Li et al. (2019b). We list the hyperparameters of the AR and NAR models for each task in Section A.

Datasets and Evaluations for NMT, ASR and TTS We conduct experiments on the IWSLT 2014 German-English (De-En) translation dataset for NMT, the LibriTTS dataset (Zen et al., 2019) for ASR, and the LJSpeech dataset (Ito, 2017) for TTS. For speech data, we transform the raw audio into mel-spectrograms following Shen et al. (2018) with $50~\mathrm{ms}$ frame size and $12.5~\mathrm{ms}$ hop size. For text data, we tokenize sentences with the Moses tokenizer and then segment them into subword symbols using Byte Pair Encoding (BPE) (Sennrich et al., 2015) for subword-level analysis, and convert the text sequence into a phoneme sequence with grapheme-to-phoneme conversion (Sun et al., 2019) for phoneme-level analysis. We use BPE for NMT and ASR and phonemes for TTS by default unless otherwise stated. We train all models on 2 NVIDIA 2080Ti GPUs using the Adam optimizer with $\beta_{1} = 0.9$, $\beta_{2} = 0.98$, $\varepsilon = 10^{-9}$, following the learning rate schedule in Vaswani et al. (2017).

For ASR, we evaluate word error rate (WER) on the test-clean set of the LibriTTS dataset. For NMT, we evaluate the BLEU score on the IWSLT 2014 De-En test set. For TTS, we randomly split the LJSpeech dataset into 3 sets: 12500 samples for training, 300 for validation and 300 for testing, and then evaluate the mean opinion score (MOS) on the test set to measure audio quality. The output mel-spectrograms of the TTS model are transformed into audio samples using the pretrained WaveGlow vocoder (Prenger et al., 2019). Each audio sample is rated by at least 20 testers, all native English speakers.
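For reference, WER can be computed with a standard Levenshtein distance over words (a generic sketch, independent of whatever evaluation toolkit was actually used):

```python
def word_error_rate(ref, hyp):
    """WER = (substitutions + insertions + deletions) / #reference words,
    computed via Levenshtein distance over whitespace-separated words."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution/match
    return dp[-1][-1] / len(r)

assert abs(word_error_rate("the cat sat", "the cat sat down") - 1 / 3) < 1e-9
```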
| Task | Model | Accuracy |
| --- | --- | --- |
| NMT (BLEU/WER) | Transformer | 33.90/47.18 |
| | NAT | 27.12/54.90 |
| ASR (BLEU/WER) | Transformer ASR | 66.60/20.10 |
| | NAR-ASR | 39.23/36.20 |
| TTS (MOS) | Transformer TTS | 3.82 ± 0.08 |
| | FastSpeech | 3.79 ± 0.12 |
Table 2: The accuracy gap between NAR and AR models.

Results of Accuracy Gap The accuracies of the AR and NAR models in each task are shown in Table 2. It can be seen that the NAR model can match the accuracy of the AR model in TTS, while the gap still exists in ASR and NMT. We calculate both the WER and BLEU metrics in ASR and NMT for better comparison, and find that ASR has a larger gap than NMT. A larger accuracy gap may indicate that NAR generation is more difficult for that task. Next, we try to understand what factors influence the difficulty of different tasks.

# 3.2 The Token Dependency

In the last subsection, we analyzed the difficulty of NAR models from the perspective of the accuracy gap. In this subsection, we look for evidence in the target token dependency, which should be consistent with the accuracy gap as a measure of task difficulty.

Configuration of CoMMA We train CoMMA with the same configuration on NMT, ASR and TTS: the hidden size, the feed-forward hidden size and the number of layers are set to 512, 1024 and 6, respectively. We list the other hyperparameters of CoMMA in Section B. We use the same datasets for each task as described in Section 3.1 to train CoMMA.

Results of Token Dependency We use the attention density ratio calculated from CoMMA (as described in Section 2.2) to measure the target token dependency and show the results in Figure 2. It can be seen that the rank of the attention density ratio $R(p)$ is $\mathrm{ASR} > \mathrm{NMT} > \mathrm{TTS}$ for all $p$. Considering that $R(p)$ measures how much context information from the target side is needed to generate a target token, ASR has more dependency on the target context and less on the source context, while TTS is the opposite, which is consistent with the accuracy gap between AR and NAR models described in Section 3.1.
As we vary $p$ from 0.1 to 0.5, $R(p)$ decreases for all tasks since more tokens on the target side are masked. We also find that $R(p)$ in NMT decreases more quickly than in the other two tasks, which indicates that NMT is good at learning from the source context when less information can be leveraged from the target side, while $R(p)$ in ASR decreases little. This also explains why NAR achieves a smaller gap in NMT than in ASR.

![](images/b6a52dc6204d1fdc59ed45cb8d6dc1593e892004a9c21c1fc03d801a0f125978.jpg)
Figure 2: Attention density ratio $R(p)$ under different $p$ in different tasks for performance gap analysis.

# 4 Study on Knowledge Distillation

In this and the next section, we investigate why some techniques can help NAR generation, from the perspective of target token dependency. We analyze only knowledge distillation and attention alignment, which are widely used with NAR models, but we believe our analysis method can be applied to other NAR techniques, such as iterative refinement (Lee et al., 2018) and fine-tuning from an AR model (Guo et al., 2019b).

Most existing NAR models (Oord et al., 2017; Gu et al., 2017; Wang et al., 2019; Guo et al., 2019a,b; Ren et al., 2019) rely on knowledge distillation, which generates a new target sequence for each source sequence of the original training data with a pre-trained AR model and trains the NAR model on the new pairs for better accuracy. In this section, we first conduct experiments to verify the accuracy improvements of knowledge distillation. Next, based on our proposed CoMMA, we analyze why knowledge distillation helps NAR models.

# 4.1 The Effectiveness of Knowledge Distillation

Knowledge Distillation for NAR Models Given a well-trained AR model $\theta_T$ and a source sequence $x \in \mathcal{X}$ from the original training data, a new target sequence can be generated through

$$
y ^ {\prime} \sim P (y | x; \theta_ {T}). 
\tag {4}
$$

We can use beam search for NMT and ASR and greedy search for TTS to generate $y'$. Given the set of generated sequence pairs $(\mathcal{X},\mathcal{Y}')$, we train the NAR models with the negative log-likelihood loss

$$
\mathcal {L} \left(\left(\mathcal {X}, \mathcal {Y} ^ {\prime}\right); \theta\right) = - \sum_ {\left(x, y ^ {\prime}\right) \in \left(\mathcal {X}, \mathcal {Y} ^ {\prime}\right)} \log P \left(y ^ {\prime} \mid x; \theta\right), \tag {5}
$$

where $\theta$ is the parameter set of the NAR model.
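The distillation pipeline of Equations (4)-(5) can be sketched as follows (a toy illustration; `teacher_decode` is a hypothetical stand-in for beam or greedy search with the AR teacher):

```python
import math

def teacher_decode(x):
    """Stand-in for y' ~ P(y | x; theta_T): the AR teacher decodes a new
    target for each source sequence (beam search for NMT/ASR, greedy
    search for TTS). Here it is a trivial placeholder mapping."""
    return [tok.upper() for tok in x]

def distill(dataset):
    """Replace the original targets with teacher outputs (Eq. 4)."""
    return [(x, teacher_decode(x)) for x, _ in dataset]

def nll_loss(pairs, log_prob):
    """Eq. (5): negative log-likelihood of the distilled pairs under the
    NAR model, represented here by a callable log_prob(y_prime, x)."""
    return -sum(log_prob(y, x) for x, y in pairs)

raw = [(["ein", "test"], ["a", "test"])]
distilled = distill(raw)                  # targets now come from the teacher
# A dummy NAR model that assigns probability 0.5 to every pair:
loss = nll_loss(distilled, lambda y, x: math.log(0.5))
assert abs(loss - math.log(2)) < 1e-12
```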
| Task | Model | Accuracy |
| --- | --- | --- |
| NMT (BLEU) | Transformer | 33.90 |
| | NAT | 27.12 |
| | NAT w/o KD | 21.79 |
| TTS (MOS) | Transformer TTS | 3.82 ± 0.08 |
| | FastSpeech | 3.79 ± 0.12 |
| | FastSpeech w/o KD | 3.58 ± 0.13 |
Table 3: The comparison between NAR models with and without knowledge distillation.

Experimental Results We conducted knowledge distillation only on NMT and TTS, since there is no previous work applying it to NAR ASR yet. We train the NAR models in NMT and TTS with the raw target token sequences instead of the teacher outputs and compare the results with those in Table 2. The accuracy improvements of knowledge distillation are shown in Table 3. It can be seen that knowledge distillation boosts the accuracy of NAR models in NMT and TTS, which is consistent with previous works.

# 4.2 Why Knowledge Distillation Works

Recently, Zhou et al. (2019) found that knowledge distillation can reduce the complexity of data sets and help NAT better model the variations in the output data. This explanation is reasonable on its own, but it is mainly at the data level and is not easy to interpret. In this subsection, we analyze knowledge distillation from a more intuitive perspective, by observing the change of token dependency based on our proposed CoMMA.

We measure the target token dependency by training CoMMA on the original training data and on the new data generated through knowledge distillation, respectively. The results are shown in Figure 3. It can be seen that knowledge distillation decreases the attention density ratio $R(p)$ on both tasks, indicating that it reduces the dependency on the target-side context when predicting a target token, which is helpful for NAT model training.

![](images/2b88e9df5a02df546fef73cb25ad508ab9314b561c3ace2b34612af2549516be.jpg)
Figure 3: Attention density ratio $R(p)$ for NMT and TTS tasks under different $p$ with and without knowledge distillation, where "KD" means knowledge distillation.

# 5 Study on Alignment Constraint

Without the help of target context, NAR models usually suffer from ambiguous attention to the source context, which hurts accuracy. Recently, many works have proposed a variety of approaches to improve the source-target alignment of NAR models, which sharpens the estimation of the soft alignments in the attention mechanism. For example, Li et al. (2019b) constrain the KL-divergence of the encoder-to-decoder attention distributions between the AR and NAR models. Gu et al. (2017) predict the fertility of the source tokens to approximate the alignments between the target and source sequences. Guo et al. (2019a) convert source tokens to target tokens with a phrase table or embedding mapping for alignment. Ren et al. (2019) predict the duration (the number of mel-spectrogram frames) of each phoneme.

In this section, we first study the effectiveness of the alignment constraint for NAR models, and then analyze why it helps by observing the changes of token dependency based on our proposed CoMMA.

# 5.1 The Effectiveness of Alignment Constraint

Alignment Constraint for NAR Models For each task, we choose an attention constraint mechanism commonly used in previous works.

For NMT, we follow Li et al. (2019b) to minimize the KL-divergence between the attention distributions of the AR and NAR models as follows:

$$
\mathcal {L} _ {a c} = \frac {1}{N} \sum_ {i = 1} ^ {N} D _ {K L} \left(A _ {i} ^ {\prime} \mid \mid A _ {i}\right), \tag {6}
$$

where $A_{i}^{\prime}$ and $A_{i}$ denote the source-target attention weights from the AR teacher model and the NAR student model respectively, $A^{\prime}, A \in \mathbb{R}^{N \times M}$, and $N$ and $M$ are the number of tokens in the target and source sequences.

For TTS, we follow Ren et al. (2019) to extract the encoder-to-decoder attention alignments from the well-trained AR teacher model, convert them to a phoneme duration sequence, and then train a duration predictor to expand the hidden states of the source sequence to match the length of the target sequence.
For ASR, since there is no previous work proposing an alignment constraint for NAR models, we design a new alignment constraint method and explore its effectiveness. We first calculate the expected position of the teacher's attention distribution for the $i$-th target token, $E_{i} = \sum_{j=1}^{M} j \cdot A_{i,j}'$, and round it to the nearest integer. Then we constrain the attention weights of the $i$-th target token of the NAR model so that it can only attend to source positions between $E_{i-1}$ and $E_{i+1}$. In particular, the first target token can only attend to source positions between 1 and $E_{2}$, while the last target token can only attend to positions between $E_{N-1}$ and $M$. We apply this alignment constraint for ASR only in the training stage.
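The window construction described above can be sketched as follows (a minimal version with a toy teacher attention matrix; the variable names are ours, not the authors'):

```python
def expected_positions(teacher_attn):
    """E_i = round(sum_j j * A'_{i,j}) for each target row i, with source
    positions j 1-indexed as in the text. Note Python's round() breaks
    exact ties to the nearest even integer."""
    return [round(sum((j + 1) * a for j, a in enumerate(row)))
            for row in teacher_attn]

def attention_window(E, M):
    """Allowed source span for each target token i: positions between
    E_{i-1} and E_{i+1}; the first token attends within [1, E_2] and the
    last within [E_{N-1}, M]."""
    N = len(E)
    windows = []
    for i in range(N):
        lo = 1 if i == 0 else E[i - 1]
        hi = M if i == N - 1 else E[i + 1]
        windows.append((lo, hi))
    return windows

# Toy teacher attention: 3 target tokens over 4 source positions.
A_t = [[0.7, 0.2, 0.1, 0.0],
       [0.1, 0.7, 0.1, 0.1],
       [0.0, 0.1, 0.2, 0.7]]
E = expected_positions(A_t)
assert E == [1, 2, 4]
assert attention_window(E, M=4) == [(1, 2), (1, 4), (2, 4)]
```

Outside these windows the NAR model's attention weights would be masked out during training; at inference time no constraint is applied.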
| Task | Model | Accuracy |
| --- | --- | --- |
| NMT (BLEU) | Transformer | 33.90 |
| | NAT | 27.12 |
| | NAT w/o AC | 25.03 |
| ASR (WER) | Transformer ASR | 20.1 |
| | NAR-ASR | 33.1 |
| | NAR-ASR w/o AC | 39.23 |
| TTS (MOS) | Transformer TTS | 3.82 ± 0.08 |
| | FastSpeech | 3.79 ± 0.12 |
| | FastSpeech w/o AC | 1.97 ± 0.16 |
Table 4: The comparison between NAR models with and without alignment constraint.

Experimental Results We follow the model configurations and datasets described in Section 3.1 and explore the accuracy improvements when adding an attention constraint to NAR models. The results are shown in Table 4. It can be seen that the attention constraint not only improves the performance of NMT and TTS, as previous works (Li et al., 2019b; Ren et al., 2019) demonstrated, but also helps the NAR-ASR model achieve better scores.

# 5.2 Why Alignment Constraint Works

We further analyze how the alignment constraint helps NAR models by measuring the changes of token dependency when adding the alignment constraint to CoMMA.

![](images/20316043b6b7f8b6b2cd16e45ef007483937ef401dd1071cecc3f00ccbd8caa2.jpg)
Figure 4: Attention density ratio $R(p)$ for NMT, ASR and TTS tasks under different $p$ with and without alignment constraint (AC).

For simplicity, we use the method described in Equation 6 to help the training of CoMMA, where the teacher model is the AR model and the student model is CoMMA. We minimize the KL-divergence between the per-head encoder-to-decoder attention distributions of the AR model and CoMMA. First, we normalize the encoder-to-decoder attention weights in each head of mix-attention to convert each row of the attention weights into a distribution:

$$
\hat {A} _ {i, j} = \frac {A _ {i , N + j}}{\sum_ {k = 1} ^ {M} A _ {i , N + k}} \quad \text {for each } i \in [1, N], j \in [1, M], \tag {7}
$$

where $A \in \mathbb{R}^{N\times (N + M)}$ is the weight matrix of mix-attention described in Section 2.2, $\hat{A} \in \mathbb{R}^{N\times M}$ is the normalized encoder-to-decoder attention matrix, and $M$ and $N$ are the lengths of the source and target sequences.
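The normalization in Equation (7) simply renormalizes the source columns of each mix-attention row; a minimal sketch (our own naming):

```python
def normalize_enc_dec(mix_attn, n_target, n_source):
    """Eq. (7): take the source columns (the last M of the N + M columns)
    of each mix-attention row and renormalize them to sum to 1, yielding
    an N x M encoder-to-decoder attention distribution per target row."""
    out = []
    for row in mix_attn:
        src = row[n_target:n_target + n_source]
        z = sum(src)
        out.append([a / z for a in src])
    return out

# Two target rows over columns [tgt0, tgt1, src0, src1]; rows sum to 1.
A = [[0.2, 0.2, 0.4, 0.2],
     [0.1, 0.3, 0.3, 0.3]]
A_hat = normalize_enc_dec(A, n_target=2, n_source=2)
assert all(abs(sum(r) - 1.0) < 1e-9 for r in A_hat)
```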
Then, we compute the KL-divergence loss for each head as follows:

$$
\mathcal {L} _ {a c} = \frac {1}{N} \sum_ {i = 1} ^ {N} D _ {K L} \left(A _ {i} ^ {\prime} \mid \mid \hat {A} _ {i}\right), \tag {8}
$$

where $A^{\prime} \in \mathbb{R}^{N\times M}$ is the encoder-to-decoder attention of the AR teacher model. We average $\mathcal{L}_{ac}$ over all heads and layers to get the final attention constraint loss for CoMMA.

We measure the token dependency by calculating the attention density ratio $R(p)$ based on CoMMA, and show the results in Figure 4. It can be seen that the alignment constraint helps reduce the ratio $R(p)$ in each task and thus reduces the dependency on the target context when predicting target tokens. Meanwhile, the alignment constraint helps the model extract more information from the source context, which aids the learning of NAR models.

Another interesting finding is that the NAR model in TTS benefits most from the attention constraint, as shown in Table 4, while TTS also has the smallest attention density ratio, as shown in Figure 4. These observations suggest that NAR models with small target token dependency benefit the most from an alignment constraint.

# 6 Related Works

Several works try to analyze and understand NAR models on different tasks. We discuss these analyses from two aspects: knowledge distillation and source-target alignment constraint.

Knowledge Distillation Knowledge distillation has long been used to compress model size (Hinton et al., 2015; Furlanello et al., 2018; Yang et al., 2018; Anil et al., 2018; Li et al., 2017) or to transfer the knowledge of a teacher model to a student model (Tan et al., 2019; Liu et al., 2019a,b), and was soon applied to NAR models (Gu et al., 2017; Oord et al., 2017; Guo et al., 2019a; Wang et al., 2019; Li et al., 2019b; Guo et al., 2019b; Ren et al., 2019) to boost the accuracy.
Some works focus on studying why knowledge distillation works: Phuong and Lampert (2019) provide insights into the mechanisms of knowledge distillation by studying the special case of linear and deep linear classifiers, finding that data geometry, optimization bias and strong monotonicity determine the success of distillation; Yuan et al. (2019) argue that the success of KD is also due to the regularization effect of soft targets, which might be as important as the similarity information between categories.

However, few works have studied why knowledge distillation benefits NAR training. Recently, Zhou et al. (2019) investigated why knowledge distillation is important for the training of NAR models in NMT and found that knowledge distillation can reduce the complexity of data sets and help the NAR model learn the variations in the output data.

Li et al. (2019b) explore the causes of the poor performance of NAR models by observing the attention distributions and hidden states of the NAR model. Lee et al. (2018) present experiments and analyses demonstrating the necessity of multiple generation iterations for NAT. They also investigate the effectiveness of knowledge distillation in different tasks and hypothesize that the teacher model essentially cleans the training data, so that the distilled NAR model substantially outperforms a NAR model trained on raw data.

Attention Alignment Constraint Previous works pointed out that adding additional alignment knowledge can improve the estimation of the soft alignments in attention mechanism models. For example, Chen et al. (2016) use the Viterbi alignments of IBM model 4 as additional knowledge during NMT training by calculating the divergence between the attention weights and the statistical alignment information.

Compared with AR models, the attention distributions of NAR models are more ambiguous, which leads to their poorer performance.
Recent works employ an attention alignment constraint between a well-trained AR model and the NAR model to train a better NAR model. Li et al. (2019b) leverage intermediate hidden information from a well-trained AR-NMT teacher model to improve the NAR-NMT model by minimizing the KL-divergence between the per-head encoder-to-decoder attention of the teacher and the student. Ren et al. (2019) choose an encoder-to-decoder attention head from the AR-TTS teacher as the attention alignment to improve the performance of the NAR model in TTS.

# 7 Conclusion

In this paper, we conducted a comprehensive study on NAR models in the NMT, ASR and TTS tasks to analyze several research questions, including the difficulty of NAR generation and why knowledge distillation and alignment constraint can help NAR models. We designed the novel CoMMA and a metric called the attention density ratio to measure the dependency on the target context when predicting a target token, which allows us to analyze these questions in a unified way. Through a series of empirical studies, we demonstrated that the difficulty of NAR generation correlates with the target token dependency, and that knowledge distillation as well as alignment constraint reduces the dependency on target tokens and encourages the model to rely more on source context for target token prediction, which improves the accuracy of NAR models. We believe our analyses can shed light on the understanding and further improvement of NAR models.

# Acknowledgments

This work was supported in part by the National Key R&D Program of China (Grant No. 2018AAA0100603), Zhejiang Natural Science Foundation (LR19F020006), and the National Natural Science Foundation of China (Grant Nos. 61836002, U1611461 and 61751209). This work was also partially funded by Microsoft Research Asia. We thank Tao Qin for his valuable suggestions, comments and guidance on this paper.
# References

Rohan Anil, Gabriel Pereyra, Alexandre Passos, Robert Ormandi, George E Dahl, and Geoffrey E Hinton. 2018. Large scale distributed neural network training through online distillation. arXiv preprint arXiv:1804.03235.
Nanxin Chen, Shinji Watanabe, Jesús Villalba, and Najim Dehak. 2019. Non-autoregressive transformer automatic speech recognition. arXiv preprint arXiv:1911.04908.
Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. CoRR, abs/1607.01628.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Tommaso Furlanello, Zachary C Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. 2018. Born again neural networks. arXiv preprint arXiv:1805.04770.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor OK Li, and Richard Socher. 2017. Non-autoregressive neural machine translation. arXiv preprint arXiv:1711.02281.
Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, and Tie-Yan Liu. 2019a. Non-autoregressive neural machine translation with enhanced decoder input. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3723-3730.
Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2019b. Fine-tuning by curriculum learning for non-autoregressive neural machine translation. arXiv preprint arXiv:1911.08717.
Tianyu He, Xu Tan, Yingce Xia, Di He, Tao Qin, Zhibo Chen, and Tie-Yan Liu. 2018. Layer-wise coordination between encoder and decoder for neural machine translation. In Advances in Neural Information Processing Systems, pages 7944-7954.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Keith Ito. 2017. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/.
+Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al. 2019. A comparative study on transformer vs rnn in speech applications. arXiv preprint arXiv:1909.06317. +Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. arXiv preprint arXiv:1802.06901. +Naihan Li, Shujie Liu, Yanqing Liu, Sheng Zhao, Ming Liu, and M Zhou. 2019a. Neural speech synthesis with transformer network. AAAI. +Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. 2017. Learning from noisy labels with distillation. In ICCV, pages 1928-1936. +Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019b. Hint-based training for non-autoregressive machine translation. arXiv preprint arXiv:1909.06708. +Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482. +Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, and Chengqing Zong. 2019b. End-to-end speech translation with knowledge distillation. arXiv preprint arXiv:1904.08075. +Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2017. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433. +Mary Phuong and Christoph Lampert. 2019. Towards understanding knowledge distillation. In International Conference on Machine Learning, pages 5142-5151. +Ryan Prenger, Rafael Valle, and Bryan Catanzaro. 2019. Waveglow: A flow-based generative network for speech synthesis. In ICASSP 2019-2019 IEEE International Conference on Acoustics, + +Speech and Signal Processing (ICASSP), pages 3617-3621. IEEE. 
+Yi Ren, Yangjun Ruan, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu. 2019. Fastspeech: Fast, robust and controllable text to speech. arXiv preprint arXiv:1905.09263. +Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909. +Jonathan Shen, Ruoming Pang, Ron J Weiss, Mike Schuster, Navdeep Jaitly, Zongheng Yang, Zhifeng Chen, Yu Zhang, Yuxuan Wang, RJ Skerry-Ryan, et al. 2018. Natural tts synthesis by conditioning wavenet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783. IEEE. +Hao Sun, Xu Tan, Jun-Wei Gan, Hongzhi Liu, Sheng Zhao, Tao Qin, and Tie-Yan Liu. 2019. Token-level ensemble distillation for grapheme-to-phoneme conversion. In INTERSPEECH. +Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2019. Multilingual neural machine translation with knowledge distillation. arXiv preprint arXiv:1902.10461. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. +Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In AAAI. +Chenglin Yang, Lingxi Xie, Siyuan Qiao, and Alan Yuille. 2018. Knowledge distillation in generations: More tolerant teachers educate better students. arXiv preprint arXiv:1805.05551. +Li Yuan, Francis EH Tay, Guilin Li, Tao Wang, and Jiashi Feng. 2019. Revisit knowledge distillation: a teacher-free framework. arXiv preprint arXiv:1909.11723. +Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J Weiss, Ye Jia, Zhifeng Chen, and Yonghui Wu. 2019. Libritts: A corpus derived from librispeech for text-to-speech. arXiv preprint arXiv:1904.02882. +Chunting Zhou, Graham Neubig, and Jiatao Gu.
2019. Understanding knowledge distillation in non-autoregressive machine translation. arXiv preprint arXiv:1911.02727. + +# A Model Settings of NAR and AR + +We show the model settings of NAR and AR in Table 5. The hyperparameters in the pre-net follow the methods for each task listed in Table 1 in the main part of the paper.
| Transformer Hyperparameter | NMT / NAT | ASR / NAR-ASR | TTS / FastSpeech |
| --- | --- | --- | --- |
| Embedding Dimension | 512 | 512 | 512 |
| Encoder Layers | 6 | 6 | 6 |
| Encoder Hidden | 512 | 512 | 512 |
| Encoder Filter Size | 1024 | 1024 | 1024 |
| Encoder Heads | 4 | 4 | 4 |
| Decoder Layers | 6 | 6 | 6 |
| Decoder Hidden Size | 512 | 512 | 512 |
| Decoder Filter Size | 1024 | 1024 | 1024 |
| Decoder Heads | 4 | 4 | 4 |
| Dropout | 0.2 | 0.1 | 0.2 |
| Batch Size | 64 | 32 | 32 |
| Base Learning Rate | 1e-3 | 1e-3 | 1e-3 |
+ +Table 5: Hyperparameters of transformer-based AR and NAR models. + +# B Model Settings of CoMMA + +We show the model settings of CoMMA in Table 6.
| Hyperparameter | Value |
| --- | --- |
| Embedding Dimension | 512 |
| Encoder Layers | 6 |
| Encoder Hidden | 512 |
| Encoder Filter Size | 1024 |
| Encoder Heads | 4 |
| Decoder Layers | 6 |
| Decoder Hidden Size | 512 |
| Decoder Filter Size | 1024 |
| Decoder Heads | 4 |
| Dropout | 0.1 |
| Batch Size | 64 |
| Base Learning Rate | 1e-3 |
+ +Table 6: Hyperparameters of CoMMA. \ No newline at end of file diff --git a/astudyofnonautoregressivemodelforsequencegeneration/images.zip b/astudyofnonautoregressivemodelforsequencegeneration/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e7e7d62e63d7987846e259c6a34c457175567a0b --- /dev/null +++ b/astudyofnonautoregressivemodelforsequencegeneration/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:182ff3acd2ccdfe8a93dead660d79157c42ddb31b675ccc365c7015628a47ab1 +size 378264 diff --git a/astudyofnonautoregressivemodelforsequencegeneration/layout.json b/astudyofnonautoregressivemodelforsequencegeneration/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..26f223faa994149dec6a1098dde81002358d8165 --- /dev/null +++ b/astudyofnonautoregressivemodelforsequencegeneration/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:83253e51a6c547d78e782467009f6b05a86885ba7289fe20f70071e77335935f +size 363001 diff --git a/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_content_list.json b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..96558683229b9cc73fa18458761eae41c9579ea9 --- /dev/null +++ b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8b3a4cb37a828025c65dfb34b5c52f6aebe0b587515a6c251236edc270f3692c +size 136990 diff --git a/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_model.json b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_model.json new file mode 100644 index 
0000000000000000000000000000000000000000..4adbf0463e88626690ec3ad1e5eaf596b3461a42 --- /dev/null +++ b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1d051625783fc21852fd5a0ae3b4b3ae909530eec98dca2403679a44f9167877 +size 182846 diff --git a/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_origin.pdf b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..64d8c38b24eb7232b00ee90291ebd44dfd397a7e --- /dev/null +++ b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/e00d7c17-3b64-467b-8a41-9bdcf34723ab_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4638e57f9f11da27f157852ffdc7e94b5db7051011f30956f79e85c591cd8825 +size 572610 diff --git a/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/full.md b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/full.md new file mode 100644 index 0000000000000000000000000000000000000000..1dd98a943c51ee743a12872d42f0cbbdbf7588cd --- /dev/null +++ b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/full.md @@ -0,0 +1,658 @@ +# A Systematic Assessment of Syntactic Generalization in Neural Language Models + +Jennifer Hu $^{1}$ , Jon Gauthier $^{1}$ , Peng Qian $^{1}$ , Ethan Wilcox $^{2}$ , and Roger P. 
Levy $^{1}$ \ + $^{1}$ Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology\ + $^{2}$ Department of Linguistics, Harvard University\ +{jennhu, pqian, rplevy}@mit.edu\ +jon@gauthiers.net, wilcoxeg@g.harvard.edu + +# Abstract + +While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture about the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 English-language syntactic test suites. We find substantial differences in syntactic generalization performance by model architecture, with sequential models underperforming other architectures. Factorially manipulating model architecture and training dataset size (1M-40M words), we find that variability in syntactic generalization performance is substantially greater by architecture than by dataset size for the corpora tested in our experiments. Our results also reveal a dissociation between perplexity and syntactic generalization performance. + +# 1 Introduction + +A growing body of work advocates that assessment of neural language models should include both information-theoretic metrics, such as perplexity, as well as targeted linguistic evaluation. Benchmarks such as GLUE (Wang et al., 2019a,b) have demonstrated that neural language models trained on naturalistic corpora for next-word prediction learn representations that can yield remarkable performance on many semantic tasks. 
Targeted syntactic evaluations have shown that these models also implicitly capture many syntactic generalizations, ranging from subject-verb agreement to long-distance filler-gap dependencies (Linzen et al., 2016; Marvin and Linzen, 2018; Futrell et al., 2018; Wilcox et al., 2019b). This paper aims to bring targeted evaluations of syntactic performance to scale, complementing similar developments in semantic evaluation (McCoy et al., 2019). Materials and code can be found at https://github.com/cpllab/syntactic-generalization. + +Because the most widespread currency of evaluation for language models is perplexity—how well, on average, a model predicts a word in its context—a primary focus of this paper is the relationship between a model's perplexity and its performance on targeted syntactic evaluations. As perplexity improves, can we expect more human-like syntactic generalization? How do training dataset size and model architecture jointly affect syntactic generalization? And what picture of models' syntactic generalization emerges when evaluation is brought to scale, across dozens of controlled syntactic tests? + +In this paper we offer initial answers to these questions, systematically assessing the syntactic generalization abilities of neural language models on 34 targeted test suites (33 adapted from previously published work, and 1 novel) covering a wide range of syntactic phenomena. Test suites are written using a standard format that allows for flexible predictions which more closely resemble those used in psycholinguistic studies, specifically allowing for predictions about interactions among multiple testing conditions. Performance on each test suite is reported as a Syntactic Generalization (SG) score. We group test suites into six syntactic circuits based on the linguistic representations needed to achieve high performance on each suite.
+ +We train four classes of neural models and one baseline $n$ -gram model on four datasets derived from a newswire corpus, consisting of 1, 5, 14, and 42 million tokens. While previous work has compared model architectures for a fixed dataset size (e.g. Wilcox et al., 2019b) and network sizes for a fixed architecture (e.g. van Schijndel et al., 2019), our controlled regime allows us to make an apples-to-apples comparison across model architectures on a range of sizes. In addition, we evaluate several off-the-shelf models which were trained on datasets ranging up to 2 billion tokens. + +Our results address the three questions posed above: First, for the range of model architectures and dataset sizes tested, we find a substantial dissociation between perplexity and SG score. Second, we find a larger effect of model inductive bias than training data size on SG score, a result that accords with van Schijndel et al. (2019). Models afforded explicit structural supervision during training outperform other models: One structurally supervised model is able to achieve the same SG scores as a purely sequence-based model trained on $\sim 100$ times the number of tokens. Furthermore, several Transformer models achieve the same SG score as a Transformer trained on $\sim 200$ times the amount of data. Third, we find that architectures have different relative advantages across types of syntactic tests, suggesting that the tested syntactic phenomena tap into different underlying processing capacities in the models. + +# 2 Background + +# 2.1 Perplexity + +Standard language models are trained to predict the next token given a context of previous tokens.
Language models are typically assessed by their perplexity, the inverse geometric mean of the joint probability of words $w_{1}, \ldots, w_{N}$ in a held-out test corpus $C$ : + +$$ +\operatorname{PPL}(C) = p(w_{1}, w_{2}, \ldots, w_{N})^{-\frac{1}{N}} \tag{1} +$$ + +Models with improved perplexity have also been shown to better match various human behavioral measures, such as gaze duration during reading (Frank and Bod, 2011; Fossum and Levy, 2012; Goodkind and Bicknell, 2018; Wilcox et al., 2020). However, a broad-coverage metric such as perplexity may not be ideal for assessing human-like syntactic knowledge for a variety of reasons. In principle, a sentence can appear with vanishingly low probability but still be grammatically well-formed, such as *Colorless green ideas sleep furiously* (Chomsky, 1957). While perplexity remains an integral part of language model evaluation, fine-grained linguistic assessment can provide both more challenging and more interpretable tests to evaluate neural models. + +# 2.2 Targeted tests for syntactic generalization + +Alternatively, a language model can be evaluated on its ability to make human-like generalizations for specific syntactic phenomena (Linzen et al., 2016; Lau et al., 2017; Gulordava et al., 2018). The targeted syntactic evaluation paradigm (Marvin and Linzen, 2018; Futrell et al., 2019) incorporates methods from psycholinguistic experiments, designing sentences which hold most lexical and syntactic features of each sentence constant while minimally varying features that determine grammaticality or surprise characteristics of the sentence.
For example, given the two strings *The keys to the cabinet are on the table* and *The keys to the cabinet is on the table*, a model that has learned the proper subject-verb number agreement rules for English should assign a higher probability to the grammatical plural verb in the first sentence than to the ungrammatical singular verb in the second (Linzen et al., 2016). + +Although some targeted syntactic evaluations, such as the example discussed above, involve simple comparisons of conditional probabilities of a word in its context, other evaluations are more complex. We can demonstrate this with an evaluation of models' "garden-pathing" behavior (Futrell et al., 2019). For example, the sentence *The child kicked in the chaos found her way back home* yields processing disruption for humans at the word *found*. This is because, up to right before that word, the part-of-speech ambiguous *kicked* is preferentially interpreted as the main verb of the sentence, whereas it turns out to be a passive participle in a reduced relative clause modifying *child*. This garden-path disambiguation effect is ameliorated by replacing *kicked* with *forgotten*, which is not part-of-speech ambiguous (B below; Trueswell et al., 1994) or by using an unreduced relative clause (C below; Ferreira and Clifton, 1986). In probabilistic language models, these garden-path disambiguation effects are well captured by word negative log probabilities, or SURPRISALS (Hale, 2001): $S(w|C) = -\log_2 p(w|C)$ , which are independently well-established to predict human incremental processing difficulty over several orders of magnitude in word probability (Smith and Levy, 2013). A targeted syntactic evaluation for garden-pathing is provided by comparing surprisals at the disambiguating word *found* in the set of four examples below (Futrell et al., 2019): + +(A) The child kicked in the chaos found ... + +(B) The child forgotten in the chaos found ... +(C) The child who was kicked in the chaos found ...
+(D) The child who was forgotten in the chaos found ... + +Successful human-like generalization involves three criteria: (i) found should be less surprising (i.e., more probable) in B than A; (ii) found should be more probable in C than A; (iii) the C-D surprisal difference should be smaller than the A-B surprisal difference—a $2 \times 2$ interaction effect on surprisal—because the syntactic disambiguation effect of not reducing the relative clause was achieved by using a part-of-speech unambiguous verb. + +We will use these controlled tests to help us describe and test for human-like syntactic knowledge in language models. + +# 2.3 Related work + +The testing paradigm presented here differs in several crucial ways from recent, related syntactic assessments and provides complementary insights. Unlike Warstadt et al. (2019a), our approach does not involve fine-tuning, but rather assesses what syntactic knowledge is induced from the language modeling objective alone. The most closely related work is the Benchmark of Linguistic Minimal Pairs (Warstadt et al., 2020), which is a challenge set of automatically-generated sentence pairs also designed to test language models on a large set of syntactic phenomena. Our approach differs in important ways: we compare critical sentence regions instead of full-sentence probabilities, and employ a $2 \times 2$ paradigm with a strict, multi-fold success criterion inspired by psycholinguistics methodology. This allows us to factor out as many confounds as possible, such as the lexical frequency of individual tokens and low-level $n$ -gram statistics. + +# 3 Methods + +We designed a controlled paradigm for systematically testing the relationship between two design choices — model class and dataset size — and two performance metrics — perplexity and syntactic generalization capacity. Section 3.1 describes the test suites collected for our evaluation, and Sections 3.2 and 3.3 describe the datasets and model classes investigated. 
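The surprisal-based success criteria of Section 2.2 are mechanical to check once per-condition surprisals are in hand. A minimal sketch (all surprisal values below are invented for illustration, not measurements from any model):

```python
import math

def surprisal(p):
    """Surprisal in bits: S(w|C) = -log2 p(w|C)."""
    return -math.log2(p)

def passes_garden_path(s_a, s_b, s_c, s_d):
    """Apply the three-part criterion to surprisals at the disambiguating
    word in conditions A-D: (i) B < A, (ii) C < A, and (iii) the C-D
    difference is smaller than the A-B difference (the 2x2 interaction)."""
    return s_b < s_a and s_c < s_a and (s_c - s_d) < (s_a - s_b)

def suite_accuracy(items):
    """Accuracy = proportion of items whose surprisals satisfy the criterion.
    `items` is a list of (s_a, s_b, s_c, s_d) tuples."""
    return sum(passes_garden_path(*item) for item in items) / len(items)

# Two invented items: the first conforms to the prediction; the second
# violates criterion (i), since B is more surprising than A.
items = [(14.0, 9.0, 10.0, 9.5), (12.0, 13.0, 11.0, 10.5)]
print(suite_accuracy(items))  # → 0.5
```

The conjunction in `passes_garden_path` is what makes the criterion strict: an item counts as a success only if all three inequalities hold simultaneously.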
+ +# 3.1 Test suites + +We assemble a large number of test suites inspired by the methodology of experimental sentence-processing and psycholinguistic research. Each test suite contains a number of ITEMS (typically between 20 and 30), and each item appears in several CONDITIONS: across conditions, a given item will differ only according to a controlled manipulation designed to target a particular feature of grammatical knowledge. Each test suite contains at least one PREDICTION, which specifies inequalities between surprisal values at pairs of regions/conditions that should hold if a model has learned the appropriate syntactic generalization. + +We expect language models which have learned the appropriate syntactic generalizations from their input to satisfy these inequalities without further fine-tuning. We compute accuracy on a test suite as the proportion of items for which the model's behavior conforms to the prediction. Most of our test suites involve $2 \times 2$ designs and a success criterion consisting of a conjunction of inequalities across conditions, as in the garden-pathing example described in Section 2.2. Random baseline accuracy varies by test suite and is $\sim 25\%$ overall. Most of these test suites and criteria are designed so that $n$ -gram models cannot perform above chance for $n = 5$ (sometimes greater). + +**Syntactic coverage** In order to assess the coverage of our test suites, we manually inspected the phenomena covered in Carnie (2012), a standard introductory syntax textbook. Of the 47 empirical phenomena reviewed in the summary sections at the end of each chapter, our tests target 16 ($\sim 34\%$).
These are evenly distributed across the whole range of subject matter, with tests targeting phenomena in 11 of the 15 chapters ($\sim 73\%$). + +**Modifiers** Five test suites include paired modifier versions, where extra syntactically irrelevant (but semantically plausible) content, such as a prepositional phrase or relative clause, is inserted before the critical region being measured. We use these paired test suites to evaluate models' stability to intervening content within individual syntactic tests. + +**Circuits** The test suites are divided into 6 syntactic circuits, based on the type of algorithm required to successfully process each construction. We give a brief overview of each circuit below. + +- **Agreement** is a constraint on the feature values of two co-varying tokens. For example, the number feature of a verb must agree with the number feature of its upstream subject. We include 3 Subject-Verb Number Agreement suites from Marvin and Linzen (2018). + +- **Licensing** occurs when a particular token must exist within the scope of an upstream licensor token. Scope is determined by the tree-structural properties of the sentence. Test suites include Negative Polarity Item Licensing (NPI) (4 suites) and Reflexive Pronoun Licensing (6 suites), both from Marvin and Linzen (2018). +- **Garden-Path Effects** are well-studied syntactic phenomena that result from tree-structural ambiguities that give rise to locally-coherent but globally implausible syntactic parses. Garden-path test suites include Main Verb / Reduced Relative Clause (MVRR) (2 suites) and NP/Z Garden-paths (NPZ) (4 suites), both from Futrell et al. (2018). +- **Gross Syntactic Expectation** is a processor's expectation for large syntactic chunks such as verb phrases or sentences; such expectations are often set up by subordinating conjunctions such as *while*, *although* and *despite*. Our tests for gross syntactic expectation include Subordination (4 suites) from Futrell et al. (2018).
+- **Center Embedding** sentences are sentences recursively nested within each other. Subjects and verbs must match in a first-in-last-out order, meaning models must approximate a stack-like data structure in order to successfully process them. Our 2 suites of Center Embedding sentences come from the items presented in Wilcox et al. (2019a). +- **Long-Distance Dependencies** are covariations between two tokens that span long distances in tree depth. Test suites include Filler-Gap Dependencies (FGD) (6 suites) from Wilcox et al. (2018) and Wilcox et al. (2019b), and 2 novel Cleft suites, described in detail below. + +**Novel test suite: Cleft** We introduce one novel test suite that assesses models' ability to process pseudo-cleft constructions, which are used to put a particular syntactic constituent into focus via passive transformation. Consider Example (1):
| BLLIP sizes: | XS | SM | MD | LG |
| --- | --- | --- | --- | --- |
| # sentences | 40K | 200K | 600K | 1.8M |
| # tokens | 1M | 4.8M | 14M | 42M |
| # non-UNK types | 24K | 57K | 100K | 170K |
| # UNK types | 68 | 70 | 71 | 74 |
+ +Table 1: Statistics of training set for each corpus size. + +(1) a. What he did after coming in from the rain was **eat a hot meal**. [DO/VP] + +b. *What he devoured after coming in from the rain was **eat a hot meal**. [LEX/VP] +c. *What he did after coming in from the rain was **a hot meal**. [DO/NP] +d. What he devoured after coming in from the rain was **a hot meal**. [LEX/NP] + +When this constituent is a verb, it must be replaced in the wh-clause that heads the sentence with the DO verb, as in (1a), below. However, when it is a noun, the lexical verb for which it serves as an object must be preserved, as in (1d). If models have properly learned the pseudo-cleft construction, then DO verbs should set up expectations for VPs (the region in bold should have a lower surprisal in (1a) than in (1b)) and lexicalized verbs should set up expectations for NPs (the region in bold should have a lower surprisal in (1d) than in (1c)). + +# 3.2 Model training data + +**Corpora** We train and evaluate models on English newswire corpora of four different sizes, obtained by randomly sampling sections from the Brown Laboratory for Linguistic Information Processing 1987-89 Corpus Release 1 (BLLIP; Charniak et al., 2000). The corpora are sampled such that the training set of each corpus is a proper subset of each larger corpus. We call these four corpora BLLIP-XS (40K sentences, 1M tokens); BLLIP-SM (200K sentences, 5M tokens); BLLIP-MD (600K sentences, 14M tokens); and BLLIP-LG (2M sentences, 42M tokens). Table 1 summarizes statistics of the training set for each corpus. + +To ensure consistency in perplexity evaluation across datasets, we report perplexity scores achieved by the models on a shared held-out test set. We additionally use a shared held-out validation set for tuning and early stopping. + +We use the NLTK implementation of the Penn Treebank tokenizer to process all datasets (Bird and Loper, 2004; Marcus et al., 1993).
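The proper-subset sampling used to build the BLLIP corpora can be sketched as follows. This is a schematic illustration of the nesting property, not the authors' sampling code: shuffling once and taking prefixes guarantees that each smaller corpus is contained in every larger one.

```python
import random

def nested_corpora(sentences, sizes, seed=0):
    """Sample nested training sets: shuffle the pool once, then take
    prefixes of increasing length, so that each smaller corpus is a
    proper subset of every larger one (as with BLLIP-XS, -SM, -MD, -LG)."""
    rng = random.Random(seed)
    pool = list(sentences)
    rng.shuffle(pool)
    return {n: pool[:n] for n in sorted(sizes)}

# Toy example with integer "sentences".
corpora = nested_corpora(range(10000), sizes=[100, 1000, 5000])
assert set(corpora[100]) <= set(corpora[1000]) <= set(corpora[5000])
```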
| Model | # layers | # hidden units | Embedding size |
| --- | --- | --- | --- |
| LSTM | 2 | 256 | 256 |
| ON-LSTM | 3 | 1150 | 400 |
| RNNG | 2 | 256 | 256 |
| GPT-2 | 12 | 768 | 768 |
+ +Table 2: Size of neural models in our controlled experiments. + +
| BLLIP sizes: | XS | SM | MD | LG |
| --- | --- | --- | --- | --- |
| LSTM | 13.4M | 30.5M | 52.2M | 88.1M |
| ON-LSTM | 30.8M | 44.2M | 61.2M | 89.2M |
| RNNG | 22.8M | 48.4M | 81.1M | 134.9M |
| GPT-2 | 124.4M | 124.4M | 124.4M | 124.4M |
+ +Table 3: Parameter counts for neural models in our controlled experiments. + +**Out-of-vocabulary tokens** For each corpus, we designate a token as OOV if the token appears fewer than two times in the training set. Our larger training datasets thus contain larger vocabularies than our smaller training datasets. This allows larger-training-set models to learn richer word-specific information, but may also harm perplexity evaluation because they have vocabulary items that are guaranteed to not appear in the BLLIP-XS test set. This means that perplexity scores across training dataset sizes will not be strictly comparable: if a larger-training-set model does better than a smaller-training-set model, we can be confident that it has meaningfully lower perplexity, but the reverse is not necessarily the case. The exception to the above is GPT-2, which uses sub-words from byte-pair encoding and has no OOVs (see also Footnote 6). + +**Unkification** We follow the convention used by the Berkeley parser (Petrov and Klein, 2007), which maps OOVs to UNK classes which preserve fine-grained information such as orthographic case distinctions and morphological suffixes (e.g. UNK-ed, UNK-ly). Before training, we verified that the UNK classes in the test and validation sets were all present in the training set. + +# 3.3 Model classes + +In order to study the effects of model inductive bias and dataset size, we trained a fleet of models with varying inductive biases on each corpus. Because many of our test suites exploit ambiguities that arise from incremental processing, we restrict evaluation to left-to-right language models; future work could involve evaluation of bidirectional models (Devlin et al., 2018; Yang et al., 2019) on an appropriate subset of our test suites, and/or adaptation of our suites for use with bidirectional models (Goldberg, 2019). Training ran until convergence of perplexity on a held-out validation set. Wherever possible, we trained multiple seeds of each model class and corpus size. We use the model sizes and training hyperparameters reported in the papers introducing each model (Table 2). The full parameter counts and perplexity scores for each model $\times$ corpus combination are given in Tables 3 and 4, respectively. + +
| BLLIP sizes: | XS | SM | MD | LG |
| --- | --- | --- | --- | --- |
| LSTM | 98.19 | 65.52 | 59.05 | 57.09 |
| ON-LSTM | 71.76 | 54.00 | 56.37 | 56.38 |
| RNNG | 122.46 | 86.72 | 71.12 | 69.57 |
| GPT-2 | 529.90 | 183.10 | 37.04 | 32.14 |
| n-gram | 240.21 | 158.60 | 125.58 | 106.09 |
+ +Table 4: Perplexity averages achieved by each controlled model on each corpus. Perplexity scores across training dataset sizes are not always strictly comparable (see Section 3.2). + +**LSTM** Our baseline neural model is a vanilla long short-term memory network (LSTM; Hochreiter and Schmidhuber, 1997) based on the boilerplate PyTorch implementation (Paszke et al., 2017). + +**Ordered-Neurons** We consider the Ordered-Neurons LSTM architecture (ON-LSTM; Shen et al., 2019), which encodes an explicit bias towards modeling hierarchical structure. + +**RNNG** Recurrent neural network grammars (RNNG; Dyer et al., 2016) model the joint probability of a sequence of words and its syntactic structure. RNNG requires labeled trees that contain complete constituency parses, which we produce for BLLIP sentences with an off-the-shelf constituency parser (Kitaev and Klein, 2018). To compute surprisals from RNNG, we use word-synchronous beam search (Stern et al., 2017) to approximate the conditional probability of the current word given the context. + +![](images/daa415e100487f15b549800bece107d9646fb47ba81069c3063f313a713d8d23.jpg) +Figure 1: Average SG score by model class. Asterisks denote off-the-shelf models. Error bars denote bootstrapped $95\%$ confidence intervals of the mean.
+ +**Transformer** Transformer models (Vaswani et al., 2017) have recently gained popularity in language processing tasks. We use GPT-2 (Radford et al., 2019) as a representative Transformer model and train it from scratch on our BLLIP corpora. $^{6}$ + +**$n$-gram** As a baseline, we consider a 5-gram model with modified Kneser-Ney smoothing. + +# 3.4 Off-the-shelf models + +We also test five off-the-shelf models: GRNN, trained on 90M tokens from Wikipedia (Gulordava et al., 2018); JRNN, trained on 800M tokens from the 1 Billion Word Benchmark (Jozefowicz et al., 2016); Transformer-XL, trained on 103M tokens from WikiText-103 (Dai et al., 2019); and the pretrained GPT-2 and GPT-2-XL, trained on 40GB of web text (Radford et al., 2019). These models are orders of magnitude larger than our controlled ones in parameter count and/or training set size. + +# 4 Results + +Figure 1 shows the average accuracy of all models on the complete set of SG test suites. Asterisks denote off-the-shelf models. All neural models achieve an SG score significantly greater than a random baseline (dashed line). However, the range within neural models is notable, with the best-performing model (GPT-2-XL) scoring over twice as high as the worst-performing model (LSTM). Also notable are the controlled GPT-2 and RNNG models, which achieve comparable performance to Transformer-XL and JRNN, despite being trained on significantly smaller data sizes. + +![](images/2d1e6d8e6f5e57e1a23ef862ea49df5131ed799cc86a0af7e45c0ff5d4ef016a.jpg) + +![](images/a679149e5a5fc77f7d41d867cbb826841ff2b2222b5acc83b871da2c3ae14048.jpg) +Figure 2: Relationship between SG score and perplexity on our held-out BLLIP test set for each model. + +We now return to the three major issues presented in Section 1. In 4.1 we present evidence that SG score is dissociated from perplexity. In 4.2 we argue that model architecture accounts for larger gains in SG score than amount of training data.
And in 4.3 we show that this cross-architecture difference is due largely to variance on a handful of key test suites. + +# 4.1 Syntactic generalization and perplexity + +Figure 2 shows the relationship between SG score and perplexity on the BLLIP test set across models and training set sizes. As expected, $n$ -gram models never rise appreciably above chance in SG score. Among neural models, GPT-2 achieves both the worst (BLLIP-XS and BLLIP-SM) and best (BLLIP-MD and BLLIP-LG) performance; the impressive performance of these latter models comes with the caveat that the sub-words come from the pre-trained GPT-2 model, tacitly importing information from a larger training dataset (see further discussion in Section 4.5). For the remaining neural models, there is no simple relationship between perplexity and SG score, especially once training dataset size is controlled for (comparing points in Figure 2 of the same color). For example, there is a remarkable amount of variance in the SG score of models trained on BLLIP-LG not explained by perplexity. This suggests that targeted syntactic evaluation can reveal information that may be orthogonal to perplexity. + +![](images/71e7f957ed72dc20bb6387d3f0b8e3f4d42f2c5cecc6d008bf4fbfcf089ea555.jpg) +Figure 3: Main results of our controlled evaluation of model class and dataset size. SG score varies more by model class (left) than by training dataset size (right). + +![](images/21cbd3380a1b3197136b5be3392485a92bbdbbffad0acb301b7dac04b76d8f3b.jpg) + +# 4.2 Inductive bias and data scale + +In order to decouple the effects of model class and data scale from test suite difficulty, we represent a particular trained model's performance on each test suite as a delta relative to the average performance of all models on this test suite. Unless noted otherwise, the remainder of the figures in this section plot a score delta, aggregating these deltas within model classes or corpus types. 
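The score-delta normalization described above can be sketched as follows (model and suite names are invented for illustration):

```python
from collections import defaultdict

def sg_score_deltas(scores):
    """Convert raw per-suite accuracies into deltas relative to the mean
    accuracy of all models on that suite, decoupling model comparisons
    from per-suite difficulty. `scores` maps (model, suite) -> accuracy."""
    by_suite = defaultdict(list)
    for (_, suite), acc in scores.items():
        by_suite[suite].append(acc)
    suite_mean = {s: sum(a) / len(a) for s, a in by_suite.items()}
    return {(m, s): acc - suite_mean[s] for (m, s), acc in scores.items()}

# Invented accuracies on two suites of different difficulty.
scores = {("lstm", "npz"): 0.25, ("rnng", "npz"): 0.75,
          ("lstm", "fgd"): 0.50, ("rnng", "fgd"): 1.00}
deltas = sg_score_deltas(scores)
print(deltas[("rnng", "npz")])  # → 0.25
```

A model's average delta is then comparable across suites even when the suites differ widely in baseline difficulty.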
Figure 3 tracks the influence of model class and data scale across the model types tested in our experiments, with SG score deltas on the y-axis. The left-hand panel shows the difference in SG score by model class. We find that model class clearly influences SG score: for example, the error bars (bootstrapped $95\%$ confidence intervals of the mean) for RNNG and LSTM do not overlap. The right-hand panel shows the difference in SG score delta by training dataset, and shows a much more minor increase in mean SG score as training data increases.

We tested the influence of these factors quantitatively using a linear mixed-effects regression model, predicting suite-level performance as a function of model architecture and training dataset size (represented as log-number of words). Both features made statistically significant contributions to SG score (both $p < 0.001$). However, predictor ablation indicates that architecture affects regression model fit more (AIC $= -581$ when dataset size is ablated; AIC $= -574$ when architecture is ablated).$^{7}$

Beyond the above analysis, our GPT-2 results offer another striking example of the influence of model architecture relative to data scale. Figure 2 shows that our controlled BLLIP-MD and BLLIP-LG GPT-2 models achieve roughly the same SG score as the pre-trained GPT-2 model, despite being trained on less than $1\%$ of the data used by the pretrained model. This suggests diminishing returns to training data scale for syntactic generalization performance.

# 4.3 Circuit-level effects on SG score

Figure 4 shows the breakdown at the circuit level by model architecture (left) and training dataset size (right). The right panel demonstrates little effect of dataset size on SG score delta within most circuits, except for Agreement, on which the models trained on our smallest dataset fare poorly. In the left panel we find substantial between-circuit differences across architectures.
Linear mixed-effects analyses support this finding: interactions with circuit are significant for both training dataset size and model architecture, but stronger for the latter (AIC $= -654$ and AIC $= -623$ when size and architecture are respectively ablated).

While model inductive biases separate clearly in performance on some circuits, they have little effect on performance on Licensing. This minimally suggests that Licensing taps into a distinct syntactic process within language models. One potential explanation for this is that the interactions tested by Licensing involve tracking two co-varying tokens where the downstream token is optional (see e.g. Hu et al., 2020).

We show the circuit-level breakdown of absolute SG scores for all models (including off-the-shelf) in Figure 5. In general, the models that obtain high SG scores on average (as in Figure 1) also perform well across circuits: pre-trained GPT-2 and GPT-2-XL outperform all other models on each circuit, including Licensing, on which JRNN, GRNN, and most of our custom-trained models perform particularly poorly. Again, we highlight the impressive performance of RNNG: it achieves comparable average performance to GRNN on all circuits, despite being trained on a fraction of the data size.

![](images/33c69fa63bddd7f74656aef797dba426735767692552df87918b194ff0d7b106.jpg)
Figure 4: Controlled evaluation results, split across test suite circuits. Circuit-level differences in SG score vary more by model class (left) than by training dataset size (right).

![](images/75d4d4a6471e978bcbfe148549ac6ce28cb36d634da3d8a7a148dd2c010aa9f2.jpg)

![](images/ac451cf4d056397c0df0466893ce5df3eb2700bac593db5dfa481b3ec1d0cf67.jpg)
Figure 5: Evaluation results on all models, split across test suite circuits.

# 4.4 Stability to modifiers

We separately investigate the degree to which models' syntactic generalizations are robustly stored in memory.
For five test suites (Center Embedding, Cleft, MVRR, NPZ-Ambiguous, NPZ-Object), we designed minimally edited versions where syntactically irrelevant intervening content was inserted before the critical region. An ideal model should robustly represent syntactic features of its input across these modifier insertions.

In Figure 6 we plot models' average scores on these five test suites (dark bars) and their minimally edited versions (light bars), evaluating how robust each model is to intervening content. Among models in our controlled experiments, we see that model class clearly influences the degree to which predictions are affected by intervening content (compare e.g. the stability of RNNG to that of ON-LSTM). Some off-the-shelf models, such as GPT-2-XL, perform near ceiling on the original five test suites and are not affected at all by intervening content.

![](images/3fd2162158c30ef4447b409221406531cb533c6cb22ee51b0ab3487e5472860e.jpg)
Figure 6: SG score on the pairs of test suites with and without intervening modifiers: Center Embedding, Cleft, MVRR, NPZ-Ambiguous, and NPZ-Object.

# 4.5 Effects of model pre-processing

The GPT-2 models trained and evaluated in this paper use a sub-word vocabulary learned by byte-pair encoding (BPE; Sennrich et al., 2016) to represent their inputs, while all other models represent and compute over word-level inputs. This byte-pair encoding was taken from the pre-trained GPT-2 model trained on a much larger corpus. The results reported for these models thus conflate a choice of model class (a deep Transformer architecture) and preprocessing standard (sub-word tokenization computed on a larger corpus). Some preliminary work suggests that sub-word tokenization is indeed responsible for much of the larger GPT-2 models' success: we find that GPT-2 models trained on word-level representations of BLLIP-LG and BLLIP-MD achieve good perplexity measures, but degrade sharply in SG score.
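To make the sub-word preprocessing concrete, the following is a minimal, character-level sketch of the BPE merge-learning loop. It omits the byte-level vocabulary and end-of-word handling of the actual GPT-2 tokenizer; the helper name `learn_bpe` and the toy word frequencies are our own.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges from a word-frequency dict. Each word starts
    as a tuple of single characters; each iteration merges the most
    frequent adjacent symbol pair across the whole corpus."""
    vocab = {tuple(w): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged[tuple(out)] = freq
        vocab = merged
    return merges, vocab

# Toy corpus; with these frequencies the first two merges are
# ('e', 's') and then ('es', 't')
merges, vocab = learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, 4)
```

Because merges are learned from corpus statistics, a vocabulary learned on a large corpus (as in the pre-trained GPT-2 tokenizer) encodes distributional information that a word-level model trained on BLLIP alone never sees.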
+ +Peculiarities of the GPT-2 training regime may be responsible for its particularly bad performance on the smaller corpora. Its sub-word vocabulary was held constant across training corpora, meaning that the model vocabulary size also remained constant across corpora, unlike the other models tested. The poor performance of GPT-2 models trained on smaller corpora may thus be due to overparameterization, and not due to fundamental problems with the model architecture at small data scales. We leave a thorough investigation of the role of sub-word tokenization to future work. + +# 5 Discussion + +This work addresses multiple open questions about syntactic evaluations and their relationship to other language model assessments. Our results dissociate model perplexity and performance in syntactic generalization tests, suggesting that the two metrics capture complementary features of language model knowledge. In a controlled evaluation of different model classes and datasets, we find model architecture plays a more important role than training data scale in yielding correct syntactic generalizations. Our circuit-level analysis reveals consistent failure on Licensing but inconsistent behavior on other circuits, suggesting that different syntactic circuits make use of different underlying processing capacities. In addition to the insight these results provide about neural NLP systems, they also bear on questions central to cognitive science and linguistics, putting lower bounds on what syntactic knowledge can be acquired from string input alone. + +Targeted syntactic evaluation is just one in a series of complementary methods being developed to assess the learning outcomes of neural language processing models. 
Other methods include classifying sentences as grammatical or ungrammatical (Warstadt et al., 2019b), decoding syntactic features from a model's internal state (Belinkov et al., 2017; Giulianelli et al., 2018), or transfer learning to a strictly syntactic task such as parsing or POS tagging (Hewitt and Manning, 2019). As each task brings an explicit set of assumptions, complementary assessment methods can collectively provide greater insight into models' learning outcomes.

Although this paper, together with Warstadt et al. (2020), reports what is to our knowledge the largest-scale targeted syntactic evaluations to date, we emphasize that they are only first steps toward a comprehensive understanding of the syntactic capabilities of contemporary language models. This understanding will be further advanced by new targeted-evaluation test suites covering a still wider variety of syntactic phenomena, additional trained models with more varied hyperparameters and randomization seeds, and new architectural innovations. Humans develop extraordinary grammatical capabilities through exposure to natural linguistic input. It remains to be seen to just what extent contemporary artificial systems do the same.

# Acknowledgments

The authors would like to thank the anonymous reviewers and Samuel R. Bowman for their feedback, Miguel Ballesteros for advice and technical guidance, and Tristan Thrush for technical assistance. J.H. is supported by the NIH under award number T32NS105587 and an NSF Graduate Research Fellowship. J.G. is supported by an Open Philanthropy AI Fellowship. R.P.L. gratefully acknowledges support from the MIT-IBM Watson AI Lab, a Google Faculty Research Award, and a Newton Brain Science Award.

# References

Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, and James Glass. 2017. What do neural machine translation models learn about morphology?
In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861-872.

Tom Bever. 1970. The cognitive basis for linguistic structures. In J.R. Hayes, editor, Cognition and the Development of Language, pages 279-362. New York: John Wiley & Sons.

Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Computational Linguistics.

Kathryn Bock and Carol A. Miller. 1991. Broken agreement. Cognitive Psychology, 23:45-93.

Andrew Carnie. 2012. Syntax: A generative introduction, volume 18. John Wiley & Sons.

Eugene Charniak, Don Blaheta, Niyu Ge, Keith Hall, John Hale, and Mark Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1 LDC2000T43. Linguistic Data Consortium.

Rui P. Chaves. 2020. What don't RNN language models learn about filler-gap dependencies? In Proceedings of the Society for Computation in Linguistics.

Noam Chomsky. 1957. Syntactic structures. Walter de Gruyter.

Shammur Absar Chowdhury and Roberto Zamparelli. 2018. RNN simulations of grammaticality judgments on long-distance dependencies. In Proceedings of the 27th International Conference on Computational Linguistics, pages 133-144, Santa Fe, New Mexico, USA.

Stephen Crain and Janet Dean Fodor. 1985. How can grammars help parsers? In David Dowty, Lauri Kartunnen, and Arnold M. Zwicky, editors, Natural Language Parsing: Psycholinguistic, Computational, and Theoretical Perspectives, pages 94-128. Cambridge: Cambridge University Press.

Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978-2988.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018.
BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota.

Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.

Fernanda Ferreira and Charles Clifton, Jr. 1986. The independence of syntactic processing. Journal of Memory and Language, 25:348-368.

Victoria Fossum and Roger P. Levy. 2012. Sequential vs. hierarchical syntactic models of human incremental sentence processing. In Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics, pages 61-69.

Stefan L. Frank and Rens Bod. 2011. Insensitivity of the human sentence-processing system to hierarchical structure. Psychological Science, 22(6):829-834.

Lyn Frazier and Keith Rayner. 1982. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14:178-210.

Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329.

Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 32-42.

Anastasia Giannakidou. 2011. Negative and positive polarity items: Variation, licensing, and compositionality.
In Semantics: An international handbook of natural language meaning, volume 3, pages 1660-1712. Berlin: Mouton de Gruyter.

Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 240-248.

Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287.

Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics, pages 10-18.

Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195-1205, New Orleans, Louisiana.

John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the second
A closer look at the performance of neural language models on reflexive anaphor licensing. In Proceedings of the Meeting of the Society for Computation in Linguistics. +Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410. +Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. +William Ladusaw. 1979. Polarity Sensitivity as Inherent Scope Relations. Ph.D. thesis, University of Texas at Austin. +Jey Han Lau, Alexander Clark, and Shalom Lappin. 2017. Grammaticality, acceptability, and probability: A probabilistic view of linguistic knowledge. Cognitive Science, 5:1202-1247. +Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. In Transactions of the Association for Computational Linguistics, volume 4, pages 521-535. +Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330. +Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. +Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic + +heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. +George A. Miller and Noam Chomsky. 1963. Finitary models of language users. In R. Duncan Luce, Robert R. 
Bush, and Eugene Galanter, editors, Handbook of Mathematical Psychology, volume II, pages 419-491. New York: John Wiley & Sons, Inc. +Don C. Mitchell. 1987. Lexical guidance in human parsing: Locus and processing characteristics. In Max Coltheart, editor, Attention and Performance XII: The psychology of reading. London: Erlbaum. +Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Neural Information Processing Systems Autodiff Workshop. +Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404-411, Rochester, New York. Association for Computational Linguistics. +Martin J. Pickering and Matthew J. Traxler. 1998. Plausibility and recovery from garden paths: An eyetracking study. Journal of Experimental Psychology: Learning, Memory, & Cognition, 24(4):940-961. +Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report. +Tanya Reinhart. 1981. Definite NP anaphora and c-command domains. Linguistic Inquiry, 12(4):605-635. +John Robert Ross. 1967. Constraints on Variables in Syntax. Ph.D. thesis, MIT. +Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syntax. In Proceedings of the 40th Annual Meeting of the Cognitive Science Society. +Marten van Schijndel, Aaron Mueller, and Tal Linzen. 2019. Quantity doesn't buy quality syntax with neural language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5835-5841. 
+Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational + +Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics. +Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron Courville. 2019. Ordered neurons: Integrating tree structures into recurrent neural networks. In International Conference on Learning Representations. +Nathaniel J. Smith and Roger P. Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128:302-319. +Adrian Staub. 2007. The parser doesn't ignore intransitivity, after all. Journal of Experimental Psychology: Learning, Memory, & Cognition, 33(3):550-569. +Mitchell Stern, Daniel Fried, and Dan Klein. 2017. Effective inference for generative neural parsing. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1695-1700. +Laurie A Stowe. 1986. Parsing wh-constructions: Evidence for on-line gap location. Language & Cognitive Processes, 1(3):227-245. +Patrick Sturt, Martin J. Pickering, and Matthew W. Crocker. 1999. Structural change and reanalysis difficulty in language comprehension. Journal of Memory and Language, 40:136-150. +John C. Trueswell, Michael K. Tanenhaus, and Susan M. Garnsey. 1994. Semantic influences on parsing: Use of thematic role information in syntactic ambiguity resolution. Journal of Memory and Language, 33:285-318. +Shravan Vasishth, Sven Brussow, Richard L Lewis, and Heiner Drenhaus. 2008. Processing polarity: How the ungrammatical intrudes on the grammatical. Cognitive Science, 32(4):685-712. +Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008. 
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, pages 3266-3280. +Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations. +Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R + +Bowman. 2020. BLiMP: A Benchmark of Linguistic Minimal Pairs for English. In Proceedings of the Society for Computation in Linguistics. +Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. 2019a. CoLA: The Corpus of Linguistic Acceptability (with added annotations). +Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019b. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641. +Ethan Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger P. Levy. 2020. Evaluating neural networks as models of human online language processing. In Proceedings of the 42nd Meeting of the Cognitive Science Society (CogSci 2020). To appear. +Ethan Wilcox, Roger P. Levy, and Richard Futrell. 2019a. Hierarchical representation in neural language models: Suppression and recovery of expectations. In Proceedings of the 2019 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. +Ethan Wilcox, Roger P. Levy, Takashi Morita, and Richard Futrell. 2018. What do RNN language models learn about filler-gap dependencies? In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. +Ethan Wilcox, Peng Qian, Richard Futrell, Miguel Ballestros, and Roger P. Levy. 2019b. Structural supervision improves learning of non-local grammatical dependencies. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3302-3312, Minneapolis, Minnesota. +Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems. + +# A Syntactic coverage of test suites + +In order to assess the coverage of our syntactic tests, we manually inspected the "Ideas, Rules and Constraints introduced in this Chapter" section for each chapter in Carnie (2012), a standard introductory syntax textbook. We included entries from these sections which are theory-neutral and refer to observable linguistic data. For example, we do not include affix lowering (Chapter 7) or theta criterion (Chapter 8) because these phenomena presuppose a commitment to one particular syntactic analysis. + +We found that our tests covered 16 of the 47 phenomena presented $(\sim 34\%)$ . Of the 15 chapters surveyed, our tests assessed phenomena in 11 + +
- Chapter 1 (Generative Grammar): Lexical gender; Number; Person; Case
- Chapter 2 (Parts of Speech): Parts of Speech; Plurality; Count vs. Mass Nouns; Argument Structure of Verbs
- Chapter 3 (Constituency, Trees, Rules): Constituency Tests; Hierarchical Structure
- Chapter 4 (Structural Relations): c-command; Government
- Chapter 5 (Binding Theory): R-expression vs. Pronominals; Anaphoric expressions and their antecedents; Co-reference and co-indexation; Binding Principles (A, B, C); Locality Constraints
- Chapter 6 (X-bar Theory): One Replacement; Do-so Replacement
- Chapter 7 (Extending X-bar Theory to Functional Categories): Fundamental Phrase Types of DP/CP/TP; Genitives: of-genitives and 's genitives; Subjects and Predicates; Clausal Embedding; Clausal Tense/Finiteness and its restrictions; Yes/No Questions; Subject-Auxiliary Inversion
- Chapter 8 (Constraining X-bar Theory: The Lexicon): Thematic Relations; Internal vs. External Theta Roles; Expletive Pronouns and Expletive Insertion; Extended Projection Principle
- Chapter 9 (Head-to-Head Movement): V → T Movement; T → C Movement; Do-Support
- Chapter 10 (DP Movement): Passive Constructions; DP-Raising
- Chapter 11 (Wh-Movement): Wh-Movement; Structural Constraints on Wh-Movement (Island Constraints); Wh-in-Situ and Echo Questions
- Chapter 12 (A Unified Theory of Movement): Universal Quantifiers vs. Existential Quantifiers; Quantificational Scope and Quantifier Raising
- Chapter 13 (Extended VPs): Light Verbs; Object Shift (and end weight); Ellipsis; Pseudogapping
- Chapter 14 (Raising, Control, and Empty Categories): Control; Subject-to-Subject and Subject-to-Object Raising (ECM)
- Chapter 15 (Advanced Topics in Binding Theory): Binding Principles A and B
Table 5: Test suite coverage of syntactic phenomena presented in Carnie (2012).

($\sim 73\%$). We did not assess coverage from the last two chapters of the book, which explore alternative syntactic formalisms. The outcome of our manual inspection is given in Table 5.

A $\sqrt{}$ indicates that some aspect of that phenomenon was tested in one or more of our suites. $\sqrt{}$ does not necessarily mean that the test suite was designed explicitly for the purpose of testing that phenomenon, but merely that the phenomenon was implicated in model success. For example, we place a $\sqrt{}$ next to Parts of Speech because differentiation between verbs and nouns is necessary for models to succeed in the Cleft Structure tests.

# B Description of test suites

In this work we have assembled a large number of test suites inspired by the methodology of experimental sentence-processing and psycholinguistic research. Each test suite contains a number of ITEMS, and each item appears in several CONDITIONS: across conditions, a given item will differ only according to a controlled manipulation designed to target a particular feature of grammatical knowledge. For each suite we define a SUCCESS CRITERION, which stipulates inequalities among conditional probabilities of sentence substrings.

In the main paper, a model's accuracy for a test suite is computed as the percentage of the test suite's items for which it satisfies the criterion. In this appendix, we briefly describe each test suite and the criterion used to determine whether a given model succeeds on each item of the test suite.

# B.1 Notation

# B.1.1 Sentence status

Following and building on linguistic traditions, we annotate examples as follows. Examples marked with a * violate a well-established grammatical constraint, and are ungrammatical. Examples marked with ? or ??
are not necessarily ungrammatical, but are marginal: for example, they may require an unusual interpretation of a word in order for the sentence to be grammatical. (Additional ?'s roughly indicate more severe marginality.) Examples marked with ! are not ungrammatical, but induce severe processing difficulty that is measurable in real-time human sentence processing. For all test suites, we include references to established literature on the relevant grammatical and/or sentence-processing phenomena.

# B.1.2 Success criteria

Criteria involve inequalities among conditional probabilities of sentence substrings given the complete sentence context preceding the substring. In describing criteria, we use $P(\cdot)$ for raw probabilities and $S(\cdot)$ for surprisals (negative log probabilities), and leave the conditioning on preceding context implicit. For concision, we use subscripts on $P$ and $S$ to indicate the variant of the sentence within the test suite that we are referring to. For the first described test suite, CENTER EMBEDDING (B.2), we show the criterion in both concise and fully spelled-out forms, to help clarify the conventions we are using in the concise form. All items within a given test suite share the same criterion for success.

We provide chance accuracy on the assumption that the order of probabilities among conditions for a given item is random. In some cases, exactly determining chance accuracy may require further assumptions about the distribution of these probabilities; in such cases we provide an upper bound on chance accuracy.

# B.2 Center embedding

Center embedding, the ability to embed a phrase in the middle of another phrase of the same type, is a hallmark feature of natural language syntax. Center-embedding creates NESTED SYNTACTIC DEPENDENCIES, which could pose a challenge for some language models.
To succeed in generating expectations about how sentences will continue in the context of multiple center embedding, a model must maintain a representation not only of what words appear in the preceding context but also of the order of those words, and must predict that upcoming words occur in the appropriate order. In this test suite we use verb transitivity and subject-verb plausibility to test model capabilities in this respect. For example, A below is a correct center-embedding, but B is not:

(A) The painting $_{\mathrm{N}_1}$ that the artist $_{\mathrm{N}_2}$ painted $_{\mathrm{V}_2}$ deteriorated $_{\mathrm{V}_1}$ . [correct]
(B) ??The painting $_{\mathrm{N}_1}$ that the artist $_{\mathrm{N}_2}$ deteriorated $_{\mathrm{V}_1}$ painted $_{\mathrm{V}_2}$ . [incorrect]

Here, $\mathrm{N}_i$ and $\mathrm{V}_i$ correspond to matched subject-verb pairs.

In the WITH-MODIFIER version of the test suite, we postmodify $\mathrm{N}_2$ with a relative clause to increase the linear distance over which the nested dependencies must be tracked, potentially leading to a harder test suite:

(A) The painting $_{\mathrm{N}_1}$ that the artist $_{\mathrm{N}_2}$ who lived long ago painted $_{\mathrm{V}_2}$ deteriorated $_{\mathrm{V}_1}$ . [correct]
(B) ??The painting $_{\mathrm{N_1}}$ that the artist $_{\mathrm{N_2}}$ who lived long ago deteriorated $_{\mathrm{V_1}}$ painted $_{\mathrm{V_2}}$ .
[incorrect]

Criterion The probability of the verb sequence in the correct variant should be higher than the probability of the verb sequence in the incorrect variant:

$$
P_{\mathrm{A}}(\mathrm{V}_2\ \mathrm{V}_1) > P_{\mathrm{B}}(\mathrm{V}_1\ \mathrm{V}_2)
$$

In full form, this criterion for the example item in the no-modifier version of this test suite would be:

$P(\text{painted deteriorated} \mid \text{The painting that the artist}) > P(\text{deteriorated painted} \mid \text{The painting that the artist})$

Chance performance on these center-embedding test suites would be $50\%$.

References Miller and Chomsky (1963); Wilcox et al. (2019a)

# B.3 Pseudo-clefting

The pseudo-cleft construction involves (i) an extraction of a TARGETED CONSTITUENT from a sentence and (ii) a constituent that provides the semantic contents of the targeted constituent and must match it in syntactic category, where (i) and (ii) are linked by the copula. The pseudo-cleft construction can target both NPs and VPs; in the latter case, the VP of the free relative becomes an inflected form of $do$. This means that a free relative subject plus the copula can set up a requirement for the syntactic category that comes next. If the free relative clause has a $do$ VP without a direct object, then the main-clause postcopular predicate can be a VP (A below). Otherwise, the postcopular predicate must be an NP (C below):

(A) What the worker did was $\overbrace{\text{board the plane}}^{\text{VP}}$
(B) ?What the worker did was the plane.
(C) What the worker repaired was the plane.
(D) \*What the worker repaired was $\overbrace{\text{board the plane}}^{\text{VP}}$
Criterion The postcopular predicate should be more surprising when its syntactic category mismatches the cleft, averaging across VP and NP postcopular predicates:

$$
S_{\mathrm{D}}(\mathrm{VP}) + S_{\mathrm{B}}(\mathrm{NP}) > S_{\mathrm{C}}(\mathrm{NP}) + S_{\mathrm{A}}(\mathrm{VP})
$$

Chance is $50\%$ . A more stringent criterion would be to apply this requirement separately for each of NP and VP postcopular predicates:

$$
S_{\mathrm{D}}(\mathrm{VP}) > S_{\mathrm{A}}(\mathrm{VP}) \wedge S_{\mathrm{B}}(\mathrm{NP}) > S_{\mathrm{C}}(\mathrm{NP})
$$

However, it is often possible to use an NP postcopular predicate with a do cleft through semantic coercion (e.g., in B "did" can be interpreted as "fixed" or "was responsible for"), so we felt that this latter criterion might be too stringent.

References Higgins (1973)

# B.4 Filler-gap dependencies

Consider the following sentence, in which all arguments and adjuncts appear "in situ" (in the syntactic position at which they are normally interpreted semantically):

I know that our uncle grabbed the food in front of the guests at the holiday party.

A FILLER-GAP DEPENDENCY can be created by EXTRACTING any of a number of elements from the subordinate clause, including our uncle (subject extraction), the food (object extraction) or the guests (extraction from a prepositional phrase). These possibilities serve as the basis for several test suites on filler-gap dependencies.

References Ross (1967); Crain and Fodor (1985); Stowe (1986); Wilcox et al. (2018); Chowdhury and Zamparelli (2018); Chaves (2020)

# B.4.1 Subject extractions

(A) I know that our uncle grabbed the food in front of the guests at the holiday party. [THAT, NO GAP]
(B) \*I know who our uncle grabbed the food in front of the guests at the holiday party. [WH, NO GAP]
(C) \*I know that grabbed the food in front of the guests at the holiday party.
[THAT, GAP]

(D) I know who $\overbrace{\text{grabbed}}^{\beta}$ the food in front of the guests at the holiday party. [WH, GAP]

Criterion We require that a model successfully pass a two-part criterion for each item: the wh-filler should make the unextracted subject $\alpha$ more surprising in the NO-GAP conditions and should make the post-gap material $\beta$ less surprising in the GAP conditions:

$$
S_{\mathrm{B}}(\alpha) > S_{\mathrm{A}}(\alpha) \wedge S_{\mathrm{C}}(\beta) > S_{\mathrm{D}}(\beta)
$$

Chance is $25\%$ .

# B.4.2 Object extractions

The logic of this test suite is the same as that for subject extraction above. Note that we use obligatorily transitive embedded verbs, so that omitting a direct object should be highly surprising when there is no filler, as in C.

(A) I know that our uncle grabbed the food in front of the guests at the holiday party. [THAT, NO GAP]
(B) *I know what our uncle grabbed the food in front of the guests at the holiday party. [WH, NO GAP]
(C) ??I know that our uncle grabbed in front of the guests at the holiday party. [THAT, GAP]
(D) I know what our uncle grabbed in front of the guests at the holiday party. [WH, GAP]

Criterion

$$
S_{\mathrm{B}}(\alpha) > S_{\mathrm{A}}(\alpha) \wedge S_{\mathrm{C}}(\beta) > S_{\mathrm{D}}(\beta)
$$

# B.4.3 Extraction from prepositional phrases

The logic of this test suite is the same as that for subject and object extractions above.

(A) I know that our uncle grabbed the food in front of the guests at the holiday party. [THAT, NO GAP]
(B) \*I know who our uncle grabbed the food in front of the guests at the holiday party. [WH, NO GAP]

(C) *I know that our uncle grabbed the food in front of at the holiday party. [THAT, GAP]
(D) I know who our uncle grabbed the food in front of at the holiday party.
[WH, GAP]

Criterion

$$
S_{\mathrm{B}}(\alpha) > S_{\mathrm{A}}(\alpha) \wedge S_{\mathrm{C}}(\beta) > S_{\mathrm{D}}(\beta)
$$

# B.4.4 Tests for unboundedness

Filler-gap dependencies are "unbounded" in the sense that there is no limit to how many clausal levels above the gap the filler can be extracted. This serves as the basis for harder versions of the object-extracted test suites, involving three or four levels of clausal embedding. Example [THAT, NO GAP] sentences are given below:

I know that our mother said her friend remarked that the park attendant reported your friend threw the plastic into the trash can. [3 levels of embedding]

I know that our mother said her friend remarked that the park attendant reported the cop thinks your friend threw the plastic into the trash can. [4 levels of embedding]

These base sentences give rise to 4-condition test suites using the same manipulations as for the basic object-extraction test suite (Section B.4.2), and the criterion for success is the same.

# B.5 Main-verb/reduced-relative garden-path disambiguation

This is one of the best-studied instances of syntactic garden-pathing in the psycholinguistics literature. An example 4-condition item is given below:

(A) !The child kicked in the chaos found her way back home. [REDUCED, AMBIG]
(B) The child who was kicked in the chaos found her way back home. [UNREDUCED, AMBIG]
(C) The child forgotten in the chaos found her way back home. [REDUCED, UNAMBIG]
(D) The child who was forgotten in the chaos $\overbrace{\mathrm{found}}^{\mathrm{V}^{*}}$ her way back home. [UNREDUCED, UNAMBIG]

Criterion Relative to the [REDUCED, AMBIG] condition, not reducing the relative clause should make $\mathrm{V}^*$ less surprising, as should changing the participial verb to one whose form is distinct from the simple past tense.
Additionally, the effect of not reducing the relative clause on $\mathrm{V}^*$ surprisal should be smaller for unambiguous participial verbs than for ambiguous participial verbs:

$$
\begin{array}{l} S_{\mathrm{A}}(\mathrm{V}^{*}) > S_{\mathrm{B}}(\mathrm{V}^{*}) \wedge S_{\mathrm{A}}(\mathrm{V}^{*}) > S_{\mathrm{C}}(\mathrm{V}^{*}) \wedge \\ S_{\mathrm{A}}(\mathrm{V}^{*}) - S_{\mathrm{B}}(\mathrm{V}^{*}) > S_{\mathrm{C}}(\mathrm{V}^{*}) - S_{\mathrm{D}}(\mathrm{V}^{*}) \end{array}
$$

Chance is somewhere below $25\%$ .

References Bever (1970); Ferreira and Clifton (1986); Trueswell et al. (1994); van Schijndel and Linzen (2018); Futrell et al. (2019)

# B.6 Negative Polarity Licensing

The words any and ever, in their most common uses, are "negative polarity items" (NPIs): they can only be used in an appropriate syntactic-semantic environment—to a first approximation, in the scope of negation. For example, the determiner no can license NPIs, but its NP has to structurally command the NPI. Below, A and D are acceptable, because no is the determiner for the subject noun managers. There is no negation in C, so the NPI is unlicensed and the sentence is unacceptable; crucially, however, B is unacceptable despite the presence of no earlier in the sentence, because no is embedded inside a modifier of the main-clause subject and thus does not command the NPI.

(A) No managers that respected the guard have had any $_{\mathrm{NPI}}$ luck. $[+\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(B) \*The managers that respected no guard have had any $_{\mathrm{NPI}}$ luck. $[-\mathrm{NEG}, +\mathrm{DISTRACTOR}]$
(C) \*The managers that respected the guard have had any $_{\mathrm{NPI}}$ luck. $[-\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(D) No managers that respected no guard have had any $_{\mathrm{NPI}}$ luck. $[+\mathrm{NEG}, +\mathrm{DISTRACTOR}]$

In the above test suite, the "distractor" position for no is inside a subject-extracted relative clause modifying the main-clause subject.
We also used a variant test suite in which these relative clauses are object-extracted:

(A) No managers that the guard respected have had any $_{\mathrm{NPI}}$ luck. $[+\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(B) \*The managers that no guard respected have had any $_{\mathrm{NPI}}$ luck. $[-\mathrm{NEG}, +\mathrm{DISTRACTOR}]$
(C) \*The managers that the guard respected have had any $_{\mathrm{NPI}}$ luck. $[-\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(D) No managers that no guard respected have had any $_{\mathrm{NPI}}$ luck. $[+\mathrm{NEG}, +\mathrm{DISTRACTOR}]$

The above two test suites use any as the NPI; we also use test suites with ever as the NPI. Subject-extracted relative clause example:

(A) No managers that respected the guard have ever $_{\mathrm{NPI}}$ gotten old. $[+\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(B) *The managers that respected no guard have ever $_{\mathrm{NPI}}$ gotten old. $[-\mathrm{NEG}, +\mathrm{DISTRACTOR}]$
(C) *The managers that respected the guard have ever $_{\mathrm{NPI}}$ gotten old. $[-\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(D) No managers that respected no guard have ever $_{\mathrm{NPI}}$ gotten old. $[+\mathrm{NEG}, +\mathrm{DISTRACTOR}]$

Object-extracted relative clause example:

(A) No managers that the guard respected have ever $_{\mathrm{NPI}}$ gotten old. $[+\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(B) *The managers that no guard respected have ever $_{\mathrm{NPI}}$ gotten old. $[-\mathrm{NEG}, +\mathrm{DISTRACTOR}]$
(C) *The managers that the guard respected have ever $_{\mathrm{NPI}}$ gotten old. $[-\mathrm{NEG}, -\mathrm{DISTRACTOR}]$
(D) No managers that no guard respected have ever $_{\mathrm{NPI}}$ gotten old. $[+\mathrm{NEG}, +\mathrm{DISTRACTOR}]$

Criterion Changing the main-clause subject's determiner from The to No should increase the probability of the NPI where it appears, regardless of whether there is a distractor no in the subject-modifying relative clause.
Furthermore, when there is exactly one no in the sentence, the NPI should be higher-probability when it is in a licensing position rather than in a distractor position:

$$
\begin{array}{l} P_{\mathrm{A}}(\mathrm{NPI}) > P_{\mathrm{C}}(\mathrm{NPI}) \wedge P_{\mathrm{D}}(\mathrm{NPI}) > P_{\mathrm{B}}(\mathrm{NPI}) \wedge \\ P_{\mathrm{A}}(\mathrm{NPI}) > P_{\mathrm{B}}(\mathrm{NPI}) \end{array}
$$

Chance is $\frac{5}{24}$ (the proportion of orderings of the four probabilities that satisfy all three inequalities).

References Ladusaw (1979); Vasishth et al. (2008); Giannakidou (2011); Marvin and Linzen (2018); Futrell et al. (2018)

# B.7 NP/Z garden-path ambiguity

This is another well-studied syntactic garden-pathing configuration. In A below, the NP the waters introduces a local syntactic ambiguity: it could be (1) the direct object of crossed, in which case the sentence-initial subordinate clause has not yet ended, or (2) the subject of the main clause, in which case crossed is used intransitively and is the last word of the sentence-initial subordinate clause. (This was dubbed "NP/Z" by Sturt et al. (1999) because the subordinate-clause verb might have either an NP object or a Z(ero), i.e. null, object.) The next word, remained, is only compatible with (2); the ruling out of (1) generally yields increased processing difficulty for human comprehenders. Marking the end of the subordinate clause with a comma, as in B, makes the sentence easier at $\mathrm{V}^*$ , as does an obligatorily intransitive subordinate-clause verb, as in C.

(A) !As the ship crossed the waters remained $_{\mathrm{V}^*}$ blue and calm. [TRANS, NO COMMA]
(B) As the ship crossed, the waters remained $_{\mathrm{V}^*}$ blue and calm. [TRANS, COMMA]
(C) As the ship drifted the waters remained $_{\mathrm{V}^*}$ blue and calm. [INTRANS, NO COMMA]
(D) As the ship drifted, the waters remained $_{\mathrm{V}^*}$ blue and calm. [INTRANS, COMMA]

Criterion Similar to the main-verb/reduced relative garden-pathing ambiguity, a model must pass a three-part criterion.
Relative to A, either marking the subordinate-clause end with a comma or using an obligatorily intransitive verb in the subordinate clause should reduce the surprisal of $\mathrm{V}^*$ . Furthermore, the surprisal-reduction effect of the comma should be smaller when the subordinate-clause verb is intransitive than when it is transitive:

$$
\begin{array}{l} S_{\mathrm{A}}(\mathrm{V}^{*}) > S_{\mathrm{B}}(\mathrm{V}^{*}) \wedge S_{\mathrm{A}}(\mathrm{V}^{*}) > S_{\mathrm{C}}(\mathrm{V}^{*}) \wedge \\ S_{\mathrm{A}}(\mathrm{V}^{*}) - S_{\mathrm{B}}(\mathrm{V}^{*}) > S_{\mathrm{C}}(\mathrm{V}^{*}) - S_{\mathrm{D}}(\mathrm{V}^{*}) \end{array}
$$

We also use an NP/Z test suite where the second means of disambiguation is not changing the subordinate-clause verb to an intransitive, but rather giving the transitive subordinate-clause verb an overt direct object. For the above example item, the first two conditions are the same and the other two conditions would be:

(C) As the ship crossed the sea the waters remained blue and calm.
(D) As the ship crossed the sea, the waters remained blue and calm.

The success criterion remains the same.

Finally, we create harder versions of both the above test suites by adding a postmodifier to the main-clause subject (in the above example, the waters becomes the waters of the Atlantic Ocean).

References Frazier and Rayner (1982); Mitchell (1987); Pickering and Traxler (1998); Sturt et al. (1999); Staub (2007)

# B.8 Subject-verb number agreement

This task tests a language model for how well it predicts the number marking on English finite present-tense verbs (whether it should be the third-person singular form, or the non-third-person-singular form, generally referred to as the plural form for simplicity, although technically this is the form for first- and second-person singular as well).
In controlled, targeted versions of this test, multiple NPs precede the verb: the verb's actual subject, as well as a DISTRACTOR NP whose number differs from that of the subject. A successful language model should place higher probability on the verb form matching that of the subject, not the distractor. We have three versions of this test suite: one where the distractor is in a prepositional phrase postmodifier of the subject:

(A) The farmer near the clerks knows $_{\mathrm{V_{sg}}}$ many people.
(B) *The farmer near the clerks know $_{\mathrm{V_{pl}}}$ many people.
(C) The farmers near the clerk know $_{\mathrm{V_{pl}}}$ many people.
(D) *The farmers near the clerk knows $_{\mathrm{V_{sg}}}$ many people.

one in which the distractor is in a subject-extracted relative clause postmodifier of the subject:

(A) The farmer that embarrassed the clerks knows $_{\mathrm{V_{sg}}}$ many people.
(B) *The farmer that embarrassed the clerks know $_{\mathrm{V_{pl}}}$ many people.
(C) The farmers that embarrassed the clerk know $_{\mathrm{V_{pl}}}$ many people.
(D) *The farmers that embarrassed the clerk knows $_{\mathrm{V_{sg}}}$ many people.

and one in which the distractor is in an object-extracted relative clause postmodifier of the subject:

(A) The farmer that the clerks embarrassed knows $_{\mathrm{V_{sg}}}$ many people.
(B) *The farmer that the clerks embarrassed know $_{\mathrm{V_{pl}}}$ many people.
(C) The farmers that the clerk embarrassed know $_{\mathrm{V_{pl}}}$ many people.
(D) *The farmers that the clerk embarrassed knows $_{\mathrm{V_{sg}}}$ many people.

Criterion Following Linzen et al. (2016) and Marvin and Linzen (2018), we require successful discrimination of the preferred upcoming verb form of the given lemma (rather than, for example, successful discrimination of the better context given a particular verb form).
For success we require that a model successfully predicts the preferred verb form for both the singular- and plural-subject versions of an item:

$$
P_{\mathrm{A}}\left(\mathrm{V_{sg}}\right) > P_{\mathrm{B}}\left(\mathrm{V_{pl}}\right) \wedge P_{\mathrm{C}}\left(\mathrm{V_{pl}}\right) > P_{\mathrm{D}}\left(\mathrm{V_{sg}}\right)
$$

Chance performance is thus $25\%$ , though a context-insensitive baseline that places different probabilities on $\mathrm{V_{sg}}$ and $\mathrm{V_{pl}}$ would score $50\%$ .

References Bock and Miller (1991); Linzen et al. (2016); Marvin and Linzen (2018)

# B.9 Reflexive pronoun licensing

The noun phrase with which a reflexive pronoun (herself, himself, themselves) corefers must command it in a sense similar to that relevant for negative-polarity items (Section B.6). In the below example, the reflexive pronoun ending the sentence can only corefer to the subject of the sentence, author, with which it must agree in number: a singular subject requires a singular reflexive $\mathrm{R_{sg}}$ , and a plural subject requires a plural reflexive $\mathrm{R_{pl}}$ .

(A) The author next to the senators hurt herself $_{\mathrm{R_{sg,fem}}}$ .
(B) *The authors next to the senator hurt herself $_{\mathrm{R_{sg,fem}}}$ .
(C) The authors next to the senator hurt themselves $_{\mathrm{R_{pl}}}$ .
(D) *The author next to the senators hurt themselves $_{\mathrm{R_{pl}}}$ .

We generated a pair of test suites—one in which the singular reflexive is herself, and another where the singular reflexive is himself, on the template of the above example, where the distractor NP is in a prepositional-phrase postmodifier of the subject NP.
We also generated a similar pair of test suites where the distractor NP is inside a subject-extracted relative clause modifying the subject:

(A) The author that liked the senators hurt herself $_{\mathrm{R_{sg,fem}}}$ .
(B) *The authors that liked the senator hurt herself $_{\mathrm{R_{sg,fem}}}$ .
(C) The authors that liked the senator hurt themselves $_{\mathrm{R_{pl}}}$ .
(D) *The author that liked the senators hurt themselves $_{\mathrm{R_{pl}}}$ .

and a pair of test suites where the distractor NP is inside an object-extracted relative clause modifying the subject:

(A) The author that the senators liked hurt herself $_{\mathrm{R_{sg,fem}}}$ .
(B) *The authors that the senator liked hurt herself $_{\mathrm{R_{sg,fem}}}$ .
(C) The authors that the senator liked hurt themselves $_{\mathrm{R_{pl}}}$ .
(D) *The author that the senators liked hurt themselves $_{\mathrm{R_{pl}}}$ .

Criterion For each item in each test suite, we require that for both the singular and the plural versions of the reflexive pronoun the model assign higher conditional probability in the correct licensing context than in the incorrect licensing context:

$$
P_{\mathrm{A}}\left(\mathrm{R_{sg}}\right) > P_{\mathrm{B}}\left(\mathrm{R_{sg}}\right) \wedge P_{\mathrm{C}}\left(\mathrm{R_{pl}}\right) > P_{\mathrm{D}}\left(\mathrm{R_{pl}}\right)
$$

Chance is $25\%$ .

References Reinhart (1981); Marvin and Linzen (2018)

# B.10 Subordination

Beginning a sentence with As, When, Before, After, or Because implies that an immediately following clause is not the main clause of the sentence, as would otherwise have been the case, but instead is a SUBORDINATE CLAUSE that must be followed by the main clause. Ending the sentence without a main clause, as in B, is problematic.
Conversely, following an initial clause with a second clause MC (without linking it to the initial clause with and, but, despite, or a similar coordinator or subordinator), as in C below, is unexpected and odd.

(A) The minister praised the building
(B) *After the minister praised the building
(C) ??The minister praised the building, it started to rain.
(D) After the minister praised the building, it started to rain.

In addition to the base test suite exemplified by the item above, we include three versions with longer and more complex initial clauses, which may make the test suite more difficult. In the first of these versions, we postmodify both the subject and object of the initial clauses with prepositional phrases:

the minister praised the building

↓

the minister in the dark suit and white tie praised the new building on the town's main square

In the second of these versions, the postmodifiers are subject-extracted relative clauses:

the minister praised the building

↓

the minister who wore a black suit praised the new building that was built by the square

In the third of these versions, the postmodifiers are object-extracted relative clauses:

the minister praised the building

↓

the minister who the mayor had invited praised the new building that the businessman had built downtown

Criterion Introducing a subordinator at the beginning of the sentence should make an ending without a second clause less probable, and should make a second clause more probable:

$$
P_{\mathrm{A}}(\mathrm{END}) > P_{\mathrm{B}}(\mathrm{END}) \wedge P_{\mathrm{D}}(\mathrm{MC}) > P_{\mathrm{C}}(\mathrm{MC})
$$

References Futrell et al.
(2018) \ No newline at end of file diff --git a/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/images.zip b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..c6ec318aefb9cd3a37253e8c91ceda25e4421fa8 --- /dev/null +++ b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2eda12596afc47c0c29b844e0772b85ee880225d4a9dc8de87c1f43781c883bc +size 541160 diff --git a/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/layout.json b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..12fee4d55e81c75af6369babdbae48e19da48069 --- /dev/null +++ b/asystematicassessmentofsyntacticgeneralizationinneurallanguagemodels/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75ba06987c525c50c5ff7fbd39b646a65660f5a1e9caec7504c40ed15d76536d +size 717847 diff --git a/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_content_list.json b/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..0d56e94214797360721adf14b69239e1c1467a4e --- /dev/null +++ b/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:68365e2d15a7b18c5a821b1e1b936d165fb5e6f908108820ade2990c06e62615 +size 51784 diff --git a/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_model.json b/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_model.json new file mode 100644 index 0000000000000000000000000000000000000000..35317ffe0d55179702bfcffc36ddc4be940594d0 --- /dev/null +++ b/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_model.json 
@@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:fb297699f1dd219e818e76a9683650e038df453d61a96e87e64520c665fdd8ba +size 61187 diff --git a/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_origin.pdf b/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..2d0194f6f85a2155af221af54423626abc3a9cd1 --- /dev/null +++ b/ataleofaprobeandaparser/b5865fca-6a5b-4028-8262-71e562be1b46_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:588aaf2705daff1b1321e22e0b90f458c1560e25ec411f0c5a9d3025f6124cde +size 578435 diff --git a/ataleofaprobeandaparser/full.md b/ataleofaprobeandaparser/full.md new file mode 100644 index 0000000000000000000000000000000000000000..2e028d7c335e56a4fa61d06d02e2f264045b405f --- /dev/null +++ b/ataleofaprobeandaparser/full.md @@ -0,0 +1,209 @@ +# A Tale of a Probe and a Parser + +# Rowan Hall Maudslay $^{\dagger}$ Josef Valvoda $^{\ddagger}$ Tiago Pimentel $^{\ddagger}$ Adina Williams $^{\dagger}$ Ryan Cotterell $^{\ddagger,\dagger}$ + +6University of Cambridge aFacebook AI Research cETH Zürich rh635@cam.ac.uk, jv406@cam.ac.uk, tp472@cam.ac.uk, adinawilliams@fb.com, ryan.cotterell@inf.ethz.ch + +# Abstract + +Measuring what linguistic information is encoded in neural models of language has become popular in NLP. Researchers approach this enterprise by training "probes"—supervised models designed to extract linguistic structure from another model's output. One such probe is the structural probe (Hewitt and Manning, 2019), designed to quantify the extent to which syntactic information is encoded in contextualised word representations. The structural probe has a novel design, unattested in the parsing literature, the precise benefit of which is not immediately obvious. 
To explore whether syntactic probes would do better to make use of existing techniques, we compare the structural probe to a more traditional parser with an identical lightweight parameterisation. The parser outperforms the structural probe on UUAS in seven of nine analysed languages, often by a substantial amount (e.g. by 11.1 points in English). Under a second, less common metric, however, there is the opposite trend—the structural probe outperforms the parser. This begs the question: which metric should we prefer?

# 1 Introduction

Recently, unsupervised sentence encoders such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have become popular within NLP. These pre-trained models boast impressive performance when used in many language-related tasks, but this gain has come at the cost of interpretability. A natural question to ask, then, is whether these models encode the traditional linguistic structures one might expect, such as part-of-speech tags or dependency trees. To this end, researchers have invested in the design of diagnostic tools commonly referred to as probes (Alain and Bengio, 2017; Conneau et al., 2018; Hupkes et al., 2018; Poliak et al., 2018; Marvin and Linzen, 2018; Niven and Kao, 2019). Probes are supervised models designed to extract a target linguistic structure from the output representation learned by another model.

Based on the authors' reading of the probing literature, there is little consensus on where to draw the line between probes and models for performing a target task (e.g. a part-of-speech tagger versus a probe for identifying parts of speech). The main distinction appears to be one of researcher intent: probes are, in essence, a visualisation method (Hupkes et al., 2018). Their goal is not to best the state of the art, but rather to indicate whether certain information is readily available in a model—probes should not "dig" for information, they should just expose what is already present. Indeed, a sufficiently expressive probe with enough training data could learn any task (Hewitt and Liang, 2019), but this tells us nothing about a representation, so it is beside the point. For this reason, probes are made "simple" (Liu et al., 2019), which usually means they are minimally parameterised. $^{1}$

Syntactic probes, then, are designed to measure the extent to which a target model encodes syntax. A popular example is the structural probe (Hewitt and Manning, 2019), used to compare the syntax that is decodable from different contextualised word embeddings. Rather than adopting methodology from the parsing literature, this probe utilises a novel approach for syntax extraction. However, the precise motivation for this novel approach is not immediately clear, since it has nothing to do with model complexity, and appears orthogonal to the goal of a probe. Probes are designed to help researchers understand what information exists in a model, and unfamiliar ways of measuring this information may obscure whether we are actually gaining an insight about the representation we wish to examine, or the tool of measurement itself.

![](images/298d4c3a0d2fa603a49cb1e4b33d6e048bfc09e3e73d2086510d79ee656e7258.jpg)
Figure 1: Example of an undirected dependency tree. We observe that the syntactic distance between displeases and everything is 2 (the red path).

Using the structural probe as a case study, we explore whether there is merit in designing models specifically for the purpose of probing—whether we should distinguish between the fundamental design of probes and models for performing an equivalent task, as opposed to just comparing their simplicity. We pit the structural probe against a simple parser that has the exact same lightweight parameterisation, but instead employs a standard loss function for parsing.
Experimenting on multilingual BERT (Devlin et al., 2019), we find that in seven of nine typologically diverse languages studied (Arabic, Basque, Czech, English, Finnish, Japanese, Korean, Tamil, and Turkish), the parser boosts UUAS dramatically; for example, we observe an 11.1-point improvement in English.

In addition to using UUAS, Hewitt and Manning (2019) also introduce a new metric—correlation of pairwise distance predictions with the gold standard. We find that the structural probe outperforms the more traditional parser substantially in terms of this new metric, but it is unclear why this metric matters more than UUAS. In our discussion, we contend that, unless a convincing argument to the contrary is provided, traditional metrics are preferable. Justifying metric choice is of central importance for probing, lest we muddy the waters with a preponderance of ill-understood metrics.

# 2 Syntactic Probing Using Distance

Here we introduce syntactic distance, which we will later train a probe to approximate.

Syntactic Distance The syntactic distance between two words in a sentence is, informally, the number of steps between them in an undirected parse tree. Let $\mathbf{w} = w_{1}\dots w_{n}$ be a sentence of length $n$ . A parse tree $\mathbf{t}$ belonging to the sentence $\mathbf{w}$ is an undirected spanning tree of $n$ vertices (with a separate root as an $(n + 1)^{\mathrm{th}}$ vertex), each representing a word in the sentence $\mathbf{w}$ . The syntactic distance between two words $w_{i}$ and $w_{j}$ , denoted $\Delta_{\mathbf{t}}(w_i, w_j)$ , is defined as the length of the shortest path from $w_i$ to $w_j$ in the tree $\mathbf{t}$ , where each edge has weight 1. Note that $\Delta_{\mathbf{t}}(\cdot, \cdot)$ is a distance in the technical sense of the word: it is non-negative, symmetric, and satisfies the triangle inequality.
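Concretely, because every edge has weight 1, the full table of syntactic distances for a tree can be recovered with one breadth-first search per vertex. The sketch below is illustrative only (it is not code from the paper), and the three-word chain used as an example is invented:

```python
from collections import deque

def syntactic_distances(edges, n):
    """All-pairs syntactic distances for an undirected tree.

    edges: unordered vertex pairs over 0..n-1.  With unit edge weights,
    a breadth-first search from each source yields every shortest-path length.
    """
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    dist = [[0] * n for _ in range(n)]
    for src in range(n):
        seen = {src}
        queue = deque([(src, 0)])
        while queue:
            v, d = queue.popleft()
            dist[src][v] = d
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append((u, d + 1))
    return dist

# A three-word chain 0 -- 1 -- 2: the two ends are at syntactic distance 2.
dist = syntactic_distances([(0, 1), (1, 2)], 3)
```

The returned matrix is symmetric with zeros on the diagonal, matching the metric properties noted above.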
+ +Tree Extraction Converting from syntactic distance to a syntactic tree representation (or vice versa) is trivial and deterministic: + +Proposition 1. There is a bijection between syntactic distance and undirected spanning trees. + +Proof. Suppose we have the syntactic distances $\Delta_{\mathbf{t}}(w_i, w_j)$ for an unknown, undirected spanning tree $\mathbf{t}$ . We may uniquely recover that tree by constructing a graph with an edge between $w_i$ and $w_j$ iff $\Delta_{\mathbf{t}}(w_i, w_j) = 1$ . (This analysis also holds if we have access to only the ordering of the distances between all $|\mathbf{w}|^2$ pairs of words, rather than the perfect distance calculations—if that were the case, the minimum spanning tree could be computed e.g. with Prim's.) On the other hand, if we have an undirected spanning tree $\mathbf{t}$ and wish to recover the syntactic distances, we only need to compute the shortest path between each pair of words, with e.g. Floyd-Warshall, to yield $\Delta_{\mathbf{t}}(\cdot, \cdot)$ uniquely. + +# 3 Probe, Meet Parser + +In this section, we introduce a popular syntactic probe and a more traditional parser. + +# 3.1 The Structural Probe + +Hewitt and Manning (2019) introduce a novel method for approximating the syntactic distance $\Delta_{\mathbf{t}}(\cdot, \cdot)$ between any two words in a sentence. They christen their method the structural probe, since it is intended to uncover latent syntactic structure in contextual embeddings. To do this, they define a parameterised distance function whose parameters are to be learned from data. For a word $w_i$ , let $\mathbf{h}_i \in \mathbb{R}^d$ denote its contextual embedding, where $d$ is the dimensionality of the embeddings from the model we wish to probe, such as BERT. 
Hewitt and Manning (2019) define the parameterised distance function

$$
d_B\left(w_i, w_j\right) = \sqrt{(\mathbf{h}_i - \mathbf{h}_j)^{\top} B^{\top} B (\mathbf{h}_i - \mathbf{h}_j)} \tag{1}
$$

where $B \in \mathbb{R}^{r \times d}$ is to be learned from data, and $r$ is a user-defined hyperparameter. The matrix $B^{\top} B$ is positive semi-definite and has rank at most $r$ . $^{3}$

The goal of the structural probe, then, is to find $B$ such that the distance function $d_B(\cdot ,\cdot)$ best approximates $\Delta_{\mathbf{t}}(\cdot ,\cdot)$ . If we are to organise our training data into pairs, each consisting of a gold tree $\mathbf{t}$ and its corresponding sentence $\mathbf{w}$ , we can then define the local loss function as

$$
\ell(B, \langle \mathbf{t}, \mathbf{w} \rangle) = \sum_{i=1}^{|\mathbf{w}|} \sum_{j=i+1}^{|\mathbf{w}|} \left| \Delta_{\mathbf{t}}(w_i, w_j) - d_B(w_i, w_j) \right| \tag{2}
$$

which is then averaged over the entire training set $\mathcal{D} = \{\langle \mathbf{t}^{(k)},\mathbf{w}^{(k)}\rangle \}_{k = 1}^{N}$ to create the following global objective

$$
\mathcal{L}(B) = \sum_{k=1}^{N} \frac{1}{|\mathbf{w}^{(k)}|^{2}} \ell\left(B, \langle \mathbf{t}^{(k)}, \mathbf{w}^{(k)} \rangle\right) \tag{3}
$$

Dividing the contribution of each local loss by the square of the length of its sentence (the $|\mathbf{w}^{(k)}|^2$ factor in the denominator) ensures that each sentence makes an equal contribution to the overall objective, to avoid a bias towards the effect of longer sentences. This global loss can be minimised computationally using stochastic gradient descent.
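For intuition, eq. (1) is simply the Euclidean norm of $B(\mathbf{h}_i - \mathbf{h}_j)$, and eq. (2) accumulates absolute error over word pairs with $i < j$. The plain-Python sketch below illustrates these two quantities only; the toy embeddings and matrix are invented, and a real implementation would use a tensor library and fit $B$ by stochastic gradient descent:

```python
import math

def d_B(h_i, h_j, B):
    """Eq. (1): the parameterised distance, i.e. the Euclidean norm of B(h_i - h_j)."""
    diff = [a - b for a, b in zip(h_i, h_j)]
    proj = [sum(row[k] * diff[k] for k in range(len(diff))) for row in B]
    return math.sqrt(sum(x * x for x in proj))

def local_loss(gold_dist, embeddings, B):
    """Eq. (2): summed absolute error between gold syntactic distances and
    predicted distances, over all word pairs i < j."""
    n = len(embeddings)
    return sum(abs(gold_dist[i][j] - d_B(embeddings[i], embeddings[j], B))
               for i in range(n) for j in range(i + 1, n))

# Toy check: with B the identity, d_B reduces to ordinary Euclidean distance.
B = [[1.0, 0.0], [0.0, 1.0]]
h = [[0.0, 0.0], [3.0, 4.0]]
loss = local_loss([[0, 5], [5, 0]], h, B)  # gold value 5 is matched exactly
```

Learning reshapes this geometry through $B$; the identity matrix above is only a convenient special case for checking the arithmetic.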
# 3.2 A Structured Perceptron Parser

Given that probe simplicity seemingly refers to the parameterisation rather than the design of the loss function, we infer that swapping the loss function should not be understood as increasing model complexity. With that in mind, here we describe an alternative to the structural probe which learns parameters for the same function $d_B$: a structured perceptron dependency parser, originally introduced in McDonald et al. (2005).

This parser's loss function works not by predicting every pairwise distance, but instead by predicting the tree based on the current estimate of the distances between each pair of words, then comparing the total weight of that tree to the total weight of the gold tree (under the current distance predictions). The local perceptron loss is defined as

$$
\ell(B, \langle \mathbf{t}, \mathbf{w} \rangle) = \sum_{(i,j) \in \mathbf{t}} d_B(w_i, w_j) - \underbrace{\min_{\mathbf{t}' \in \mathcal{T}(\mathbf{w})} \sum_{(i',j') \in \mathbf{t}'} d_B(w_{i'}, w_{j'})}_{\text{computed with Prim's algorithm}} \tag{4}
$$

When the predicted minimum spanning tree $\mathbf{t}'$ perfectly matches the gold tree $\mathbf{t}$, each edge cancels and this loss equals zero. Otherwise, it is positive, since the sum of the predicted distances for the edges in the gold tree will necessarily exceed the sum for the minimum spanning tree. The local losses are summed into a global objective:

$$
\mathcal{L}(B) = \sum_{k=1}^{N} \ell\left(B, \langle \mathbf{t}^{(k)}, \mathbf{w}^{(k)} \rangle\right) \tag{5}
$$

This quantity can again be minimised with a stochastic gradient method.

Though both the structural probe and the structured perceptron parser may seem equivalent under Prop.
1, there is a subtle but important difference. To minimise the loss in eq. (2), the structural probe needs to encode (in $d_B$) the rank-ordering of the distances between each pair of words within a sentence. This is not necessarily the case for the structured perceptron: it could minimise the loss in eq. (4) by just encoding each pair of words as "near" or "far", and Prim's algorithm will do the rest.

# 4 Experimental Setup

# 4.1 Processing Results

Embeddings and Data We experiment on the contextual embeddings in the final hidden layer of the pre-trained multilingual release of BERT (Devlin et al., 2019), and train the models on the Universal Dependencies (Nivre et al., 2016) treebanks (v2.4). This allows our analysis to be multilingual. More specifically, we consider eight typologically diverse languages (Arabic, Basque, Czech, Finnish, Japanese, Korean, Tamil, and Turkish), plus English.

![](images/9a93b875ddf15f19130a62a62aa6f67cade1b07c6e8127a7d0cdb52f8ace7a1c.jpg)

![](images/ad1c15aadecf246bc43ee18bc5cfd918ee3fc80a0bcc0bbc339b4b0c1804d83a.jpg)
Figure 2: Results for the metrics in Hewitt and Manning (2019): different metrics, opposite trends.

Decoding the Predicted Trees Having trained a model to find a $d_B(\cdot, \cdot)$ that approximates $\Delta_{\mathbf{t}}(\cdot, \cdot)$, it is trivial to decode test sentences into trees (see Prop. 1). For an unseen sentence $\mathbf{w} = w_1 \dots w_n$, we compute the $n \times n$ pairwise distance matrix $D$:

$$
D_{uv} = \begin{cases} d_B(w_u, w_v) & \text{if } v > u \\ 0 & \text{otherwise} \end{cases} \tag{6}
$$

We can then compute the predicted tree $\mathbf{t}$ from $D$ using Prim's algorithm, which returns the minimum spanning tree given the predicted distances.

# 4.2 Experiments

To compare the performance of the models, we use both metrics from Hewitt and Manning (2019), plus a new variant of the second.
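Both the perceptron loss of eq. (4) and the decoding step of eq. (6) reduce to a minimum-spanning-tree computation, and Prop. 1's distance recovery is an all-pairs shortest-path problem. A minimal sketch, with scipy's MST routine standing in for a hand-rolled Prim's, and a toy distance matrix in place of real $d_B$ outputs (all distances are assumed positive, since zero entries denote missing edges in the sparse-graph convention):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, floyd_warshall

def decode_tree(D):
    """Eq. (6) decoding: recover the undirected spanning tree (as a set of
    (u, v) edges with u < v) from an upper-triangular distance matrix D."""
    mst = minimum_spanning_tree(D).toarray()
    return {(int(u), int(v)) for u, v in zip(*np.nonzero(mst))}

def perceptron_loss(D, gold_edges):
    """Eq. (4): gold-tree weight minus minimum-spanning-tree weight."""
    gold_weight = sum(D[min(u, v), max(u, v)] for u, v in gold_edges)
    mst_weight = minimum_spanning_tree(D).sum()
    return gold_weight - mst_weight

def tree_distances(edges, n):
    """Prop. 1: recover Delta_t(., .) from a tree via all-pairs shortest paths."""
    adj = np.zeros((n, n))
    for u, v in edges:
        adj[u, v] = adj[v, u] = 1
    return floyd_warshall(adj, unweighted=True)

# Toy 4-word sentence; the predicted distances favour the chain 0-1-2-3.
D = np.array([[0, 1.0, 2.5, 3.0],
              [0, 0,   1.2, 2.8],
              [0, 0,   0,   0.9],
              [0, 0,   0,   0  ]])
tree = decode_tree(D)
print(tree)                      # edges of the predicted tree
print(perceptron_loss(D, tree))  # zero when gold tree == predicted tree
```

Note that `perceptron_loss` is exactly zero when the gold edges coincide with the decoded MST, mirroring the cancellation argument below eq. (4).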
UUAS The undirected unlabeled attachment score (UUAS) is a standard metric in the parsing literature, which reports the percentage of correctly identified edges in the predicted tree.

DSpr The second metric is the Spearman rank-order correlation between the predicted distances, which are output from $d_B$, and the gold-standard distances (computable from the gold tree using the Floyd-Warshall algorithm). Hewitt and Manning term this metric distance Spearman (DSpr). While UUAS measures whether the model captures edges in the tree, DSpr considers pairwise distances between all vertices in the tree, even those which are not connected in a single step.

$\mathbf{DSpr}_{P + FW}$ As a final experiment, we run DSpr again, but first pass each pairwise distance matrix $D$ through Prim's algorithm (to recover the predicted tree) and then through Floyd-Warshall (to recover a new distance matrix, with distances calculated from the predicted tree). This post-processing converts a "near"/"far" matrix encoding into a precise rank-order one. It should positively affect the results, in particular for the parser, since the parser is trained to predict the trees which result from the pairwise distance matrix, not the pairwise distance matrix itself.

# 5 Results

This section presents results for the structural probe and the structured perceptron parser.

UUAS Results Figure 2a presents UUAS results for both models. The parser is the highest-performing model on seven of the nine languages. In many of these the difference is substantial: in English, for instance, the parser outperforms the structural probe by 11.1 UUAS points.

DSpr Results The DSpr results (Figure 2b) show the opposite trend: the structural probe outperforms the parser on all languages. The parser performs particularly poorly on Japanese and Arabic, which is surprising, given that these had the second and third largest sets of training data for BERT respectively (refer to Table 1 in the appendices).
We speculate that this may be because, in the treebanks used, Japanese and Arabic have a longer average sentence length than the other languages.

$\mathbf{DSpr}_{P + FW}$ Results Following the post-processing step, the difference in DSpr (shown in Figure 3) is far less stark than previously suggested: the mean difference between the two models across all nine languages is just 0.0006 (in favour of the parser). Notice in particular the improvement for both Arabic and Japanese: where previously (in the vanilla DSpr) the structured perceptron vastly underperformed, the post-processing step closes the gap almost entirely.

![](images/9702efb1407a524b40a6d8cf7880e8f3e1e8e78b58e4d6e1ccd191526b386593.jpg)
Figure 3: $\mathrm{DSpr}_{P + FW}$ results—DSpr following the application of Prim's then Floyd-Warshall to $D$.

Though Prop. 1 implies that we do not need to consider the full pairwise output of $d_B$ to account for global properties of the tree, this is not totally borne out in our empirical findings, since we do not see the same trend in $\mathrm{DSpr}_{P + FW}$ as we do in UUAS. If we recover the gold tree, we will have a perfect correlation with the true syntactic distances; but we do not always recover the gold tree (the UUAS is less than $100\%$), and therefore the errors the parser makes are pronounced.

# 6 Discussion: Probe v. Parser

Although we agree that probes should be somehow more constrained in their complexity than models designed to perform well on tasks, we see no reason why being a "probe" should necessitate fundamentally different design choices. It seems clear from our results that how one designs a probe has a notable effect on the conclusions one might draw about a representation. Our parser was trained to recover trees (so it is more attuned to UUAS), whilst the structural probe was trained to recover pairwise distances (so it is more attuned to DSpr); viewed this way, our results are not surprising in the least.
+ +The fundamental question for probe designers, then, is which metric best captures a linguistic structure believed to be a property of a given representation—in this case, syntactic dependency. We suggest that probing research should focus more explicitly on this question—on the development and justification of probing metrics. Once a metric is established and well motivated, a lightweight probe can be developed to determine whether that structure is present in a model. + +If proposing a new metric, however, the burden of proof lies with the researcher to articulate and demonstrate why it is worthwhile. Moreover, this process of exploring which details a new metric is sensitive to (and comparing with existing metrics) ought not be conflated with an analysis of a particular model (e.g. BERT)—it should be clear whether the results enable us to draw conclusions about a model, or about a means of analysing one. + +For syntactic probing, there is certainly no a-priori reason why one should prefer DSpr to UUAS. If anything, we tentatively recommend UUAS, pending further investigation. The $\mathrm{DSpr}_{P + FW}$ results show no clear difference between the models, whereas UUAS exhibits a clear trend in favour of the parser, suggesting that it may be easier to recover pairwise distances from a good estimate of the tree than vice versa. UUAS also has the advantage that it is well described in the literature (and, in turn, well understood by the research community). + +According to UUAS, existing methods were able to identify more syntax in BERT than the structural probe. In this context, though, we use these results not to give kudos to BERT, but to argue that the perceptron-based parser is a better tool for syntactic probing. Excluding differences in parameterisation, the line between what constitutes a probe or a model designed for a particular task is awfully thin, and when it comes to syntactic probing, a powerful probe seems to look a lot like a traditional parser. 
# 7 Conclusion

We advocate for the position that, beyond some notion of model complexity, there should be no inherent difference between the design of a probe and that of a model designed for a corresponding task. We analysed the structural probe (Hewitt and Manning, 2019), and showed that a simple parser with an identical lightweight parameterisation was able to identify more syntax in BERT in seven of the nine languages compared, under UUAS. However, the structural probe outperformed the parser on a novel metric proposed in Hewitt and Manning (2019), bringing to attention a broader question: how should one choose metrics for probing? In our discussion, we argued that if one is to propose a new metric, one should clearly justify its usage.

# Acknowledgements

We thank John Hewitt for engaging wholeheartedly with our work and sharing many helpful insights.

# References

Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In The 5th International Conference on Learning Representations.
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, and Martin Wattenberg. 2019. Visualizing and measuring the geometry of BERT. arXiv preprint arXiv:1906.02715.
Alexis Conneau, German Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single \$&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136, Melbourne, Australia. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Robert W. Floyd. 1962. Algorithm 97: Shortest path. Communications of the ACM, 5(6):345.
John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743, Hong Kong, China. Association for Computational Linguistics.
John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129-4138, Minneapolis, Minnesota. Association for Computational Linguistics.
Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. 2018. Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907-926.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In The 3rd International Conference on Learning Representations.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993.
Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202, Brussels, Belgium. Association for Computational Linguistics.
Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajič. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 523-530, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658-4664, Florence, Italy. Association for Computational Linguistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 1659-1666, Portorož, Slovenia. European Language Resources Association (ELRA).
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020.
Information-theoretic probing for linguistic structure. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. 2018. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 67-81, Brussels, Belgium. Association for Computational Linguistics.
R. C. Prim. 1957. Shortest connection networks and some generalizations. The Bell System Technical Journal, 36(6):1389-1401.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.
Stephen Warshall. 1962. A theorem on Boolean matrices. Journal of the ACM, 9(1):11-12.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace's Transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
# A Training Details

For all models (separately for each language), we considered three hyperparameters: the rank $r$ (full rank when $r = 768$, since this is the dimensionality of the BERT embeddings), the learning rate, and the dropout rate (Srivastava et al., 2014). To optimise these, we performed a random search, selecting values as judged by loss on the development set. When training, we used a batch size of 64 sentences, and employed early stopping after five steps based on loss reduction. As the optimiser, we used Adam (Kingma and Ba, 2015).

For each language, we used the largest available Universal Dependencies 2.4 treebank. One-word sentences and sentences of over 50 words were discarded, and the larger treebanks were pruned to 12,000 sentences (in an 8:1:1 data split).

We use the BERT implementation of Wolf et al. (2019). Since BERT accepts WordPiece units (Wu et al., 2016) rather than words, where necessary we averaged the output to get word-level embeddings. This is clearly a naive composition method; improving it would likely strengthen the results for both the probe and the parser.

# B Multilingual BERT Details

Multilingual BERT has 12 layers, 768 hidden states, and a total of 110M parameters. It was trained on the complete Wikipedia dumps for the 104 languages with the largest Wikipedias. Table 1 reports the size of the Wikipedias for the languages considered in this paper. Further details of the training can be found on Google Research's GitHub.
| Language | Articles |
| --- | --- |
| Arabic | 1,016,152 |
| Basque | 342,426 |
| Czech | 439,467 |
| English | 5,986,229 |
| Finnish | 473,729 |
| Japanese | 1,178,594 |
| Korean | 476,068 |
| Tamil | 125,031 |
| Turkish | 336,380 |
Table 1: The number of articles in the Wikipedias of the languages considered.
# A Tale of Two Perplexities: Sensitivity of Neural Language Models to Lexical Retrieval Deficits in
Dementia of the Alzheimer's Type + +Trevor Cohen* + +Biomedical and Health Informatics + +University of Washington + +Seattle + +cohenta@uw.edu + +Serguei Pakhomov* + +Pharmaceutical Care and Health Systems + +University of Minnesota + +Minneapolis + +pakh0002@umn.edu + +# Abstract + +In recent years there has been a burgeoning interest in the use of computational methods to distinguish between elicited speech samples produced by patients with dementia, and those from healthy controls. The difference between perplexity estimates from two neural language models (LMs) - one trained on transcripts of speech produced by healthy participants and the other trained on transcripts from patients with dementia - as a single feature for diagnostic classification of unseen transcripts has been shown to produce state-of-the-art performance. However, little is known about why this approach is effective, and on account of the lack of case/control matching in the most widely-used evaluation set of transcripts (DementiaBank), it is unclear if these approaches are truly diagnostic, or are sensitive to other variables. In this paper, we interrogate neural LMs trained on participants with and without dementia using synthetic narratives previously developed to simulate progressive semantic dementia by manipulating lexical frequency. We find that perplexity of neural LMs is strongly and differentially associated with lexical frequency, and that a mixture model resulting from interpolating control and dementia LMs improves upon the current state-of-the-art for models trained on transcript text exclusively. + +# 1 Introduction + +Alzheimer's Disease (AD) is a debilitating neurodegenerative condition which currently has no cure, and Dementia of the Alzheimer's Type (DAT) is one of the most prominent manifestations of AD pathology. 
Prior to availability of disease-modifying therapies, it is important to focus on reducing the emotional and financial burden of this devastating disease on patients, caregivers, and the healthcare system. Recent longitudinal studies of + +aging show that cognitive manifestations of future dementia may appear as early as 18 years prior to clinical diagnosis - much earlier than previously believed (Rajan et al., 2015; Aguirre-Acevedo et al., 2016). With $30 - 40\%$ of healthy adults subjectively reporting forgetfulness on a regular basis (Cooper et al., 2011), there is an urgent need to develop sensitive and specific, easy-to-use, safe, and cost-effective tools for monitoring AD-specific cognitive markers in individuals concerned about their cognitive function. Lack of clear diagnosis and prognosis, possibly for an extended period of time (i.e., many years), in this situation can produce uncertainty and negatively impact planning of future care (Stokes et al., 2015), and misattribution of AD symptoms to personality changes can lead to family conflict and social isolation (Boise et al., 1999; Bond et al., 2005). Delayed diagnosis also results in an estimated $7.9 trillion in medical and care costs (Association, 2018) due to high utilization of emergency care, amongst other factors, by patients with undiagnosed AD. + +Cognitive status is reflected in spoken language. As manual analysis of such data is prohibitively time-consuming, the development and evaluation of computational methods through which symptoms of AD and other dementias can be identified on the basis of linguistic anomalies observed in transcripts of elicited speech samples have intensified in the last several years (Fraser et al., 2016; Yancheva and Rudzicz, 2016; Orimaye et al., 2017). 
This work has generally employed a supervised machine learning paradigm, in which a model is trained to distinguish between speech samples produced by patients with dementia and those from controls, using a set of deliberately engineered or computationally identified features. However, on account of the limited training data available, overfitting is a concern. This is particularly problematic in DAT, where the nature of linguistic anomalies + +varies between patients, and with AD progression (Altmann and McClung, 2008). + +In the current study we take a different approach, focusing our attention on the perplexity of a speech sample as estimated by neural LMs trained on transcripts of the speech of participants completing a cognitive task. To date, the most successful approach to using LM perplexity as a sole distinguishing feature between narratives by dementia patients and controls was proposed by Fritsch et al. (2019) and replicated by Klumpp et al. (2018). The approach consists of training two recurrent neural LMs - one on transcripts from patients with dementia and the other on transcripts from controls. The difference between the perplexities estimated with these two LMs results in very high classification accuracy (AUC: 0.92) reported by both studies. + +The explanation for this performance offered by Fritsch et al. (2019) relies on observations that patients with DAT describe the picture in an unforeseen way and their speech frequently diverts from the content of the picture, contains repetitions, incomplete utterances, and refers to objects in the picture using words like "thing" or "something". This explanation, however, conflicts with the findings by Klumpp et al. (2018) that demonstrate similarly high classification accuracy (AUC: 0.91) with a single hidden layer non-recurrent neural network and bag-of-words input features, suggesting that while word sequences play a role, it may not be as large as previously believed by Fritsch et al. (2019). 
Klumpp et al.'s (2018) explanation contrasts the "local" with the "global language properties" of the picture descriptions captured by recurrent neural LMs and by the non-recurrent bag-of-words neural network classifier, respectively. Both of these explanations are based on informal qualitative observations of the data, and neither is entirely satisfying, because both fail to explain the fact that it is precisely the difference between the control and dementia LMs that is able to discriminate between patients and controls. The individual LMs are not nearly as good at this categorization task.

The objective of the current study is to quantify the extent to which the differences between neural LMs trained on language produced by DAT patients and controls reflect known deficits in language use in this disease - in particular, the loss of access to relatively infrequent terms that occurs with disease progression (Almor et al., 1999a). We approach this objective by interrogating trained neural LMs with two methods: interrogation by perturbation, in which we evaluate how trained neural LMs respond to text that has been deliberately perturbed to simulate AD progression; and interrogation by interpolation, in which we develop and evaluate hybrid LMs by interpolating between neural LMs modeling language use with and without dementia. We find that neural LMs are progressively more perplexed by text simulating disease of greater severity, and that this perplexity decreases with increasing contributions of a LM trained on transcripts from patients with AD, but increases again when only this LM is considered. Motivated by these observations, we modify the approach of Fritsch et al. (2019) by incorporating an interpolated model and pre-trained word embeddings, with improvements in performance over the best results reported for models trained on transcript text exclusively.
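The two ingredients above, the perplexity-difference feature of Fritsch et al. (2019) and interpolation between a control LM and a dementia LM, can be sketched in a few lines. This is purely illustrative: add-one-smoothed unigram models stand in for the recurrent neural LMs, and the two training "corpora" are toy placeholders.

```python
import math
from collections import Counter

def train_unigram(corpus, vocab):
    """Toy stand-in for a neural LM: add-one-smoothed unigram probabilities."""
    counts = Counter(w for sent in corpus for w in sent)
    total = sum(counts.values()) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def interpolate(lm_a, lm_b, alpha):
    """Mixture LM: weight alpha on the first model, (1 - alpha) on the second."""
    return {w: alpha * lm_a[w] + (1 - alpha) * lm_b[w] for w in lm_a}

def perplexity(model, sent):
    """Word-level perplexity: exp of the mean negative log-probability."""
    nll = -sum(math.log(model[w]) for w in sent) / len(sent)
    return math.exp(nll)

vocab = {"the", "boy", "takes", "a", "cookie", "thing", "something"}
control_lm = train_unigram([["the", "boy", "takes", "a", "cookie"]], vocab)
dementia_lm = train_unigram([["the", "thing", "takes", "something"]], vocab)

# Classification feature of Fritsch et al. (2019): difference of perplexities.
transcript = ["the", "thing", "takes", "something"]
score = perplexity(control_lm, transcript) - perplexity(dementia_lm, transcript)
# A positive score means the dementia LM fits the transcript better.
print(score)

# Interrogation by interpolation: sweep the mixture weight alpha.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    mixed = interpolate(control_lm, dementia_lm, alpha)
    print(alpha, perplexity(mixed, transcript))
```

In the study itself the component models are recurrent neural LMs rather than unigram counts, but the scoring and mixing logic is the same.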
# 2 Background

# 2.1 Linguistic Anomalies in AD

AD is a progressive disease, and the linguistic impairments that manifest reflect the extent of this progression (Altmann and McClung, 2008). In its early stages, deficits in the ability to encode recent memories are most evident. As the disease progresses, it affects regions of the brain that support semantic memory (Martin and Chao, 2001) - knowledge of words and the concepts they represent - and deficits in language comprehension and production emerge (Altmann and McClung, 2008).

A widely-used diagnostic task for elicitation of abnormalities in speech is the "Cookie Theft" picture description task from the Boston Diagnostic Aphasia Examination (Goodglass, 2000), which is considered to provide an adequate approximation of spontaneous speech. In this task, participants are asked to describe a picture of a pair of children colluding in the theft of cookies from the top shelf of a raised cupboard while their mother distractedly washes dishes. When used as a diagnostic instrument, the task can elicit features of AD and other dementias, such as pronoun overuse (Almor et al., 1999a), repetition (Hier et al., 1985; Pakhomov et al., 2018) and impaired recollection of key elements (or "information units") from the picture (Giles et al., 1996). Due to the human-intensive nature of the analyses needed to detect such anomalies, automated methods present a desirable alternative.

# 2.2 Classification of Dementia Transcripts

A number of authors have investigated automated methods of identifying linguistic anomalies in dementia. The most widely-used data set for these studies is the DementiaBank corpus (Becker et al., 1994), which we employ for the current work.
In some of the early work on this corpus, Prud'hommeaux and Roark (2015) introduced a novel graph-based content summary score to distinguish between controls and dementia cases in this corpus with an area under the receiver operating characteristic curve (AUC) of 0.83. Much of the subsequent work relied on supervised machine learning, with a progression from manually engineered features to neural models mirroring general Natural Language Processing trends. For example, Fraser and Hirst (2016) report AD classification accuracy of over $81\%$ on 10-fold cross-validation when applying logistic regression to 370 text-derived and acoustic features. In a series of papers, Orimaye et al. (2014; 2017; 2018) report ten-fold cross-validation F-measures of up to 0.73 when applying a Support Vector Machine (SVM) to 21 syntactic and lexical features; SVM AUC on leave-pair-out cross-validation (LPOCV) of 0.82 and 0.93 with the best manually-engineered feature set and the best 1,000 of 16,903 lexical, syntactic and n-gram features (with selection based on information gain) respectively; and a LPOCV AUC of 0.73-0.83 across a range of deep neural network models with high-order n-gram features. Yancheva and Rudzicz (2016) derive topic-related features from word vector clusters to obtain an F-score of 0.74 with a random forest classifier. Karlekar et al. (2018) report an utterance-level accuracy of $84.9\%$ with a convolutional/recurrent neural network combination when trained on text alone. While these results are not strictly comparable as they are based on different subsets of the data, use different cross-validation strategies and report different performance metrics, they collectively show that supervised models can learn to identify patients with AD using data from elicited speech samples. However, as is generally the case with supervised learning on small data sets, overfitting is a concern.
# 2.3 Perplexity and Cognitive Impairment

Perplexity is used as an estimate of the fit between a probabilistic language model and a segment of previously unseen text. The notion of applying n-gram model perplexity (a derivative of cross-entropy) as a surrogate measure of syntactic complexity in spoken narratives was proposed by Roark et al. (2007) and applied to transcribed logical memory (story recall) test responses by patients with mild cognitive impairment (MCI: a frequent precursor to AD diagnosis). In this work, sequences of part-of-speech (POS) tags were used to train bigram models on logical memory narratives, and the cross-entropy of these models was then computed on held-out cross-validation folds. They found significantly higher mean cross-entropy values in narratives of MCI patients as compared to controls. Subsequent work expanded the use of POS cross-entropy as one of the language characteristics in a predictive model for detecting MCI (Roark et al., 2011).

Perplexity can also be calculated on word tokens and serve as an indicator of an n-gram model's efficiency in predicting new utterances (Jelinek et al., 1977). Pakhomov et al. (2010b) included word and POS LM perplexity amongst a set of measurements used to distinguish between speech samples elicited from healthy controls and patients with frontotemporal lobar degeneration (FTLD). A LM was trained on text from an external corpus of transcribed "Cookie Theft" picture descriptions performed by subjects without dementia from a different study. This model was then used to estimate perplexity of elicited speech samples in cases and controls, with significant differences between mean perplexity scores obtained from subjects with the semantic dementia variant of FTLD and controls. However, the authors did not attempt to use perplexity score as a variable in a diagnostic classification of FTLD or its subtypes.
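To make the quantity at issue concrete, a minimal word-level bigram LM with add-one smoothing can be sketched as below. This is a deliberate simplification: the studies discussed above used more sophisticated smoothing, POS-tag sequences, or neural LMs, but the measure is the same, perplexity as the exponentiated cross-entropy of a trained model on unseen text.

```python
import math
from collections import Counter

def bigram_perplexity(train_tokens, test_tokens):
    """Word-level bigram perplexity with add-one smoothing.

    Illustrative sketch only: sharing the vocabulary between train
    and test is a simplification of how unknown words are usually
    handled, but the quantity is the one described in the text:
    perplexity = exp(cross-entropy of the model on unseen text).
    """
    vocab = set(train_tokens) | set(test_tokens)
    V = len(vocab)
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))

    log_prob = 0.0
    n = 0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        # Add-one smoothed conditional probability P(cur | prev)
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + V)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)
```

A model trained on text resembling the test narrative yields a lower perplexity than one tested on dissimilar text, which is the asymmetry the "two perplexities" approach exploits.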
Collectively, these studies suggest elevated perplexity (both at the word and POS level) may indicate the presence of dementia. A follow-up study (Pakhomov et al., 2010a) used perplexity calculated with a model trained on a corpus of conversational speech unrelated to the picture description task, as part of a factor analysis of speech and language characteristics in FTLD. Results suggested that the general English LM word- and POS-level perplexity did not discriminate between FTLD subtypes, or between cases and controls. Taken together with the prior results, these results suggest that LMs trained on transcripts elicited using a defined task (such as the "Cookie Theft" task) are better equipped to distinguish between cases and controls than LMs trained on a broader corpus.

As the vocabulary of AD patients becomes progressively constrained, one might anticipate language use becoming more predictable with disease progression. Wankerl et al. (2016) evaluate this hypothesis using the writings of Iris Murdoch, who developed AD later in life and eschewed editorial revisions. In this analysis, which was based on time-delimited train/test splits, perplexity decreased in her later output. This is consistent with recent work by Weiner et al. (2018) that found diminished perplexity was of some (albeit modest) utility in predicting transitions to AD.

The idea of combining two perplexity estimates - one from a model trained on transcripts of speech produced by healthy controls and the other from a model trained on transcripts from patients with dementia - was developed by Wankerl et al. (2017), who report an AUC of 0.83 using n-gram LMs in a participant-level leave-one-out cross-validation (LOOCV) evaluation across the DementiaBank dataset. Fritsch et al. (2019) further improved performance of this approach by substituting a neural LM (a LSTM model) for the n-gram LM, and report an improved AUC of 0.92.
However, it is currently unclear whether this level of accuracy is due to dementia-specific linguistic markers, or a result of markers of other significant differences between the case and control group such as age ($\bar{x} = 71.4$ vs. 63) and years of education ($\bar{x} = 12.1$ vs. 14.3) (Becker et al., 1994).

# 2.4 Neural LM perplexity

Recurrent neural network language models (RNN-LM) (Mikolov et al., 2010) are widely used in machine translation and other applications such as sequence labeling (Goldberg, 2016). Recurrent Neural Networks (RNN) (Jordan, 1986; Elman, 1990) facilitate modeling sequences of indeterminate length by maintaining a state vector, $S_{t-1}$, that is combined with a vector representing the input for the next data point in a sequence, $x_t$, at each step of processing. Consequently, RNN-LMs have recourse to information in all words preceding the target for prediction, in contrast to n-gram models. They are also robust to previously unseen word sequences, which with naïve n-gram implementations (i.e., without smoothing or backoff) could result in an entire sequence being assigned a probability of zero. Straightforward RNN implementations are vulnerable to the so-called "vanishing" and "exploding" gradient problems (Hochreiter, 1998; Pascanu et al., 2012), which emerge on account of the numerous sequential multiplication steps that occur with backpropagation through time (time here indicating each step through the sequence to be modeled), and limit the capacity of RNNs to capture long-range dependencies. An effective way to address this problem involves leveraging Long Short Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997), which use structures known as gates to inhibit the flow of information during training, and a mechanism using a memory cell to preserve selected information across sequential training steps.
Groups of gates comprise vectors with components that are forced to be close to either 1 or 0 (typically accomplished using the sigmoid function). Only values close to 1 permit transmission of information, which disrupts the sequence of multiplication steps that occurs when backpropagating through time. The three gates used with typical LSTMs are referred to as Input, Forget and Output gates, and, as their names suggest, they govern the flow of information from the input and past memory to the current memory state, and from the output of each LSTM unit (or cell) to the next training step. LSTM LMs have been shown to produce better perplexity estimates than n-gram models (Sundermeyer et al., 2012).

# 2.5 Lexical Frequency

A known distinguishing feature of the speech of AD patients is that it tends to contain higher-frequency words with less specificity than that of cognitively healthy individuals (e.g., overuse of pronouns and words like "thing") (Almor et al., 1999b). Lexical frequency affects speech production; however, these effects have different origins in healthy and cognitively impaired individuals. A leading cognitive theory of speech production postulates a two-step process of lexical access in which concepts are first mapped to lemmas and, subsequently, to phonological representations prior to articulation (Levelt, 2001). In individuals without dementia, lexical frequency effects are evident only at the second step - the translation of lemmas to phonological representations - and do not originate at the pre-lexical conceptual level (Jescheniak and Levelt, 1994). In contrast, in individuals with dementia, worsening word-finding difficulties are attributed to progressive degradation of the semantic networks that underlie lexical access at the conceptual level (Astell and Harley, 1996).
While lexical frequency effects are difficult to control in unconstrained, purely spontaneous language production, language produced during the picture description task is much more constrained, in that the picture provides a fixed set of objects, attributes, and relations that serve as referents for the person describing the picture. Thus, in the context of the current study, we expect to find that both healthy individuals and patients with dementia describing the same picture would attempt to refer to the same set of concepts, but that patients with dementia would tend to use more frequent and less specific words due to erosion of semantic representations leading to insufficient activation of the lemmas. Changes in vocabulary have been reported in the literature as one of the most prominent linguistic manifestations of AD (Pekkala et al., 2013; Wilson et al., 1983; Rohrer et al., 2007). We do not suggest that other aspects of language, such as syntactic complexity, should be excluded, although there has been some debate as to the utility of syntactic complexity specifically as a distinguishing feature (see Fraser et al., 2015).

# 3 Materials and Methods

# 3.1 Datasets

For LM training and evaluation we used transcripts of English language responses to the "Cookie Theft" component of the Boston Diagnostic Aphasia Exam (Goodglass, 2000), provided as part of the DementiaBank database (Becker et al., 1994). Transcripts (often multiple) are available for 169 subjects classified as having possible or probable DAT on the basis of clinical or pathological examination, and 99 patients classified as controls.

For interrogation by perturbation, we used a set of six synthetic "Cookie Theft" picture description narratives created by Bird et al. (2000) to study the impact of semantic dementia on verb and noun use in picture description tasks. While Bird et al.
(2000) focused on semantic dementia, a condition distinct from DAT, these synthetic narratives were not based on patients with semantic dementia. Rather, they were created to manipulate lexical frequency by first compiling a composite baseline narrative from samples by healthy subjects, and then removing and/or replacing nouns and verbs in that baseline with words of higher lexical frequency (e.g., "mother" vs. "woman" vs. "she"). Lexical frequency was calculated using the Celex Lexical Database (LDC96L14) and words were aggregated into groups based on four log frequency bands (0.5 - 1.0, 1.0 - 1.5, 1.5 - 2.0, 2.5 - 3.0: e.g., words in the 0.5 - 1.0 band occur in Celex more than 10 times per million). These narratives are well-suited to the study of lexical retrieval deficits in DAT, in which loss of access to less frequent words is observed with disease progression (Pekkala et al., 2013).

In order to calculate mean log lexical frequency on the DementiaBank narratives, we used the SUBTLEXus corpus, shown to produce lexical frequencies more consistent with psycholinguistic measures of word processing time than those calculated from the Celex corpus (Brysbaert and New, 2009). The DementiaBank narratives were processed using NLTK's implementation of the TnT part-of-speech tagger (Brants, 2000) trained on the Brown corpus (Francis and Kucera, 1979). Following Bird et al. (2000), only nouns and verbs unique within the narrative were used to calculate mean log lexical frequency. We did not stem the words in order to avoid creating potentially artificially high/low frequency items. To validate the mean log lexical frequency values obtained with the SUBTLEXus corpus, we compared the log lexical frequency means for the six narratives developed by Bird et al. (2000) with their frequency band values using Spearman's rank correlation and found them to be perfectly correlated $(\rho = 1.0)$.
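The mean log lexical frequency computation described above can be sketched as follows. The frequency table here is a hypothetical stand-in for SUBTLEXus (the values are invented for illustration), and POS tags are assumed to be supplied externally (in the study, by NLTK's TnT tagger); only nouns and verbs unique within the narrative contribute, with duplicates counted once.

```python
import math

# Hypothetical per-million word frequencies; the study used the
# SUBTLEXus corpus (these particular numbers are illustrative only).
FREQ_PER_MILLION = {"mother": 120.0, "woman": 250.0, "she": 9000.0,
                    "cookie": 12.0, "thing": 1100.0, "steal": 35.0}

def mean_log_frequency(tagged_tokens):
    """Mean log10 lexical frequency over the nouns and verbs that are
    unique within a narrative, following Bird et al. (2000).

    tagged_tokens: list of (word, Penn-Treebank-style POS tag) pairs.
    """
    content = {word for word, tag in tagged_tokens
               if tag.startswith(("NN", "VB")) and word in FREQ_PER_MILLION}
    if not content:
        return float("nan")
    return sum(math.log10(FREQ_PER_MILLION[w]) for w in content) / len(content)
```

Replacing a noun with a higher-frequency alternative ("mother" with "woman", say) raises the narrative's mean log frequency, which is exactly the gradient the six synthetic narratives traverse.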
The text of DementiaBank transcripts was extracted from the original CHAT files (Macwhinney, 2000). The transcripts as well as the six synthetic narratives were lowercased and pre-processed by removing speech and non-speech noise as well as pause fillers (um's and ah's) and punctuation (excepting the apostrophe).

# 3.2 Pre-trained models

Prior work with neural LMs in this context has used randomly instantiated models. We wished to evaluate the utility of pre-training for this task - both pre-training of the LSTM in its entirety and pre-training of word embeddings alone. For the former we used a LSTM trained on the WikiText-2 dataset (Merity et al., 2016) provided with the GluonNLP package. 200-dimensional word embeddings, including embeddings augmented with subword information (Bojanowski et al., 2017), were developed using the Semantic Vectors package and trained using the skipgram-with-negative-sampling algorithm of Mikolov et al. (2013) for a single iteration on the English Wikipedia (10/1/2019 edition, pre-processed with wikifl.pl) with a window radius of five. We report results using skipgram embeddings augmented with subword information, as these improved performance over both stochastically-initialized and WikiText-2-pretrained LSTMs in preliminary experiments.

# 3.3 Training

We trained two sets of dementia and control LSTM models. The first set was trained in order to replicate the findings of Fritsch et al. (2019), using the same RWTHLM package (Sundermeyer et al., 2014) and following their methods as closely as possible in accordance with the description provided in their paper. Each model's cross-entropy loss was optimized over 20 epochs, with starting learning rate optimization performed on a heldout set of 10 transcripts.
The second set was trained using the GluonNLP averaged stochastic gradient weight-dropped LSTM (standard-lstm-lm-200 architecture) model, consisting of 2 LSTM layers with word embedding (tied at input and output) and hidden layers of 200 and 800 dimensions respectively (see Merity et al. (2017) for full details on model architecture). In training the GluonNLP models, the main departure from the methods used by Fritsch et al. (2019) was that we did not use a small heldout set of transcripts to optimize the learning rate, because we observed that the GluonNLP models converged well before the 20th epoch with a starting learning rate of 20, which was therefore used for all stochastically initialized models. With pre-trained models we used a lower starting learning rate of 5 to preserve information during subsequent training on DementiaBank. All GluonNLP models were trained using a batch size of 20 and a backpropagation through time (BPTT) window size of 10. During testing, batch size was set to 1 and the BPTT window size to the length of the transcript (in tokens). Unseen transcript perplexity was calculated as $e^{\text{loss}}$.

# 3.4 Evaluation

As subjects in the DementiaBank dataset participated in multiple assessments, there are multiple transcripts for most of the subjects. In order to avoid biasing the models to individual subjects, we followed the participant-level leave-one-out cross-validation (LOOCV) evaluation protocol of Fritsch et al. (2019), whereby all of the picture description transcripts for one participant are held out in turn for testing and the LMs are trained on the remaining transcripts. Perplexities of the LMs are then obtained on the heldout transcripts, resulting in two perplexity values per transcript, one from the LM trained on the dementia transcripts $(P_{dem})$ and one from the LM trained on the control transcripts $(P_{con})$. Held-out transcripts were scored using these perplexity values, as well as by the difference $(P_{con} - P_{dem})$ between them.
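The test-time perplexity computation ($e^{\text{loss}}$) and the paired scoring rule can be sketched as below. This is illustrative only: `token_nlls` stands for the per-token cross-entropy losses an already-trained LM assigns to a held-out transcript, which in the study come from the GluonNLP LSTM.

```python
import math

def transcript_perplexity(token_nlls):
    """Perplexity as e^loss, where loss is the mean per-token negative
    log-likelihood over the whole transcript (at test time the BPTT
    window equals the transcript length, so one loss covers it all)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

def paired_score(p_con, p_dem):
    """Score a held-out transcript by the difference P_con - P_dem.
    Large positive values mean the control LM is surprised while the
    dementia LM is not, i.e. the transcript looks dementia-like."""
    return p_con - p_dem
```

In the LOOCV protocol, each held-out transcript receives one such score, and the AUC is computed over participant-level scores against the clinical labels.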
# 3.5 Interrogation of models

For interrogation by perturbation, we estimated the perplexity of our models for each of the six synthetic narratives of Bird et al. (2000). We reasoned that an increase in $P_{con}$ and a decrease in $P_{dem}$ as words are replaced by higher-frequency alternatives to simulate progressive lexical retrieval deficits would indicate that these models were indeed capturing AD-related linguistic changes. For interrogation by interpolation, we extracted the parameters from all layers of paired LSTM LMs after training, and averaged these as $\alpha LM_{dem} + (1 - \alpha)LM_{con}$ to create interpolated models. We hypothesized that a decrease in perplexity estimates for narratives emulating severe dementia would occur as $\alpha$ (the proportional contribution of $LM_{dem}$) increases.

# 4 Results and Discussion

The results of evaluating classification accuracy of the various language models are summarized in Table 1. The $95\%$ confidence interval for GluonNLP models was calculated from perplexity means obtained across ten LOOCV iterations with random model weight initialization on each iteration. The RWTHLM package does not provide support for GPU acceleration and requires a long time to perform a single LOOCV iteration (approximately 10 days in our case). Since the purpose of using the RWTHLM package was to replicate the results previously reported by Fritsch et al. (2019), which were based on a single LOOCV iteration, and we obtained the exact same AUC of 0.92 on our first LOOCV iteration with this approach, we did not pursue additional LOOCV iterations. However, we should note that we obtained an AUC of 0.92 for the difference between $P_{con}$ and $P_{dem}$ on two of the ten LOOCV iterations with the GluonNLP LSTM model. Thus, we believe that the GluonNLP LSTM model has equivalent performance to the RWTHLM LSTM model.

| Model | Control AUC | 95% CI | Dementia AUC | 95% CI | Control-Dementia AUC | 95% CI |
| --- | --- | --- | --- | --- | --- | --- |
| RWTHLM LSTM | 0.80 | - | 0.64 | - | 0.92 | - |
| GluonNLP LSTM | 0.80 | ± 0.002 | 0.65 | ± 0.002 | 0.91 | ± 0.004 |

Table 1: Classification accuracy using individual models' perplexities and their difference for various models.

![](images/0d674017564f43edfc229530d04939446bf12724578771e15f0db41361c91e2b.jpg)
Figure 1: Relationship between log frequency bands used to replace words in synthetic Cookie Theft picture descriptions to simulate degrees of semantic dementia and perplexity of LSTM language models trained on picture descriptions by controls and dementia patients.

Having replicated the results of previously published studies and confirmed that using the difference in perplexities of LMs trained on narratives by controls and dementia patients is indeed the current state-of-the-art, we now turn to explaining why the difference between these LMs is much more successful than the individual models alone.

First, we used the six "Cookie Theft" narratives designed to simulate semantic dementia to examine the relationship between $P_{con}$ and $P_{dem}$ with GluonNLP LSTM LMs and log lexical frequency bands. The results of this analysis are illustrated in Figure 1 and show that $P_{dem}$ is higher than $P_{con}$ on narratives in the lower log frequency bands (less simulated impairment) and lower in the higher log frequency bands (more simulated impairment).

We confirmed these results by calculating mean log lexical frequency on all DementiaBank narratives and fitting a linear regression model to test for associations with perplexities of the two LMs. The regression model contained mean lexical frequency as the dependent variable and $P_{dem}$ and $P_{con}$ as independent variables, adjusted for age, education and the length of the picture description narrative.
In order to avoid likely practice effects across multiple transcripts, we only used the transcript obtained on the initial baseline visit; however, we did confirm these results by using all transcripts to fit mixed effects models with random slopes and intercepts in order to account for the correlation between transcripts from the same subject (mixed effects modeling results not shown).

The results demonstrate that the association between perplexity and lexical frequency is significant and positive for the control LM (coeff: 0.563, $p < 0.001$) and negative for the dementia LM (coeff: -0.543, $p < 0.001$). Age, years of education, and length of the narrative were not significantly associated with lexical frequency in this model. These associations show that the control LM and dementia LM are more "surprised" by narratives containing words of higher lexical frequency and lower lexical frequency respectively. If the use of higher lexical frequency items on a picture description task portends a semantic deficit, then this particular pattern of results explains why it is the difference between the two models that is most sensitive to manifestations of dementia, and suggests that there is a point at which the two models become equally "surprised", with a difference between their perplexities close to zero. In Figure 1, that point is between log lexical frequency bands of 2.0 and 2.5, corresponding to the mild to moderate degree of semantic impairment reported by Bird et al. (2000). Notably, in the clinical setting, the mild forms of dementia such as mild cognitive impairment and mild dementia are also particularly challenging and require integration of multiple sources of evidence for accurate diagnosis (Knopman and Petersen, 2014).

The results of our interpolation studies are shown in Figure 2.
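The interpolation used to build these models ($\alpha LM_{dem} + (1 - \alpha)LM_{con}$) is a layer-wise weighted average of parameters from two identically structured networks. A minimal sketch over plain Python lists follows; with a real GluonNLP or PyTorch model, the same loop would run over the two models' parameter dictionaries.

```python
def interpolate_params(params_dem, params_con, alpha):
    """Layer-wise linear interpolation of two trained LMs with
    identical architectures: each parameter value becomes
    alpha * dementia_value + (1 - alpha) * control_value.

    Parameters are given here as {layer_name: list of floats};
    this is a sketch, not the GluonNLP API.
    """
    assert params_dem.keys() == params_con.keys(), "architectures must match"
    return {
        name: [alpha * d + (1.0 - alpha) * c
               for d, c in zip(params_dem[name], params_con[name])]
        for name in params_dem
    }
```

Setting $\alpha = 0$ recovers the control model and $\alpha = 1$ the dementia model, with the interpolated models of Table 2 lying in between.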
Each point in the figure shows the average difference between the perplexity estimate of a perturbed transcript $(P_x)$ and the perplexity estimate for the unperturbed sample ($P_o$: frequency band 0) for this model. While all models tend to find the increasingly perturbed transcripts more perplexing than their minimally perturbed counterparts, this perplexity decreases with increasing contributions of the dementia LM. However, when only this model is used, relative perplexity of the perturbed transcripts increases. This indicates that the "pure" dementia LM may be responding to linguistic anomalies other than those reflecting lack of access to infrequently occurring terms. We reasoned that on account of this, the $\alpha = 0.75$ model may provide a better representation of dementia-related linguistic changes. To evaluate this hypothesis, we assessed the effects on performance of replacing the dementia model with this interpolated model. The results of these experiments (Table 2) reveal improvements in performance with this approach, with the best AUC (0.941) and accuracy at equal error rate (0.872) resulting from the combination of interpolation with pre-trained word embeddings.

| $P_{con} - P_{\alpha}$ | AUC (random) | 95% CI | AUC (pretrained) | 95% CI | $ACC_{eer}$ (random) | 95% CI | $ACC_{eer}$ (pretrained) | 95% CI |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\alpha = 0.25$ | 0.842 | ± 0.008 | 0.838 | ± 0.015 | 0.689 | ± 0.036 | 0.724 | ± 0.034 |
| $\alpha = 0.5$ | 0.816 | ± 0.009 | 0.813 | ± 0.005 | 0.669 | ± 0.035 | 0.665 | ± 0.033 |
| $\alpha = 0.75$ | 0.931 | ± 0.003 | **0.941** | ± 0.006 | 0.854 | ± 0.031 | **0.872** | ± 0.010 |
| $\alpha = 1.0$ | *0.908* | ± 0.004 | 0.930 | ± 0.005 | *0.846* | ± 0.023 | 0.839 | ± 0.017 |

Table 2: Performance of randomly-instantiated and pre-trained (subword-based skipgram embeddings) interpolated "two perplexity" models across 10 repeated per-participant LOOCV runs. $\alpha$ indicates the proportional contribution of the dementia model. $ACC_{eer}$ gives the accuracy at equal error rate. Best results are in **boldface**, and results using the approach of Fritsch et al. (2019) are in *italics*.

That pre-trained embeddings further improve performance is consistent with the observation that the elevation in perplexity when transitioning from $\alpha = 0.75$ to $\alpha = 1.0$ is much less pronounced in these models (Figure 3). These results are significantly better than those reported by Fritsch et al. (2019), and than our reimplementation of their approach.

These improvements in performance appear to be attributable to a smoothing effect on the perplexity of the modified dementia models in response to unseen dementia cases.
Over ten repeated LOOCV iterations, average perplexity on held-out dementia cases was significantly lower than that of the baseline 'dementia' model $(51.1 \pm 0.81)$ for both the $\alpha = 0.75$ $(47.3 \pm 0.32)$ and pre-trained embeddings $(44.8 \pm 0.53)$ models. This trend is further accentuated with the severity of dementia - for transcripts corresponding to a mini-mental state exam (MMSE) score $\leq 10$ $(n = 16)$, average perplexities are $148.29 \pm 7.69$, $105.01 \pm 3.48$ and $121.86 \pm 7.67$ for the baseline 'dementia', $\alpha = 0.75$ and pre-trained embeddings models respectively. In both cases, average perplexity of the interpolated $(\alpha = 0.75)$ pretrained embeddings model fell between those of the exclusively pre-trained (lowest overall) and exclusively interpolated (lowest in severe cases) models.

![](images/624e7e3d54aec0f5bbb53741f5843fb1792f33c53b2f1652f402cd7915f138d6.jpg)
Figure 2: Stochastically initialized models. Elevation in perplexity over the unperturbed transcript $(P_o)$ with the proportional contribution of a dementia model $(\alpha)$ to an interpolated model. Each point is the mean of 268 (held-out participants) data points. Error bars are not shown as they do not exceed the bounds of the markers.

![](images/e9f8d94058aa2550386b3438239063fd778e0aa41166f1be0fa148db3b4a99d4.jpg)
Figure 3: Pretrained word embeddings. Elevation in perplexity over the unperturbed transcript $(P_o)$ with the proportional contribution of a dementia model $(\alpha)$ to an interpolated model. Each point is the average of 268 data points, and error bars are not shown as they do not exceed the bounds of the markers.

A practical issue for automated methods to detect dementia concerns establishing their accuracy at earlier stages of disease progression, where a readily disseminated screening tool would arguably have the greatest clinical utility, especially in the presence of an effective disease-modifying therapy. To this end, Fritsch et al.
(2019) defined a "screening scenario" in which evaluation was limited to participants with a last available MMSE of 21 or more, which corresponds to a range of severity encompassing mild, questionable or absent dementia (Perneczky et al., 2006). In this scenario, classification accuracy of the 'paired perplexity' LSTM-based model was only slightly lower (AUC: 0.87) than the accuracy on the full range of cognitive impairment (AUC: 0.92). We found similar performance with our models. When limiting evaluation to those participants with a last-recorded MMSE $\geq 21$, average AUCs across 10 LOOCV iterations were $0.836 \pm 0.014$, $0.879 \pm 0.01$, $0.893 \pm 0.004$, and $0.899 \pm 0.012$ for the baseline (Fritsch et al., 2019), pretrained embeddings, interpolated $(\alpha = 0.75)$ and interpolated $(\alpha = 0.75)$ with pretrained embeddings variants, respectively. These results support the notion that paired neural LMs can be used effectively to screen for possible dementia at earlier stages of cognitive impairment.

The contributions of our work can be summarized as follows. First, our results demonstrate that the relationship between LM perplexity and lexical frequency is consistent with the phenomenology of DAT and its deleterious effects on patients' vocabulary. We show that the "two perplexities" approach is successful at distinguishing between cases and controls in the DementiaBank corpus because of its ability to capture specifically linguistic manifestations of the disease. Second, we observe that interpolating between dementia and control LMs mitigates the tendency of dementia-based LMs to be "surprised" by transcripts indicating severe dementia, which is detrimental to performance when the difference between these LMs is used as a basis for classification. In addition, we find a similar smoothing effect when using pre-trained word embeddings in place of a randomly instantiated word embedding layer. Finally, we develop a modification of Fritsch et al.'s "two perplexity" approach that is consistent with these observations - replacing the dementia model with an interpolated variant, and introducing pre-trained word embeddings at the embedding layer. Both modifications exhibit significant improvements in performance, with the best results obtained by using them in tandem. Though not strictly comparable on account of differences in segmentation of the corpus, among others, we note that the performance obtained also exceeds that reported with models trained on text alone in prior research. Code to reproduce the results of our experiments is available on GitHub.

While using transcript text directly is appealing in its simplicity, others have reported substantial improvements in performance when POS tags and paralinguistic features are incorporated, suggesting fruitful directions for future research. Furthermore, prior work on using acoustic features shows that they can contribute to discriminative models (Konig et al., 2015); however, DementiaBank audio is challenging for acoustic analysis due to poor quality and background noise. Lastly, while our results do support the claim that classification occurs on the basis of dementia-specific linguistic anomalies, we also acknowledge that DementiaBank remains a relatively small corpus by machine learning standards, and that more robust validation would require additional datasets.

# 5 Conclusion

We offer an empirical explanation for the success of the difference between neural LM perplexities in discriminating between DAT patients and controls, involving lexical frequency effects. Interrogation of control- and dementia-based LMs using synthetic transcripts and interpolation of parameters reveals inconsistencies harmful to model performance that can be remediated by incorporating interpolated models and pre-trained embeddings, with significant performance improvements.
# Acknowledgments

This research was supported by Administrative Supplement R01 LM011563 S1 from the National Library of Medicine.

# References

Daniel C Aguirre-Acevedo, Francisco Lopera, Eliana Henao, Victoria Tirado, Claudia Munoz, Margarita Giraldo, Shrikant I Bangdiwala, Eric M Reiman, Pierre N Tariot, Jessica B Langbaum, et al. 2016. Cognitive decline in a Colombian kindred with autosomal dominant Alzheimer disease: a retrospective cohort study. JAMA Neurology, 73(4):431-438.
Amit Almor, Daniel Kempler, Maryellen C. MacDonald, Elaine S. Andersen, and Lorraine K. Tyler. 1999a. Why do Alzheimer patients have difficulty with pronouns? Working memory, semantics, and reference in comprehension and production in Alzheimer's disease. Brain and Language, 67(3):202-227.
Amit Almor, Daniel Kempler, Maryellen C. MacDonald, Elaine S. Andersen, and Lorraine K. Tyler. 1999b. Why do Alzheimer patients have difficulty with pronouns? Working memory, semantics, and reference in comprehension and production in Alzheimer's disease. Brain and Language, 67(3):202-227.
Lori JP Altmann and Jill S McClung. 2008. Effects of semantic impairment on language use in Alzheimer's disease. In Seminars in Speech and Language, pages 18-31. Thieme Medical Publishers.
Alzheimer's Association. 2018. 2018 Alzheimer's disease facts and figures. Alzheimer's & Dementia, 14(3):367-429.
Arlene J. Astell and Trevor A. Harley. 1996. Tip-of-the-tongue states and lexical access in dementia. Brain and Language, 54(2):196-215.
James T Becker, François Boller, Oscar L Lopez, Judith Saxton, and Karen L McGonigle. 1994. The natural history of Alzheimer's disease: description of study cohort and accuracy of diagnosis. Archives of Neurology, 51(6):585-594.
Shauna Berube, Jodi Nonnemacher, Cornelia Demsky, Shenly Glenn, Sadhvi Saxena, Amy Wright, Donna C Tippett, and Argye E Hillis. 2018.
Stealing cookies in the twenty-first century: Measures of spoken narrative in healthy versus speakers with aphasia. American journal of speech-language pathology, 28(1S):321-329. +H Bird, MA Lambon Ralph, K Patterson, and JR Hodges. 2000. The rise and fall of frequency and imageability: how the progression of semantic dementia impacts on noun and verb production in the cookie theft description. *Brain and Language*, 73.:17 - 49. +Linda Boise, Richard Camicioli, David L Morgan, Julia H Rose, and Leslie Congleton. 1999. Diagnosing dementia: perspectives of primary care physicians. The Gerontologist, 39(4):457-464. + +Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146. +John Bond, C Stave, A Sganga, O Vincenzino, B O'connell, and RL Stanley. 2005. Inequalities in dementia care across europe: key findings of the facing dementia survey. International Journal of Clinical Practice, 59:8-14. +Thorsten Brants. 2000. Tnt - a statistical part-of-speech tagger. +Marc Brysbaert and Boris New. 2009. Moving beyond kucera and francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for american english. Behavior Research Methods, 41(4):977-990. +Trevor Cohen and Dominic Widdows. 2018. Bringing order to neural word embeddings with embeddings augmented by random permutations (earp). In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 465-475. +Claudia Cooper, Paul Bebbington, James Lindesay, Howard Meltzer, Sally McManus, Rachel Jenkins, and Gill Livingston. 2011. The meaning of reporting forgetfulness: a cross-sectional study of adults in the English 2007 Adult Psychiatric Morbidity Survey. Age and ageing, 40(6):711-717. +Jeffrey L Elman. 1990. Finding structure in time. Cognitive science, 14(2):179-211. +W. N. Francis and H. Kucera. 
1979. Brown corpus manual. Technical report, Department of Linguistics, Brown University, Providence, Rhode Island, US. +Kathleen Fraser, Jed Meltzer, and Frank Rudzicz. 2015. Linguistic features identify alzheimer's disease in narrative speech. Journal of Alzheimer's disease : JAD, 49. +Kathleen C Fraser, Jed A Meltzer, and Frank Rudzicz. 2016. Linguistic features identify alzheimer's disease in narrative speech. Journal of Alzheimer's Disease, 49(2):407-422. +Julian Fritsch, Sebastian Wankerl, and Elmar Noth. 2019. Automatic diagnosis of alzheimer's disease using neural network language models. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5841-5845. IEEE. +Elaine Giles, Karalyn Patterson, and John R. Hodges. 1996. Performance on the Boston Cookie theft picture description task in patients with early dementia of the Alzheimer's type: Missing information. *Aphasiology*, 10(4):395-408. + +Yoav Goldberg. 2016. A primer on neural network models for natural language processing. Journal of Artificial Intelligence Research, 57:345-420. +Harold Goodglass. 2000. Boston diagnostic aphasia examination: Short form record booklet. Lippincott Williams & Wilkins. +Daniel B. Hier, Karen Hagenlocker, and Andrea Gellin Shindler. 1985. Language disintegration in dementia: Effects of etiology and severity. *Brain and Language*, 25(1):117-133. +Sepp Hochreiter. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 6(02):107-116. +Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780. +Frederick Jelinek, Robert Mercer, L R Bahl, and J K Baker. 1977. Perplexity - a measure of the difficulty of speech recognition tasks. Journal of the Acoustical Society of America, 62:S63. +Jorg D. Jescheniak and Willem J. M. Levelt. 1994. 
Word frequency effects in speech production: Retrieval of syntactic information and of phonological form. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(4):824-843. +Michael I Jordan. 1986. Serial order: A parallel distributed processing approach. Technical report, CALIFORNIA UNIV SAN DIEGO LA JOLLA INST FOR COGNITIVE SCIENCE. +Sweta Karlekar, Tong Niu, and Mohit Bansal. 2018. Detecting linguistic characteristics of alzheimer's dementia by interpreting neural models. arXiv preprint arXiv:1804.06440. +Philipp Klumpp, Julian Fritsch, and Elmar Nöth. 2018. Ann-based alzheimers disease classification from bag of words. In Speech Communication; 13th ITG-Symposium, pages 1-4. VDE. +David S. Knopman and Ronald C. Petersen. 2014. Mild cognitive impairment and mild dementia: A clinical perspective. Mayo Clinic Proceedings, 89(10):1452-1459. +Alexandra Konig, Aharon Satt, Alexander Sorin, Ron Hoory, Orith Toledo-Ronen, Alexandre Derreumaux, Valeria Manera, Frans Verhey, Pauline Aalten, Phillip H. Robert, and Renaud David. 2015. Automatic speech analysis for the assessment of patients with predementia and alzheimer's disease. Alzheimer's & Dementia: Diagnosis, Assessment & Disease Monitoring, 1(1):112-124. +Willem J. M. Levelt. 2001. Spoken word production: A theory of lexical access. Proceedings of the National Academy of Sciences, 98(23):13464-13471. + +Brian Macwhinney. 2000. The childes project: Tools for analyzing talk (third edition): Volume i: Transcription format and programs, volume ii: The database. Computational Linguistics - COLI, 26:657-657. +Alex Martin and Linda L Chao. 2001. Semantic memory and the brain: structure and processes. Current opinion in neurobiology, 11(2):194-201. +Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017. Regularizing and optimizing LSTM language models. +Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. 
+Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association. +Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111-3119. +Sylvester O Orimaye, Jojo SM Wong, Karen J Golden, Chee P Wong, and Ireneous N Soyiri. 2017. Predicting probable alzheimer's disease using linguistic deficits and biomarkers. BMC bioinformatics, 18(1):34. +Sylvester Olubolu Orimaye, Jojo Sze-Meng Wong, and Karen Jennifer Golden. 2014. Learning predictive linguistic features for alzheimers disease and related dementias using verbal utterances. In Proceedings of the Workshop on Computational Linguistics and Clinical Psychology: From Linguistic Signal to Clinical Reality, pages 78-87. +Sylvester Olubolu Orimaye, Jojo Sze-Meng Wong, and Chee Piau Wong. 2018. Deep language space neural network for classifying mild cognitive impairment and alzheimer-type dementia. *PloS one*, 13(11):e0205636. +Serguei V. S. Pakhomov, Lynn E. Eberly, and David S. Knopman. 2018. Recurrent perseverations on semantic verbal fluency tasks as an early marker of cognitive impairment. Journal of Clinical and Experimental Neuropsychology, 40(8):832-840. +Serguei V S Pakhomov, Glenn E Smith, Dustin Chacon, Yara Feliciano, Neill Graff-Radford, Richard Caselli, and David S Knopman. 2010a. Computerized analysis of speech and language to identify psycholinguistic correlates of frontotemporal lobar degeneration. Cognitive and behavioral neurology : official journal of the Society for Behavioral and Cognitive Neurology, 23(3):165-177. + +Serguei VS Pakhomov, Glenn E Smith, Susan Marino, Angela Birnbaum, Neill Graff-Radford, Richard Caselli, Bradley Boeve, and David S Knopman. 2010b. 
A computerized technique to assess language use patterns in patients with frontotemporal dementia. Journal of neurolinguistics, 23(2):127-144. +Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2012. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2. +Seija Pekkala, Debra Wiener, Jayandra J. Himali, Alexa S. Beiser, Loraine K. Obler, Yulin Liu, Ann McKee, Sanford Auerbach, Sudha Seshadri, Philip A. Wolf, and Rhoda Au. 2013. Lexical retrieval in discourse: An early indicator of alzheimer's dementia. Clinical Linguistics & Phonetics, 27(12):905-921. PMID: 23985011. +Robert Perneczky, Stefan Wagenpfeil, Katja Komossa, Timo Grimmer, Janine Diehl, and Alexander Kurz. 2006. Mapping scores onto stages: mini-mental state examination and clinical dementia rating. The American journal of geriatric psychiatry, 14(2):139-144. +Emily Prud'hommeaux and Brian Roark. 2015. Graph-based word alignment for clinical language evaluation. Computational Linguistics, 41(4):549-578. +Kumar B Rajan, Robert S Wilson, Jennifer Weuve, Lisa L Barnes, and Denis A Evans. 2015. Cognitive impairment 18 years before clinical diagnosis of alzheimer disease dementia. *Neurology*, 85(10):898-904. +Brian Roark, Margaret Mitchell, and Kristy Hollingshead. 2007. Syntactic complexity measures for detecting mild cognitive impairment. In *Biological, translational, and clinical language processing*, pages 1-8. +Brian Roark, Margaret Mitchell, John-Paul Hosom, Kristy Hollingshead, and Jeffrey Kaye. 2011. Spoken language derived measures for detecting mild cognitive impairment. IEEE transactions on audio, speech, and language processing, 19(7):2081-2090. +Jonathan D. Rohrer, William D. Knight, Jane E. Warren, Nick C. Fox, Martin N. Rossor, and Jason D. Warren. 2007. Word-finding difficulty: a clinical analysis of the progressive aphasias. *Brain*, 131(1):8-38. +Laura Stokes, Helen Combes, and Graham Stokes. 2015. 
The dementia diagnosis: a literature review of information, understanding, and attributions. Psychogeriatrics, 15(3):218-225.
Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2014. RWTHLM - the RWTH Aachen University neural network language modeling toolkit. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pages 2093-2097.
Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM neural networks for language modeling. In Thirteenth Annual Conference of the International Speech Communication Association.
Sebastian Wankerl, Elmar Nöth, and Stefan Evert. 2016. An analysis of perplexity to reveal the effects of Alzheimer's disease on language. In Speech Communication; 12. ITG Symposium; Proceedings of, pages 1-5. VDE.
Sebastian Wankerl, Elmar Nöth, and Stefan Evert. 2017. An n-gram based approach to the automatic diagnosis of Alzheimer's disease from spoken language. In INTERSPEECH, pages 3162-3166.
Jochen Weiner and Tanja Schultz. 2018. Automatic screening for transition into dementia using speech. In Speech Communication; 13th ITG-Symposium, pages 1-5. VDE.
Robert S. Wilson, Lynd D. Bacon, Jacob H. Fox, Richard L. Kramer, and Alfred W. Kaszniak. 1983. Word frequency effect and recognition memory in dementia of the Alzheimer type. Journal of Clinical Neuropsychology, 5(2):97-104.
Maria Yancheva and Frank Rudzicz. 2016. Vector-space topic models for detecting Alzheimer's disease. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2337-2346.
\ No newline at end of file diff --git a/ataleoftwoperplexitiessensitivityofneurallanguagemodelstolexicalretrievaldeficitsindementiaofthealzheimerstype/images.zip b/ataleoftwoperplexitiessensitivityofneurallanguagemodelstolexicalretrievaldeficitsindementiaofthealzheimerstype/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..1e053fced4fc52b349f6751dfd9f49430eba2019 --- /dev/null +++ b/ataleoftwoperplexitiessensitivityofneurallanguagemodelstolexicalretrievaldeficitsindementiaofthealzheimerstype/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a90c0099aec375beca7f5fa6210bfbc583eba55c9ff8c3eb0857177058111bcd +size 123493 diff --git a/ataleoftwoperplexitiessensitivityofneurallanguagemodelstolexicalretrievaldeficitsindementiaofthealzheimerstype/layout.json b/ataleoftwoperplexitiessensitivityofneurallanguagemodelstolexicalretrievaldeficitsindementiaofthealzheimerstype/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..b948019e59ff5152fa4314224f232696ed71bd6f --- /dev/null +++ b/ataleoftwoperplexitiessensitivityofneurallanguagemodelstolexicalretrievaldeficitsindementiaofthealzheimerstype/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a28591d9864a3951016eab0c52db632569f193f3ac6c63deaad022fe5e4a2cf0 +size 336142 diff --git a/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_content_list.json b/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1ced70720a61508f22eff4fd04532d9f41a611c0 --- /dev/null +++ b/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:78a25dd8731fd9972364a3e0880ca327a92bc60ca13a10017e6a438573d02cd9 +size 35630 diff --git 
a/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_model.json b/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_model.json new file mode 100644 index 0000000000000000000000000000000000000000..d8744045409ac79579c57c22047654195ed0d919 --- /dev/null +++ b/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2cbed3fb99ae1ab8c0357a25e69e1e4fa3f089185a7dc4faae88e962b79edcc3 +size 42045 diff --git a/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_origin.pdf b/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e57ba6bce6b458f1e4aca881ac688e2a2161a098 --- /dev/null +++ b/athreeparameterrankfrequencyrelationinnaturallanguages/e8c8bbb8-4f72-492d-a9ef-604dff721e58_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8c76069ed920340ebfc13ab257893b2e193528b2a1b8ed7e0ae4e8563b43c7bc +size 701409 diff --git a/athreeparameterrankfrequencyrelationinnaturallanguages/full.md b/athreeparameterrankfrequencyrelationinnaturallanguages/full.md new file mode 100644 index 0000000000000000000000000000000000000000..6f72bf4a08250e0c7f48b132b76b1186c0607fbb --- /dev/null +++ b/athreeparameterrankfrequencyrelationinnaturallanguages/full.md @@ -0,0 +1,129 @@ +# A Three-Parameter Rank-Frequency Relation in Natural Languages + +Chenchen Ding, Masao Utiyama, Eiichiro Sumita + +Advanced Translation Technology Laboratory, + +Advanced Speech Translation Research and Development Promotion Center, + +National Institute of Information and Communications Technology + +3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan + +{chenchen.ding, mutiyama, eiichiro-sumita}@nict.go.jp + +# Abstract + +We present 
that the rank-frequency relation in textual data follows $f \propto r^{-\alpha}(r + \gamma)^{-\beta}$, where $f$ is the token frequency and $r$ is the rank by frequency, with $(\alpha, \beta, \gamma)$ as parameters. The formulation is derived based on the empirical observation that $d^2(x + y) / dx^2$ is a typical impulse function, where $(x, y) = (\log r, \log f)$. The formulation reduces to the power law when $\beta = 0$ and to the Zipf-Mandelbrot law when $\alpha = 0$. We illustrate that $\alpha$ is related to the analytic features of syntax and $\beta + \gamma$ to those of morphology in natural languages, based on an investigation of multilingual corpora.

# 1 Introduction

Zipf's law (Zipf, 1935, 1949) is an empirical law formulating the rank-frequency (r-f) relation in physical and social phenomena. Linguistically, Zipf's law can be observed in the distribution of words in corpora of natural languages, where the frequency $(f)$ of words is inversely proportional to their rank $(r)$ by frequency; that is, $f \propto r^{-1}$. Zipf's law is a special form of a general power law, $f \propto r^{-\alpha}$, with $\alpha = 1$.

The Zipf's/power law is usually examined on a log-log plot of rank and frequency, where the data points lie on a straight line. The simple proportionality of the Zipf's/power law can be observed on randomly generated textual data (Li, 1992), and it only roughly depicts the r-f relation in real textual data. A two-parameter generalization of the Zipf's/power law is the Zipf-Mandelbrot law, where $f \propto (r + \beta)^{-\alpha}$ (Mandelbrot, 1965). Li et al. (2010) considered the reversed rank $r_{max} + 1 - r$, where $r_{max}$ is the maximum ranking index, and proposed a two-parameter formulation of $f \propto r^{-\alpha}(r_{max} + 1 - r)^{\beta}$.
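The quantities involved can be made concrete with a short sketch: extracting a rank-frequency profile from tokens, and checking Zipf's law in its logarithmic form $\log r + \log f = C$ on an exactly Zipfian profile. The toy corpus and constant $K$ are invented for illustration.

```python
import math
from collections import Counter

def rank_frequency(tokens):
    """(rank, frequency) pairs, with rank 1 for the most frequent word type."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    return list(enumerate(freqs, start=1))

# On a toy corpus, rank 1 goes to the most frequent type.
assert rank_frequency("a a a b b c".split()) == [(1, 3), (2, 2), (3, 1)]

# Zipf's law in logarithmic form: x + y = C with (x, y) = (log r, log f).
# For an exactly Zipfian profile f = K / r, log r + log f is constant in r.
K = 1200  # chosen so that K / r is an integer for r = 1..6
cs = [math.log10(r) + math.log10(K // r) for r in range(1, 7)]
assert all(abs(c - cs[0]) < 1e-9 for c in cs)
```

Real corpora deviate from this constancy, which is exactly the deviation the three-parameter formulation is designed to capture.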
As a straightforward observation, the coefficients of proportionality should be distinguished for common and rare words (Powers, 1998; Li et al., 2010). Therefore, an extension of the original Zipf's/power law requires at least two parameters. In this study, a three-parameter formulation of $f \propto r^{-\alpha} (r + \gamma)^{-\beta}$ is derived based on the observation and analysis of multilingual corpora. It is a natural generalization of the power law and the Zipf-Mandelbrot law. The third parameter provides a depiction of the rigidity of different coefficients of proportionality. The proposed formulation can also fit non-Zipfian phenomena in natural languages, such as the r-f relation on Chinese characters. Figure 1 shows examples on English words from Europarl (Koehn, 2005) and Chinese characters of Academia Sinica from the data of Sproat and Emerson (2003).

![](images/564d4a8da5902dae2eaa63b31963693fb662ff7ebc089b9eb6d5eb730bf39b16.jpg)
Figure 1: Rank-frequency plots on English words (left) and Chinese characters (right). The $x$- and $y$-axes are $\log_{10} r$ and $\log_{10} f$, respectively. The gray curves are the proposed formulation under logarithm: $y = C - \alpha x - \beta \log_{10}(10^x + 10^\gamma)$, where $C$ is a constant. The dashed lines are the asymptotes $C - (\alpha x + \beta \gamma)$ and $C - (\alpha + \beta)x$. $(\alpha, \beta, \gamma)$ is (0.93, 2.04, 3.82) for English words and (0.59, 32.31, 4.42) for Chinese characters.

![](images/c259a72958d0751ac1e528b07af251ae114dd3a5fc825148b95e0c31d97a712e.jpg)

# 2 Proposed and Related Formulation

Under a logarithmic form, Zipf's law states that $x + y = C$, where $(x, y) = (\log r, \log f)$ and $C$ is roughly a constant. We further investigate the property of $C = g(x)$.

![](images/caa58e7b0050187ae95ea67d95d009eba06e273043fae3074de35b4a96cfed79.jpg)
Figure 2: Smoothed second-order differences on the rank-frequency relation. The $x$-axis is $\log_{10} r$.
The first and second-order differences on $g(x)$ are calculated as

$$
g_{i}^{\prime} = \frac{g_{i} - g_{i-1}}{x_{i} - x_{i-1}}, \quad g_{i}^{\prime\prime} = \frac{g_{i}^{\prime} - g_{i-1}^{\prime}}{x_{i} - x_{i-1}}. \tag{1}
$$

Here $(x_{i}, y_{i})$ is the data point of the $i$-th most frequent token, $g_{i} = x_{i} + y_{i}$, the differences are taken for $i > 1$, and $g_1' = g_1'' = 0$. Because the differences are intrinsically nonsmooth, Bézier curves are applied for smoothing in the investigation.

Figure 2 shows examples of the smoothed $g''$ on English words and Chinese characters from the same dataset used for Fig. 1. An artificial Zipfian dataset generated in the manner of Li (1992) is also used for comparison. It can be observed that $g''$ on English words and Chinese characters has an impulse, but that on the artificial data does not. Generally, the impulse becomes more obvious the more non-Zipfian the data are.

If we consider $g''$ as a general impulse function, then $g'$ is a general sigmoid function and $g$ can be modeled by a general softplus function of the form $b\log (\exp (x - c) + 1)$. Replacing $x$ with a generalized linear form $ax + d$ gives

$$
y = - d - a x - b \log (\exp (x - c) + 1), \tag{2}
$$

and substituting $(x, y)$ by $(\log r, \log f)$, we obtain

$$
f = \frac {\exp (b c - d)}{r ^ {a} (r + \exp (c)) ^ {b}} \propto r ^ {- \alpha} (r + \gamma) ^ {- \beta}, \tag{3}
$$

where $(\alpha, \beta, \gamma) = (a, b, \exp(c))$, and $\exp(bc - d)$ is a constant unrelated to $r$.

The obtained proportional form is a natural two-component extension of the power law and the

![](images/516b4a806da471f003031d410f3e40c313d56b1dd14bbc6e1c2f8df2097041.jpg)
Figure 3: English word (left) and Chinese character (right) data in Figure 1 fitted by the gray curve of $y = C - \alpha x + \beta \log_{10}(r_{max} + 1 - 10^x)$.
The dashed lines are of $C - (\alpha x + \beta \log_{10}(r_{max} + 1))$ and $C - \beta \log_{10}(r_{max} + 1 - 10^x)$ at the two ends. $(\alpha, \beta)$ is (1.15, 9.16) for English words and (0.62, 157.13) for Chinese characters.

![](images/fab9747a9b13d0fbf84ffef6d1861eba4f38a4ad14410bf815481639a82b6200.jpg)

Zipf-Mandelbrot law. Because the softplus function is a differentiable form of a rigid ramp function, Eq. (3) can also be considered a smoothed piecewise broken power law. As shown in Fig. 1, $\alpha$ and $(\alpha + \beta)$ depict the proportional coefficients at the two ends, and the proportional coefficients are switched smoothly around $x = \gamma$.

The formulation $f \propto r^{-\alpha}(r_{max} + 1 - r)^{\beta}$ proposed in Li et al. (2010) is also a two-component formulation. One more parameter (i.e., $\gamma$) in Eq. (3) is used to identify the location of the impulse observed in $g''$. Under Li's formulation, we obtain $g = y + \alpha x = \beta \log (r_{max} + 1 - \exp (x))$ and $g'' = -C_1\exp (x)(C_2 - \exp (x))^{-2}$, where $C_1$ and $C_2$ are constants. This $g''$ is a monotonically decreasing function with $x = \log (C_2)$ as the asymptote for $x < \log (C_2)$. Therefore, Li's formulation always has a steep tail and lacks the capacity to depict the switching of two stable proportional coefficients. Figure 3 shows examples using Li's formulation to fit the data in Fig. 1. It can be observed that the non-Zipfian Chinese characters are fitted well, but the tail part of the more Zipfian English words is not. This can be explained from the shape of $g''$ in Fig. 2. It is reasonable to model the $g''$ of Chinese characters using a monotonically decreasing function because the $\gamma$ in Eq. (3) is quite large (around $r_{max}$). However, this is not appropriate for English words, where a proper $\gamma$ is required.
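The unsmoothed part of this diagnostic is easy to reproduce. The sketch below implements the first- and second-order differences of Eq. (1) (without the Bézier smoothing applied in the paper) and checks that an exactly Zipfian profile, like the artificial data in Figure 2, shows no impulse; the synthetic profile $f = K/r$ is invented for the example.

```python
import math

def differences(points):
    """First- and second-order differences of g(x) = x + y, following Eq. (1).

    `points` is a list of (x_i, y_i) = (log r_i, log f_i) sorted by rank;
    by the paper's convention, g'_1 = g''_1 = 0."""
    g = [x + y for x, y in points]
    gp, gpp = [0.0], [0.0]
    for i in range(1, len(points)):
        dx = points[i][0] - points[i - 1][0]
        gp.append((g[i] - g[i - 1]) / dx)
        gpp.append((gp[i] - gp[i - 1]) / dx)
    return gp, gpp

# On exactly Zipfian data (f = K / r), g is constant, so g' and g'' vanish,
# matching the flat artificial-data curve in Figure 2; only real text
# exhibits an impulse in g''.
pts = [(math.log10(r), math.log10(1000.0 / r)) for r in range(1, 20)]
gp, gpp = differences(pts)
assert max(abs(v) for v in gp + gpp) < 1e-9
```

On real rank-frequency data, the raw differences are noisy, which is why the paper smooths them before inspecting the impulse.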
+ +Based on the analysis, it can be concluded that the formulation $f \propto r^{-\alpha}(r + \gamma)^{-\beta}$ is a generalized form that covers the Zipf's/power law, Zipf-Mandelbrot law, piecewise broken power law, and Li's two-parameter formulation. In the next section, we show the linguistic interpretation of the parameter $(\alpha, \beta, \gamma)$ . + +
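Before turning to the fitted results, the fitting target can be sketched directly. The code below recovers $(C, \alpha, \beta, \gamma)$ in the logarithmic form $y = C - \alpha x - \beta \log_{10}(10^x + 10^\gamma)$ by a coarse grid search on synthetic data; the parameter values and grids are invented for illustration, and the paper itself uses a nonlinear least-squares fitter (gnuplot's fit function) rather than a grid.

```python
import math
from itertools import product

def model(x, C, a, b, g):
    """Logarithmic form of Eq. (3): y = C - a*x - b*log10(10^x + 10^g)."""
    return C - a * x - b * math.log10(10 ** x + 10 ** g)

TRUE = (7.0, 0.9, 2.0, 3.0)  # hypothetical (C, alpha, beta, gamma)
xs = [i * 0.25 for i in range(21)]  # log10 ranks from 0 to 5
ys = [model(x, *TRUE) for x in xs]

def sse(params):
    """Sum of squared errors of `params` against the synthetic data."""
    return sum((model(x, *params) - y) ** 2 for x, y in zip(xs, ys))

# Coarse grid search standing in for a real least-squares fit.
grid = product([6.5, 7.0, 7.5], [0.8, 0.9, 1.0], [1.5, 2.0, 2.5], [2.5, 3.0, 3.5])
best = min(grid, key=sse)
assert best == TRUE
```

Note that $\gamma$ here is the parameter of the $10^\gamma$ term, i.e., it lives on the $\log_{10}$ rank scale, which is why the fitted $\gamma$ values reported below are comparable to $\log_{10} r_{max}$.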
| | $\alpha$ | $\beta$ | $\gamma$ | $\gamma / r_{max}$ |
| --- | --- | --- | --- | --- |
| bg | 0.92±.00 | 2.05±.06 | 4.25±.02 | 0.85 |
| cs | 0.86±.00 | 1.20±.01 | 3.89±.01 | 0.74 |
| da | 0.99±.00 | 1.10±.01 | 3.85±.01 | 0.69 |
| de | 0.99±.00 | 1.08±.01 | 3.94±.01 | 0.70 |
| el | 0.98±.00 | 1.96±.03 | 4.43±.01 | 0.82 |
| en | 0.93±.00 | 2.04±.01 | 3.82±.00 | 0.75 |
| es | 0.94±.00 | 1.38±.01 | 3.82±.01 | 0.73 |
| et | 0.90±.00 | 1.06±.01 | 4.13±.01 | 0.75 |
| fi | 0.87±.00 | 0.89±.01 | 4.07±.01 | 0.70 |
| fr | 1.01±.00 | 2.05±.02 | 4.14±.01 | 0.80 |
| hu | 0.92±.00 | 0.96±.02 | 4.16±.02 | 0.76 |
| it | 0.94±.00 | 1.47±.01 | 3.84±.00 | 0.73 |
| lt | 0.84±.00 | 1.04±.01 | 3.77±.01 | 0.70 |
| lv | 0.87±.00 | 1.69±.02 | 4.22±.01 | 0.81 |
| nl | 0.98±.00 | 1.18±.01 | 3.73±.01 | 0.68 |
| pl | 0.87±.00 | 1.18±.01 | 3.97±.01 | 0.76 |
| pt | 0.93±.00 | 1.33±.01 | 3.77±.01 | 0.72 |
| ro | 0.94±.00 | 5.24±.32 | 4.78±.03 | 0.97 |
| sk | 0.89±.00 | 1.38±.01 | 4.14±.01 | 0.79 |
| sl | 0.91±.00 | 1.77±.04 | 4.31±.01 | 0.84 |
| sv | 0.99±.00 | 1.05±.01 | 3.86±.01 | 0.70 |
Table 1: Fitted parameters on Europarl data.

# 3 Experiment and Discussion

We used the proposed formulation to fit data of various European languages and typical Asian languages. The Europarl corpus (Koehn, 2005) and the data from the Second International Chinese Word Segmentation Bakeoff (ICWB2) (Sproat and Emerson, 2003) were mentioned in Section 1. We also used English-Japanese patent data from the 7th NTCIR Workshop (Fujii et al., 2008). The Europarl data and the English data from NTCIR were lower-cased and tokenized using the toolkit provided by MOSES (Koehn et al., 2007). Fitting was performed under a logarithmic scale using the fit function in gnuplot. Specifically, rank-frequency data were used to fit $(\alpha, \beta, \gamma)$ and $C$ in $y = C - \alpha x - \beta \log_{10}(10^x + 10^\gamma)$. For the initialization, $(\alpha, \beta, \gamma) = (1, 1, \frac{r_{max}}{2})$ and $C = 3\gamma$ were applied.

Table 1 lists the fitting results for all the languages in the Europarl corpus. The $(\alpha, \beta, \gamma)$ with the asymptotic standard error $(\pm)$ are listed. Because $\gamma$ may depend on the vocabulary size, the normalized $\gamma_{norm} = \frac{\gamma}{r_{max}}$ is also listed. It can be observed that all the language data were fitted well with an $\alpha$ of around 1.0, which is in accordance with the original Zipf's law. $\beta$ and $\gamma_{norm}$ for each language are plotted on the left of Fig. 4. On the $\beta$-$\gamma_{norm}$ plane, we can observe the rough tendency that $\beta$ and $\gamma_{norm}$ are linear, in addition to a separation of different language branches.

![](images/4dae390b7c06b1a3f7378a5869951675359cff9c99cc55634edd8bd92302e143.jpg)
Figure 4: Distribution of languages in Europarl.

![](images/ff22fabee854c6d17caf0b0d82b4837ee927a20fbc4b6765e34c3def3c3cf9de.jpg)
Further principal component analysis on $(\alpha, \beta, \gamma_{norm})$ suggests that $\alpha$ and $\beta + \gamma_{norm}$ can generally be considered as two dominant components. The plot on the right of Fig. 4 shows that the language branches can be separated roughly by lines parallel to the axes of $\alpha$ and $\beta + \gamma_{norm}$. This indicates the linguistic explainability of the two axes.

From the nature of these languages, we consider that $\alpha$ can be explained as an axis of analysis-synthesis on syntax and $\beta + \gamma_{norm}$ as that on morphology. A large $\alpha$ suggests a couple of extremely frequent words in the corpus. As typical examples, languages with a relatively large $\alpha$, that is, Romance and Germanic, generally contain abundant prepositions, particles, and determiners to mark syntactic roles, whereas those with a smaller $\alpha$, that is, Slavic and Uralic, tend to use complex declension and conjugation within words to convey syntactic information. Interesting evidence is that bg, as a very analytic Slavic language, has a larger $\alpha$ than other Slavic languages. In another dimension, a large $\beta + \gamma_{norm}$ suggests a dramatic decrease in the frequency of rare words. Hence, languages with a small $\beta + \gamma_{norm}$, that is, Germanic and Uralic, have a more gradual decrease in rare words, which are instances of various phenomena of derivation and compounding from complex morphology. By contrast, languages with a large $\beta + \gamma_{norm}$, such as en and fr, tend to use phrases composed of multiple common words to express complex concepts, so that the drop in frequency of rare words is relatively dramatic. As $\beta + \gamma_{norm}$ is sensitive to the portion of rare words, this dimension may be easily affected by the properties of specific data. An example is ro, for which a much larger $\beta$ than for other languages was fitted.

| | $\alpha$ | $\beta$ | $\gamma$ | $\gamma / r_{max}$ |
| --- | --- | --- | --- | --- |
| a.w | 0.92±.00 | 0.73±.01 | 3.73±.02 | 0.72 |
| c.w | 0.84±.00 | 1.09±.04 | 3.84±.03 | 0.79 |
| m.w | 0.80±.00 | 1.22±.04 | 3.77±.03 | 0.81 |
| p.w | 0.81±.00 | 1.32±.06 | 3.76±.04 | 0.79 |
| a.c | 0.59±.00 | 32.31±2.04 | 4.42±.03 | 1.17 |
| c.c | 0.49±.00 | 31.30±2.73 | 4.32±.04 | 1.17 |
| m.c | 0.50±.00 | 15.51±0.52 | 3.95±.02 | 1.08 |
| p.c | 0.50±.00 | 21.02±1.18 | 4.10±.03 | 1.12 |

Table 2: Fitted parameters on ICWB2 data.

Table 2 lists the fitting results on ICWB2 Chinese data. a.$\star$, c.$\star$, m.$\star$, and p.$\star$ denote Academia Sinica, City University of Hong Kong, Microsoft Research, and Peking University data, respectively; $\star$.w and $\star$.c denote manually segmented words and characters, respectively. For the results on words, a trade-off between $\alpha$ and $\beta + \gamma_{norm}$ can be observed. Based on the previous analysis, we can consider that a.w has more segmentations on function words. Evidence for this is the segmentation of the expression shibushi (whether or not), which is composed of the three characters shi (to be), bu (not), and shi (to be). The expression is segmented into shi/bu/shi in most cases in a.w, but always kept together in m.w. Regarding characters, we have a small $\alpha$ and a huge $\beta + \gamma_{norm}$. Note that both common functional words and rare specific concepts in Chinese are commonly composed of multiple characters. Therefore, the contrast between common and rare characters is not so obvious, which leads to a small $\alpha$ (no overwhelmingly frequent functional words in syntax) and a huge $\beta + \gamma_{norm}$ (extremely analytic in morphology).

Figure 5 provides further evidence.
![](images/5d724aa9fd6734e5dd381326e53aba737d2bcab080756a493b8d5534816d4625.jpg)
Figure 5: Effects on $\alpha$ and $\beta + \gamma_{norm}$.

![](images/bb81bb6a82bf8836b669e5208d1944d92908f3db49d8ab0a04a992abc9ab94d1.jpg)

The data size of typical languages in Europarl is gradually halved, and the change of the fitted parameters is shown in the plot on the left of Fig. 5. $\star.0$ denotes the original data and $\star.n$ denotes the data of one $n$-th size. $\alpha$ does not change substantially for smaller data because of the stable syntax features and functional words. However, $\beta + \gamma_{norm}$ becomes larger, which suggests that there are fewer morphological varieties because of the smaller data size. The plot on the right of Fig. 5 shows how different word segmentations in Japanese affect the parameters. There are three common Japanese morphological analysis tools: kytea, mecab, and juman. kytea provides the most fragmentary segmentation and juman tends to attach suffixes to stems. For example, the three tools segment wakarimashita (understood, in polite form) as follows: waka / ri / ma / shi / ta (5 tokens) by kytea, wakari / mashi / ta (3 tokens) by mecab, and wakari / mashita (2 tokens) by juman. As the most fragmentary segmentation by kytea contains more functional suffixes as words, it has the largest $\alpha$; by contrast, the segmentation by juman has the smallest $\alpha$. Furthermore, mecab has a smaller $\beta + \gamma_{norm}$ because it may keep proper nouns unsegmented, which can be considered as introducing more compounded words. For example, tōkyōdaigaku (The University of Tokyo) is kept as one word by mecab, but segmented as tōkyō / daigaku (Tokyo / university) by the other two tools.

# 4 Conclusion and Future Work

We have shown that $f \propto r^{-\alpha}(r + \gamma)^{-\beta}$ for the rank-frequency relation in natural languages.
This is an explainable extension of several related formulations, with $\alpha$ related to the analytic features of syntax and $\beta + \gamma$ to those of morphology. A more general form, $f \propto \prod_{k}(r + \gamma_{k})^{-\beta_{k}}$, can be considered for further investigation; the $k$ terms can depict $k$ different proportional coefficients.

# References

Atsushi Fujii, Masao Utiyama, Mikio Yamamoto, and Takehito Utsuro. 2008. Overview of the patent translation task at the NTCIR-7 workshop. In Proc. of NTCIR, pages 389-400.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. of MT Summit, volume 5, pages 79-86.
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of ACL (Demo and Poster), pages 177-180.
Wentian Li. 1992. Random texts exhibit Zipf's-law-like word frequency distribution. IEEE Transactions on Information Theory, 38(6):1842-1845.
Wentian Li, Pedro Miramontes, and Germinal Cocho. 2010. Fitting ranked linguistic data with two-parameter functions. Entropy, 12(7):1743-1764.
Benoit Mandelbrot. 1965. Information theory and psycholinguistics.
David M. W. Powers. 1998. Applications and explanations of Zipf's law. In Proc. of NeMLaP3/CoNLL98, pages 151-160.
Richard Sproat and Thomas Emerson. 2003. The first international Chinese word segmentation bakeoff. In Proc. of the SIGHAN Workshop on Chinese Language Processing, pages 133-143.
George K. Zipf. 1935. The psycho-biology of language.
George K. Zipf. 1949. Human behaviour and the principle of least effort.
\ No newline at end of file diff --git a/athreeparameterrankfrequencyrelationinnaturallanguages/images.zip b/athreeparameterrankfrequencyrelationinnaturallanguages/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..0ecb81fb11ef05150ac4132cd492842ecc11852a --- /dev/null +++ b/athreeparameterrankfrequencyrelationinnaturallanguages/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:048904cc80f5365137a344ba5d8a405e426447018ad45d45cea5f525aa7efac1 +size 221927 diff --git a/athreeparameterrankfrequencyrelationinnaturallanguages/layout.json b/athreeparameterrankfrequencyrelationinnaturallanguages/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..8aebd6454597512541b0c237c9de10bad9b6aec6 --- /dev/null +++ b/athreeparameterrankfrequencyrelationinnaturallanguages/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a8a002ff8cd08c94a186d644199106f22f1c0fcc02599ffde9fd9aa174a7d48a +size 263187 diff --git a/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_content_list.json b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..1d1f04c45af5491a32989248158977e8c7d38f76 --- /dev/null +++ b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5e354f71eec6b5e5f6dab8a344f8060310a880ac876c3d4ae901940626d6fc97 +size 69591 diff --git a/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_model.json b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_model.json new file 
mode 100644 index 0000000000000000000000000000000000000000..3e5c7026432314244f3c4d9b293d6b0c9416cc42 --- /dev/null +++ b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8505cfef58ec189742287d810e564abd0ac84091da04f62efe033ae168b4faa7 +size 85270 diff --git a/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_origin.pdf b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7df4e85d83ecbfb144dd98fcc2207f629f6e0ae5 --- /dev/null +++ b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/91f5201f-af4a-4aef-8409-32882db29f46_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff3aca343ce6ae7568ca5131405e4423c3447c910ffdba6fdc45ab1dff048376 +size 891546 diff --git a/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/full.md b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/full.md new file mode 100644 index 0000000000000000000000000000000000000000..307bb1352b270dbb925c4360214d7a4084544a59 --- /dev/null +++ b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/full.md @@ -0,0 +1,282 @@ +# A Top-Down Neural Architecture towards Text-Level Parsing of Discourse Rhetorical Structure + +Longyin Zhang $^{1,2}$ , Yuqing Xing $^{1,2}$ , Fang Kong $^{1,2*}$ , Peifeng Li $^{1,2}$ , Guodong Zhou $^{1,2}$ + +1. Institute of Artificial Intelligence, Soochow University, China + +2. 
School of Computer Science and Technology, Soochow University, China

{lyzhang9,yqxing}@stu.suda.edu.cn

{kongfang,pfli,gdzhou}@suda.edu.cn

# Abstract

Due to its great importance in deep natural language understanding and various downstream applications, text-level parsing of discourse rhetorical structure (DRS) has been drawing more and more attention in recent years. However, all the previous studies on text-level discourse parsing adopt bottom-up approaches, which largely limit DRS determination to local information and fail to fully benefit from the global information of the overall discourse. In this paper, we justify from both computational and perceptive points of view that a top-down architecture is more suitable for text-level DRS parsing. On this basis, we propose a top-down neural architecture for text-level DRS parsing. In particular, we cast discourse parsing as a recursive split point ranking task, where a split point is classified into different levels according to its rank and the elementary discourse units (EDUs) associated with it are arranged accordingly. In this way, we can determine the complete DRS as a hierarchical tree structure via an encoder-decoder with an internal stack. Experimentation on both the English RST-DT corpus and the Chinese CDTB corpus shows the great effectiveness of our proposed top-down approach towards text-level DRS parsing.

# 1 Introduction

Text-level parsing of discourse rhetorical structure (DRS) aims to identify the overall discourse structure and the rhetorical relations between discourse units in a text.
As a fundamental research topic in natural language processing, text-level DRS parsing plays an important role in text understanding and can benefit various downstream applications, such as document summarization (Goyal and Eisenstein, 2016), sentiment analysis (Choi et al., 2016), text categorization (Ji and Smith, 2017), pronoun resolution (Sheng et al., 2017) and event temporal relation identification (Dai et al., 2019).

![](images/2af695e2f11ecd4abf71931dce8748d2215bd1378d130cca6986ceb18c88dfeb.jpg)
Figure 1: An example for DRS parsing, where the text consists of 3 sentences containing 7 EDUs.

e1: 西藏银行部门积极调整信贷结构,/Tibet's banking sector actively adjusts its credit structure,
e2: 以确保农牧业生产等重点产业的投入,/to ensure the investment in key industries such as agricultural and livestock production,
e3: 加大对工业、能源、交通、通信等建设的正常资金供应量。/and to increase the normal supply of funds for industrial, energy, transportation and communications construction.
e4: 去年新增贷款十四点四一亿元,/Last year, the newly increased loans totaled 1.441 billion yuan,
e5: 比上年增加八亿多元。/an increase of more than 800 million yuan over the previous year.
e6: 农牧业生产贷款(包括扶贫贷款)比上年新增四点三八亿元;/Loans for agricultural and livestock production (including poverty-alleviation loans) newly increased by 438 million yuan over the previous year;
e7: 乡镇企业贷款增幅为百分之六十一点八三。/The increase in loans to township enterprises was $61.83\%$.

According to Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), a text can be presented by a hierarchical tree structure known as a Discourse Tree (DT). Figure 1 illustrates an excerpt with its gold standard DRS from article chtb_0005 in the Chinese CDTB (Connective-driven Discourse Treebank) corpus (Li et al., 2014c). We can find that, in the DT, each leaf node corresponds to an elementary discourse unit (EDU), and various EDUs are recursively
In this example, 7 EDUs are connected by 6 rhetorical relations, while in each non-terminal node, the rhetorical relation and the nuclearity type are labeled. Correspondingly, text-level DRS parsing consists of three components, i.e., bare DRS generation (hierarchical span determination), rhetorical nuclearity determination and rhetorical relation classification. + +During the past decade, text-level DRS parsing has been drawing more and more attention and achieved certain success (Hernault et al., 2010; Joty et al., 2013; Feng and Hirst, 2014; Ji and Eisenstein, 2014; Heilman and Sagae, 2015; Li et al., 2016; Braud et al., 2017; Yu et al., 2018). However, all the previous studies on text-level DRS parsing adopt bottom-up approaches. That is, adjacent EDUs are recursively combined into high-level larger text spans by rhetorical relations to form a final discourse tree in a bottom-up way. In this paper, we justify that compared with a bottom-up approach, a top-down approach may be more suitable for text-level DRS parsing from two points-of-view, + +- From the computational view, only local information (i.e., the constructed DRS subtrees and their context) can be naturally employed to determine the upper layer structure in the bottom-up fashion. Due to the overwhelming ambiguities at the discourse level, global information, such as the macro topic or structure of the discourse, should be well exploited to restrict the final DRS, so as to play its important role. From the computational view, a top-down approach can make better use of global information. +- From the perceptive view, when people read an article or prepare a manuscript, they normally go from coarse to fine, from general to specific. That is, people tend to first have a general sense of the theme of the article, and then go deep to understand the details. Normally, the organization of the article is much limited by its theme. 
For text-level DRS parsing, a top-down approach can better grasp the overall DRS of a text and conform to the human perception process.

Additionally, as noted by Li et al. (2014c), a top-down strategy was employed in the Chinese CDTB annotation practice. That is, a top-down approach is consistent with the annotation practice of a DRS corpus. In this paper, we propose a top-down neural architecture for text-level DRS parsing.

In particular, we cast top-down text-level DRS parsing as a recursive split point ranking task, where the EDUs associated with each split point are arranged in different levels according to the rank of the split point. In this way, we can determine the complete DRS as a hierarchical tree structure via an encoder-decoder with an internal stack. It is worthwhile to mention that, at each time step, we use the Biaffine Attention mechanism (Dozat and Manning, 2017) to compute the attention vector and determine the next split point, along with the corresponding nuclearity and relation jointly.

# 2 Related Work

In the literature, previous studies on text-level discourse parsing can be classified into two categories: probabilistic CKY-like approaches (Hernault et al., 2010; Joty et al., 2013; Feng and Hirst, 2014; Li et al., 2014a, 2016) and transition-based approaches (Li et al., 2014b; Ji and Eisenstein, 2014; Heilman and Sagae, 2015; Wang et al., 2017; Braud et al., 2017; Yu et al., 2018).

Probabilistic CKY-like approaches normally exploit various kinds of lexical, syntactic and semantic features to compute the probability of the relation between EDUs, and select the two EDUs with the highest relational probability to merge into one text span. In this way, the final discourse tree is generated. Recently, various deep learning models have been employed to capture hidden information to compute the relational probability, e.g.
recursive deep models (Li et al., 2014a), and attention-based hierarchical neural network models (Li et al., 2016). As an alternative, transition-based approaches employ the dependency structure to directly represent the relations between EDUs. Li et al. (2014b) first build a discourse dependency treebank by converting the RST-DT corpus and then apply graph-based dependency parsing techniques to discourse parsing. Ji et al. (2014) propose a shift-reduce discourse parser using a representation learning approach to achieve the state-of-the-art performance. Wang et al. (2017) propose a pipelined two-stage parsing approach. First, a transition-based model is employed to parse a bare discourse tree. Then, an independent relation labeller is adopted to determine discourse relations. Braud et al. (2017) present two variants of transition-based discourse parsing using a feedforward neural network model. Yu et al. (2018) build a transition-based RST parser with implicit syntactic features. In particular, the information of sentence boundaries and paragraph boundaries is embedded as additional features.

It is worthwhile to emphasize that all the above studies on text-level discourse parsing employ bottom-up approaches. So far, only Lin et al. (2019) and Liu et al. (2019) have made preliminary explorations of constructing sentence-level DTs in a top-down fashion. Lin et al. (2019) proposed a unified framework for both EDU segmentation and sentence-level discourse parsing. Following the work of Lin et al. (2019), Liu et al. (2019) proposed a hierarchical pointer network for better dependency and sentence-level discourse parsing. However, both studies consider merely sentence-level discourse parsing. While it is simple yet effective to encode an entire sentence sequentially, text-level discourse units larger than a sentence, such as paragraphs and documents, are obviously much more complicated.
Statistics on the RST-DT corpus show that each sentence contains only 2.5 EDUs on average, while each document contains 55.6 EDUs on average. The representation of large text spans can thus heavily impact parsing performance.

In this paper, we present a top-down neural architecture for text-level discourse rhetorical structure parsing. Different from Lin et al. (2019) and Liu et al. (2019), we propose a hierarchical discourse encoder to better represent the text span using both EDUs and split points. Benefiting from effective representation of large text spans, our text-level discourse parser achieves competitive or even better results than the best reported discourse parsers, whether neural or non-neural with hand-engineered features.

# 3 Top-down Neural Architecture

Our top-down neural architecture consists of three parts, i.e., EDU Encoder, Split Point Encoder and Attention-based Encoder-Decoder. Among them, the EDU encoder and the split point encoder are responsible for representing the EDUs and the split points, respectively. Different from Lin et al. (2019) and Liu et al. (2019), we combine the representation of both EDUs and split points hierarchically to better represent the text span, rather than only using the representation of the last EDU as the representation of the text span. In this way, global information can be exploited for our text-level discourse parsing. In the following, we take Figure 1 as the example to illustrate the architecture.

![](images/6b0977398a656073a6bac21c81f0f0d65da8ab1afb02141e566a5d93399ef8aa.jpg)
Figure 2: Architecture of the EDU encoder.

# 3.1 EDU Encoder

Figure 2 shows the procedure of the EDU Encoder.

Consider a given discourse $D = \{E_1, \ldots, E_N\}$, where $N$ is the number of EDUs and $E_k$ is the $k$th EDU. The EDU encoder is responsible for encoding each EDU.
For each $E_k \in D$ with $E_k = \{w_1, w_2, \ldots, w_n\}$, where $w_i$ is the $i$th word of $E_k$ and $n$ is the number of words, we first concatenate the word embedding and the POS embedding for each word. Then, the combined vectors are fed into a bi-directional GRU network (Cho et al., 2014). The output of the $i$th word is $h_i$, and the last states of the BiGRU in the forward and backward directions are denoted as $\overrightarrow{h_s}$ and $\overleftarrow{h_s}$ (i.e., $\overrightarrow{h_s} = \overrightarrow{h_n}$, $\overleftarrow{h_s} = \overleftarrow{h_1}$).

Considering the different importance of each word in a given EDU, we employ a self-attention mechanism to calculate the weight of each word. Eq. 1 shows the weight calculation formula, where we take the dot product of a learnable vector $q$ and $h_i$ as the weight of the $i$th word in the EDU.

$$
w_i = \frac{q^T h_i}{\sum_j q^T h_j} \tag{1}
$$

In this way, we obtain the encoding $h_{ek}$ of the $k$th EDU in the given discourse $D$.

$$
h_{ek} = \left[ \begin{array}{c} \overrightarrow{h_s} \\ \overleftarrow{h_s} \end{array} \right] + \sum_i w_i h_i \tag{2}
$$

# 3.2 Split Point Encoder

In this paper, we call the split position between any two EDUs the split point. A discourse containing $n$ EDUs has $n - 1$ split points. For example, Figure 1 contains 7 EDUs and 6 split points. The split point encoder is responsible for encoding each split point. In our model, we use both the EDUs on the left and right sides of the split point to compute the split point representation.

![](images/d129b393510663805eed4d74d2489d86c001d081758c08e13b830b4b2f329d8b.jpg)
Figure 3: Architecture of the split point encoder.

After encoding each EDU using the EDU encoder, we can get the sequence of encoded EDUs $h_e = \{h_{e1}, \dots, h_{eN}\}$, which are further fed into a bi-directional GRU network to get the final sequence of encoded EDUs $h_e' = \{h_{e1}', \dots, h_{eN}'\}$.
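The attention pooling of Eqs. 1 and 2 above can be sketched in numpy; the BiGRU states and query vector below are random stand-ins for the trained parameters, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d = 6, 8                      # 6 words, hidden size 8 per direction
h = rng.normal(size=(n, 2 * d))  # h_i: concatenated BiGRU states per word
q = rng.normal(size=2 * d)       # learnable query vector q

# Eq. 1: w_i = q^T h_i / sum_j q^T h_j  (normalized attention weights)
scores = h @ q
w = scores / scores.sum()

# Eq. 2: h_ek = [forward last state; backward last state] + sum_i w_i h_i
h_last = np.concatenate([h[-1, :d], h[0, d:]])
h_ek = h_last + (w[:, None] * h).sum(axis=0)
print(h_ek.shape)
```

Note that Eq. 1 as written normalizes raw dot products rather than applying a softmax, so the sketch mirrors that choice.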
For the convenience of calculation, we first add two additional zero vectors at the start and end of the EDU sequence as stubs. Then, we use a convolutional network to compute the final split point representation. Here, the width of the convolution kernel is set to 2, and the Rectified Linear Unit (ReLU) activation function is employed to map the input $h_e' = \{h_{e0}', h_{e1}', \dots, h_{eN}', h_{e(N+1)}'\}$ to the output $h_s = \{h_{s0}, h_{s1}, \dots, h_{sN}\}$.

Figure 3 takes the example shown in Figure 1 to demonstrate the working procedure of the split point encoder. The input is the 7 EDU encodings obtained during the EDU encoder stage, i.e., the vector sequence $\{h_{e1}\dots h_{e7}\}$. The output is the 8 split point representation vectors $\{h_{s0}\dots h_{s7}\}$, where the first and last vectors are just stubs and the remaining 6 vectors are meaningful outputs for the following stages.

# 3.3 Attention-based Encoder-Decoder on Split Point Ranking

After achieving the representation of each split point, an encoder-decoder with an internal stack is employed to rank the split points and thereby obtain the predicted discourse parse tree.

Figure 4 shows the complete encoder-decoder framework, where the left part shows the encoder. Here, the achieved split point representation vectors $h_s = \{h_{s0}, h_{s1}, \ldots, h_{sN}\}$ are fed into a bi-directional GRU network to get the output $h_{se} = \{h_{se0}, h_{se1}, \ldots, h_{seN}\}$. At the same time, the combination of the last states of the bi-directional GRU network in both directions is taken as the initial state of the decoder.

![](images/514ee1c7b6e54b49f5551ce6776099d6f33ff06d7308e073de8a4edfe032bec4.jpg)
Figure 4: A parsing example of the attention-based encoder-decoder.

During the decoder stage, a uni-directional GRU network with an internal stack is employed for our discourse parser.
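The zero-padded, width-2 convolution of the split point encoder (Section 3.2) can be sketched as follows; the EDU vectors and kernel are random stand-ins, not trained weights:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 7, 4                       # 7 EDUs (as in Figure 1), EDU vector size 4
h_e = rng.normal(size=(N, d))     # h'_e: BiGRU-encoded EDU vectors (stand-ins)

# Pad with zero stubs h'_e0 and h'_e(N+1); a width-2 kernel then lets each
# split point see the EDU to its left and the EDU to its right.
padded = np.vstack([np.zeros((1, d)), h_e, np.zeros((1, d))])
W = rng.normal(size=(2 * d, d))   # width-2 convolution kernel (illustrative)

pairs = np.hstack([padded[:-1], padded[1:]])   # (N+1, 2d) adjacent pairs
h_s = np.maximum(pairs @ W, 0.0)               # ReLU activation
print(h_s.shape)                  # split points 0..7; first and last are stubs
```

The output has $N + 1 = 8$ rows, matching the 8 split point vectors $\{h_{s0}\dots h_{s7}\}$ described above.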
Initially, the stack contains only one element, i.e., the index pair of the first and the last split points of the complete discourse, $(0,N)$. At each decoding step, the index pair of the boundary split points is first popped from the top of the stack. Suppose the index pair is $(l,r)$ at the $j$th step. Then, the encoding outputs $h_{sel}$ and $h_{ser}$ are concatenated to form the input of the decoder, and the decoder output at the $j$th step is denoted by $h_{dj}$. After that, we apply the Biaffine Attention mechanism to the encoder outputs corresponding to the split points between the boundary split points (i.e., $h_{sem},\forall m,l\leq m\leq r$) and the decoder output $h_{dj}$. Finally, the split point with the largest score is selected as the result of this time step. If there are still unselected split points in the new text spans formed by this decision, they are pushed onto the stack for following steps.

Figure 4 shows the parsing steps for the example shown in Figure 1. Here, the arrows in red indicate the selected split points at each time step. $h_{se0}$ and $h_{se7}$ represent the start and end points of the given discourse, and do not participate in the split point selection during decoding. In particular, the stack is first initialized with only one element, (0, 7). That is, all EDUs form a complete text span at the very beginning, and we feed the concatenated vector $[h_{se0}; h_{se7}]$ into the decoder to achieve the output $h_{d1}$. Then, the weight is computed using $h_{d1}$ and the encoder outputs corresponding to the 6 split points between split points 0 and 7, i.e., $h_{se1} \ldots h_{se6}$. In this example, since split point 3 has the largest weight, the text span is split into two parts, i.e., (0, 3) and (3, 7). Because there are still unselected split points in the text spans (0, 3) and (3, 7), we push them onto the stack. In this way, we get one split point at each
After six iterations, the complete discourse rhetorical tree is built. + +# 3.4 Biaffine Attention on Text-level DRS Parsing + +After achieving the split point representation, we adopt the Biaffine Attention mechanism to determine the split point, nuclearity and discourse relation jointly. Since applying smaller multi-layer perceptrons (MLPs) to the recurrent output states before the biaffine classifier has the advantage of stripping away information not relevant to the current decision, we first employ a one-layer perceptron to the output vectors of the encoder $h_{sei}$ and the decoder $h_{dj}$ with $ReLU$ as its activation function. The converted vectors are denoted by $h_{sei}'$ and $h_{dj}'$ . Then, we compute the biaffine attention score function. + +$$ +\begin{array}{l} s _ {j} ^ {i} = h _ {s e i} ^ {\prime} ^ {T} W h _ {d j} ^ {\prime} + U h _ {s e i} ^ {\prime} + V h _ {d j} ^ {\prime} + b; \\ W \in \mathbb {R} ^ {m \times k \times n}, U \in \mathbb {R} ^ {k \times m}, V \in \mathbb {R} ^ {k \times n}, s _ {j} ^ {i} \in \mathbb {R} ^ {k} \tag {3} \\ \end{array} +$$ + +where $W, U, V, b$ are parameters, denoting the weight matrix of the bi-linear term, the two weight vectors of the linear terms, and the bias vector, respectively, $s_j^i$ means the score of the $i$ th split point over different categories, and the $k$ denotes the number of categories (for split point determination, $k = 1$ ; for nuclearity determination, $k = 3$ ; for discourse relation classification, $k = 18$ in English and $k = 16$ in Chinese). In this way, we can determine the split point, nuclearity and discourse relation jointly. + +From Eq. 3, we can find that the biaffine attention score function contains three parts, the encoding output, the decoding output, and the combination of the encoder and the decoder in a bilinear way. 
Among them, the encoding output can be viewed as the information about the current split point, while the decoding output indicates the information about the boundary points and the historical split points.

# 3.5 Model Training

In comparison with transition-based approaches, our approach can not only maintain a linear parsing time, but also perform batch training and decoding in parallel. In particular, we optimize our discourse parsing model using the Negative Log Likelihood Loss (NLL Loss), which consists of three parts, i.e., the Split Point Prediction Loss $(L_{s})$, the Nuclearity Prediction Loss $(L_{n})$, and the Relation Prediction Loss $(L_{r})$. Among them, the split point prediction loss is used to maximize the probability of selecting the correct split point at each decoding step. Here, we use Eq. 4 to compute the loss, assuming that the correct split point number at the $i$th step of the decoder is $j$.

$$
L_s = \sum_{\text{batch}} \sum_{\text{steps}} -\log\left(\hat{p}_i^s \mid \theta\right) \tag{4}
$$

$$
\hat{p}_i^s = \frac{s_{i,j}^{split}}{\sum_{j'} s_{i,j'}^{split}} \tag{5}
$$

Similarly, the Nuclearity Prediction Loss and the Relation Prediction Loss maximize the probability of the correct nuclearity and discourse relation for each correct split point determined by the decoder, respectively. Since the convergence speed of these three parts differs during the training process, we take the combined one (Eq. 6) as the final loss function and adjust the parameters on the development set.

$$
L = \alpha_s L_s + \alpha_n L_n + \alpha_r L_r \tag{6}
$$

# 4 Experimentation

In this section, we systematically evaluate our top-down text-level discourse parser.
+ +# 4.1 Experimental Setting + +# 4.1.1 Datasets + +In this paper, we employ both the English RST Discourse Treebank (RST-DT) (Carlson and Marcu, 2001) and the Chinese Connective-driven Discourse TreeBank (CDTB) (Li et al., 2014c) as the benchmark data sets. + +In an RST-style discourse tree, the leaf nodes are non-overlapping text spans called elementary discourse units (EDUs), and internal nodes are the concatenation of continuous EDUs. Adjacent nodes are related through particular discourse relations to form a discourse subtree, which is related to other adjacent nodes in the tree structure. In this way, the hierarchical tree structure is established. The English RST-DT corpus is annotated under the framework of RST. Each document is represented as one DT. It consists of 385 documents (347 for training and 38 for testing) from the Wall Street Journal. We randomly select 34 documents from the training set as our development set. + +
| Parameter | English | Chinese |
|---|---|---|
| POS Embedding | 30 | 30 |
| EDU Encoder BiGRU | 256 | 256 |
| Encoder BiGRU | 256 | 256 |
| Decoder GRU | 512 | 512 |
| bi-directional GRU | 256 | 256 |
| uni-directional GRU | 512 | 512 |
| Dropout | 0.2 | 0.33 |
| Split Point Biaffine Attention MLP | 64 | 64 |
| Nuclear Biaffine Attention MLP | 64 | 32 |
| Relation Biaffine Attention MLP | 64 | 128 |
| Epoch | 20 | 20 |
| Batch Size | 10 | 64 |
| Learning Rate | 0.001 | 0.001 |
| $\alpha_s$ | 0.3 | 0.3 |
| $\alpha_n$ | 1.0 | 1.0 |
| $\alpha_r$ | 1.0 | 1.0 |
The Chinese CDTB corpus is motivated by taking advantage of both the English RST-DT corpus (e.g., the tree structure, the nuclearity representation) and the PDTB corpus (e.g., the connective-driven predicate-argument structure) (Prasad et al., 2008). In the Chinese CDTB corpus, each paragraph is marked as a Connective-driven Discourse Tree (CDT), where its leaf nodes are EDUs, its intermediate nodes represent (insertable) connectives (i.e., discourse relations), and EDUs connected by connectives can be combined into higher-level discourse units. Currently, the Chinese CDTB corpus consists of 500 newswire articles, which are further divided into 2336 paragraphs, with one CDT representation per paragraph and 10650 EDUs in total. We divide the corpus into three parts, i.e., 425 training documents containing 2002 discourse trees and 6967 discourse relations, 25 development documents containing 105 discourse trees and 396 discourse relations, and 50 test documents containing 229 discourse trees and 993 discourse relations.

# 4.1.2 Evaluation Metrics

To evaluate the parsing performance, we use three standard measures: unlabeled (i.e., hierarchical span) and labeled (i.e., nuclearity and relation) F-scores.

As in previous studies, we evaluate our system with gold EDU segmentation and binarize non-binary subtrees with right-branching. We use the 18 fine-grained relations defined in Carlson and Marcu (2001) and the 16 fine-grained relations defined in Li et al. (2014c) to evaluate the relation metric for English and Chinese respectively. In order to avoid the problem that the performance with RST-Parseval evaluation (Marcu, 2000) looks unreasonably high, we follow Morey et al. (2018), which adopts the standard Parseval procedure. For fair comparison, we report micro-averaged $F_{1}$ scores by default.

Table 1: Experimental parameter settings.

# 4.1.3 Hyper-parameters

We use the word embedding representations based on the 300D vectors provided by GloVe (2014) and Qiu (2018) for English and Chinese respectively, and do not update the weights of these vectors during training, while the POS embedding uses random initialization and is optimized with our model. We fine-tune the hyper-parameters on the development set as shown in Table 1.

# 4.2 Experimental Results

# 4.2.1 Overall Performance

First, Table 2 compares the detailed performance of our top-down discourse parser with the state of the art on gold standard EDUs.

| | Systems | Bare | Nuc | Rel | Full |
|---|---|---|---|---|---|
| EN | Top-down (Ours) | 67.2 | 55.5 | 45.3 | 44.3 |
| | Ji & Eisenstein (2014)+ | 64.1 | 54.2 | 46.8 | 46.3 |
| | Feng & Hirst (2014)+ | 68.6 | 55.9 | 45.8 | 44.6 |
| | Li et al. (2016)+ | 64.5 | 54.0 | 38.1 | 36.6 |
| | Braud et al. (2016) | 59.5 | 47.2 | 34.7 | 34.3 |
| | Braud et al. (2017)* | 62.7 | 54.5 | 45.5 | 45.1 |
| CN | Top-down (Ours) | 85.2 | 57.3 | 53.3 | 45.7 |
| | Sun & Kong (2018) (Dup) | 84.8 | 55.8 | 52.1 | 47.7 |

Table 2: Performance Comparison. (Bare, bare DRS generation. Nuc, nuclearity determination. Rel, rhetorical relation classification. Full, full discourse parsing. The sign $^+$ means the systems with additional handcrafted features, including syntactic, contextual and so on; \* means with additional cross-lingual features.)

For English RST-style text-level discourse parsing, we evaluate our top-down discourse parser on the RST-DT corpus and compare our model with five state-of-the-art systems as mentioned in Morey (2018), using the same evaluation metrics.

- Ji and Eisenstein (2014), a shift-reduce parser that learns the representation of discourse units and trains an SVM classifier jointly with a lot of hand-crafted features.
- Feng and Hirst (2014), a two-stage greedy parser with linear-chain CRF models.
- Li et al. (2016), an attention-based hierarchical model along with hand-crafted features.
- Braud et al. (2016), a sequence-to-sequence parser that is heuristically constrained to build trees with a hierarchical neural model.
- Braud et al. (2017), a transition-based neural model with a lot of cross-lingual features.
For Chinese CDT-style text-level discourse parsing, there are much fewer studies. Sun and Kong (2018) propose a complete transition-based Chinese discourse structure generation framework. However, they only addressed tree structure generation and did not consider discourse relation classification. In fact, just as noted in Wang et al. (2017), a transition-based model is more appropriate for parsing the bare discourse tree structure due to the data sparsity problem. In addition, since relation classification can benefit from the bare tree structure, a two-stage parsing strategy can normally achieve better performance. In comparison, with the support of local contextual information of split points and global high-level discourse structure information, our top-down architecture is able to identify the discourse structure and discourse relations jointly. For fair comparison, we duplicate the approach proposed by Sun and Kong (2018), and evaluate it under the same experimental settings$^3$. We call this system the duplicated system (denoted as "Dup"). Table 2 shows that,

- For English, our top-down system achieves comparable performance with the state-of-the-art systems. It is worthwhile to note that we focus on the effectiveness of our proposed top-down architecture in this paper. The performance of our top-down system is achieved without any other additional features, while other systems employ

$^{3}$ Sun and Kong (2018) reported their performance using macro-averaged $F_{1}$ scores. In fact, this increases the weight of shorter documents. For Chinese CDTB, each paragraph is represented as a CDT. Statistics on the distribution of CDT heights show that one CDT contains about 4.5 EDUs on average, with an average height of about 3.42. In this paper, we report the performance using micro-averaged $F_{1}$ scores.
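The macro/micro distinction in the footnote matters because micro-averaging pools counts over all documents, while macro-averaging weights each document equally regardless of length. A small sketch with hypothetical per-document span counts (not the paper's data):

```python
# Micro- vs. macro-averaged F1 over documents, with made-up counts:
# (correct, predicted, gold) spans per document.
docs = [
    (8, 10, 10),  # long document: 8 correct of 10 predicted / 10 gold
    (1, 2, 2),    # short document
    (1, 2, 2),    # short document
]

def f1(correct, predicted, gold):
    p = correct / predicted
    r = correct / gold
    return 2 * p * r / (p + r) if p + r else 0.0

micro = f1(*map(sum, zip(*docs)))              # pool counts, then score
macro = sum(f1(*d) for d in docs) / len(docs)  # score each doc, then average
print(round(micro, 3), round(macro, 3))
```

Here the two short documents pull the macro average down to 0.6 while the micro average stays near 0.714, showing how macro-averaging amplifies the weight of short documents.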
Furthermore, to gain detailed comparison between the bottom-up and the top-down approaches, we also report the performance of relation classification and full discourse parsing. + +
| Language | Bare | Nuclearity | Relation | Full |
|---|---|---|---|---|
| EN | 62.3 | 50.1 | 40.7 | 39.6 |
| CN | 80.2 | 53.2 | 48.5 | 41.7 |
+ +Table 3: Performance under a full automatic setting. + +various additional features. For example, both Ji and Eisenstein (2014) and Feng and Hirst (2014) employed many kinds of additional hand-crafted features including syntactic, contextual and so on, while Braud et al. (2017) resort to additional cross-lingual features and achieve the gain of 3.2, 7.3, 10.8 and 10.8 on the four evaluation metrics respectively in comparison with Braud et al. (2016). This indicates the great preference of top-down over bottom-up text-level DRS parsing. This also suggests the great potential of additional carefully designed features, which are worth exploring in the future work. + +- For Chinese, our top-down text-level DRS parser significantly outperforms Sun and Kong (2018) on bare DRS generation, nuclearity determination and relation classification with all p-values smaller than 0.01 on signcate testing. However, we find that our top-down approach achieves relatively poor performance on Full discourse parsing. This maybe due to the effectiveness of the joint learning framework as employed in Sun and Kong (2018). Traditional shift-reduce approaches cast the parsing task as a triple (i.e., shift/reduce action, nuclearity and relation type) identification task, and learn/predict the triple simultaneously, while our top-down approach divides the discourse parsing task into three independent sub-tasks, i.e., split point ranking, nuclearity determination and relation classification, and optimize our discourse parsing model only using the Negative Log Likelihood Loss. This also applies to the English discourse parser discussed above. + +- Comparing the results for English and Chinese, Chinese text-level discourse parsing looks better on all performance metrics. This maybe due to the difference between annotation strategies. In English RST-DT corpus, each document is represented as one DT, while in Chinese CDTB, each paragraph is represented as a CDT. 
As a result, the CDTs generally contain fewer EDUs and are relatively short in height. + +
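The three independent sub-tasks mentioned above (split point ranking, then nuclearity and relation classification at each split) compose into a recursive top-down parse. A minimal sketch, with hypothetical `split_scorer`, `nuc` and `rel` stand-ins for the trained components and plain recursion in place of the paper's stack-based encoder-decoder:

```python
from dataclasses import dataclass

@dataclass
class Node:
    left: int          # index of the first EDU in the span
    right: int         # one past the last EDU in the span
    nuclearity: str = ""
    relation: str = ""
    children: tuple = ()

def parse(split_scorer, nuc, rel, left, right):
    """Recursively build a discourse tree over EDUs [left, right).

    split_scorer(left, right) -> highest-ranked split k (left < k < right);
    nuc(left, k, right) and rel(left, k, right) classify the chosen split
    independently (hypothetical interfaces for the trained models).
    """
    if right - left == 1:          # a single EDU is a leaf
        return Node(left, right)
    k = split_scorer(left, right)  # split point ranking
    return Node(left, right,
                nuclearity=nuc(left, k, right),   # nuclearity determination
                relation=rel(left, k, right),     # relation classification
                children=(parse(split_scorer, nuc, rel, left, k),
                          parse(split_scorer, nuc, rel, k, right)))
```

Each internal node carries its own nuclearity and relation label, which mirrors the independent optimization of the three sub-tasks described above.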
| Height | Std | Bare↓ | Bare↑ | Nuc↓ | Nuc↑ | Rel↓ | Rel↑ | Full↓ | Full↑ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 385 | 339 | 321 | 251 | 221 | 233 | 215 | 213 | 200 |
| 2 | 220 | 183 | 184 | 117 | 115 | 116 | 111 | 94 | 101 |
| 3 | 139 | 119 | 122 | 71 | 82 | 71 | 73 | 59 | 71 |
| 4 | 88 | 75 | 78 | 52 | 58 | 44 | 42 | 39 | 40 |
| 5 | 44 | 34 | 37 | 17 | 21 | 16 | 21 | 10 | 16 |
| 6 | 26 | 18 | 21 | 13 | 13 | 6 | 9 | 6 | 9 |
| 7 | 18 | 16 | 18 | 7 | 8 | 6 | 9 | 2 | 5 |
| >= 8 | 13 | 11 | 10 | 0 | 0 | 0 | 0 | 0 | 0 |
| Overall | 933 | 795 | 791 | 535 | 521 | 497 | 486 | 426 | 445 |
+ +# 4.2.2 End-to-end Performance + +Next, Table 3 shows the performance of the end-to-end text-level discourse parser under a full automatic setting. Here, we use the two EDU detectors proposed by Li et al. (2018) and Li et al. (2013) to obtain automatic EDUs for English and Chinese respectively, and the Berkeley parser to obtain automatic parse trees. The results show that, in comparison with the overall performance using gold standard EDUs shown in Table 2, there is a significant performance reduction on all the indicators. This indicates the heavy impact of EDU segmentation. + +# 4.2.3 Detailed Analysis + +Finally, we take Chinese as an example for a detailed comparative analysis. We replicate the approach proposed by Sun and Kong (2018) and take this replicated system as the representative of the bottom-up approach. + +Table 4 first compares the results over different DT levels, listing the gold standard numbers and the correctly identified numbers. It should be noted that correctly determined nuclearity means both the bare tree node and its nuclearity are correctly recognized; correctly determined relation means both the bare node and its relation are correctly recognized; and full means all three aspects are correctly recognized. From the results we can find that, in comparison with the bottom-up approach, the top-down approach achieves better performance on the Bare, Nuc and Rel metrics, while on the Full metric the performance drops slightly. Just as noted above, this is due to the difference between the joint learning frameworks behind the two approaches. Among the three aspects, the improvement on nuclearity is the largest, while that on bare tree structure is the smallest. At each level, the performance of these + +Table 4: Performance over different DT levels. ("↓" - Top down approach, "↑" - Bottom up approach) + +
| Approach | NN | NS | SN |
| --- | --- | --- | --- |
| ↓ (top-down) | 67.0 | 42.2 | 33.7 |
| ↑ (bottom-up) | 67.6 | 35.4 | 24.5 |
+ +Table 5: Performance on nuclearity determination. + +
| Approach | EDU Num | Bare | Nuc | Rel |
| --- | --- | --- | --- | --- |
| ↑ | 1-5 | 94.8 | 57.9 | 52.0 |
| ↑ | 6-10 | 87.0 | 60.7 | 58.6 |
| ↑ | 11-15 | 78.0 | 50.1 | 45.4 |
| ↑ | 16-20 | 56.2 | 25.0 | 25.0 |
| ↑ | 21-25 | 68.9 | 47.0 | 42.4 |
| ↑ | 26-30 | 65.4 | 26.9 | 11.5 |
| ↓ | 1-5 | 97.0 | 67.1 | 56.6 |
| ↓ | 6-10 | 86.0 | 57.3 | 59.9 |
| ↓ | 11-15 | 75.2 | 50.3 | 41.4 |
| ↓ | 16-20 | 56.2 | 25.0 | 25.0 |
| ↓ | 21-25 | 76.6 | 57.7 | 40.8 |
| ↓ | 26-30 | 69.2 | 42.3 | 19.2 |
+ +Table 6: Performance over different EDU numbers. + +two approaches varies. This suggests that a bidirectional architecture may be an important direction for future work. + +Since the improvement on nuclearity is significant, we then list the detailed results of the two approaches over different nuclearity categories. Table 5 shows that our top-down approach can determine the "NS" and "SN" categories much better than the bottom-up approach. This is consistent with human perception. + +We finally divide the DTs into six groups by EDU number and evaluate the two approaches over the different groups. Table 6 shows the results. We can find that our top-down approach achieves better performance on the first, fifth and sixth groups (i.e., where the EDU number is 1-5, 21-25 and 26-30, respectively). This suggests that the proposed top-down approach may be more suitable for both ends of the EDU-number range, with the other groups comparable. + +# 5 Conclusion + +In this paper, we propose a top-down neural architecture for text-level discourse parsing. In particular, we cast the discourse parsing task as an EDU split point ranking task, where a split point is classified into different levels according to its rank, and the EDUs associated with the split point are arranged accordingly. In this way, we can determine the complete discourse rhetorical structure as a hierarchical tree structure. Specifically, after encoding the EDUs and EDU split points, an encoder-decoder with an internal stack is employed to generate the discourse tree recursively. Experimentation on the English RST-DT corpus and the Chinese CDTB corpus shows the effectiveness of our proposed approach. In future work, we will focus on more effective discourse parsing with additional carefully designed features and on joint learning with EDU segmentation. + +# Acknowledgements + +The authors would like to thank the anonymous reviewers for their helpful comments. We are deeply grateful to Cheng Sun for his inspiring ideas and preliminary work.
This work is supported by Artificial Intelligence Emergency Project 61751206 under the National Natural Science Foundation of China, Project 61876118 under the National Natural Science Foundation of China and the Priority Academic Program Development of Jiangsu Higher Education Institutions. + +# References + +Chloe Braud, Maximin Coavoux, and Anders Søgaard. 2017. Cross-lingual RST discourse parsing. arXiv preprint arXiv:1701.02946. +Chloe Braud, Barbara Plank, and Anders Søgaard. 2016. Multi-view and multi-task training of RST discourse parsers. In Proceedings of COLING 2016, pages 1903-1913. +Lynn Carlson and Daniel Marcu. 2001. Discourse tagging reference manual. ISI Technical Report ISI-TR-545, 54:56. +Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of EMNLP 2014, pages 1724-1734. + +Eunsol Choi, Hannah Rashkin, Luke Zettlemoyer, and Yejin Choi. 2016. Document-level sentiment inference with social, faction, and discourse context. In Proceedings of ACL 2016, pages 333-343. +Qianyin Dai, Longyin Zhang, and Fang Kong. 2019. Event temporal relation identification based on dependency and discourse relation. +Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of ICLR 2017. +Vanessa Wei Feng and Graeme Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing. In Proceedings of ACL 2014, pages 511-521. +Naman Goyal and Jacob Eisenstein. 2016. A joint model of rhetorical discourse structure and summarization. In Proceedings of the Workshop on Structured Prediction for NLP, pages 25-34. +Michael Heilman and Kenji Sagae. 2015. Fast rhetorical structure theory discourse parsing. arXiv preprint arXiv:1505.02425. +Hugo Hernault, Helmut Prendinger, Mitsuru Ishizuka, et al. 2010.
HILDA: A discourse parser using support vector machine classification. Dialogue & Discourse, 1(3). +Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of ACL 2014, pages 13-24. +Yangfeng Ji and Noah A. Smith. 2017. Neural discourse structure for text categorization. In Proceedings of ACL 2017, pages 996-1005. +Shafiq Joty, Giuseppe Carenini, Raymond Ng, and Yashar Mehdad. 2013. Combining intra- and multi-sentential rhetorical parsing for document-level discourse analysis. In Proceedings of ACL 2013, pages 486-496. +Jing Li, Aixin Sun, and Shafiq Joty. 2018. SegBot: A generic neural text segmentation model with pointer network. In IJCAI, pages 4166-4172. +Jiwei Li, Rumeng Li, and Eduard Hovy. 2014a. Recursive deep models for discourse parsing. In Proceedings of EMNLP 2014, pages 2061-2069. +Qi Li, Tianshi Li, and Baobao Chang. 2016. Discourse parsing with attention-based hierarchical neural networks. In Proceedings of EMNLP 2016, pages 362-371. +Sujian Li, Liang Wang, Ziqiang Cao, and Wenjie Li. 2014b. Text-level discourse dependency parsing. In Proceedings of ACL 2014, pages 25-35. +Yancui Li, Wenhe Feng, Jing Sun, Fang Kong, and Guodong Zhou. 2014c. Building Chinese discourse corpus with connective-driven dependency tree structure. In Proceedings of EMNLP 2014, pages 2105-2114. + +Yancui Li, Wenhe Feng, Guodong Zhou, and Kunhua Zhu. 2013. Research of Chinese clause identification based on comma. Acta Scientiarum Naturalium Universitatis Pekinensis, 49(1):7-14. +Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, and M Saiful Bari. 2019. A unified linear-time framework for sentence-level discourse parsing. In Proceedings of ACL 2019, pages 4190-4200. +Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, and Lidong Bing. 2019. Hierarchical pointer net parsing. In Proceedings of EMNLP 2019, pages 1006-1016. +William Mann and Sandra Thompson. 1988.
Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281. +Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. MIT Press. +Mathieu Morey, Philippe Muller, and Nicholas Asher. 2018. A dependency perspective on RST discourse parsing and evaluation. Computational Linguistics, pages 198-235. +Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP 2014, pages 1532-1543. +Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse TreeBank 2.0. In LREC 2008. +Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting correlations between intrinsic and extrinsic evaluations of word embeddings. In CCL & NLP-NABD 2017, pages 209-221. Springer. +Cheng Sheng, Fang Kong, and Guodong Zhou. 2017. Towards better Chinese zero pronoun resolution from discourse perspective. In Proceedings of NLPCC 2017, pages 406-418. +Cheng Sun and Fang Kong. 2018. A transition-based framework for Chinese discourse structure parsing. Journal of Chinese Information Processing, 32(12):26-34. +Yizhong Wang, Sujian Li, and Houfeng Wang. 2017. A two-stage parsing method for text-level discourse analysis. In Proceedings of ACL 2017: short paper, pages 184-188. +Nan Yu, Meishan Zhang, and Guohong Fu. 2018. Transition-based neural RST parsing with implicit syntax features. In Proceedings of the 27th International Conference on Computational Linguistics, pages 559-570.
\ No newline at end of file diff --git a/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/images.zip b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..9d787baa0888b891bd289c9fdad8550448a7dad7 --- /dev/null +++ b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f20953daf941e63dc9f490fca2586d5ad4726edc43fa28e2b91ec15f5f12369 +size 301469 diff --git a/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/layout.json b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..bee01e6972fe0f2e68e3e9705fe3130e500fb133 --- /dev/null +++ b/atopdownneuralarchitecturetowardstextlevelparsingofdiscourserhetoricalstructure/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f67c3bf780f41b91258210d0c18b87afa3f416a0d04416a6e7adce026c030100 +size 330769 diff --git a/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_content_list.json b/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..624a8a5d65dcf5eaa1014dfb31f1d80b3cb7c55e --- /dev/null +++ b/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f7cb832f1e90fc8b400aa4400dc5552cd20363309bfddd929e42aeaedc94866b +size 64966 diff --git a/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_model.json b/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_model.json 
new file mode 100644 index 0000000000000000000000000000000000000000..0a817d30e39e6819c462939b834b7a6c57505a4c --- /dev/null +++ b/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:5eca0a5083c659f86a2283232c76847e3a34aec20bf405a73151af5f2eb37d41 +size 77664 diff --git a/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_origin.pdf b/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..b0b61bf310451e7a8e119a5e6b107bc767ebf1aa --- /dev/null +++ b/atransformerbasedapproachforsourcecodesummarization/b4e98ec9-aecb-4bf0-a128-fc57dad2c9dd_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:39c0a7173f1816dbccd784c9ce978554b2dab9e517bfdd3302d1d1ea7ec90931 +size 337368 diff --git a/atransformerbasedapproachforsourcecodesummarization/full.md b/atransformerbasedapproachforsourcecodesummarization/full.md new file mode 100644 index 0000000000000000000000000000000000000000..f2716e6b94aba48a196b1eb7d8566aa5bc40c35e --- /dev/null +++ b/atransformerbasedapproachforsourcecodesummarization/full.md @@ -0,0 +1,317 @@ +# A Transformer-based Approach for Source Code Summarization + +Wasi Uddin Ahmad + +University of California, Los Angeles + +wasiahmad@cs.ucla.edu + +Saikat Chakraborty + +Columbia University + +saikatc@cs.columbia.edu + +Baishakhi Ray + +Columbia University + +rayb@cs.columbia.edu + +Kai-Wei Chang + +University of California, Los Angeles + +kwchang@cs.ucla.edu + +# Abstract + +Generating a readable summary that describes the functionality of a program is known as source code summarization. In this task, learning code representation by modeling the pairwise relationship between code tokens to capture their long-range dependencies is crucial. 
To learn code representation for summarization, we explore the Transformer model, which uses a self-attention mechanism and has been shown to be effective in capturing long-range dependencies. In this work, we show that, despite being simple, the approach outperforms state-of-the-art techniques by a significant margin. We perform extensive analysis and ablation studies that reveal several important findings, e.g., that absolute encoding of source code token positions hinders summarization performance, while relative encoding improves it significantly. We have made our code publicly available to facilitate future research. + +# 1 Introduction + +Program comprehension is an indispensable ingredient of software development and maintenance (Xia et al., 2018). A natural language summary of source code facilitates program comprehension by reducing developers' efforts significantly (Sridhara et al., 2010). Source code summarization refers to the task of creating readable summaries that describe the functionality of a program. + +With the advancement of deep learning and the availability of large-scale data through a vast number of open-source repositories, automatic source code summarization has drawn attention from researchers. Most of the neural approaches generate source code summaries in a sequence-to-sequence fashion. One of the initial works, Iyer et al. (2016), trained an embedding matrix to represent individual code tokens and combined them with a Recurrent Neural Network (RNN) via an attention mechanism to generate a natural language summary. Subsequent works (Liang and Zhu, 2018; Hu et al., 2018a,b) adopted the traditional RNN-based sequence-to-sequence network (Sutskever et al., 2014) with an attention mechanism (Luong et al., 2015) on different abstractions of code. + +The RNN-based sequence models have two limitations in learning source code representations.
First, they do not model the non-sequential structure of source code, as they process the code tokens sequentially. Second, source code can be very long, and thus RNN-based models may fail to capture the long-range dependencies between code tokens. In contrast to the RNN-based models, the Transformer (Vaswani et al., 2017), which leverages a self-attention mechanism, can capture such long-range dependencies. Transformers have been shown to perform well on many natural language generation tasks such as machine translation (Wang et al., 2019), text summarization (You et al., 2019), and story generation (Fan et al., 2018). + +To learn the order of tokens in a sequence or to model the relationship between tokens, the Transformer needs to be injected with positional encodings (Vaswani et al., 2017; Shaw et al., 2018; Shiv and Quirk, 2019). In this work, we show that, by modeling the pairwise relationship between source code tokens using relative position representations (Shaw et al., 2018), we can achieve significant improvements over learning the sequence information of code tokens using absolute position representations (Vaswani et al., 2017). + +We want to emphasize that our proposed approach is simple but effective, as it outperforms sophisticated state-of-the-art source code summarization techniques by a significant margin. We perform experiments on two well-studied datasets collected from GitHub, and the results endorse the effectiveness of our approach + +over the state-of-the-art solutions. In addition, we provide a detailed ablation study to quantify the effect of several design choices in the Transformer to deliver a strong baseline for future research. + +# 2 Proposed Approach + +We propose to use the Transformer (Vaswani et al., 2017) to generate a natural language summary given a piece of source code. Both the code and the summary are sequences of tokens, each represented by a sequence of vectors, $\mathbf{x} = (x_{1},\ldots ,x_{n})$ where $x_{i}\in R^{d_{model}}$ .
In this section, we briefly describe the Transformer architecture (§ 2.1) and how to model the order of source code tokens or their pairwise relationship (§ 2.2) in Transformer. + +# 2.1 Architecture + +The Transformer consists of stacked multi-head attention and parameterized linear transformation layers for both the encoder and decoder. At each layer, the multi-head attention employs $h$ attention heads and performs the self-attention mechanism. + +Self-Attention. We describe the self-attention mechanism based on Shaw et al. (2018). In each attention head, the sequence of input vectors, $\mathbf{x} = (x_{1},\ldots ,x_{n})$ where $x_{i}\in R^{d_{model}}$ are transformed into the sequence of output vectors, $\mathbf{o} = (o_1,\dots,o_n)$ where $o_i\in R^{d_k}$ as: + +$$ +o _ {i} = \sum_ {j = 1} ^ {n} \alpha_ {i j} (x _ {j} W ^ {V}), +$$ + +$$ +e _ {i j} = \frac {x _ {i} W ^ {Q} (x _ {j} W ^ {K}) ^ {T}}{\sqrt {d _ {k}}}, +$$ + +where $\alpha_{ij} = \frac{\exp e_{ij}}{\sum_{k=1}^{n} \exp e_{ik}}$ and $W^Q, W^K \in R^{d_{model} \times d_k}$ , $W^V \in R^{d_{model} \times d_v}$ are the parameters that are unique per layer and attention head. + +Copy Attention. We incorporate the copying mechanism (See et al., 2017) in the Transformer to allow both generating words from vocabulary and copying from the input source code. We use an additional attention layer to learn the copy distribution on top of the decoder stack (Nishida et al., 2019). The copy attention enables the Transformer to copy rare tokens (e.g., function names, variable names) from source code and thus improves the summarization performance significantly (§ 3.2). + +# 2.2 Position Representations + +Now, we discuss how to learn the order of source code tokens or model their pairwise relationship. + +
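The per-head computation in the self-attention equations above can be sketched in a few lines of NumPy (a minimal single-head illustration; the weight matrices stand in for the learned parameters $W^Q$, $W^K$, $W^V$):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """One attention head: e_ij = (x_i Wq)(x_j Wk)^T / sqrt(d_k),
    alpha = row-wise softmax(e), o_i = sum_j alpha_ij (x_j Wv)."""
    d_k = Wk.shape[1]
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # (n, d_k), (n, d_k), (n, d_v)
    e = q @ k.T / np.sqrt(d_k)                  # (n, n) compatibility scores
    alpha = np.exp(e - e.max(axis=-1, keepdims=True))
    alpha /= alpha.sum(axis=-1, keepdims=True)  # softmax over j
    return alpha @ v                            # (n, d_v) outputs o_1..o_n
```

Each output $o_i$ is a convex combination of the value vectors $x_j W^V$, which is why self-attention can relate any two tokens regardless of their distance in the sequence.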
| Dataset | Java | Python |
| --- | --- | --- |
| Train | 69,708 | 55,538 |
| Validation | 8,714 | 18,505 |
| Test | 8,714 | 18,502 |
| Unique tokens in code | 66,650 | 307,596 |
| Unique tokens in summary | 46,895 | 56,189 |
| Avg. tokens in code | 120.16 | 47.98 |
| Avg. tokens in summary | 17.73 | 9.48 |
+ +Table 1: Statistics of the experiment datasets. We thank the authors of Wei et al. (2019) for kindly sharing the Python dataset splits. The Java dataset splits are publicly available. + +Encoding absolute position. To allow the Transformer to utilize the order information of source code tokens, we train an embedding matrix $W^{P_e}$ that learns to encode tokens' absolute positions into vectors of dimension $d_{model}$ . However, we show that capturing the order of code tokens is not helpful for learning source code representations and leads to poor summarization performance (§ 3.2). + +It is important to note that we train another embedding matrix $W^{P_d}$ that learns to encode the absolute positions of summary tokens. + +Encoding pairwise relationship. The semantic representation of a code snippet does not rely on the absolute positions of its tokens. Instead, their mutual interactions influence the meaning of the source code. For instance, the semantic meaning of the expressions $a + b$ and $b + a$ is the same. + +To encode the pairwise relationships between input elements, Shaw et al. (2018) extended the self-attention mechanism as follows. + +$$
o _ {i} = \sum_ {j = 1} ^ {n} \alpha_ {i j} \left(x _ {j} W ^ {V} + a _ {i j} ^ {V}\right),
$$ + +$$
e _ {i j} = \frac {x _ {i} W ^ {Q} (x _ {j} W ^ {K} + a _ {i j} ^ {K}) ^ {T}}{\sqrt {d _ {k}}},
$$ + +where $a_{ij}^{V}$ and $a_{ij}^{K}$ are relative position representations for the two positions $i$ and $j$ . Shaw et al. (2018) suggested clipping the relative position to a maximum absolute value of $k$ , as they hypothesize that precise relative position information is not useful beyond a certain distance. + +$$
a _ {i j} ^ {K} = w _ {c l i p (j - i, k)} ^ {K}, a _ {i j} ^ {V} = w _ {c l i p (j - i, k)} ^ {V},
$$ + +$$
\operatorname {c l i p} (x, k) = \max (- k, \min (k, x)).
$$ + +Hence, we learn $2k + 1$ relative position representations: $(w_{-k}^{K},\ldots ,w_{k}^{K})$ , and $(w_{-k}^{V},\ldots ,w_{k}^{V})$ .
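The clipping scheme reduces to a small index computation into the $2k+1$ learned vectors; a sketch of that computation (`relative_ids` is a hypothetical helper name, and the actual table lookups for $w^K$ and $w^V$ are omitted):

```python
import numpy as np

def clip(x, k):
    """Directional clipping: clip(x, k) = max(-k, min(k, x))."""
    return max(-k, min(k, x))

def relative_ids(n, k, directional=True):
    """Map every position pair (i, j) to an embedding index.

    Directional: clip(j - i, k) shifted by +k into [0, 2k], indexing the
    2k + 1 learned vectors w_{-k}, ..., w_k. Undirected: min(|j - i|, k)
    in [0, k], discarding the sign of j - i (only k + 1 vectors needed).
    """
    ids = np.empty((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if directional:
                ids[i, j] = clip(j - i, k) + k
            else:
                ids[i, j] = min(abs(j - i), k)
    return ids

# a_ij^K is then a table lookup, e.g. wK[relative_ids(n, k)[i, j]].
```

With $n = 5$ and $k = 2$, all tokens more than two positions away share the same relative representation, which is the intended behavior of clipping.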
+ +
| Methods | Java BLEU | Java METEOR | Java ROUGE-L | Python BLEU | Python METEOR | Python ROUGE-L |
| --- | --- | --- | --- | --- | --- | --- |
| CODE-NN (Iyer et al., 2016) | 27.60 | 12.61 | 41.10 | 17.36 | 09.29 | 37.81 |
| Tree2Seq (Eriguchi et al., 2016) | 37.88 | 22.55 | 51.50 | 20.07 | 08.96 | 35.64 |
| RL+Hybrid2Seq (Wan et al., 2018) | 38.22 | 22.75 | 51.91 | 19.28 | 09.75 | 39.34 |
| DeepCom (Hu et al., 2018a) | 39.75 | 23.06 | 52.67 | 20.78 | 09.98 | 37.35 |
| API+CODE (Hu et al., 2018b) | 41.31 | 23.73 | 52.25 | 15.36 | 08.57 | 33.65 |
| Dual Model (Wei et al., 2019) | 42.39 | 25.77 | 53.61 | 21.80 | 11.14 | 39.45 |
| *Our models and ablation study* | | | | | | |
| Base Model | 43.41 | 25.91 | 52.71 | 31.08 | 18.57 | 44.31 |
| Full Model | 44.58 | 26.43 | 54.76 | 32.52 | 19.77 | 46.73 |
| Full Model w/o Relative Position | 44.26 | 26.23 | 53.58 | 31.38 | 18.69 | 44.68 |
| Full Model w/o Copy Attention | 44.14 | 26.34 | 53.95 | 31.64 | 19.17 | 45.42 |
+ +Table 2: Comparison of our proposed approach with the baseline methods. The results of the baseline methods are directly reported from (Wei et al., 2019). The "Base Model" refers to the vanilla Transformer (using absolute position representations) and the "Full Model" uses relative position representations and includes copy attention. + +In this work, we study an alternative to the relative position representations that ignores the directional information (Ahmad et al., 2019). In other words, whether the $j$ 'th token is to the left or right of the $i$ 'th token is ignored. + +$$
\begin{array}{l} a _ {i j} ^ {K} = w _ {c l i p (| j - i |, k)} ^ {K}, a _ {i j} ^ {V} = w _ {c l i p (| j - i |, k)} ^ {V}, \\ \operatorname {c l i p} (x, k) = \min (| x |, k). \\ \end{array}
$$ + +# 3 Experiment + +# 3.1 Setup + +Datasets and Pre-processing. We conduct our experiments on a Java dataset (Hu et al., 2018b) and a Python dataset (Wan et al., 2018). The statistics of the two datasets are shown in Table 1. In addition to the pre-processing steps followed by Wei et al. (2019), we split source code tokens of the form CamelCase and snake_case into their respective sub-tokens. We show that such a split of code tokens improves the summarization performance. + +Metrics. We evaluate the source code summarization performance using three metrics: BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004). + +Baselines. We compare our Transformer-based source code summarization approach with five baseline methods reported in Wei et al. (2019) and their proposed Dual model. We refer the readers to (Wei et al., 2019) for the details of the hyper-parameters of all the baseline methods. + +Hyper-parameters. We follow Wei et al. (2019) to set the maximum lengths and vocabulary sizes for code and summaries in both datasets. We train the Transformer models using the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of $10^{-4}$ .
We set the mini-batch size and dropout rate to 32 and 0.2, respectively. We train the Transformer models for a maximum of 200 epochs and perform early stopping if the validation performance does not improve for 20 consecutive iterations. We use beam search during inference and set the beam size to 4. Detailed hyper-parameter settings can be found in Appendix A. + +# 3.2 Results and Analysis + +Overall results. The overall results of our proposed model and the baselines are presented in Table 2. The results show that the Base model outperforms the baselines (except for ROUGE-L in Java), while the Full model improves the performance further. We ran the Base model on the original datasets (without splitting the CamelCase and snake_case code tokens) and observed that the performance drops by 0.60 and 0.72 BLEU points and by 1.66 and 2.09 ROUGE-L points for the Java and Python datasets, respectively. We provide a few qualitative examples in Appendix C showing the usefulness of the Full model over the Base model. + +Unlike the baseline approaches, our proposed model employs the copy attention mechanism. As shown in Table 2, copy attention improves the performance by 0.44 and 0.88 BLEU points for the Java and Python datasets, respectively. + +Impact of position representation. We perform an ablation study to investigate the benefits
| Source | Target | BLEU | METEOR | ROUGE-L |
| --- | --- | --- | --- | --- |
| ✓ | ✓ | 43.41 | 25.91 | 52.71 |
| ✓ | ✗ | 42.34 | 24.74 | 50.96 |
| ✗ | ✓ | 43.59 | 26.00 | 52.88 |
| ✗ | ✗ | 41.85 | 24.32 | 50.87 |
+ +Table 3: Ablation study on absolute positional representations using the "Base Model" on the Java dataset. + +
| k | Directional | BLEU | METEOR | ROUGE-L |
| --- | --- | --- | --- | --- |
| 8 | ✓ | 44.22 | 26.35 | 53.86 |
| 8 | ✗ | 42.61 | 24.67 | 51.10 |
| 16 | ✓ | 44.14 | 26.34 | 53.95 |
| 16 | ✗ | 44.06 | 26.31 | 53.51 |
| 32 | ✓ | 44.55 | 26.66 | 54.30 |
| 32 | ✗ | 43.95 | 26.28 | 53.24 |
| $2^i$ | ✓ | 44.37 | 26.58 | 53.96 |
| $2^i$ | ✗ | 43.58 | 25.95 | 52.73 |
+ +of encoding the absolute position of code tokens or modeling their pairwise relationship for the source code summarization task; the results are presented in Tables 3 and 4. Table 3 demonstrates that learning the absolute position of code tokens is not effective, as it slightly hurts the performance compared to when it is excluded. This empirical finding corroborates the design choice of Iyer et al. (2016), who did not use the sequence information of the source code tokens. + +On the other hand, we observe that learning the pairwise relationship between source code tokens via relative position representations helps, as Table 4 demonstrates higher performance. We vary the clipping distance $k$ and consider ignoring the directional information while modeling the pairwise relationship. The empirical results suggest that the directional information is indeed important, while 16, 32, and $2^{i}$ relative distances result in similar performance (in both experimental datasets). + +Varying model size and number of layers. We perform an ablation study by varying $d_{model}$ and $l$ ; the results are presented in Table 5. In our experiments, we observe that a deeper model (more layers) performs better than a wider model (larger $d_{model}$ ). + +Table 4: Ablation study on relative positional representations (in encoding) for Transformer. While 8, 16, and 32 represent a fixed relative distance for all the layers, $2^{i}$ (where $i = 1,\dots ,L; L = 6$ ) represents a layer-wise relative distance for Transformer. + +

| $d_{model}$ / $l$ | #Param. (M) | BLEU | METEOR | ROUGE-L |
| --- | --- | --- | --- | --- |
| *Varying the model size ($d_{model}$)* | | | | |
| 256 | 15.8 | 38.21 | 21.54 | 48.63 |
| 384 | 28.4 | 41.71 | 24.51 | 51.42 |
| 512 | 44.1 | 43.41 | 25.91 | 52.71 |
| 768 | 85.1 | 45.29 | 27.56 | 54.39 |
| *Varying the number of layers ($l$)* | | | | |
| 3 | 22.1 | 41.26 | 23.54 | 51.37 |
| 6 | 44.1 | 43.41 | 25.91 | 52.71 |
| 9 | 66.2 | 45.03 | 27.21 | 54.02 |
| 12 | 88.3 | 45.56 | 27.64 | 54.89 |

Table 5: Ablation study on the hidden size and number of layers for the "Base Model" on the Java dataset. We use $d_{model} = H$ , $d_{ff} = 4H$ , $h = 8$ , and $d_k = d_v = 64$ in all settings. We set $l = 6$ and $d_{model} = 512$ while varying $d_{model}$ and $l$ respectively. #Param. represents the number of trainable parameters in millions (only Transformer parameters are included). + +Intuitively, the source code summarization task depends more on semantic information than on syntax, and thus a deeper model helps. + +Use of Abstract Syntax Tree (AST). We perform additional experiments to employ the abstract syntax tree (AST) structure of source code in the Transformer. We follow Hu et al. (2018a) and use the Structure-based Traversal (SBT) technique to transform the AST structure into a linear sequence. We keep our proposed Transformer architecture intact, except that in the copy attention mechanism we use a mask to block copying the non-terminal tokens from the input sequence. It is important to note that, with and without AST, the average length of the input code sequences is 172 and 120, respectively. Since the complexity of the Transformer is $O(n^{2} \times d)$ , where $n$ is the input sequence length, the use of AST comes with an additional cost. Our experimental findings suggest that incorporating AST information into the Transformer does not improve source code summarization. We hypothesize that exploiting code structure information in summarization has limited advantage, and the advantage diminishes as the Transformer learns the structure implicitly through relative position representations. + +Qualitative analysis. We provide a couple of examples in Table 6 to demonstrate the usefulness of our proposed approach qualitatively (more examples are provided in Tables 9 and 10 in the Appendix).
The qualitative analysis reveals that, in comparison to the vanilla Transformer model, the copy-enabled model generates shorter summaries + +Table 6: Qualitative example of different models' performance on Java and Python datasets.

```java
public static String selectText(XPathExpression expr, Node context) {
    try {
        return (String) expr.evaluate(context, XPathConstants.STRING);
    } catch (XPathException e) {
        throw new XmlException(e);
    }
}
```

with more accurate keywords. Besides, we observe that in a copy-enabled model, frequent tokens in the code snippet get a higher copy probability when relative position representations are used, in comparison to absolute position representations. We suspect this is due to the flexibility of learning the relation between code tokens without relying on their absolute positions. + +# 4 Related Work + +Most of the neural source code summarization approaches frame the problem as a sequence generation task and use recurrent encoder-decoder networks with attention mechanisms as the fundamental building blocks (Iyer et al., 2016; Liang and Zhu, 2018; Hu et al., 2018a,b). Different from these works, Allamanis et al. (2016) proposed a convolutional attention model to summarize source code into short, name-like summaries. + +Recent works in code summarization utilize the structural information of a program in the form of an Abstract Syntax Tree (AST), which can be encoded using tree-structured encoders such as Tree-LSTM (Shido et al., 2019), Tree-Transformer (Harer et al., 2019), and Graph Neural Network (LeClair et al., 2020). In contrast, Hu et al. (2018a) proposed a structure-based traversal (SBT) method to flatten the AST into a sequence and showed improvement over the AST-based methods. Later, LeClair et al. (2019) used the SBT method and decoupled the code structure from the code tokens to learn better structure representations. + +Among other noteworthy works, API usage information (Hu et al., 2018b), reinforcement learning (Wan et al., 2018), dual learning (Wei et al., 2019), and retrieval-based techniques (Zhang et al., 2020) have been leveraged to further enhance code summarization models. A Transformer can be enhanced with these previously proposed techniques; however, in this work, we limit ourselves to studying different design choices for a Transformer without breaking its core architectural design philosophy. + +# 5 Conclusion + +This paper empirically investigates the advantage of using the Transformer model for the source code summarization task. We demonstrate that the Transformer with relative position representations and copy attention outperforms state-of-the-art approaches by a large margin. In future work, we want to study the effective incorporation of code structure into the Transformer and apply the techniques to other software engineering sequence generation tasks (e.g., commit message generation for source code changes). + +# Acknowledgments + +This work was supported in part by National Science Foundation Grant OAC 1920462, CCF 1845893, CCF 1822965, CNS 1842456. + +# References + +Wasi Ahmad, Zhisong Zhang, Xuezhe Ma, Eduard Hovy, Kai-Wei Chang, and Nanyun Peng. 2019. On difficulties of cross-lingual transfer with order differences: A case study on dependency parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2440–2452, Minneapolis, Minnesota. Association for Computational Linguistics. +Miltiadis Allamanis, Hao Peng, and Charles A. Sutton. 2016. A convolutional attention network for extreme summarization of source code. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, volume 48 of JMLR Workshop and Conference Proceedings, pages 2091-2100. JMLR.org.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.

Akiko Eriguchi, Kazuma Hashimoto, and Yoshimasa Tsuruoka. 2016. Tree-to-sequence attentional neural machine translation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 823-833, Berlin, Germany. Association for Computational Linguistics.

Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898, Melbourne, Australia. Association for Computational Linguistics.

Jacob Harer, Chris Reale, and Peter Chin. 2019. Tree-transformer: A transformer-based method for correction of tree-structured data. arXiv preprint arXiv:1908.00449.

Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. 2018a. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, pages 200-210, New York, NY, USA. Association for Computing Machinery.

Xing Hu, Ge Li, Xin Xia, David Lo, Shuai Lu, and Zhi Jin. 2018b. Summarizing source code with transferred API knowledge. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, pages 2269-2275. International Joint Conferences on Artificial Intelligence Organization.

Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073-2083, Berlin, Germany. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.

Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.

Alexander LeClair, Sakib Haque, Lingfei Wu, and Collin McMillan. 2020. Improved code summarization via a graph neural network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.

Alexander LeClair, Siyuan Jiang, and Collin McMillan. 2019. A neural model for generating natural language summaries of program subroutines. In Proceedings of the 41st International Conference on Software Engineering, pages 795-806. IEEE Press.

Yuding Liang and Kenny Qili Zhu. 2018. Automatic generation of text descriptive comments for code blocks. In Thirty-Second AAAI Conference on Artificial Intelligence.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.

Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, and Junji Tomita. 2019. Multi-style generative reading comprehension. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2273-2284, Florence, Italy. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation.
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada. Association for Computational Linguistics.

Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464-468, New Orleans, Louisiana. Association for Computational Linguistics.

Yusuke Shido, Yasuaki Kobayashi, Akihiro Yamamoto, Atsushi Miyamoto, and Tadayuki Matsumura. 2019. Automatic source code summarization with extended tree-LSTM. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.

Vighnesh Shiv and Chris Quirk. 2019. Novel positional encodings to enable tree-based transformers. In Advances in Neural Information Processing Systems 32, pages 12081-12091. Curran Associates, Inc.

Giriprasad Sridhara, Emily Hill, Divya Muppaneni, Lori Pollock, and K. Vijay-Shanker. 2010. Towards automatically generating summary comments for java methods. In Proceedings of the IEEE/ACM International Conference on Automated Software Engineering, pages 43-52, New York, NY, USA. Association for Computing Machinery.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104-3112. Curran Associates, Inc.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc. +Yao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S Yu. 2018. Improving automatic source code summarization via deep reinforcement learning. In Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, pages 397-407. ACM. +Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1810-1822, Florence, Italy. Association for Computational Linguistics. + +Bolin Wei, Ge Li, Xin Xia, Zhiyi Fu, and Zhi Jin. 2019. Code generation as a dual task of code summarization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 6563-6573. Curran Associates, Inc. +Xin Xia, Lingfeng Bao, David Lo, Zhenchang Xing, Ahmed E. Hassan, and Shanping Li. 2018. Measuring program comprehension: A large-scale field study with professionals. In Proceedings of the 40th International Conference on Software Engineering, ICSE '18, page 584, New York, NY, USA. Association for Computing Machinery. +Yongjian You, Weijia Jia, Tianyi Liu, and Wenmian Yang. 2019. Improving abstractive document summarization with salient information modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2132-2141, Florence, Italy. Association for Computational Linguistics. +Jian Zhang, Xu Wang, Hongyu Zhang, Hailong Sun, and Xudong Liu. 2020. Retrieval-based neural source code summarization. In Proceedings of the 42nd International Conference on Software Engineering. IEEE. + +# A Hyper-Parameters + +Table 7 summarizes the hyper-parameters that we used in our experiments. + +
| | Hyper-parameter | Value |
|---|---|---|
| Embedding | $k$ | 16 |
| Model | $l$ | 6 |
| | $h$ | 8 |
| | $d_{model}$ | 512 |
| | $d_k$, $d_v$ | 64 |
| | $d_{ff}$ | 2048 |
| Training | dropout | 0.2 |
| | optimizer | Adam |
| | learning rate | 0.0001 |
| | batch size | 32 |
| Testing | beam size | 4 |
Table 7: Hyper-parameters in our experiments. $l$ and $h$ indicate the number of layers and heads in the Transformer, respectively. $k$ refers to the clipping distance in relative position representations in the Transformer.

# B Recurrent Encoder-Decoder vs. Transformer on Python Dataset
| Models | BLEU | METEOR | ROUGE-L |
|---|---|---|---|
| Seq2seq | 30.57 | 17.86 | 43.64 |
| Seq2seq* | 29.08 | 17.12 | 42.97 |
| Transformer | 31.08 | 18.57 | 44.31 |
| Transformer* | 31.38 | 18.69 | 44.68 |
Table 8: Comparison between the recurrent sequence-to-sequence (Seq2seq) model and the Transformer on the Python dataset. * indicates models equipped with the copy attention mechanism.

While conducting our study using the Transformer on the Python dataset, we observed a significant gain over the state-of-the-art methods as reported in Wei et al. (2019). However, our initial experiments on this dataset using recurrent sequence-to-sequence models also demonstrated higher performance compared to the results reported in Wei et al. (2019). We suspect that the lower performance reported there is due to hyper-parameters not being tuned correctly. So, for the sake of fairness, and to investigate the true advantages of the Transformer, we present a comparison between the recurrent Seq2seq model and the Transformer in Table 8 using our implementation.

As we can see from Table 8, the performance of the recurrent Seq2seq model is much better than the results reported in prior works. However, to our surprise, the copy attention mechanism does not result in an improvement for the recurrent Seq2seq model. When we looked into the training perplexity and the validation performance, we also observed lower performance in comparison to the base recurrent Seq2seq model. In comparison, our proposed Transformer-based approach outperforms the recurrent Seq2seq models by a large margin, showing its effectiveness for source code summarization.

```java
public static terminal find(String with_name) {
    if (with_name == null)
        return null;
    else
        return (terminal) all.get(with_name);
}
```

Base Model: lookup a non terminal by name string

Full Model w/o Relative Position: lookup a terminal terminal by name string

Full Model w/o Copy Attention: lookup a non terminal by name string

Full Model: lookup a terminal by name

Human Written: lookup a terminal by name string.
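As a rough illustration of the copy attention mechanism compared above (following the pointer-generator formulation of See et al. (2017); the function name, array sizes, and numbers here are invented for the example, not taken from the paper's implementation), the decoder mixes its vocabulary distribution with the attention distribution over source tokens:

```python
import numpy as np

def copy_attention_mix(p_vocab, attention, src_ids, p_gen):
    """Mix a generation distribution with a copy distribution.

    p_vocab:   (V,) probability over the output vocabulary
    attention: (S,) attention weights over source positions
    src_ids:   (S,) vocabulary id of each source token
    p_gen:     scalar in [0, 1], probability of generating vs. copying
    """
    p_final = p_gen * p_vocab
    # Each source position adds its attention mass to the token it holds,
    # so tokens occurring often in the code snippet accumulate more copy mass.
    np.add.at(p_final, src_ids, (1.0 - p_gen) * attention)
    return p_final

# Toy example: vocabulary of 5 tokens, source of 3 tokens (ids 2, 4, 2).
p_vocab = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
attention = np.array([0.5, 0.2, 0.3])
p = copy_attention_mix(p_vocab, attention, np.array([2, 4, 2]), p_gen=0.6)
```

In the actual models, `p_gen` and the attention weights are computed from decoder states; the fixed numbers above only demonstrate the mixing step.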
```java
public static String selectText(XPathExpression expr, Node context) {
    try {
        return (String) expr.evaluate(context, XPathConstants.STRING);
    } catch (XPathException e) {
        throw new XmlException(e);
    }
}
```

Base Model: evaluates the xpath expression to a xpath expression.

Full Model w/o Relative Position: evaluates the xpath expression.

Full Model w/o Copy Attention: evaluates the xpath expression as a single element.

Full Model: evaluates the xpath expression as a text string.

Human Written: evaluates the xpath expression as text.

```java
public CTaggingPanel(
        final JFrame parent, final ZyGraph graph, final ITagManager manager) {
    super(new BorderLayout());
    m_tagsTree = new CTagsTree(parent, graph, manager);
    final JScrollPane pane = new JScrollPane(m_tagsTree);
    pane.setVerticalScrollBarPolicy(
        ScrollPaneConstants.VERTICAL_SCROLLBAR_AS_NEEDED);
    pane.setHorizontalScrollBarPolicy(
        ScrollPaneConstants.HORIZONTAL_SCROLLBAR_AS_NEEDED);
    add(pane);
    setBorder(new TitledBorder(new LineBorder(Color.LIGHT_GRAY, NUM, BOOL), STRING));
    setDoubleBuffered(BOOL);
}
```

Base Model: creates a new dnetscapepressServername dialog.

Full Model w/o Relative Position: creates a new settings dialog.

Full Model w/o Copy Attention: creates a new toolbar panel.

Full Model: creates a new api panel object.

Human Written: creates a new panel object.
```java
public DSignCsr(JFrame parent, PKCS10CertificationRequest pkcs10Csr,
        File csvFile, PrivateKey signPrivateKey, KeyPairType signKeyPairType,
        X509Certificate verificationCertificate, Provider provider)
        throws CryptoException {
    super(parent, Dialog.ModalityType.DOCUMENT_MODAL);
    this.pkcs10Csr = pkcs10Csr;
    this.csvFile = csvFile;
    this.signPrivateKey = signPrivateKey;
    this.signKeyPairType = signKeyPairType;
    this.verificationCertificate = verificationCertificate;
    this.provider = provider;
    setTitle(res.getString(STRING));
    initComponents();
}
```

Base Model: creates a new dsigncsr dialog for a spkac formattedcsr.

Full Model w/o Relative Position: creates a new signer dialog for a pkcs # 10 formatted .

Full Model w/o Copy Attention: creates a new dsigncsr dialog for a spkac formattedcsr.

Full Model: creates a new dsigncsr dialog for a pkcs # 10 formattedcsr.

Human Written: creates a new dsigncsr dialog for a pkcs # 10 formattedcsr.

Table 9: Qualitative example of different models' performance in Java dataset.

Table 10: Qualitative example of different models' performance in Python dataset.
```python
def get_hosting_service(name):
    try:
        return hosting_service_registry.get(u'hosting_service_id', name)
    except ItemLookupError:
        return None

# Base Model: returns the color limits from the current service name.
# Full Model w/o Relative Position: return the hosting service.
# Full Model w/o Copy Attention: return the name of the service.
# Full Model: return the hosting service name.
# Human Written: return the hosting service with the given name.
+``` \ No newline at end of file diff --git a/atransformerbasedapproachforsourcecodesummarization/images.zip b/atransformerbasedapproachforsourcecodesummarization/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..530d0e4b5c7bd5bc0ceeaafc5e5a31eac5dc1fd7 --- /dev/null +++ b/atransformerbasedapproachforsourcecodesummarization/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:188c1c3e4fd3620901a29930eb0bb5605399061704585ab85d2570c1dd86a707 +size 296122 diff --git a/atransformerbasedapproachforsourcecodesummarization/layout.json b/atransformerbasedapproachforsourcecodesummarization/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..d2c81ba6f155dfdcb7294768f40177f97598d843 --- /dev/null +++ b/atransformerbasedapproachforsourcecodesummarization/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bc0a810fcab2bbf495bf6e417692eb06b05bc9a1e435cd6119bbbc4b1ae1cad +size 297001 diff --git a/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_content_list.json b/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..c0ac3f7ed18454662fea5a65eb33ab0773fa8593 --- /dev/null +++ b/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:76d37c90517074a5d647790c2e623512601d217968c69c22f93441a56f3f9188 +size 51151 diff --git a/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_model.json b/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_model.json new file mode 100644 index 0000000000000000000000000000000000000000..6b7e1f78ca000f333583b64dd7c2a6335ca0deed --- /dev/null +++ 
b/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ac05a809f98aa73050a3c8923890bb6e5496958ab3b7b1ff0f62742424742d55 +size 61665 diff --git a/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_origin.pdf b/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..1c38700b2cc0d826b6f3553af259a81555b1cab3 --- /dev/null +++ b/atwostagemaskedlmmethodfortermsetexpansion/899c6f32-53d1-4c24-a555-8b7a51ce7538_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1c99c3a892642c509bf5e556d0ce0694c93f1030650622700a874737e96d977f +size 303859 diff --git a/atwostagemaskedlmmethodfortermsetexpansion/full.md b/atwostagemaskedlmmethodfortermsetexpansion/full.md new file mode 100644 index 0000000000000000000000000000000000000000..558d85226db32bb9a4075bfa6406e95171128cd2 --- /dev/null +++ b/atwostagemaskedlmmethodfortermsetexpansion/full.md @@ -0,0 +1,201 @@

# A Two-Stage Masked LM Method for Term Set Expansion

Guy Kushilevitz

Technion - Israel Institute of Technology

guykush@cs.technion.ac.il

Shaul Markovitch

Technion - Israel Institute of Technology

shaulm@cs.technion.ac.il

Yoav Goldberg

Bar-Ilan University

Allen Institute for AI

yogo@cs.biu.ac.il

# Abstract

We tackle the task of Term Set Expansion (TSE): given a small seed set of example terms from a semantic class, finding more members of that class. The task is of great practical utility, and also of theoretical utility as it requires generalization from few examples. Previous approaches to the TSE task can be characterized as either distributional or pattern-based. We harness the power of neural masked language models (MLM) and propose a novel TSE algorithm, which combines the pattern-based and distributional approaches.
Due to the small size of the seed set, fine-tuning methods are not effective, calling for more creative use of the MLM. The gist of the idea is to use the MLM to first mine for informative patterns with respect to the seed set, and then to obtain more members of the seed class by generalizing these patterns. Our method outperforms state-of-the-art TSE algorithms. The implementation is available at: https://github.com/guykush/TermSetExpansion-MPB/

# 1 Introduction

Term Set Expansion (TSE) is the task of expanding a small seed set of terms into a larger (ideally complete) set of terms that belong to the same semantic category. For example, the seed set {"orange", "apple"} should expand into a set of fruits, while {"orange", "blue"} should expand into a set of colors, and {"apple", "google"} into a set of tech companies. Beyond being of great practical utility, the TSE task is a challenging instance of a generalization-from-few-examples problem. Solving TSE requires the algorithm to: (1) identify the desired concept class based on few examples; and (2) identify additional members of the class.

We present an effective TSE method which is based on querying large, pre-trained masked language models (MLMs). Pre-trained language models (LMs) have been shown to contain semantic (Tenney et al., 2019), syntactic (Goldberg, 2019; Hewitt and Manning, 2019; Linzen et al., 2016) and factual knowledge (Petroni et al., 2019), and to be great starting points for transfer learning to new tasks via fine-tuning on few examples. However, the TSE seed sets are too small for fine-tuning, calling for a different approach. Our method uses the MLMs directly for the task they were trained for, language modeling, by issuing word-completion queries and operating on the returned word distributions.
Previous solutions to the TSE problem (also called semantic class induction) can be roughly categorized into distributional and pattern-based approaches (Shi et al., 2010). Our method can be seen as a combination of the two.

The distributional approach to TSE (Hindle, 1990; Pantel and Lin, 2002; Pantel et al., 2009; Mamou et al., 2018; Mahabal et al., 2018) operates under the hypothesis that similar words appear in similar contexts (Harris, 1968). These methods represent each term in the vocabulary as an embedding vector that summarizes the contexts in which the term appears in a large corpus, and then look for terms with vectors that are similar to those of the seed terms. The methods differ in their context definitions and in their way of computing similarities. A shortcoming of these methods is that they consider all occurrences of a term in the corpus when calculating its representation, including many contexts that are irrelevant to the concept at hand due to polysemy, noise in the corpus, or non-informative contexts.

In contrast, the pattern-based approach considers specific indicative patterns that signal the desired concept, looking for them in a large corpus, and extracting the terms that appear in them. Patterns can be binary (Hearst, 1992; Ohshima et al., 2006; Zhang et al., 2009) ("such as X or Y"), indicating that both X and Y belong to the same class, or unary (Gupta and Manning, 2014; Wang and Cohen, 2007) ("fruits such as X", "First I painted the wall red, but then I repainted it X"), suggesting that X belongs to a certain category (fruit, color). The patterns can be determined manually (Hearst, 1992) or automatically (Wang and Cohen, 2007; Gupta and Manning, 2014). While well-tailored patterns can be precise and interpretable, a notable shortcoming of pattern-based methods is their lack of coverage, due to the challenge of finding patterns that are specific enough to be accurate yet common enough in a large corpus to be useful. Wang and Cohen (2007) use patterns from non-natural language (HTML), while Gupta and Manning (2014) restrict themselves to short patterns of 2-4 words to each side of the masked term.

Our method. By using MLMs, we combine the power of the pattern-based and the distributional approaches: like the pattern-based approaches, we consider only specific, indicative corpus locations (retaining specificity and transparency). We then use the distributional nature of the neural LM to generalize across patterns and corpus locations.

We use sentences with a single masked location as indicative patterns. For example, "We took Rexy, our pet ____, to the vet." is an indicative pattern for the house animals semantic class. Given an initial set of seed terms, we first search the corpus for indicative patterns for members of the set (Section 2.1). Intuitively, an indicative pattern is a corpus location which is considered by an LM to be a good fit for all seed members. Once we have identified indicative patterns, we extend the set to terms that can appear in similar patterns. We propose two methods for doing this. The first method (Section 2.2) queries an MLM for completions. While effective, this method restricts the expanded set to the LM vocabulary. The second method (Section 2.3) uses the MLM to define a similarity metric over patterns, and searches the corpus for terms that appear in patterns that are similar to the indicative ones. To summarize, we embrace the pattern-based approach, while using distributional similarity for identifying good patterns as well as for generalizing across patterns.

# 2 Method

Task formulation. We are given a seed set $S$ of $k$ terms, $S = \{t_1, \dots, t_k\}$, that come from a larger (and unknown) gold set $S_g$. Our goal is to return $S_g$. Practically, our (and other) algorithms return a ranked list of terms rather than a fixed set. The evaluation is then performed over the ranking: ideally, all terms in $S_g$ will be ranked above all terms not in $S_g$.
We operate in stages. First, we search the corpus for $\ell$ indicative masked patterns $m_{1},\ldots ,m_{\ell}$ that are likely to signal the concept class in $S_{g}$ with high probability. Then, we use the patterns to extend the set.

# 2.1 Finding indicative masked-patterns

A masked pattern $m$ is a sequence of words with a single masked location (marked as "____"), where the mask indicates one or more words. We look for patterns such that, with high probability, instances of the desired semantic class will make good mask replacements, while instances of other classes will make bad replacements. For example, "The capital of ____" is a good pattern for the "countries" class.

We collect $L$ pattern candidates for each seed term $t_j$ by querying a corpus for sentences that contain the term, and replacing the term position with a mask. We then score each of the $kL$ resulting pattern candidates $m_i$, and take the $\ell$-best ones.

Intuitively, we seek a diverse set of patterns in which all seed terms are ranked high (i.e., have a low rank index) in the MLM's prediction: we look for patterns whose worst-fitting seed term is still high on the list of replacement terms. Formally, let $LM(m)$ be the word completions (mask replacements) predicted by the LM for pattern $m$, ranked by their probability, and let $R_{LM}(t,m)$ be the rank (index) of term $t$ in $LM(m)$.

The score of a pattern is then the maximal rank of any of the seed terms:

$$
s(m_i) = \mathrm{maxRank}(m_i) = \max_{t_j \in S} R_{LM}(t_j, m_i) \tag{1}
$$

We then sort the patterns by $s(m_i)$ and take the patterns with minimal values. This min-over-max
| #patt \ #sent | 20 | 100 | 300 | 1000 | 2000 | 4000 |
|---|---|---|---|---|---|---|
| 1 | .794 | .729 | .704 | .843 | .939 | .939 |
| 5 | .834 | .938 | .960 | .969 | .981 | .964 |
| 10 | .839 | .938 | .974 | .978 | .990 | .975 |
| 20 | .838 | .932 | .972 | .987 | .990 | .978 |
| 40 | NA | .916 | .962 | .993 | .993 | .989 |
| 80 | NA | .913 | .954 | .992 | .996 | .993 |
| 160 | NA | NA | .949 | .985 | .998 | .997 |
| 600 | NA | NA | NA | .981 | .994 | .993 |
Table 1: Number of indicative patterns used (#patt), and number of candidate seed-term-containing sentences (#sent) used for selecting these indicative patterns. The set is the NFL team set; the method is MPB1. Every value is an average MAP over 5 seeds (chosen randomly, fixed for all values of #sent and #patt) of size 3. NA: #patt cannot be bigger than #sent.

formulation ensures that the patterns are a good fit for all seed terms.

To achieve the diversity objective, we use the following heuristic: after sorting all candidate patterns $m_i$ by $s(m_i)$, rather than taking the first $\ell$ items we go over the sorted list in order, and keep a pattern only if it differs by at least 50% of its tokens from an already kept pattern. We do this until collecting $\ell$ patterns.

# 2.2 Seed set extension via MLM query

Having identified indicative patterns, we now turn to suggesting terms for expanding the seed set. Each indicative pattern $m_{i}$ naturally provides a ranked list of candidate terms $LM(m_{i}) = t_{1},\dots,t_{|V|}$, where $V$ is the LM's vocabulary and each term $t_{j}$ is scored by its pattern-conditional probability. We combine the term scores from all chosen indicative patterns using a product-of-experts approach, scoring each term by the product of probabilities (sum of log probabilities) assigned to it by each context. Let $p_{LM}(t|m_{i})$ be the probability assigned to vocabulary term $t$ in pattern $m_{i}$. The term score is:

$$
\mathrm{score}(t) = \sum_{i=1}^{\ell} c_i \log p_{LM}(t \mid m_i) \tag{2}
$$

where $c_{i} = \frac{\mathrm{maxRank}(m_{i})^{-1}}{\sum_{j=1}^{\ell} \mathrm{maxRank}(m_{j})^{-1}}$ is a weighing factor for indicative pattern $m_{i}$, giving more weight to "tighter" indicative patterns.

This method is fast and effective, requiring only $\ell$ queries to the LM. However, it assumes that all the desired terms from $S_{g}$ appear as vocabulary items in the LM.
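The two steps above, selecting indicative patterns by their worst-case seed rank (Equation 1) and then scoring candidate terms with rank-weighted log-probabilities (Equation 2), can be sketched as follows. The completion lists and probabilities are invented toy data standing in for a real MLM such as BERT:

```python
import math

def max_rank(pattern, seeds, lm_ranking):
    """Equation 1: the rank of the worst-fitting seed term in the LM's completions."""
    return max(lm_ranking[pattern].index(t) for t in seeds)

def select_patterns(candidates, seeds, lm_ranking, num_patterns):
    """Keep the num_patterns candidates whose max-rank score is smallest."""
    return sorted(candidates, key=lambda m: max_rank(m, seeds, lm_ranking))[:num_patterns]

def poe_scores(terms, patterns, seeds, lm_ranking, p_lm):
    """Equation 2: product-of-experts term scores, weighted by inverse max-rank."""
    inv = [1.0 / max_rank(m, seeds, lm_ranking) for m in patterns]
    weights = [x / sum(inv) for x in inv]
    return {
        t: sum(c * math.log(p_lm[m][t]) for c, m in zip(weights, patterns))
        for t in terms
    }

# Toy LM: completions ranked best-first, and completion probabilities.
lm_ranking = {
    "The capital of ____": ["paris", "rome", "london", "apple"],
    "I ate an ____": ["apple", "egg", "rome", "paris", "london"],
    "____ is a capital city": ["london", "paris", "rome", "apple"],
}
p_lm = {
    "The capital of ____": {"paris": 0.4, "rome": 0.3, "london": 0.2, "apple": 0.01},
    "____ is a capital city": {"london": 0.35, "paris": 0.3, "rome": 0.2, "apple": 0.01},
}
seeds = ["paris", "rome"]
chosen = select_patterns(list(lm_ranking), seeds, lm_ranking, num_patterns=2)
scores = poe_scores(["london", "apple"], chosen, seeds, lm_ranking, p_lm)
```

In the paper's setting the rankings come from BERT over thousands of corpus sentences, and the diversity heuristic filters the sorted candidates; the sketch only illustrates the scoring arithmetic.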
This assumption often does not hold in practice: first, for efficiency reasons, pretrained LM vocabularies are often small ($\sim 50k$ items), precluding rare words. Second, many terms of interest are multi-word units, which do not appear as single items in the LM vocabulary.

# 2.3 Extended coverage via pattern similarity

We seek a term expansion method that utilizes the power of the pre-trained LM without being restricted by its vocabulary: we would like to identify rare words, out-of-domain words, and multi-word units.

Our solution is to generalize the indicative patterns. Rather than looking for terms that match the patterns, we instead search a large corpus for patterns which are similar to the indicative ones, and collect the terms that appear within them. Following the distributional hypothesis, these terms should be of the desired concept class.

By looking at patterns that surround corpus locations, we are no longer restricted by the LM vocabulary to single-token terms.

However, considering all corpus locations as candidate patterns is prohibitively expensive. Instead, we take a ranking approach and restrict ourselves only to corpus locations that correspond to occurrences of candidate terms returned by a high-recall algorithm.

We use the LM to define a similarity measure between two masked patterns that aims to capture our desired notion of similarity: masked patterns are similar if they are likely to be filled by the same terms. Let $top_{q}(LM(m_{i}))$ be the $q$ highest-scoring terms for pattern $m_{i}$. We define the similarity between two patterns as the fraction of shared terms in their top $q$ predictions ($q$ being a hyperparameter):

$$
\mathrm{sim}(m_i, m_j) = |top_q(LM(m_i)) \cap top_q(LM(m_j))| / q
$$

For a candidate term $t$, let $pats(t) = m_1^t, \dots, m_n^t$ be the set of patterns derived from it: sentences that contain $t$, where $t$ is replaced with a mask. Note that $t$ can be an arbitrary word or word sequence. We wish to find terms for which the similarity between $pats(t)$ and the indicative patterns is high. However, since words have different senses, it is sufficient for only some patterns in $pats(t)$ to be similar to patterns in $m_{1},\ldots,m_{\ell}$. We score a term $t$ as:

$$
\mathrm{score}(t) = \sum_{i=1}^{\ell} c_i \max_{m \in pats(t)} \mathrm{sim}(m_i, m) \tag{3}
$$

where $c_{i}$ is the pattern weighing factor from equation (2). As $\sum_{i = 1}^{\ell}c_{i} = 1$, the term score $\mathrm{score}(t)$ for every term $t$ is in $[0,1]$.

| Set | k=1 | k=5 | k=50 | k=300 | k=700 | k=3000 |
|---|---|---|---|---|---|---|
| States | .693 | .848 | .986 | .965 | .972 | .975 |
| NFL | .876 | .939 | .938 | .919 | .921 | .916 |

Table 2: Effect of the similarity measure's $k$ on performance, using MPB2 on a single random seed from each set.

# 3 Experiments and Results

We refer to the method in Section 2.2 as MPB1 and the method in Section 2.3 as MPB2.

Setup. In our experiments we use BERT (Devlin et al., 2019) as the MLM, and English Wikipedia as the corpus. Following previous TSE work (e.g., Mahabal et al., 2018), we measure performance using MAP (using $\mathrm{MAP}_{70}$ for the open set). For each method we report the average MAP over several runs (the exact number is mentioned under each table), each with a different random seed set of size 3. Based on preliminary experiments, for MPB1 we use $\ell = 160$ and $L = 2000 / k$, and for MPB2 we use $\ell = 20$ and $L = 2000 / k$. When comparing different systems (i.e., in Table 3), each system sees the same random seed sets as the others. For smaller sets we expand to a set of size 200, while for the Countries and Capitals sets, which have expected sizes of $>100$, we expand to 350 items.

Dataset. Automatic TSE evaluation is challenging. A good TSE evaluation set should be complete (contain all terms in the semantic class), clean (not contain other terms) and comprehensive (contain all different synonyms for all terms). These are hard to come by. Indeed, previous work either used a small number of sets, or used automatic set-acquiring methods which commonly are not complete. We curated a dataset with 7 closed, well-defined sets, which we make publicly available.
The sets are National football league teams (NFL, size:32), Major league baseball + +teams (MLB, 30), US states (States, 50), Countries (Cntrs, 195), European countries (Euro, 44) Capital cities (Caps, 195) and Presidents of the USA (Pres, 44). We also provide on one open class set: Music Genres (Genre). This set created by manually verifying the items in the union of the output of all the different algorithms. This set contains around 600 unique items. + +Compared Methods. We compare our methods, MPB1 (MLM-pattern-based) (Section 2.2) and MPB2 $^{8}$ (Section 2.3), to two state-of-the-art systems: setExpander $^{9}$ (SE) (Mamou et al., 2018), and category builder (CB) (Mahabal et al., 2018). We also compare to two baselines: The first, BB (basic-BERT), is a baseline for MPB1. This is a BERT-based baseline that uses the MPB1 method on patterns derived from sentences that include seed terms, without the selection method described in Section 2.1. The second, s2v, is a baseline for MPB2. This is a basic distributional method that uses sense2vec (Trask et al., 2015) representations, $^{10}$ which is also our candidate acquisition method for MPB2 (A). As MPB2 relies on external candidate generation, we also report on the oracle case MPB2+O where we expand the s2v-generated candidate list to include all the members of the class. + +Main Results. Our main results are reported in Table 3. Our first method, MPB1, achieves the best scores on two of the three sets suitable for its limitations (where all or most of the set's terms are in the LM's vocabulary), and second-best results on the third.[11] MPB2 outperforms all other methods on 5 out of 7 closed sets when assuming gold-standard candidates $(\mathrm{MPB2 + O})$ , and even when considering the missing candidates it outperforms other expanders on 4 out of 7 closed sets, averaging the best MAP score on all sets. While other + +8We follow (Mahabal et al., 2018) and limit MPB2 to 200,000 most frequent terms. 
MPB2 can work with any number of terms and is limited only by the candidate-supplying method (in this implementation, sense2vec, which has $\sim 3,400,000$ terms).
9We use the non-grouping release version because it reaches better results on our dataset than the grouping one.
10https://explosion.ai/demos/sense2vec
11MPB1's relatively poor performance on the Presidents set may be a result of the basic terms MPB1 considers. MPB1 ranks only terms which are in the LM's vocabulary, which means that while other expanders can rank terms like "President George W. Bush", MPB1 will consider terms like "bush", which are harder to ascribe to the Presidents set. While this is true for all sets, it seems to be more significant for a set containing person names.
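For reference, the MAP numbers reported in the tables reduce to a standard average-precision computation per seed set. This generic sketch (ours; not the exact $\mathrm{MAP}_{70}$ variant used for the open set) shows the idea:

```python
def average_precision(ranked, gold):
    # AP of a ranked expansion list against a gold term set:
    # precision@rank accumulated at each hit, divided by |gold|.
    hits, ap = 0, 0.0
    for rank, term in enumerate(ranked, start=1):
        if term in gold:
            hits += 1
            ap += hits / rank
    return ap / len(gold) if gold else 0.0
```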
| Method | NFL | MLB | Pres | States | Cntrs | Euro | Caps | Genre | Avg |
|---|---|---|---|---|---|---|---|---|---|
| SE (SetExpander) | .54 | .45 | .33 | .55 | .55 | .61 | .14 | .99 | .52 |
| CB (CategoryBuilder) | .98 | .97 | .70 | .93 | .74 | .46 | .21 | .67 | .71 |
| BB (BERT Baseline) | .91 | .92\* | .52\*\* | NA | NA | NA | NA | NA | .78† |
| MPB1 (Section 2.2) | .98 | .99\* | .63\*\* | NA | NA | NA | NA | NA | .87† |
| S2V (Sense2Vec Baseline) | .95 | .80 | .18 | .94 | .71 | .78 | .21 | .90 | .68 |
| MPB2 (Section 2.3) | .95 | .82 | .37 | .98 | .76 | .79 | .27 | .98 | .74 |
| MPB2+O (Sec 2.3, Oracle) | .95 | .90 | .88 | .98 | .91 | .81 | .80 | NA' | .89† |
methods tend to stand out on either closed sets (CB) or the open set (SE),$^{12}$ MPB2 shows good performance on both kinds of sets. The results also suggest that a better candidate-acquiring method may lead to even better performance.

Additional experiments. How many sentences should we query when searching for indicative patterns, and how many patterns should we retain? Table 1 shows a grid of these parameters. We use the NFL set for this experiment, as the terms in this set all have more than one meaning, and for most of them the common usage is not the one that belongs to the NFL set (e.g. "jets", "dolphins"). Therefore, this set should give a pessimistic estimate of the number of sentences we need to extract to find quality indicative patterns. The results imply that $\sim 2000$ appearances of seed terms are sufficient, and that good results can also be obtained with fewer instances. This shows that, beyond the data used to train the initial MLM, we do not require a large corpus to achieve good results, suggesting applicability in new domains as well.$^{13}$

How sensitive is the algorithm to the choice of $k$ when computing the pattern similarity? Table 2 shows that the similarity measure is effective for various $k$ values, with maximal performance at $\sim 50$.

Finally, how do the different methods behave when the seed terms are part of a subset?

Table 3: Main results. Average MAP scores over 3 random seeds of size 3. \*/\*\*: excluding 2 or 3 OOV terms. NA: not applicable, because the sets contain many OOV terms. NA': not applicable in the oracle setting, because gold-standard candidates are not available for open sets. †: average over applicable sets only.

| Set | S2V | CB | SE | MPB2 | MPB2+O |
|---|---|---|---|---|---|
| Euro | .782 | .458 | .609 | .787 | .814 |
| Cntrs | .454 | .752 | .197 | .528 | .804 |

Table 4: Performance on a subset. Average MAP over 3 random seeds of size 3.

Table 4 shows a case where the seed terms are European countries. Ideally, we would like the top results to be European countries, later results to be non-European countries, and then unrelated terms. MPB2+O achieves the best MAP scores on both the set and the subset. In the subset case, even when not provided with all oracle terms, MPB2 is better than all other expanders. While other expanders tend to reach stronger results on either the set or the subset, MPB2+O achieves similar scores on both.

# 4 Conclusions

We introduce an LM-based TSE method, reaching state-of-the-art results. The method uses the power of LM predictions to locate indicative patterns for the concept class indicated by the seed terms, and then to generalize these patterns to other corpus locations. Beyond strong TSE results, our method demonstrates a novel use of pre-trained MLMs, using their predictions directly rather than relying on their states for fine-tuning.

# Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT).

# References

Asaf Amrami and Yoav Goldberg. 2018. Word sense induction with neural biLM and symmetric patterns. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4860-4867. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186.
Yoav Goldberg. 2019.
Assessing bert's syntactic abilities. CoRR, abs/1901.05287. +Sonal Gupta and Christopher D. Manning. 2014. Improved pattern learning for bootstrapped entity extraction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, CoNLL 2014, Baltimore, Maryland, USA, June 26-27, 2014, pages 98-108. +Zellig S. Harris. 1968. Mathematical structures of language, volume 21 of Interscience tracts in pure and applied mathematics. Interscience Publ. +Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In 14th International Conference on Computational Linguistics, COLING 1992, Nantes, France, August 23-28, 1992, pages 539-545. +John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word representations. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 4129-4138. +Donald Hindle. 1990. Noun classification from predicate-argument structures. In 28th Annual Meeting of the Association for Computational Linguistics, 6-9 June 1990, University of Pittsburgh, Pittsburgh, Pennsylvania, USA, Proceedings, pages 268-275. +Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. TACL, 4:521-535. +Abhijit Mahabal, Dan Roth, and Sid Mittal. 2018. Robust handling of polysemy via sparse representations. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, *SEM@NAACL-HLT, New Orleans, Louisiana, USA, June 5-6, 2018, pages 265-275. Association for Computational Linguistics. + +Jonathan Mamou, Oren Pereg, Moshe Wasserblat, Alon Eirew, Yael Green, Shira Guskin, Peter Izsak, and Daniel Korat. 2018. Term set expansion based NLP architect by intel AI lab. pages 19-24. +Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. 
In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings. +Hiroaki Ohshima, Satoshi Oyama, and Katsumi Tanaka. 2006. Searching coordinate terms with their context from the web. In Web Information Systems - WISE 2006, 7th International Conference on Web Information Systems Engineering, Wuhan, China, October 23-26, 2006, Proceedings, pages 40-47. +Patrick Pantel, Eric Crestan, Arkady Borkovsky, Ana Maria Popescu, and Vishnu Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP 2009, 6-7 August 2009, Singapore, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 938-947. +Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 23-26, 2002, Edmonton, Alberta, Canada, pages 613-619. +Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick S. H. Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander H. Miller. 2019. Language models as knowledge bases? pages 2463-2473. +Shuming Shi, Huibin Zhang, Xiaojie Yuan, and JiRong Wen. 2010. Corpus-based semantic class mining: Distributional vs. pattern-based approaches. In COLING 2010, 23rd International Conference on Computational Linguistics, Proceedings of the Conference, 23-27 August 2010, Beijing, China, pages 993-1001. +Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. pages 4593-4601. +Andrew Trask, Phil Michalak, and John Liu. 2015. sense2vec - A fast and accurate method for word sense disambiguation in neural word embeddings. CoRR, abs/1511.06388. +Richard C. Wang and William W. Cohen. 2007. Language-independent set expansion of named entities using the web. 
In Proceedings of the 7th IEEE International Conference on Data Mining (ICDM 2007), October 28-31, 2007, Omaha, Nebraska, USA, pages 342-350. + +Huibin Zhang, Mingjie Zhu, Shuming Shi, and Ji-Rong Wen. 2009. Employing topic models for pattern-based semantic class discovery. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, Singapore, pages 459-467. + +# A Appendix A: Finding candidate terms + +For our first method, MPB1, the candidate terms we score are just the terms in the LM's vocabulary. For our second method, MPB2, we want to score candidates which are not in this vocabulary as well. Hence, we need a way to acquire these candidates. As running on all possible terms is prohibitive, we seek an efficient method to acquire a high-recall group of candidates for the desired semantic class. We get this using a simple distributional set-expander: we compute the mean vector for words in our seed set, and look for the top-k neighbours in a distributional space. + +Specifically, we use the sense2vec pretrained vectors. Sense2vec (Trask et al., 2015) is a misleadingly-named algorithm from the w2v-family (Mikolov et al., 2013) that models each term as "term-part of speech". This allows it, for example, to learn different representations for "duck-verb" and "duck-noun". + +More importantly, the pre-trained sense2vec vectors distributed by explosion.ai $^{14}$ are trained over a large and diverse English corpus (reddit posts and comments from 2015 and 2019), and its vocabulary includes not only single words but also multi-word units (NP-chunks and named entities). 
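The candidate-acquisition step described above (mean seed vector, then nearest neighbours in the distributional space) can be sketched as follows; the tiny in-memory vector table is a toy stand-in for the pre-trained sense2vec vectors:

```python
import numpy as np

def expand_candidates(seed_terms, vectors, k):
    # Mean vector of the seed terms, then the top-k cosine neighbours
    # among all non-seed terms (a minimal distributional set-expander).
    mean = np.mean([vectors[t] for t in seed_terms], axis=0)
    mean = mean / np.linalg.norm(mean)
    scored = [(float(v @ mean / np.linalg.norm(v)), term)
              for term, v in vectors.items() if term not in seed_terms]
    return [term for _, term in sorted(scored, reverse=True)[:k]]
```

The returned high-recall candidate list is then re-ranked by the pattern-based scorer of equation (3).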
# A Two-Step Approach for Implicit Event Argument Detection

Zhisong Zhang, Xiang Kong, Zhengzhong Liu, Xuezhe Ma, Eduard Hovy

Language Technologies Institute, Carnegie Mellon University

{zhisongz, xiangk, liu, xuezhem, hovy}@cs.cmu.edu

# Abstract

In this work, we explore the implicit event argument detection task, which studies event arguments beyond sentence boundaries. The addition of cross-sentence argument candidates imposes great challenges for modeling. To reduce the number of candidates, we adopt a two-step approach, decomposing the problem into two sub-problems: argument head-word detection and head-to-span expansion. Evaluated on the recent RAMS dataset (Ebner et al., 2020), our model achieves overall better performance than a strong sequence labeling baseline.
We further provide detailed error analysis, presenting where the model mainly makes errors and indicating directions for future improvements. It remains a challenge to detect implicit arguments, calling for more future work on document-level modeling for this task.

# 1 Introduction

Event argument detection is a key component in the task of event extraction. It resembles semantic role labeling (SRL) in that the main target is to find argument spans to fill the roles of event frames. However, event arguments can go beyond sentence boundaries: there can be non-local or implicit arguments at the document level. Figure 1 shows such an example: for the purchase event, which is triggered by the word "bought", its money argument appears in the previous sentence.

(a) The new computer cost 3000 dollars, while the old one cost 1000 dollars. Nevertheless, he still bought the more expensive one.
(b) The new computer cost 3000 dollars, while the old one cost 1000 dollars. Therefore, he bought the cheaper one.

![](images/28dcfb1dc7d507681caaf8129b4a4d6baa71aa809a8ad6208b0cf3f228ba1d91.jpg)
Figure 1: Examples of implicit arguments and model illustration. The bold text indicates the trigger word for the purchase event, while the underlined text indicates its non-local "money" argument in the previous sentence. Our model first detects the head-word "dollars", and then expands it to the whole span.

Implicit arguments have been under-explored in event extraction. Most previous systems (Li et al., 2013; Chen et al., 2015; Nguyen et al., 2016; Wang et al., 2019) only consider local arguments in the same sentence as the event trigger. Incorporating implicit arguments requires corresponding annotations, but few exist in most of the widely used event datasets, like ACE2005 (LDC, 2005; Walker et al., 2006) and RichERE (LDC, 2015). There are several annotation efforts for implicit arguments
in SRL, including $G\&C$ (Gerber and Chai, 2010, 2012), SemEval-2010 (Ruppenhofer et al., 2009, 2010), and 80Days (Feizabadi and Padó, 2014). Yet most are performed with different ontologies, such as NomBank ($G\&C$) and FrameNet (SemEval-2010 and 80Days); on different domains (e.g. novels); and at smaller scales ($G\&C$ and 80Days only cover 10 types of predicates). The lack of annotations poses challenges for training and transferring implicit argument models for event extraction.

Recently, Ebner et al. (2020) created the Roles Across Multiple Sentences (RAMS) dataset, which covers multi-sentence implicit arguments for a wide range of event and role types. They further develop a span-based argument linking model and achieve relatively high scores. However, they mainly explore a simplified setting that assumes the availability of gold argument spans. We extend their work and explore the more challenging full detection problem, which predicts argument spans among all possible candidates. The difficulty of the full problem is highlighted in Figure 1. Both "3000 dollars" and "1000 dollars" are good candidates for the money role of the purchase event, but the selections differ given different contexts.

When considering all possible candidate spans that may occur in any sentence, their quadratic number poses great challenges for detection. Inspired by dependency-based SRL (Surdeanu et al., 2008; Hajic et al., 2009), we take the syntactic head-words as a proxy for full argument spans, hypothesizing that the head-words contain enough information to fill the argument roles. Based on this, we adopt a two-step approach: first detecting the head-words of the arguments, then adopting a second step of head-to-span expansion.
This type of two-step setup is not uncommon in prior work on information extraction, including entity detection (Lin et al., 2019), coreference resolution (Peng et al., 2015) and document-level pseudo-coreference (Jauhar et al., 2015; Liu et al., 2016). By considering only individual tokens in the detection step, the system only needs to handle a candidate space whose size scales linearly with the number of tokens instead of quadratically.

With the same setting of fine-tuning a BERT (Devlin et al., 2019) encoder, we show the effectiveness of our model by obtaining overall better results than a strong sequence-labeling model. We further provide detailed error analysis, showing that the main difficulties of the task lie in non-local and non-core arguments. Our analysis shows that the implicit argument task is quite challenging, calling for more future work on document-level semantic understanding for this task.

# 2 Model

The goal of event argument detection is to create labeled links between argument spans and the predicate (event trigger). Recent state-of-the-art solutions for sentence-level SRL perform the detection in an end-to-end setting, such as span-based (He et al., 2018; Ouchi et al., 2018) and sequence labeling models (He et al., 2017; Shi and Lin, 2019). However, span-based models face great challenges when considering arguments across sentence boundaries, since the computational complexity of such models grows quadratically to handle the $O(N^2)$ span candidates given $N$ tokens. While traditional sequence labeling models run in linear time, they are less flexible and extensible in complex scenarios such as overlapping mentions and multiple roles for one mention. In this work, we take a two-step approach that decomposes the problem explicitly into two sub-problems, based on the hypothesis that head-words can usually capture the information of the mention spans.
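The linear-versus-quadratic argument above is easy to make concrete; a hedged sketch (helper names are ours, not the paper's):

```python
def span_candidates(n_tokens, max_len=None):
    # All (start, end) argument-span candidates: O(N^2) of them.
    max_len = max_len or n_tokens
    return [(i, j) for i in range(n_tokens)
            for j in range(i + 1, min(i + max_len, n_tokens) + 1)]

def head_candidates(n_tokens):
    # Head-word detection scores single tokens only: O(N) candidates.
    return list(range(n_tokens))
```

For a 5-sentence window of, say, 150 tokens, this is 11,325 spans versus 150 token candidates.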
Figure 1 illustrates the three main modules of our model: 1) BERT-based Encoder, 2) Argument Head-word Detector, and 3) Head-to-span Expander.

# 2.1 BERT-based Encoder

Our encoding module is a BERT-based contextualized encoder. The input contains a predicate word (or occasionally a span), which triggers an event, together with its multi-sentence context. We refer to the sentence containing the event trigger as the center sentence. We concatenate the tokens within the 5-sentence window (the window size used in RAMS annotation) of the center sentence, and feed them to BERT to obtain the contextual representation $\mathbf{e}$ of each token. In addition, we add special token_type_ids indicators: tokens of the event trigger are assigned 0, other tokens in the center sentence get 1, and tokens in surrounding sentences get 0$^{1}$. We only adopt the indicators when fine-tuning BERT, since the pre-trained BERT originally uses them as segment ids.

# 2.2 Argument Head-word Detector

Instead of directly deciding argument spans, we first identify the head-words of the arguments. The hypothesis is that the head-word is able to represent the meaning of the whole span. In this way, this sub-problem mimics a token-pairwise dependency-parsing problem. Following Dozat and Manning (2017, 2018), we adopt a biaffine module to calculate $\operatorname{Pr}_r(p, c)$: the probability of a candidate word $c$ filling an argument role $r$ in the frame governed by a predicate $p$. We first take the contextualized representations of the candidate ($\mathbf{e}_c$) and the predicate ($\mathbf{e}_p$), which are calculated by BERT as described in §2.1.
"Biaffine" further gives the pairwise score based on these representations, and $\operatorname{Pr}_r(p, c)$ is then + +given by softmax with the scores: + +$$ +\operatorname * {P r} _ {r} (p, c) = \frac {\exp \operatorname {B i a f f i n e} _ {r} (\mathbf {e} _ {p} , \mathbf {e} _ {c})}{\sum_ {c ^ {\prime} \in \mathcal {C} \cup \{\epsilon \}} \exp \operatorname {B i a f f i n e} _ {r} (\mathbf {e} _ {p} , \mathbf {e} _ {c ^ {\prime}})} +$$ + +where the normalization is done over the argument candidate set $\mathcal{C}$ (or null $\epsilon$ , whose score is fixed to 0) for each role, following (Ebner et al., 2020; Ouchi et al., 2018). During training, we use the cross-entropy loss to guide the network to pick head-words of gold arguments (or $\epsilon$ if there are no arguments for this role). If there are multiple arguments for one role, we view them as individual instances and sum the losses. At inference time, we simply pick the maximumly-scored argument (or $\epsilon$ ) for each role. + +# 2.3 Head-to-span Expander + +The second module expands each head-word of the argument to its full span. We view it as a combination of left and right boundary classification problems. Taking the left-expanding scenario (L) as example, for each head-word $h$ , we generate a set of candidate spans by adding words one by one on the left up to $K$ words (we empirically set $K = 7$ ), and calculate the probability of word $b$ being the boundary as follow: + +$$ +\operatorname * {P r} _ {L} (h, b) = \frac {\exp \operatorname {M L P} _ {L} (\mathbf {e} _ {h} , \mathbf {e} _ {b})}{\sum_ {b ^ {\prime} \in (h - K , h ]} \exp \operatorname {M L P} _ {L} (\mathbf {e} _ {h} , \mathbf {e} _ {b ^ {\prime}})} +$$ + +Here, the input to the Multi-layer Perceptron (MLP) is again the contextualized representations as depicted in §2.1. During training, we minimize cross-entropy losses on the left and right respectively. At test time, we expand to the maximumly-scored boundary words on both sides. 
# 3 Experiment

We conduct all experiments$^{2}$ on the RAMS (v1.0) dataset and focus on the event argument detection task: given (gold) event triggers and their multi-sentence contexts, predicting the argument spans from raw input tokens. Following (Ebner et al., 2020), we only use gold event types in the type-constrained decoding (TCD) setting.

Throughout our experiments, we adopt the pretrained bert-base-cased model. We train all the models for at most 20 epochs. If fine-tuning BERT, we set the initial learning rate to 5e-5; otherwise, it is set to 2e-4. We jointly train our
| | +TCD | Dev. F1 | Test P | Test R | Test F1 |
|---|---|---|---|---|---|
| Span | no | 69.9 | 62.8 | 74.9 | 68.3 |
| | yes | 75.1 | 78.1 | 69.2 | 73.3 |
| Head | no | 71.0 | 71.5 | 66.2 | 68.8 |
| | yes | 74.3 | 81.1 | 66.2 | 73.0 |
Table 1: Comparison of Span-based (Ebner et al., 2020) and Head-based (ours) models on RAMS, given gold argument spans. "+TCD" indicates whether type-constrained decoding based on gold event types is applied.

argument-detector and span-expander, with loss multipliers of 1.0 and 0.5, respectively.

Since head-words are not annotated, we apply a simple rule: utilizing predicted dependency trees, we heuristically pick the word that has the smallest arc distance to the dependency root as the head. Ties are broken by choosing the rightmost one. This procedure does not always give the perfect head, and sometimes there is no single head-word for a span (e.g., in multi-word expressions or conjunctions). Nevertheless, we find this strategy works well in practice.

# 3.1 Argument Linking with Gold Spans

Setting To compare our model with span-based models, we first evaluate in the same setting as (Ebner et al., 2020), which assumes gold argument spans. We directly apply the head rule to the gold spans and consider the head-words as candidates. We also adopt the same BERT setting: learning a linear combination of layers 9, 10, 11 and 12, and applying neither the special input indicators nor fine-tuning.

Results Table 1 compares our results with the reported results of the span-based model from (Ebner et al., 2020). The results show that the head-word approach obtains results comparable to its span-based counterpart. This matches our hypothesis that head-words capture sufficient information about surrounding words through contextualized embeddings, making them reasonable alternatives to full argument spans.

# 3.2 Full Argument Detection

Setting This setting considers all arguments from any spans in the multi-sentence context. Unless otherwise noted, here we use the last layer of BERT and apply fine-tuning for the whole model. We compare with a strong BERT-based BIO-styled sequence labeling model (Shi and Lin, 2019). We
| | +TCD | Dev. Span-F1 | Dev. Head-F1 | Test Span-F1 | Test Head-F1 |
|---|---|---|---|---|---|
| Seq. | no | 38.1±0.7 | 45.7±0.7 | 39.3±0.4 | 47.1±0.7 |
| | yes | 39.2±0.7 | 46.7±0.8 | 40.5±0.4 | 48.0±0.5 |
| Head | no | 38.9±0.6 | 46.4±0.7 | 40.1±0.7 | 47.7±0.9 |
| | yes | 40.3\*±0.6 | 48.0\*±0.7 | 41.8\*±0.6 | 49.7\*±0.8 |
Table 2: Comparison of the sequence-labeling model (Seq.) and our Head-based model for argument detection on RAMS v1.0. All results are averaged over five runs; $*$ denotes that the result of the Head model is significantly better than that of the corresponding Seq. model (by paired randomization test, $p < 0.05$).
| | Span-F1 | Head-F1 |
|---|---|---|
| BERT-Full | 38.9±0.6 | 46.4±0.7 |
| No-Indicator | 35.6±0.4 | 42.9±0.4 |
| No-FineTuning | 34.4±0.5 | 40.0±0.4 |
| LSTM | 26.6±0.4 | 31.9±0.6 |
adopt a modified version$^{3}$ from AllenNLP and retrain it on RAMS with similar settings: adopting the special input indicators and fine-tuning BERT. For arguments that have multiple role labels, we simply concatenate the labels as a new class.

Results Table 2 shows the main results for full argument detection. Since the criterion of full-span matching might be too strict, we also report head-word based F1 scores by evaluating solely on head-word matches (obtained using the same head rules). The results show that our head-word based approach gets better results on average without type-constrained decoding, and significantly better results after adopting type-constrained decoding with gold event types. Our head-driven approach is also flexible and easily extensible to more complex scenarios like nested mentions or multiple roles, while keeping the linear complexity.

Ablation Table 3 lists the ablation results on the encoder. The results show that the BERT encoder contributes much to the performance of our full

Table 3: Ablation on the encoder for the head-based argument detection model (on development set, no type-constrained decoding). "BERT-Full" is our full fine-tuned BERT encoder, "No-Indicator" ablates the indicator inputs, "No-FineTuning" freezes all pre-trained parameters of BERT, and "LSTM" replaces BERT with a bi-directional LSTM encoder.
| | d=-2 (3.6%) | d=-1 (7.5%) | d=0 (82.8%) | d=1 (4.0%) | d=2 (2.1%) |
|---|---|---|---|---|---|
| Seq. | 14.0±0.6 | 14.0±2.4 | 41.2±0.9 | 15.7±1.0 | 4.2±2.5 |
| Head | 15.6±1.7 | 15.3±1.0 | 43.4±0.7 | 17.8±2.6 | 8.5±6.2 |
Table 4: Performance breakdown of Span-F1 by argument-trigger distance $d$ (on development set, no type-constrained decoding). Numbers in parentheses in the header row indicate the distribution over distance $d$.

model. Fine-tuning BERT and the special indicator inputs provide further improvements.

On Sentence Distances Table 4 lists the performance breakdown over different sentence distances between arguments and triggers. As opposed to the relatively consistent performance in the gold span setting, as shown in (Ebner et al., 2020), we notice a dramatic performance drop on non-local arguments. There may be two main reasons: 1) data imbalance, since non-local implicit arguments appear much less frequently (only around $18\%$ in RAMS) than local ones; 2) lack of direct syntax signals, making the connections between implicit arguments and event triggers much weaker than for local ones.

On Argument Roles We also investigate performance breakdowns over different argument roles. The results are shown in Figure 2, where we take the top-20 most frequent roles to get more robust results. We observe that our model performs better on core roles such as "communicator", "employee" and "victim" (with $\mathrm{F1} > 50$), but struggles on non-core roles, like "instrument", "origin" and "destination", with F1 scores of around 20 to 30. The F1 scores correlate well (with Pearson and Spearman correlation coefficients of 0.64 and 0.70, respectively) with the local percentages: the more often a role appears locally around the event trigger, the better results it obtains. These patterns are not surprising if we consider the possible underlying reasoning: non-core arguments are not closely related to the event trigger, and thus can appear more freely at other places (or sometimes even be omitted), leading to a lower local percentage and making them harder to detect.
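The correlation just quoted between a role's local-argument percentage and its Span-F1 is plain Pearson correlation; a small reimplementation (ours, not the authors' analysis script) for concreteness:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```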
+ +# 3.3 Manual Analysis + +To further investigate in detail what type of errors the model makes, we sample 200 event frames from the development set and manually compare our model's predictions with the gold annotations. Overall, there are 459 annotated arguments and 442 + +
| Category | Description | Example | Count (Percentage) |
|---|---|---|---|
| Correct | Correct | - | 348 (38.6%) |
| Span | Unimportant span mismatch | The [monument]$_{artifact}$ to fallen Soviet sailors$_{artifact}$ in Limbazi, was demolished by activists. | 82 (9.1%) |
| Coref. | Co-references | The United States$_{destination}$ gets more energy domestically, as [the country]$_{destination}$ continues to rely on oil **imports**$_{Transport}$ from elsewhere | 60 (6.7%) |
| Possi. | Possible annotation problems | A Chinese official$_{participant}$ said **dialogue**$_{Discussion}$ was needed to resolve issues on the Korean peninsula. | 44 (4.9%) |
| Partial | Partially correct | [His]$_{recipient}$ family, advisers and allies$_{recipient}$ set about **acquiring**$_{Purchase}$ expensive overseas homes and positions in the country. | 26 (2.9%) |
| Frame | Frame errors | Relation was wrecked last November when [Turkey]$_{killer/attacker}$ **shot**$_{LifeDie}$ down a fighter jet over the boarder. | 31 (3.4%) |
| Others | Other errors | - | 310 (34.4%) |
+ +Table 5: Examples and results of error analysis. In the examples, the bold text indicates the trigger word, followed by its event type noted in green. Arguments in gold annotations are indicated by the underlined spans with red role types, while the predicted arguments are indicated by [bracketed] spans with blue role types. + +![](images/951a2d938a51ec1a7906e5b0ae24f4aa9f29e59fe271792f5ce31c28ebd04f6b.jpg) +Figure 2: Performance breakdown of Span-F1 on the top-20 frequent roles (on development set, no type-constrained decoding). $x$ -axis represents the percentage of local arguments for this role, while $y$ -axis denotes the role specific Span-F1 scores. The two blue dashed lines denote the overall F1 scores (0.389) and local percentage (82.8%). + +predicted ones. For both annotated and predicted arguments, we assign them to one of seven categories, and the results are listed in Table 5. Here, the "Span" errors denote unimportant span mismatches, and they take nearly $9\%$ of all items. If we ignore these errors, the performance can reach around $47\%$ , which roughly matches the automatically evaluated Head-F1 scores. In some way, this supports our intuition to adopt a two-step approach, since the decisions of the span ranges may be separated from the core problem of argument detection, where head-words can be reasonable representatives. Another major source of errors comes from "Coref.", which is not surprising since the + +same entities can have multiple appearances at the document level. Our analysis indicates that this is a problem that should be further investigated for both modeling and evaluation. Another notable type of error is frame mismatch ("Frame"). In the main setting (without type-constrained decoding), our model neither utilizes nor predicts event frame types, meaning that the frame information purely comes from the trigger words. Therefore, roles belonging to other event frames may be predicted. 
Finally, the "Others" category includes the cases where we cannot find obviously intuitive patterns. We would identify most of them as the more difficult cases, whose error breakdown follows similar patterns to the overall ones as shown in Figure 2. + +# 4 Conclusion + +In this work, we propose a flexible two-step approach for implicit event argument detection. Our head-word based approach effectively reduces the candidate size and achieves good results on the RAMS dataset. We further provide a detailed error analysis, showing that non-local and non-core arguments are the main difficulties. We hope that this work can shed some light on the problem and inspire future work along this line of research. + +# Acknowledgment + +This research was supported in part by DARPA grant FA8750-18-2-0018 funded under the AIDA program. We thank the three anonymous reviewers for their helpful comments. + +# References + +Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi-pooling convolutional neural networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 167-176, Beijing, China. Association for Computational Linguistics. +Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics. +Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR. +Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing.
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484-490, Melbourne, Australia. Association for Computational Linguistics. +Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, and Benjamin Van Durme. 2020. Multi-sentence argument linking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. +Parvin Sadat Feizabadi and Sebastian Padó. 2014. Crowdsourcing annotation of non-local semantic roles. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, volume 2: Short Papers, pages 226-230, Gothenburg, Sweden. Association for Computational Linguistics. +Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A study of implicit arguments for nominal predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583-1592, Uppsala, Sweden. Association for Computational Linguistics. +Matthew Gerber and Joyce Y. Chai. 2012. Semantic role labeling of implicit arguments for nominal predicates. Computational Linguistics, 38(4):755-798. +Jan Hajic, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Antonia Martí, Lluis Marquez, Adam Meyers, Joakim Nivre, Sebastian Padó, Jan Štepanek, Pavel Stranák, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of + +the Thirteenth Conference on Computational Natural Language Learning (CoNLL 2009): Shared Task, pages 1-18, Boulder, Colorado. Association for Computational Linguistics. +Luheng He, Kenton Lee, Omer Levy, and Luke Zettlemoyer. 2018. Jointly predicting predicates and arguments in neural semantic role labeling. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 364-369, Melbourne, Australia. Association for Computational Linguistics. 
+Luheng He, Kenton Lee, Mike Lewis, and Luke Zettlemoyer. 2017. Deep semantic role labeling: What works and what's next. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 473-483, Vancouver, Canada. Association for Computational Linguistics. +Sujay Kumar Jauhar, Raul Guerra, Edgar González Pellicer, and Marta Recasens. 2015. Resolving discourse-deictic pronouns: A two-stage approach to do it. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 299-308, Denver, Colorado. Association for Computational Linguistics. +LDC. 2005. ACE (automatic content extraction) english annotation guidelines for events version 5.4.3. Linguistic Data Consortium. +LDC. 2015. Deft Rich ERE annotation guidelines: Events version 3.0. Linguistic Data Consortium. +Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73-82, Sofia, Bulgaria. Association for Computational Linguistics. +Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5182-5192, Florence, Italy. Association for Computational Linguistics. +Zhengzhong Liu, Edgar Gonzalez Pellicer, and Daniel Gillick. 2016. Exploring the steps of verb phrase ellipsis. In Proceedings of the Workshop on Coreference Resolution Beyond OntoNotes (CORBON 2016), pages 32-40, San Diego, California. Association for Computational Linguistics. +Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. 
In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300-309, San Diego, California. Association for Computational Linguistics. + +Hiroki Ouchi, Hiroyuki Shindo, and Yuji Matsumoto. 2018. A span selection model for semantic role labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1630-1642, Brussels, Belgium. Association for Computational Linguistics. +Haoruo Peng, Kai-Wei Chang, and Dan Roth. 2015. A joint framework for coreference resolution and mention head detection. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 12-21, Beijing, China. Association for Computational Linguistics. +Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2009. SemEval-2010 task 10: Linking events and their participants in discourse. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 106-111, Boulder, Colorado. Association for Computational Linguistics. +Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2010. SemEval-2010 task 10: Linking events and their participants in discourse. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 45-50, Uppsala, Sweden. Association for Computational Linguistics. +Peng Shi and Jimmy Lin. 2019. Simple BERT models for relation extraction and semantic role labeling. arXiv preprint arXiv:1904.05255. +Mihai Surdeanu, Richard Johansson, Adam Meyers, Lluis Marquez, and Joakim Nivre. 2008. The CoNLL 2008 shared task on joint parsing of syntactic and semantic dependencies. In CoNLL 2008: Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 159-177, Manchester, England. Coling 2008 Organizing Committee.
+Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium, 57. +Xiaozhi Wang, Ziqi Wang, Xu Han, Zhiyuan Liu, Juanzi Li, Peng Li, Maosong Sun, Jie Zhou, and Xiang Ren. 2019. HMEAE: Hierarchical modular event argument extraction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5776-5782, Hong Kong, China. Association for Computational Linguistics. \ No newline at end of file diff --git a/atwostepapproachforimpliciteventargumentdetection/images.zip b/atwostepapproachforimpliciteventargumentdetection/images.zip new file mode 100644 index 0000000000000000000000000000000000000000..e93f8285aeeef62730c9ad48f4967fe916a15884 --- /dev/null +++ b/atwostepapproachforimpliciteventargumentdetection/images.zip @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b9ae0295f7545ca6947e75817a219784dd5e033b1569f3a97eedaff783c92488 +size 234706 diff --git a/atwostepapproachforimpliciteventargumentdetection/layout.json b/atwostepapproachforimpliciteventargumentdetection/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..1fee94e46add269003fe5d1755731dc0cc3acf2a --- /dev/null +++ b/atwostepapproachforimpliciteventargumentdetection/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8e1ffdc20b81dcc9fbe7628cc9dbba6ced554149b3980af73f299a39bef04002 +size 196761 diff --git a/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_content_list.json b/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_content_list.json new file mode 100644 index 0000000000000000000000000000000000000000..da2546bd9b589b2773181e291191ae8bdce3bf97 --- /dev/null +++ 
b/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_content_list.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2233524149f35970bb4482957cc82fe9e86c45359c428090ae1e8c0e2743668a +size 79667 diff --git a/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_model.json b/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_model.json new file mode 100644 index 0000000000000000000000000000000000000000..710d78265b872809001fce6590166fd082d9bad2 --- /dev/null +++ b/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_model.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:4d841534c6204a8b1edff9c2b16e9afc13d91174fb3c8bd1169ea129134dfafc +size 102556 diff --git a/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_origin.pdf b/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_origin.pdf new file mode 100644 index 0000000000000000000000000000000000000000..7e123e7badd2afe9658c339db4501145e1795b0a --- /dev/null +++ b/aunifiedmrcframeworkfornamedentityrecognition/4e829556-0b14-46fd-ab71-3533c6a06237_origin.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47ff91d3d07a46f016e80be34d021e9f66fd22278f34a4269ed6e9ea97b77bae +size 951374 diff --git a/aunifiedmrcframeworkfornamedentityrecognition/full.md b/aunifiedmrcframeworkfornamedentityrecognition/full.md new file mode 100644 index 0000000000000000000000000000000000000000..afae67e6b156532ccd16bc16efc4770e94466187 --- /dev/null +++ b/aunifiedmrcframeworkfornamedentityrecognition/full.md @@ -0,0 +1,353 @@ +# A Unified MRC Framework for Named Entity Recognition + +Xiaoya Li\*, Jingrong Feng\*, Yuxian Meng\*, Qinghong Han\*, Fei Wu\* and Jiwei Li\*\* + +$\spadesuit$ Department of Computer Science and Technology, Zhejiang University + +* Shannon.AI + +{xiaoya_li, 
jingrong_feng, yuxian_meng, qinghong_han}@shannonai.com + +wufei@cs.zju.edu.cn, jiwei_li@shannonai.com + +# Abstract + +The task of named entity recognition (NER) is normally divided into nested NER and flat NER depending on whether named entities are nested or not. Models are usually separately developed for the two tasks, since sequence labeling models are only able to assign a single label to a particular token, which is unsuitable for nested NER where a token may be assigned several labels. + +In this paper, we propose a unified framework that is capable of handling both flat and nested NER tasks. Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a machine reading comprehension (MRC) task. For example, extracting entities with the PER(Person) label is formalized as extracting answer spans to the question "which person is mentioned in the text". This formulation naturally tackles the entity overlapping issue in nested NER: the extraction of two overlapping entities with different categories requires answering two independent questions. Additionally, since the query encodes informative prior knowledge, this strategy facilitates the process of entity extraction, leading to better performance not only for nested NER, but also for flat NER. + +We conduct experiments on both nested and flat NER datasets. Experimental results demonstrate the effectiveness of the proposed formulation. We are able to achieve a substantial performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively on ACE04, ACE05, GENIA and KBP17, as well as flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA and Chinese OntoNotes 4.0. The code and datasets can be found at https://github.com/ShannonAI/mrc-for-flat-nested-ner. + +Alpha B2 proteins bound the PEBP2 site within the mouse GM-CSF promoter.
+ +![](images/9d502c93dff25ef586d517d3fb9e483571719922bf8f909a347b8fdd496874c4.jpg) + +Last night, at the Chinese embassy in France, there was a holiday atmosphere. + +![](images/b97eb55335e0e01f52663325539922f2cac245930e534d8f745b44387070d0c3.jpg) +Figure 1: Examples for nested entities from GENIA and ACE04 corpora. + +# 1 Introduction + +Named Entity Recognition (NER) refers to the task of detecting the span and the semantic category of entities from a chunk of text. The task can be further divided into two sub-categories, nested NER and flat NER, depending on whether entities are nested or not. Nested NER refers to a phenomenon that the spans of entities (mentions) are nested, as shown in Figure 1. Entity overlapping is a fairly common phenomenon in natural languages. + +The task of flat NER is commonly formalized as a sequence labeling task: a sequence labeling model (Chiu and Nichols, 2016; Ma and Hovy, 2016; Devlin et al., 2018) is trained to assign a single tagging class to each unit within a sequence of tokens. This formulation is unfortunately incapable of handling overlapping entities in nested NER (Huang et al., 2015; Chiu and Nichols, 2015), where multiple categories need to be assigned to a single token if the token participates in multiple entities. Many attempts have been made to reconcile sequence labeling models with nested NER (Alex et al., 2007; Byrne, 2007; Finkel and Manning, 2009; Lu and Roth, 2015; Katiyar and Cardie, 2018), mostly based on the pipelined systems. However, pipelined systems suffer from the disadvantages of error propagation, long running time and the intensiveness in developing hand-crafted features, etc. + +Inspired by the current trend of formalizing + +NLP problems as question answering tasks (Levy et al., 2017; McCann et al., 2018; Li et al., 2019), we propose a new framework that is capable of handling both flat and nested NER. 
Instead of treating the task of NER as a sequence labeling problem, we propose to formulate it as a SQuAD-style (Rajpurkar et al., 2016, 2018) machine reading comprehension (MRC) task. Each entity type is characterized by a natural language query, and entities are extracted by answering these queries given the contexts. For example, the task of assigning the PER(Person) label to "[Washington] was born into slavery on the farm of James Burroughs" is formalized as answering the question "which person is mentioned in the text?" This strategy naturally tackles the entity overlapping issue in nested NER: the extraction of two entities with different categories that overlap requires answering two independent questions. + +The MRC formulation also comes with another key advantage over the sequence labeling formulation. For the latter, golden NER categories are merely class indexes and lack semantic prior information about entity categories. For example, the ORG(ORGANIZATION) class is treated as a one-hot vector in sequence labeling training. This lack of clarity on what to extract leads to inferior performance. On the contrary, for the MRC formulation, the query encodes significant prior information about the entity category to extract. For example, the query "find an organization such as company, agency and institution in the context" encourages the model to link the word "organization" in the query to organization entities in the context. Additionally, by encoding comprehensive descriptions (e.g., "company, agency and institution") of tagging categories (e.g., ORG), the model has the potential to disambiguate similar tagging classes. + +We conduct experiments on both nested and flat NER datasets to show the generality of our approach. Experimental results demonstrate its effectiveness.
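The reformulation described above can be sketched in a few lines. Below is a minimal Python sketch assuming token-level spans; the names `QUERIES` and `build_examples` are illustrative, not from the paper (the PER query is quoted from the paper, the ORG query paraphrases the example above):

```python
# One natural language query per entity type; overlapping entities of
# different types become answers to independent questions.
QUERIES = {
    "PER": "which person is mentioned in the text?",
    "ORG": "find an organization such as company, agency and institution in the context",
}

def build_examples(context, entities):
    """Turn tagged NER data into (question, answer_spans, context) triples.

    entities: list of (start, end, label) token spans.
    One triple is produced per label type, so a token participating in
    two entities of different types is recovered by two questions.
    """
    triples = []
    for label, question in QUERIES.items():
        answers = [(s, e) for (s, e, l) in entities if l == label]
        triples.append((question, answers, context))
    return triples

tokens = "Washington was born into slavery on the farm of James Burroughs".split()
# gold token spans: [Washington] and [James Burroughs] are both PER
triples = build_examples(tokens, [(0, 0, "PER"), (9, 10, "PER")])
```

Note how a single PER question yields multiple answer spans, which is exactly why the span-selection model later in the paper must allow more than one answer per query.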
We are able to achieve a substantial performance boost over current SOTA models on nested NER datasets, i.e., +1.28, +2.55, +5.44, +6.37, respectively on ACE04, ACE05, GENIA and KBP17, as well as flat NER datasets, i.e., +0.24, +1.95, +0.21, +1.49 respectively on English CoNLL 2003, English OntoNotes 5.0, Chinese MSRA, Chinese OntoNotes 4.0. We hope that our work will inspire the introduction of + +new paradigms for the entity recognition task. + +# 2 Related Work + +# 2.1 Named Entity Recognition (NER) + +Traditional sequence labeling models use CRFs (Lafferty et al., 2001; Sutton et al., 2007) as a backbone for NER. The first work using neural models for NER goes back to 2003, when Hammerton (2003) attempted to solve the problem using unidirectional LSTMs. Collobert et al. (2011) presented a CNN-CRF structure, augmented with character embeddings by Santos and Guimaraes (2015). Lample et al. (2016) explored neural structures for NER, in which the bidirectional LSTMs are combined with CRFs with features based on character-based word representations and unsupervised word representations. Ma and Hovy (2016) and Chiu and Nichols (2016) used a character CNN to extract features from characters. Recent large-scale language model pretraining methods such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018a) further enhanced the performance of NER, yielding state-of-the-art performances.
The other is the outside-in model, in which the first CRF identifies outermost entities, and then successive CRFs would identify increasingly nested entities. Finkel and Manning (2009) built a model to extract nested entity mentions based on parse trees. They made the assumption that one mention is fully contained by the other when they overlap. Lu and Roth (2015) proposed to use mention hyper-graphs for recognizing overlapping mentions. Xu et al. (2017) utilized a local classifier that runs on every possible span to detect overlapping mentions and Katiyar and Cardie (2018) used neural models to learn the hyper-graph representations for nested entities. Ju et al. (2018) dynamically stacked flat NER layers in a hierarchical manner. Lin et al. + +(2019a) proposed the Anchor-Region Networks (ARNs) architecture by modeling and leveraging the head-driven phrase structures of nested entity mentions. Luan et al. (2019) built a span enumeration approach by selecting the most confident entity spans and linking these nodes with confidence-weighted relation types and coreferences. Other works (Muis and Lu, 2017; Sohrab and Miwa, 2018; Zheng et al., 2019) also proposed various methods to tackle the nested NER problem. + +Recently, nested NER models have been enriched with pre-trained contextual embeddings such as BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018b). Fisher and Vlachos (2019) introduced a BERT-based model that first merges tokens and/or entities into entities, and then assigns labels to these entities. Shibuya and Hovy (2019) provided an inference model that extracts entities iteratively from the outermost ones to inner ones. Straková et al. (2019) viewed nested NER as a sequence-to-sequence generation problem, in which the input sequence is a list of tokens and the target sequence is a list of labels.
+ +# 2.3 Machine Reading Comprehension (MRC) + +MRC models (Seo et al., 2016; Wang et al., 2016; Wang and Jiang, 2016; Xiong et al., 2016, 2017; Wang et al., 2016; Shen et al., 2017; Chen et al., 2017) extract answer spans from a passage through a given question. The task can be formalized as two multi-class classification tasks, i.e., predicting the starting and ending positions of the answer spans. + +Over the past one or two years, there has been a trend of transforming NLP tasks to MRC question answering. For example, Levy et al. (2017) transformed the task of relation extraction to a QA task: each relation type $R(x,y)$ can be parameterized as a question $q(x)$ whose answer is $y$ . For example, the relation EDUCATED-AT can be mapped to "Where did x study?". Given a question $q(x)$ , if a non-null answer $y$ can be extracted from a sentence, it means the relation label for the current sentence is $R$ . McCann et al. (2018) transformed NLP tasks such as summarization or sentiment analysis into question answering. For example, the task of summarization can be formalized as answering the question "What is the summary?" Our work is significantly inspired by Li et al. (2019), which formalized the task + +of entity-relation extraction as a multi-turn question answering task. Different from this work, Li et al. (2019) focused on relation extraction rather than NER. Additionally, Li et al. (2019) utilized a template-based procedure for constructing queries to extract semantic relations between entities and their queries lack diversity. In this paper, more factual knowledge such as synonyms and examples are incorporated into queries, and we present an in-depth analysis of the impact of strategies of building queries. 
+ +# 3 NER as MRC + +# 3.1 Task Formalization + +Given an input sequence $X = \{x_{1}, x_{2}, \dots, x_{n}\}$ , where $n$ denotes the length of the sequence, we need to find every entity in $X$ , and then assign a label $y \in Y$ to it, where $Y$ is a predefined list of all possible tag types (e.g., PER, LOC, etc). + +Dataset Construction First, we need to transform the tagging-style annotated NER dataset to a set of (QUESTION, ANSWER, CONTEXT) triples. Each tag type $y \in Y$ is associated with a natural language question $q_{y} = \{q_{1}, q_{2}, \ldots, q_{m}\}$ , where $m$ denotes the length of the generated query. An annotated entity $x_{\text{start, end}} = \{x_{\text{start}}, x_{\text{start+1}}, \dots, x_{\text{end-1}}, x_{\text{end}}\}$ is a substring of $X$ satisfying start $\leq$ end. Each entity is associated with a golden label $y \in Y$ . By generating a natural language question $q_{y}$ based on the label $y$ , we can obtain the triple $(q_{y}, x_{\text{start, end}}, X)$ , which is exactly the (QUESTION, ANSWER, CONTEXT) triple that we need. Note that we use the subscript "start, end" to denote the continuous tokens from index 'start' to 'end' in a sequence. + +# 3.2 Query Generation + +The question generation procedure is important since queries encode prior knowledge about labels and have a significant influence on the final results. Different ways have been proposed for question generation, e.g., Li et al. (2019) utilized a template-based procedure for constructing queries to extract semantic relations between entities. In this paper, we take annotation guideline notes as references to construct queries. Annotation guideline notes are the guidelines provided to the annotators of the dataset by the dataset builder. They are descriptions of tag categories, which are written as generically and precisely as possible so that
| Entity | Natural Language Question |
| --- | --- |
| Location | Find locations in the text, including non-geographical locations, mountain ranges and bodies of water. |
| Facility | Find facilities in the text, including buildings, airports, highways and bridges. |
| Organization | Find organizations in the text, including companies, agencies and institutions. |
+ +Table 1: Examples for transforming different entity categories to question queries. + +human annotators can annotate the concepts or mentions in any text without running into ambiguity. Examples are shown in Table 1. + +# 3.3 Model Details + +# 3.3.1 Model Backbone + +Given the question $q_{y}$ , we need to extract the text span $x_{\mathrm{start, end}}$ of type $y$ from $X$ under the MRC framework. We use BERT (Devlin et al., 2018) as the backbone. To be in line with BERT, the question $q_{y}$ and the passage $X$ are concatenated, forming the combined string $\{[\mathrm{CLS}], q_{1}, q_{2}, \dots, q_{m}, [\mathrm{SEP}], x_{1}, x_{2}, \dots, x_{n}\}$ , where [CLS] and [SEP] are special tokens. Then BERT receives the combined string and outputs a context representation matrix $E \in \mathbb{R}^{n \times d}$ , where $d$ is the vector dimension of the last layer of BERT and we simply drop the query representations. + +# 3.3.2 Span Selection + +There are two strategies for span selection in MRC: the first strategy (Seo et al., 2016; Wang et al., 2016) is to have two $n$ -class classifiers separately predict the start index and the end index, where $n$ denotes the length of the context. Since the softmax function is applied over all tokens in the context, this strategy has the disadvantage of only being able to output a single span given a query; the other strategy is to have two binary classifiers, one to predict whether each token is the start index or not, the other to predict whether each token is the end index or not. This strategy allows for outputting multiple start indexes and multiple end indexes for a given context and a specific query, and thus has the potential to extract all related entities according to $q_{y}$ . We adopt the second strategy and describe the details below.
+ +Start Index Prediction Given the representation matrix $E$ output from BERT, the model first predicts the probability of each token being a start index as follows: + +$$
P_{\text{start}} = \operatorname{softmax}_{\text{each row}}(E \cdot T_{\text{start}}) \in \mathbb{R}^{n \times 2} \tag{1}
$$ + +$T_{\mathrm{start}} \in \mathbb{R}^{d \times 2}$ is a weight matrix to learn. Each row of $P_{\mathrm{start}}$ represents the probability distribution of each index being the start position of an entity given the query. + +End Index Prediction The end index prediction procedure is exactly the same, except that we have another matrix $T_{\mathrm{end}}$ to obtain the probability matrix $P_{\mathrm{end}} \in \mathbb{R}^{n \times 2}$ . + +Start-End Matching In the context $X$ , there could be multiple entities of the same category. This means that multiple start indexes could be predicted from the start-index prediction model and multiple end indexes predicted from the end-index prediction model. The heuristic of matching the start index with its nearest end index does not work here since entities could overlap. We thus further need a method to match a predicted start index with its corresponding end index. + +Specifically, by applying argmax to each row of $P_{\mathrm{start}}$ and $P_{\mathrm{end}}$ , we will get the predicted indexes that might be the starting or ending positions, i.e., $\hat{I}_{\mathrm{start}}$ and $\hat{I}_{\mathrm{end}}$ : + +$$
\begin{array}{l} \hat{I}_{\text{start}} = \{i \mid \operatorname{argmax}(P_{\text{start}}^{(i)}) = 1, \ i = 1, \dots, n\} \\ \hat{I}_{\text{end}} = \{j \mid \operatorname{argmax}(P_{\text{end}}^{(j)}) = 1, \ j = 1, \dots, n\} \end{array} \tag{2}
$$ + +where the superscript $(i)$ denotes the $i$ -th row of a matrix.
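The span-selection head of Eqs. (1)-(2), together with the pair classifier of Eq. (3), can be sketched with NumPy in place of a trained model; here `E`, `T_start`, `T_end` and `m` are random stand-ins for BERT outputs and learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                      # context length, hidden size
E = rng.normal(size=(n, d))      # stand-in for BERT's last-layer output
T_start = rng.normal(size=(d, 2))
T_end = rng.normal(size=(d, 2))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Eq. (1): per-token 2-class start/end probabilities.
P_start = softmax(E @ T_start)   # shape (n, 2)
P_end = softmax(E @ T_end)       # shape (n, 2)

# Eq. (2): every token whose argmax is class 1 is a candidate, so
# multiple starts/ends (hence multiple entities) can be predicted.
I_start = [i for i in range(n) if P_start[i].argmax() == 1]
I_end = [j for j in range(n) if P_end[j].argmax() == 1]

# Eq. (3): a binary classifier scores each candidate (start, end) pair.
m = rng.normal(size=(2 * d,))
def match_prob(i, j):
    return 1.0 / (1.0 + np.exp(-m @ np.concatenate([E[i], E[j]])))

spans = [(i, j) for i in I_start for j in I_end
         if i <= j and match_prob(i, j) > 0.5]
```

Because start and end candidates are independent sets, two spans sharing a start token (a nested-NER case) can both survive the matching step, which a single-span softmax head cannot produce.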
Given any start index $i_{\mathrm{start}} \in \hat{I}_{\mathrm{start}}$ and end index $j_{\mathrm{end}} \in \hat{I}_{\mathrm{end}}$ , a binary classification model is trained to predict the probability that they should be matched, given as follows: + +$$
P_{i_{\text{start}}, j_{\text{end}}} = \operatorname{sigmoid}(m \cdot \operatorname{concat}(E_{i_{\text{start}}}, E_{j_{\text{end}}})) \tag{3}
$$ + +where $m \in \mathbb{R}^{1 \times 2d}$ is a weight vector to learn. + +# 3.4 Train and Test + +At training time, $X$ is paired with two label sequences $Y_{\mathrm{start}}$ and $Y_{\mathrm{end}}$ of length $n$ representing the ground-truth label of each token $x_{i}$ being the start index or end index of any entity. We therefore have the following two losses for start and end index predictions: + +$$
\begin{array}{l} \mathcal{L}_{\text{start}} = \mathrm{CE}(P_{\text{start}}, Y_{\text{start}}) \\ \mathcal{L}_{\text{end}} = \mathrm{CE}(P_{\text{end}}, Y_{\text{end}}) \end{array} \tag{4}
$$ + +Let $Y_{\text{start, end}}$ denote the golden labels for whether each start index should be matched with each end index. The start-end index matching loss is given as follows: + +$$
\mathcal{L}_{\text{span}} = \mathrm{CE}(P_{\text{start, end}}, Y_{\text{start, end}}) \tag{5}
$$ + +The overall training objective to be minimized is as follows: + +$$
\mathcal{L} = \alpha \mathcal{L}_{\text{start}} + \beta \mathcal{L}_{\text{end}} + \gamma \mathcal{L}_{\text{span}} \tag{6}
$$ + +$\alpha, \beta, \gamma \in [0,1]$ are hyper-parameters to control the contributions towards the overall training objective.
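The combined objective of Eqs. (4)-(6) can be sketched as follows; the toy probabilities stand in for model outputs, and all helper names are ours, not the paper's:

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the gold class per position."""
    probs = probs.reshape(-1, probs.shape[-1])
    labels = labels.reshape(-1)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

n = 4
P_start = np.full((n, 2), 0.5)     # (n, 2) start probabilities, Eq. (1)
P_end = np.full((n, 2), 0.5)       # (n, 2) end probabilities
Y_start = np.array([1, 0, 0, 0])   # gold start indicators
Y_end = np.array([0, 0, 1, 0])     # gold end indicators
P_match = np.array([0.9])          # sigmoid scores for candidate pairs, Eq. (3)
Y_match = np.array([1])            # gold pair labels

# Eqs. (4)-(5): cross-entropy on start/end prediction and pair matching.
L_start = cross_entropy(P_start, Y_start)
L_end = cross_entropy(P_end, Y_end)
L_span = -np.mean(Y_match * np.log(P_match)
                  + (1 - Y_match) * np.log(1 - P_match))

# Eq. (6): weighted sum with alpha, beta, gamma in [0, 1].
alpha = beta = gamma = 1.0
L = alpha * L_start + beta * L_end + gamma * L_span
```

With uniform 0.5 probabilities, `L_start` and `L_end` both equal log 2, so the example makes it easy to check the weighting by hand.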
The three losses are jointly trained in an end-to-end fashion, with parameters shared at the BERT layer. At test time, start and end indexes are first separately selected based on $\hat{I}_{\mathrm{start}}$ and $\hat{I}_{\mathrm{end}}$ . Then the index matching model is used to align the extracted start indexes with end indexes, leading to the final extracted answers. + +# 4 Experiments + +# 4.1 Experiments on Nested NER + +# 4.1.1 Datasets + +For nested NER, experiments are conducted on the widely-used ACE 2004, ACE 2005, GENIA and KBP2017 datasets, which respectively contain $24\%$ , $22\%$ , $10\%$ and $19\%$ nested mentions. Hyperparameters are tuned on their corresponding development sets. For evaluation, we use span-level micro-averaged precision, recall and F1. + +ACE 2004 and ACE 2005 (Doddington et al., 2005; Christopher Walker and Maeda, 2006): The two datasets each contain 7 entity categories. For each entity type, there are annotations for both the entity mentions and mention heads. For fair comparison, we exactly follow the data preprocessing strategy in Katiyar and Cardie (2018) and Lin et al. (2019b) by keeping files from bn, nw and wl, and splitting these files into train, dev and test sets by 8:1:1, respectively. + +GENIA (Ohta et al., 2002) For the GENIA dataset, we use GENIAcorpus3.02p. We follow the protocols in Katiyar and Cardie (2018). + +KBP2017 We follow Katiyar and Cardie (2018) and evaluate our model on the 2017 English evaluation dataset (LDC2017D55). The training set consists of RichERE annotated datasets, which include LDC2015E29, LDC2015E68, LDC2016E31 and LDC2017E02. We follow the dataset split strategy in Lin et al. (2019b). + +# 4.1.2 Baselines + +We use the following models as baselines: + +- Hyper-Graph: Katiyar and Cardie (2018) proposes a hypergraph-based model based on LSTMs. +- Seg-Graph: Wang and Lu (2018) proposes a segmental hypergraph representation to model overlapping entity mentions. +- ARN: Lin et al.
(2019a) proposes Anchor-Region Networks by modeling and leveraging the head-driven phrase structures of entity mentions.
- KBP17-Best: Ji et al. (2017) gives an overview of the Entity Discovery task at the Knowledge Base Population (KBP) track at TAC 2017 and also reports the previous best results for the task of nested NER.
- Seq2Seq-BERT: Straková et al. (2019) views nested NER as a sequence-to-sequence problem. The input to the model is word tokens and the output sequence consists of labels.
- Path-BERT: Shibuya and Hovy (2019) treats the tag sequence as the second-best path within the span of their parent entity, based on BERT.
- Merge-BERT: Fisher and Vlachos (2019) proposes a merge-and-label method based on BERT.
- DYGIE: Luan et al. (2019) introduces a general framework that shares span representations using dynamically constructed span graphs.

# 4.1.3 Results

Table 2 shows experimental results on the nested NER datasets. We observe large performance boosts over previous state-of-the-art models, achieving F1 scores of $85.98\%$, $86.88\%$, $83.75\%$ and $80.97\%$ on the ACE04, ACE05, GENIA and KBP2017 datasets, which are $+1.28\%$, $+2.55\%$, $+5.44\%$ and $+6.37\%$ over previous SOTA performances, respectively.

# 4.2 Experiments on Flat NER

# 4.2.1 Datasets

For flat NER, experiments are conducted on two English datasets, CoNLL 2003 and OntoNotes 5.0, and two Chinese datasets, OntoNotes 4.0 and MSRA. Hyperparameters are tuned on the corresponding development sets. We report span-level
**English ACE 2004**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Hyper-Graph (Katiyar and Cardie, 2018) | 73.6 | 71.8 | 72.7 |
| Seg-Graph (Wang and Lu, 2018) | 78.0 | 72.4 | 75.1 |
| Seq2seq-BERT (Straková et al., 2019) | - | - | 84.40 |
| Path-BERT (Shibuya and Hovy, 2019) | 83.73 | 81.91 | 82.81 |
| DYGIE (Luan et al., 2019) | - | - | 84.7 |
| BERT-MRC | 85.05 | 86.32 | 85.98 (+1.28) |

**English ACE 2005**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Hyper-Graph (Katiyar and Cardie, 2018) | 70.6 | 70.4 | 70.5 |
| Seg-Graph (Wang and Lu, 2018) | 76.8 | 72.3 | 74.5 |
| ARN (Lin et al., 2019a) | 76.2 | 73.6 | 74.9 |
| Path-BERT (Shibuya and Hovy, 2019) | 82.98 | 82.42 | 82.70 |
| Merge-BERT (Fisher and Vlachos, 2019) | 82.7 | 82.1 | 82.4 |
| DYGIE (Luan et al., 2019) | - | - | 82.9 |
| Seq2seq-BERT (Straková et al., 2019) | - | - | 84.33 |
| BERT-MRC | 87.16 | 86.59 | 86.88 (+2.55) |

**English GENIA**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Hyper-Graph (Katiyar and Cardie, 2018) | 77.7 | 71.8 | 74.6 |
| ARN (Lin et al., 2019a) | 75.8 | 73.9 | 74.8 |
| Path-BERT (Shibuya and Hovy, 2019) | 78.07 | 76.45 | 77.25 |
| DYGIE (Luan et al., 2019) | - | - | 76.2 |
| Seq2seq-BERT (Straková et al., 2019) | - | - | 78.31 |
| BERT-MRC | 85.18 | 81.12 | 83.75 (+5.44) |

**English KBP 2017**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| KBP17-Best (Ji et al., 2017) | 76.2 | 73.0 | 72.8 |
| ARN (Lin et al., 2019a) | 77.7 | 71.8 | 74.6 |
| BERT-MRC | 82.33 | 77.61 | 80.97 (+6.37) |
micro-averaged precision, recall and F1 scores for evaluation.

CoNLL2003 (Sang and Meulder, 2003) is an English dataset with four types of named entities: Location, Organization, Person and Miscellaneous. We follow the data processing protocols of Ma and Hovy (2016).

OntoNotes 5.0 (Pradhan et al., 2013) is an English dataset consisting of text from a wide variety of sources. The dataset includes 18 named entity types, consisting of 11 entity types (Person, Organization, etc.) and 7 value types (Date, Percent, etc.).

MSRA (Levow, 2006) is a Chinese benchmark dataset. Data in MSRA is collected from the news domain and was used in a shared task at the SIGHAN Bakeoff 2006. There are three types of named entities.

OntoNotes 4.0 (Pradhan et al., 2011) is a Chinese dataset consisting of text from the news domain. OntoNotes 4.0 annotates 18 named entity types. In this paper, we use the same data split as Wu et al. (2019).

Table 2: Results for nested NER tasks.
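Span-level micro-averaged evaluation counts a predicted entity as correct only when both boundaries and the entity type match a gold span exactly. A small illustrative implementation (our own sketch, not the paper's evaluation script):

```python
def micro_prf(pred_spans, gold_spans):
    """Span-level micro-averaged precision, recall and F1.

    pred_spans and gold_spans are lists (one element per sentence) of
    sets of (start, end, entity_type) tuples.
    """
    tp = fp = fn = 0
    for pred, gold in zip(pred_spans, gold_spans):
        tp += len(pred & gold)   # exact boundary + type matches
        fp += len(pred - gold)   # predicted but not in gold
        fn += len(gold - pred)   # gold spans that were missed
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

Micro-averaging pools true/false positives across all sentences and entity types before computing the ratios, so frequent types dominate the score.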
**English CoNLL 2003**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| BiLSTM-CRF (Ma and Hovy, 2016) | - | - | 91.03 |
| ELMo (Peters et al., 2018b) | - | - | 92.22 |
| CVT (Clark et al., 2018) | - | - | 92.6 |
| BERT-Tagger (Devlin et al., 2018) | - | - | 92.8 |
| BERT-MRC | 92.33 | 94.61 | 93.04 (+0.24) |

**English OntoNotes 5.0**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| BiLSTM-CRF (Ma and Hovy, 2016) | 86.04 | 86.53 | 86.28 |
| Strubell et al. (2017) | - | - | 86.84 |
| CVT (Clark et al., 2018) | - | - | 88.8 |
| BERT-Tagger (Devlin et al., 2018) | 90.01 | 88.35 | 89.16 |
| BERT-MRC | 92.98 | 89.95 | 91.11 (+1.95) |

**Chinese MSRA**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Lattice-LSTM (Zhang and Yang, 2018) | 93.57 | 92.79 | 93.18 |
| BERT-Tagger (Devlin et al., 2018) | 94.97 | 94.62 | 94.80 |
| Glyce-BERT (Wu et al., 2019) | 95.57 | 95.51 | 95.54 |
| BERT-MRC | 96.18 | 95.12 | 95.75 (+0.21) |

**Chinese OntoNotes 4.0**

| Model | Precision | Recall | F1 |
| --- | --- | --- | --- |
| Lattice-LSTM (Zhang and Yang, 2018) | 76.35 | 71.56 | 73.88 |
| BERT-Tagger (Devlin et al., 2018) | 78.01 | 80.35 | 79.16 |
| Glyce-BERT (Wu et al., 2019) | 81.87 | 81.40 | 81.63 |
| BERT-MRC | 82.98 | 81.25 | 82.11 (+0.48) |
Table 3: Results for flat NER tasks.

# 4.2.2 Baselines

For the English datasets, we use the following models as baselines:

- BiLSTM-CRF from Ma and Hovy (2016).
- ELMo tagging model from Peters et al. (2018b).
- CVT from Clark et al. (2018), which uses Cross-View Training (CVT) to improve the representations of a Bi-LSTM encoder.
- BERT-Tagger from Devlin et al. (2018), which treats NER as a tagging task.

For the Chinese datasets, we use the following models as baselines:

- Lattice-LSTM: Zhang and Yang (2018) constructs a word-character lattice.
- BERT-Tagger: Devlin et al. (2018) treats NER as a tagging task.
- Glyce-BERT: the current SOTA model for Chinese NER, developed by Wu et al. (2019), which combines glyph information with BERT pretraining.

# 4.2.3 Results and Discussions

Table 3 presents comparisons between the proposed model and the baseline models. For English CoNLL 2003, our model outperforms the fine-tuned BERT tagging model by $+0.24\%$ in terms of F1, while for English OntoNotes 5.0, the proposed model achieves a larger gain of $+1.95\%$. The reason a greater performance boost is observed on OntoNotes is that OntoNotes contains more entity types than CoNLL03 (18 vs. 4), and some entity categories face a severe data sparsity problem. Since the query encodes significant prior knowledge about the entity type to extract, the MRC formulation is more immune to the tag sparsity issue, leading to larger improvements on OntoNotes. The proposed method also achieves new state-of-the-art results on the Chinese datasets. For Chinese MSRA, the proposed method outperforms the fine-tuned BERT tagging model by $+0.95\%$ in terms of F1. We also improve the F1 from $79.16\%$ to $82.11\%$ on Chinese OntoNotes 4.0.

# 5 Ablation studies

# 5.1 Improvement from MRC or from BERT

For flat NER, it is not immediately clear which component is responsible for the improvement: the MRC formulation or BERT (Devlin et al., 2018). On one hand, the MRC formulation facilitates the entity extraction process by encoding prior knowledge in the query; on the other hand, the good performance might also come from the large-scale pretraining in BERT.

To separate the influence of large-scale BERT pretraining, we compare the LSTM-CRF tagging model (Strubell et al., 2017) with MRC-based models such as QAnet (Yu et al., 2018) and BiDAF (Seo et al., 2017), which do not rely on large-scale pretraining. Results on English OntoNotes 5.0 are shown in Table 4.

| Model | F1 |
| --- | --- |
| LSTM tagger (Strubell et al., 2017) | 86.84 |
| BiDAF (Seo et al., 2017) | 87.39 (+0.55) |
| QAnet (Yu et al., 2018) | 87.98 (+1.14) |
| BERT-Tagger | 89.16 |
| BERT-MRC | 91.11 (+1.95) |

Table 4: Results of different MRC models on English OntoNotes 5.0.

As can be seen, though underperforming BERT-Tagger, the MRC-based approaches QAnet and BiDAF still significantly outperform the tagging models based on LSTM+CRF. This validates the importance of the MRC formulation. The MRC formulation's benefits are also verified when comparing BERT-tagger
**English OntoNotes 5.0**

| Model | F1 |
| --- | --- |
| BERT-Tagger | 89.16 |
| Position index of labels | 88.29 (-0.87) |
| Keywords | 89.74 (+0.58) |
| Wikipedia | 89.66 (+0.50) |
| Rule-based template filling | 89.30 (+0.14) |
| Synonyms | 89.92 (+0.76) |
| Keywords + Synonyms | 90.23 (+1.07) |
| Annotation guideline notes | 91.11 (+1.95) |
Table 5: Results of different types of queries.

with BERT-MRC: the latter outperforms the former by $+1.95\%$.

We plot the attention matrices output from the BiDAF model between the query and the context sentence in Figure 2. As can be seen, the semantic similarity between tagging classes and the contexts is captured in the attention matrix. In the example, Flevoland matches geographical, cities and state.

![](images/eadc91b50c7ea0b86b1746aea605d7fbb0635cbededc7a35924941ee810fe8d7.jpg)
Figure 2: An example of attention matrices between the query and the input sentence.

# 5.2 How to Construct Queries

How queries are constructed has a significant influence on the final results. In this subsection, we explore different ways to construct queries and their influence, including:

- Position index of labels: a query is constructed using the index of a tag, i.e., "one", "two", "three".
- Keyword: a query is the keyword describing the tag, e.g., the question query for tag ORG is "organization".
- Rule-based template filling: generates questions using templates. The query for tag ORG is "which organization is mentioned in the text".
- Wikipedia: a query is constructed using its Wikipedia definition. The query for tag ORG is "an organization is an entity comprising multiple people, such as an institution or an association."
- Synonyms: words or phrases that mean exactly or nearly the same as the original keyword, extracted using the Oxford Dictionary. The query for tag ORG is "association".
- Keyword + Synonyms: the concatenation of a keyword and its synonyms.
- Annotation guideline notes: the method we use in this paper. The query for tag ORG is "find organizations including companies, agencies and institutions".

Table 5 shows the experimental results on English OntoNotes 5.0. BERT-MRC outperforms BERT-Tagger in all settings except Position Index of Labels. The model trained with the Annotation Guideline Notes achieves the highest F1 score. Explanations are as follows: for Position Index of Labels, queries are constructed using tag indexes and thus do not contain any meaningful information, leading to inferior performance; Wikipedia underperforms Annotation Guideline Notes because definitions from Wikipedia are relatively general and may not precisely describe the categories in a way tailored to the data annotations.

# 5.3 Zero-shot Evaluation on Unseen Labels

It would be interesting to test how well a model trained on one dataset transfers to another, which is referred to as zero-shot learning ability. We train models on CoNLL 2003 and test them on OntoNotes 5.0. OntoNotes 5.0 contains 18 entity types, 3 shared with CoNLL03 and 15 unseen in CoNLL03. Table 6 presents the results. As can be seen, BERT-tags does not have zero-shot learning ability, only obtaining an accuracy of $31.87\%$. This is in line with our expectation, since it cannot predict labels unseen in the training set. The question-answering formalization in the MRC framework, which predicts the answer to the given query, comes with more generalization capability and achieves acceptable results.

| Models | Train | Test | F1 |
| --- | --- | --- | --- |
| BERT-tags | OntoNotes 5.0 | OntoNotes 5.0 | 89.16 |
| BERT-MRC | OntoNotes 5.0 | OntoNotes 5.0 | 91.11 |
| BERT-tags | CoNLL03 | OntoNotes 5.0 | 31.87 |
| BERT-MRC | CoNLL03 | OntoNotes 5.0 | 72.34 |

Table 6: Zero-shot evaluation on OntoNotes 5.0. BERT-MRC achieves better zero-shot performance.

# 5.4 Size of Training Data

Since the natural language query encodes significant prior knowledge, we expect the proposed framework to work better with less training data.

![](images/1cbcb3d82d85ea83a34320a3566aef367ea3ccfea9321c85ba8300e8a0d7f0f9.jpg)
Figure 3: Effect of varying the percentage of training samples on Chinese OntoNotes 4.0. BERT-MRC achieves the same F1 score as BERT-Tagger with fewer training samples.
Figure 3 verifies this point: on the Chinese OntoNotes 4.0 training set, the query-based BERT-MRC approach achieves performance comparable to BERT-tags even with half the amount of training data.

# 6 Conclusion

In this paper, we reformulate the NER task as an MRC question answering task. This formalization comes with two key advantages: (1) it is capable of addressing overlapping or nested entities; (2) the query encodes significant prior knowledge about the entity category to extract. The proposed method obtains SOTA results on both nested and flat NER datasets, which indicates its effectiveness. In the future, we would like to explore variants of the model architecture.

# Acknowledgement

We thank all anonymous reviewers, as well as Jiawei Wu and Wei Wu, for their comments and suggestions. The work is supported by the National Natural Science Foundation of China (NSFC No. 61625107 and 61751209).

# References

Beatrice Alex, Barry Haddow, and Claire Grover. 2007. Recognising nested named entities in biomedical text. In Proceedings of the Workshop on BioNLP 2007: Biological, Translational, and Clinical Language Processing, pages 65-72. Association for Computational Linguistics.
Kate Byrne. 2007. Nested named entity recognition in historical archive text. In International Conference on Semantic Computing (ICSC 2007), pages 589-596. IEEE.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.
Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional LSTM-CNNs. arXiv preprint arXiv:1511.08308.
Jason PC Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.
Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia 57.
Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. 2018. Semi-supervised sequence modeling with cross-view training. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 1914-1925.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
George R Doddington, Alexis Mitchell, Mark A Przybocki, Stephanie M Strassel, Lance A Ramshaw, and Ralph M Weischedel. 2005. The automatic content extraction (ACE) program - tasks, data, and evaluation. In LREC, 2:1.
Jenny Rose Finkel and Christopher D Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, pages 141-150. Association for Computational Linguistics.
Joseph Fisher and Andreas Vlachos. 2019. Merge and label: A novel neural network architecture for nested NER. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5840-5850.
James Hammerton. 2003. Named entity recognition with long short-term memory. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 172-175. Association for Computational Linguistics.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Heng Ji, Xiaoman Pan, Boliang Zhang, Joel Nothman, James Mayfield, Paul McNamee, and Cash Costello. 2017. Overview of TAC-KBP2017 13 languages entity discovery and linking.
In Proceedings of the 2017 Text Analysis Conference, TAC 2017, Gaithersburg, Maryland, USA, November 13-14, 2017.
Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1446-1459, New Orleans, Louisiana. Association for Computational Linguistics.
Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861-871.
J-D Kim, Tomoko Ohta, Yuka Tateisi, and Jun'ichi Tsujii. 2003. GENIA corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180-i182.
John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.
Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108-117, Sydney, Australia. Association for Computational Linguistics.
Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. arXiv preprint arXiv:1706.04115.
Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering.
In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 1340-1350.
Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019a. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5182-5192.
Hongyu Lin, Yaojie Lu, Xianpei Han, and Le Sun. 2019b. Sequence-to-nuggets: Nested entity mention detection via anchor-region networks. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 5182-5192.
Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857-867.
Yi Luan, Dave Wadden, Luheng He, Amy Shah, Mari Ostendorf, and Hannaneh Hajishirzi. 2019. A general framework for information extraction using dynamic span graphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3036-3046.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. arXiv preprint arXiv:1603.01354.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
Aldrian Obaja Muis and Wei Lu. 2017. Labeling gaps between words: Recognizing overlapping mentions with mention separators.
In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2608-2618, Copenhagen, Denmark. Association for Computational Linguistics.
Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The GENIA corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, pages 82-86, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2018, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 1 (Long Papers), pages 2227-2237.
Sameer Pradhan, Mitchell P. Marcus, Martha Palmer, Lance A. Ramshaw, Ralph M. Weischedel, and Nianwen Xue, editors. 2011. Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, CoNLL 2011, Portland, Oregon, USA, June 23-24, 2011. ACL.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Björkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 143-152, Sofia, Bulgaria. Association for Computational Linguistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text.
arXiv preprint arXiv:1606.05250.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning, CoNLL 2003, Held in cooperation with HLT-NAACL 2003, Edmonton, Canada, May 31 - June 1, 2003, pages 142-147.
Cicero Nogueira dos Santos and Victor Guimaraes. 2015. Boosting named entity recognition with neural character embeddings. arXiv preprint arXiv:1505.05008.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention flow for machine comprehension. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603.
Yelong Shen, Po-Sen Huang, Jianfeng Gao, and Weizhu Chen. 2017. ReasoNet: Learning to stop reading in machine comprehension. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1047-1055. ACM.
Takashi Shibuya and Eduard H. Hovy. 2019. Nested named entity recognition via second-best sequence learning and decoding. CoRR, abs/1909.02250.
Mohammad Golam Sohrab and Makoto Miwa. 2018. Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843-2849, Brussels, Belgium. Association for Computational Linguistics.
Jana Straková, Milan Straka, and Jan Hajic. 2019. Neural architectures for nested NER through linearization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5326-5331, Florence, Italy. Association for Computational Linguistics.
Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017.
Fast and accurate entity recognition with iterated dilated convolutions. arXiv preprint arXiv:1702.02098.
Charles Sutton, Andrew McCallum, and Khashayar Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. Journal of Machine Learning Research, 8(Mar):693-723.
Bailin Wang and Wei Lu. 2018. Neural segmental hypergraphs for overlapping mention recognition. arXiv preprint arXiv:1810.01817.
Shuohang Wang and Jing Jiang. 2016. Machine comprehension using match-LSTM and answer pointer. arXiv preprint arXiv:1608.07905.
Zhiguo Wang, Haitao Mi, Wael Hamza, and Radu Florian. 2016. Multi-perspective context matching for machine comprehension. arXiv preprint arXiv:1612.04211.
Wei Wu, Yuxian Meng, Qinghong Han, Muyu Li, Xiaoya Li, Jie Mei, Ping Nie, Xiaofei Sun, and Jiwei Li. 2019. Glyce: Glyph-vectors for Chinese character representations. arXiv preprint arXiv:1901.10125.
Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604.
Caiming Xiong, Victor Zhong, and Richard Socher. 2017. DCN+: Mixed objective and deep residual coattention for question answering. arXiv preprint arXiv:1711.00106.
Mingbin Xu, Hui Jiang, and Seditawut Watcharawittayakul. 2017. A local detection approach for named entity recognition and mention detection. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1237-1247.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings.
Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. arXiv preprint arXiv:1805.02023.
Changmeng Zheng, Yi Cai, Jingyun Xu, Ho-fung Leung, and Guandong Xu. 2019. A boundary-aware neural model for nested named entity recognition. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 357-366, Hong Kong, China. Association for Computational Linguistics.
# (Re)construing Meaning in NLP

# Sean Trott

University of California, San Diego

sttrott@ucsd.edu

# Nancy Chang

Google

ncchang@google.com

# Tiago Timponi Torrent

Federal University of Juiz de Fora

tiagotorrent@ufjf.edu.br

# Nathan Schneider

Georgetown University

nathan.schneider@georgetown.edu

# Abstract

Human speakers have an extensive toolkit of ways to express themselves. In this paper, we engage with an idea largely absent from discussions of meaning in natural language understanding—namely, that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed.
We first define this phenomenon more precisely, drawing on considerable prior work in theoretical cognitive semantics and psycholinguistics. We then survey some dimensions of construed meaning and show how insights from construal could inform theoretical and practical work in NLP.

# 1 Introduction

Natural language is a versatile tool for allowing humans to express all manner of communicative intents, from simple descriptions of the entities and situations in their direct experience to elaborate rhetorical flights of fancy. Many NLP applications, such as information extraction, question answering, summarization, and dialogue systems, have restricted their scope to what one might call objective information content—relatively uncontroversial facts that systems can infer from an utterance, store in a database and reason about.

While it is tempting to equate such information with the meaning of an utterance, a large body of literature in linguistics and psycholinguistics argues that an utterance conveys much more than a simple set of facts: it carries with it a halo of intimations arising from the speaker's choices, including considerations of perspective, emphasis, and framing. That is, linguistic choices subtly color meaning; far from merely conveying objective facts, they reflect how speakers conceptualize meaning and affect listeners' interpretations in predictable ways.

Take, for example, this metaphor-rich portrayal of a newborn as a tyrant over her parental subjects:

(1) Nora's arrival brought a regime change. Life under her adorable tyranny was filled with squawking, swaddling and ceaseless sleep-input-output cycles. We were relieved when she relaxed her tiny iron grip.
This report of new parenthood describes a major life change along with everyday caregiver routines, but its emphasis is on the parents' experience of being suppressed (under) and controlled (grip) by a creature who is cast, variously, as a tyrant (regime), a bird (squawk), and a relentless machine (sleep-input-output cycles, iron grip)—albeit a (subjectively) adorable one.

The power of linguistic choices to shape understanding is also evident in more mundane (and well-studied) examples:

(2) a. Chuck bought a car from Jerry.
Jerry sold a car to Chuck.
Chuck paid Jerry for the car.
b. I work at Microsoft.
I work for Microsoft.
c. The statue stands in the plaza.
The statue is standing in the plaza.

Each set includes sentences that convey roughly the same facts—i.e. they could describe the same scenario—but nonetheless differ in various respects. The familiar framing differences between buy/sell/pay (2a) focus attention on different participants and subevents in a commercial transaction. (2b) involves a subtler difference in emphasis, where the choice of at highlights the location of the work, while for evokes how that work benefits the employer. Grammatical marking can also shift event connotations, as illustrated by the stative vs. temporary contrast in (2c).

Such distinctions illustrate the general phenomenon of construal, which we claim has been neglected in NLP. We believe that a proper recognition of construal would provide a unified framework for addressing a wide range of issues involving meaning and linguistic variation, opening the way to systems that more closely approximate (actually) natural language.

This paper surveys the theoretical and empirical landscape related to construal phenomena and makes the case for its relevance to NLP. After clarifying the terms adopted here ( $\S 2$ ), we lay out a few key dimensions of construed meaning ( $\S 3$ ) and then elaborate on some mechanisms of construal ( $\S 4$ ).
A trio of case studies illustrates how different types of construal can challenge NLP systems ( $\S 5$ ). We end with some conclusions and suggestions for how to begin addressing these challenges ( $\S 6$ ).
+
+# 2 Meaning and construal
+
+Our view of construal and its close companion meaning is rooted in both frame-based and cognitive semantic traditions. The notion that words and other linguistic units evoke background scenes along with specific perspectives on those scenes is captured by Fillmore's (1977) slogan, MEANINGS ARE RELATIVIZED TO SCENES. This idea has deeper consequences than merely assigning different semantic roles to examples like (2a). As Langacker (1993, p. 460) observes, "any given situation can be viewed in multiple if not infinitely many ways. Starting from the same basic conceptual content... we can form an endless variety of specific conceptions by making alternate choices in regard to the many dimensions of construal."
+
+This view of linguistic meaning—which we might call inherently multivalent—is more flexible than in many theoretical and computational treatments, particularly truth-conditional approaches that liken meanings to facts in a database. The visual domain offers a more informative analog: a photographic or artistic rendering of a scene can vary in vantage point, viewing distance, objects in sight or in focus, color and lighting choices, etc. (Langacker, 1993; Talmy, 1988). Context matters, too: a painting hanging on a preschool wall may be received differently if displayed in a museum. Just as there is no one objective, context-independent depiction of a scene, there are many valid ways to present an idea through language.
+
+We thus extend Fillmore's slogan to include all kinds of conceptual content (beyond scenes); the broader communicative context; and the effect of choices made as part of the construal process:
+
+# MEANINGS ARE RELATIVIZED TO CONTENT, CONTEXT AND CONSTRUAL.
+
+Below we elaborate on how each of these interrelated factors affects construed meaning.
+
+Conceptual content. We assume that linguistic units can evoke and combine all kinds of conceptual content, including open-ended world knowledge (entities, actions, events, relations, etc.) as well as more schematic structures often associated with grammar and function words. Crucially, concepts must also be amenable to certain kinds of transformation (e.g., shifts in perspective or granularity) as part of construal; see below.
+
+Communicative context. We take meaning to encompass scene-level entities and events, discourse-level information about the interlocutors and their communicative intents, and other phenomena straddling the (fuzzy) semantic-pragmatic boundary, related to attention (e.g., profiling and perspective) and conditions of usage falling under what Fillmore (1985) dubbed "U-Semantics" (in contrast to truth-oriented "T-Semantics").
+
+Contextual factors (e.g., the interlocutors' identity, beliefs, goals, conceptual repertoire, cultural backgrounds) can radically alter construed meaning. On this view, meaning is not arbitrarily subjective, or merely intersubjective; it is also constrained by all aspects of the communicative context.
+
+Construal. We define construal as a dynamic process of meaning construction, in which speakers and hearers encode and decode, respectively, some intended meaning in a given communicative context. To do so, they draw on their repertoire of linguistic and conceptual structures, composing and transforming them to build coherent interpretations consistent with the speaker's lexical, grammatical, and other expressive choices.
+
+We take construal to be fundamental to all language use, though how much construal and what kinds of construal vary across interpretations. In the simplest cases, the relevant components fit neatly together (à la compositional semantics).
But many (or even most) utterances involve a myriad of disparate structures—conceptual, linguistic, and contextual—that may need to be transformed, (re)categorized, or otherwise massaged to be integrated into a single coherent whole. + +This conceptual flexibility is not arbitrary: the space of combinatorial options is delimited by **construal operations** defined with respect to certain privileged **construal dimensions**. A number of dimensions and operations have been proposed, many motivated by general cognitive processes; we will review some of these in §3, and illustrate how they are engaged during language use in §4. + +This inclusive, flexible view of meaning has broad implications for a wide variety of linguistic phenomena, and many parallels in prior work—far too many to address exhaustively here. We restrict our current scope in several ways: (1) While some aspects of context will be mentioned below, we do not address many phenomena related to pragmatic inference (e.g. politeness, indirect requests). (2) Though many construal dimensions are relevant cross-linguistically, we will not address typological patterns in the lexical, grammatical, and cultural conventions that influence construal. (3) We highlight construal phenomena that are psycholinguistically attested and/or relevant to NLP research. + +# 3 Dimensions of construed meaning + +Several (partial) taxonomies of construal dimensions have been proposed in the cognitive linguistics literature (Langacker, 1993; Talmy, 1988; Croft and Wood, 2000; Taylor, 1995; Casad, 1995); see Croft and Cruse (2004) for an overview. We will not attempt to reconcile their many differences in terminology and organization, but instead present selected dimensions most relevant for NLP. + +# 3.1 Perspective + +Languages have many ways of describing scenes from a specific PERSPECTIVE (or vantage point). 
The spatial domain provides clear examples: a cup might be described as being left or right of some other object, depending on whose perspective is adopted; or explicitly marked as being on my/your/her/Sue's left. Likewise, the same motion event can be described relative to differing deictic centers (e.g., the arrival in (1) can also be viewed as a departure from the hospital).
+
+Perspective can extend beyond the spatial domain. The use of past tense in (1) indicates the speaker's retrospective viewpoint. Differences in opinion, belief state or background have also been treated as perspective shifting.
+
+Talmy's (1988) taxonomy defines a broader version of PERSPECTIVE that includes distribution of attention. Descriptions of a static scene can adopt a dynamic perspective, evoking the experience of moving through the scene ("There is a house every now and then through the valley"); these descriptions can be even more explicit, as with fictive motion ("The road runs through the valley") (Talmy, 1996; Matlock, 2004b).
+
+Psycholinguistic evidence. Grammatical person can affect which perspective a comprehender adopts when reading about an event (Brunyé et al., 2009) and which actions they are most likely to remember (Ditman et al., 2010). Fictive motion can also influence the way comprehenders conceptualize a static scene (Matlock, 2004a,b).
+
+Relevant NLP research. Perspective is crucial for understanding spatial language, e.g. for robotics (§5.2) and other kinds of situated language. Work on grounding referents from natural language descriptions has incorporated visual perspective as another source of information about the intended referent (Devin and Alami, 2016; Ros et al., 2010; Trafton et al., 2005).
+
+# 3.2 Prominence
+
+PROMINENCE (or salience) refers to the relative attention focused on different elements in a scene (Langacker, 1993; Talmy, 1988).
Languages have various devices for highlighting, or profiling, some elements over others (or leaving them implicit). For example, verbs like those in (2a) differ in which elements in a larger scene are preferentially expressed. Similarly, many spatial and temporal adpositions involve an asymmetric profiling of one entity relative to another; thus "the painting is above the piano" and "the piano is below the painting" describe the same situation but differ in focus.
+
+Verbal and constructional alternations also manipulate prominence: The active/passive pair "Microsoft employs me" and "I am employed by Microsoft" differ in profiling the employer and speaker, respectively. Similarly, transitive "I rolled the ball" vs. intransitive "The ball rolled" differ in whether the ball-roller is even mentioned.
+
+Languages also differ systematically in how motion events are most idiomatically expressed, in particular in whether the main verb encodes (and foregrounds) the manner (English run) or path (Spanish entrar) of motion.
+
+Psycholinguistic evidence. A speaker's decisions about which features to encode in the main verb versus a satellite can influence which events comprehenders find most similar (Billman and Krych, 1998) and which features they tend to remember (Gennari et al., 2002).
+
+In other work, Fausey and Boroditsky (2010) found that descriptions of an accidental event using a transitive construction ("She had ignited the napkin") led participants to assign more blame to the actor involved, and even demand higher financial penalties, than descriptions using non-agentive constructions ("The napkin had ignited").
+
+In language production, there are a number of factors influencing which construction a speaker chooses (e.g., current items in discourse focus (Bresnan et al., 2007), lexical and syntactic priming (Pickering and Ferreira, 2008)).
+
+Relevant NLP research.
Recovering implicit information is widely studied in NLP, and deciding which information to express is key to NLG and summarization. We mention three examples exploring how choices of form lend prominence to certain facets of meaning in ways that strongly resonate with our claims about construal.
+
+Greene and Resnik (2009) show that syntactic framing—e.g. active (Prisoner murders guard) vs. passive (Guard is murdered)—is relevant to detecting speaker sentiment about violent events.
+
+Hwang et al. (2017) present an annotation scheme for capturing adpositional meaning construal (as in (2b)). Rather than disambiguate the adposition with a single label, they separately annotate an adposition's role with respect to a scene (e.g. employment) and the aspect of meaning brought into prominence by the adposition itself (e.g., benefactive for vs. locative at). This more flexibly accounts for meaning extensions and resolves some annotator difficulties.
+
+Rohde et al. (2018) studied the construction of discourse coherence by asking participants to insert a conjunction (and, or, but, so, because, before) where none was originally present, before an explicit discourse adverbial (e.g. in other words). They found that some contexts licensed multiple alternative conjunctions, each expressing a different coherence relation—i.e., distinct implicit relations can be inferred from the same passage. This speaks to the challenge of fully annotating discourse coherence relations and underscores the role of both linguistic and contextual cues in coherence.
+
+# 3.3 Resolution
+
+Concepts can be described at many levels of RESOLUTION—from highly detailed to more schematic. We include here both specificity (e.g., $pug < dog < animal < being$ ) and granularity (e.g., viewing a forest at the level of individual leaves vs. branches vs. trees). Lexical items and larger expressions can evoke and combine concepts at varying levels of detail ("The gymnast triumphantly landed upright" vs.
"A person did something").
+
+Psycholinguistic evidence. Resolution is related to basic-level categories (Rosch et al., 1976; Lakoff, 1987; Hajibayova, 2013), the most culturally and cognitively salient levels of a folk taxonomy. Speakers tend to use basic-level terms for reference (e.g., tree vs. entity/birch), and basic-level categories are more easily and quickly accessed by comprehenders (Mervis and Rosch, 1981; Rosch et al., 1976).
+
+Importantly, however, what counts as basic-level depends on the speaker's domain expertise (Tanaka and Taylor, 1991). Speakers may deviate from basic-level terms under certain circumstances, e.g., when a more specific term is needed for disambiguation (Graf et al., 2016). Conceptualization is thus a flexible process that varies across both individual cognizers (e.g., as a function of their world knowledge) and specific communicative contexts.
+
+Relevant NLP research. Resolution is already recognized as important for applications such as text summarization and dialogue generation (Louis and Nenkova, 2012; Li and Nenkova, 2015; Ko et al., 2019a; Li et al., 2016; Ko et al., 2019b), e.g., in improving human judgments of informativity and relevance (Ko et al., 2019b). Also relevant is work on knowledge representation in the form of inheritance-based ontologies and lexica (e.g., FrameNet (Fillmore and Baker, 2009), ConceptNet (Liu and Singh, 2004)).
+
+# 3.4 Configuration
+
+CONFIGURATION refers to internal-structural properties of entities, groups of entities, and events, indicating their schematic "shape" and "texture": multiplicity (or plexity), homogeneity, boundedness, part-whole relations, etc. (Langacker, 1993; Talmy, 2000). To borrow an example from Croft (2012), a visitor to New England can describe stunning autumn leaves or foliage.
Though both words indicate a multiplex perception, they exhibit a grammatical difference: the (plural) count noun leaves suggests articulated boundaries of multiple individuals, whereas the mass noun foliage suggests a more impressionistic, homogeneous rendering.
+
+This dimension includes many distinctions and phenomena related to aspect (Vendler, 1967; Comrie, 1976), including whether an event is seen as discrete (sneeze) or continuous (read); involves a change of state (leave vs. have); has a defined endpoint (read vs. read a book); etc. Lexical and grammatical markers of configuration properties interact in complex ways; see discussion of count/mass and aspectual coercion in $\S 4$ .
+
+Psycholinguistic evidence. Differences in grammatical aspect can modulate how events are conceptualized (Matlock, 2011). Stories written in imperfective aspect are remembered better; participants are also more likely to believe that the events in these stories are still happening (Magliano and Schleich, 2000) and build richer mental simulations of these events (Bergen and Wheeler, 2010). In turn, these differences in conceptualization have downstream consequences, ranging from judgments about an event's complexity (Wampler and Wittenberg, 2019) to predictions about the consequences of a political candidate's behavior on reelection (Fausey and Matlock, 2011).
+
+The mass/count distinction has attested psychological implications, including differences in word recognition time (Gillon et al., 1999) (see Fieder et al. (2014) for a review).
+
+Relevant NLP research. Configurational properties are closely linked to well-studied challenges at the syntax-semantic interface, in particular nominal and aspectual coercion effects (§4).
Several approaches explicitly model coercion operations based on event structure representations (Moens and Steedman, 1988; Passonneau, 1988; Pulman, 1997; Chang et al., 1998), while others explore statistical learning of aspectual classes and features (Siegel and McKeown, 2000; Mathew and Katz, 2009; Friedrich and Palmer, 2014). Lexical resources have also been developed for aspectual annotation (Donatelli et al., 2018) and the count/mass distinction (Schiehlen and Spranger, 2006; Kiss et al., 2017).
+
+# 3.5 Metaphor
+
+The dimension of METAPHOR is broadly concerned with cross-domain comparison, in which speakers "conceptualize two distinct structures in relation to one another" (Langacker, 1993, p. 450). Metaphors have been analyzed as structured mappings that allow a target domain to be conceptualized in terms of a source domain (Lakoff and Johnson, 1980).
+
+Metaphors pervade language use, and exhibit highly systematic, extensible structure. For example, in English, events are often construed either as locations in space or as objects moving through space. Our experience of time is thus often described in terms of either motion toward future events ("we're approaching the end of the year"), or the future moving toward us ("the deadline is barreling towards us") (Boroditsky, 2000, 2001; Hendricks and Boroditsky, 2017; Núñez and Sweetser, 2006). Metaphor plays a role in our linguistic characterization of many other domains as well (Lakoff and Johnson, 1980).
+
+Psycholinguistic evidence. Different metaphors can shape a comprehender's representation about the same event or concept in radically different ways. Thibodeau and Boroditsky (2011) found that describing a city's crime problem as a beast or as a virus elicited markedly different suggestions about how best to address the problem, e.g., whether participants tended to endorse enforcement- or reform-based solutions.
Similar effects of metaphor on event conceptualization have been found across other domains, such as cancer (Hauser and Schwarz, 2015; Hendricks et al., 2018) and climate change (Flusberg et al., 2017) (see Thibodeau et al. (2017) for a thorough review).
+
+Relevant NLP research. Considerable NLP work has addressed the challenge of metaphor detection and understanding (Narayanan, 1999; Shutova et al., 2010, 2013; Shutova, 2015). This work has made use of both statistical, bottom-up approaches to language modeling (Gutiérrez et al., 2016; Shutova et al., 2013), as well as knowledge bases such as MetaNet (Dodge et al., 2015; Stickles et al., 2014; David and Dancygier, 2017).
+
+# 3.6 Summary
+
+The selective review of construal dimensions presented here is intended to be illustrative, not exhaustive or definitive. Returning to the visual analogy, we can see these dimensions as primarily concerned with how (and what part of) a conceptual "scene" is perceived (PERSPECTIVE, PROMINENCE); the choice or categorization of which schematic structures are present (CONFIGURATION and METAPHOR); or both (RESOLUTION).
+
+We have omitted another high-level categorization dimension, SCHEMATIZATION, which includes concepts related to force dynamics, image schemas, and other experientially grounded schemas well discussed in the literature (Talmy, 2000). We have also not addressed pragmatic inference related to politeness (Brown and Levinson, 1987), indirect requests (Clark, 1979), and other aspects of communicative intent. Additionally, some phenomena are challenging to categorize within the dimensions listed here; a more complete analysis would include evidentiality (Chafe and Nichols, 1986), modality (Mortelmans, 2007), light verb constructions (Wittenberg and Levy, 2017; Wittenberg et al., 2014), and more. Nonetheless, we hope this partial taxonomy provides a helpful entry point to relevant prior work and a starting point for further alignment.
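+
+The taxonomy summarized above can be made concrete as a toy annotation record. The following Python sketch is our own illustration, not an established annotation scheme: the `Dimension` enum mirrors the dimensions of §3, and the hypothetical `Choice`/`Utterance` records encode the buy/sell pair from (2a), letting us ask along which dimensions two renderings of the same scene diverge.
+
+```python
+from dataclasses import dataclass
+from enum import Enum
+
+class Dimension(Enum):
+    """Construal dimensions surveyed in Section 3 (partial taxonomy)."""
+    PERSPECTIVE = "perspective"
+    PROMINENCE = "prominence"
+    RESOLUTION = "resolution"
+    CONFIGURATION = "configuration"
+    METAPHOR = "metaphor"
+
+@dataclass(frozen=True)
+class Choice:
+    """One construal choice made by a speaker (illustrative values only)."""
+    dimension: Dimension
+    value: str
+
+@dataclass(frozen=True)
+class Utterance:
+    """A surface form paired with the construal choices it reflects."""
+    text: str
+    choices: frozenset  # frozenset of Choice
+
+def differing_dimensions(a: Utterance, b: Utterance) -> set:
+    """Dimensions on which two renderings of the same scene diverge."""
+    return {c.dimension for c in a.choices ^ b.choices}
+
+# Example (2a): the same commercial transaction, profiling different participants.
+buy = Utterance("Chuck bought a car from Jerry.",
+                frozenset({Choice(Dimension.PROMINENCE, "profiled:buyer")}))
+sell = Utterance("Jerry sold a car to Chuck.",
+                 frozenset({Choice(Dimension.PROMINENCE, "profiled:seller")}))
+
+print(differing_dimensions(buy, sell))  # the two construals differ only in PROMINENCE
+```
+
+Comparing construal records rather than raw strings in this way suggests how an evaluation could credit two sentences for describing the same scene while still registering their alternate construals.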
+
+# 4 Construal in action
+
+How might construal work in practice? We have emphasized so far the flexibility afforded by the dimensions in §3. But we must also explain why some words and concepts make easier bedfellows than others. This section presents a thumbnail sketch of how the construal process copes with apparent mismatches, where it is the collective constraints of the input structures that guide the search for coherence.
+
+We focus on comprehension (similar processes apply in production), and assume some mechanism for proposing interpretations consisting of a set of conceptual structures and associated compatibility constraints. Compatibility constraints are analogous to various kinds of binding constraints proposed in the literature (variable binding, role-filler bindings, unification bindings, and the like): they are indicators that two structures should be conceptualized as a single unit. But compatibility is softer and more permissive than identity or type-compatibility, in that it can also be satisfied with the help of construal operations. Some operations effect relatively subtle shifts in meaning; others have more dramatic effects, including changes to truth-conditional aspects of meaning.
+
+Below we illustrate how some example linguistic phenomena fit into the sketch just presented and mention connections to prior lines of work.
+
+Count/mass coercion. English nouns are flexible in their count/mass status (see §3.4). Atypical marking for number or definiteness can cause a shift, or coercion, in boundedness: plural or indefinite marking on mass nouns (a lemonade, two lemonades) yields a bounded interpretation (cups or bottles of lemonade). Conversely, count nouns with no determiner are coerced to an undifferentiated mass, via a phenomenon known as grinding ("there was mosquito all over the windshield") (Pelletier and Schubert, 1989, 2003; Copestake and Briscoe, 1995).
Here we see evidence of the outsize influence of tiny grammatical markers on manipulating lexical defaults in the construal process.
+
+Aspectual composition. Aspect is a prime arena for studying how multiple factors conspire to shape event construal. Verbs are associated with default aspectual classes that can be coerced under pressure from conflicting cues, where details of event structure systematically constrain possible coercions and their inferential consequences (Moens and Steedman, 1988; Talmy, 1988).
+
+In fact, aspectual coercion can be reanalyzed in terms of construal dimensions. For example, durative modifiers (e.g. for an hour) prefer to combine with atelic processes (lacking a defined endpoint, as in 3a) on which to impose a bound (analogous to count/mass coercion) and duration. Combination with any other aspectual class triggers different operations to satisfy that preference:
+
+(3) a. He {slept / ran} for an hour.
+
+b. He sneezed for an hour.
+
+c. He read the book for an hour.
+
+d. He left for an hour.
+
+A single sneeze, being a discrete event unlikely to last an hour, undergoes ITERATION into a series of sneezes (3b), illustrating a change in plexity ( $\S 3.4$ ); while the book-reading in (3c) is simply viewed as unfinished (cf. "He read the book"). The departure in (3d) is a discrete event, but unlike sneezing, it also results in a state change that is reversible and therefore boundable (cf. the iterative reading of "He broke the glass for an hour", the non-permanent reading of 2c). Its coercion thus features multiple operations: a PROMINENCE shift to profile the result state of being gone; and then a BOUNDING that also reverses state, implying a return (Chang et al., 1998).
+
+Constructional coercion. The flagship example cited in the construction grammar literature (4a) has also been analyzed as a kind of coercion, serving to resolve conflicts between lexical and grammatical meaning (Goldberg, 1995, 2019):
+
+(4) a.
She sneezed the napkin off the table.
+b. She {pushed / blew / sneezed / ?slept} the napkin off the table.
+
+Here, the verb sneeze, though not typically transitive or causal, appears in a Caused Motion argument structure construction, which pairs oblique-transitive syntax with a caused motion scene. The resulting conflict between its conventional meaning and its putative causal role is resolvable, however, by a commonsense inference that sneezing expels air, which can plausibly cause the napkin's motion (cf. Forbes and Choi, 2017).
+
+This coercion, also described as role fusion, differs from the previous examples in manipulating the PROMINENCE of a latent component of meaning. Coercion doesn't always succeed, however: presumably sneezing could only move a boulder with contextual support, and sleeping has a less plausibly forceful reading. In fact, construal depends on the interaction of many factors, including degree of conventionality (where push and blow are prototypical caused motion verbs), embodied and world knowledge (the relative forces of sneeze and sleep to napkin weight), and context.
+
+There is extensive psycholinguistic evidence of constructional coercion and the many factors influencing ease of construal (see Goldberg (2003, 2019) for reviews). Some of these phenomena have been analyzed within computational implementations of construction grammar (Bergen and Chang, 2005; Bryant, 2008; Bergen and Chang, 2013; Dodge and Petruck, 2014; Steels, 2017; Steels and Feldman, 2017; Matos et al., 2017), and have also been incorporated in corpus annotation schemes (Bonial et al., 2011; Hwang et al., 2014; Lyngfelt et al., 2018).
+
+Metonymy and metaphor. Metonymy and metaphor are associated with semantic mismatches that trigger construal operations. A possible analysis of tiny iron grip from (1) illustrates both.
+
+First, the modifiers tiny and iron expect a physical entity, but grip is a (nominalized) action.
This conflict triggers a profile shift (PROMINENCE) to the grip's effector (a hand), effectively licensing a metonymy. A further conflict arises between the hand and its description as iron (unlikely to be literal unless the protagonist is of robotic lineage). A structural alignment (METAPHOR) then maps the iron's strength to the grip's force, which in turn maps to the degree of dictatorial control.
+
+We observe that multiple construal operations can occur in sequence; that a conceptual or linguistic element may afford more than one construal within the same analysis (grip as both a hand and metaphorical control); and that aspects of common sense, world knowledge, and culture (though not the focus of the present work) inevitably constrain construal options.
+
+# 5 Case studies
+
+We turn to a few illustrations of how the pervasive effects of construal can arise in applied settings.
+
+# 5.1 Case study 1: Conversational assistants
+
+Even simple tasks like rescheduling a meeting pose many challenges to dialogue systems, in both understanding users' intents and formulating natural responses. Consider the following exchange:
+
+U-1: When is my 1-1 with Chuck?
+A-2: 4 PM today, in 15 minutes.
+U-3: Is there another slot soon?
+A-4: Not today, should I check tomorrow?
+U-5: Let's push it to his tomorrow evening.
+A-6: Rescheduled 1-1 with Chuck for 2 PM tomorrow, 6 PM in Brazil.
+
+The agent's first response (A-2) demonstrates sensitivity to PERSPECTIVE by providing a relative time. Interpreting "another slot soon" in the user's follow-up (U-3) requires both understanding that another is implicitly defined in contrast to the existing slot (relying on PROMINENCE) and then inferring the appropriate RESOLUTION meant by soon (on the scale of hours, rather than minutes or seconds).
The agent's succinct response in (A-4) exploits PROMINENCE yet again, both by eliding reference to the sought-after open meeting slot with Chuck, and by using "tomorrow" (the direct object of "check") as a metonymic shorthand for the joint constraints of the user's and Chuck's calendars.
+
+The next user turn (U-5) employs METAPHOR in its construal of an event as a physical object, capable of being pushed. The metaphorical destination ("his tomorrow evening") requires consideration of differing time zones (PERSPECTIVE), as made explicit in the final agent turn (A-6).
+
+Interactions between situational context and the kinds of compatibility constraints discussed in §4 can also affect a dialogue system's best response. A user asking a fitness tracking app "How long have I been running?" while panting around a track may be referring to the current run, but the same question asked while sitting at home is more likely wondering how long they've been habitually running. A successful response requires the integration of the constraints from (at least): the verb running, whose progressive marking is associated with ongoing processes, but ambiguous between a single run and a series of runs (CONFIGURATION); the present-perfect have been V-ing, which implies an internal view (PERSPECTIVE); and the situational context (is the user currently running?).
+
+# 5.2 Case study 2: Human-robot interaction
+
+Situated interactions between humans and robots require the integration of language with other modalities (e.g., visual or haptic). Clearly, any spatially grounded referring expressions must be tailored to the interlocutors' PERSPECTIVE (whether shared or not) (Kunze et al., 2017).
+
+Focus of attention (PROMINENCE) is especially important for systems that must interpret procedural language.
Recipes, for example, are notoriously telegraphic, with rampant omissions of information that a human cook could easily infer in context (Ruppenhofer and Michaelis, 2010; Malmaud et al., 2014). Consider (5):
+
+(5) *In* a medium bowl, cream *together* the sugar and butter. Beat *in* the eggs, one at a time, then stir *in* the vanilla.
+
+The italicized words provide crucial constraints that would help a cook (human or robot) track the evolving spatial relations. The first *in* establishes the bowl as the reference point for the creaming action, whose result—the mixture of sugar and butter together—becomes the implicit landmark for the subsequent beating in of eggs and vanilla.
+
+Systems following instructions also require a means of segmenting continuous sensorimotor data and linking it to discrete linguistic categories (Regneri et al., 2013; Yagcioglu et al., 2018) (cf. the symbol grounding problem (Harnad, 1990)). This mapping may depend on flexibly adjusting RESOLUTION and CONFIGURATION based on linguistic cues (e.g., cut/dice/slice/sliver the apple).
+
+# 5.3 Case study 3: Paraphrase generation
+
+Despite many advances, paraphrase generation systems remain far from human performance. One vexing issue is the lack of evaluation metrics that correlate with human judgments for tasks like paraphrase, image captioning, and textual entailment (see, e.g., Bhagat and Hovy, 2013; Pavlick and Kwiatkowski, 2019; Wang et al., 2019b).
+
+In particular, it is unclear how closely a good paraphrase should hew to all aspects of the source sentence. For example, should active/passive descriptions of the same scene, or the sets of sentences in (2), be considered meaning-equivalent? Or take the putative paraphrase below:
+
+(6) a. The teacher sat on the student's left.
+
+b. Next to the children was a mammal.
+
+These could plausibly describe the same scene, yet they differ along multiple dimensions (PERSPECTIVE, PROMINENCE, RESOLUTION); should a paraphrase be rewarded or penalized for this diversity?
+
+A first step out of this quandary is to recognize construal dimensions and operations as a source of linguistic variability. Paraphrase generation and other semantically oriented tasks could incorporate these into system design and evaluation in task-specific ways.
+
+# 6 Discussion
+
+Throughout this paper, we have emphasized the flexible and multivalent nature of linguistic meaning, as evidenced by the construal phenomena described here. The effects of construal are ubiquitous: from conventional to creative language use, through morphemes and metaphors. Indeed, even the smallest forms can, like tiny tyrants, exert a transformative force on their surroundings, inducing anything from a subtle shift in emphasis to a radical reconceptualization.
+
+As illustrated in §5, this flexibility of language use poses a challenge for NLP practitioners. Yet crucially—and fortunately—construal is not random: variations in linguistic form correspond systematically to differences in construal. The dimensions of construal and their associated operations (§3 and §4) offer principled constraints that render the search for coherence more tractable.
+
+How, then, should we proceed? Our goal is for construal dimensions such as those highlighted in §3 to be incorporated into any research program aspiring to human-level linguistic behavior. Below, we describe several concrete recommendations for how to do this.
+
+More meaningful metrics. Taking construal seriously means rethinking how NLP tasks are designed and evaluated. Construal dimensions can provide a rubric for assessing tasks, datasets, and meaning representations (Abend and Rappoport, 2017) for which meaningful distinctions they make or require. (E.g.: Does it capture the level of RESOLUTION at which entities and events are described?
Does it represent METAPHOR? Is it sensitive to the PROMINENCE of different event participants?)
+
+Such questions might also help guard against unintended biases like those recently found in NLP evaluations and systems (e.g., Caliskan et al., 2017; Gururangan et al., 2018). Popular NLU benchmarks (like SuperGLUE; Wang et al., 2019a) should be critically examined for potential construal biases, and contrasts should be introduced deliberately to probe whether systems are modeling lexical choices, grammatical choices, and meaning in the desired way (Naik et al., 2018; Kaushik et al., 2020; McCoy et al., 2019; Gardner et al., 2020).
+
+As a broader suggestion, datasets should move away from a one-size-fits-all attitude based on gold annotations. Ideally, evaluation metrics should take into account not only partial structure matches, but also similarity to alternate construals.
+
+**Cognitive connections.** The many connections between construal and the rest of cognition highlight the need for further interdisciplinary engagements in the study of construal.
+
+The psycholinguistics literature is a particularly rich source of construal-related data and human language benchmarks. Psycholinguistic data could also be used to probe neural language models (Futrell et al., 2018; Linzen and Leonard, 2018; van Schijndel and Linzen, 2018; Ettinger, 2020). How well do such models capture the phenomena reviewed in §3, and where do they fall short?
+
+A fuller account of the constellation of factors involved in construal should also take seriously the grounded, situated nature of language use (Harnad, 1990; Kiros et al., 2018; Bender and Koller, 2020; Bisk et al., 2020).
Frameworks motivated by the linguistic insights mentioned in §2 (such as the work on computational construction grammar referenced in §4) and by growing evidence of embodied simulations as the basis for meaning (Narayanan, 1999; Bergen and Chang, 2005; Feldman, 2006; Bergen, 2012; Tamari et al., 2020) are especially relevant lines of inquiry.
+
+Much work remains to flesh out the construal dimensions, operations, and phenomena preliminarily identified in §3 and §4, especially in connecting to typological, sociolinguistic, developmental, and neural constraints on conceptualization. We believe a concerted effort across the language sciences would provide valuable guidance for developing better NL systems and resources.
+
+# 7 Conclusion
+
+As the saying goes, the camera doesn't lie—but it may tell us only a version of the truth. The same goes for language.
+
+Some of the phenomena we have described may seem, at first glance, either too subtle to bother with or too daunting to tackle. But we believe it is both timely and necessary, as language technologies grow in scope and prominence, to seek a more robust treatment of meaning. We hope that a deeper appreciation of the role of construal in language use will spur progress toward systems that more closely approximate human linguistic intelligence.
+
+# Acknowledgments
+
+We are grateful to Lucia Donatelli, Nick Hay, Aurélie Herbelot, Jena Hwang, Jakob Prange, Susanne Riehemann, Hannah Rohde, Rachel Rudinger, and anonymous reviewers for many helpful suggestions; and to the ACL 2020 organizers for planning a special theme, Taking Stock of Where We've Been and Where We're Going. Special thanks to Nora Chang-Hay for finally relaxing her tiny iron grip.
+
+This research was supported in part by NSF award IIS-1812778. The FrameNet Brasil Lab is funded by CAPES grants 88887.125411/2016-00 and 88887.144043/2017-00.
+
+# References
+
+Omri Abend and Ari Rappoport. 2017. The state of the art in semantic representation.
In Proc. of ACL, pages 77-89, Vancouver, Canada.
+Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria.
+Emily M. Bender and Alexander Koller. 2020. Climbing towards NLU: On meaning, form, and understanding in the age of data. In Proc. of ACL.
+Benjamin Bergen and Nancy Chang. 2013. Embodied Construction Grammar. In Thomas Hoffmann and Graeme Trousdale, editors, The Oxford Handbook of Construction Grammar, pages 168-190. Oxford University Press, New York.
+Benjamin Bergen and Kathryn Wheeler. 2010. Grammatical aspect and mental simulation. *Brain and Language*, 112(3):150-158.
+Benjamin K. Bergen. 2012. Louder Than Words: The New Science of How the Mind Makes Meaning. Perseus Books Group, New York.
+Benjamin K. Bergen and Nancy Chang. 2005. Embodied Construction Grammar in simulation-based language understanding. In Jan-Ola Östman and Mirjam Fried, editors, Construction grammars: cognitive grounding and theoretical extensions, pages 147-190. John Benjamins, Amsterdam.
+Rahul Bhagat and Eduard Hovy. 2013. What is a paraphrase? Computational Linguistics, 39(3):463-472.
+Dorrit Billman and Meredyth Krych. 1998. Path and manner verbs in action: Effects of "skipping" or "exiting" on event memory. In Proc. of CogSci, volume 20, pages 156-161, Madison, Wisconsin.
+Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. arXiv:2004.10151 [cs].
+Claire Bonial, Susan Windisch Brown, Jena D. Hwang, Christopher Parisien, Martha Palmer, and Suzanne Stevenson. 2011. Incorporating coercive constructions into a verb lexicon. In Proc.
of the ACL 2011 Workshop on Relational Models of Semantics, pages 72-80, Portland, Oregon, USA.
+Claire Bonial, Lucia Donatelli, Stephanie M. Lukin, Stephen Tratz, Ron Artstein, David Traum, and Clare Voss. 2019. Augmenting Abstract Meaning Representation for human-robot dialogue. In Proc. of the First International Workshop on Designing Meaning Representations, pages 199-210, Florence, Italy.
+Lera Boroditsky. 2000. Metaphoric structuring: Understanding time through spatial metaphors. Cognition, 75(1):1-28.
+Lera Boroditsky. 2001. Does language shape thought?: Mandarin and English speakers' conceptions of time. Cognitive Psychology, 43(1):1-22.
+Joan Bresnan, Anna Cueni, Tatiana Nikitina, and R. Harald Baayen. 2007. Predicting the dative alternation. In Gerlof Bouma, Irene Kraemer, and Joost Zwarts, editors, Cognitive foundations of interpretation, pages 69-94. KNAW, Amsterdam.
+Penelope Brown and Stephen C. Levinson. 1987. Politeness: Some universals in language usage, volume 4. Cambridge University Press.
+Tad T. Brunyé, Tali Ditman, Caroline R. Mahoney, Jason S. Augustyn, and Holly A. Taylor. 2009. When you and I share perspectives: Pronouns modulate perspective taking during narrative comprehension. Psychological Science, 20(1):27-32.
+John Bryant. 2008. Best-fit constructional analysis. Ph.D. dissertation, University of California, Berkeley, Berkeley, California.
+Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.
+Eugene Casad. 1995. Seeing it in more than one way. In John R. Taylor, editor, Language and the Cognitive Construal of the World, pages 23-49. Mouton de Gruyter, Berlin.
+Wallace L. Chafe and Johanna Nichols. 1986. Evidentiality: The linguistic coding of epistemology, volume 20. Ablex Publishing Corporation, Norwood, NJ.
+Nancy Chang, Daniel Gildea, and Srini Narayanan. 1998. A dynamic model of aspectual composition. In Proc.
of CogSci, pages 226-231, Madison, WI, USA. +Herbert H. Clark. 1979. Responding to indirect speech acts. Cognitive Psychology, 11(4):430-477. +Bernard Comrie. 1976. Aspect: An introduction to the study of verbal aspect and related problems, volume 2. Cambridge University Press, New York. +Ann Copestake and Ted Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, 12(1):15-67. +William Croft. 2012. Verbs: Aspect and Causal Structure. Oxford University Press, Oxford, UK. + +William Croft and D. Alan Cruse. 2004. Conceptualization and construal operations. In Cognitive Linguistics, chapter 3. Cambridge University Press. +William Croft and Esther J. Wood. 2000. Construal operations in linguistics and artificial intelligence. In Liliana Albertazzi, editor, Meaning and Cognition: A multidisciplinary approach, pages 51-78. John Benjamins, Amsterdam. +Oana David and Barbara Dancygier. 2017. Computational approaches to metaphor: the case of MetaNet. In Barbara Dancygier, editor, The Cambridge Handbook of Cognitive Linguistics, pages 574-589. Cambridge University Press, Cambridge. +Sandra Devin and Rachid Alami. 2016. An implemented theory of mind to improve human-robot shared plans execution. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 319-326. IEEE. +Tali Ditman, Tad T. Brunyé, Caroline R. Mahoney, and Holly A. Taylor. 2010. Simulating an enactment effect: Pronouns guide action simulation during narrative comprehension. Cognition, 115(1):172-178. +Ellen Dodge, Jisup Hong, and Elise Stickles. 2015. MetaNet: Deep semantic automatic metaphor analysis. In Proc. of the Third Workshop on Metaphor in NLP, pages 40-49, Denver, Colorado, USA. +Ellen K. Dodge and Miriam R. L. Petruck. 2014. Representing caused motion in Embodied Construction Grammar. In Proc. of the ACL 2014 Workshop on Semantic Parsing, pages 39-44, Baltimore, MD. +Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider. 2018. 
Annotation of tense and aspect semantics for sentential AMR. In Proc. of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), pages 96-108, Santa Fe, New Mexico, USA. +David R. Dowty. 1991. Thematic proto-roles and argument selection. Language, 67(3):547-619. +Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48. +Caitlin M. Fausey and Lera Boroditsky. 2010. Subtle linguistic cues influence perceived blame and financial liability. Psychonomic Bulletin & Review, 17(5):644-650. +Caitlin M. Fausey and Teenie Matlock. 2011. Can grammar win elections? Political Psychology, 32(4):563-574. +Jerome A. Feldman. 2006. From molecule to metaphor: a neural theory of language. MIT Press, Cambridge, MA. + +Nora Fieder, Lyndsey Nickels, and Britta Biedermann. 2014. Representation and processing of mass and count nouns: A review. Frontiers in Psychology, 5:589. +Charles J. Fillmore. 1977. The case for case reopened. In Peter Cole and Jerrold M. Sadock, editors, Syntax and Semantics, vol. 8: Grammatical Relations, pages 59-81. Academic Press, New York. +Charles J. Fillmore. 1985. Frames and the semantics of understanding. *Quaderni di Semantica*, 6(2):222-254. +Charles J. Fillmore and Collin Baker. 2009. A frames approach to semantic analysis. In Bernd Heine and Heiko Narrog, editors, The Oxford Handbook of Linguistic Analysis, pages 791-816. Oxford University Press, Oxford, UK. +Stephen J. Flusberg, Teenie Matlock, and Paul H. Thibodeau. 2017. Metaphors for the war (or race) against climate change. *Environmental Communication*, 11(6):769-783. +Maxwell Forbes and Yejin Choi. 2017. Verb Physics: relative physical knowledge of actions and objects. In Proc. of ACL, pages 266-276, Vancouver, Canada. +Annemarie Friedrich and Alexis Palmer. 2014. 
Automatic prediction of aspectual class of verbs in context. In Proc. of ACL, pages 517-523, Baltimore, Maryland, USA.
+Richard Futrell, Ethan Wilcox, Takashi Morita, and Roger Levy. 2018. RNNs as psycholinguistic subjects: Syntactic state and grammatical dependency. arXiv preprint arXiv:1809.01329.
+Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating NLP models via contrast sets. arXiv:2004.02709 [cs].
+Silvia P. Gennari, Steven A. Sloman, Barbara C. Malt, and W. Tecumseh Fitch. 2002. Motion events in language and cognition. Cognition, 83(1):49-79.
+Brendan Gillon, Eva Kehayia, and Vanessa Taler. 1999. The mass/count distinction: Evidence from on-line psycholinguistic performance. *Brain and Language*, 68(1-2):205-211.
+Adele E. Goldberg. 1995. *Constructions: A construction grammar approach to argument structure*. University of Chicago Press, Chicago.
+Adele E. Goldberg. 2003. *Constructions: A new theoretical approach to language*. Trends in Cognitive Sciences, 7(5):219-224.
+Adele E. Goldberg. 2019. *Explain Me This: Creativity, Competition, and the Partial Productivity of Constructions*. Princeton University Press, Princeton.
+Caroline Graf, Judith Degen, Robert X.D. Hawkins, and Noah D. Goodman. 2016. Animal, dog, or dalmatian? Level of abstraction in nominal referring expressions. In Proc. of CogSci, pages 2261-2266, Philadelphia, PA.
+Stephan Greene and Philip Resnik. 2009. More than words: syntactic packaging and implicit sentiment. In Proc. of NAACL-HLT, pages 503-511, Boulder, Colorado.
+Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018.
Annotation artifacts in natural language inference data. In Proc. of NAACL-HLT, pages 107-112, New Orleans, Louisiana.
+E. Dario Gutiérrez, Ekaterina Shutova, Tyler Marghetis, and Benjamin Bergen. 2016. Literal and metaphorical senses in compositional distributional semantic models. In Proc. of ACL, pages 183-193, Berlin, Germany.
+Lala Hajibayova. 2013. Basic-level categories: A review. Journal of Information Science, 39(5):676-687.
+Stevan Harnad. 1990. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1):335-346.
+David J. Hauser and Norbert Schwarz. 2015. The war on prevention: Bellicose cancer metaphors hurt (some) prevention intentions. *Personality and Social Psychology Bulletin*, 41(1):66-77.
+Rose K. Hendricks and Lera Boroditsky. 2017. New space-time metaphors foster new nonlinguistic representations. Topics in Cognitive Science, 9(3):800-818.
+Rose K. Hendricks, Zsófia Demjén, Elena Semino, and Lera Boroditsky. 2018. Emotional implications of metaphor: Consequences of metaphor framing for mindset about cancer. Metaphor and Symbol, 33(4):267-279.
+Jena D. Hwang, Archna Bhatia, Na-Rae Han, Tim O'Gorman, Vivek Srikumar, and Nathan Schneider. 2017. Double trouble: the problem of construal in semantic annotation of adpositions. In Proc. of *SEM, pages 178-188, Vancouver, Canada.
+Jena D. Hwang, Annie Zaenen, and Martha Palmer. 2014. Criteria for identifying and annotating caused motion constructions in corpus data. In Proc. of LREC, pages 1297-1304, Reykjavík, Iceland.
+Edward Kako. 2006. Thematic role properties of subjects and objects. Cognition, 101(1):1-42.
+Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In Proc. of ICLR.
+Jamie Kiros, William Chan, and Geoffrey Hinton. 2018. Illustrative language understanding: large-scale visual grounding with image search. In Proc. of ACL, pages 922-933, Melbourne, Australia.
+Tibor Kiss, Francis Jeffry Pelletier, Halima Husic, and Johanna Poppek. 2017. Issues of mass and count: Dealing with 'dual-life' nouns. In Proc. of *SEM, pages 189-198, Vancouver, Canada.
+Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019a. Domain agnostic real-valued specificity prediction. In Proc. of AAAI, volume 33, pages 6610-6617.
+Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019b. Linguistically-informed specificity and semantic plausibility for dialogue generation. In Proc. of NAACL-HLT, pages 3456-3466, Minneapolis, Minnesota.
+Lars Kunze, Tom Williams, Nick Hawes, and Matthias Scheutz. 2017. Spatial referring expression generation for HRI: Algorithms and evaluation framework. In 2017 AAAI Fall Symposium Series, pages 27-35, Palo Alto, CA.
+George Lakoff. 1987. Women, fire, and dangerous things: what categories reveal about the mind. University of Chicago Press, Chicago.
+George Lakoff and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago.
+Ronald W. Langacker. 1993. Universals of construal. In Proc. of Berkeley Linguistics Society, volume 19, pages 447-463.
+Junyi Jessy Li and Ani Nenkova. 2015. Fast and accurate prediction of sentence specificity. In Proc. of AAAI, pages 2281-2287, Austin, Texas.
+Junyi Jessy Li, Bridget O'Daniel, Yi Wu, Wenli Zhao, and Ani Nenkova. 2016. Improving the annotation of sentence specificity. In Proc. of LREC, pages 3921-3927, Portorož, Slovenia.
+Tal Linzen and Brian Leonard. 2018. Distinct patterns of syntactic agreement errors in recurrent networks and humans. In Proc. of CogSci, pages 690-695, Madison, WI.
+Hugo Liu and Push Singh. 2004. ConceptNet—a practical commonsense reasoning tool-kit. BT Technology Journal, 22(4):211-226.
+Annie Louis and Ani Nenkova. 2012. A corpus of general and specific sentences from news. In Proc. of LREC, pages 1818-1821, Istanbul, Turkey.
+Benjamin Lyngfelt, Lars Borin, Kyoko Ohara, and Tiago Timponi Torrent. 2018.
Constructicography: Constructicon development across languages. John Benjamins, Amsterdam. + +Joseph P. Magliano and Michelle C. Schleich. 2000. Verb aspect and situation models. Discourse Processes, 29(2):83-112. +Jonathan Malmaud, Earl Wagner, Nancy Chang, and Kevin Murphy. 2014. Cooking with semantics. In Proc. of the ACL 2014 Workshop on Semantic Parsing, pages 33-38, Baltimore, MD. +Thomas A. Mathew and E. Graham Katz. 2009. Supervised categorization for habitual versus episodic sentences. In Sixth Midwest Computational Linguistics Colloquium, Bloomington, Indiana. +Teenie Matlock. 2004a. The conceptual motivation of fictive motion. In *Studies in Linguistic Motivation*, pages 221-248. Mouton de Gruyter, Berlin. +Teenie Matlock. 2004b. Fictive motion as cognitive simulation. *Memory & Cognition*, 32(8):1389-1400. +Teenie Matlock. 2011. The conceptual motivation of aspect. In Klaus-Uwe Panther and Gunter Radden, editors, Motivation in Grammar and the Lexicon, pages 133-148. John Benjamins Publishing, Amsterdam. +Ely Matos, Tiago Torrent, Vania Almeida, Adrieli Laviola, Ludmila Lage, Natalia Marcao, and Tatiane Tavares. 2017. Constructional analysis using constrained spreading activation in a FrameNet-based structured connectionist model. In Computational Construction Grammar and Natural Language Understanding: Papers from the 2017 AAAI Spring Symposium, pages 222-229, Stanford, California. AAAI Press. +Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proc. of ACL, pages 3428-3448, Florence, Italy. +Carolyn B. Mervis and Eleanor Rosch. 1981. Categorization of natural objects. Annual Review of Psychology, 32(1):89-115. +Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2):15-28. +Tanja Mortelmans. 2007. Modality in cognitive linguistics. In *The Oxford Handbook of Cognitive Linguistics*. Oxford University Press. 
+Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rosé, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proc. of COLING, pages 2340-2353, Santa Fe, New Mexico, USA.
+Srinivas Narayanan. 1999. Moving right along: a computational model of metaphoric reasoning about events. In Proc. of AAAI, pages 121-128, Orlando, Florida.
+Rafael E. Núñez and Eve Sweetser. 2006. With the future behind them: convergent evidence from Aymara language and gesture in the crosslinguistic comparison of spatial construals of time. Cognitive Science, 30(3):401-450.
+Rebecca J. Passonneau. 1988. A computational model of the semantics of tense and aspect. Computational Linguistics, 14(2):44-60.
+Ellie Pavlick and Tom Kwiatkowski. 2019. Inherent disagreements in human textual inferences. Transactions of the Association for Computational Linguistics, 7:677-694.
+Francis Jeffry Pelletier and Lenhart K. Schubert. 1989. Mass expressions. In D. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic: Volume IV: Topics in the Philosophy of Language, Synthese Library, pages 327-407. Springer Netherlands, Dordrecht.
+Francis Jeffry Pelletier and Lenhart K. Schubert. 2003. Mass expressions. In D. M. Gabbay and F. Guenthner, editors, Handbook of Philosophical Logic, pages 249-335. Springer Netherlands, Dordrecht.
+Martin J. Pickering and Victor S. Ferreira. 2008. Structural priming: A critical review. Psychological Bulletin, 134(3):427.
+Stephen Pulman. 1997. Aspectual shift as type coercion. Transactions of the Philological Society, 95.
+Pirita Pyykkönen, Danielle Matthews, and Juhani Järvikivi. 2010. Three-year-olds are sensitive to semantic prominence during online language comprehension: A visual world study of pronoun resolution. Language and Cognitive Processes, 25(1):115-129.
+Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013.
Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25-36.
+Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics, 3:475-488.
+Hannah Rohde, Alexander Johnson, Nathan Schneider, and Bonnie Webber. 2018. Discourse coherence: concurrent explicit and implicit relations. In Proc. of ACL, pages 2257-2267, Melbourne, Australia.
+Raquel Ros, Severin Lemaignan, E. Akin Sisbot, Rachid Alami, Jasmin Steinwender, Katharina Hamann, and Felix Warneken. 2010. Which one? Grounding the referent based on efficient human-robot interaction. In 19th International Symposium on Robot and Human Interactive Communication, pages 570-575. IEEE.
+Eleanor Rosch, Carolyn B. Mervis, Wayne Gray, David Johnson, and Penny Boyes-Braem. 1976. Basic objects in natural categories. Cognitive Psychology, 8(3):382-439.
+Rachel Rudinger, Adam Teichert, Ryan Culkin, Sheng Zhang, and Benjamin Van Durme. 2018. Neural-Davidsonian semantic proto-role labeling. In Proc. of EMNLP, pages 944-955, Brussels, Belgium.
+Josef Ruppenhofer and Laura A. Michaelis. 2010. A constructional account of genre-based argument omissions. *Constructions and Frames*, 2(2):158-184.
+Michael Schiehlen and Kristina Spranger. 2006. The mass-count distinction: acquisition and disambiguation. In Proc. of LREC, pages 265-270, Genoa, Italy.
+Marten van Schijndel and Tal Linzen. 2018. Modeling garden path effects without explicit hierarchical syntax. In Proc. of CogSci, pages 2603-2608, Madison, WI.
+Ekaterina Shutova. 2015. Design and evaluation of metaphor processing systems. Computational Linguistics, 41(4):579-623.
+Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proc. of Coling, pages 1002-1010, Beijing, China.
+Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013.
Statistical metaphor processing. Computational Linguistics, 39(2):301-353. +Eric V. Siegel and Kathleen R. McKeown. 2000. Learning methods to combine linguistic indicators: Improving aspectual classification and revealing linguistic insights. Computational Linguistics, 26(4):595-628. +Luc Steels. 2017. Basics of Fluid Construction Grammar. *Constructions and Frames*, 9(2):178-255. +Luc Steels and Jerome Feldman, editors. 2017. Computational Construction Grammar and Natural Language Understanding: Papers from the 2017 AAAI Spring Symposium. AAAI Press, Stanford, California. +Elise Stickles, Ellen Dodge, and Jisup Hong. 2014. A construction-driven, MetaNet-based approach to metaphor extraction and corpus analysis. Presented at Conceptual Structure, Discourse, and Language (CSDL 12), Santa Barbara, California. +Leonard Talmy. 1988. Grammatical construal. In Brygida Rudzka-Ostyn, editor, Topics in Cognitive Linguistics, pages 165-205. John Benjamins, Amsterdam. + +Leonard Talmy. 1996. Fictive motion in language and "ception". In Paul Bloom, Mary A. Peterson, Lynn Nadel, and Merrill F. Garrett, editors, Language and space, pages 211-276. The MIT Press, Cambridge, MA. +Leonard Talmy. 2000. Toward a cognitive semantics: concept structuring systems. MIT Press, Cambridge, MA. +Ronen Tamari, Chen Shani, Tom Hope, Miriam R. L. Petruck, Omri Abend, and Dafna Shahaf. 2020. Language (re)modelling: Towards embodied language understanding. In Proc. of ACL. +James W. Tanaka and Marjorie Taylor. 1991. Object categories and expertise: Is the basic level in the eye of the beholder? Cognitive Psychology, 23(3):457-482. +John R. Taylor. 1995. Introduction: On construing the world. In John R. Taylor, editor, Language and the Cognitive Construal of the World, pages 1-22. Mouton de Gruyter, Berlin. +Paul H. Thibodeau and Lera Boroditsky. 2011. Metaphors we think with: the role of metaphor in reasoning. PLoS ONE, 6(2):e16782. +Paul H. Thibodeau, Rose K. Hendricks, and Lera Boroditsky. 2017. 
How linguistic metaphor scaffolds reasoning. Trends in Cognitive Sciences, 21(11):852-863.
+J. Gregory Trafton, Nicholas L. Cassimatis, Magdalena D. Bugajska, Derek P. Brock, Farilee E. Mintz, and Alan C. Schultz. 2005. Enabling effective human-robot interaction using perspective-taking in robots. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans, 35(4):460-470.
+Zeno Vendler. 1967. Linguistics in Philosophy. Cornell University Press, Ithaca, NY.
+Joshua Wampler and Eva Wittenberg. 2019. Doing thus and so: Event referential expressions and referent complexity. Presented at California Meeting on Psycholinguistics (CAMP) 3, UC Santa Cruz.
+Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019a. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In Proc. of NeurIPS, pages 3266-3280, Vancouver, Canada.
+Su Wang, Rahul Gupta, Nancy Chang, and Jason Baldridge. 2019b. A task in a suit and a tie: paraphrase generation with semantic augmentation. In Proc. of AAAI, volume 33, pages 7176-7183, Honolulu, Hawaii.
+Eva Wittenberg, Ray Jackendoff, Gina Kuperberg, Martin Paczynski, Jesse Snedeker, and Heike Wiese. 2014. The processing and representation of light verb constructions. In Asaf Bachrach, Isabelle Roy, and Linnaea Stockall, editors, Structuring the argument, pages 61-80. John Benjamins, Amsterdam.
+Eva Wittenberg and Roger Levy. 2017. If you want a quick kiss, make it count: How choice of syntactic construction affects event construal. Journal of Memory and Language, 94:254-271.
+Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Nazli Ikizler-Cinbis. 2018. RecipeQA: a challenge dataset for multimodal comprehension of cooking recipes. In Proc. of EMNLP, pages 1358-1368, Brussels, Belgium.
\ No newline at end of file diff --git a/reconstruingmeaninginnlp/layout.json b/reconstruingmeaninginnlp/layout.json new file mode 100644 index 0000000000000000000000000000000000000000..7ab1317da4f8c10afacd18a2a0711fddb163ab4f --- /dev/null +++ b/reconstruingmeaninginnlp/layout.json @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:0f85784d07e906dcbd1e031a5b750fd9aaa62d69298905500b19ba75960f6242 +size 485710