{ "paper_id": "P19-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:23:33.123728Z" }, "title": "Improving Multi-turn Dialogue Modelling with Utterance ReWriter", "authors": [ { "first": "Hui", "middle": [], "last": "Su", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "", "affiliation": { "laboratory": "MPI Informatics & Spoken Language Systems (LSV)", "institution": "", "location": {} }, "email": "xshen@mpi-inf.mpg.de" }, { "first": "Rongzhi", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Chinese Academy of Science", "location": {} }, "email": "" }, { "first": "Fei", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "Alibaba Group", "institution": "", "location": {} }, "email": "" }, { "first": "Pengwei", "middle": [], "last": "Hu", "suffix": "", "affiliation": { "laboratory": "", "institution": "IBM Research", "location": { "country": "China" } }, "email": "" }, { "first": "Cheng", "middle": [], "last": "Niu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jie", "middle": [], "last": "Zhou", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent research has made impressive progress in single-turn dialogue modelling. In the multi-turn setting, however, current models are still far from satisfactory. One major challenge is the frequent coreference and information omission in our daily conversations, which makes it hard for machines to understand the real intention. In this paper, we propose rewriting the human utterance as a pre-processing step to help multi-turn dialogue modelling. Each utterance is first rewritten to recover all coreferred and omitted information. The next processing steps are then performed based on the rewritten utterance. 
To properly train the utterance rewriter, we collect a new dataset with human annotations and introduce a Transformer-based utterance rewriting architecture using the pointer network. We show that the proposed architecture achieves remarkably good performance on the utterance rewriting task. The trained utterance rewriter can be easily integrated into online chatbots and brings general improvement over different domains. 1", "pdf_parse": { "paper_id": "P19-1003", "_pdf_hash": "", "abstract": [ { "text": "Recent research has made impressive progress in single-turn dialogue modelling. In the multi-turn setting, however, current models are still far from satisfactory. One major challenge is the frequent coreference and information omission in our daily conversations, which makes it hard for machines to understand the real intention. In this paper, we propose rewriting the human utterance as a pre-processing step to help multi-turn dialogue modelling. Each utterance is first rewritten to recover all coreferred and omitted information. The next processing steps are then performed based on the rewritten utterance. To properly train the utterance rewriter, we collect a new dataset with human annotations and introduce a Transformer-based utterance rewriting architecture using the pointer network. We show that the proposed architecture achieves remarkably good performance on the utterance rewriting task. The trained utterance rewriter can be easily integrated into online chatbots and brings general improvement over different domains. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dialogue systems have made dramatic progress in recent years, especially in single-turn chit-chat and FAQ matching (Shang et al., 2015; Ghazvininejad et al., 2018; Molino et al., 2018; . 
Nonetheless, multi-turn dialogue modelling still remains extremely challenging (Vinyals and Le, 2015; Serban et al., 2016 Serban et al., , 2017 Shen et al., 2018a,b) . The challenge is multi-sided. One of the most important difficulties is the frequent coreference and information omission in our daily conversations, especially in pro-drop languages like Chinese or Japanese. From our preliminary study of 2,000 Chinese multi-turn conversations, different degrees of coreference and omission exist in more than 70% of the utterances. Table 1 : An example of multi-turn dialogue. Each utterance 3 is rewritten into Utterance 3 . Green means coreference and blue means omission.", "cite_spans": [ { "start": 115, "end": 135, "text": "(Shang et al., 2015;", "ref_id": "BIBREF26" }, { "start": 136, "end": 163, "text": "Ghazvininejad et al., 2018;", "ref_id": "BIBREF11" }, { "start": 164, "end": 184, "text": "Molino et al., 2018;", "ref_id": "BIBREF18" }, { "start": 265, "end": 287, "text": "(Vinyals and Le, 2015;", "ref_id": "BIBREF32" }, { "start": 288, "end": 307, "text": "Serban et al., 2016", "ref_id": "BIBREF24" }, { "start": 308, "end": 329, "text": "Serban et al., , 2017", "ref_id": "BIBREF25" }, { "start": 330, "end": 351, "text": "Shen et al., 2018a,b)", "ref_id": null } ], "ref_spans": [ { "start": 625, "end": 632, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Capturing the hidden intention beneath them requires a deeper understanding of the dialogue context, which is difficult for current neural network-based systems. Table 1 shows two typical examples in multi-turn dialogues. \"\u4ed6\"(he) from Context 1 is a coreference to \"\u6885\u897f\"(Messi) and \"\u4e3a\u4ec0\u4e48\"(Why) from Context 2 omits the further question of \"\u4e3a\u4ec0\u4e48\u6700\u559c\u6b22\u6cf0\u5766\u5c3c\u514b\"(Why do you like Titanic most)? 
Without expanding the coreference or omission to recover the full information, the chatbot has no idea how to continue the talk.", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 266, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To address this concern, we propose simplifying multi-turn dialogue modelling into a single-turn problem by rewriting the current utterance. The utterance rewriter is expected to perform (1) coreference resolution and (2) information completion to recover all coreferred and omitted mentions. In the two examples from Table 1, each utterance 3 will be rewritten into utterance 3 . Afterwards, the system will generate a reply by looking only into utterance 3 without considering the previous turns, utterance 1 and 2. This simplification shortens the length of the dialogue context while still maintaining the necessary information needed to provide proper responses, which we believe will help ease the difficulty of multi-turn dialogue modelling. Compared with other methods like memory networks (Sukhbaatar et al., 2015) or explicit belief tracking (Mrk\u0161i\u0107 et al., 2017) , the trained utterance rewriter is model-agnostic and can be easily integrated into other black-box dialogue systems. It is also more memory-efficient because the dialogue history information is reflected in a single rewritten utterance.", "cite_spans": [ { "start": 797, "end": 822, "text": "(Sukhbaatar et al., 2015)", "ref_id": "BIBREF29" }, { "start": 851, "end": 872, "text": "(Mrk\u0161i\u0107 et al., 2017)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To get supervised training data for the utterance rewriting, we construct a Chinese dialogue dataset containing 20k multi-turn dialogues. Each utterance is paired with corresponding manually annotated rewritings. 
We model this problem as an extractive generation problem using the Pointer Network . The rewritten utterance is generated by copying words from either the dialogue history or the current utterance based on the attention mechanism (Bahdanau et al., 2014) . Inspired by the recently proposed Transformer architecture (Vaswani et al., 2017) in machine translation, which can better capture intra-sentence word dependencies, we modify the Transformer architecture to include the pointer network mechanism. The resulting model outperforms the recurrent neural network (RNN) and original Transformer models, achieving an F1 score of over 0.85 for both coreference resolution and information completion. Furthermore, we integrate our trained utterance rewriter into two online chatbot platforms and find that it leads to more accurate intention detection and improves user engagement. In summary, our contributions are: 1. We collect a high-quality annotated dataset for coreference resolution and information completion in multi-turn dialogues, which might benefit future related research.", "cite_spans": [ { "start": 444, "end": 467, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF1" }, { "start": 529, "end": 551, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2. We propose a highly effective Transformer-based utterance rewriter outperforming several strong baselines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "3. The trained utterance rewriter, when integrated into two real-life online chatbots, is shown to bring significant improvement over the original system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the next section, we will first go over some related work. 
Afterwards, in Sections 3 and 4, our collected dataset and proposed model are introduced. The experimental results and analysis are presented in Section 5. Finally, some conclusions are drawn in Section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Related Work", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Sentence rewriting has been widely adopted in various NLP tasks. In machine translation, people have used it to refine the output generations from seq2seq models (Niehues et al., 2016; Junczys-Dowmunt and Grundkiewicz, 2017; Grangier and Auli, 2017; Gu et al., 2017) . In text summarization, re-editing the retrieved candidates can provide more accurate and abstractive summaries (See et al., 2017; Chen and Bansal, 2018; Cao et al., 2018) . In dialogue modelling, Weston et al. (2018) applied it to rewrite outputs from a retrieval model, but paid no attention to recovering the information hidden by coreference and omission. Concurrent with our work, Rastogi et al. (2019) adopts a similar idea on English conversations to simplify the downstream SLU task by reformulating the original utterance. Rewriting the source input into some easy-to-process standard format has also brought significant improvements in information retrieval (Riezler and Liu, 2010) , semantic parsing (Chen et al., 2016) or question answering (Abujabal et al., 2018) , but most of these approaches adopt a simple dictionary- or template-based rewriting strategy. 
For multi-turn dialogues, due to the complexity of human languages, designing suitable template-based rewriting rules would be time-consuming.", "cite_spans": [ { "start": 162, "end": 184, "text": "(Niehues et al., 2016;", "ref_id": "BIBREF20" }, { "start": 185, "end": 224, "text": "Junczys-Dowmunt and Grundkiewicz, 2017;", "ref_id": "BIBREF15" }, { "start": 225, "end": 249, "text": "Grangier and Auli, 2017;", "ref_id": "BIBREF12" }, { "start": 250, "end": 266, "text": "Gu et al., 2017)", "ref_id": "BIBREF13" }, { "start": 379, "end": 397, "text": "(See et al., 2017;", "ref_id": "BIBREF23" }, { "start": 398, "end": 420, "text": "Chen and Bansal, 2018;", "ref_id": "BIBREF7" }, { "start": 421, "end": 438, "text": "Cao et al., 2018)", "ref_id": "BIBREF4" }, { "start": 464, "end": 484, "text": "Weston et al. (2018)", "ref_id": "BIBREF33" }, { "start": 664, "end": 685, "text": "Rastogi et al. (2019)", "ref_id": "BIBREF21" }, { "start": 945, "end": 968, "text": "(Riezler and Liu, 2010)", "ref_id": "BIBREF22" }, { "start": 988, "end": 1007, "text": "(Chen et al., 2016)", "ref_id": "BIBREF5" }, { "start": 1030, "end": 1053, "text": "(Abujabal et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Sentence Rewriting", "sec_num": "2.1" }, { "text": "Coreference resolution aims to link each possible mention to its antecedent. Traditional approaches often adopt a pipeline structure that first identifies all pronouns and entities and then runs clustering algorithms (Haghighi and Klein, 2009; Lee et al., 2011; Durrett and Klein, 2013; Bj\u00f6rkelund and Kuhn, 2014) . At both stages, they rely heavily on complicated, fine-grained features. Recently, several neural coreference resolution systems (Clark and Manning, 2016a,b) utilize distributed representations to reduce human labor. Lee et al. (2017) reported state-of-the-art results with an end-to-end neural coreference resolution system. 
However, it requires computing the scores for all possible spans, which is computationally inefficient for online dialogue systems. The recently proposed Transformer adopts the self-attention mechanism, which can implicitly capture inter-word dependencies in an unsupervised way (Vaswani et al., 2017) . However, when multiple coreferences occur, it has problems properly distinguishing them. Our proposed architecture is built upon the Transformer architecture, but performs coreference resolution in a supervised setting to help deal with ambiguous mentions.", "cite_spans": [ { "start": 211, "end": 237, "text": "(Haghighi and Klein, 2009;", "ref_id": "BIBREF14" }, { "start": 238, "end": 255, "text": "Lee et al., 2011;", "ref_id": "BIBREF16" }, { "start": 256, "end": 280, "text": "Durrett and Klein, 2013;", "ref_id": "BIBREF10" }, { "start": 281, "end": 307, "text": "Bj\u00f6rkelund and Kuhn, 2014)", "ref_id": "BIBREF2" }, { "start": 439, "end": 467, "text": "(Clark and Manning, 2016a,b)", "ref_id": null }, { "start": 528, "end": 545, "text": "Lee et al. (2017)", "ref_id": "BIBREF17" }, { "start": 917, "end": 939, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "2.2" }, { "text": "To get parallel training data for the sentence rewriting, we crawled 200k candidate multi-turn conversations from several popular Chinese social media platforms for human annotators to work on. Sensitive information is filtered out beforehand. Before starting the annotation, we randomly sample 2,000 conversations and analyze how often coreference and omission occur in multi-turn dialogues. Table 2 lists the statistics. As can be seen, fewer than 30% of the utterances have neither coreference nor omission, and quite a few utterances have both. This further validates the importance of addressing these situations in multi-turn dialogues. 
In the annotation process, human annotators need to identify these two situations and then rewrite the utterance to cover all hidden information. An example is shown in Table 1 . Annotators are required to provide the rewritten utterance 3 given the original conversation [utterance 1, 2 and 3]. To ensure the annotation quality, 10% of the annotations from each annotator are examined daily by a project manager and feedback is provided. The annotation is considered valid only when the accuracy of the examined results surpasses 95%. Apart from the accuracy examination, the project manager is also required to (1) select topics that are more likely to be talked about in daily conversations, (2) try to cover broader domains and (3) balance the proportion of different coreference and omission patterns. The whole annotation took 4 months to finish. In the end, we get 40k high-quality parallel samples. Half of them are negative samples which do not need any rewriting. The other half are positive samples where rewriting is needed. Table 3 lists the statistics. The rewritten utterance contains 10.5 tokens on average, reducing the context length by 80%.", "cite_spans": [], "ref_spans": [ { "start": 423, "end": 430, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 838, "end": 845, "text": "Table 1", "ref_id": null }, { "start": 1703, "end": 1710, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Dataset", "sec_num": "3" }, { "text": "40,000 Avg. length of original conversation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset size:", "sec_num": null }, { "text": "48.8 Avg. length of rewritten utterance:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset size:", "sec_num": null }, { "text": "10.5 ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset size:", "sec_num": null }, { "text": "We denote each training sample as (H, U n \u2192 R). H = {U 1 , U 2 , . . . 
, U n\u22121 } represents the dialogue history containing the first n \u2212 1 turns of utterances. U n is the nth utterance, the one that needs to be rewritten. R is the rewritten utterance after recovering all coreferred and omitted information in U n . R could be identical to U n if no coreference or omission is detected (negative sample). Our goal is to learn a mapping function p(R|(H, U n )) that can automatically rewrite U n based on the history information H. The process is to first encode (H, U n ) into a sequence of vectors, then decode R using the pointer network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 4.1 Problem Formalization", "sec_num": "4" }, { "text": "The next section will explain the steps in order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model 4.1 Problem Formalization", "sec_num": "4" }, { "text": "We unfold all tokens in (H, U n ) into (w 1 , w 2 , . . . , w m ). m is the number of tokens in the whole dialogue. An end-of-turn delimiter is inserted between every two turns. The unfolded sequence of tokens is then encoded with the Transformer. We concatenate all tokens in (H, U n ) as the input, in the hope that the Transformer can learn rudimentary coreference information within them by means of the self-attention mechanism. For each token w i , the input embedding is the sum of its word embedding, position embedding and turn embedding:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "4.2" }, { "text": "I(w_i) = WE(w_i) + PE(w_i) + TE(w_i)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "4.2" }, { "text": "The word embedding WE(w_i) and position embedding PE(w_i) are the same as in the standard Transformer architecture (Vaswani et al., 2017) . Figure 1 : Architecture of our proposed model. The green box is the Transformer encoder and the pink box is the decoder. The decoder computes the probability \u03bb at each step to decide whether to copy from the context or the utterance.", "cite_spans": [ { "start": 114, "end": 136, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 142, "end": 150, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Encoder", "sec_num": "4.2" }, { "text": "We add an additional turn embedding TE(w_i) to indicate which turn each token belongs to. Tokens from the same turn will share the same turn embedding. The input embeddings are then forwarded into L stacked encoders to get the final encoding representations. Each encoder contains a self-attention layer followed by a feedforward neural network:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "4.2" }, { "text": "E^{(0)} = [I(w_1), I(w_2), . . . , I(w_m)], E^{(l)} = FNN(MultiHead(E^{(l-1)}, E^{(l-1)}, E^{(l-1)}))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "4.2" }, { "text": "FNN is the feedforward neural network and MultiHead(Q, K, V) is a multi-head attention function taking a query matrix Q, a key matrix K, and a value matrix V as inputs. Each self-attention and feedforward component comes with a residual connection and a layer-normalization step, for which we refer to Vaswani et al. (2017) for more details. The final encodings are the output from the Lth encoder, E^{(L)}.", "cite_spans": [ { "start": 296, "end": 317, "text": "Vaswani et al. (2017)", "ref_id": "BIBREF30" }, { "start": 394, "end": 397, "text": "(L)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "4.2" }, { "text": "The decoder also contains L layers; each layer is composed of three sub-layers. 
The first sub-layer is a multi-head self-attention:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "M^{(l)} = MultiHead(D^{(l-1)}, D^{(l-1)}, D^{(l-1)}), D^{(0)} = R.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "The second sub-layer is encoder-decoder attention that integrates E^{(L)} into the decoder. In our task, as H and U n serve different purposes, we use separate key-value matrices for tokens coming from the dialogue history H and those coming from U n . The encoded sequence E^{(L)} obtained from the last section is split into E^{(L)}_H (encodings of tokens from H) and E^{(L)}_Un (encodings of tokens from U n ), which are then processed separately. The encoder-decoder vectors are computed as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "C(H)^{(l)} = MultiHead(M^{(l)}, E^{(L)}_H, E^{(L)}_H), C(U_n)^{(l)} = MultiHead(M^{(l)}, E^{(L)}_Un, E^{(L)}_Un)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "The third sub-layer is a position-wise fully connected feed-forward neural network:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "D^{(l)} = FNN([C(H)^{(l)} \u2022 C(U_n)^{(l)}])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "where \u2022 denotes vector concatenation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "4.3" }, { "text": "In the decoding process, we want our model to learn whether to copy words from H or U n at different steps. Therefore, we impose a soft gating weight \u03bb to make the decision. 
The decoding probability is computed by combining the attention distributions from the last decoding layer:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Output Distribution", "sec_num": "4.4" }, { "text": "p(R t =w|H, U n , R