| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T02:10:25.940027Z" |
| }, |
| "title": "Findings on Conversation Disentanglement", |
| "authors": [ |
| { |
| "first": "Rongxin", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Melbourne", |
| "location": {} |
| }, |
| "email": "rongxinz1@student.unimelb.edu.au" |
| }, |
| { |
| "first": "Han", |
| "middle": [], |
| "last": "Lau", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Melbourne", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jianzhong", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Melbourne", |
| "location": {} |
| }, |
| "email": "jianzhong.qi@unimelb.edu.au" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Conversation disentanglement, the task to identify separate threads in conversations, is an important pre-processing step in multiparty conversational NLP applications such as conversational question answering and conversation summarization. Framing it as a utterance-to-utterance classification problem-i.e. given an utterance of interest (UOI), find which past utterance it replies to-we explore a number of transformer-based models and found that BERT in combination with handcrafted features remains a strong baseline. We then build a multi-task learning model that jointly learns utterance-to-utterance and utterance-to-thread classification. Observing that the ground truth label (past utterance) is in the top candidates when our model makes an error, we experiment with using bipartite graphs as a post-processing step to learn how to best match a set of UOIs to past utterances. Experiments on the Ubuntu IRC dataset show that this approach has the potential to outperform the conventional greedy approach of simply selecting the highest probability candidate for each UOI independently, indicating a promising future research direction.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Conversation disentanglement, the task to identify separate threads in conversations, is an important pre-processing step in multiparty conversational NLP applications such as conversational question answering and conversation summarization. Framing it as a utterance-to-utterance classification problem-i.e. given an utterance of interest (UOI), find which past utterance it replies to-we explore a number of transformer-based models and found that BERT in combination with handcrafted features remains a strong baseline. We then build a multi-task learning model that jointly learns utterance-to-utterance and utterance-to-thread classification. Observing that the ground truth label (past utterance) is in the top candidates when our model makes an error, we experiment with using bipartite graphs as a post-processing step to learn how to best match a set of UOIs to past utterances. Experiments on the Ubuntu IRC dataset show that this approach has the potential to outperform the conventional greedy approach of simply selecting the highest probability candidate for each UOI independently, indicating a promising future research direction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In public forums and chatrooms such as Reddit and Internet Relay Chat (IRC), there are often multiple conversations happening at the same time. Figure 1 shows two threads of conversation (blue and green) running in parallel. Conversation disentanglement, a task to identify separate threads among intertwined messages, is an essential preprocessing step for analysing entangled conversations in multiparty conversational applications such as question answering (Li et al., 2020) and response selection . It is also useful in constructing datasets for dialogue system studies (Lowe et al., 2015) . Previous studies address the conversation disentanglement task with two steps: link prediction and clustering. In link prediction, a confidence score is computed to predict a reply-to relation from an utterance of interest (UOI) to a past utterance (Elsner and Charniak, 2008; . In clustering, conversation threads are recovered based on the predicted confidence scores between utterance pairs. The most popular clustering method uses a greedy approach to group UOIs linked with their best past utterances to create the threads (Kummerfeld et al., 2019; .", |
| "cite_spans": [ |
| { |
| "start": 461, |
| "end": 478, |
| "text": "(Li et al., 2020)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 575, |
| "end": 594, |
| "text": "(Lowe et al., 2015)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 846, |
| "end": 873, |
| "text": "(Elsner and Charniak, 2008;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1125, |
| "end": 1150, |
| "text": "(Kummerfeld et al., 2019;", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 144, |
| "end": 150, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In link prediction, the model that estimates the relevance between a pair of utterances plays an important role. To this end, we explore three transformer-based pretrained models: BERT (Devlin et al., 2019) , ALBERT (Lan et al., 2019) and POLY-ENCODER (Humeau et al., 2019) . These variants are selected by considering performance, memory consumption and speed. We found that BERT in combination with handcrafted features remains a strong baseline. Observing that utterances may be too short to contain sufficient information for disentanglement, we also build a multi-task learning model that learns to jointly link a UOI to a past utterance and a cluster of past utterances (i.e. the conversation threads).", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 206, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 216, |
| "end": 234, |
| "text": "(Lan et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 252, |
| "end": 273, |
| "text": "(Humeau et al., 2019)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For clustering, we experiment with bipartite graph matching algorithms that consider how to best link a set of UOIs to their top candidates, thereby producing globally more optimal clusters. When the graph structure is known, we show that this approach substantially outperforms conventional greedy clustering method, although challenges remain on how to infer the graph structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To summarise:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We study different transformer-based models for conversation disentanglement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We explore a multi-task conversation disentanglement framework that jointly learns utterance-to-utterance and utterance-to-thread classification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We experiment with bipartite graphs for clustering utterances and found a promising future direction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Conversation disentanglement methods can be classified into two categories: (1) two-step methods and (2) end-to-end methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In two-step methods, the first step is to measure the relations between utterance pairs, e.g., replyto relations Kummerfeld et al., 2019) or same thread relations Charniak, 2008, 2010) . Either feature-based models Charniak, 2008, 2010) or deep learning models (Kummerfeld et al., 2019; are used. Afterwards a clustering algorithm is applied to recover separate threads using results from the first step. Elsner and Charniak (2008 , 2010 , 2011 ) use a greedy graph partition algorithm to assign an utterance u to the thread of u which has the maximum relevance to u among candidates if the score is larger than a threshold. Kummerfeld et al. (2019); Zhu et al. (2020) use a greedy algorithm to recover threads following all reply-to relations independently identified for each utterance. Jiang et al. (2018) propose a graph connected component-based algorithm.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 137, |
| "text": "Kummerfeld et al., 2019)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 163, |
| "end": 184, |
| "text": "Charniak, 2008, 2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 215, |
| "end": 236, |
| "text": "Charniak, 2008, 2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 261, |
| "end": 286, |
| "text": "(Kummerfeld et al., 2019;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 405, |
| "end": 430, |
| "text": "Elsner and Charniak (2008", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 431, |
| "end": 437, |
| "text": ", 2010", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 438, |
| "end": 444, |
| "text": ", 2011", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 789, |
| "end": 808, |
| "text": "Jiang et al. (2018)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "End-to-end methods construct threads incrementally by scanning through a chat log and either append the current utterance to an existing thread or create a new thread. Tan et al. (2019) use a hierarchical LSTM model to obtain utterance representation and thread representation. Liu et al.", |
| "cite_spans": [ |
| { |
| "start": 168, |
| "end": 185, |
| "text": "Tan et al. (2019)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "U A chat log with N utterances T A set of disjoint threads in U T A thread in T ui An utterance of interest u An utterance in a chat log Ci A candidate (parent) utterance pool for ui ti The token sequence of ui with ni tokens (2020) build a transition-based model that uses three LSTMs for utterance encoding, context encoding and thread state updating, respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Symbol Meaning", |
| "sec_num": null |
| }, |
| { |
| "text": "Given a chat log U with N utterances", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notations and Task Definition", |
| "sec_num": "3" |
| }, |
| { |
| "text": "{u 1 , u 2 , \u2022 \u2022 \u2022 , u N } in chronological order, the goal of conversation disentanglement is to obtain a set of disjoint threads T = {T 1 , T 2 , \u2022 \u2022 \u2022 , T m }. Each thread T l contains a collection of topically- coherent utterances. Utterance u i contains a list of n i tokens w i 1 , w i 2 , \u2022 \u2022 \u2022 , w i n i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notations and Task Definition", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The task can be framed as a reply-to relation identification problem, where we aim to find the parent utterance for every u i \u2208 U (Kummerfeld et al., 2019; , i.e., if an utterance u i replies to a (past) utterance u j , u j is called the parent utterance of u i . When all reply-to utterance pairs are identified, T can be recovered unambiguously by following the reply-to relations.", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 155, |
| "text": "(Kummerfeld et al., 2019;", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Notations and Task Definition", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Henceforth we call the target utterance u i an utterance of interest (UOI). We use u i \u2192 u j to represent the reply-to relation from u i to u j , where u j is the parent utterance of u i . The reply-to relation is asymmetric, i.e., u i \u2192 u j and u j \u2192 u i do not hold at the same time. We use a candidate pool C i to denote the set of candidate utterances from which the parent utterance is selected from. Table 1 presents a summary of symbols/notations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 406, |
| "end": 413, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Notations and Task Definition", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We conduct experiments on the Ubuntu IRC dataset (Kummerfeld et al., 2019) , which contains questions and answers about the Ubuntu system, as well as chit-chats from multiple participants. Table 2 shows the statistics in train, validation and test sets. The four columns are the number of chat logs, the number of annotated utterances, the number of threads and the average number of parents for each utterance. Table 2 : Statistics of training, validation and testing split of the Ubuntu IRC dataset. \"Ann. Utt\" is the number of annotated utterances. \"Avg. parent\" is the average number of parents of an utterance.", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 74, |
| "text": "(Kummerfeld et al., 2019)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 412, |
| "end": 419, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We start with studying pairwise models that take as input a pair of utterances and decide whether a reply-to relation exists (Section 5.1). Then, we add dialogue history information into consideration and study a multi-task learning model (Section 5.2) built upon the pairwise models. In Section 5.3, we further investigate a globally-optimal approach based on bipartite graph matching, considering the top parent candidates of multiple UOIs together to help resolve conflicts in the utterance matches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To establish a baseline, we first study the effectiveness of pairwise models that measure the confidence of a reply-to relation between an UOI and each candidate utterance independently without considering any past context (e.g., dialogue history). To find the parent utterance for u i , we compute the relevance score r ij between u i and each u j \u2208 C i :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "r ij = f (u i , u j , v ij ), \u2200 u j \u2208 C i (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where f (\u2022) is the pairwise model and v ij represents additional information describing the relationship between u i and u j , such as manually defined features like time, user (name) mentions and word overlaps. We use transformer-based models to automatically capture more complex semantic relationships between utterances pairs, such as questionanswer relation and coreference resolution which cannot be modeled by features very well. Following Kummerfeld et al. (2019) , we assume the parent utterance of a UOI to be within k c history utterances in the chat log, and we solve a k c -way multi-class classification problem where", |
| "cite_spans": [ |
| { |
| "start": 447, |
| "end": 471, |
| "text": "Kummerfeld et al. (2019)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "C i con- tains exactly k c utterances [u i\u2212kc+1 , \u2022 \u2022 \u2022 , u i\u22121 , u i ]. UOI u i is included in C i for detecting self-links, i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "e., an utterance that starts a new thread. The train-ing loss is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "L r = \u2212 N i=1 kc j=1 1[y i = j] log p ij (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "where 1[y i = j] = 1 if u i \u2192 u j holds, and 0 otherwise; p ij is the normalized probability after applying softmax over {r ij } u j \u2208C i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We study the empirical performance of the following pairwise models. See more details of the models in Appendix 8. LASTMENTION: A baseline model that links a UOI u i to the last utterance of the user directly mentioned by u i . If u i does not contain a user mention, we link it to the immediately preceding utterance, i.e., u i\u22121 . GLOVE+MF: Following Kummerfeld et al. (2019) , this is a feedforward neural network (FFN) that uses the max and mean Glove (Pennington et al., 2014) embeddings of a pair of utterances and some handcrafted features 1 including time difference between two utterances, direct user mention, word overlaps, etc. MF: An FFN model that uses only the handcrafted features in GLOVE+MF. This model is designed to test the effectiveness of the handcrafted features. 2 BERT (Devlin et al., 2019) : A pretrained model based on transformer (Vaswani et al., 2017) finetuned on our task. We follow the standard setup for sentence pair scoring in BERT by concatenating UOI u i and a candidate u j delimited by [SEP] . BERT+MF: A BERT-based model that also incorporates the handcrafted features in GLOVE+MF. BERT+TD: A BERT-based model that uses the time difference between two utterances as the only manual feature, as preliminary experiments found that this is the most important feature. ALBERT (Lan et al., 2019) : A parameterefficient BERT variant fine-tuned on our task. POLY-ENCODER (Humeau et al., 2019) : A transformer-based model designed for fast training and inference by encoding query (UOI) and candidate separately. 3 We use POLY-ENCODER in two settings: POLY-BATCH where the labels of UOIs in a batch is used as the shared candidate pool to reduce computation overhead, and POLY-INLINE where each query has its own candidate pool similar to the other models.", |
| "cite_spans": [ |
| { |
| "start": 353, |
| "end": 377, |
| "text": "Kummerfeld et al. (2019)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 456, |
| "end": 481, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 795, |
| "end": 816, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 859, |
| "end": 881, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1026, |
| "end": 1031, |
| "text": "[SEP]", |
| "ref_id": null |
| }, |
| { |
| "start": 1313, |
| "end": 1331, |
| "text": "(Lan et al., 2019)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 1405, |
| "end": 1426, |
| "text": "(Humeau et al., 2019)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1546, |
| "end": 1547, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "5.1.1" |
| }, |
| { |
| "text": "Evaluation Metrics We measure the model performance in three aspects: (1) the link prediction metrics measure the precision, recall and F1 scores of the predicted reply-to relations; (2) the clustering metrics include variation information (VI, (Meil\u0203, 2007) ), one-to-one Overlap (1-1, (Elsner and Charniak, 2008) ) and exact match F1; these evaluate the quality of the recovered threads; 4 and (3) the ranking metrics Recall@k (k = {1, 5, 10}) assess whether the ground truth parent utterance u j is among the top-k candidates. 5", |
| "cite_spans": [ |
| { |
| "start": 245, |
| "end": 258, |
| "text": "(Meil\u0203, 2007)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 287, |
| "end": 314, |
| "text": "(Elsner and Charniak, 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "Dataset construction In training and validation, we set C i to contain exactly one parent utterance of an UOI u i . We observe that 98.5% of the UOIs in the training data reply to a parent utterance within the 50 latest utterances and so we set k c = 50 (i.e., |C i | = 50). We discard training samples that do no contain the parent utterance of an UOI under this setting (1.5% in the training data). If there are more than one parent utterances in C i (2.5% in training data), we take the latest parent utterance of u i as the target \"label\". We do not impose these requirements in testing and so do not manipulate the test data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "Model configuration We clip both UOI u i and a candidate u j to at most 60 tokens. |v ij | (manual feature dimension) = 77 in BERT+MF. In BERT+TD, |v ij | = 6. The dimensionality of word embeddings in MF is 50. All BERT-based models use the \"bert-base-uncased\" pretrained model. The batch size for POLY-INLINE, BERT, BERT+TD and BERT+MF is 64. 6 The batch sizes of POLY-BATCH and ALBERT are 96 and 256 respectively. We tune the batch size, the number of layers, and the hidden size in BERT+MF and BERT+TD according to recall@1 on the validation set.", |
| "cite_spans": [ |
| { |
| "start": 344, |
| "end": 345, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "negative examples to create the candidates, while we use kc past utterances as candidates, which makes the next utterance selection task arguably an easier task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "4 Exact Match F1 is calculated based on the number of recovered threads that perfectly match the ground truth ones (ignoring the ground truth threads with only one utterance).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "5 E.g., if uj is in the top-5 candidates, recall@5 = 1. 6 Actual batch size is 4 with a gradient accumulation of 16.", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 57, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "Results and discussions Table 3 shows that LASTMENTION is worse than all other models, indicating that direct user mentions are not sufficient for disentanglement. The manual features model (MF) has very strong results, outperforming transformer-based models (BERT, ALBERT and POLY-ENCODER) by a large margin, suggesting that the manual features are very effective. The overall best model across all metrics is BERT+MF. Comparing BERT+MF to BERT, we see a large improvement when we incorporate the manual features. Interestingly though, most of the improvement appears to come from the time difference feature (BERT+MF vs. BERT+TD).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 31, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "Looking at BERT and POLY-INLINE, we see that the attention between words in BERT is helpful to capture the semantics between utterance pairs better, because the only difference between them is that POLY-INLINE encodes two utterances separately first and uses additional attention layers to compute the final relevance score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "The performance gap between POLY-BATCH and POLY-INLINE shows that the batch mode (Humeau et al., 2019) strategy has a negative impact on the prediction accuracy. This is attributed to the difference in terms of training and testing behaviour, as at test time we predict links similar to the inline mode (using past k c utterances as candidates).", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 102, |
| "text": "(Humeau et al., 2019)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "The GPU memory consumption and speed of transformer-based models are shown in Table 4 . POLY-BATCH is the most memory efficient and fastest model, suggesting that it is a competitive model in real-world applications where speed and efficiency is paramount.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 78, |
| "end": 85, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.1.2" |
| }, |
| { |
| "text": "The inherent limitation of the pairwise models is that they ignore the dialogue history of a candidate utterance. Intuitively, if the prior utterances from the same thread of candidate utterance u j is known, it will provide more context when computing the relevance scores. However, the threads of candidate utterances have to be inferred, which could be noisy. Furthermore, the high GPU memory consumption of transformer-based models renders using a long dialogue history impractical.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context Expansion by Thread Classification", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "To address the issues above, we propose a multitask learning framework that (1) considers the dialogue history in a memory efficient manner and (2) does not introduce noise at test time. Specifically, we maintain a candidate thread pool with k t threads. A thread that contains multiple candidates would only be included once. This alleviates some of the memory burden, not to mention that k t is much smaller than |C i |. For the second issue, we train a shared BERT model that does replyto relation identification and thread classification jointly, and during training we use the ground truth threads but at test time we only perform reply-to relation identification, avoiding the use of potentially noisy (predicted) threads.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Context Expansion by Thread Classification", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The model consists of a shared BERT module and separate linear layers for reply-to relation identification and thread classification. As shown in Figure 2, given u i , we compute its relevance score s r ij to every candidate utterances in utterance candidate pool C i and relevance score s t il to every thread in thread candidate pool T c i . We aim to minimize the following loss function during model training:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 155, |
| "text": "Figure 2,", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "L = \u2212 N i=1 kc j=1 1(y r = j) log s r ij +\u03b1 N i=1 kt l=1 1(y t = l) log s t il (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "where 1(y r = j) is 1 if u j is the parent utterance of u i , and 0 otherwise. Similarly, 1(y t = l) tests whether u i belongs to thread T c i . Hyper-parameter \u03b1 is used to balance the importance of the two loss components.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "Relevance score computation We compute the utterance relevance score s r ij between UOI u i and each candidate utterance u j \u2208 C i in the same way as the BERT model shown in Section 5.1 does.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "For thread classification, we consider a pool containing k t threads before u i , including a special thread {u i } for the case where u i starts a new thread. The score s t il between u i and thread T l is computed using the shared BERT, following the format used by Ghosal et al. (2020) :", |
| "cite_spans": [ |
| { |
| "start": 268, |
| "end": 288, |
| "text": "Ghosal et al. (2020)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "[CLS], w 1 1 , \u2022 \u2022 \u2022 w 1 n 1 , w 2 1 , \u2022 \u2022 \u2022 w 2 n 2 , w k 1 \u2022 \u2022 \u2022 w k n k , [SEP], w i 1 , \u2022 \u2022 \u2022 w i n i [SEP]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "where w p q is the q-th token of the p-th utterance in T l , and w i m is the m-th token of u i . We take the embedding of [CLS] and use another linear layer to compute the final score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model Architecture", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "For reply-to relation identification, we use the same configuration described in Section 5.1.2. For thread classification, we consider k t = 10 thread candidates. Each thread is represented by (at most) five latest utterances. The maximum number of tokens in T l and t i are 360 and 60, respectively. We train the model using Adamax optimizer with learning rate 5 \u00d7 10 \u22125 and batch size 64. As before we use \"bert-base-uncased\" as the pretrained model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "As Table 5 shows, incorporating an additional thread classification loss (\"MULTI (\u03b1 = k)\" models) improves link prediction substantially compared to BERT, showing that the thread classification objective provides complementary information", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 10, |
| "text": "Table 5", |
| "ref_id": "TABREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "reply-to relation score thread classification score Figure 2 : The architecture of the multi-task learning framework. On the left side, we use a BERT model with additional dense layers to calculate the relevance score between a UOI and each candidate utterance for reply-to relation identification. On the right side, we use the same BERT model but different dense layers on the top to calculate the relevance scores between the UOI and each candidate thread for thread classification. to the reply-to relation identification task. Interestingly, when \u03b1 increases from 5 to 10, both the link prediction and ranking metrics drop, suggesting that it is important not to over-emphasize thread classification, since it is not used at test time. Adding thread classification when we have manual features (MULTI+MF vs. BERT+MF), however, does not seem to help, further reinforcing the effectiveness of these features in the dataset. That said, in situations/datasets where these manual features are not available, e.g. Movie Dialogue Dataset ), our multi-task learning framework could be useful.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 52, |
| "end": 60, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u2022 \u2022", |
| "sec_num": null |
| }, |
| { |
| "text": "After we have obtained the pairwise utterance relevance scores for every UOI, we need to link the candidate utterances with the UOIs to recover the threads. A greedy approach would use all reply-to relations that have been identified independently for each UOI to create the threads. As shown in Figure 3, the reply-to relations for u 67 and u 59 using greedy approach are {u 67 \u2192 u 58 , u 59 \u2192 u 58 }.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 296, |
| "end": 302, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bipartite Graph Matching for Conversation Disentanglement", |
| "sec_num": "5.3" |
| }, |
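The greedy approach described above can be sketched in a few lines; the dictionary-based interface here is our own illustrative choice, not the paper's implementation.

```python
# Sketch (assumed interface): greedy decoding of reply-to links. Each UOI
# independently links to its highest-scoring candidate, ignoring how many
# replies a candidate has already received.

def greedy_links(scores):
    """scores: {uoi_id: {candidate_id: relevance_score}}.
    Returns {uoi_id: parent_id}, chosen independently per UOI."""
    return {uoi: max(cands, key=cands.get) for uoi, cands in scores.items()}

# Toy version of Figure 3: u58 narrowly beats u54 for u67, so both u67
# and u59 greedily attach to u58 (the scores are made up).
scores = {
    67: {58: 0.51, 54: 0.49, 60: 0.10},
    59: {58: 0.70, 54: 0.20, 60: 0.10},
}
print(greedy_links(scores))  # {67: 58, 59: 58}
```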
| { |
| "text": "With such an approach, we observe that: (1) some candidates receive more responses than they should (based on ground truth labels); and (2) many UOIs choose the same candidate. Given the fact that over 95% of the UOIs' parents are within", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bipartite Graph Matching for Conversation Disentanglement", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Future Directions U67 [20:32] <Bashing-om> groob: There can be only one boot control authority per hard drive . What I do is disable 30_os-prober in the seconday system,' sudo update-grub' amd in the primary also 'sudo aptupdate-grub' to propogate the changes to the system(s) . U60 [20:30] <groob> corba: Oh, cool. Good to know! Thanks! U59 [20:29] <corba> groob, you could change that with grub-customizer easily U58 [20:29] <groob> corba: Err, I mean the order of the operating systems. OS1 appears first currently, but if OS2 ran update-grub, it would cause OS2 to be listed first. U54 [20:28] <groob> corba: Thanks for the help! I'll try using a dedicated boot partition instead. The biggest downside that I can think of is that the partition order in the menu will change depending on which OS ran update-grub last. If that could be solved, then it would be perfect. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rongxin Zhu", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 3: An example showing the difference between the greedy approach and the global decoding. Consider identifying the parent utterances of u 59 and u 67 . Each utterance contains ID (e.g., u 51 ), timestamp, user name and content. Both u 59 and u 67 have three candidates. The pairwise scores are labelled to the links, indicating the confidence of potential reply-to relations. The red link denotes the identified reply-to relation for u 67 using the greedy approach, and the green link is the result of a global decoding algorithm.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "10.2", |
| "sec_num": null |
| }, |
| { |
| "text": "the top-5 candidates in BERT+MF (R@5 in Table 3), we explore whether it is possible to get better matches if we constrain the maximum number of reply links each candidate receives and perform the linking of UOIs to their parent utterances together. In situations where a UOI u i 's top-1 candidate utterance u j has a relevant score that is just marginally higher than other candidates but u j is a strong candidate utterance for other UOIs, we may want to link u j with the other UOIs instead of u i . Using Figure 3 as example, if u 58 can only receive one response, then u 67 should link to the second best candidate u 54 as its parent instead of u 58 . Based on this intuition, we explore using bipartite algorithms that treat the identification of all reply-to relations within a chat log as a maximumweight matching (Gerards, 1995) problem on a bipartite graph. Note that this step is a postprocessing step that can be applied to technically any pairwise utterance scoring models.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 509, |
| "end": 517, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "10.2", |
| "sec_num": null |
| }, |
| { |
| "text": "Given a chat log U , we build a bipartite graph G = V, E, W where V is the set of nodes, E is the set of edges, and W is the set of edge weights. Set V consists of two subsets V l and V r representing two disjoint subsets of nodes of a bipartite. Subset V l = Figure 4 : The left figure is an example bipartite graph built from a chat log with 5 UOIs. Each UOI u i has k c = 3 candidates {u i\u22122 , u i\u22121 , u i }, except the first k c \u2212 1 UOIs (u 1 and u 2 ). Utterances u 1 and u 3 are duplicated twice because they receive 2 replies. The corresponding disentangled chat log is shown on the right figure with the following reply-to relations:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 260, |
| "end": 268, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "{u 1 \u2192 u 1 , u 2 \u2192 u 1 , u 3 \u2192 u 2 , u 4 \u2192 u 3 , u 5 \u2192 u 3 }. {v l i } N i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "represents the set of UOIs, i.e., each node v l i corresponds to a UOI u i . Subset V r represents the set of candidate utterances. Note that some UOIs may be candidate utterances of other UOIs. Such an utterance will have both a node in V l and a node in V r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "Some utterances may receive more than one reply, i.e., multiple nodes in V l may link to the same node in V r . This violates the standard assumption of a bipartite matching problem, where every node in V r will only be matched with at most one node in V l . To address this issue, we duplicate nodes in V r . Let \u03b4(u j ) denotes the number of replies u j receives, then u j is represented by \u03b4(u j ) nodes in", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "V r . Now V r = N j=1 S(u j ), where S(u j ) is a set of duplicated nodes {v r j,1 , v r j,2 , \u2022 \u2022 \u2022 v r j,\u03b4(u j )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "} for u j . Sets E and W are constructed based on the pairwise relevance scores obtained from the link prediction phase. Specifically, E = N i=1 R(u i ) where R(u i ) is the set of edges between u i and all its k c candidates:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "kc m=1 { v l i , v m } vm\u2208S(um) . For each UOI-candidate pair (u i , u j ), if \u03b4(u j ) > 0, a set of edges { v l i , v r j,k } \u03b4(u j )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
| { |
| "text": "k=1 are constructed, each with weight w(i, j), which is the relevance score between u i and u j . An example bipartite graph is shown on the left side of Figure 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 154, |
| "end": 162, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Graph Construction", |
| "sec_num": "5.3.1" |
| }, |
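The node-duplication step above can be sketched as follows; the helper name `duplicate_candidates` and its interface are ours, not the paper's code.

```python
# Sketch of the node-duplication step: each candidate u_j is expanded into
# delta(u_j) slot nodes in V_r, so that a standard one-to-one bipartite
# matching can assign it multiple replies.

def duplicate_candidates(delta):
    """delta: {candidate_id: number of replies it may receive}.
    Returns a list of (candidate_id, copy_index) slot nodes."""
    return [(j, k) for j, d in delta.items() for k in range(d)]

# Toy version of Figure 4: u1 and u3 each receive 2 replies, u2 one,
# so u1 and u3 each appear twice among the slot nodes.
slots = duplicate_candidates({1: 2, 2: 1, 3: 2})
print(slots)  # [(1, 0), (1, 1), (2, 0), (3, 0), (3, 1)]
```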
| { |
| "text": "Given the bipartite formulation above, we solve the conversation disentanglement problem as a maximum-weight bipartite matching problem, which is formulated as the following constrained optimization problem:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integer Programming Formulation", |
| "sec_num": "5.3.2" |
| }, |
| { |
| "text": "max v i ,v j \u2208E x(i, j) \u2022 w(i, j) s.t. v l \u2208neighbors(v i ) x(i, l) = 1, \u2200v i \u2208 V l vp\u2208neighbors(v j ) x(p, j) \u2264 1, \u2200v j \u2208 V r x(i, j) \u2208 {0, 1}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integer Programming Formulation", |
| "sec_num": "5.3.2" |
| }, |
| { |
| "text": "(4) Here, neighbors(v x ) is the set of adjacent nodes of v x (i.e., nodes directly connected to v x ) in G. For each edge in G, we have a variable x(i, j), which takes value 1 if we include the edge v i , v j in the final matched bipartite, and 0 otherwise. Intuitively, we are choosing a subsect of E to maximize the total weight of the chosen edges, given the constraints that (1) each node in set V l is connected to exactly one edge (each UOI has exactly one parent); and (2) each node in V r is connected to at most one edge.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Integer Programming Formulation", |
| "sec_num": "5.3.2" |
| }, |
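For intuition, the objective in Equation 4 can be checked by brute force on a tiny instance; a real implementation would call an ILP solver (the paper uses pywraplp), since exhaustive enumeration is exponential. All names and scores below are illustrative.

```python
# Brute-force illustration of the maximum-weight matching objective in
# Equation (4): enumerate every feasible assignment of UOIs to candidate
# slots (candidates already duplicated) and keep the highest total weight.
# Only viable for tiny graphs; shown purely for intuition.
from itertools import permutations

def best_matching(uois, slots, weight):
    """uois: UOI nodes; slots: candidate slot nodes; weight(u, s): edge
    weight, or None if the edge does not exist.
    Returns (total_weight, {uoi: slot})."""
    best = (float("-inf"), None)
    for perm in permutations(slots, len(uois)):
        ws = [weight(u, s) for u, s in zip(uois, perm)]
        if any(w is None for w in ws):  # infeasible: uses a missing edge
            continue
        total = sum(ws)
        if total > best[0]:
            best = (total, dict(zip(uois, perm)))
    return best

# Figure 3 toy case (made-up scores): u58 may receive only one reply, so
# global decoding sends u67 to its second-best candidate u54.
w = {(67, 58): 0.51, (67, 54): 0.49, (59, 58): 0.70}
total, links = best_matching([67, 59], [58, 54], lambda u, s: w.get((u, s)))
print(links)  # {67: 54, 59: 58}
```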
| { |
| "text": "Since the number of replies received by an utterance u j , i.e., \u03b4(u j ), is unknown at test time, we estimate \u03b4(u j ) for each candidate utterance u j . We experiment with two different estimation strategies: heuristics method and regression model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Frequency Estimation in V r", |
| "sec_num": "5.3.3" |
| }, |
| { |
| "text": "In the heuristics method, we estimate \u03b4(u j ) based on the total relevance scores accumulated by u j from all UOIs, using the following equation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Frequency Estimation in V r", |
| "sec_num": "5.3.3" |
| }, |
| { |
| "text": "r ij = exp(r ij ) u k \u2208C i exp(r ik ) S j = i r ij \u03b4 (u j ) = RND(\u03b1S j + \u03b2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Frequency Estimation in V r", |
| "sec_num": "5.3.3" |
| }, |
| { |
| "text": "where\u03b4(u j ) is the estimation, RND is the round(\u2022) function, and \u03b1 and \u03b2 are scaling parameters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Node Frequency Estimation in V r", |
| "sec_num": "5.3.3" |
| }, |
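A minimal sketch of the heuristic estimator, following the softmax normalization and rounding described above; the function name and dictionary interface are ours, and \u03b1 = 1.3, \u03b2 = 0.2 are the values the appendix reports as best on validation.

```python
# Sketch of the heuristic estimator: softmax-normalize each UOI's candidate
# scores, accumulate the mass S_j each candidate receives, then round
# alpha * S_j + beta to an integer reply count.
import math

def estimate_delta(scores, alpha=1.3, beta=0.2):
    """scores: {uoi_id: {candidate_id: raw relevance score}}.
    Returns {candidate_id: estimated number of replies}."""
    S = {}
    for cands in scores.values():
        z = sum(math.exp(r) for r in cands.values())  # softmax denominator
        for j, r in cands.items():
            S[j] = S.get(j, 0.0) + math.exp(r) / z
    return {j: round(alpha * s + beta) for j, s in S.items()}

# Toy example (made-up scores): u58 attracts most of the mass from both
# UOIs, so it is estimated to receive two replies.
scores = {67: {58: 2.0, 54: 1.9}, 59: {58: 2.0, 54: 0.5}}
print(estimate_delta(scores))  # {58: 2, 54: 1}
```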
| { |
| "text": "In the regression model, we train an FFN to predict \u03b4(u j ) using mean squared error as the training loss. The features are normalized scores of u j from all UOIs, as well as the sum of those scores. We also include textual features using BERT (based on the [CLS] vector), denoted as BERT+FFN. We use the same RND function to obtain an integer from the prediction of the regression models. Table 6 : Link prediction results using bipartite matching. Oracle is a model that uses ground truth node frequencies for V r .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 390, |
| "end": 397, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Node Frequency Estimation in V r", |
| "sec_num": "5.3.3" |
| }, |
| { |
| "text": "We obtain the performance upper bound by solving the maximum weight bipartite matching problem using the ground truth node frequencies for all nodes in V r . This approach is denoted as \"Oracle\" in Table 6 . We found that when node frequencies are known, bipartite matching significantly outperforms the best greedy methods (F1 score 86.8 vs. 72.6 of BERT+MF in Table 3 ). When using estimated node frequencies, the heuristics method and FFN achieve very similar results, and BERT+FFN is worse than both. Unfortunately, these results are all far from Oracle, and they are ultimately marginally worse than BERT+MF (72.6; Table 3 ). Overall, our results suggest that there is much potential of using bipartite matching for creating the threads, but that there is still work to be done to design a more effective method for estimating the node frequencies.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 198, |
| "end": 205, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 362, |
| "end": 369, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 620, |
| "end": 627, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Discussion", |
| "sec_num": "5.3.4" |
| }, |
| { |
| "text": "In this paper, we frame conversation disentanglement as a task to identify the past utterance(s) that each utterance of interest (UOI) replies to, and conduct various experiments to explore the task. We first experiment with transformer-based models, and found that BERT combined with manual features is still a strong baseline. Next we propose a multi-task learning model to incorporate dialogue history into BERT, and show that the method is effective especially when manual features are not available. Based on the observation that most utterances' parents are in the top-ranked candidates when there are errors, we experiment with bipartite graph matching that matches a set of UOIs and candidates together to produce globally more optimal clusters. The algorithm has the potential to outperform standard greedy approach, indicating a promising future research direction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "See a full feature list inKummerfeld et al. (2019). 2 Note that MF is different from the manual features model inKummerfeld et al. (2019) which uses a linear model.3 It is worthwhile to note that POLY-ENCODER showed strong performance on a related task, next utterance selection, which aims to choose the correct future utterance, but with two key differences: (1) their UOI incorporates the dialogue history which provides more context; (2) they randomly sample", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers for their helpful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": "7" |
| }, |
| { |
| "text": "(5) Here, concat(t i , t j ) means to concatenate the two sub-word sequences t i and t j corresponding to u i and u j into a single sequenceis a special beginning token and [SEP] is a separation token. Denote the number of tokens in this sequence by m. Then, e k \u2208 R d BERT is the encoded embedding of the k-th (k \u2264 m) token in t ij . Following (Devlin et al., 2019) , we use the encoded embedding of [CLS] as the aggregated representation of u i and u j . Another linear layer is applied to obtain score r ij \u2208 R using learnable parameters W \u2208 R 1\u00d7d BERT and b \u2208 R.", |
| "cite_spans": [ |
| { |
| "start": 345, |
| "end": 366, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 401, |
| "end": 406, |
| "text": "[CLS]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "annex", |
| "sec_num": null |
| }, |
| { |
| "text": "We obtain the encoded embedding of [CLS] in the same way as BERT, denoted as e. Then, we compute the pairwise relevance score r ij as follows:where W e \u2208 R d mid \u00d7d BERT and b e \u2208 R d mid are parameters of a linear layer to reduce the dimensionality of the BERT output; [h; v ij ] is the concatenation of h and the pairwise vector of hand-parameters of two dense layers with the softsign activation function; sum(x) represents the sum of values in vector x. In BERT+TD. The time difference feature between u i and u j is a 6-d vector:where n = (i \u2212 j)/100 representing the relative distance between two utterances in the candidate pool; x 1 , \u2022 \u2022 \u2022 , x 5 are binary values indicating whether the time difference in minutes between u i and u j lies in the ranges of [\u22121, 0), [0, 1), [1, 5), [5, 60) and (60, \u221e) respectively.", |
| "cite_spans": [ |
| { |
| "start": 35, |
| "end": 40, |
| "text": "[CLS]", |
| "ref_id": null |
| }, |
| { |
| "start": 790, |
| "end": 793, |
| "text": "[5,", |
| "ref_id": null |
| }, |
| { |
| "start": 794, |
| "end": 797, |
| "text": "60)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BERT+MF", |
| "sec_num": "8.2" |
| }, |
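The BERT+TD time-difference feature described above can be sketched as follows; the helper name is ours, and treating the last bucket as [60, \u221e) (closed at 60) is our assumption about the boundary.

```python
# Sketch of the BERT+TD 6-d time-difference feature: one real-valued
# relative-distance component n = (i - j) / 100, followed by five binary
# bucket indicators over the time difference in minutes.

def time_diff_feature(i, j, minutes):
    """i, j: utterance indices; minutes: time difference in minutes."""
    n = (i - j) / 100  # relative distance within the candidate pool
    # Bucket boundaries from the paper; [60, inf) closure is our assumption.
    buckets = [(-1, 0), (0, 1), (1, 5), (5, 60), (60, float("inf"))]
    x = [1 if lo <= minutes < hi else 0 for lo, hi in buckets]
    return [n] + x

# A pair 9 utterances apart with a 3-minute gap falls in the [1, 5) bucket.
print(time_diff_feature(67, 58, 3.0))  # [0.09, 0, 0, 1, 0, 0]
```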
| { |
| "text": "Model architecture and training We choose the best hyper-parameters according to the ranking performance Recall@1 on validation set. All models are evaluated every 0.2 epoch. We stop training if Recall@1 on validation set does not improve in three evaluations consecutively.The final settings are as follows. In MF, we use a 2-layer FFN with softsign activation function. Both layers contain 512 hidden units. We train it using Adam optimizer with learning rate 0.001. For all transformer-based models (BERT, BERT+MF, ALBERT and POLY-ENCODER), we use Adamax optimizer with learning rate 5 \u00d7 10 \u22125 , updating all parameters in training. We use automatic mixed precision to reduce GPU memory consumption provided by Pytorch 7 . All experiments are implemented in Parlai 8 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Models Settings", |
| "sec_num": "8.3" |
| }, |
| { |
| "text": "Setup Both node frequency estimation and graph construction are based on the relevance scores from BERT+MF. In the rule-based method, we choose \u03b1 in {0.9, 1, 1, 1.3, 1.5, 1.7, 1.9} and \u03b2 in {0.1, 0.2, 0.3, 0.4, 0.5}. The optimal values \u03b1 = 1.3 and \u03b2 = 0.2 yield the best link prediction F1 on the validation set. The regression mode is a 2-layer fully connected neural network. Both layers contain 128 hidden units, with the ReLU activation function. We choose hidden layer size from {64, 128, 256} and the number of layers from {2, 3}. We train the model using Adam optimizer with batch size 64. Hyper-parameters are chosen to minimize mean squared error on the validation set. The integer programming problem is solved using pywraplp 9 . We observe that sometimes the integer programming problem is infeasible due to underestimation of the frequencies of some nodes. We relax Equation 4 in experiments as follows to 7 https://pytorch.org/ 8 https://parl.ai/ 9 https://google.github.io/or-tools/ python/ortools/linear_solver/pywraplp. html avoid infeasibility:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BGMCD Set Up", |
| "sec_num": "8.4" |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/N19-1423" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "You talking to me? a corpus and algorithm for conversation disentanglement", |
| "authors": [ |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL-08: HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "834--842", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Micha Elsner and Eugene Charniak. 2008. You talk- ing to me? a corpus and algorithm for conversation disentanglement. In Proceedings of ACL-08: HLT, pages 834-842, Columbus, Ohio. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Disentangling chat", |
| "authors": [ |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Computational Linguistics", |
| "volume": "36", |
| "issue": "3", |
| "pages": "389--409", |
| "other_ids": { |
| "DOI": [ |
| "10.1162/coli_a_00003" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Micha Elsner and Eugene Charniak. 2010. Disentan- gling chat. Computational Linguistics, 36(3):389- 409.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Disentangling chat with local coherence models", |
| "authors": [ |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Elsner", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "1179--1189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Micha Elsner and Eugene Charniak. 2011. Disentan- gling chat with local coherence models. In Proceed- ings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1179-1189, Portland, Oregon, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Matching. Handbooks in operations research and management science", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Amh Gerards", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "", |
| "volume": "7", |
| "issue": "", |
| "pages": "135--224", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "AMH Gerards. 1995. Matching. Handbooks in opera- tions research and management science, 7:135-224.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Utterance-level dialogue understanding: An empirical study", |
| "authors": [ |
| { |
| "first": "Deepanway", |
| "middle": [], |
| "last": "Ghosal", |
| "suffix": "" |
| }, |
| { |
| "first": "Navonil", |
| "middle": [], |
| "last": "Majumder", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2009.13902" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deepanway Ghosal, Navonil Majumder, Rada Mihal- cea, and Soujanya Poria. 2020. Utterance-level di- alogue understanding: An empirical study. arXiv preprint arXiv:2009.13902.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring", |
| "authors": [ |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Humeau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Shuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Marie-Anne", |
| "middle": [], |
| "last": "Lachaux", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Architec- tures and pre-training strategies for fast and accurate multi-sentence scoring. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Multi-turn response selection using dialogue dependency relations", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Yizhu", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Siyu", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenny", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Haifeng", |
| "middle": [], |
| "last": "Tang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1911--1920", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qi Jia, Yizhu Liu, Siyu Ren, Kenny Zhu, and Haifeng Tang. 2020. Multi-turn response selection using di- alogue dependency relations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1911-1920.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Learning to disentangle interleaved conversational threads with a siamese hierarchical network and similarity ranking", |
| "authors": [ |
| { |
| "first": "Jyun-Yu", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Francine", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yan-Ying", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1812--1822", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jyun-Yu Jiang, Francine Chen, Yan-Ying Chen, and Wei Wang. 2018. Learning to disentangle inter- leaved conversational threads with a siamese hierar- chical network and similarity ranking. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1812-1822.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A large-scale corpus for conversation disentanglement", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [ |
| "K" |
| ], |
| "last": "Kummerfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Sai", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "J" |
| ], |
| "last": "Gouravajhala", |
| "suffix": "" |
| }, |
| { |
| "first": "Vignesh", |
| "middle": [], |
| "last": "Peper", |
| "suffix": "" |
| }, |
| { |
| "first": "Chulaka", |
| "middle": [], |
| "last": "Athreya", |
| "suffix": "" |
| }, |
| { |
| "first": "Jatin", |
| "middle": [], |
| "last": "Gunasekara", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ganhotra", |
| "suffix": "" |
| }, |
| { |
| "first": "Sankalp", |
| "middle": [], |
| "last": "Siva", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Lazaros", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [], |
| "last": "Polymenakos", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lasecki", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "3846--3856", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/P19-1374" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph J. Peper, Vignesh Athreya, Chulaka Gu- nasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C Polymenakos, and Walter Lasecki. 2019. A large-scale corpus for conversation disentangle- ment. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3846-3856, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Albert: A lite bert for self-supervised learning of language representations", |
| "authors": [ |
| { |
| "first": "Zhenzhong", |
| "middle": [], |
| "last": "Lan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mingda", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Goodman", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "Piyush", |
| "middle": [], |
| "last": "Sharma", |
| "suffix": "" |
| }, |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Soricut", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with discourse structure", |
| "authors": [ |
| { |
| "first": "Jiaqi", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Min-Yen", |
| "middle": [], |
| "last": "Kan", |
| "suffix": "" |
| }, |
| { |
| "first": "Zihao", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zekun", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenqiang", |
| "middle": [], |
| "last": "Lei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 28th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2642--2652", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/2020.coling-main.238" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiaqi Li, Ming Liu, Min-Yen Kan, Zihao Zheng, Zekun Wang, Wenqiang Lei, Ting Liu, and Bing Qin. 2020. Molweni: A challenge multiparty dialogues-based machine reading comprehension dataset with dis- course structure. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 2642-2652, Barcelona, Spain (Online). Inter- national Committee on Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "End-to-end transition-based online dialogue disentanglement", |
| "authors": [ |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhan", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Jia-Chen", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Quan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Si", |
| "middle": [], |
| "last": "Wei", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "IJCAI", |
| "volume": "20", |
| "issue": "", |
| "pages": "3868--3874", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hui Liu, Zhan Shi, Jia-Chen Gu, Quan Liu, Si Wei, and Xiaodan Zhu. 2020. End-to-end transition-based on- line dialogue disentanglement. In IJCAI, volume 20, pages 3868-3874.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems", |
| "authors": [ |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Lowe", |
| "suffix": "" |
| }, |
| { |
| "first": "Nissan", |
| "middle": [], |
| "last": "Pow", |
| "suffix": "" |
| }, |
| { |
| "first": "Iulian", |
| "middle": [], |
| "last": "Vlad Serban", |
| "suffix": "" |
| }, |
| { |
| "first": "Joelle", |
| "middle": [], |
| "last": "Pineau", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue", |
| "volume": "", |
| "issue": "", |
| "pages": "285--294", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ryan Lowe, Nissan Pow, Iulian Vlad Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th An- nual Meeting of the Special Interest Group on Dis- course and Dialogue, pages 285-294.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Comparing clusterings-an information based distance", |
| "authors": [ |
| { |
| "first": "Marina", |
| "middle": [], |
| "last": "Meil\u0203", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "98", |
| "issue": "", |
| "pages": "873--895", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marina Meil\u0203. 2007. Comparing clusterings-an infor- mation based distance. Journal of multivariate anal- ysis, 98(5):873-895.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Glove: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Context-aware conversation thread detection in multi-party chat", |
| "authors": [ |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dakuo", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yupeng", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Haoyu", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Saloni", |
| "middle": [], |
| "last": "Potdar", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoxiao", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Shiyu", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Mo", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "6456--6461", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ming Tan, Dakuo Wang, Yupeng Gao, Haoyu Wang, Saloni Potdar, Xiaoxiao Guo, Shiyu Chang, and Mo Yu. 2019. Context-aware conversation thread detection in multi-party chat. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6456-6461.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in neural information processing systems", |
| "volume": "", |
| "issue": "", |
| "pages": "5998--6008", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Ramesh Nallapati, and Bing Xiang. 2020. Who did they respond to? conversation structure modeling using masked hierarchical transformer", |
| "authors": [ |
| { |
| "first": "Henghui", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Feng", |
| "middle": [], |
| "last": "Nan", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiguo", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Proceedings of the AAAI Conference on Artificial Intelligence", |
| "volume": "34", |
| "issue": "", |
| "pages": "9741--9748", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Henghui Zhu, Feng Nan, Zhiguo Wang, Ramesh Nal- lapati, and Bing Xiang. 2020. Who did they re- spond to? conversation structure modeling using masked hierarchical transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 34, pages 9741-9748.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Ubuntu IRC chat log sample sorted by time. Each arrow represents a directed reply-to relation. The two conversation threads are shown in blue and green.", |
| "uris": null |
| }, |
| "TABREF0": { |
| "text": "[12:05] <ydnar> for what reason would a dvd not play if i have libdvdcss2 installed? [12:05] <Ng> ydnar: what are you using to play it? [12:06] <holycow> because it couldn't crack the encoding for the particual portion of the dvde [12:06] <ydnar> tried vlc. holycow, do you have any [12:05] <gourdin> we will we be able to access an edgy repo ? [12:06] <Anfangs> Edgy Eft is the next codename for Ubuntu dapper+1. See https://ubuntu.com/0064.html. [12:06] <gourdin> I don't think the link works", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF1": { |
| "text": "", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF4": { |
| "text": "Results of pairwise models. Ranking metrics are not applicable to Last Mention. Best scores are bold.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Model</td><td colspan=\"2\">GPU Mem (GB) Speed (ins/s)</td></tr><tr><td>BERT</td><td>18.7</td><td>9.4</td></tr><tr><td>ALBERT</td><td>14.6</td><td>9.4</td></tr><tr><td>POLY-INLINE</td><td>9.9</td><td>16.8</td></tr><tr><td>POLY-BATCH</td><td>5.1</td><td>36.4</td></tr></table>", |
| "html": null |
| }, |
| "TABREF5": { |
| "text": "", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>", |
| "html": null |
| }, |
| "TABREF7": { |
| "text": "Results of multi-task learning model.", |
| "type_str": "table", |
| "num": null, |
| "content": "<table/>", |
| "html": null |
| } |
| } |
| } |
| } |