ACL-OCL / Base_JSON /prefixJ /json /J18 /J18-4012.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "J18-4012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:20:05.160803Z"
},
"title": "Modeling Speech Acts in Asynchronous Conversations: A Neural-CRF Approach",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University School of Computer Science and Engineering",
"location": {}
},
"email": "srjoty@ntu.edu.sg"
},
{
"first": "Tasnim",
"middle": [],
"last": "Mohiuddin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Participants in an asynchronous conversation (e.g., forum, e-mail) interact with each other at different times, performing certain communicative acts, called speech acts (e.g., question, request). In this article, we propose a hybrid approach to speech act recognition in asynchronous conversations. Our approach works in two main steps: a long short-term memory recurrent neural network (LSTM-RNN) first encodes each sentence separately into a task-specific distributed representation, and this is then used in a conditional random field (CRF) model to capture the conversational dependencies between sentences. The LSTM-RNN model uses pretrained word embeddings learned from a large conversational corpus and is trained to classify sentences into speech act types. The CRF model can consider arbitrary graph structures to model conversational dependencies in an asynchronous conversation. In addition, to mitigate the problem of limited annotated data in the asynchronous domains, we adapt the LSTM-RNN model to learn from synchronous conversations (e.g., meetings), using domain adversarial training of neural networks. Empirical evaluation shows the effectiveness of our approach over existing ones: (i) LSTM-RNNs provide better task-specific representations, (ii) conversational word embeddings benefit the LSTM-RNNs more than the off-the-shelf ones, (iii) adversarial training gives better domain-invariant representations, and (iv) the global CRF model improves over local models.",
"pdf_parse": {
"paper_id": "J18-4012",
"_pdf_hash": "",
"abstract": [
{
"text": "Participants in an asynchronous conversation (e.g., forum, e-mail) interact with each other at different times, performing certain communicative acts, called speech acts (e.g., question, request). In this article, we propose a hybrid approach to speech act recognition in asynchronous conversations. Our approach works in two main steps: a long short-term memory recurrent neural network (LSTM-RNN) first encodes each sentence separately into a task-specific distributed representation, and this is then used in a conditional random field (CRF) model to capture the conversational dependencies between sentences. The LSTM-RNN model uses pretrained word embeddings learned from a large conversational corpus and is trained to classify sentences into speech act types. The CRF model can consider arbitrary graph structures to model conversational dependencies in an asynchronous conversation. In addition, to mitigate the problem of limited annotated data in the asynchronous domains, we adapt the LSTM-RNN model to learn from synchronous conversations (e.g., meetings), using domain adversarial training of neural networks. Empirical evaluation shows the effectiveness of our approach over existing ones: (i) LSTM-RNNs provide better task-specific representations, (ii) conversational word embeddings benefit the LSTM-RNNs more than the off-the-shelf ones, (iii) adversarial training gives better domain-invariant representations, and (iv) the global CRF model improves over local models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With the advent of Internet technologies, communication media like e-mails and discussion forums have become commonplace for discussing work, issues, events, and experiences. Participants in these media interact with each other asynchronously by Second, existing approaches mostly disregard conversational dependencies between sentences inside a comment and across comments. For instance, consider the example in Figure 1 again. The Suggestions are answers to Questions asked in a previous comment. We therefore hypothesize that modeling inter-sentence relations is crucial for speech act recognition. We have tagged the sentences in Figure 1 with human annotations (HUMAN) and with the predictions of a local (LOCAL) classifier that considers word order for sentence representation but classifies each sentence separately or individually. Prediction errors are underlined and highlighted in red. Notice the first and second sentences of comment C 4 , which are mistakenly tagged as Statement and Response, respectively, by our best local classifier. We hypothesize that some of the errors made by the local classifier could be corrected by utilizing a global joint model that is trained to perform a collective classification, taking into account the conversational dependencies between sentences (e.g., adjacency relations like Question-Suggestion). The preliminary preparations, eligibility, the require funds etc., are some of the issues which I wish to know from any panel members of this forum who is aware and had gone through similar procedures to obtain an admission in an university abroad. \u21d2 HUMAN: Question, LOCAL: Statement, GLOBAL: Statement C 3 : [truncated] take a list of canadian universities and then create a table and insert all the relevant information by reading each and every program info on the web. \u21d2 HUMAN: Suggestion, LOCAL: Suggestion, GLOBAL: Suggestion",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 421,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 634,
"end": 642,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Without doing a research my advice would be to apply to UVIC... for the following reasons... 1. good egineering school, 2 affordable, 3 strong co-op, 4. beautiful and safe city. \u21d2 HUMAN: Suggestion, LOCAL: Suggestion, GLOBAL: Suggestion UBC is good too... but it is expensive particularly for international students due to tuition differential.... and pls pls pls.. dont waste your money on intermediaries or so called consultants... do it yourself.. most of them accept on-line application or email application. Example of a forum conversation (truncated) with HUMAN annotations and automatic predictions by a LOCAL classifier and a GLOBAL classifier for speech acts (e.g., Statement, Suggestion) . The incorrect decisions are underlined and marked with red color.",
"cite_spans": [
{
"start": 675,
"end": 697,
"text": "Statement, Suggestion)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "However, unlike synchronous conversations (e.g., meeting, phone), modeling conversational dependencies between sentences in an asynchronous conversation is challenging, especially when the thread structure (e.g., \"reply-to\" links between comments) is missing, which is also our case. The conversational flow often lacks sequential dependencies in its temporal/chronological order. For example, if we arrange the sentences as they arrive in the conversation, it becomes hard to capture any dependency between the act types because the two components of the adjacency pairs can be far apart in the sequence. This leaves us with one open research question: How do we model the dependencies between sentences in a single comment and between sentences across different comments? In this article, we attempt to address this question by designing and experimenting with conditional structured models over arbitrary graph structures of the conversation. Apart from the underlying discourse structure (sequence vs. graph), asynchronous conversations differ from synchronous conversations in style (spoken vs. written) and in vocabulary usage (meeting conversations on some focused topics vs. conversations on any topic of interest in a public forum). In this article, we propose to use domain adaptation methods in the neural network framework to model these differences in the sentence encoding process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "More concretely, we make the following contributions in speech act recognition for asynchronous conversations. First, we propose to use a recurrent neural network (RNN) with a long short-term memory (LSTM) hidden layer to compose phrases in a sentence and to represent the sentence using distributed condensed vectors (i.e., embeddings). These embeddings are trained directly on the speech act classification task. We experiment with both unidirectional and bidirectional RNNs. Second, we train (task-agnostic) word embeddings from a large conversational corpus, and use it to boost the performance of the LSTM-RNN model. Third, we propose conditional structured models in the form of pairwise conditional random fields (CRF) (Murphy 2012 ) over arbitrary conversational structures. We experiment with different variations of this model to capture different types of interactions between sentences inside the comments and across the comments in a conversational thread. These models use the LSTMencoded vectors as feature vectors for learning to classify sentences in a conversation collectively.",
"cite_spans": [
{
"start": 726,
"end": 738,
"text": "(Murphy 2012",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Furthermore, to address the problem of insufficient training data in the asynchronous domains, we propose to use the available labeled data from synchronous domains (e.g., meetings). To make the best use of this out-of-domain data, we adapt our LSTM-RNN encoder to learn task-specific sentence representations by modeling the differences in style and vocabulary usage between the two domains. We achieve this by using the recently proposed domain adversarial training methods of neural networks (Ganin et al. 2016) . As a secondary contribution, we also present and release a forum data set annotated with a standard speech act tagset.",
"cite_spans": [
{
"start": 495,
"end": 514,
"text": "(Ganin et al. 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "We train our models in various settings with synchronous and asynchronous corpora, and we evaluate on one synchronous meeting data set and three asynchronous data sets-two forum data sets and one e-mail data set. We also experimented with different pretrained word embeddings in the LSTM-RNN model. Our main findings are: (i) LSTM-RNNs provide better sentence representation than BOW and other unsupervised methods; (ii) bidirectional LSTM-RNNs, which encode a sentence using two vectors, provide better representation than the unidirectional ones; (iii) word embeddings pretrained on a large conversational corpus yield significant improvements; (iv) the globally normalized joint models (CRFs) improve over local models for certain graph structures; and (v) domain adversarial training improves the results by inducing domain-invariant features. The source code, the pretrained word embeddings, and the new data sets are available at https://ntunlpsg.github.io/demo/project/ speech-act/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "After discussing related work in Section 2, we present our speech act recognition framework in Section 3. In Section 4, we present the data sets used in our experiments along with our newly created corpus. The experiments and analysis of results are presented in Section 5. Finally, we summarize our contributions with future directions in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Three lines of research are related to our work: (i) compositionality with LSTM-RNNs, (ii) conditional structured models, and (iii) speech act recognition in asynchronous conversations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2."
},
{
"text": "RNNs are arguably the most popular deep learning models in natural language processing, where they have been used for both encoding and decoding a text-for example, language modeling (Mikolov 2012a; Tran, Zukerman, and Haffari 2016) , machine translation (Bahdanau, Cho, and Bengio 2015) , summarization (Rush, Chopra, and Weston 2015) , and syntactic parsing (Dyer et al. 2015) . RNNs have also been used as a sequence tagger, as in opinion mining (Irsoy and Cardie 2014; Liu, Joty, and Meng 2015) , named entity recognition (Lample et al. 2016) , and part-of-speech tagging (Plank, S\u00f8gaard, and Goldberg 2016).",
"cite_spans": [
{
"start": 183,
"end": 198,
"text": "(Mikolov 2012a;",
"ref_id": null
},
{
"start": 199,
"end": 232,
"text": "Tran, Zukerman, and Haffari 2016)",
"ref_id": "BIBREF75"
},
{
"start": 255,
"end": 287,
"text": "(Bahdanau, Cho, and Bengio 2015)",
"ref_id": "BIBREF3"
},
{
"start": 304,
"end": 335,
"text": "(Rush, Chopra, and Weston 2015)",
"ref_id": null
},
{
"start": 360,
"end": 378,
"text": "(Dyer et al. 2015)",
"ref_id": "BIBREF12"
},
{
"start": 449,
"end": 472,
"text": "(Irsoy and Cardie 2014;",
"ref_id": "BIBREF21"
},
{
"start": 473,
"end": 498,
"text": "Liu, Joty, and Meng 2015)",
"ref_id": "BIBREF38"
},
{
"start": 526,
"end": 546,
"text": "(Lample et al. 2016)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM-RNNs for Composition",
"sec_num": "2.1"
},
{
"text": "Relevant to our implementation, Kalchbrenner and Blunsom (2013) use a simple RNN to model sequential dependencies between act types for speech act recognition in phone conversations. They use a convolutional neural network (CNN) to compose sentence representations from word vectors. Lee and Dernoncourt (2016) use a similar model, but they also experiment with RNNs to compose sentence representations. Similarly, Khanpour, Guntakandla, and Nielsen (2016) use an LSTM-based RNN to compose sentence representations. Ji, Haffari, and Eisenstein (2016) propose a latent variable RNN that can jointly model sequences of words (i.e., language modeling) and discourse relations between adjacent sentences. The discourse relations are modeled with a latent variable that can be marginalized during testing. In one experiment, they use coherence relations from the Penn Discourse Treebank corpus as the discourse relations. In another setting, they use speech acts from the SWBD corpus as the discourse relations. They show improvements on both language modeling and discourse relation prediction tasks. Shen and Lee (2016) use an attention-based LSTM-RNN model for speech act classification. The purpose of the attention is to focus on the relevant part of the input sentence. Tran, Zukerman, and Haffari (2017) use an online inference technique similar to the forward pass of the traditional forward-backward inference algorithm to improve upon the greedy decoding methods typically used in the RNN-based sequence labeling models. Vinyals and Le (2015) and Serban et al. (2016) use RNN-based encoderdecoder framework for conversation modeling. Vinyals and Le (2015) use a single RNN to encode all the previous utterances (i.e., by concatenating the tokens of previous utterances), whereas Serban et al. (2016) use a hierarchical encoder-one to encode the words in each utterance, and another to connect the encoded context vectors. Li et al. (2015) compare recurrent neural models with recursive (syntax-based) models for several NLP tasks and conclude that recurrent models perform on par with the recursive for most tasks (or even better). For example, recurrent models outperform recursive on sentence level sentiment classification. This finding motivated us to use recurrent models rather than recursive ones.",
"cite_spans": [
{
"start": 32,
"end": 63,
"text": "Kalchbrenner and Blunsom (2013)",
"ref_id": "BIBREF30"
},
{
"start": 284,
"end": 310,
"text": "Lee and Dernoncourt (2016)",
"ref_id": "BIBREF36"
},
{
"start": 415,
"end": 456,
"text": "Khanpour, Guntakandla, and Nielsen (2016)",
"ref_id": "BIBREF31"
},
{
"start": 516,
"end": 550,
"text": "Ji, Haffari, and Eisenstein (2016)",
"ref_id": "BIBREF23"
},
{
"start": 1097,
"end": 1116,
"text": "Shen and Lee (2016)",
"ref_id": "BIBREF66"
},
{
"start": 1271,
"end": 1305,
"text": "Tran, Zukerman, and Haffari (2017)",
"ref_id": "BIBREF76"
},
{
"start": 1526,
"end": 1547,
"text": "Vinyals and Le (2015)",
"ref_id": "BIBREF79"
},
{
"start": 1552,
"end": 1572,
"text": "Serban et al. (2016)",
"ref_id": "BIBREF65"
},
{
"start": 1639,
"end": 1660,
"text": "Vinyals and Le (2015)",
"ref_id": "BIBREF79"
},
{
"start": 1776,
"end": 1804,
"text": "whereas Serban et al. (2016)",
"ref_id": null
},
{
"start": 1927,
"end": 1943,
"text": "Li et al. (2015)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM-RNNs for Composition",
"sec_num": "2.1"
},
{
"text": "There has been an explosion of interest in CRFs for solving structured output problems in NLP; see Smith (2011) for an overview. The most common type of CRF has a linear chain structure that has been used in sequence labeling tasks like part-of-speech (POS) tagging, chunking, named entity recognition, and many others (Sutton and McCallum 2012) . Tree-structured CRFs have been used for parsing (e.g., Finkel, Kleeman, and Manning 2008) .",
"cite_spans": [
{
"start": 99,
"end": 111,
"text": "Smith (2011)",
"ref_id": "BIBREF68"
},
{
"start": 319,
"end": 345,
"text": "(Sutton and McCallum 2012)",
"ref_id": "BIBREF73"
},
{
"start": 403,
"end": 437,
"text": "Finkel, Kleeman, and Manning 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Structured Models",
"sec_num": "2.2"
},
{
"text": "The idea of combining neural networks with graphical models for speech act recognition goes back to Ries (1999) , in which a feed-forward neural network is used to model the emission distribution of a supervised hidden Markov model (HMM). In this approach, each input sentence in a dialogue sequence is represented as a BOW vector, which is fed to the neural network. The corresponding sequence of speech acts is given by the hidden states of the HMM. Surendran and Levow (2006) first use support vector machines (SVMs) (i.e., local classifier) to estimate the probability of different speech acts for each individual utterance by combining sparse textual features (i.e., bag of n-grams) and dense acoustic features. The estimated probabilities are then used in the Viterbi algorithm to find the most probable tag sequence for a conversation. Julia and Iftekharuddin (2008) use a fusion of SVM and HMM classifiers with textual and acoustic features to classify utterances into speech acts.",
"cite_spans": [
{
"start": 100,
"end": 111,
"text": "Ries (1999)",
"ref_id": "BIBREF60"
},
{
"start": 452,
"end": 478,
"text": "Surendran and Levow (2006)",
"ref_id": "BIBREF72"
},
{
"start": 843,
"end": 873,
"text": "Julia and Iftekharuddin (2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Structured Models",
"sec_num": "2.2"
},
{
"text": "More recently, Lample et al. (2016) proposed an LSTM-CRF model for named entity recognition (NER), which first generates a bi-directional LSTM encoding for each input word, and then it passes this representation to a CRF layer, whose task is to encourage global consistency of the NER tags. For each input word, the input to the LSTM consists of a concatenation of the corresponding word embedding and of character-level bi-LSTM embeddings for the current word. The whole network is trained end-to-end with backpropagation, which can be done effectively for chain-structured graphs. Ma and Hovy (2016) proposed a similar framework, but they replace the character-level bi-LSTM with a CNN. They evaluated their approach on POS and NER tagging tasks. Strubell et al. (2017) extended these models by substituting the word-level LSTM with an iterated dilated convolutional neural network, a variant of CNN, for which the effective context window in the input can grow exponentially with the depth of the network, while having a modest number of parameters to estimate. Their approach permits fixeddepth convolutions to run in parallel across entire documents, thus making use of GPUs, which yields up to 20-fold speed up, while retaining performance comparable to that of LSTM-CRF. Speech act recognition in asynchronous conversation posits a different problem, where the challenge is to model arbitrary conversational structures. In this work, we propose a general class of models based on pairwise CRFs that work on arbitrary graph structures.",
"cite_spans": [
{
"start": 15,
"end": 35,
"text": "Lample et al. (2016)",
"ref_id": "BIBREF35"
},
{
"start": 583,
"end": 601,
"text": "Ma and Hovy (2016)",
"ref_id": "BIBREF41"
},
{
"start": 749,
"end": 771,
"text": "Strubell et al. (2017)",
"ref_id": "BIBREF71"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Structured Models",
"sec_num": "2.2"
},
{
"text": "Previous studies on speech act recognition in asynchronous conversation have used supervised, semi-supervised, and unsupervised methods. Cohen, Carvalho, and Mitchell (2004) first use the term e-mail speech act for classifying e-mails based on their acts (e.g., deliver, meeting). Their classifiers do not capture any contextual dependencies between the acts. To model contextual dependencies, Carvalho and Cohen (2005) use a collective classification approach with two different classifiers, one for content and one for context, in an iterative algorithm. The content classifier only looks at the content of the message, whereas the context classifier takes into account both the content of the message and the dialog act labels of its parent and children in the thread structure of the e-mail conversation. Our approach is similar in spirit to their approach with three crucial differences: (i) our CRFs are globally normalized to surmount the label bias problem, while their classifiers are normalized locally; (ii) the graph structure of the conversation is given in their case, which is not the case with ours; and (iii) their approach works at the comment level, whereas we work at the sentence level.",
"cite_spans": [
{
"start": 137,
"end": 173,
"text": "Cohen, Carvalho, and Mitchell (2004)",
"ref_id": "BIBREF9"
},
{
"start": 394,
"end": 419,
"text": "Carvalho and Cohen (2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Act Recognition in Asynchronous Conversation",
"sec_num": "2.3"
},
{
"text": "Identification of adjacency pairs like question-answer pairs in e-mail discussions using supervised methods was investigated in Shrestha and McKeown (2004) and Ravi and Kim (2007) . Ferschke, Gurevych, and Chebotar (2012) use speech acts to analyze the collaborative process of editing Wiki pages, and apply supervised models to identify the speech acts in Wikipedia Talk pages. Other sentence-level approaches use supervised classifiers and sequence taggers (Qadir and Riloff 2011; Tavafi et al. 2013; Oya and Carenini 2014) . Vosoughi and Roy (2016) trained off-the-shelf classifiers (e.g., SVM, naive Bayes, Logistic Regression) with syntactic (e.g., punctuations, dependency relations, abbreviations) and semantic feature sets (e.g., opinion words, vulgar words, emoticons) to classify tweets into six Twitter-specific speech act categories.",
"cite_spans": [
{
"start": 128,
"end": 155,
"text": "Shrestha and McKeown (2004)",
"ref_id": "BIBREF67"
},
{
"start": 160,
"end": 179,
"text": "Ravi and Kim (2007)",
"ref_id": "BIBREF59"
},
{
"start": 182,
"end": 221,
"text": "Ferschke, Gurevych, and Chebotar (2012)",
"ref_id": "BIBREF14"
},
{
"start": 459,
"end": 482,
"text": "(Qadir and Riloff 2011;",
"ref_id": "BIBREF58"
},
{
"start": 483,
"end": 502,
"text": "Tavafi et al. 2013;",
"ref_id": "BIBREF74"
},
{
"start": 503,
"end": 525,
"text": "Oya and Carenini 2014)",
"ref_id": "BIBREF53"
},
{
"start": 528,
"end": 551,
"text": "Vosoughi and Roy (2016)",
"ref_id": "BIBREF81"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Act Recognition in Asynchronous Conversation",
"sec_num": "2.3"
},
{
"text": "Several semi-supervised methods have been proposed for speech act recognition in asynchronous conversation. Jeong, Lin, and Lee (2009) use semi-supervised boosting to tag the sentences in e-mail and forum discussions with speech acts by inducing knowledge from annotated spoken conversations (MRDA meeting and SWBD telephone conversations). Given a sentence represented as a set of trees (i.e., dependency, n-gram tree, and POS tag tree), the boosting algorithm iteratively learns the best feature set (i.e., sub-trees) that minimizes the errors in the training data. This approach does not consider the dependencies between the act types, something we successfully exploit in our work. Zhang, Gao, and Li (2012) also use semi-supervised methods for speech act recognition in Twitter. They use a transductive SVM and a graph-based label propagation framework to leverage the knowledge from abundant unlabeled data. In our work, we leverage labeled data from synchronous conversations while adapting our model to account for the shift in the data distributions of the two domains. In our unsupervised adaptation scenario, we do not use any labeled data from the target (asynchronous) domain, whereas in the semi-supervised scenario, we use some labeled data from the target domain.",
"cite_spans": [
{
"start": 108,
"end": 134,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
},
{
"start": 687,
"end": 712,
"text": "Zhang, Gao, and Li (2012)",
"ref_id": "BIBREF84"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Act Recognition in Asynchronous Conversation",
"sec_num": "2.3"
},
{
"text": "Among methods that use unsupervised learning, Ritter, Cherry, and Dolan (2010) propose two HMM-based unsupervised conversational models for modeling speech acts in Twitter. In particular, they use a simple HMM and a HMM+Topic model to cluster the Twitter posts (not the sentences) into act types. Because they use a unigram language model to define the emission distribution, their simple HMM model tends to find some topical clusters in addition to the clusters that are based on speech acts. The HMM+Topic model tries to separate the act indicators from the topic words. By visualizing the type of conversations found by the two models, they show that the output of the HMM+Topic model is more interpretable than that of the HMM one; however, their classification accuracy is not empirically evaluated. Therefore, it is not clear whether these models are actually useful, and which of the two models is a better speech act tagger. Paul (2012) proposes using a mixed membership Markov model to cluster sentences based on their speech acts, and show that this model outperforms a simple HMM. Joty, Carenini, and Lin (2011) propose unsupervised models for speech act recognition in e-mail and forum conversations. They propose a HMM+Mix model to separate out the topic indicators. By training their model based on a conversational structure, they demonstrate that conversational structure is crucial to learning a better speech act recognition model. In our work, we also demonstrate that conversational structure is important for modeling conversational dependencies, however, we do not use any given structure; rather, we build models based on arbitrary graph structures.",
"cite_spans": [
{
"start": 933,
"end": 944,
"text": "Paul (2012)",
"ref_id": "BIBREF55"
},
{
"start": 1092,
"end": 1122,
"text": "Joty, Carenini, and Lin (2011)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Speech Act Recognition in Asynchronous Conversation",
"sec_num": "2.3"
},
{
"text": "Let s n m denote the m-th sentence of comment n in an asynchronous conversation; our goal is to find the corresponding speech act tag y n m \u2208 T , where T is the set of available tags. Our approach works in two main steps, as outlined in Figure 2 . First, we use a RNN to encode each sentence into a task-specific distributed representation (i.e., embedding) by composing the words sequentially. The RNN is trained to classify sentences into speech act types, and is adapted to give domain-invariant sentence features when trained to leverage additional data from synchronous domains (e.g., meetings). In the second step, a structured model takes the sentence embeddings as input, and defines a joint distribution over sentences to capture the conversational dependencies. In the following sections, we describe these steps in detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 237,
"end": 245,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3."
},
{
"text": "One of our main hypotheses is that a sentence representation method should consider the word order of the sentence. To this end, we use a RNN to encode each sentence into a vector by processing its words sequentially, at each time step combining the current input with the previous hidden state. Figure 3 (a) demonstrates the process for three sentences. Initially, we create an embedding matrix E \u2208 R |V|\u00d7D , where each row represents the distributed representation of dimension D for a word in a finite vocabulary V. We construct V from the training data after filtering out the infrequent words.",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 304,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Task-Specific Sentence Representation",
"sec_num": "3.1"
},
{
"text": "Our two-step inference framework for speech act recognition in asynchronous conversation. Each sentence in the conversation is first encoded into a task-specific representation by a recurrent neural network (RNN). The RNN is trained on the speech act classification task, and leverages large labeled data from synchronous domains (e.g., meetings) in an adversarial domain adaptation training method. A structured model (CRF) then takes the encoded sentence vectors as input, and performs joint prediction over all sentences in a conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 2",
"sec_num": null
},
{
"text": "Output layer",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word tokens",
"sec_num": null
},
{
"text": "s 1 1 s 1 2 s 2 1 z 1 1 z 2 1 z 1 2 z 1 1 z 2 1 z 1 2 y 1 1 y 2 1 y 1 2 (a) Bidirectional LSTM-based RNN model (b) An LSTM cell.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word tokens",
"sec_num": null
},
{
"text": "A bidirectional LSTM-RNN to encode each sentence s n m into a condensed vector z n m . The network is trained to classify each sentence into its speech act type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Given an input sentence s = (w 1 , \u2022 \u2022 \u2022 , w T ) of length T, we first map each word w t to its corresponding index in E (equivalently, in V). The first layer of our network is a lookup layer that transforms each of these indices to a distributed representation x t \u2208 R D by looking up the embedding matrix E. We consider E a model parameter to be learned by backpropagation. We can initialize E randomly or using pretrained word vectors (to be described in Section 4.2). The output of the look-up layer is a matrix in R T\u00d7D , which is fed to the recurrent layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The recurrent layer computes a compositional representation \u2212 \u2192 h t at every time step t by performing nonlinear transformations of the current input x t and the output of the previous time step \u2212 \u2192 h t\u22121 . We use LSTM blocks (Hochreiter and Schmidhuber 1997) in the recurrent layer. As shown in Figure 3 (b), each LSTM block is composed of four elements: (i) a memory cell c (a neuron) with a self-connection, (ii) an input gate i to control the flow of input signal into the neuron, (iii) an output gate o to control the effect of neuron activation on other neurons, and (iv) a forget gate f to allow the neuron to adaptively reset its current state through a self-connection. The following sequence of equations describe how the memory blocks are updated at every time step t:",
"cite_spans": [],
"ref_spans": [
{
"start": 296,
"end": 304,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "i t = sigh(U i h t\u22121 + V i x t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "(1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f t = sigh(U f h t\u22121 + V f x t ) (2) c t = i t tanh(U c h t\u22121 + V c x t ) + f t c t\u22121 (3) o t = sigh(U o h t\u22121 + V o x t ) (4) h t = o t tanh(c t )",
"eq_num": "(5)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "where U and V are the weight matrices between two consecutive hidden layers, and between the input and the hidden layers, respectively. 3 The symbols sigh and tanh denote hard sigmoid and hard tan nonlinear functions, respectively, and the symbol denotes an element-wise product of two vectors. LSTM-RNNs, by means of their specifically designed gates (as opposed to simple RNNs), are capable of capturing longrange dependencies. We can interpret h t as an intermediate representation summarizing the past, that is, the sequence (w 1 , w 2 , . . . , w t ). The output of the last time step h T = z can thus be considered as the representation of the entire sentence, which can be fed to the classification layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The classification layer uses a softmax for multi-class classification. Formally, the probability of the k-th class for classifying into K speech act classes is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p(y = k|s, W, \u03b8) = exp (w T k z) K k=1 exp (w T k z)",
"eq_num": "(6)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "where W are the classifier weights, and \u03b8 = {E, U, V} are the encoder parameters. We minimize the negative log likelihood of the gold labels. The negative log likelihood for one data point is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L c (W, \u03b8) = \u2212 K k=1 I(y = k) log p(y = k|s, W, \u03b8)",
"eq_num": "(7)"
}
],
"section": "Figure 3",
"sec_num": null
},
{
"text": "where I(y = k) is an indicator function to encode the gold labels: I(y = k) = 1 if the gold label y = k, otherwise 0. 4 The loss function minimizes the cross-entropy between the predicted distribution and the target distribution (i.e., gold labels).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "Bidirectionality. The RNN just described encodes information that it obtains only from the past. However, information from the future could also be crucial for recognizing speech acts. This is especially true for longer sentences, where a unidirectional LSTM can be limited in encoding the necessary information into a single vector. Bidirectional RNNs (Schuster and Paliwal 1997) capture dependencies from both directions, thus providing two different views of the same sentence. This amounts to having a backward counterpart for each of the equations from (1) to (5). For classification, we use the con-",
"cite_spans": [
{
"start": 353,
"end": 380,
"text": "(Schuster and Paliwal 1997)",
"ref_id": "BIBREF64"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "catenated vector z = [ \u2212 \u2192 z , \u2190 \u2212 z ] (equivalently, [ \u2212 \u2192 h T , \u2190 \u2212 h T ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": ", where \u2212 \u2192 z and \u2190 \u2212 z are the encoded vectors summarizing the past and the future, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 3",
"sec_num": null
},
{
"text": "The LSTM-RNN described in the previous section can model long-distance dependencies between words, and, given enough training data, it should be able to compose a sentence, capturing its syntactic and semantic properties. However, when it comes to speech act recognition in asynchronous conversations, as mentioned before, not many large corpora annotated with a standard tagset are available. Because of the large number of parameters, the LSTM-RNN model usually overfits when it is trained on small data sets of asynchronous conversations (shown later in Section 5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "One solution to address this problem is to use data from synchronous domains for which large annotated corpora are available (e.g., MRDA meeting corpus). However, as we will see, although simple concatenation of data sets generally improves the performance of the LSTM-RNN model, it does not provide the optimal solution because the conversations in synchronous and asynchronous domains are different in modality (spoken vs. written) and in style. In other words, to get the best out of the available synchronous domain data, we need to adapt our model. Our goal is to adapt the LSTM-RNN encoder so that it learns to encode sentence representations z (i.e., features used for classification) that are not only discriminative for the act classification task, but also invariant across the domains. To this end, we propose to use the domain adversarial training of neural networks proposed recently by Ganin et al. (2016) .",
"cite_spans": [
{
"start": 900,
"end": 919,
"text": "Ganin et al. (2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Let D S = {s n , y n } N n=1 denote the set of N training instances (labeled) in the source domain (e.g., MRDA meeting corpus). We consider two possible adaptation scenarios:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "(i) Unsupervised adaptation: In this scenario, we have only unlabeled examples in the target domain (e.g., forum). Let D u T = {s n } M n=N+1 be the set of (M \u2212 N \u2212 1) unlabeled training instances in the target domain with M being the total number of training instances in the two domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "(ii) Supervised adaptation: In addition to the unlabeled instances D u T , here we have access to some labeled training instances in the target domain,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "D l T = {s n , y n } L n=M+1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": ", with L being the total number of training examples in the two domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "In the following, we describe our models for these two adaptation scenarios in turn.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "3.2.1 Unsupervised Adaptation. Figure 4 shows our extended LSTM-RNN network trained for domain adaptation. The input sentence s is sampled either from a synchronous domain (e.g., meeting) or from an asynchronous (e.g., forum) domain. As before, we pass the sentence through a look-up layer and a bidirectional recurrent layer to encode it into a distributed representation z = [ \u2212 \u2192 z , \u2190 \u2212 z ], using our bidirectional LSTM-RNN encoder. For domain adaptation, our goal is to adapt the encoder to generate z, such that it is not only informative for the target classification task (i.e., speech act recognition) but also invariant across domains. Upon achieving this, we can use the adapted LSTM-RNN encoder to encode a target sentence, and use the source classifier (the softmax layer) to classify the sentence into its corresponding speech act type. To this end, we add a domain discriminator, another neural network that takes z as input and tries to discriminate the domains of the input sentence (e.g., meeting vs. forum). Formally, the output of the domain discriminator is defined by a sigmoid function:d",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c9 = p(d = 1|z, \u03c9, \u03b8) = sigm(w T d h d )",
"eq_num": "(8)"
}
],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "where d \u2208 {0, 1} denotes the domain of the sentence s (1 for meeting, 0 for forum), w d are the final layer weights of the discriminator, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "h d = g(U d z)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "defines the hidden layer of the discriminator with U d being the layer weights, and g(.) being the ReLU activations (Nair and Hinton 2010) . We use the negative log-probability as the discrimination loss:",
"cite_spans": [
{
"start": 116,
"end": 138,
"text": "(Nair and Hinton 2010)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L d (\u03c9, \u03b8) = \u2212d logd \u03c9 \u2212 (1 \u2212 d) log 1 \u2212d \u03c9",
"eq_num": "(9)"
}
],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "The composite network ( Figure 4 ) has three players: (i) the encoder (E), (ii) the classifier (C), and (iii) the discriminator (D). During training, the encoder and the classifier play a co-operative game, while the encoder and the discriminator play an adversarial game. The training objective of the composite model can be written as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 24,
"end": 32,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(W, \u03b8, \u03c9) = N n=1 L n c (W, \u03b8) act classification (source) \u2212\u03bb N n=1 L n d (\u03c9, \u03b8) domain discrimination (source) + M n=N+1 L n d (\u03c9, \u03b8) domain discrimination (target)",
"eq_num": "(10)"
}
],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "where \u03b8 = {E, U, V} are the parameters of the LSTM-RNN encoder, W are the classifier weights, and \u03c9 = {U d , w d } are the parameters of the discriminator network. 5 The hyper-parameter \u03bb controls the relative strength of the two networks. In training, we look for parameter values that satisfy a min-max optimization criterion as follows:",
"cite_spans": [
{
"start": 164,
"end": 165,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 * = argmin W,\u03b8 max U d ,w d L(W, \u03b8, \u03c9)",
"eq_num": "(11)"
}
],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "which involves a maximization (gradient ascent) with respect to {U d , w d } and a minimization (gradient descent) with respect to \u03b8 and W. Maximizing 9, which aims to improve the discrimination accuracy. When put together, the updates of the shared encoder parameters \u03b8 = {E, U, V} for the two networks work adversarially with respect to each other. In our gradient descent training, the min-max optimization is achieved by reversing the gradients (Ganin et al. 2016) of the domain discrimination loss L d (\u03c9, \u03b8), when they are backpropagated to the encoder. As shown in Figure 4 , the gradient reversal is applied to the recurrent and embedding layers. This optimization set-up is related to the training method of Generative Adversarial Networks (Goodfellow et al. 2014) , where the goal is to build deep generative models that can generate realistic images. The discriminator in Generative Adversarial Networks tries to distinguish real images from model-generated images, and thus the training attempts to minimize the discrepancy between the two image distributions. When backpropagating to the generator network, they consider a slight variation of the reverse gradients with respect to the discriminator loss. In particular, ifd \u03c9 is the discriminator probability for real images, rather than reversing the gradients of \u2212 log(1 \u2212d \u03c9 ), they backpropagate the gradients of \u2212 logd \u03c9 to the generator. Reversing the gradient is just a different way of achieving the same goal.",
"cite_spans": [
{
"start": 449,
"end": 468,
"text": "(Ganin et al. 2016)",
"ref_id": "BIBREF16"
},
{
"start": 749,
"end": 773,
"text": "(Goodfellow et al. 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 572,
"end": 580,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "L(W, \u03b8, \u03c9) with respect to {U d , w d } is equivalent to minimizing the discriminator loss L d (\u03c9, \u03b8) in Equation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Algorithm 1 presents pseudocode of our training algorithm based on stochastic gradient descent (SGD). We first initialize the model parameters by sampling from Glorot-uniform distribution (Glorot and Bengio 2010) . We then form minibatches of size b by randomly sampling b/2 labeled examples from D S and b/2 unlabeled examples from D u T . For labeled instances, both L c (W, \u03b8) and L d (\u03c9, \u03b8) losses are active, while only L d (\u03c9, \u03b8) is active for unlabeled instances.",
"cite_spans": [
{
"start": 188,
"end": 212,
"text": "(Glorot and Bengio 2010)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "The main challenge in adversarial training is to balance the two components (the task classifier and the discriminator) of the network. If one component becomes smarter, its loss to the shared layer becomes useless, and the training fails to converge (Arjovsky, Chintala, and Bottou 2017) . Equivalently, if one component gets weaker, its loss overwhelms that of the other, causing training to fail. In our experiments, we found the domain discriminator to be weaker; initially, it could not distinguish the domains often. To balance the two components, we would need the error signals from the discriminator to be fairly weak initially, with full power unleashed only as the classification errors start to dominate. We follow the weighting schedule proposed by Ganin et al. (2016, page 21) , who initialize \u03bb to 0, and then change it gradually to 1 as training progresses. That is, we start training the task classifier first, and we gradually add the discriminator's loss.",
"cite_spans": [
{
"start": 251,
"end": 288,
"text": "(Arjovsky, Chintala, and Bottou 2017)",
"ref_id": null
},
{
"start": 762,
"end": 790,
"text": "Ganin et al. (2016, page 21)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Algorithm 1: Model training with stochastic gradient descent. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Input : Data D S = {s n , y n } N n=1 , D u T = {s n } M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "(b) Randomly Sample b 2 unlabeled examples from D u T (c) Compute L c (W, \u03b8) and L d (\u03c9, \u03b8) (d) Set \u03bb = 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "1+exp(\u221210 * p) \u2212 1; p is the training progress linearly changing form 0 to 1. // Classifier & Encoder (e) Take a gradient step for 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "b \u2207 W,\u03b8 L c (W, \u03b8) // Discriminator ( f ) Take a gradient step for 2\u03bb b \u2207 U d ,w d L d (\u03c9, \u03b8) // Gradient reversal to fool Discriminator (g) Take a gradient step for \u2212 2\u03bb b \u2207 \u03b8 L d (\u03c9, \u03b8) until convergence;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "3.2.2 Supervised Adaptation. It is quite straightforward to extend our adaptation method to a supervised setting, where we have access to some labeled instances in the target domain. Similar to the instances in the source domain (D S ), the labeled instances in the target domain (D l T ) are used for act classification and domain discrimination. The total training loss in the supervised adaptation setting can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(W, \u03b8, \u03c9) = N n=1 L n c (W, \u03b8) act classif. (source) + L n=M+1 L n c (W, \u03b8) act classif. (target) \u2212\u03bb N n=1 L n d (\u03c9, \u03b8) dom classif. (source) + L n=N+1 L n d (\u03c9, \u03b8) dom classif. (target)",
"eq_num": "(12)"
}
],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "where the second term is the classification loss on the labeled target data set D l T , and the last term is the discrimination loss on both labeled and unlabeled data in the target domain. We modify the training algorithm accordingly. Specifically, each minibatch in SGD training is formed by labeled instances from both D S and D l T , and unlabeled instances from D u T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adapting LSTM-RNN with Adversarial Training",
"sec_num": "3.2"
},
{
"text": "Given the vector representation of the sentences in an asynchronous conversation, we explore two different approaches to learn classification functions. The first and the traditional approach is to learn a local classifier, ignoring the structure in the output and using it for predicting the label of each sentence separately. Indeed, this is the approach we took in the previous subsection when we fed the output layer of the LSTM RNNs (Figures 3 and 4) with the sentence vectors. However, this approach does not model the conversational dependency between sentences in a conversation (e.g., adjacency relations between question-answer and request-accept pairs).",
"cite_spans": [],
"ref_spans": [
{
"start": 438,
"end": 455,
"text": "(Figures 3 and 4)",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Conditional Structured Model for Conversational Dependencies",
"sec_num": "3.3"
},
{
"text": "The second approach, which we adopt in this article, is to model the dependencies between the output variables (i.e., speech act labels of the sentences), while learning the classification functions jointly by optimizing a global performance criterion. We represent each conversation by a graph G = (V, E), as shown in Figure 5 . Each node i \u2208 V is associated with an input vector z i = z n m (extracted from the LSTM-RNN), representing",
"cite_spans": [],
"ref_spans": [
{
"start": 319,
"end": 327,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conditional Structured Model for Conversational Dependencies",
"sec_num": "3.3"
},
{
"text": "z 1 1 y 1 1 z 1 2 y 1 2 y 2 1 z 2 1 (a) A linear chain CRF z 1 1 y 1 1 z 1 2 y 1 2 y 2 1 z 2 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Structured Model for Conversational Dependencies",
"sec_num": "3.3"
},
{
"text": "(b) A fully connected CRF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Structured Model for Conversational Dependencies",
"sec_num": "3.3"
},
{
"text": "Examples of conditional structured models for speech act recognition in asynchronous conversation. The sentence vectors (z n m ) are extracted from the LSTM-RNN model. the encoded features for the sentence s n m , and an output variable y i \u2208 {1, 2, \u2022 \u2022 \u2022 , K}, representing the speech act type. Similarly, each edge (i, j) \u2208 E is associated with an input feature vector \u03c6(z i , z j ), derived from the node-level features, and an output variable y i,j \u2208 {1, 2, \u2022 \u2022 \u2022 , L}, representing the state transitions for the pair of nodes. We define the following conditional joint distribution:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "p(y|v, w, z) = 1 Z(v, w, z) i\u2208V \u03c8 n (y i |z, v) node factor (i,j)\u2208E \u03c8 e (y i,j |z, w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "edge factor 13where \u03c8 n and \u03c8 e are the node and the edge factors, and Z(.) is the global normalization constant that ensures a valid probability distribution. We use a log-linear representation for the factors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c8 n (y i |z, v) = exp(v T \u03c6(y i , z)) (14) \u03c8 e (y i,j |z, w) = exp(w T \u03c6(y i,j , z))",
"eq_num": "(15)"
}
],
"section": "Figure 5",
"sec_num": null
},
{
"text": "where \u03c6(.) is a feature vector derived from the inputs and the labels. This model is essentially a pairwise conditional random field (Murphy 2012 ). The global normalization allows CRFs to surmount the so-called label bias problem (Lafferty, McCallum, and Pereira 2001) , allowing them to take long-range interactions into account. The log likelihood for one data point (z, y) (i.e., a conversation) is:",
"cite_spans": [
{
"start": 133,
"end": 145,
"text": "(Murphy 2012",
"ref_id": "BIBREF47"
},
{
"start": 231,
"end": 269,
"text": "(Lafferty, McCallum, and Pereira 2001)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (\u03b8) = i\u2208V v T \u03c6(y i , z) + (i,j)\u2208E w T \u03c6(y i,j , z) \u2212 log Z(v, w, z)",
"eq_num": "(16)"
}
],
"section": "Figure 5",
"sec_num": null
},
{
"text": "This objective is convex, so we can use gradient-based methods to find the global optimum. The gradients have the following form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (v) = i\u2208V \u03c6(y i , z) \u2212 E[\u03c6(y i , z)] (17) f (w) = (i,j)\u2208E \u03c6(y i,j , z) \u2212 E[\u03c6(y i,j , z)]",
"eq_num": "(18)"
}
],
"section": "Figure 5",
"sec_num": null
},
{
"text": "where the E[\u03c6(.)] denote the expected feature vectors. In our case, the node or sentence features are the task-specific sentence embeddings extracted from the bi-directional LSTM-RNN model (possibly domain adapted by adversarial training), and for edge features, we use the hadamard product (i.e., element-wise product) of the two corresponding node vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "3.3.1 Training and Inference in CRFs. Traditionally, CRFs have been trained using offline methods like limited-memory BFGS (Murphy 2012) . Online training of CRFs using SGD was proposed by Vishwanathan et al. (2006) . Because RNNs are trained with online methods, to compare our two methods, we use an SGD-based algorithm to train our CRFs. Algorithm 2 gives the pseudocode of the training procedure. We use Belief Propagation (BP) (Pearl 1988) for inference in our CRFs. BP is guaranteed to converge to an exact solution if the graph is a tree. However, exact inference is intractable for graphs with loops. Despite this, Pearl (1988) advocates for BP in loopy Algorithm 2: Online learning algorithm for conditional random fields. 1. Initialize the model parameters v and w; 2. repeat for each thread G = (V, E) do (a) Compute node and edge factors \u03c8 n (y i |z, v) and \u03c8 e (y i,j |z, w); (b) Infer node and edge marginals using sum-product loopy BP; (c) Update",
"cite_spans": [
{
"start": 123,
"end": 136,
"text": "(Murphy 2012)",
"ref_id": "BIBREF47"
},
{
"start": 189,
"end": 215,
"text": "Vishwanathan et al. (2006)",
"ref_id": "BIBREF80"
},
{
"start": 432,
"end": 444,
"text": "(Pearl 1988)",
"ref_id": "BIBREF56"
},
{
"start": 623,
"end": 635,
"text": "Pearl (1988)",
"ref_id": "BIBREF56"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": ": v = v \u2212 \u03b7 1 |V| f (v); (d) Update: w = w \u2212 \u03b7 1 |E| f (w); end until convergence;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "graphs as an approximation (see Murphy 2012, page 768). The algorithm is then called loopy BP. Although loopy BP gives approximate solutions for general graphs, it often works well in practice (Murphy, Weiss, and Jordan 1999) , outperforming other methods such as mean field (Weiss 2001 ).",
"cite_spans": [
{
"start": 193,
"end": 225,
"text": "(Murphy, Weiss, and Jordan 1999)",
"ref_id": "BIBREF48"
},
{
"start": 275,
"end": 286,
"text": "(Weiss 2001",
"ref_id": "BIBREF82"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 5",
"sec_num": null
},
{
"text": "One of the advantages of the pairwise CRF in Equation (13) is that we can define this model over arbitrary graph structures, which allows us to capture conversational dependencies at various levels. Modeling the arbitrary graph structure can be crucial, especially in scenarios where the reply-to structure of the conversation is not known. By defining structured models over plausible graph structures, we can get a sense of the underlying conversational structure. We distinguish between two types of conversational dependencies:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variations of Graph Structures.",
"sec_num": "3.3.2"
},
{
"text": "(i) Intra-comment connections: This defines how the speech acts of the sentences inside a comment are connected with each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Variations of Graph Structures.",
"sec_num": "3.3.2"
},
{
"text": "(ii) Across-comment connections: This defines how the speech acts of the sentences across comments are connected in a conversation. Table 1 summarizes the connection types that we have explored in our CRF models. Each configuration of intra-and across-connections yields a different pairwise CRF. Figure 6 shows four such CRFs with three comments -C 1 being the first comment, and C i and C j being two other comments in the conversation. Figure 6(a) shows the structure for the NO-NO configuration, where there is no link between nodes of both intra-and across-comments. In this setting, the CRF model boils down to the MaxEnt model. Figure 6(b) shows the structure for LC-LC configuration, where there are linear ",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 297,
"end": 305,
"text": "Figure 6",
"ref_id": null
},
{
"start": 439,
"end": 450,
"text": "Figure 6(a)",
"ref_id": null
},
{
"start": 635,
"end": 646,
"text": "Figure 6(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Variations of Graph Structures.",
"sec_num": "3.3.2"
},
{
"text": "CRFs over different graph structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "chain relations between nodes of both intra-and across-comments. The linear chain across comments refers to the structure, where the last sentence of each comment is connected to the first sentence of the comment that comes next in the temporal order. Figures 6(c) shows the CRF for LC-LC 1 , in which the sentences inside a comment have linear chain connections, and the last sentence of the first comment is connected to the first sentence of the other comments. Figure 6(d) shows the graph structure for LC-FC 1 configuration, in which the sentences inside comments have linear chain connections, and sentences of the first comment are fully connected with the sentences of the other comments. Similarly, Figures 6(e) and 6(f ) show the graph structures for FC-LC and FC-FC configurations.",
"cite_spans": [],
"ref_spans": [
{
"start": 252,
"end": 264,
"text": "Figures 6(c)",
"ref_id": null
},
{
"start": 465,
"end": 476,
"text": "Figure 6(d)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Figure 6",
"sec_num": null
},
{
"text": "In this section, we describe the data sets used in our experiments. We use a number of labeled data sets to train and test our models, one of which we constructed in this work. Additionally, we use a large unlabeled conversational data set to train our (unsupervised) word embedding models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpora",
"sec_num": "4."
},
{
"text": "There exist large corpora of utterances annotated with speech acts in synchronous spoken domains, for example, Switchboard-DAMSL (SWBD) (Jurafsky, Shriberg, and (Dhillon et al. 2004) . However, the asynchronous domain lacks such large corpora. Some prior studies (Cohen, Carvalho, and Mitchell 2004; Feng et al. 2006; Ravi and Kim 2007; Bhatia, Biyani, and Mitra 2014) tackle the task at the comment level, and use task-specific tagsets. In contrast, in this work we are interested in identifying speech acts at the sentence level, and also using a standard tagset like the ones defined in SWBD or MRDA. Several studies attempt to solve the task at the sentence level. Jeong, Lin, and Lee (2009) created a data set of TripAdvisor (TA) forum conversations annotated with the standard 12 act types defined in MRDA. They also remapped the BC3 e-mail corpus (Ulrich, Murray, and Carenini 2008) according to this tagset. Table 2 shows the tags and their relative frequency in the two data sets. Subsequent studies (Joty, Carenini, and Lin 2011; Tavafi et al. 2013 ; Oya and Carenini 2014) use these data sets. We also use these data sets in our work. Table 3 shows some basic statistics about these data sets. On average, BC3 conversations are longer than those of TripAdvisor in terms of both number of comments and number of sentences.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Dhillon et al. 2004)",
"ref_id": "BIBREF10"
},
{
"start": 263,
"end": 299,
"text": "(Cohen, Carvalho, and Mitchell 2004;",
"ref_id": "BIBREF9"
},
{
"start": 300,
"end": 317,
"text": "Feng et al. 2006;",
"ref_id": "BIBREF13"
},
{
"start": 318,
"end": 336,
"text": "Ravi and Kim 2007;",
"ref_id": "BIBREF59"
},
{
"start": 337,
"end": 368,
"text": "Bhatia, Biyani, and Mitra 2014)",
"ref_id": "BIBREF5"
},
{
"start": 854,
"end": 889,
"text": "(Ulrich, Murray, and Carenini 2008)",
"ref_id": "BIBREF78"
},
{
"start": 1009,
"end": 1039,
"text": "(Joty, Carenini, and Lin 2011;",
"ref_id": "BIBREF24"
},
{
"start": 1040,
"end": 1058,
"text": "Tavafi et al. 2013",
"ref_id": "BIBREF74"
}
],
"ref_spans": [
{
"start": 916,
"end": 923,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1146,
"end": 1153,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Labeled Corpora",
"sec_num": "4.1"
},
{
"text": "Since these data sets are relatively small in size with sparse tag distributions, we group the 12 act types into 5 coarser classes to learn a reasonable classifier. Some prior work (Tavafi et al. 2013; Oya and Carenini 2014) has also taken the same approach. More specifically, all the question types are grouped into one general class Question, all response types into Response, and appreciation and polite mechanisms into the Polite class.",
"cite_spans": [
{
"start": 181,
"end": 201,
"text": "(Tavafi et al. 2013;",
"ref_id": "BIBREF74"
},
{
"start": 202,
"end": 224,
"text": "Oya and Carenini 2014)",
"ref_id": "BIBREF53"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Labeled Corpora",
"sec_num": "4.1"
},
{
"text": "In addition to the asynchronous data sets -TA, BC3, and QC3 (to be introduced subsequently), we also demonstrate the performance of our models on the synchronous MRDA meeting corpus, and use it for domain adaptation. Table 4 shows the label distribution of the resulting data sets; Statement is the most dominant class, followed by Question, Polite, and Suggestion.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 224,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Labeled Corpora",
"sec_num": "4.1"
},
{
"text": "4.1.1 QC3 Conversational Corpus: A New Asynchronous Data Set. Because both TripAdvisor and BC3 are quite small to make a general comment about model performance in asynchronous conversations, we have created a new annotated data set of forum conversations called Qatar Computing Conversational Corpus or QC3. 6 We selected 50 conversations from a popular community question answering site named Qatar Living 7 for our annotation. We used three conversations for our pilot study and used the remaining 47 for the actual study. The resulting corpus, as shown in the last column of Table 3 , on average contains 13.32 comments and 33.28 sentences per conversation, and 19.78 words per sentence. Two native speakers of English annotated each conversation using a Web-based annotation framework (Ulrich, Murray, and Carenini 2008) . They were asked to annotate each sentence with the most appropriate speech act tag from the list of five speech act types. Because this task is not always obvious, we gave them detailed annotation guidelines with real examples. We use Cohen's \u03ba to measure the agreement between the annotators. The third column in Table 5 presents the \u03ba values for the act types, which vary from 0.43 (for Response) to 0.87 (for Question).",
"cite_spans": [
{
"start": 309,
"end": 310,
"text": "6",
"ref_id": null
},
{
"start": 790,
"end": 825,
"text": "(Ulrich, Murray, and Carenini 2008)",
"ref_id": "BIBREF78"
}
],
"ref_spans": [
{
"start": 579,
"end": 586,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1142,
"end": 1149,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Labeled Corpora",
"sec_num": "4.1"
},
{
"text": "In order to create a consolidated data set, we collected the disagreements between the two annotators, and used a third annotator to resolve those cases. The fifth column in Table 4 presents the distribution of the speech acts in the resulting data set. As we can see, after Statement, Suggestion is the most frequent class, followed by the Question and the Polite classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 181,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Labeled Corpora",
"sec_num": "4.1"
},
{
"text": "One simple way to exploit unlabeled data for semi-supervised learning is to use word embeddings that are learned from large unlabeled data sets (Turian, Ratinov, and Bengio 2010) . Word embeddings such as word2vec skip-gram (Mikolov, Yih, and Zweig 2013) and Glove vectors (Pennington, Socher, and Manning 2014) capture syntactic and semantic properties of words and their linguistic regularities in the vector space. The skip-gram model was trained on part of the Google news data set containing about 100 billion words, and it contains 300-dimensional vectors for 3 million unique words and phrases. 8 Glove was trained on the combination of Wikipedia 2014 and Gigaword 5 data sets containing 6B tokens and 400K unique (uncased) words. It comes with 50d, 100d, 200d, and 300d vectors. 9 For our experiments, we use the 300d vectors. Many recent studies have shown that the pretrained embeddings improve the performance on supervised tasks (Schnabel et al. 2015) . In our work, we have used these generic off-the-shelf pretrained embeddings to boost the performance of our models. In addition, we have also trained the word2vec skip-gram model and Glove on a large conversational corpus to obtain more relevant conversational word embeddings. Later in our experiments (Section 5) we will demonstrate that the conversational word embeddings are more effective than the generic ones because they are trained on similar data sets.",
"cite_spans": [
{
"start": 144,
"end": 178,
"text": "(Turian, Ratinov, and Bengio 2010)",
"ref_id": "BIBREF77"
},
{
"start": 224,
"end": 254,
"text": "(Mikolov, Yih, and Zweig 2013)",
"ref_id": "BIBREF46"
},
{
"start": 273,
"end": 311,
"text": "(Pennington, Socher, and Manning 2014)",
"ref_id": "BIBREF57"
},
{
"start": 941,
"end": 963,
"text": "(Schnabel et al. 2015)",
"ref_id": "BIBREF63"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conversational Word Embeddings",
"sec_num": "4.2"
},
{
"text": "To train the word embeddings, we collected conversations of both synchronous and asynchronous types. For asynchronous, we collected e-mail threads from W3C (w3c.org), and forum conversations from TripAdvisor and QatarLiving sites. The raw data was too noisy to directly inform our models, as it contains system messages and signatures. We cleaned up the data with the intention of keeping only the headers, bodies, and quotations. For synchronous, we used the utterances from the SWBD and MRDA corpora. Table 6 shows some basic statistics about these (unlabeled) data sets. We trained our word vectors on the concatenated set of all data sets (i.e., 120M tokens). Note that the conversations in our labeled data sets were taken from these sources (e.g., BC3 from W3C, QC3 from QatarLiving, and TA from TripAdvisor.) ",
"cite_spans": [],
"ref_spans": [
{
"start": 503,
"end": 510,
"text": "Table 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Conversational Word Embeddings",
"sec_num": "4.2"
},
{
"text": "In this section, we present our experimental settings, results, and analysis. We start with an outline of the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "Our main objective is to evaluate our speech act recognizer on asynchronous conversations. For this, we evaluate our models on the forum and e-mail data sets introduced earlier in Section 4.1: (i) our newly created QC3 data set, (ii) the TripAdvisor (TA) data set from Jeong, Lin, and Lee (2009) , and (iii) the BC3 e-mail corpus from (Ulrich, Murray, and Carenini 2008) . In addition, we validate our sentence encoding approach on the MRDA meeting corpus. Because of the noisy and informal nature of conversational texts, we performed a series of preprocessing steps before using it for training or testing. We normalize all characters to their lowercased forms, truncate elongations to two characters, and spell out every digit and URL. We further tokenized the texts using the CMU TweetNLP tool (Gimpel et al. 2011) .",
"cite_spans": [
{
"start": 269,
"end": 295,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
},
{
"start": 335,
"end": 370,
"text": "(Ulrich, Murray, and Carenini 2008)",
"ref_id": "BIBREF78"
},
{
"start": 798,
"end": 818,
"text": "(Gimpel et al. 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Outline of Experiments",
"sec_num": "5.1"
},
{
"text": "For performance comparison, we use both accuracy and macro-averaged F 1 score. Accuracy gives the overall performance of a classifier but could be biased toward the most populated classes, whereas macro-averaged F 1 weights every class equally, and is not influenced by class imbalance. Statistical significance tests are done using an approximate randomization test based on the accuracy. 10 We used SIGF V.2 (Pad\u00f3 2006) with 10,000 iterations.",
"cite_spans": [
{
"start": 390,
"end": 392,
"text": "10",
"ref_id": null
},
{
"start": 410,
"end": 421,
"text": "(Pad\u00f3 2006)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Outline of Experiments",
"sec_num": "5.1"
},
{
"text": "In the following, we first demonstrate the effectiveness of our LSTM-RNN model for learning task-specific sentence encoding by training it on the task in three different settings: (i) training on in-domain data only, (ii) training on a simple concatenation of synchronous and asynchronous data, and (iii) training it with adversarial training for domain adaptation. We also compare the effectiveness of different embedding types in these three training settings. The best task-specific embeddings are then extracted and fed into the CRF models to learn inter-sentence dependencies. In Section 5.3, we compare how our CRF models with different conversational graph structure perform. Table 7 gives an outline of our experimental roadmap. ",
"cite_spans": [],
"ref_spans": [
{
"start": 683,
"end": 690,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Outline of Experiments",
"sec_num": "5.1"
},
{
"text": "We first describe the experimental settings for our LSTM RNN sentence encoding model-the data set splits, training settings, and compared baselines. Then we present our results on the three training scenarios as outlined in Table 7 .",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Effectiveness of LSTM RNN",
"sec_num": "5.2"
},
{
"text": "5.2.1 Experimental Settings. We split each of our asynchronous corpora randomly into 70% sentences for training, 10% for development, and 20% for testing. For MRDA, we use the same train:test:dev split as Jeong, Lin, and Lee (2009) . Table 8 summarizes the resulting data sets. We compare the performance of our LSTM-RNN model with MaxEnt (ME) and Multi-layer Perceptron (MLP) with one hidden layer. 11 In one setting, we fed them with the bag-of-words (BOW) representation of the sentence, namely, vectors containing binary values indicating the presence or absence of a word in the training set vocabulary. In another setting, we use a concatenation of the pretrained word embeddings as the sentence representation.",
"cite_spans": [
{
"start": 205,
"end": 231,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
},
{
"start": 400,
"end": 402,
"text": "11",
"ref_id": null
}
],
"ref_spans": [
{
"start": 234,
"end": 241,
"text": "Table 8",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effectiveness of LSTM RNN",
"sec_num": "5.2"
},
{
"text": "We train the models by optimizing the cross entropy in Equation (7) using the gradient-based learning algorithm ADAM (Kingma and Ba 2014) . 12 The learning rate and other parameters were set to the values as suggested by the authors. To avoid overfitting, we use dropout (Srivastava et al. 2014) of hidden units and early-stopping based on the loss on the development set. 13 Maximum number of epochs was set to 50 for RNNs, ME, and MLP. We experimented with dropout rates of {0.0, 0.2, 0.4}, minibatch sizes of {16, 32, 64}, and hidden layer units of {100, 150, 200} in MLP and LSTMs. The vocabulary V in LSTMs was limited to the most frequent P% (P \u2208 {85, 90, 95}) words in the training corpus, where P is considered a hyperparameter.",
"cite_spans": [
{
"start": 117,
"end": 137,
"text": "(Kingma and Ba 2014)",
"ref_id": "BIBREF32"
},
{
"start": 140,
"end": 142,
"text": "12",
"ref_id": null
},
{
"start": 271,
"end": 295,
"text": "(Srivastava et al. 2014)",
"ref_id": "BIBREF69"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of LSTM RNN",
"sec_num": "5.2"
},
{
"text": "We initialize the word vectors in our model either by sampling randomly from the small uniform distribution U (\u22120.05, 0.05), or by using pretrained embeddings. The dimension for random initialization was set to 128. For pretrained embeddings, we experiment with off-the-shelf embeddings that come with word2vec (Mikolov et al. 2013b) and Glove (Pennington, Socher, and Manning 2014) as well as with our conversational word embeddings (Section 4.2).",
"cite_spans": [
{
"start": 311,
"end": 333,
"text": "(Mikolov et al. 2013b)",
"ref_id": "BIBREF45"
},
{
"start": 344,
"end": 382,
"text": "(Pennington, Socher, and Manning 2014)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of LSTM RNN",
"sec_num": "5.2"
},
{
"text": "We experimented with four variations of our LSTM-RNN model: (i) U-LSTM rand , referring to unidirectional RNN with random word vector initialization; (ii) U-LSTM pre , referring to unidirectional RNN initialized with pretrained word embeddings of type pre; (iii) B-LSTM rand , referring to bidirectional RNN with random initialization; and (iv) B-LSTM pre , referring to bidirectional RNN initialized with pretrained word vectors of type pre.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of LSTM RNN",
"sec_num": "5.2"
},
{
"text": "In-Domain Training. Before reporting the performance of our sentence encoding model on asynchronous domains, we first evaluate it on the (synchronous) MRDA meeting corpus where it can be compared to previous studies on a large data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results for",
"sec_num": "5.2.2"
},
{
"text": "Results on MRDA Meeting Corpus. Table 9 presents the results on MRDA for in-domain training. The first two rows show the best results reported so far on this data set from Jeong, Lin, and Lee (2009) for classifying sentences into 12 speech act types; the first row shows the results of the model that uses only n-grams, and the second row shows the results using all of the features, including n-grams, speaker, part-of-speech, and dependency structure. Note that our LSTM RNNs and their n-gram model use the same word sequence information.",
"cite_spans": [
{
"start": 172,
"end": 198,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 32,
"end": 39,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for",
"sec_num": "5.2.2"
},
{
"text": "The second group of results (third and fourth rows) are for ME and MLP models with BOW sentence representation. The third group shows the results for unidirectional LSTM with random and pretrained off-the-shelf embeddings. The fourth group shows the corresponding results for bi-directional LSTMs. Finally, the fifth row presents the results for bi-directional LSTM with our conversational embeddings. To compare our results with the results of Jeong, Lin, and Lee (2009) , we ran our models on 12-class classification task in addition to our original 5-class task.",
"cite_spans": [
{
"start": 445,
"end": 471,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results for",
"sec_num": "5.2.2"
},
{
"text": "It can be observed that all of our LSTM-RNNs achieve state-of-the-art results, and the bi-directional ones with pretrained embeddings generally perform better than others in terms of the F 1 -score. The best results are obtained with our conversational embeddings. Our best model B-LSTM conv-glove (B-LSTM with Glove conversational embeddings) gives absolute improvements of about 5.0% and 3.5% in F 1 compared to the n-gram and all-features models, respectively, of Jeong, Lin, and Lee (2009) . This is Table 9 Results on MRDA (synchronous) meeting corpus in macro-averaged F 1 and accuracy. Accuracy numbers are shown in parentheses. Top two rows report results from Jeong, Lin, and Lee (2009) for their model with n-gram and all feature sets. Best results are boldfaced. Accuracy numbers significantly superior to the best baselines are marked with *. remarkable because our LSTM-RNNs learn the sentence representation automatically from the word sequence and do not use any hand-engineered features.",
"cite_spans": [
{
"start": 467,
"end": 493,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
},
{
"start": 669,
"end": 695,
"text": "Jeong, Lin, and Lee (2009)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 504,
"end": 511,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results for",
"sec_num": "5.2.2"
},
{
"text": "Results on Asynchronous Data Sets. Now let us consider the results in Table 10 for the asynchronous data sets-QC3, TA, and BC3. We show the results of our models based on 5-fold cross validation in addition to the random (20%) test set in Table 8 . The 5-fold setting allows us to get more generic performance of the models on a particular data set. For simplicity, we only report the results for Glove embeddings that were found to be superior to word2vec embeddings. We can observe trends similar to those for MRDA: (i) bidirectional LSTMs outperform their unidirectional counterparts, (ii) pretrained Glove vectors provide better results than the randomly initialized ones, and (iii) conversational word embeddings give the best results among the embedding types. When we compare these results with those of the baselines (ME bow and MLP bow ), we see our method outperforms those on QC3 and TA (3.8% to 8.0%), but fails to do so on BC3. This is due to the small size of the data that affects deep neural methods like LSTM-RNNs, which usually require much labeled data to learn an effective compositional model. In the following, we show the effect of adding more labeled data from the MRDA meeting corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 78,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 239,
"end": 246,
"text": "Table 8",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "MRDA",
"sec_num": null
},
{
"text": "To validate our claim that LSTM-RNNs can learn a more effective model for our task when they are provided with enough training data, we create a concatenated training setting by merging the training and the development sets of the four corpora in Table 8 (see the Train and Dev. columns in the last row); the test set for each data set remains the same. We will refer to this train-test setting as CONCAT. Table 11 shows the results of the baseline and the B-LSTM models on the three test sets for this concatenated training setting. We notice that our B-LSTM models with pretrained embeddings outperform ME bow and MLP bow significantly. Again, the conversational Glove embeddings prove to be the best word vectors giving the best results across the data sets. Our best model gives absolute improvements of 2% to 12% in F 1 across the data sets over the best baselines.",
"cite_spans": [],
"ref_spans": [
{
"start": 247,
"end": 254,
"text": "Table 8",
"ref_id": "TABREF6"
},
{
"start": 406,
"end": 414,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "When we compare these results with those in Table 10 , we notice that with more heterogeneous data sets, B-LSTM, by virtue of its distributed and condensed representation, generalizes well across different domains. In contrast, ME and MLP, because of their BOW representation, suffer from the data diversity of different domains. These Table 11 Macro-averaged F 1 and Accuracy (in parentheses) results for training on the concatenated (CONCAT) data set without any explicit domain adaptation. Best results are boldfaced. Accuracy numbers significantly higher than the best baseline MLP bow are marked with *. results also confirm that B-LSTM gives better sentence representation than BOW when it is given enough training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 44,
"end": 52,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 336,
"end": 344,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "Comparison with Other Classifiers and Sentence Encoders. Now, we compare our best B-LSTM model (i.e., B-LSTM conv-glove ) with other classifiers and sentence encoders in the concatenated (CONCAT) training setting. The models that we compare with are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "(a) ME conv-glove : We represent each sentence as a concatenated vector of its word vectors, and train a MaxEnt (ME) classifier based on this representation. For word vectors, we use our best performing conversational Glove vectors as we use in our B-LSTM conv-glove model. We set a maximum sentence length of 100 words, and used zero-padding for shorter sentences. This model has a total of 100 (input words) \u00d7 300 (embedding dimensions) \u00d7 5 (class labels) = 150,000 trainable parameters. 14 (b) MLP conv-glove : We represent each sentence similarly as above, and train a one-hidden layer Multi-layer Perceptron (MLP) based on the representation. The hidden layer has 1, 000 units, which is determined based on the performance on the development set. This model has a total of 100 \u00d7 300 \u00d7 1000 \u00d7 5 = 150,000,000 parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "(c) ME conv-glove-averaging : We represent each sentence as a mean vector of its word vectors, and train a MaxEnt classifier using this representation. This model has a total of 300 \u00d7 5 = 1, 500 trainable parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "(d) SVM conv-glove-averaging : We train a SVM classifier based on the mean vector. 15 In our training, we use a linear kernel with the default C value of 1.0.",
"cite_spans": [
{
"start": 83,
"end": 85,
"text": "15",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "(e) ME skip-thought : We encode each sentence with the skip-thought encoder of Kiros et al. (2015) . The skip-thought model uses an encoder-decoder framework to learn the sentence representation in a task-agnostic (unsupervised) way. It encodes each sentence with a GRU-RNN (Cho et al. 2014) , and uses the encoded vector to decode the words of the neighboring sentences using another GRU-based RNN as a language model. The model is originally trained on the BookCorpus 16 with a vocabulary size of 20K words. It then uses the CBOW word2vec vectors (Mikolov et al. 2013a) to expand the vocabulary size to 930,911 words. Following the recommendation from the authors, we use the combine-skip model that concatenates the vectors encoded by a uni-directional encoder (uni-skip) and a bi-directional encoder (bi-skip). The resulting vectors are of 4,800 dimensions-the first 2,400 dimensions is the uniskip vector, and the last 2,400 dimensions is the bi-skip vector. We learn a ME classifier based on this representation. This model has a total of 4,800 \u00d7 5 = 24,000 parameters.",
"cite_spans": [
{
"start": 79,
"end": 98,
"text": "Kiros et al. (2015)",
"ref_id": null
},
{
"start": 274,
"end": 291,
"text": "(Cho et al. 2014)",
"ref_id": "BIBREF8"
},
{
"start": 549,
"end": 571,
"text": "(Mikolov et al. 2013a)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "(f) B-GRU: This is a variation of our B-LSTM conv-glove model, where we replace the LSTM cells with GRU cells (Cho et al. 2014) in the recurrent layer. This model has a total of 2 (bi-direction) \u00d7 3 (gates) \u00d7 128 2 (hidden-hidden) + 300 \u00d7 128 (input-hidden) + 256 \u00d7 5 = 329,984 trainable parameters (excluding the biases). Our LSTM-based RNN model uses four gates, which gives a total of 439,552 parameters to train.",
"cite_spans": [
{
"start": 110,
"end": 127,
"text": "(Cho et al. 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "We notice that all these models have a large number of parameters to learn an effective classification model for our task using the sentence representation as input features. Similar to our B-LSTM, the B-GRU and the skip-thought models are compositional, that is, they compose the sentence representation from the representation of its words using the sentence structure. Although the 4,800 dimensional sentence representation for skipthought is not learned on the task, the associated weight parameters in the ME skip-thought model are trained on the task. Table 12 presents the results. It can be observed that in general the compositional methods perform better than the non-compositional ones (e.g., averaging, concatenation), and when the compositional method is trained on the task, we get the best performance on two out of three data sets. In particular, our B-LSTM conv-glove gets the best results on QC3 and TA, outperforming B-GRU conv-glove by a slight margin in F 1 . 17 The ME skip-thought performs the best on BC3, and close to the best results on TA. This is not so surprising because the skip-thought model encodes a sentence like a neural conversation model (Vinyals and Le 2015) , and it has been shown that such models capture information relevant to speech acts (Ritter, Cherry, and Dolan 2010) .",
"cite_spans": [
{
"start": 981,
"end": 983,
"text": "17",
"ref_id": null
},
{
"start": 1176,
"end": 1197,
"text": "(Vinyals and Le 2015)",
"ref_id": "BIBREF79"
},
{
"start": 1283,
"end": 1315,
"text": "(Ritter, Cherry, and Dolan 2010)",
"ref_id": "BIBREF61"
}
],
"ref_spans": [
{
"start": 558,
"end": 566,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "To further analyze the cases where B-LSTM conv-glove makes a difference, Figure 7 shows the corresponding confusion matrices for B-LSTM conv-glove and MLP conv-glove on the concatenated testsets of QC3, TA, and BC3. In general, our classifiers get confused between Response and Statement, and between Suggestion and Statement the most. We noticed a similar observation in the human annotations, where annotators had difficulties with these three acts. It is noticeable that B-LSTM conv-glove is less affected by class imbalance, and it can detect the Suggestion and Polite acts much more correctly than MLP conv-glove . This indicates that LSTM-RNNs can model the grammar of the sentence when composing the words into phrases and sentences sequentially.",
"cite_spans": [],
"ref_spans": [
{
"start": 73,
"end": 81,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Adding Meeting Data.",
"sec_num": "5.2.3"
},
{
"text": "We have seen that semi-supervised learning in the form of word embeddings learned from a large unlabeled conversational corpus benefits our B-LSTM model. In the previous section, we witnessed further performance gains by exploiting more labeled data from the synchronous domain (MRDA). However, these methods make a simplified assumption that the conversational data comes from the same distribution. As mentioned before, the conversations in QC3, TA, or BC3 are quite different from MRDA meeting conversations in terms of style (spoken vs. written) and vocabulary usage. We believe that the results can be improved further by modeling the shift of domains (or distributions) explicitly. In Section 3.2, we described two adaptation scenarios: (i) unsupervised, where no annotated data is available in the target domains, and (ii) supervised, where some annotated data is available in the target domain. We use all the available labels in the CONCAT data set for our supervised training. This makes the adaptation results comparable with our pre-adaptation results reported earlier in Table 12 . Table 13 presents the results for the adapted B-LSTM conv-glove model under the above training conditions (last two rows). For comparison, we have also shown the results for two baselines: (i) a transfer B-LSTM conv-glove model in the first row that is trained on only MRDA (source domain) data, and (ii) a merge B-LSTM conv-glove model in the second row that is trained on the concatenation of MRDA and the target domain data (QC3, TA, or BC3). Recall that the merge model is the one that gave the best results so far (i.e., last row of Table 11 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 1084,
"end": 1092,
"text": "Table 12",
"ref_id": "TABREF1"
},
{
"start": 1095,
"end": 1103,
"text": "Table 13",
"ref_id": "TABREF1"
},
{
"start": 1633,
"end": 1641,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Effectiveness of Domain Adaptation.",
"sec_num": "5.2.4"
},
{
"text": "We can observe that without any labeled data in the target domain, the adapted B-LSTM conv-glove in the third row performs worse than the transfer baseline in QC3 and TA. In this case, because the out-of-domain labeled data set (MRDA) is much larger, it overwhelms the model, inducing features that are not relevant for the task in the target domain. However, when we provide the model with some labeled in-domain examples ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Domain Adaptation.",
"sec_num": "5.2.4"
},
{
"text": "Confusion matrices for (a) MLP conv-glove and (b) B-LSTM conv-glove on the test sets of QC3, TA, and BC3. P stands for Polite, Q for Question, R for Response, ST for Statement, and SU stands for Suggestion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Figure 7",
"sec_num": null
},
{
"text": "Results on the concatenated (CONCAT) data set with adversarial training. in the supervised adaptation setting (last row), we observe gains over the merge model (second row) in all three data sets. Remarkably, the absolute improvements in F 1 for BC3 and TA are more than 11% and 3%, respectively. To analyze further the performance of our adapted model, Figure 8 presents the F 1 scores with varying amounts of labeled data in the target domain. It can be noticed that for all three data sets, the largest improvements come from the first 25% of the labeled data. The gains from the second quartile are also relatively higher than the last two quartiles for TA and QC3. This demonstrates the effectiveness of our adversarial domain adaptation method. In the future, it will be interesting to compare adversarial training with other domain adaptation methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 354,
"end": 362,
"text": "Figure 8",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Table 13",
"sec_num": null
},
{
"text": "Conversation-Level Data Set for CRFs. To demonstrate the effectiveness of CRFs in capturing inter-sentence dependencies in an asynchronous conversation, we create another training setting called CONV-LEVEL (Conversation-level), in which training instances are entire conversations and the random splits are done at the conversation level (as opposed to sentence) for the asynchronous corpora. This is required because the CRFs perform joint learning and inference based on an entire conversation. Table 14 shows the resulting data sets that we use to train and evaluate our CRFs. We have 229 conversations for training and 27 conversations for development. 18 The test sets contain 5, 20, and 5 conversations for QC3, TA, and BC3, respectively.",
"cite_spans": [
{
"start": 644,
"end": 659,
"text": "development. 18",
"ref_id": null
}
],
"ref_spans": [
{
"start": 497,
"end": 505,
"text": "Table 14",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Effectiveness of CRFs",
"sec_num": "5.3"
},
{
"text": "Baselines and CRF Variants. We use the following three models as baselines:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of CRFs",
"sec_num": "5.3"
},
{
"text": "(a) ME bow : a MaxEnt model with BOW representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of CRFs",
"sec_num": "5.3"
},
{
"text": "(b) Adapted B-LSTM conv-glove (semi-supervised): This model performs adversarial semi-supervised domain adaptation using labeled sentences from MRDA and CONV-LEVEL training sets. Note that this is our best system so far (see Table 13 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 225,
"end": 233,
"text": "Table 13",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Effectiveness of CRFs",
"sec_num": "5.3"
},
{
"text": "(c) ME adapt-lstm : a MaxEnt learned from the sentence embeddings extracted from the adapted B-LSTM conv-glove (semi-supervised), that is, the sentence embeddings are used as feature vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of CRFs",
"sec_num": "5.3"
},
{
"text": "We experiment with the CRF variants shown in Table 1 . Similar to ME adapt-lstm , the CRFs are trained on the CONV-LEVEL training set using the sentence embeddings extracted by applying the adapted B-LSTM conv-glove (semi-supervised) model. The CRF models are therefore the structured versions of the ME adapt-lstm baseline. . Table 15 shows our results on the CONV-LEVEL data sets. We can notice that CRFs generally outperform MEs in accuracy, and for some CRF variants we get better results in both macro F 1 and accuracy. This indicates that there are conversational dependencies between the sentences in a conversation.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 325,
"end": 335,
"text": ". Table 15",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Effectiveness of CRFs",
"sec_num": "5.3"
},
{
"text": "If we compare the CRF variants, we can see that the model that does not consider any link across comments (CRF LC-NO ) performs the worst. A simple linear chain connection between sentences in their temporal order (CRF LC-LC ) does not improve much. This indicates that the linear chain CRF (Lafferty, McCallum, and Pereira 2001) is not the most appropriate model for capturing conversational dependencies in asynchronous conversations.",
"cite_spans": [
{
"start": 291,
"end": 329,
"text": "(Lafferty, McCallum, and Pereira 2001)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": null
},
{
"text": "The CRF LC-LC 1 is one of the well performing models and performs significantly better than the adapted B-LSTM conv-glove . 19 This model considers linear chain connections between sentences in a comment and only to the first comment. When we change this model to consider relations with every sentence in the first comment (CRF LC-FC 1 ), this improves the performance further, giving the best results in two of the three data sets. This indicates that there are strong dependencies between the sentences of the initial comment and the sentences of the rest of the comments, and these dependencies are better captured if the relations between them are explicitly considered. The CRF FC-FC also yields as good results as CRF LC-FC 1 . This could be attributed to the robustness of the fully connected CRF, which learns from all possible pairwise relations. Another interesting observation is that no single graph structure performs the best across all conversation types. For example, CRF LC-FC 1 gives the highest F 1 scores for QC3 and BC3, whereas CRF FC-FC gives the highest results for TA. This shows the variation and the complicated ways in which participants communicate with each other in these conversations. One interesting future work would be to learn the underlying conversational structure automatically. However, we believe that in order to learn an effective model, this would require more labeled data.",
"cite_spans": [
{
"start": 124,
"end": 126,
"text": "19",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": null
},
{
"text": "To see some real examples in which CRF by means of its global learning and inference makes a difference, let us consider the example in Figure 1 again. We notice that the two sentences in comment C 4 were mistakenly identified as Statement and Response, respectively, by the B-LSTM conv-glove local model. However, by considering these two sentences together with others in the conversation, the global CRF LC-LC 1 , CRF LC-FC 1 , and CRF FC-FC models were able to correct them (see GLOBAL). CRF LC-LC could correctly identify the first one as Question.",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": null
},
{
"text": "We have presented a novel two-step framework for speech act recognition in asynchronous conversation. An LSTM-RNN first composes sentences of a conversation into vector representations by considering the word order in a sentence. Then a pairwise CRF jointly models the inter-sentence dependencies in a conversation. In order to mitigate the problem of limited annotated data in the asynchronous domains, we further adapt the LSTM-RNN to learn from synchronous meeting conversations using adversarial training of neural networks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "6."
},
{
"text": "We experimented with different LSTM variants (uni-vs. bi-directional, random vs. pretrained initialization), and different CRF variants, depending on the underlying graph structure. We trained word2vec and Glove conversational word embeddings from a large conversational corpus. We trained our models on many different settings using synchronous and asynchronous corpora, including in-domain training, concatenated training, unsupervised adaptation, supervised adaptation, and conversation level CRF joint training. We evaluated our approach on a synchronous data set (meeting) and three asynchronous data sets (two forum data sets and one e-mail data set), one of which is presented in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "6."
},
{
"text": "Our experiments show that conversational word embeddings, especially conversational Glove, are quite beneficial for learning better sentence representations for the speech act classification task through bidirectional LSTM. This is especially true when the amount of labeled data in the target domain is limited. Adding more labeled data from synchronous domains yields improvements for bi-LSTMs, but even more gains can be achieved by domain adaptation with adversarial training. Further experiments with CRFs show that global joint models improve over local models given that the models consider the right graph structure. In particular, the LC-FC 1 and FC-FC graph structures were among the best performers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "6."
},
{
"text": "This work leads us to a number of future directions. First, we would like to combine CRFs with LSTM-RNNs for doing the two steps jointly, so that the LSTM-RNNs can learn the embeddings directly using the global thread-level feedback. This would require the backpropagation algorithm to take error signals from the loopy BP inference. Second, we would also like to apply our models to conversations, where the graph structure is extractable using metadata or other clues, for example, the fragment quotation graphs for e-mail threads (Carenini, Ng, and Zhou 2008) . One interesting future work would be to jointly model the conversational structure (e.g., reply-to structure) and the speech acts so that the two tasks can inform each other.",
"cite_spans": [
{
"start": 533,
"end": 562,
"text": "(Carenini, Ng, and Zhou 2008)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "6."
},
{
"text": "In another direction, we would like to evaluate our speech act recognition model on extrinsic tasks. In a separate thread of work, we are developing coherence models for asynchronous conversations (Nguyen and Joty 2017; Joty, Mohiuddin, and Tien Nguyen 2018) . Such coherence models could be useful for a number of downstream tasks including next utterance (or comment) ranking, conversation generation, and thread reconstruction (Nguyen et al. 2017) . We are now looking into whether speech act information can help us in building better coherence models for asynchronous conversations. We also plan to evaluate the utility of speech acts in downstream NLP tasks involving asynchronous conversations like next utterance ranking (Lowe et al. 2015) , conversation generation (Ritter, Cherry, and Dolan 2010) , and summarization (Murray, Carenini, and Ng 2010) . Finally, we hope that the new corpus, the conversational word embeddings, and the source code that we have made publicly available in this work will facilitate other researchers in extending our work and in applying speech act models to their NLP tasks.",
"cite_spans": [
{
"start": 197,
"end": 219,
"text": "(Nguyen and Joty 2017;",
"ref_id": "BIBREF52"
},
{
"start": 220,
"end": 258,
"text": "Joty, Mohiuddin, and Tien Nguyen 2018)",
"ref_id": "BIBREF27"
},
{
"start": 430,
"end": 450,
"text": "(Nguyen et al. 2017)",
"ref_id": "BIBREF52"
},
{
"start": 729,
"end": 747,
"text": "(Lowe et al. 2015)",
"ref_id": "BIBREF40"
},
{
"start": 774,
"end": 806,
"text": "(Ritter, Cherry, and Dolan 2010)",
"ref_id": "BIBREF61"
},
{
"start": 827,
"end": 858,
"text": "(Murray, Carenini, and Ng 2010)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Directions",
"sec_num": "6."
},
{
"text": "Portions of this work were previously published in the ACL-2016 conference proceeding (Joty and Hoque 2016) . This article significantly extends the published work in several ways, most notably: (i) we train new word2vec and Glove word embeddings based on a large conversational corpus, and show their effectiveness by comparing with off-the-shelf word embeddings (Section 4.2 and the Results section), (ii) we extend the LSTM-RNN for domain adaptation using adversarial training of neural networks (Section 3.2), (iii) we evaluate the domain adapted LSTM-RNN model on meeting and forum data sets (Section 5.2), and (iv) we train and evaluate CRFs based on sentence embeddings extracted from the adapted LSTM-RNN (Section 5.3). Beside these extensions, a significant portion of the article was rewritten to adapt to a journal-style publication.",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Joty and Hoque 2016)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bibliographic Note",
"sec_num": null
},
{
"text": "Taken from http://www.qatarliving.com/forum/advice-help/posts/study-canada. 2 Speech acts are also known as \"dialog acts\" in the literature.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is bias associated with each nonlinear transformation, which we have omitted for notational simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This is also known as one-hot vector representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For simplicity, we list U and V parameters of LSTM in a generic way rather than being specific to the gates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available from https://ntunlpsg.github.io/project/speech-act/. 7 http://www.qatarliving.com/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Available from https://code.google.com/archive/p/word2vec/. 9 Available from https://nlp.stanford.edu/projects/glove/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Significance tests operate on individual instances rather than individual classes; thus not applicable for macro F 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "More hidden layers worsened the performance. 12 Other algorithms like Adagrad or RMSProp gave similar results. 13 l 1 and l 2 regularization on weights did not work well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For simplicity, we excluded the bias vectors from our computation. 15 SVM training with linear kernel did not scale to the concatenated vector. 16 http://yknzhu.wixsite.com/mbweb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "There is no significant difference between the accuracy numbers for B-GRU and B-LSTM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the concatenated sets as train and dev sets. 19 Significance was computed based on the accuracy on the concatenated test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Aseel Ghazal for her efforts in creating the new QC3 corpus. We also thank Enamul Hoque for running and organizing some of the experiments reported in the ACL-2016 paper. Many thanks to the anonymous reviewers for their insightful comments on the ACL-2016 submitted version.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Plow: A collaborative task learning agent",
"authors": [
{
"first": "James",
"middle": [],
"last": "Allen",
"suffix": ""
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "Lucian",
"middle": [],
"last": "Galescu",
"suffix": ""
},
{
"first": "Hyuckchul",
"middle": [],
"last": "Jung",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Swift",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Taysom",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 22nd National Conference on Artificial Intelligence",
"volume": "2",
"issue": "",
"pages": "1514--1519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Allen, James, Nathanael Chambers, George Ferguson, Lucian Galescu, Hyuckchul Jung, Mary Swift, and William Taysom. 2007. Plow: A collaborative task learning agent. In Proceedings of the 22nd National Conference on Artificial Intelligence - Volume 2, AAAI'07, pages 1514-1519.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "How To Do Things with Words",
"authors": [
{
"first": "John",
"middle": [],
"last": "Austin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Langshaw",
"suffix": ""
}
],
"year": 1962,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Austin, John Langshaw. 1962. How To Do Things with Words. Harvard University Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR, San Diego, CA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning the structure of task-driven human-human dialogs",
"authors": [
{
"first": "Srinivas",
"middle": [],
"last": "Bangalore",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [
"Di"
],
"last": "Fabbrizio",
"suffix": ""
},
{
"first": "Amanda",
"middle": [],
"last": "Stent",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 44nd Annual Meeting of the Association for Computational Linguistics, ACL'06",
"volume": "",
"issue": "",
"pages": "201--208",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bangalore, Srinivas, Giuseppe Di Fabbrizio, and Amanda Stent. 2006. Learning the structure of task-driven human-human dialogs. In Proceedings of the 44nd Annual Meeting of the Association for Computational Linguistics, ACL'06, pages 201-208.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Summarizing online forum discussions-can dialog acts of individual messages help?",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Prakhar",
"middle": [],
"last": "Biyani",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2127--2131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bhatia, Sumit, Prakhar Biyani, and Prasenjit Mitra. 2014. Summarizing online forum discussions-can dialog acts of individual messages help? In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2127-2131, Doha.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Summarizing emails with conversational cohesion and subjectivity",
"authors": [
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL'08",
"volume": "",
"issue": "",
"pages": "353--361",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carenini, Giuseppe, Raymond T. Ng, and Xiaodong Zhou. 2008. Summarizing emails with conversational cohesion and subjectivity. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL'08, pages 353-361, OH.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On the collective classification of e-mail \"speech acts",
"authors": [
{
"first": "Vitor",
"middle": [
"R"
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2005,
"venue": "SIGIR '05: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "345--352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carvalho, Vitor R. and William W. Cohen. 2005. On the collective classification of e-mail \"speech acts.\" In SIGIR '05: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 345-352, New York, NY.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Gulcehre",
"middle": [],
"last": "Caglar",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cho, Kyunghyun, Bart van Merri\u00ebnboer, Gulcehre Caglar, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to classify e-mail into \"speech acts",
"authors": [
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Vitor",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"M"
],
"last": "Carvalho",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "309--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cohen, William W., Vitor R. Carvalho, and Tom M. Mitchell. 2004. Learning to classify e-mail into \"speech acts.\" In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 309-316.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Meeting recorder project: Dialog act labeling guide",
"authors": [
{
"first": "Rajdip",
"middle": [],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Sonali",
"middle": [],
"last": "Bhagat",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Carvey",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
}
],
"year": 2004,
"venue": "ICSI Tech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dhillon, Rajdip, Sonali Bhagat, Hannah Carvey, and Elizabeth Shriberg. 2004. Meeting recorder project: Dialog act labeling guide. Technical report, ICSI Tech., Berkeley, USA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Recognition of dialogue acts in multiparty meetings using a switching DBN",
"authors": [
{
"first": "Alfred",
"middle": [],
"last": "Dielmann",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Transactions on Audio, Speech, and Language Processing",
"volume": "16",
"issue": "",
"pages": "1303--1314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dielmann, Alfred and Steve Renals. 2008. Recognition of dialogue acts in multiparty meetings using a switching DBN. IEEE Transactions on Audio, Speech, and Language Processing, 16:1303-1314.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Transition-based dependency parsing with stack long short-term memory",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Wang",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Austin",
"middle": [],
"last": "Matthews",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dyer, Chris, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334-343.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Learning to detect conversation focus of threaded discussions",
"authors": [
{
"first": "Donghui",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Jihie",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06",
"volume": "",
"issue": "",
"pages": "208--215",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Feng, Donghui, Erin Shaw, Jihie Kim, and Eduard Hovy. 2006. Learning to detect conversation focus of threaded discussions. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06, pages 208-215, Stroudsburg, PA.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Behind the article: Recognizing dialog acts in Wikipedia talk pages",
"authors": [
{
"first": "Oliver",
"middle": [],
"last": "Ferschke",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
},
{
"first": "Yevgen",
"middle": [],
"last": "Chebotar",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12",
"volume": "",
"issue": "",
"pages": "777--786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferschke, Oliver, Iryna Gurevych, and Yevgen Chebotar. 2012. Behind the article: Recognizing dialog acts in Wikipedia talk pages. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12, pages 777-786, Avignon.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient, feature-based, conditional random field parsing",
"authors": [
{
"first": "J",
"middle": [],
"last": "Finkel",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Kleeman",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL'08",
"volume": "",
"issue": "",
"pages": "959--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finkel, J., A. Kleeman, and C. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL'08, pages 959-967, Columbus, OH.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Domain-adversarial training of neural networks",
"authors": [
{
"first": "Yaroslav",
"middle": [],
"last": "Ganin",
"suffix": ""
},
{
"first": "Evgeniya",
"middle": [],
"last": "Ustinova",
"suffix": ""
},
{
"first": "Hana",
"middle": [],
"last": "Ajakan",
"suffix": ""
},
{
"first": "Pascal",
"middle": [],
"last": "Germain",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Laviolette",
"suffix": ""
},
{
"first": "Mario",
"middle": [],
"last": "Marchand",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Lempitsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Machine Learning Research",
"volume": "17",
"issue": "1",
"pages": "2096--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ganin, Yaroslav, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, Fran\u00e7ois Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(1):2096-2030.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Part-of-speech tagging for twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gimpel, Kevin, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 42-47.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Understanding the difficulty of training deep feedforward neural networks",
"authors": [
{
"first": "Xavier",
"middle": [],
"last": "Glorot",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2010,
"venue": "JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010)",
"volume": "9",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Glorot, Xavier and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In JMLR W&CP: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics (AISTATS 2010), volume 9, pages 249-256, Sardinia. Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generative adversarial nets",
"authors": [
{
"first": "Ian",
"middle": [],
"last": "Goodfellow",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Pouget-Abadie",
"suffix": ""
},
{
"first": "Mehdi",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Warde-Farley",
"suffix": ""
},
{
"first": "Sherjil",
"middle": [],
"last": "Ozair",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of Advances in Neural Information Processing Systems Conference 27, NIPS '14",
"volume": "9",
"issue": "",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of Advances in Neural Information Processing Systems Conference 27, NIPS '14, pages 2672-2680, Montr\u00e9al. Hochreiter, Sepp and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A classification-based approach to question answering in discussion boards",
"authors": [
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
}
],
"year": 2009,
"venue": "32nd Annual International ACM SIGIR Conference on Research and Development on Information Retrieval",
"volume": "",
"issue": "",
"pages": "171--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hong, Liangjie and Brian D. Davison. 2009. A classification-based approach to question answering in discussion boards. In 32nd Annual International ACM SIGIR Conference on Research and Development on Information Retrieval, pages 171-178, Boston, MA.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Opinion mining with deep recurrent neural networks",
"authors": [
{
"first": "Ozan",
"middle": [],
"last": "Irsoy",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "720--728",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Irsoy, Ozan and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 720-728, Doha.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Semi-supervised speech act recognition in emails and forums",
"authors": [
{
"first": "Minwoo",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Gary Geunbae",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1250--1259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeong, Minwoo, Chin-Yew Lin, and Gary Geunbae Lee. 2009. Semi-supervised speech act recognition in emails and forums. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1250-1259, Singapore.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A latent variable recurrent neural network for discourse-driven language models",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "332--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ji, Yangfeng, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse-driven language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 332-342.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Unsupervised modeling of dialog acts in asynchronous conversations",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI'11",
"volume": "",
"issue": "",
"pages": "1--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joty, Shafiq, Giuseppe Carenini, and Chin-Yew Lin. 2011. Unsupervised modeling of dialog acts in asynchronous conversations. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, IJCAI'11, pages 1-130, Barcelona.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Topic segmentation and labeling in asynchronous conversations",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Artificial Intelligence Research",
"volume": "47",
"issue": "1",
"pages": "521--573",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joty, Shafiq, Giuseppe Carenini, and Raymond T. Ng. 2013. Topic segmentation and labeling in asynchronous conversations. Journal of Artificial Intelligence Research, 47(1):521-573.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Speech act modeling of written asynchronous conversations with task-specific embeddings and conditional structured models",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Enamul",
"middle": [],
"last": "Hoque",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1746--1756",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joty, Shafiq and Enamul Hoque. 2016. Speech act modeling of written asynchronous conversations with task-specific embeddings and conditional structured models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL '16, pages 1746-1756, Berlin.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Coherence modeling of asynchronous conversations: A neural entity grid approach",
"authors": [
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Tasnim Mohiuddin",
"suffix": ""
},
{
"first": "Dat Tien",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "558--568",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joty, Shafiq, Muhammad Tasnim Mohiuddin, and Dat Tien Nguyen. 2018. Coherence modeling of asynchronous conversations: A neural entity grid approach. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 558-568.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Dialog act classification using acoustic and discourse information of maptask data",
"authors": [
{
"first": "Fatema",
"middle": [
"N"
],
"last": "Julia",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Khan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Iftekharuddin",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence)",
"volume": "",
"issue": "",
"pages": "1472--1479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia, Fatema N. and Khan M. Iftekharuddin. 2008. Dialog act classification using acoustic and discourse information of maptask data. 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 1472-1479.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual, draft 13",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Liz",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Debra",
"middle": [],
"last": "Biasca",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jurafsky, Dan, Liz Shriberg, and Debra Biasca. 1997. Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual, draft 13. Technical Report, University of Colorado at Boulder & +SRI International.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Recurrent convolutional neural networks for discourse compositionality",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality",
"volume": "",
"issue": "",
"pages": "119--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalchbrenner, Nal and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 119-126.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dialogue act classification in domain-independent conversations using a deep recurrent neural network",
"authors": [
{
"first": "Hamed",
"middle": [],
"last": "Khanpour",
"suffix": ""
},
{
"first": "Nishitha",
"middle": [],
"last": "Guntakandla",
"suffix": ""
},
{
"first": "Rodney",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2012--2021",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khanpour, Hamed, Nishitha Guntakandla, and Rodney Nielsen. 2016. Dialogue act classification in domain-independent conversations using a deep recurrent neural network. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2012-2021.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kingma, Diederik P. and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Cambridge",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Skip-thought vectors. In Proceedings of the 28th International Conference on Neural Information Processing Systems -Volume 2, NIPS'15, pages 3294-3302, Cambridge, MA. Lafferty, John, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lample, Guillaume, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Sequential short-text classification with recurrent and convolutional neural networks",
"authors": [
{
"first": "Ji",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Franck",
"middle": [],
"last": "Young",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dernoncourt",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "515--520",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee, Ji Young and Franck Dernoncourt. 2016. Sequential short-text classification with recurrent and convolutional neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 515-520.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "When are tree structures necessary for deep learning of representations?",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2304--2314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, Jiwei, Thang Luong, Dan Jurafsky, and Eduard Hovy. 2015. When are tree structures necessary for deep learning of representations? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2304-2314, Lisbon.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Fine-grained opinion mining with recurrent neural networks and word embeddings",
"authors": [
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Helen",
"middle": [],
"last": "Meng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1433--1443",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Pengfei, Shafiq Joty, and Helen Meng. 2015. Fine-grained opinion mining with recurrent neural networks and word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1433-1443, Lisbon.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Conversation trees: A grammar model for topic structure in forums",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1543--1553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Louis, Annie and Shay B. Cohen. 2015. Conversation trees: A grammar model for topic structure in forums. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1543-1553, Lisbon.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The Ubuntu dialogue corpus: A large data set for research in unstructured multi-turn dialogue systems",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Lowe",
"suffix": ""
},
{
"first": "Nissan",
"middle": [],
"last": "Pow",
"suffix": ""
},
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2015,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lowe, Ryan, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu dialogue corpus: A large data set for research in unstructured multi-turn dialogue systems. CoRR, abs/1506.08909.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "End-to-end sequence labeling via bi-directional LSTM-CNNS-CRF",
"authors": [
{
"first": "Xuezhe",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1064--1074",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ma, Xuezhe and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNS-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Using question-answer pairs in extractive summarization of e-mail conversations",
"authors": [
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "Lokesh",
"middle": [],
"last": "Shrestha",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2007,
"venue": "CICLing",
"volume": "",
"issue": "",
"pages": "542--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McKeown, Kathleen, Lokesh Shrestha, and Owen Rambow. 2007. Using question-answer pairs in extractive summarization of e-mail conversations. In CICLing, pages 542-550.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Statistical language models based on neural networks",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, Tom\u00e1\u0161. 2012. Statistical language models based on neural networks. Ph.D. thesis, Brno University of Technology.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv:1301.3781.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Linguistic Regularities in Continuous Space Word Representations",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Zweig",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '13",
"volume": "",
"issue": "",
"pages": "746--751",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT '13, pages 746-751, Atlanta, GA.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Machine Learning A Probabilistic Perspective",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murphy, Kevin. 2012. Machine Learning A Probabilistic Perspective. The MIT Press.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Loopy belief propagation for approximate inference: An empirical study",
"authors": [
{
"first": "Kevin",
"middle": [
"P"
],
"last": "Murphy",
"suffix": ""
},
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99",
"volume": "",
"issue": "",
"pages": "467--475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murphy, Kevin P., Yair Weiss, and Michael I. Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99, pages 467-475, Stockholm.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Interpretation and transformation for abstracting conversations",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "894--902",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murray, Gabriel, Giuseppe Carenini, and Raymond Ng. 2010. Interpretation and transformation for abstracting conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 894-902, Los Angeles, CA.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Incorporating speaker and discourse features into speech summarization",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Renals",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
},
{
"first": "Johanna",
"middle": [],
"last": "Moore",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06",
"volume": "",
"issue": "",
"pages": "367--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Murray, Gabriel, Steve Renals, Jean Carletta, and Johanna Moore. 2006. Incorporating speaker and discourse features into speech summarization. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06, pages 367-374, Stroudsburg, PA.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Rectified linear units improve restricted Boltzmann machines",
"authors": [
{
"first": "Vinod",
"middle": [],
"last": "Nair",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 27th International Conference on Machine Learning, ICML '10",
"volume": "",
"issue": "",
"pages": "807--814",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nair, Vinod and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, ICML '10, pages 807-814, Haifa.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Thread reconstruction in conversational data using neural coherence models",
"authors": [
{
"first": "Dat",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Dat",
"suffix": ""
},
{
"first": "Basma",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "Boussaha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rijke",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL'17",
"volume": "",
"issue": "",
"pages": "1320--1330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nguyen, Dat and Shafiq Joty. 2017. A neural local coherence model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL'17, pages 1320-1330, Association for Computational Linguistics, Vancouver. Nguyen, Dat, Shafiq Joty, Basma Boussaha, and Maarten Rijke. 2017. Thread reconstruction in conversational data using neural coherence models. In Proceedings of the Neu-IR 2017 SIGIR Workshop on Neural Information Retrieval, NeuIR'17, Tokyo.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Extractive summarization and dialogue act modeling on e-mail threads: An integrated probabilistic approach",
"authors": [
{
"first": "Tatsuro",
"middle": [],
"last": "Oya",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "133--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oya, Tatsuro and Giuseppe Carenini. 2014. Extractive summarization and dialogue act modeling on e-mail threads: An integrated probabilistic approach. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 133-140, Philadelphia, PA.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "User's guide to sigf: Significance testing by approximate randomisation",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Pad\u00f3",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pad\u00f3, Sebastian. 2006. User's guide to sigf: Significance testing by approximate randomisation.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Mixed membership Markov models for unsupervised conversation modeling",
"authors": [
{
"first": "Michael",
"middle": [
"J"
],
"last": "Paul",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "12",
"issue": "",
"pages": "94--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul, Michael J. 2012. Mixed membership Markov models for unsupervised conversation modeling. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 94-104, Stroudsburg, PA.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference",
"authors": [
{
"first": "Judea",
"middle": [],
"last": "Pearl",
"suffix": ""
}
],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pearl, Judea. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP '14",
"volume": "2",
"issue": "",
"pages": "412--418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, Jeffrey, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP '14, pages 1532-1543, Doha. Plank, Barbara, Anders S\u00f8gaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "It's not you, it's me: Detecting flirting and its misperception in speed-dates",
"authors": [
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": ";",
"middle": [],
"last": "Ranganath",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Rajesh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mcfarland",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "334--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qadir, Ashequl and Ellen Riloff. 2011. Classifying sentences as speech acts in message board posts. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 748-758, Edinburgh. Ranganath, Rajesh, Dan Jurafsky, and Dan McFarland. 2009. It's not you, it's me: Detecting flirting and its misperception in speed-dates. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 334-342.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Profiling student interactions in threaded discussions with speech act classifiers",
"authors": [
{
"first": "Sujith",
"middle": [],
"last": "Ravi",
"suffix": ""
},
{
"first": "Jihie",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work",
"volume": "",
"issue": "",
"pages": "357--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ravi, Sujith and Jihie Kim. 2007. Profiling student interactions in threaded discussions with speech act classifiers. In Proceedings of the 2007 Conference on Artificial Intelligence in Education: Building Technology Rich Learning Contexts That Work, pages 357-364, IOS Press, Amsterdam.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "HMM and neural network based speech act detection",
"authors": [
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
}
],
"year": 1999,
"venue": "ICASSP",
"volume": "",
"issue": "",
"pages": "497--500",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ries, Klaus. 1999. HMM and neural network based speech act detection. In ICASSP, pages 497-500, Phoenix, AZ.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ritter, Alan, Colin Cherry, and Bill Dolan. 2010. Unsupervised modeling of Twitter conversations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 172-180, Stroudsburg, PA. Rush, Alexander M., Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Sequencing in conversational openings",
"authors": [
{
"first": "Emanuel",
"middle": [
"A"
],
"last": "Schegloff",
"suffix": ""
}
],
"year": 1968,
"venue": "American Anthropologist",
"volume": "70",
"issue": "6",
"pages": "1075--1095",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schegloff, Emanuel A. 1968. Sequencing in conversational openings. American Anthropologist, 70(6):1075-1095.",
"links": null
},
"BIBREF63": {
"ref_id": "b63",
"title": "Evaluation methods for unsupervised word embeddings",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Schnabel",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "298--307",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schnabel, Tobias, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 298-307, Lisbon.",
"links": null
},
"BIBREF64": {
"ref_id": "b64",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE Transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schuster, Mike and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF65": {
"ref_id": "b65",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [
"V"
],
"last": "Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16",
"volume": "",
"issue": "",
"pages": "3776--3783",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serban, Iulian V., Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI'16, pages 3776-3783.",
"links": null
},
"BIBREF66": {
"ref_id": "b66",
"title": "Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection",
"authors": [
{
"first": "",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Hung-Yi",
"middle": [],
"last": "Sheng-Syun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shen, Sheng-syun and Hung-yi Lee. 2016. Neural attention models for sequence classification: Analysis and application to key term extraction and dialogue act detection. CoRR, abs/1604.00077.",
"links": null
},
"BIBREF67": {
"ref_id": "b67",
"title": "Detection of question-answer pairs in e-mail conversations",
"authors": [
{
"first": "Lokesh",
"middle": [],
"last": "Shrestha",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2004,
"venue": "COLING '04: Proceedings of the 20th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shrestha, Lokesh and Kathleen McKeown. 2004. Detection of question-answer pairs in e-mail conversations. In COLING '04: Proceedings of the 20th International Conference on Computational Linguistics, page 889, Morristown, NJ.",
"links": null
},
"BIBREF68": {
"ref_id": "b68",
"title": "Linguistic Structure Prediction. Synthesis Lectures on Human Language Technologies",
"authors": [
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smith, Noah A. 2011. Linguistic Structure Prediction. Synthesis Lectures on Human Language Technologies. Morgan and Claypool.",
"links": null
},
"BIBREF69": {
"ref_id": "b69",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.",
"links": null
},
"BIBREF70": {
"ref_id": "b70",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "A",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "",
"pages": "339--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stolcke, A., N. Coccaro, R. Bates, P. Taylor, C. Van Ess-Dykema, K. Ries, E. Shriberg, D. Jurafsky, R. Martin, and M. Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26:339-373.",
"links": null
},
"BIBREF71": {
"ref_id": "b71",
"title": "Fast and accurate entity recognition with iterated dilated convolutions",
"authors": [
{
"first": "Emma",
"middle": [],
"last": "Strubell",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Verga",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Belanger",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '17",
"volume": "",
"issue": "",
"pages": "2670--2680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Strubell, Emma, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate entity recognition with iterated dilated convolutions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '17, pages 2670-2680, Copenhagen.",
"links": null
},
"BIBREF72": {
"ref_id": "b72",
"title": "Dialog act tagging with support vector machines and hidden markov models",
"authors": [
{
"first": "Dinoj",
"middle": [],
"last": "Surendran",
"suffix": ""
},
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of Interspeech/ICSLP",
"volume": "",
"issue": "",
"pages": "1950--1953",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surendran, Dinoj and Gina-Anne Levow. 2006. Dialog act tagging with support vector machines and hidden markov models. In Proceedings of Interspeech/ICSLP, pages 1950-1953.",
"links": null
},
"BIBREF73": {
"ref_id": "b73",
"title": "An introduction to conditional random fields. Foundations and Trends in Machine Learning",
"authors": [
{
"first": "C",
"middle": [],
"last": "Sutton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "4",
"issue": "",
"pages": "267--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sutton, C. and A. McCallum. 2012. An introduction to conditional random fields. Foundations and Trends in Machine Learning, 4(4):267-373.",
"links": null
},
"BIBREF74": {
"ref_id": "b74",
"title": "Dialogue act recognition in synchronous and asynchronous conversations",
"authors": [
{
"first": "Maryam",
"middle": [],
"last": "Tavafi",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL)",
"volume": "",
"issue": "",
"pages": "117--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tavafi, Maryam, Yashar Mehdad, Shafiq Joty, Giuseppe Carenini, and Raymond Ng. 2013. Dialogue act recognition in synchronous and asynchronous conversations. In Proceedings of the 14th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 117-121, Metz.",
"links": null
},
"BIBREF75": {
"ref_id": "b75",
"title": "Inter-document contextual language model",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Ingrid",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Zukerman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "762--766",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tran, Quan Hung, Ingrid Zukerman, and Gholamreza Haffari. 2016. Inter-document contextual language model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 762-766.",
"links": null
},
"BIBREF76": {
"ref_id": "b76",
"title": "Preserving distributional information in dialogue act classification",
"authors": [
{
"first": "Quan",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Ingrid",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Zukerman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haffari",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2151--2156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tran, Quan Hung, Ingrid Zukerman, and Gholamreza Haffari. 2017. Preserving distributional information in dialogue act classification. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2151-2156.",
"links": null
},
"BIBREF77": {
"ref_id": "b77",
"title": "Word representations: A simple and general method for semi-supervised learning",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Turian",
"suffix": ""
},
{
"first": "Lev",
"middle": [],
"last": "Ratinov",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10",
"volume": "",
"issue": "",
"pages": "384--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Turian, Joseph, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10, pages 384-394, Stroudsburg, PA.",
"links": null
},
"BIBREF78": {
"ref_id": "b78",
"title": "A publicly available annotated corpus for supervised e-mail summarization",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Ulrich",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Murray",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
}
],
"year": 2008,
"venue": "AAAI'08 EMAIL Workshop",
"volume": "",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulrich, Jan, Gabriel Murray, and Giuseppe Carenini. 2008. A publicly available annotated corpus for supervised e-mail summarization. In AAAI'08 EMAIL Workshop, pages 77-82, AAAI, Chicago, IL.",
"links": null
},
"BIBREF79": {
"ref_id": "b79",
"title": "A neural conversational model",
"authors": [
{
"first": "",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vinyals, Oriol and Quoc V. Le. 2015. A neural conversational model. CoRR, abs/1506.05869.",
"links": null
},
"BIBREF80": {
"ref_id": "b80",
"title": "Accelerated training of conditional random fields with stochastic gradient methods",
"authors": [
{
"first": "S",
"middle": [
"V N"
],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Nicol",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"W"
],
"last": "Schraudolph",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"P"
],
"last": "Schmidt",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning, ICML '06",
"volume": "",
"issue": "",
"pages": "969--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vishwanathan, S. V. N., Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. 2006. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 969-976, Pittsburgh, PA.",
"links": null
},
"BIBREF81": {
"ref_id": "b81",
"title": "Tweet acts: A speech act classifier for Twitter",
"authors": [
{
"first": "Soroush",
"middle": [],
"last": "Vosoughi",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International AAAI Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "711--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vosoughi, Soroush and Deb Roy. 2016. Tweet acts: A speech act classifier for Twitter. In Proceedings of the 10th International AAAI Conference on Weblogs and Social Media, pages 711-714.",
"links": null
},
"BIBREF82": {
"ref_id": "b82",
"title": "Comparing the mean field method and belief propagation for approximate inference in MRFS",
"authors": [
{
"first": "Yair",
"middle": [],
"last": "Weiss",
"suffix": ""
}
],
"year": 2001,
"venue": "Advanced Mean Field Methods",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiss, Yair. 2001. Comparing the mean field method and belief propagation for approximate inference in MRFS. In Saad and Opper, editors, Advanced Mean Field Methods, pages 1-12, MIT Press.",
"links": null
},
"BIBREF83": {
"ref_id": "b83",
"title": "Artificial companions as a new kind of interface to the future internet",
"authors": [
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilks, Yorick. 2006. Artificial companions as a new kind of interface to the future internet. OII Research Report No. 13.",
"links": null
},
"BIBREF84": {
"ref_id": "b84",
"title": "Towards scalable speech act recognition in Twitter: Tackling insufficient training data",
"authors": [
{
"first": "Renxian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Dehong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Workshop on Semantic Analysis in Social Media",
"volume": "",
"issue": "",
"pages": "18--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhang, Renxian, Dehong Gao, and Wenjie Li. 2012. Towards scalable speech act recognition in Twitter: Tackling insufficient training data. In Proceedings of the Workshop on Semantic Analysis in Social Media, pages 18-27, Avignon.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "My son wish to do his bachelor degree in Mechanical Engineering in an affordable Canadian university. \u21d2 HUMAN: Statement, LOCAL: Statement, GLOBAL: Statement The information available in the net and the people who wish to offer services are too many and some are misleading. \u21d2 HUMAN: Statement, LOCAL: Statement, GLOBAL: Statement"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "\u21d2 HUMAN: Suggestion, LOCAL: Suggestion, GLOBAL: Suggestion most of them accept on-line or email application. \u21d2 HUMAN: Statement, LOCAL: Statement, GLOBAL: Statement Good luck !! \u21d2 HUMAN: Polite, LOCAL: Polite, GLOBAL: Polite C 4 : snakyy21: UVIC is a short form of? I have already started researching for my brother and found \"College of North Atlantic\" and planning to visit their branch in Qatar to inquire about more details \u21d2 HUMAN: Question, LOCAL: Statement, GLOBAL: Question but not sure about the reputation.. \u21d2 HUMAN: Statement, LOCAL: Response, GLOBAL: Statement C 5 : thank you for sharing useful tips will follow your advise. \u21d2 HUMAN: Polite, LOCAL: Polite, GLOBAL: Polite Figure 1"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Adversarial LSTM-RNN for domain adaptation."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "(a) MLP conv-glove (b) B-LSTM conv-glove"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "scores of our adapted model with varying amounts of labeled in-domain data."
},
"TABREF1": {
"type_str": "table",
"text": "Connection types in CRF models.",
"content": "<table><tr><td>Tag</td><td>Connection type</td><td>Applicable to</td></tr><tr><td>NO</td><td>No connection between nodes</td><td>intra &amp; across</td></tr><tr><td>LC</td><td>Linear chain connection</td><td>intra &amp; across</td></tr><tr><td>FC</td><td>Fully connected</td><td>intra &amp; across</td></tr><tr><td>FC 1</td><td>Fully connected with first comment only</td><td>across</td></tr><tr><td>LC 1</td><td>Linear chain with first comment only</td><td>across</td></tr></table>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Dialog act tags and their relative frequencies in the BC3 and TripAdvisor (TA) corpora.",
"content": "<table><tr><td>Tag</td><td>Description</td><td>BC3</td><td>TA</td></tr><tr><td>S</td><td>Statement</td><td>69.56%</td><td>65.62%</td></tr><tr><td>P</td><td>Polite mechanism</td><td>6.97%</td><td>9.11%</td></tr><tr><td>QY</td><td>Yes-no question</td><td>6.75%</td><td>8.33%</td></tr><tr><td>AM</td><td>Action motivator</td><td>6.09%</td><td>7.71%</td></tr><tr><td>QW</td><td>Wh-question</td><td>2.29%</td><td>4.23%</td></tr><tr><td>A</td><td>Accept response</td><td>2.07%</td><td>1.10%</td></tr><tr><td>QO</td><td>Open-ended question</td><td>1.32%</td><td>0.92%</td></tr><tr><td>AA</td><td>Acknowledge and appreciate</td><td>1.24%</td><td>0.46%</td></tr><tr><td>QR</td><td>Or/or-clause question</td><td>1.10%</td><td>1.16%</td></tr><tr><td>R</td><td>Reject response</td><td>1.06%</td><td>0.64%</td></tr><tr><td>U</td><td>Uncertain response</td><td>0.79%</td><td>0.65%</td></tr><tr><td>QH</td><td>Rhetorical question</td><td>0.75%</td><td>0.08%</td></tr><tr><td colspan=\"4\">Biasca 1997) and Meeting Recorder Dialog Act (MRDA)</td></tr></table>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Statistics about TripAdvisor (TA), BC3, and QC3 corpora. Distribution of speech acts (in percentage) in our corpora.",
"content": "<table><tr><td/><td colspan=\"2\">Asynchronous</td></tr><tr><td>TA</td><td>BC3</td><td>QC3</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "Data sets and their statistics used for training the conversational word embeddings.",
"content": "<table><tr><td/><td>Domain</td><td>Data sets</td><td colspan=\"3\">Number of Threads Number of Tokens Number of Words</td></tr><tr><td colspan=\"2\">Asynchronous E-mail</td><td>W3C</td><td>23,940</td><td>21,465,830</td><td>546,921</td></tr><tr><td/><td>Forum</td><td>TripAdvisor</td><td>25,000</td><td>2,037,239</td><td>127,233</td></tr><tr><td/><td>Forum</td><td>Qatar Living</td><td>219,690</td><td>103,255,922</td><td>1,157,757</td></tr><tr><td>Synchronus</td><td>Meeting</td><td>MRDA</td><td>-</td><td>675,110</td><td>18,514</td></tr><tr><td/><td>Phone</td><td>SWBD</td><td>-</td><td>1,131,516</td><td>57,075</td></tr></table>",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "Roadmap to our experiments.",
"content": "<table><tr><td colspan=\"2\">Model Tested Training Regime</td><td colspan=\"2\">Section Corpora Used</td></tr><tr><td>LSTM-RNN</td><td>In-domain supervised</td><td>5.2.2</td><td>QC3/TA/BC3/MRDA (all labeled)</td></tr><tr><td/><td>Concatenation supervised</td><td>5.2.3</td><td>QC3+TA+BC3+MRDA (labeled)</td></tr><tr><td/><td>Unsup. adaptation</td><td>5.2.4</td><td>QC3/TA/BC3 (unlabeled) + MRDA (labeled)</td></tr><tr><td/><td>Semi-sup. adaptation</td><td>5.2.4</td><td>QC3/TA/BC3 (labeled) + MRDA (labeled)</td></tr><tr><td>CRFs</td><td>In-domain supervised</td><td>5.3</td><td>QC3/TA/BC3 (labeled; conversation level)</td></tr></table>",
"num": null,
"html": null
},
"TABREF6": {
"type_str": "table",
"text": "Number of sentences in train, development, and test sets for different data sets.",
"content": "<table><tr><td>Corpora</td><td>Type</td><td>Train</td><td>Dev.</td><td>Test</td></tr><tr><td>QC3</td><td>asynchronous</td><td>1,252</td><td>157</td><td>156</td></tr><tr><td>TA</td><td>asynchronous</td><td>2,968</td><td>372</td><td>371</td></tr><tr><td>BC3</td><td>asynchronous</td><td>1,065</td><td>34</td><td>133</td></tr><tr><td>MRDA</td><td>synchronous</td><td>50,865</td><td>8,366</td><td>10,492</td></tr><tr><td>Total (CONCAT)</td><td>asynchronous + synchronous</td><td>56,150</td><td>8,929</td><td>11,152</td></tr></table>",
"num": null,
"html": null
},
"TABREF8": {
"type_str": "table",
"text": "Results for in-domain training on QC3, TA, and BC3 asynchronous data sets in macro-averaged F 1 and accuracy (in parentheses). Best results are boldfaced. Accuracy numbers significantly superior to the best baselines are marked with *.",
"content": "<table><tr><td/><td>QC3</td><td/><td>TA</td><td/><td>BC3</td></tr><tr><td/><td>Testset</td><td>5 folds</td><td>Testset</td><td>5 folds</td><td>Testset</td><td>5 folds</td></tr><tr><td>ME bow</td><td>55.11 (76.28)</td><td>55.15 (73.16)</td><td>62.82 (82.47)</td><td>62.65 (85.04)</td><td colspan=\"2\">54.37 (84.47) 52.69 (81.78)</td></tr><tr><td>MLP bow</td><td>56.71 (74.35)</td><td>59.72 (72.46)</td><td>70.45 (83.83)</td><td>65.18 (84.02)</td><td colspan=\"2\">63.98 (84.58) 62.37 (82.04)</td></tr><tr><td>U-LSTM random</td><td>54.52 (70.51)</td><td>53.39 (67.22)</td><td>64.52 (80.32)</td><td>59.20 (80.06)</td><td colspan=\"2\">44.41 (81.95) 42.21 (72.44)</td></tr><tr><td>U-LSTM glove</td><td>59.95 (72.44)</td><td>55.56 (70.03)</td><td>67.70 (83.83)</td><td>60.82 (83.22)</td><td colspan=\"2\">45.67 (78.95) 43.75 (73.50)</td></tr><tr><td>U-LSTM conv-glove</td><td>60.59 (75.64)</td><td>58.70 (72.78)</td><td>69.48 (83.56)</td><td>64.64 (83.39)</td><td colspan=\"2\">53.51 (84.21) 49.67 (77.71)</td></tr><tr><td>B-LSTM random</td><td>57.57 (74.35)</td><td>58.24 (72.46)</td><td>74.70 (86.25*)</td><td>67.08 (84.53)</td><td colspan=\"2\">47.12 (81.20) 44.97 (77.59)</td></tr><tr><td>B-LSTM glove</td><td>59.16 (73.07)</td><td>58.86 (72.45)</td><td>75.49 (86.77*)</td><td>68.31 (83.81)</td><td colspan=\"2\">51.15 (84.21) 50.67 (75.59)</td></tr><tr><td>B-LSTM conv-glove</td><td>64.72 (77.56*)</td><td>63.47 (75.59*)</td><td>76.15 (86.52*)</td><td>69.59 (86.18*)</td><td colspan=\"2\">61.44 (83.45) 55.84 (79.95)</td></tr></table>",
"num": null,
"html": null
},
"TABREF10": {
"type_str": "table",
"text": "Comparison of different sentence encoders on the concatenated (CONCAT) data set. Best results are boldfaced. Accuracies significantly higher than ME skip-thought are marked with *.",
"content": "<table><tr><td>Encoder</td><td colspan=\"2\">Classifier Model Name</td><td>QC3 (Testset)</td><td>TA (Testset)</td><td>BC3 (Testset)</td></tr><tr><td>Concatenation</td><td>ME</td><td>ME conv-glove</td><td>60.52 (76.28)</td><td>75.47 (86.79)</td><td>60.46 (79.00)</td></tr><tr><td>Concatenation</td><td>MLP</td><td>MLP conv-glove</td><td>60.47 (73.07)</td><td>75.85 (86.52)</td><td>55.33 (78.00)</td></tr><tr><td>Averaging</td><td>ME</td><td>ME conv-glove-averaging</td><td>63.32 (76.92)</td><td>73.72 (84.09)</td><td>45.65 (74.00)</td></tr><tr><td>Averaging</td><td>SVM</td><td>SVM conv-glove-averaging</td><td>18.74 (60.89)</td><td>29.46 (64.69)</td><td>16.19 (68.00)</td></tr><tr><td>Skip-thought</td><td>ME</td><td>ME skip-thought</td><td>59.65 (78.13)</td><td>77.09 (86.22)</td><td>71.89 (89.00)</td></tr><tr><td>B-GRU</td><td>ME</td><td>B-GRU conv-glove</td><td>69.45 (81.41*)</td><td>77.77 (88.68*)</td><td>58.66 (79.00)</td></tr><tr><td>B-LSTM</td><td>ME</td><td>B-LSTM conv-glove</td><td>70.51 (80.77*)</td><td>78.08 (88.95*)</td><td>58.28 (79.00)</td></tr></table>",
"num": null,
"html": null
},
"TABREF12": {
"type_str": "table",
"text": "Data sets for CONV-LEVEL (conversation-level) setting to train, validate, and test our CRF models. Numbers in parentheses indicate the number of sentences.",
"content": "<table><tr><td/><td>Train</td><td>Dev.</td><td>Test</td></tr><tr><td>QC3</td><td>38 (1,332)</td><td>4 (111)</td><td>5 (122)</td></tr><tr><td>TA</td><td>160 (2,957)</td><td>20 (310)</td><td>20 (444)</td></tr><tr><td>BC3</td><td>31 (1,012)</td><td>3 (101)</td><td>5 (222)</td></tr><tr><td>Total</td><td>229 (5,301)</td><td>27 (522)</td><td>30 (788)</td></tr></table>",
"num": null,
"html": null
},
"TABREF13": {
"type_str": "table",
"text": "Results of CRFs on CONV-LEVEL data set. Best results are boldfaced. Accuracies significantly higher than adapted B-LSTM conv-glove are marked with *.",
"content": "<table><tr><td/><td>QC3</td><td>TA</td><td>BC3</td></tr><tr><td>ME bow</td><td>57.37 (69.18)</td><td>65.39 (85.04)</td><td>60.32 (80.74)</td></tr><tr><td colspan=\"2\">Adapted B-LSTM conv-glove (semi-sup) 67.34 (80.15)</td><td>70.36 (86.73)</td><td>62.65 (83.59)</td></tr><tr><td>ME adapt-lstm</td><td>62.36 (78.93)</td><td>63.31 (85.92)</td><td>58.32 (81.43)</td></tr><tr><td>CRF LC-NO</td><td>67.02 (79.51)</td><td>67.82 (86.94)</td><td>63.15 (84.65)</td></tr><tr><td>CRF LC-LC</td><td>67.12 (79.83)</td><td>67.94 (86.74)</td><td>63.75 (84.10)</td></tr><tr><td>CRF LC-LC 1 CRF LC-FC 1 CRF FC-FC</td><td colspan=\"3\">69.32 (81.03*) 68.84 (87.34) 70.11 (80.67) 69.73 (86.51) 69.65 (80.77) 72.31 (88.61*) 64.82 (86.18*) 64.22 (84.71) 66.34 (86.51*)</td></tr></table>",
"num": null,
"html": null
}
}
}
}