{
"paper_id": "N18-1035",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:50:32.265642Z"
},
"title": "Microblog Conversation Recommendation via Joint Modeling of Topics and Discourse",
"authors": [
{
"first": "Xingshan",
"middle": [],
"last": "Zeng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong",
"location": {}
},
"email": "xszeng@se.cuhk.edu.hk"
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lu",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northeastern University",
"location": {
"region": "US"
}
},
"email": "luwang@ccs.neu.edu"
},
{
"first": "Nicholas",
"middle": [],
"last": "Beauchamp",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northeastern University",
"location": {
"country": "US"
}
},
"email": "n.beauchamp@northeastern.edu"
},
{
"first": "Sarah",
"middle": [],
"last": "Shugars",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Northeastern University",
"location": {
"region": "US"
}
},
"email": "shugars.s@husky.neu.edu"
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Chinese University of Hong",
"location": {}
},
"email": "kfwong@se.cuhk.edu.hk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Millions of conversations are generated every day on social media platforms. With limited attention, it is challenging for users to select which discussions they would like to participate in. Here we propose a new method for microblog conversation recommendation. While much prior work has focused on postlevel recommendation, we exploit both the conversational context, and user content and behavior preferences. We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics. Experimental results on two Twitter datasets demonstrate that our system outperforms methods that only model content without considering discourse.",
"pdf_parse": {
"paper_id": "N18-1035",
"_pdf_hash": "",
"abstract": [
{
"text": "Millions of conversations are generated every day on social media platforms. With limited attention, it is challenging for users to select which discussions they would like to participate in. Here we propose a new method for microblog conversation recommendation. While much prior work has focused on postlevel recommendation, we exploit both the conversational context, and user content and behavior preferences. We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics. Experimental results on two Twitter datasets demonstrate that our system outperforms methods that only model content without considering discourse.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Online platforms have revolutionized the way individuals collect and share information (O'Connor et al., 2010; Lee and Ma, 2012; Bakshy et al., 2015) , but the vast bulk of online content is irrelevant or unpalatable to any given individual. A user interested in political discussion, for instance, might prefer content concerning a specific candidate or issue, and only then if discussed in a positive light without controversy (Adamic and Glance, 2005; Bakshy et al., 2015) .",
"cite_spans": [
{
"start": 87,
"end": 110,
"text": "(O'Connor et al., 2010;",
"ref_id": "BIBREF32"
},
{
"start": 111,
"end": 128,
"text": "Lee and Ma, 2012;",
"ref_id": "BIBREF26"
},
{
"start": 129,
"end": 149,
"text": "Bakshy et al., 2015)",
"ref_id": "BIBREF5"
},
{
"start": 429,
"end": 454,
"text": "(Adamic and Glance, 2005;",
"ref_id": "BIBREF0"
},
{
"start": 455,
"end": 475,
"text": "Bakshy et al., 2015)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "How do individuals facing such large quantities of superfluous material select which conversations to engage in, and how might we better algorithmically recommend conversations suited to individual users? We approach this problem from a microblog conversation recommendation framework. Where prior work has focused on the content of individual posts for recommendation (Chen ; Yan et al., 2012; Vosecky et al., 2014; He and Tan, 2015), we examine the entire history and context of a conversation, including both topical content and discourse modes such as agreement, question-asking, argument and other dialogue acts (Ritter et al., 2010). 1 And where Backstrom et al. (2013) leveraged conversation reply structure (such as previous user engagement), their model is unable to predict first entry into new conversations, while ours is able to predict both new and repeated entry into conversations based on a combination of topical and discourse features.",
"cite_spans": [
{
"start": 369,
"end": 374,
"text": "(Chen",
"ref_id": null
},
{
"start": 377,
"end": 394,
"text": "Yan et al., 2012;",
"ref_id": "BIBREF45"
},
{
"start": 395,
"end": 416,
"text": "Vosecky et al., 2014;",
"ref_id": "BIBREF41"
},
{
"start": 417,
"end": 434,
"text": "He and Tan, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 617,
"end": 638,
"text": "(Ritter et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 640,
"end": 641,
"text": "1",
"ref_id": null
},
{
"start": 652,
"end": 675,
"text": "Backstrom et al. (2013)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: Conversation 1 ... [U1]: The sheer cognitive dissonance required for a \"liberal\" to say Clinton is as bad as Trump is just staggering. [U2]: Hillarists, Troll; they insult Liberals trying to distract from Hillary's Conseratism. [U i ]: The message is posted by user U i . \"-\" is the dividing line between training history and test part. U 1 did not reengage in Conversation 1 but reengaged in Conversation 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To illustrate the interplay between topics and discourse, Figure 1 displays two snippets of conversations on Twitter collected during the 2016 United States presidential election. User U 1 participates in both conversations. The first conversation is centered around Clinton, and U 1 , who is more typically involved with conversations about candidate Sanders, does not return. In the second conversation, however, U 1 is involved in a heated back-and-forth debate, and thus is drawn back to a conversation that they may otherwise have abandoned but for their enjoyment of adversarial discourse.",
"cite_spans": [],
"ref_spans": [
{
"start": 58,
"end": 66,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Effective conversation prediction and recommendation requires an understanding of both user interests and discourse behaviors, such as agreement, disagreement, inquiry, backchanneling, and emotional reactions. However, acquiring manual labels for both is a time-consuming process and hard to scale for new datasets. We instead propose a unified statistical learning framework for conversation recommendation, which jointly learns (1) hidden factors that reflect user interests based on conversation history, and (2) topics and discourse modes in ongoing conversations, as discovered by a novel probabilistic latent variable model. Our model is built on the success of collaborative filtering (CF) in recommendation systems, where latent dimensions of product ratings or movie reviews are extracted to better capture user preferences (Linden et al., 2003; Salakhutdinov and Mnih, 2008; Wang and Blei, 2011; McAuley and Leskovec, 2013) . To the best of our knowledge, we are the first to model both topics and discourse modes as part of a CF framework and apply it to microblog conversation recommendation. 2 Experimental results on two Twitter conversation datasets show that our proposed model yields significantly better performance than state-of-theart post-level recommendation systems. For example, by leveraging both topical content and discourse structure, our model achieves a mean average precision (MAP) of 0.76 on conversations about the U.S. presidential election, compared with 0.70 by McAuley and Leskovec (2013) , which only considers topics. We further con-ducted detailed analysis on the latent topics and discourse modes and find that our model can discover reasonable topic and discourse representations, which play an important role in characterizing reply behaviors. Finally, we also provide a pilot study on recommendation for first time replies, which shows that our model outperforms comparable recommendation systems.",
"cite_spans": [
{
"start": 833,
"end": 854,
"text": "(Linden et al., 2003;",
"ref_id": "BIBREF27"
},
{
"start": 855,
"end": 884,
"text": "Salakhutdinov and Mnih, 2008;",
"ref_id": "BIBREF39"
},
{
"start": 885,
"end": 905,
"text": "Wang and Blei, 2011;",
"ref_id": "BIBREF42"
},
{
"start": 906,
"end": 933,
"text": "McAuley and Leskovec, 2013)",
"ref_id": "BIBREF30"
},
{
"start": 1105,
"end": 1106,
"text": "2",
"ref_id": null
},
{
"start": 1498,
"end": 1525,
"text": "McAuley and Leskovec (2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is structured as follows. The related work is discussed in Section 2. We then present our microblog conversation recommendation model in Section 3. The experimental setup and results are described in Sections 4 and 5. Finally, we conclude in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Social media has attracted increasing attention in digital communication research (Agichtein et al., 2008; Kwak et al., 2010; Wu et al., 2011) . The problem studied here is closely related to work on recommendation and response prediction in microblogs (Artzi et al., 2012; Hong et al., 2013) , where the goal is to predict whether a user will share or reply to a given post. Existing methods focus on measuring features that reflect personalized user interests, including topics (Hong et al., 2013) and network structures (Pan et al., 2013; He and Tan, 2015) . These features have been investigated under a learning to rank framework (Duan et al., 2010; Artzi et al., 2012) , graph ranking models (Yan et al., 2012; Feng and Wang, 2013; Alawad et al., 2016) , and neural network-based representation learning methods (Yu et al., 2016) .",
"cite_spans": [
{
"start": 82,
"end": 106,
"text": "(Agichtein et al., 2008;",
"ref_id": "BIBREF1"
},
{
"start": 107,
"end": 125,
"text": "Kwak et al., 2010;",
"ref_id": "BIBREF24"
},
{
"start": 126,
"end": 142,
"text": "Wu et al., 2011)",
"ref_id": "BIBREF44"
},
{
"start": 253,
"end": 273,
"text": "(Artzi et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 274,
"end": 292,
"text": "Hong et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 480,
"end": 499,
"text": "(Hong et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 523,
"end": 541,
"text": "(Pan et al., 2013;",
"ref_id": "BIBREF35"
},
{
"start": 542,
"end": 559,
"text": "He and Tan, 2015)",
"ref_id": "BIBREF16"
},
{
"start": 635,
"end": 654,
"text": "(Duan et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 655,
"end": 674,
"text": "Artzi et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 698,
"end": 716,
"text": "(Yan et al., 2012;",
"ref_id": "BIBREF45"
},
{
"start": 717,
"end": 737,
"text": "Feng and Wang, 2013;",
"ref_id": "BIBREF12"
},
{
"start": 738,
"end": 758,
"text": "Alawad et al., 2016)",
"ref_id": "BIBREF2"
},
{
"start": 818,
"end": 835,
"text": "(Yu et al., 2016)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Distinguishing from prior work that focuses on post-level recommendation, we tackle the challenges of predicting user reply behaviors at the conversation-level. In addition, our model not only captures latent factors such as the topical interests of users, but also leverages the automatically learned discourse structure. Much of the previous work on discourse structure and dialogue acts has relied on labeled data (Jurafsky et al., 1997; Stolcke et al., 2000) , while unsupervised approaches have not been applied to the problem of conversation recommendation (Woszczyna and Waibel, 1994; Crook et al., 2009; Ritter et al., 2010; Joty et al., 2011) .",
"cite_spans": [
{
"start": 417,
"end": 440,
"text": "(Jurafsky et al., 1997;",
"ref_id": "BIBREF22"
},
{
"start": 441,
"end": 462,
"text": "Stolcke et al., 2000)",
"ref_id": "BIBREF40"
},
{
"start": 563,
"end": 591,
"text": "(Woszczyna and Waibel, 1994;",
"ref_id": "BIBREF43"
},
{
"start": 592,
"end": 611,
"text": "Crook et al., 2009;",
"ref_id": "BIBREF10"
},
{
"start": 612,
"end": 632,
"text": "Ritter et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 633,
"end": 651,
"text": "Joty et al., 2011)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is also in line with conversation modeling for social media discussions (Ritter et al., 2010; Budak and Agrawal, 2013; Louis and Cohen, 2015; Cheng et al., 2017) . Topic modeling has been employed to identify conversation content on Twitter (Ritter et al., 2010) . In this work, we propose a probabilistic model to capture both topics and discourse modes as latent variables. A further line of work studies the reposting and reply structure of conversations (G\u00f3mez et al., 2011; Laniado et al., 2011; Backstrom et al., 2013; Budak and Agrawal, 2013) . But none of this work distinguishes the rich discourse functions of replies, which is modeled and exploited in our work.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Ritter et al., 2010;",
"ref_id": "BIBREF37"
},
{
"start": 103,
"end": 127,
"text": "Budak and Agrawal, 2013;",
"ref_id": "BIBREF7"
},
{
"start": 128,
"end": 150,
"text": "Louis and Cohen, 2015;",
"ref_id": "BIBREF28"
},
{
"start": 151,
"end": 170,
"text": "Cheng et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 250,
"end": 271,
"text": "(Ritter et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 467,
"end": 487,
"text": "(G\u00f3mez et al., 2011;",
"ref_id": "BIBREF14"
},
{
"start": 488,
"end": 509,
"text": "Laniado et al., 2011;",
"ref_id": "BIBREF25"
},
{
"start": 510,
"end": 533,
"text": "Backstrom et al., 2013;",
"ref_id": "BIBREF4"
},
{
"start": 534,
"end": 558,
"text": "Budak and Agrawal, 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our proposed microblog conversation recommendation framework is based on collaborative filtering and a novel probabilistic graphical model. Concretely, our objective function takes the form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Joint Model of Topic and Discourse for Recommendation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "min L + \u00b5 \u2022 N LL(C | \u0398)",
"eq_num": "(1)"
}
],
"section": "The Joint Model of Topic and Discourse for Recommendation",
"sec_num": "3"
},
{
"text": "This function encodes two types of information. First, L models user reply preference in a similar fashion to collaborative filtering (CF) (Hu et al., 2008; Pan et al., 2008) . It captures topics of interests and discourse structures users are commonly involved (e.g., argumentation), and takes the form of mean square error (MSE) based on user reply history. This part is detailed in Section 3.1. The second term, N LL(C | \u0398), denotes the negative log-likelihood of a set of conversations C, with \u0398 containing all parameters. A probabilistic model is described in Section 3.2 that shows how the topical content and discourse structures of conversations are captured by these latent variables.",
"cite_spans": [
{
"start": 139,
"end": 156,
"text": "(Hu et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 157,
"end": 174,
"text": "Pan et al., 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Joint Model of Topic and Discourse for Recommendation",
"sec_num": "3"
},
{
"text": "The hyperparameter \u00b5 controls the trade-off between the two effects. 2 regularization is also added for parameters to avoid model overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Joint Model of Topic and Discourse for Recommendation",
"sec_num": "3"
},
{
"text": "For the rest of this section, we first present the construction of L and N LL(C | \u0398) in Sections 3.1 and 3.2. We then discuss how these two components can be mutually informed by each other in Section 3.3. Finally, the generative process and parameter learning are described in Section 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Joint Model of Topic and Discourse for Recommendation",
"sec_num": "3"
},
{
"text": "Our user reply preference modeling is built on the success of collaborative filtering (CF) for product ratings. However, classic CF problems, such as product recommendation, generally rely on explicit user feedback. Unlike user ratings on products, our input lacks explicit feedback from users about negative preferences and nonresponse. Therefore, we follow one-class Collaborative Filtering (Hu et al., 2008; Pan et al., 2008) , which weights positive instances higher during training and is thus suited to our data. Formally, for user u and conversation c, we measure reply preference based on the MSE between predicted preference score p u,c and reply history r u,c . r u,c equals 1 if u is in the conversation history; otherwise, it is 0. The first term of objective (Eq. 1) takes the following form:",
"cite_spans": [
{
"start": 393,
"end": 410,
"text": "(Hu et al., 2008;",
"ref_id": "BIBREF18"
},
{
"start": 411,
"end": 428,
"text": "Pan et al., 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "L = |U | u=1 |C| c=1 f u,c \u2022 (p u,c \u2212 r u,c ) 2 (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "where U consists of users {u} and C is a set of conversations {c} in a dataset. f u,c is the corresponding weight for a conversation c and a target user u. Intuitively, it has a large value if positive feedback (user replied) is observed. Therefore, we adapt the formulation from Pan et al. (2008) :",
"cite_spans": [
{
"start": 280,
"end": 297,
"text": "Pan et al. (2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "f u,c = s if r u,c = 1 (i.e., user replied) 1 if r u,c = 0 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "where s > 1, an integer hyperparameter to be tuned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "Inspired by prior models (Koren et al., 2009; McAuley and Leskovec, 2013) , we propose the following latent factor model to describe p u,c :",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Koren et al., 2009;",
"ref_id": "BIBREF23"
},
{
"start": 46,
"end": 73,
"text": "McAuley and Leskovec, 2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "pu,c = \u03bb \u2022 \u03b3 U u \u2022 \u03b3 C c + (1 \u2212 \u03bb) \u2022 \u03b4 U u \u2022 \u03b4 C c + bu + bc + a (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "\u03b3 U u and \u03b3 C c are K-dimensional latent vectors that encode topic-specific information (where K is the number of latent topics) for users and conversations. Specifically, \u03b3 U u reflects the topical interests of u, with higher value \u03b3 U u,k indicating greater interest by u in topic k. \u03b3 C c captures the extents that topics are discussed in conversation c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "Similarly, D-dimensional vectors \u03b4 U u and \u03b4 C c capture discourse structures in shaping reply behaviors (where D is the number of discourse clusters). \u03b4 U u reflects the discourse behaviors u prefers, such as u 1 often enjoys arguments as in the second conversation of Figure 1 , while \u03b4 C c captures the discourse modes used throughout conversation c. By multiplying user and conversation factors, we can measure the corresponding similarity. The predicted score p u,c thereby reflects the tendency for a user u to be involved in conversation c.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 278,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "As pointed out by McAuley and Leskovec (2013), these latent vectors often encode hidden factors that are hard to interpret under a CF framework. Therefore, in Section 3.2, we present a novel probabilistic model which can extract interpretable topics and discourse modes as word distributions. We then describe how they can be aligned with the latent vectors of \u03b3 C and \u03b4 U .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "Parameter a is an offset parameter, b u and b c are user and conversation biases, and \u03bb \u2208 [0, 1] serves as the weight for trading offs of topic and discourse factors in reply preference modeling.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reply Preference (L)",
"sec_num": "3.1"
},
{
"text": "Here we present a novel probabilistic model that learns coherent word distributions for latent topics and discourse modes of conversations. Formally, we assume that each conversation c \u2208 C contains M c messages, and each message m has N c,m words. We distinguish three latent components -discourse, topic, and background -underlying conversations, each with their own type of word distribution. At the corpus level, there are K topics represented by word distribution \u03c6 T",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "k (k = 1, 2, ..., K), while \u03c6 D d (d = 1, 2, ..., D)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "represents the D discourse modes embedded in corpus. In addition, we add a background word distribution \u03c6 B to capture general information (e.g., common words), which do not indicate either discourse or topic information. \u03c6 D d , \u03c6 T k , and \u03c6 B are all multinomial word distributions over vocabulary size V . Below describes more details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Message-level Modeling. Our model assigns two types of message-level multinomial variables to each message: z c,m reflects its latent topic and d c,m represents its discourse mode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Topic assignments. Due to the short nature of microblog posts, we assume each message m in conversation c contains only one topic, indexed as z c,m . This strategy has been proven useful to alleviate data sparsity for topic inference (Quan et al., 2015) . We further assume messages in the same conversation would focus on similar topics. We thus draw topic z c,m \u223c \u03b8 c , where \u03b8 c denotes the fractions of topics discussed in conversation c.",
"cite_spans": [
{
"start": 234,
"end": 253,
"text": "(Quan et al., 2015)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Discourse assignments. To capture discourse behaviors of u, distribution \u03c0 u is used to represent the discourse modes in messages posted by u. The discourse mode d c,m for message m is then generated from \u03c0 uc,m , where u c,m is the author of m in c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Word-level Modeling. We aim to separate discourse, topic, and background information for conversations. Therefore, for each word w c,m,n of message m, a ternary switcher x c,m,n \u2208 {DISC, TOPIC, BACK} controls word w c,m,n to fall into one of the three types: discourse, topic, and background.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Discourse words (DISC) are indicative of the discourse modes of messages. When x c,m,n = DISC (i.e., w c,m,n is assigned as a discourse word), word w c,m,n is generated from the discourse word distribution \u03c6 D dc,m where d c,m is discourse assignment to message m.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Topic words (TOPIC) describe the topical focus of a conversation. When x c,m,n = TOPIC, w c,m,n is assigned as a topic word and generated from \u03c6 T zc,m -word distribution given topic of m. Background words (BACK) capture the general information that is not related to discourse or topic. When word w c,m,n is assigned as a background word (x c,m,n = BACK), it is drawn from background distribution \u03c6 B .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Switching among Topic, Discourse, and Background. We further assume the word type switcher x c,m,n is sampled from a multinomial distribution which depends on the current discourse mode d c,m . The intuition is that messages of different discourse modes may show different distributions of the three word types. For instance, a statement message may contain more content words than a rhetorical question. Specifically, x c,m,n \u223c M ulti(\u03c4 dc,m ), where \u03c4 d is a 3-dimension stochastic vector that expresses the appearing probabilities of three kinds of words (DISC, TOPIC, BACK), when the discourse assignment is d. Stop words and punctuations are forced to be labeled as discourse or background. By explicitly distinguishing different types of words with switcher x c,m,n , we can thus separate word distributions that reflect discourse, topic, and background information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Likelihood. Based on the message-level and the word-level generation process, the probability of observing words in the given corpus is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P r(C | \u03b8, \u03c0, \u03c6, \u03c4 , z, d, x) = C c=1 Mc m=1 \u03b8c,z c,m \u03c0 uc,m,dc,m \u00d7 xc,m,n=BACK \u03c4 dc,m,BACK \u03c6 B wc,m,n \u00d7 xc,m,n=DISC \u03c4 dc,m,DISC \u03c6 D dc,m,wc,m,n \u00d7 xc,m,n=TOPIC \u03c4 dc,m,TOPIC \u03c6 T zc,m,wc,m,n",
"eq_num": "(5)"
}
],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "And we use negative log likelihood to model corpus likelihood effect in Eq. 1, i.e., N LL(C | \u0398) = \u2212 log(P r(C | \u0398), where parameters set \u0398 = {\u03b8, \u03c0, \u03c6, \u03c4 , z, d, x}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Likelihood N LL(C | \u0398)",
"sec_num": "3.2"
},
{
"text": "Latent Variables As mentioned above, the hidden factors discovered in Section 3.1 lack interpretability, which can be boosted by the learned latent topics and discourse modes in Section 3.2. However, it is nontrivial to link the topic-related parameters of \u03b3 C c to the conversation topic distributions of \u03b8 c , since the former takes real values from \u2212\u221e to +\u221e while the latter is a stochastic vector. Therefore, we follow the strategy from McAuley and Leskovec (2013) to apply a softmax function over \u03b3 C c :",
"cite_spans": [
{
"start": 441,
"end": 468,
"text": "McAuley and Leskovec (2013)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mutually Informed User Preference and",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b8 c,k = exp(\u03ba T \u03b3 C c,k ) K k =1 exp(\u03ba T \u03b3 C c,k )",
"eq_num": "(6)"
}
],
"section": "Mutually Informed User Preference and",
"sec_num": "3.3"
},
{
"text": "We further assume that the discourse mode preference by users, \u03b4 U u , can also be informed by the discourse mode distribution captured by \u03c0 u , i.e., a user who enjoys arguments may be willing to participate another. So similarly, we define:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutually Informed User Preference and",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0 u,d = exp(\u03ba D \u03b4 U u,d ) D d =1 exp(\u03ba D \u03b4 U u,d )",
"eq_num": "(7)"
}
],
"section": "Mutually Informed User Preference and",
"sec_num": "3.3"
},
{
"text": "where \u03ba T and \u03ba D are learnable parameters that control the \"peakiness\" of the transformation. For example, a larger \u03ba T indicates a more focused conversation, while a smaller \u03ba T means users discuss diverse topics. Finally, softmax transformation is also applied to Figure 2 ). This is to ensure that the distributions \u03c6 * * and \u03c4 d are stochastic vectors. In doing so, these distributions can be learned via optimizing \u03c8 * * and \u03c7 d , which take any value and thus ensure that the cost function in Eq. 1 is optimized without considering any parameter constraints.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 275,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Mutually Informed User Preference and",
"sec_num": "3.3"
},
{
"text": "\u03c6 T k , \u03c6 D d , \u03c6 B ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mutually Informed User Preference and",
"sec_num": "3.3"
},
{
"text": "Our word generation process is displayed in Parameter Learning. For learning, we randomly initialize all learnable parameters and then alternate between the following two steps:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
{
"text": "Step 1. Fix topic and discourse assignments z and d, and word type switcher x, then optimize the remaining parameters in Eq. 1 by L-BFGS (Nocedal, 1980) :",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "L-BFGS (Nocedal, 1980)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Update a, b, \u03b3 * , \u03b4 * , \u03ba * , \u03c8 * , \u03c7 = argmin L + \u00b5 \u2022 N LL(C | \u0398)",
"eq_num": "(8)"
}
],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
{
"text": "Step 2. Sample topic and discourse assignments z and d at the message level and word type switcher x at the word level, using the distributions, computed according to parameters optimized in step 1:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sample z c,m , d c,m , x c,m,n with probabilities p(z c,m = k) = \u03b8 c,k p(d c,m = d) = \u03c0 uc,m,d p(x c,m,n = BACK) = \u03c6 B wc,m,n \u03c4 dc,m,BACK p(x c,m,n = DISC) = \u03c6 D dc,m,wc,m,n \u03c4 dc,m,DISC p(x c,m,n = T OP IC) = \u03c6 T zc,m,wc,m,n \u03c4 dc,m,T OP IC",
"eq_num": "(9)"
}
],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
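Step 2 can be sketched in a few lines; the distributions below are hypothetical stand-ins for the \u03b8, \u03c0, \u03c6, and \u03c4 values computed from the Step-1 parameters:

```python
import random

def sample_categorical(probs, rng):
    """Draw an index from a categorical distribution given by `probs`."""
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

rng = random.Random(0)
theta_c = [0.7, 0.2, 0.1]  # hypothetical topic mixture theta_c of conversation c
pi_u = [0.5, 0.3, 0.2]     # hypothetical discourse preference pi_u of the author
z = sample_categorical(theta_c, rng)  # topic assignment z_{c,m}
d = sample_categorical(pi_u, rng)     # discourse assignment d_{c,m}

# Word-type switcher x_{c,m,n}: Eq. 9 gives products phi * tau proportional
# to the probabilities, so they are normalized before sampling
# (the phi and tau values below are hypothetical).
scores = [0.02 * 0.3, 0.05 * 0.2, 0.08 * 0.5]  # BACK, DISC, TOPIC
total = sum(scores)
x = sample_categorical([s / total for s in scores], rng)
```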
{
"text": "Step 2 is analogous to Gibbs Sampling (Griffiths, 2002) in probabilistic graphical models, such as LDA (Blei et al., 2003) . However, distinguishing from previous models, the multinomial distributions in our models are not drawn from a Dirichlet prior. Instead, they are computed based on the parameters learned in Step 1.",
"cite_spans": [
{
"start": 103,
"end": 122,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
{
"text": "Our learning process stops when the change of parameters is small (i.e., below a pre-specified Figure 3: Horizontal axis: number of conversations that a user is involved. Vertical axis: number of users fall in the category (unit: 1,000). Notice that most of users (about 98%) participate in less than 10 conversations. threshold). Multiple restarts are tried, and similar results are achieved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generative Process and Model Learning",
"sec_num": "3.4"
},
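The stopping criterion can be sketched as a maximum-change test (the tolerance value is illustrative; the paper only states that a pre-specified threshold is used):

```python
def converged(old_params, new_params, tol=1e-4):
    """Stop alternating between the two steps when the largest absolute
    parameter change falls below a pre-specified threshold."""
    return max(abs(o - n) for o, n in zip(old_params, new_params)) < tol
```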
{
"text": "Datasets. We collected two microblog conversation datasets from Twitter for experiments 3 : one contains discussions about the U.S. presidential election (henceforth US Election), the other gathers conversations of diverse topics based on the tweets released by TREC 2011 microblog track (henceforth TREC) 4 . US Election was collected from January to June of 2016 using Twitter's Streaming API 5 with a small set of political keywords. 6 To recover conversations, Tweet Search API 7 was used to retrieve messages with the \"inreply-to\" relations to collect tweets in a recursive way until full conversations were recovered. Statistics of the datasets are shown in Table 1 . Figure 3 displays the number of conversations individual users participated in. As can be seen, most users are involved in only a few conversations. Simply leveraging personal chat history will not produce good performance for conversation recommendation.",
"cite_spans": [
{
"start": 437,
"end": 438,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 664,
"end": 671,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 674,
"end": 682,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "In our experiments, we predict whether a user will engage in a conversation given the previous messages in that conversation and past conversations the user is involved. For model training and testing, we divide conversations into three ordered segments, corresponding to training, development, and test sets at 75%, 12.5%, and 12.5%. 8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
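The ordered 75/12.5/12.5 split can be sketched per conversation; the minimum of one training turn follows footnote 8 (the rounding choices are illustrative):

```python
def split_turns(turns):
    """Chronologically split one conversation's turns into train/dev/test
    at roughly 75% / 12.5% / 12.5%, keeping at least one training turn."""
    n_train = max(1, int(len(turns) * 0.75))
    n_dev = (len(turns) - n_train) // 2
    return (turns[:n_train],
            turns[n_train:n_train + n_dev],
            turns[n_train + n_dev:])

train, dev, test = split_turns(list(range(8)))  # 8 turns -> 6 / 1 / 1
```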
{
"text": "Preprocessing and Hyperparameter Tuning. For preprocessing, links, mentions (i.e., @username), and hashtags in tweets were replaced with generic tags of \"URL\", \"MENTION\", and \"HASHTAG\". We then utilized the Twitter NLP tool 9 (Gimpel et al., 2011; Owoputi et al., 2013) for tokenization and non-alphabetic token removal. We removed stop words and punctuations for all comparisons to ensure comparable performance. We maintain a vocabulary with the 5,000 most frequent words.",
"cite_spans": [
{
"start": 226,
"end": 247,
"text": "(Gimpel et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 248,
"end": 269,
"text": "Owoputi et al., 2013)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
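The tag-replacement step can be sketched with simple regular expressions (the patterns are illustrative approximations of Twitter syntax, not the exact rules used by the Twitter NLP tool):

```python
import re

def normalize_tweet(text):
    """Replace links, @-mentions, and hashtags with generic tags,
    as done before tokenization."""
    text = re.sub(r"https?://\S+", "URL", text)
    text = re.sub(r"@\w+", "MENTION", text)
    text = re.sub(r"#\w+", "HASHTAG", text)
    return text

cleaned = normalize_tweet("@bob check https://t.co/x #vote")
# cleaned == "MENTION check URL HASHTAG"
```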
{
"text": "Our model parameters are tuned on the development set based on grid search, i.e. the parameters that give the lowest value for our objective are selected. Specifically, the number of discourse modes (D) and topics (K) are tuned to be 10. The trade-off parameter \u00b5 between user preference and corpus negative log-likelihood takes value of 0.1, and \u03bb, the parameter for balancing topic and discourse, is set to 0.5. Finally, the confidence parameter s takes a value of 200 to give higher weight for positive instances, i.e., a user replied to a conversation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Evaluation Metrics. Following prior work on social media post recommendation (Chen et al., 2012; Yan et al., 2012) , we treat our task on conversation recommendation as a ranking problem. Therefore, popular information retrieval evaluation metrics, including precision at K (P@K), mean average precision (MAP) (Manning et al., 2008) , and normalized Discounted Cumulative Gain at K (nDCG@K) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) are reported. The metrics are computed per user in the dataset and then averaged over all users. The values range from 0.0 to 1.0, with higher values indicating better performance. conversations randomly (RANDOM); 2) longer conversations (i.e., more words) ranked higher (LENGTH); 3) conversations with more distinct users ranked higher (POPULARITY). We further compare results with three established recommendation models:",
"cite_spans": [
{
"start": 77,
"end": 96,
"text": "(Chen et al., 2012;",
"ref_id": "BIBREF8"
},
{
"start": 97,
"end": 114,
"text": "Yan et al., 2012)",
"ref_id": "BIBREF45"
},
{
"start": 310,
"end": 332,
"text": "(Manning et al., 2008)",
"ref_id": "BIBREF29"
},
{
"start": 391,
"end": 422,
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
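The three metrics can be sketched for a single user's ranked list with binary relevance (the paper averages these per-user scores over all users; the implementations below are standard textbook definitions, not the paper's code):

```python
import math

def precision_at_k(ranked, relevant, k=1):
    """Fraction of the top-k ranked conversations the user actually joined."""
    return sum(1 for c in ranked[:k] if c in relevant) / k

def average_precision(ranked, relevant):
    """Mean of precision values at each relevant item's rank."""
    hits, total = 0, 0.0
    for i, c in enumerate(ranked, start=1):
        if c in relevant:
            hits += 1
            total += hits / i
    return total / max(1, len(relevant))

def ndcg_at_k(ranked, relevant, k=5):
    """DCG of the top-k list divided by the DCG of an ideal ordering."""
    dcg = sum(1.0 / math.log2(i + 1)
              for i, c in enumerate(ranked[:k], start=1) if c in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(k, len(relevant)) + 1))
    return dcg / ideal if ideal else 0.0
```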
{
"text": "\u2022 OCCF: one-class Collaborative Filtering (Pan et al., 2008) , which only considers users' reply history without modeling content in conversations.",
"cite_spans": [
{
"start": 42,
"end": 60,
"text": "(Pan et al., 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "\u2022 RSVM: ranking SVM (Joachims, 2002) , which ranks conversations for each user with the content and Twitter features as in Duan et al. (2010) .",
"cite_spans": [
{
"start": 20,
"end": 36,
"text": "(Joachims, 2002)",
"ref_id": "BIBREF20"
},
{
"start": 123,
"end": 141,
"text": "Duan et al. (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "\u2022 CTR: messages in one conversation are aggregated into one post and a state-of-the art Collaborative Filtering-based post recommendation model is applied (Chen et al., 2012) .",
"cite_spans": [
{
"start": 155,
"end": 174,
"text": "(Chen et al., 2012)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Finally, we also adapt the \"hidden factors as topics\" (HFT) model proposed in McAuley and Leskovec (2013) (henceforth ADAPTED HFT). Because the original model leverages the ratings for all product reviews and does not handle implicit user feedback well, we replace their user preference objective function with ours (Eq. 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "In this section, we first discuss our main evaluation in Section 5.1. A case study and corresponding discussion are provided in Section 5.2 to provide further insights, which is followed by an analysis of the topics and discourse modes discovered by our model (Section 5.3). We also examine our performance on first time replies (Section 5.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "Experimental results are displayed in Table 2 , where our model yields statistically significantly better results than baselines and comparisons (paired t-tests, p < 0.01). For P@K, we only report P@1, because a significant amount of users participate only in 1 or 2 conversations. For nDCG@K, different K values are experimented, which results in similar trend, so only nDCG@5 is reported. We find that the baselines that rank conversations with simple features (e.g., length or popularity) perform poorly. This implies that generic algorithms that do not consider conversation content or user preference cannot produce reasonable recommendations.",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Conversation Recommendation Results",
"sec_num": "5.1"
},
{
"text": "Although some non-baseline systems capture content in one way or another, only ADAPTED HFT and our model exploit latent topic models to better represent content in tweets, and outperform other methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversation Recommendation Results",
"sec_num": "5.1"
},
{
"text": "Compared to ADAPTED HFT, which only considers latent topics under a collaborative filtering framework, our model extracts both topics and discourse modes as latent variables, and shows superior performance on both datasets. Our discourse variables go beyond topical content to capture social behaviors that affect user engagement, such as 381 arguments, question-asking, agreement, and other discourse modes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conversation Recommendation Results",
"sec_num": "5.1"
},
{
"text": "To test the model performance based different levels of user engagement history, we further experiment with varying the length of conversations for training. Specifically, in addition to using 75% of conversation history, we also extract the first 25% and 50% of history as training. The rest of a conversation is separated equally for development and test. Figure 4 shows the MAP scores for US Election and TREC datasets. The increasing MAP for all methods as the training history increases indicates that generally, conversation history is essential for recommendation. Our model performs consistently better over different lengths of conversation histories.",
"cite_spans": [],
"ref_spans": [
{
"start": 358,
"end": 366,
"text": "Figure 4",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Training with Varying Conversation History.",
"sec_num": null
},
{
"text": "Results for Varying Degree of Data Sparsity. From Table 1 and Figure 3 , we observe that most users in our datasets are involved in only a few conversations. In order to study the effects of data sparsity on recommendation models, we examine in Figure 5 the MAP scores for users engaged in a varying number of conversations, as measured on the TREC dataset. The results on the US Election dataset have similar distributions. As we see, the prediction results become worse for users involved in fewer conversations. This indicates that data sparsity serves as a challenge for all recommendation models. We also observe that our model performs consistently better than other models over different degrees of sparsity. This implies that effectively capturing discourse structure in conversation context is useful to mitigating the effects of Table 3 : Predicted recommendation scores by different models of U 1 for conversations c 1 and c 2 in Figure 1 . U 1 later replies to c 2 but not c 1 , where our model predicts scores of 0.961 for c 2 (higher than 0.924 for c 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 50,
"end": 57,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 62,
"end": 70,
"text": "Figure 3",
"ref_id": null
},
{
"start": 245,
"end": 253,
"text": "Figure 5",
"ref_id": "FIGREF7"
},
{
"start": 839,
"end": 846,
"text": "Table 3",
"ref_id": null
},
{
"start": 941,
"end": 949,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Training with Varying Conversation History.",
"sec_num": null
},
{
"text": "Latent Dim. Table 4 : Sample latent dimensions of topics (\u03b3 U u1 for user, and \u03b3 C c * for conversations) and discourse modes (\u03b4 U u1 for user, and \u03b4 C c * for conversations). User U 1 shows interest in topic 1 (about Sanders), which is also a dominating topic in conversation c 2 , but is not interested in topic 2 (about Clinton). U 1 shows a preference for discourse mode 1 (argument) over mode 2 (statement).",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training with Varying Conversation History.",
"sec_num": null
},
{
"text": "User U 1 Conv 1 (c 1 ) Conv 2 (c 2 ) Topic 1 (Sanders) 0.92 (\u03b3 U u 1 ,1 ) 0.10 (\u03b3 C c 1 ,1 ) 0.63 (\u03b3 C c 2 ,1 ) Topic 2 (Clinton) 0.14 (\u03b3 U u 1 ,2 ) 0.84 (\u03b3 C c 1 ,2 ) 0.12 (\u03b3 C c 2 ,2 ) Disc 1 (argument) 0.46 (\u03b4 U u 1 ,1 ) 0.28 (\u03b4 C c 1 ,1 ) 0.38 (\u03b4 C c 2 ,1 ) Disc 2 (statement) -0.24 (\u03b4 U u 1 ,2 ) 0.98 (\u03b4 C c 1 ,2 ) -0.09 (\u03b4 C c 2 ,2 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training with Varying Conversation History.",
"sec_num": null
},
{
"text": "Here we present a case study based on the sample conversations in Figure 1 . Recall that user U 1 is interested in conversations about Sanders, and also prefers more argumentative discourse, and thus returns in conversation c 2 but not c 1 . Table 3 shows the predicted scores for the two conversations from OCCF, ADAPTED HFT, and our model (as in Eq. 2). Both ADAPTED HFT and our model more accurately recommend c 2 over c 1 , with our model producing a slightly higher recommendation score for c 2 . Table 4 shows the latent dimension values for the learned topics and discourse modes for this user and these two conversations. Based on human inspection, topic 1 appears to contain words about Sanders, which is the main topic in conversation c 2 . Topic 2 is about Clinton, which is a dominating topic in conversation c 1 . Our model also picks up user interest in topic 1 (Sanders), and thus assigns \u03b3 U u 1 ,1 a high value. For discourse modes, our model also generates a high score for \"argument\" discourse (labeled via human inspection) for both the user and c 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 242,
"end": 249,
"text": "Table 3",
"ref_id": null
},
{
"start": 502,
"end": 509,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Case Study and Discussion",
"sec_num": "5.2"
},
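To make the case study concrete, a hypothetical dot-product combination of the Table 4 values already prefers c 2 for U 1 (this is an illustrative scoring form, not the paper's actual Eq. 2):

```python
def affinity(user_vec, conv_vec):
    """Dot product of user and conversation latent vectors."""
    return sum(u * c for u, c in zip(user_vec, conv_vec))

def score(gamma_u, delta_u, gamma_c, delta_c, lam=0.5):
    """Hypothetical recommendation score: lambda-weighted mix of topic
    and discourse affinities (illustrative, not the paper's Eq. 2)."""
    return lam * affinity(gamma_u, gamma_c) + (1 - lam) * affinity(delta_u, delta_c)

gamma_u1, delta_u1 = [0.92, 0.14], [0.46, -0.24]               # U_1 (Table 4)
s_c1 = score(gamma_u1, delta_u1, [0.10, 0.84], [0.28, 0.98])   # conversation c_1
s_c2 = score(gamma_u1, delta_u1, [0.63, 0.12], [0.38, -0.09])  # conversation c_2
```

Even this simplified scoring ranks c 2 above c 1 , consistent with the behavior described above.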
{
"text": "Ablation Study. We have shown that joint modeling of topical content and discourse modes produces the superior performance for our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Further Analysis of Topic and Discourse",
"sec_num": "5.3"
},
{
"text": "Here we provide an ablation study to examine the relative contributions of those two aspects by setting the trade-off parameter \u03bb to 1.0 (topic only) or 0.0 (discourse only). Table 5 shows that topics or discourse individually improve slightly upon the comparison ADAPTED HFT, but only jointly do they improve significantly upon it.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 182,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Further Analysis of Topic and Discourse",
"sec_num": "5.3"
},
{
"text": "US Election TREC ADAPTED HFT 0.698 0.487 Our model (topic only) 0.711 0.491 Our model (discourse only) 0.705 0.483 Our model (full) 0.762 0.591 Topic Coherence. To examine the quality of topics found by our model, we use the C V topic coherence score measured via the open-source toolkit Palmetto 10 , which has been shown to produce evaluation performance comparable to human judgment (R\u00f6der et al., 2015) . Our model achieves topic coherence scores of 0.343 and 0.376 on TREC and US Election datasets, compared to 0.338 and 0.371 for the topics from ADAPTED HFT. Table 6 shows a sample of discourse modes as labeled by human. Although this is merely a qualitative human judgment at this point, there does appear to be a notable overlap in discourse modes between the two datasets even though they were learned separately.",
"cite_spans": [
{
"start": 386,
"end": 406,
"text": "(R\u00f6der et al., 2015)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [
{
"start": 565,
"end": 572,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
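Palmetto computes C V against an external reference corpus; as a rough, hedged stand-in, a UMass-style coherence over document co-occurrence counts illustrates the idea (this is not the C V measure used in the paper, and the documents below are toy examples):

```python
import math

def umass_coherence(topic_words, docs):
    """UMass-style topic coherence: sum over word pairs of
    log((co-document frequency + 1) / document frequency),
    averaged over pairs. Higher is more coherent."""
    def doc_freq(*words):
        return sum(1 for d in docs if all(w in d for w in words))
    total, pairs = 0.0, 0
    for i in range(1, len(topic_words)):
        for j in range(i):
            wi, wj = topic_words[i], topic_words[j]
            if doc_freq(wj):
                total += math.log((doc_freq(wi, wj) + 1) / doc_freq(wj))
                pairs += 1
    return total / pairs if pairs else 0.0

docs = [{"sanders", "vote"}, {"sanders", "vote", "rally"}, {"rally"}]
coh = umass_coherence(["sanders", "vote"], docs)
```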
{
"text": "10 https://github.com/AKSW/Palmetto/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": null
},
{
"text": "From a recommendation perspective, users may be interested in joining new conversations. We thus compare each recommendation system for first time replies. For each user, we only evaluate for conversations where they are newcomers. Table 7 shows that, unsurprisingly, all systems perform poorly on this task, though our model performs slightly better. This suggests that other features, e.g., network structures or other discussion thread features, could usefully be included in future studies that target new conversations. ",
"cite_spans": [],
"ref_spans": [
{
"start": 232,
"end": 239,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "First Time Reply Results",
"sec_num": "5.4"
},
{
"text": "This paper has presented a framework for microblog conversation recommendation via jointly modeling topics and discourse modes. Experimental results show that our method can outperform competitive approaches that omit user discourse behaviors. Qualitative analysis shows that our joint model yields meaningful topics and discourse representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In this paper, discourse mode refers to a certain type of dialogue act, e.g., agreement or argument. The discourse structure of a conversation means some combination (or a probability distribution) of discourse modes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To ensure the general applicability of our approach to domains lacking such information, we do not utilize external features such as network structure, but it may certainly be added in future, more narrowly targeted applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The datasets are available at http://www.ccs. neu.edu/home/luwang/ 4 http://trec.nist.gov/data/tweets/ 5 https://developer.twitter.com/ en/docs/tweets/filter-realtime/ api-reference/post-statuses-filter.html 6 Keyword list: \"trump\", \"hillary\", \"clinton\", \"president\", \"politics\", and \"election.\"7 https://developer.twitter.com/en/ docs/tweets/search/api-reference/ get-saved_searches-show-id",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "At least one turn per conversation is retained for training. It is possible that one user only replies in either development set or test set, but it is rather infrequent.9 http://www.cs.cmu.edu/\u02dcark/TweetNLP/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is partly supported by Innovation and Technology Fund (ITF) Project No. 6904333, General Research Fund (GRF) Project No. 14232816 (12183516), and National Science Foundation Grant IIS-1566382. We thank Shuming Shi, Yan Song, and the three anonymous reviewers for the insightful suggestions on various aspects of this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The political blogosphere and the",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lada",
"suffix": ""
},
{
"first": "Natalie",
"middle": [],
"last": "Adamic",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Glance",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 3rd International Workshop on Link Discovery. ACM, LinkKDD '05",
"volume": "",
"issue": "",
"pages": "36--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lada A. Adamic and Natalie Glance. 2005. The politi- cal blogosphere and the 2004 U.S. election: Divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery. ACM, LinkKDD '05, pages 36-43.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Finding high-quality content in social media",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Agichtein",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Debora",
"middle": [],
"last": "Donato",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "183--194",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Agichtein, Carlos Castillo, Debora Donato, Aristides Gionis, and Gilad Mishne. 2008. Finding high-quality content in social media. In Proceedings of the 2008 International Conference on Web Search and Data Mining. ACM, pages 183-194.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Network-aware recommendations of novel tweets",
"authors": [
{
"first": "Aris",
"middle": [],
"last": "Noor Aldeen Alawad",
"suffix": ""
},
{
"first": "Stefano",
"middle": [],
"last": "Anagnostopoulos",
"suffix": ""
},
{
"first": "Ida",
"middle": [],
"last": "Leonardi",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Mele",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Silvestri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 39th International ACM SI-GIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "913--916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noor Aldeen Alawad, Aris Anagnostopoulos, Stefano Leonardi, Ida Mele, and Fabrizio Silvestri. 2016. Network-aware recommendations of novel tweets. In Proceedings of the 39th International ACM SI- GIR Conference on Research and Development in Information Retrieval. ACM, pages 913-916.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Predicting responses to microblog posts",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Gamon",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "602--606",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Artzi, Patrick Pantel, and Michael Gamon. 2012. Predicting responses to microblog posts. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies. Asso- ciation for Computational Linguistics, pages 602- 606.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Characterizing and curating conversation threads: expansion, focus, volume, re-entry",
"authors": [
{
"first": "Lars",
"middle": [],
"last": "Backstrom",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the sixth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "13--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lars Backstrom, Jon Kleinberg, Lillian Lee, and Cris- tian Danescu-Niculescu-Mizil. 2013. Characteriz- ing and curating conversation threads: expansion, focus, volume, re-entry. In Proceedings of the sixth ACM International Conference on Web Search and Data Mining. ACM, pages 13-22.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exposure to ideologically diverse news and opinion on Facebook",
"authors": [
{
"first": "Eytan",
"middle": [],
"last": "Bakshy",
"suffix": ""
},
{
"first": "Solomon",
"middle": [],
"last": "Messing",
"suffix": ""
},
{
"first": "Lada A",
"middle": [],
"last": "Adamic",
"suffix": ""
}
],
"year": 2015,
"venue": "Science",
"volume": "348",
"issue": "6239",
"pages": "1130--1132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eytan Bakshy, Solomon Messing, and Lada A Adamic. 2015. Exposure to ideologically diverse news and opinion on Facebook. Science 348(6239):1130- 1132.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "M",
"middle": [],
"last": "David",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Michael I Jordan",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of machine Learning research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet Allocation. Journal of ma- chine Learning research 3(Jan):993-1022.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "On participation in group chats on Twitter",
"authors": [
{
"first": "Ceren",
"middle": [],
"last": "Budak",
"suffix": ""
},
{
"first": "Rakesh",
"middle": [],
"last": "Agrawal",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "165--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ceren Budak and Rakesh Agrawal. 2013. On participa- tion in group chats on Twitter. In Proceedings of the 22nd International Conference on World Wide Web. ACM, pages 165-176.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Collaborative personalized tweet recommendation",
"authors": [
{
"first": "Kailong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Guoqing",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Ou",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Enpeng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 35th international ACM SIGIR Conference on Research and development in information retrieval. ACM",
"volume": "",
"issue": "",
"pages": "661--670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kailong Chen, Tianqi Chen, Guoqing Zheng, Ou Jin, Enpeng Yao, and Yong Yu. 2012. Collaborative per- sonalized tweet recommendation. In Proceedings of the 35th international ACM SIGIR Conference on Research and development in information retrieval. ACM, pages 661-670.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Anyone can become a troll: Causes of trolling behavior in online discussions",
"authors": [
{
"first": "Justin",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "Cristian",
"middle": [],
"last": "Danescu-Niculescu-Mizil",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, CSCW '17",
"volume": "",
"issue": "",
"pages": "1217--1230",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justin Cheng, Michael Bernstein, Cristian Danescu- Niculescu-Mizil, and Jure Leskovec. 2017. Any- one can become a troll: Causes of trolling behavior in online discussions. In Proceedings of the 2017 ACM Conference on Computer Supported Coopera- tive Work and Social Computing. ACM, CSCW '17, pages 1217-1230.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised classification of dialogue acts using a Dirichlet process mixture model",
"authors": [
{
"first": "Nigel",
"middle": [],
"last": "Crook",
"suffix": ""
},
{
"first": "Ram\u00f3n",
"middle": [],
"last": "Granell",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"G"
],
"last": "Pulman",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue",
"volume": "",
"issue": "",
"pages": "341--348",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nigel Crook, Ram\u00f3n Granell, and Stephen G. Pulman. 2009. Unsupervised classification of dialogue acts using a Dirichlet process mixture model. In Pro- ceedings of The 10th Annual Meeting of the Spe- cial Interest Group on Discourse and Dialogue, SIG- DIAL 2009. pages 341-348.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An empirical study on learning to rank of tweets",
"authors": [
{
"first": "Yajuan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Heung-Yeung",
"middle": [],
"last": "Shum",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--303",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yajuan Duan, Long Jiang, Tao Qin, Ming Zhou, and Heung-Yeung Shum. 2010. An empirical study on learning to rank of tweets. In Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguis- tics, pages 295-303.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Retweet or not?: personalized tweet re-ranking",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Jianyong",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the sixth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "577--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Feng and Jianyong Wang. 2013. Retweet or not?: personalized tweet re-ranking. In Proceedings of the sixth ACM International Conference on Web Search and Data Mining. ACM, pages 577-586.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Part-of-speech tagging for Twitter: Annotation, features, and experiments",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mills",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "Dani",
"middle": [],
"last": "Yogatama",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "42--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Tech- nologies, Proceedings of the Conference. pages 42- 47.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Modeling the structure and evolution of discussion cascades",
"authors": [
{
"first": "Vicen\u00e7",
"middle": [],
"last": "G\u00f3mez",
"suffix": ""
},
{
"first": "Hilbert",
"middle": [
"J"
],
"last": "Kappen",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kaltenbrunner",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 22nd ACM Conference on Hypertext and hypermedia",
"volume": "",
"issue": "",
"pages": "181--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vicen\u00e7 G\u00f3mez, Hilbert J Kappen, and Andreas Kaltenbrunner. 2011. Modeling the structure and evolution of discussion cascades. In Proceedings of the 22nd ACM Conference on Hypertext and hyper- media. ACM, pages 181-190.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gibbs sampling in the generative model of Latent Dirichlet Allocation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Griffiths",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Griffiths. 2002. Gibbs sampling in the generative model of Latent Dirichlet Allocation .",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Study on sina microblog personalized recommendation based on semantic network",
"authors": [
{
"first": "Yue",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jinxiu",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2015,
"venue": "Expert Systems with Applications",
"volume": "42",
"issue": "10",
"pages": "4797--4804",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yue He and Jinxiu Tan. 2015. Study on sina micro- blog personalized recommendation based on se- mantic network. Expert Systems with Applications 42(10):4797-4804.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Co-factorization machines: modeling user interests and predicting individual decisions in Twitter",
"authors": [
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Aziz",
"middle": [
"S"
],
"last": "Doumith",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the sixth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "557--566",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liangjie Hong, Aziz S Doumith, and Brian D Davison. 2013. Co-factorization machines: modeling user in- terests and predicting individual decisions in Twit- ter. In Proceedings of the sixth ACM International Conference on Web Search and Data Mining. ACM, pages 557-566.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Collaborative filtering for implicit feedback datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yehuda",
"middle": [],
"last": "Koren",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Volinsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on. IEEE",
"volume": "",
"issue": "",
"pages": "263--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on. IEEE, pages 263-272.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Cumulated gain-based evaluation of IR techniques",
"authors": [
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Kek\u00e4l\u00e4inen",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Trans. Inf. Syst",
"volume": "20",
"issue": "4",
"pages": "422--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst. 20(4):422-446.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Optimizing search engines using clickthrough data",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "133--142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pages 133- 142.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised modeling of dialog acts in asynchronous conversations",
"authors": [
{
"first": "Shafiq",
"middle": [
"R"
],
"last": "Joty",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 22nd International Joint Conference on Artificial Intelligence, IJCAI 2011",
"volume": "",
"issue": "",
"pages": "1807--1813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shafiq R. Joty, Giuseppe Carenini, and Chin-Yew Lin. 2011. Unsupervised modeling of dialog acts in asynchronous conversations. In Proceedings of the 22nd International Joint Conference on Artificial In- telligence, IJCAI 2011. pages 1807-1813.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual",
"authors": [
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Biasca",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Jurafsky, E. Shriberg, and D. Biasca. 1997. Switch- board SWBD-DAMSL shallow-discourse-function annotation coders manual. Technical Report Draft 13, University of Colorado, Institute of Cognitive Science.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Matrix factorization techniques for recommender systems",
"authors": [
{
"first": "Yehuda",
"middle": [],
"last": "Koren",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"M"
],
"last": "Bell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Volinsky",
"suffix": ""
}
],
"year": 2009,
"venue": "IEEE Computer",
"volume": "42",
"issue": "8",
"pages": "30--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yehuda Koren, Robert M. Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recom- mender systems. IEEE Computer 42(8):30-37.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "What is Twitter, a social network or a news media?",
"authors": [
{
"first": "Haewoon",
"middle": [],
"last": "Kwak",
"suffix": ""
},
{
"first": "Changhyun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Hosung",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Sue",
"middle": [],
"last": "Moon",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "591--600",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haewoon Kwak, Changhyun Lee, Hosung Park, and Sue Moon. 2010. What is Twitter, a social network or a news media? In Proceedings of the 19th In- ternational Conference on World wide Web. ACM, pages 591-600.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "When the wikipedians talk: Network and tree structure of wikipedia discussion pages",
"authors": [
{
"first": "David",
"middle": [],
"last": "Laniado",
"suffix": ""
},
{
"first": "Riccardo",
"middle": [],
"last": "Tasso",
"suffix": ""
},
{
"first": "Yana",
"middle": [],
"last": "Volkovich",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kaltenbrunner",
"suffix": ""
}
],
"year": 2011,
"venue": "ICWSM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Laniado, Riccardo Tasso, Yana Volkovich, and Andreas Kaltenbrunner. 2011. When the wikipedi- ans talk: Network and tree structure of wikipedia discussion pages. In ICWSM.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "News sharing in social media: The effect of gratifications and prior experience",
"authors": [
{
"first": "Chei Sian",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Long",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2012,
"venue": "Computers in Human Behavior",
"volume": "28",
"issue": "2",
"pages": "331--339",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chei Sian Lee and Long Ma. 2012. News sharing in social media: The effect of gratifications and prior experience. Computers in Human Behavior 28(2):331-339.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Amazon.com recommendations: Item-to-item collaborative filtering",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Linden",
"suffix": ""
},
{
"first": "Brent",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "York",
"suffix": ""
}
],
"year": 2003,
"venue": "IEEE Internet computing",
"volume": "7",
"issue": "1",
"pages": "76--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Greg Linden, Brent Smith, and Jeremy York. 2003. Amazon. com recommendations: Item-to-item col- laborative filtering. IEEE Internet computing 7(1):76-80.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Conversation trees: A grammar model for topic structure in forums",
"authors": [
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1543--1553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annie Louis and Shay B. Cohen. 2015. Conversa- tion trees: A grammar model for topic structure in forums. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguistics, pages 1543-1553.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Introduction to information retrieval",
"authors": [
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Prabhakar",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch\u00fctze. 2008. Introduction to information retrieval. Cambridge University Press.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Hidden factors and hidden topics: understanding rating dimensions with review text",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Mcauley",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "165--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julian McAuley and Jure Leskovec. 2013. Hidden fac- tors and hidden topics: understanding rating dimen- sions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems. ACM, pages 165-172.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Updating quasi-newton matrices with limited storage",
"authors": [
{
"first": "Jorge",
"middle": [],
"last": "Nocedal",
"suffix": ""
}
],
"year": 1980,
"venue": "Mathematics of computation",
"volume": "35",
"issue": "151",
"pages": "773--782",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorge Nocedal. 1980. Updating quasi-newton matrices with limited storage. Mathematics of computation 35(151):773-782.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "From tweets to polls: Linking text sentiment to public opinion time series",
"authors": [
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Ramnath",
"middle": [],
"last": "Balasubramanyan",
"suffix": ""
},
{
"first": "Bryan",
"middle": [
"R"
],
"last": "Routledge",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "ICWSM",
"volume": "11",
"issue": "",
"pages": "1--2",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brendan O'Connor, Ramnath Balasubramanyan, Bryan R Routledge, and Noah A Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. ICWSM 11(122-129):1-2.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improved part-of-speech tagging for online conversational text with word clusters",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "380--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Human Language Technologies: Conference of the North American Chapter of the Association of Com- putational Linguistics. pages 380-390.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "One-class collaborative filtering",
"authors": [
{
"first": "Rong",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yunhong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Nathan",
"middle": [
"Nan"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Rajan",
"middle": [
"M"
],
"last": "Lukose",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Scholz",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 8th IEEE International Conference on Data Mining",
"volume": "",
"issue": "",
"pages": "502--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rong Pan, Yunhong Zhou, Bin Cao, Nathan Nan Liu, Rajan M. Lukose, Martin Scholz, and Qiang Yang. 2008. One-class collaborative filtering. In Proceed- ings of the 8th IEEE International Conference on Data Mining. pages 502-511.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Diffusion-aware personalized social update recommendation",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Cong",
"suffix": ""
},
{
"first": "Kailong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th ACM Conference on Recommender Systems",
"volume": "",
"issue": "",
"pages": "69--76",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ye Pan, Feng Cong, Kailong Chen, and Yong Yu. 2013. Diffusion-aware personalized social update recom- mendation. In Proceedings of the 7th ACM Confer- ence on Recommender Systems. ACM, pages 69-76.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Short and sparse text topic modeling via self-aggregation",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Quan",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Sinno Jialin",
"middle": [],
"last": "Pan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "2270--2276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Quan, Chunyu Kit, Yong Ge, and Sinno Jialin Pan. 2015. Short and sparse text topic modeling via self-aggregation. In Proceedings of the Twenty- Fourth International Joint Conference on Artificial Intelligence. pages 2270-2276.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Unsupervised modeling of Twitter conversations",
"authors": [
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "172--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alan Ritter, Colin Cherry, and Bill Dolan. 2010. Un- supervised modeling of Twitter conversations. In Human Language Technologies: Conference of the North American Chapter of the Association of Com- putational Linguistics. pages 172-180.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Exploring the space of topic coherence measures",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "R\u00f6der",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Both",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Hinneburg",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "399--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael R\u00f6der, Andreas Both, and Alexander Hinneb- urg. 2015. Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM Inter- national Conference on Web Search and Data Min- ing. pages 399-408.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Bayesian probabilistic matrix factorization using markov chain monte carlo",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th International Conference on Machine learning",
"volume": "",
"issue": "",
"pages": "880--887",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Salakhutdinov and Andriy Mnih. 2008. Bayesian probabilistic matrix factorization using markov chain monte carlo. In Proceedings of the 25th International Conference on Machine learning. ACM, pages 880-887.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Dialogue act modeling for automatic tagging and recognition of conversational speech",
"authors": [
{
"first": "Andreas",
"middle": [],
"last": "Stolcke",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Ries",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Shriberg",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Bates",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Taylor",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Carol",
"middle": [],
"last": "Van Ess-Dykema",
"suffix": ""
},
{
"first": "Marie",
"middle": [],
"last": "Meteer",
"suffix": ""
}
],
"year": 2000,
"venue": "Computational Linguistics",
"volume": "26",
"issue": "3",
"pages": "339--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andreas Stolcke, Klaus Ries, Noah Coccaro, Eliza- beth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and Marie Meteer. 2000. Dialogue act modeling for au- tomatic tagging and recognition of conversational speech. Computational Linguistics 26(3):339-373.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Collaborative personalized Twitter search with topic-language models",
"authors": [
{
"first": "Jan",
"middle": [],
"last": "Vosecky",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Wai-Ting Leung",
"suffix": ""
},
{
"first": "Wilfred",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 37th international ACM SIGIR Conference on Research & Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "53--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jan Vosecky, Kenneth Wai-Ting Leung, and Wilfred Ng. 2014. Collaborative personalized Twitter search with topic-language models. In Proceedings of the 37th international ACM SIGIR Conference on Research & Development in Information Retrieval. ACM, pages 53-62.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Collaborative topic modeling for recommending scientific articles",
"authors": [
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "448--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chong Wang and David M Blei. 2011. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD Inter- national Conference on Knowledge Discovery and Data Mining. ACM, pages 448-456.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Inferring linguistic structure in spoken language",
"authors": [
{
"first": "M",
"middle": [],
"last": "Woszczyna",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Waibel",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of IC-SLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M Woszczyna and A Waibel. 1994. Inferring linguistic structure in spoken language. In Proceedings of IC- SLP. IC-SLP.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Who says what to whom on Twitter",
"authors": [
{
"first": "Shaomei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jake",
"middle": [
"M"
],
"last": "Hofman",
"suffix": ""
},
{
"first": "Winter",
"middle": [
"A"
],
"last": "Mason",
"suffix": ""
},
{
"first": "Duncan",
"middle": [
"J"
],
"last": "Watts",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th International Conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "705--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shaomei Wu, Jake M Hofman, Winter A Mason, and Duncan J Watts. 2011. Who says what to whom on Twitter. In Proceedings of the 20th International Conference on World Wide Web. ACM, pages 705- 714.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Tweet recommendation with graph co-ranking",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers",
"volume": "1",
"issue": "",
"pages": "516--525",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Yan, Mirella Lapata, and Xiaoming Li. 2012. Tweet recommendation with graph co-ranking. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, pages 516-525.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "User embedding for scholarly microblog recommendation",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Xinjie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "449--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Yu, Xiaojun Wan, and Xinjie Zhou. 2016. User embedding for scholarly microblog recommenda- tion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. vol- ume 2, pages 449-453.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "[U3]: I still prefer Hillarist b/c it describes their Cultish and ideological aspects. ----------... Conversation 2 ... [U4]: I do not like trump at all, but Comey left her in place knowing Bernie is much stronger. [U1]: If you're going to actively start rooting against the Democrats, get off my mentions. I have enough GOP doing that. [U5]: Your tweets are an example of why open primaries are stupid. You're not a Dem, you're just for one guy. ----------[U1]: No offense, but you've been wrong about pretty much everything so far. Why would I trust your prognostication now? ..."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Two snippets of conversations on Twitter."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "and \u03c4 d , as done in McAuley and Leskovec (2013), with additional parameters \u03c8 T k , \u03c8 D d , \u03c8 B , and \u03c7 d (as shown in"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 2 and described as follows: \u2022 Compute topic distribution \u03b8c by Eq. 6 For message m = 1 to Mc: -Compute discourse distribution \u03c0u c,m by Eq. 7 -Draw topic assignment zc,m \u223c M ulti(\u03b8c) -Draw discourse mode dc,m \u223c M ulti(\u03c0u c,m ) -For word index n = 1 to Nc,m: * Draw word type xc,m,n \u223c M ulti(\u03c4 d ) Generative process of our joint model of topic and discourse. u represents users. c represents conversations. Dotted arrows represent the softmax linkings, while solid arrows mean conditional priors. * if xc,m,n == BACK: Draw word wc,m,n \u223c M ulti(\u03c6 B ) * if xc,m,n == DISC: Draw word wc,m,n \u223c M ulti(\u03c6 D dc,m ) * if xc,m,n == T OP IC: Draw word wc,m,n \u223c M ulti(\u03c6 T zc,m )"
},
"FIGREF5": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "MAP scores for models trained on 25%, 50%, and 75% of conversation history. For each quantile, from left to right shows the result of OCCF, RSVM, CTR, ADAPTED HFT, and our model. In general, longer conversation history leads to better performance, and our model outperforms compared systems in all settings."
},
"FIGREF7": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "MAP scores of models for users involved in varying numbers of conversations on TREC dataset. Horizontal axis: degree of data sparsity indicated by the number of conversations a user is involved in within the training data. Vertical axis: MAP scores. For each degree level, from left to right shows the results of OCCF, RSVM, CTR, ADAPTED HFT, and our model."
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"text": "Statistics of two datasets.",
"content": "<table><tr><td/><td>(unit: 1,000)</td><td/></tr><tr><td/><td>8</td><td/></tr><tr><td/><td>7</td><td>US Election</td></tr><tr><td># of users</td><td>3 4 5 6</td><td>TREC</td></tr><tr><td/><td>2</td><td/></tr><tr><td/><td>1</td><td/></tr><tr><td/><td>0</td><td/></tr><tr><td/><td colspan=\"2\">1 4 7 10 13 16 19 22 25 28 &gt;30</td></tr><tr><td/><td colspan=\"2\"># of conversations a user participated</td></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>: Conversation recommendation results on US Election and TREC. The best result for each column is highlighted in bold. Our model performs significantly better than all the comparisons (p &lt; 0.01, paired t-test).</td></tr></table>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"text": "MAP of different variants of our model. Best results in each column are in bold.",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"html": null,
"text": "Top 10 representative terms for sample discourse modes discovered by our model in two datasets. Names of discourse modes are our interpretations according to the word distributions generated by our model. Discourse Modes. While our topic word distributions are relatively unsurprising, of greater interest are the discourse mode word distributions.",
"content": "<table><tr><td>Sample</td></tr></table>"
},
"TABREF9": {
"num": null,
"type_str": "table",
"html": null,
"text": "MAP of models considering only first-time replies. Best results in each column are in bold.",
"content": "<table/>"
}
}
}
}