{
"paper_id": "P11-1039",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:47:35.807370Z"
},
"title": "Topical Keyphrase Extraction from Twitter",
"authors": [
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore Management University",
"location": {}
},
"email": "jingjiang@smu.edu.sg"
},
{
"first": "Jing",
"middle": [],
"last": "He",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
},
{
"first": "Yang",
"middle": [],
"last": "Song",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
},
{
"first": "Palakorn",
"middle": [],
"last": "Achananuparp",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore Management University",
"location": {}
},
"email": "palakorna@smu.edu.sg"
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Singapore Management University",
"location": {}
},
"email": "eplim@smu.edu.sg"
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Peking University",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Summarizing and analyzing Twitter content is an important and challenging task. In this paper, we propose to extract topical keyphrases as one way to summarize Twitter. We propose a context-sensitive topical PageRank method for keyword ranking and a probabilistic scoring function that considers both relevance and interestingness of keyphrases for keyphrase ranking. We evaluate our proposed methods on a large Twitter data set. Experiments show that these methods are very effective for topical keyphrase extraction.",
"pdf_parse": {
"paper_id": "P11-1039",
"_pdf_hash": "",
"abstract": [
{
"text": "Summarizing and analyzing Twitter content is an important and challenging task. In this paper, we propose to extract topical keyphrases as one way to summarize Twitter. We propose a context-sensitive topical PageRank method for keyword ranking and a probabilistic scoring function that considers both relevance and interestingness of keyphrases for keyphrase ranking. We evaluate our proposed methods on a large Twitter data set. Experiments show that these methods are very effective for topical keyphrase extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Twitter, a new microblogging website, has attracted hundreds of millions of users who publish short messages (a.k.a. tweets) on it. They either publish original tweets or retweet (i.e. forward) others' tweets if they find them interesting. Twitter has been shown to be useful in a number of applications, including tweets as social sensors of realtime events (Sakaki et al., 2010) , the sentiment prediction power of Twitter (Tumasjan et al., 2010) , etc. However, current explorations are still in an early stage and our understanding of Twitter content still remains limited. How to automatically understand, extract and summarize useful Twitter content has therefore become an important and emergent research topic.",
"cite_spans": [
{
"start": 359,
"end": 380,
"text": "(Sakaki et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 425,
"end": 448,
"text": "(Tumasjan et al., 2010)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose to extract keyphrases as a way to summarize Twitter content. Traditionally, keyphrases are defined as a short list of terms to summarize the topics of a document (Turney, 2000) .",
"cite_spans": [
{
"start": 188,
"end": 202,
"text": "(Turney, 2000)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It can be used for various tasks such as document summarization (Litvak and Last, 2008) and indexing (Li et al., 2004) . While it appears natural to use keyphrases to summarize Twitter content, compared with traditional text collections, keyphrase extraction from Twitter is more challenging in at least two aspects: 1) Tweets are much shorter than traditional articles and not all tweets contain useful information; 2) Topics tend to be more diverse in Twitter than in formal articles such as news reports.",
"cite_spans": [
{
"start": 64,
"end": 87,
"text": "(Litvak and Last, 2008)",
"ref_id": "BIBREF6"
},
{
"start": 101,
"end": 118,
"text": "(Li et al., 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "So far there is little work on keyword or keyphrase extraction from Twitter. Wu et al. (2010) proposed to automatically generate personalized tags for Twitter users. However, user-level tags may not be suitable to summarize the overall Twitter content within a certain period and/or from a certain group of people such as people in the same region. Existing work on keyphrase extraction identifies keyphrases from either individual documents or an entire text collection (Turney, 2000; Tomokiyo and Hurst, 2003) . These approaches are not immediately applicable to Twitter because it does not make sense to extract keyphrases from a single tweet, and if we extract keyphrases from a whole tweet collection we will mix a diverse range of topics together, which makes it difficult for users to follow the extracted keyphrases.",
"cite_spans": [
{
"start": 77,
"end": 93,
"text": "Wu et al. (2010)",
"ref_id": "BIBREF16"
},
{
"start": 471,
"end": 485,
"text": "(Turney, 2000;",
"ref_id": "BIBREF14"
},
{
"start": 486,
"end": 511,
"text": "Tomokiyo and Hurst, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, in this paper, we propose to study the novel problem of extracting topical keyphrases for summarizing and analyzing Twitter content. In other words, we extract and organize keyphrases by topics learnt from Twitter. In our work, we follow the standard three steps of keyphrase extraction, namely, keyword ranking, candidate keyphrase generation and keyphrase ranking. For keyword ranking, we modify the Topical PageRank method proposed by Liu et al. (2010) by introducing topic-sensitive score propagation. We find that topic-sensitive propagation can largely help boost the performance. For keyphrase ranking, we propose a principled probabilistic phrase ranking method, which can be flexibly combined with any keyword ranking method and candidate keyphrase generation method. Experiments on a large Twitter data set show that our proposed methods are very effective in topical keyphrase extraction from Twitter. Interestingly, our proposed keyphrase ranking method can incorporate users' interests by modeling the retweet behavior.",
"cite_spans": [
{
"start": 449,
"end": 466,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We further examine what topics are suitable for incorporating users' interests for topical keyphrase extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To the best of our knowledge, our work is the first to study how to extract keyphrases from microblogs. We perform a thorough analysis of the proposed methods, which can be useful for future work in this direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work is related to unsupervised keyphrase extraction. Graph-based ranking methods are the state of the art in unsupervised keyphrase extraction. Mihalcea and Tarau (2004) proposed to use TextRank, a modified PageRank algorithm to extract keyphrases. Based on the study by Mihalcea and Tarau (2004) , Liu et al. (2010) proposed to decompose a traditional random walk into multiple random walks specific to various topics. Language modeling methods (Tomokiyo and Hurst, 2003) and natural language processing techniques (Barker and Cornacchia, 2000) have also been used for unsupervised keyphrase extraction. Our keyword extraction method is mainly based on the study by Liu et al. (2010) . The difference is that we model the score propagation with topic context, which can lower the effect of noise, especially in microblogs.",
"cite_spans": [
{
"start": 149,
"end": 174,
"text": "Mihalcea and Tarau (2004)",
"ref_id": "BIBREF9"
},
{
"start": 276,
"end": 301,
"text": "Mihalcea and Tarau (2004)",
"ref_id": "BIBREF9"
},
{
"start": 304,
"end": 321,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
},
{
"start": 451,
"end": 477,
"text": "(Tomokiyo and Hurst, 2003)",
"ref_id": "BIBREF12"
},
{
"start": 521,
"end": 550,
"text": "(Barker and Cornacchia, 2000)",
"ref_id": "BIBREF0"
},
{
"start": 672,
"end": 689,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is also related to automatic topic labeling (Mei et al., 2007) . We focus on extracting topical keyphrases in microblogs, which has its own challenges. Our method can also be used to label topics in other text collections.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(Mei et al., 2007)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another line of relevant research is Twitterrelated text mining. The most relevant work is by Wu et al. (2010) , who directly applied Tex-tRank (Mihalcea and Tarau, 2004) to extract keywords from tweets to tag users. Topic discovery from Twitter is also related to our work (Ramage et al., 2010 ), but we further extract keyphrases from each topic for summarizing and analyzing Twitter content.",
"cite_spans": [
{
"start": 94,
"end": 110,
"text": "Wu et al. (2010)",
"ref_id": "BIBREF16"
},
{
"start": 144,
"end": 170,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF9"
},
{
"start": 274,
"end": 294,
"text": "(Ramage et al., 2010",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Let U be a set of Twitter users. Let C = {{d u,m } Mu m=1 } u\u2208U be a collection of tweets generated by U, where M u is the total number of tweets generated by user u and d u,m is the m-th tweet of u. Let V be the vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "d u,m consists of a sequence of words (w u,m,1 , w u,m,2 , . . . , w u,m,Nu,m )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "where N u,m is the number of words in d u,m and w u,m,n \u2208 V (1 \u2264 n \u2264 N u,m ). We also assume that there is a set of topics T over the collection C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "Given T and C, topical keyphrase extraction is to discover a list of keyphrases for each topic t \u2208 T . Here each keyphrase is a sequence of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "To extract keyphrases, we first identify topics from the Twitter collection using topic models (Section 3.2). Next for each topic, we run a topical PageRank algorithm to rank keywords and then generate candidate keyphrases using the top ranked keywords (Section 3.3). Finally, we use a probabilistic model to rank the candidate keyphrases (Section 3.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preliminaries",
"sec_num": "3.1"
},
{
"text": "We first describe how we discover the set of topics T . Author-topic models have been shown to be effective for topic modeling of microblogs (Weng et al., 2010; Hong and Davison, 2010) .",
"cite_spans": [
{
"start": 141,
"end": 160,
"text": "(Weng et al., 2010;",
"ref_id": "BIBREF15"
},
{
"start": 161,
"end": 184,
"text": "Hong and Davison, 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic discovery",
"sec_num": "3.2"
},
{
"text": "In Twitter, we observe an important characteristic of tweets: tweets are short and a single tweet tends to be about a single topic. So we apply a modified author-topic model called Twitter-LDA introduced by Zhao et al. (2011) , which assumes a single topic assignment for an entire tweet.",
"cite_spans": [
{
"start": 207,
"end": 225,
"text": "Zhao et al. (2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic discovery",
"sec_num": "3.2"
},
{
"text": "The model is based on the following assumptions. There is a set of topics T in Twitter, each represented by a word distribution. Each user has her topic interests modeled by a distribution over the topics. When a user wants to write a tweet, she first chooses a topic based on her topic distribution. Then she chooses a bag of words one by one based on the chosen topic. However, not all words in a tweet are closely related to the topic of that tweet; some are background words commonly used in tweets on different topics. Therefore, for each word in a tweet, the user first decides whether it is a background word or a topic word and then chooses the word from its respective word distribution. Formally, let \u03c6 t denote the word distribution for topic t and \u03c6 B the word distribution for background words. Let \u03b8 u denote the topic distribution of user u. Let \u03c0 denote a Bernoulli distribution that governs the choice between background words and topic words. The generation process of tweets is described in Figure 1 . Each multinomial distribution is governed by some symmetric Dirichlet distribution parameterized by \u03b1, \u03b2 or \u03b3.",
"cite_spans": [],
"ref_spans": [
{
"start": 1010,
"end": 1018,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Topic discovery",
"sec_num": "3.2"
},
{
"text": "1. Draw \u03c6 B \u223c Dir(\u03b2), \u03c0 \u223c Dir(\u03b3) 2. For each topic t \u2208 T , (a) draw \u03c6 t \u223c Dir(\u03b2) 3. For each user u \u2208 U, (a) draw \u03b8 u \u223c Dir(\u03b1) (b) for each tweet d u,m i. draw z u,m \u223c Multi(\u03b8 u ) ii. for each word w u,m,n A. draw y u,m,n \u223c Bernoulli(\u03c0) B. draw w u,m,n \u223c Multi(\u03c6 B ) if y u,m,n = 0 and w u,m,n \u223c Multi(\u03c6 zu,m ) if y u,m,n = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic discovery",
"sec_num": "3.2"
},
{
"text": "Topical PageRank was introduced by Liu et al. (2010) to identify keywords for future keyphrase extraction. It runs topic-biased PageRank for each topic separately and boosts those words with high relevance to the corresponding topic. Formally, the topic-specific PageRank scores can be defined as follows:",
"cite_spans": [
{
"start": 35,
"end": 52,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R t (w i ) = \u03bb j:wj \u2192wi e(w j , w i ) O(w j ) R t (w j ) + (1 \u2212 \u03bb)P t (w i ),",
"eq_num": "(1)"
}
],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "where R t (w) is the topic-specific PageRank score of word w in topic t, e(w j , w i ) is the weight for the edge (w j \u2192 w i ), O(w j ) = w e(w j , w ) and \u03bb is a damping factor ranging from 0 to 1. The topic-specific preference value P t (w) for each word w is its random jumping probability with the constraint that w\u2208V P t (w) = 1 given topic t. A large R t (\u2022) indicates a word is a good candidate keyword in topic t. We denote this original version of the Topical PageRank as TPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "However, the original TPR ignores the topic context when setting the edge weights; the edge weight is set by counting the number of co-occurrences of the two words within a certain window size. Taking the topic of \"electronic products\" as an example, the word \"juice\" may co-occur frequently with a good keyword \"apple\" for this topic because of Apple electronic products, so \"juice\" may be ranked high by this context-free co-occurrence edge weight although it is not related to electronic products. In other words, context-free propagation may cause the scores to be off-topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "So in this paper, we propose to use a topic context sensitive PageRank method. Formally, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "R t (w i ) = \u03bb j:wj \u2192wi e t (w j , w i ) O t (w j ) R t (w j )+(1\u2212\u03bb)P t (w i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "( 2)Here we compute the propagation from w j to w i in the context of topic t, namely, the edge weight from w j to w i is parameterized by t. In this paper, we compute edge weight e t (w j , w i ) between two words by counting the number of co-occurrences of these two words in tweets assigned to topic t. We denote this context-sensitive topical PageRank as cTPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "After keyword ranking using cTPR or any other method, we adopt a common candidate keyphrase generation method proposed by Mihalcea and Tarau (2004) as follows. We first select the top S keywords for each topic, and then look for combinations of these keywords that occur as frequent phrases in the text collection. More details are given in Section 4.",
"cite_spans": [
{
"start": 122,
"end": 147,
"text": "Mihalcea and Tarau (2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyword Ranking",
"sec_num": "3.3"
},
{
"text": "With the candidate keyphrases, our next step is to rank them. While a standard method is to simply aggregate the scores of keywords inside a candidate keyphrase as the score for the keyphrase, here we propose a different probabilistic scoring function. Our method is based on the following hypotheses about good keyphrases given a topic: Relevance: A good keyphrase should be closely related to the given topic and also discriminative. For example, for the topic \"news,\" \"president obama\" is a good keyphrase while \"math class\" is not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "Interestingness: A good keyphrase should be interesting and can attract users' attention. For example, for the topic \"music,\" \"justin bieber\" is more interesting than \"song player.\" Sometimes, there is a trade-off between these two properties and a good keyphrase has to balance both.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "Let R be a binary variable to denote relevance where 1 is relevant and 0 is irrelevant. Let I be another binary variable to denote interestingness where 1 is interesting and 0 is non-interesting. Let k denote a candidate keyphrase. Following the probabilistic relevance models in information retrieval (Lafferty and Zhai, 2003) , we propose to use P (R = 1, I = 1|t, k) to rank candidate keyphrases for topic t. We have P (R = 1, I = 1|t, k) = P (R = 1|t, k)P (I = 1|t, k, R = 1) = P (I = 1|t, k, R = 1)P (R = 1|t, k) = P (I = 1|k)P (R = 1|t, k) = P (I = 1|k) \u00d7 P (R = 1|t, k) P (R = 1|t, k) + P (R = 0|t, k)",
"cite_spans": [
{
"start": 302,
"end": 327,
"text": "(Lafferty and Zhai, 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "= P (I = 1|k) \u00d7 1 1 + P (R=0|t,k) P (R=1|t,k) = P (I = 1|k) \u00d7 1 1 + P (R=0,k|t) P (R=1,k|t) = P (I = 1|k) \u00d7 1 1 + P (R=0|t) P (R=1|t) \u00d7 P (k|t,R=0) P (k|t,R=1) = P (I = 1|k) \u00d7 1 1 + P (R=0) P (R=1) \u00d7 P (k|t,R=0) P (k|t,R=1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": ".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "Here we have assumed that I is independent of t and R given k, i.e. the interestingness of a keyphrase is independent of the topic or whether the keyphrase is relevant to the topic. We have also assumed that R is independent of t when k is unknown, i.e. without knowing the keyphrase, the relevance is independent of the topic. Our assumptions can be depicted by Figure 2 . We further define \u03b4 = P (R=0) P (R=1) . In general we can assume that P (R = 0) P (R = 1) because there are much more non-relevant keyphrases than relevant ones, that is, \u03b4 1. In this case, we have",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 371,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log P (R = 1, I = 1|t, k)",
"eq_num": "(3)"
}
],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "= log P (I = 1|k) \u00d7 1 1 + \u03b4 \u00d7 P (k|t,R=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "P (k|t,R=1) \u2248 log P (I = 1|k) \u00d7 P (k|t, R = 1) P (k|t, R = 0) \u00d7 1 \u03b4 = log P (I = 1|k) + log P (k|t, R = 1) P (k|t, R = 0) \u2212 log \u03b4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "We can see that the ranking score log P (R = 1, I = 1|t, k) can be decomposed into two components, a relevance score log P (k|t,R=1) P (k|t,R=0) and an interestingness score log P (I = 1|k). The last term log \u03b4 is a constant and thus not relevant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for Topical Keyphrase Ranking",
"sec_num": "3.4"
},
{
"text": "Let a keyphrase candidate k be a sequence of words (w 1 , w 2 , . . . , w N ). Based on an independent assumption of words given R and t, we have",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log P (k|t, R = 1) P (k|t, R = 0) = log P (w 1 w 2 . . . w N |t, R = 1) P (w 1 w 2 . . . w N |t, R = 0) = N n=1 log P (w n |t, R = 1) P (w n |t, R = 0) .",
"eq_num": "(4)"
}
],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "Given the topic model \u03c6 t previously learned for topic t, we can set P (w|t, R = 1) to \u03c6 t w , i.e. the probability of w under \u03c6 t . Following Griffiths and Steyvers (2004), we estimate \u03c6 t w as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 t w = #(C t , w) + \u03b2 #(C t , \u2022) + \u03b2|V| .",
"eq_num": "(5)"
}
],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "Here C t denotes the collection of tweets assigned to topic t, #(C t , w) is the number of times w appears in C t , and #(C t , \u2022) is the total number of words in C t . P (w|t, R = 0) can be estimated using a smoothed background model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (w|R = 0, t) = #(C, w) + \u00b5 #(C, \u2022) + \u00b5|V| .",
"eq_num": "(6)"
}
],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "Here #(C, \u2022) denotes the number of words in the whole collection C, and #(C, w) denotes the number of times w appears in the whole collection. After plugging Equation 5and Equation 6into Equation (4), we get the following formula for the relevance score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log P (k|t, R = 1) P (k|t, R = 0) = w\u2208k log #(C t , w) + \u03b2 #(C, w) + \u00b5 + log #(C, \u2022) + \u00b5|V| #(C t , \u2022) + \u03b2|V| = w\u2208k log #(C t , w) + \u03b2 #(C, w) + \u00b5 + |k|\u03b7,",
"eq_num": "(7)"
}
],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "where \u03b7 = #(C,\u2022)+\u00b5|V| #(Ct,\u2022)+\u03b2|V| and |k| denotes the number of words in k.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the relevance score",
"sec_num": null
},
{
"text": "To capture the interestingness of keyphrases, we make use of the retweeting behavior in Twitter. We use string matching with RT to determine whether a tweet is an original posting or a retweet. If a tweet is interesting, it tends to get retweeted multiple times. Retweeting is therefore a stronger indicator of user interests than tweeting. We use retweet ratio |ReTweets k | |Tweets k | to estimate P (I = 1|k). To prevent zero frequency, we use a modified add-one smoothing method. Finally, we get log P (I = 1|k) = log",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the interestingness score",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "|ReTweets k | + 1.0 |Tweets k | + l avg .",
"eq_num": "(8)"
}
],
"section": "Estimating the interestingness score",
"sec_num": null
},
{
"text": "Here |ReTweets k | and |Tweets k | denote the numbers of retweets and tweets containing the keyphrase k, respectively, and l avg is the average number of tweets that a candidate keyphrase appears in. Finally, we can plug Equation 7and Equation (8) into Equation (3) and obtain the following scoring function for ranking:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the interestingness score",
"sec_num": null
},
{
"text": "Score t (k) = log |ReTweets k | + 1.0 |Tweets k | + l avg (9) + w\u2208k log #(C t , w) + \u03b2 #(C, w) + \u00b5 + |k|\u03b7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the interestingness score",
"sec_num": null
},
{
"text": "#user #tweet #term #token 13,307 1,300,300 50,506 11,868,910 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Estimating the interestingness score",
"sec_num": null
},
{
"text": "Our preliminary experiments with Equation 9show that this scoring function usually ranks longer keyphrases higher than shorter ones. However, because our candidate keyphrase are extracted without using any linguistic knowledge such as noun phrase boundaries, longer candidate keyphrases tend to be less meaningful as a phrase. Moreover, for our task of using keyphrases to summarize Twitter, we hypothesize that shorter keyphrases are preferred by users as they are more compact. We would therefore like to incorporate some length preference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating length preference",
"sec_num": null
},
{
"text": "Recall that Equation 9is derived from P (R = 1, I = 1|t, k), but this probability does not allow us to directly incorporate any length preference. We further observe that Equation (9) tends to give longer keyphrases higher scores mainly due to the term |k|\u03b7. So here we heuristically incorporate our length preference by removing |k|\u03b7 from Equation 9, resulting in the following final scoring function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating length preference",
"sec_num": null
},
{
"text": "Score t (k) = log |ReTweets k | + 1.0 |Tweets k | + l avg (10) + w\u2208k log #(C t , w) + \u03b2 #(C, w) + \u00b5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incorporating length preference",
"sec_num": null
},
{
"text": "We use a Twitter data set collected from Singapore users for evaluation. We used Twitter REST API 1 to facilitate the data collection. The majority of the tweets collected were published in a 20-week period from December 1, 2009 through April 18, 2010. We removed common stopwords and words which appeared in fewer than 10 tweets. We also removed all users who had fewer than 5 tweets. Some statistics of this data set after cleaning are shown in Table 1 . We ran Twitter-LDA with 500 iterations of Gibbs sampling. After trying a few different numbers of topics, we empirically set the number of topics to 30. We set \u03b1 to 50.0/|T | as Griffiths and Steyvers (2004) suggested, but set \u03b2 to a smaller value of 0.01 and \u03b3 to 20. We chose these parameter settings because they generally gave coherent and meaningful topics for our data set. We selected 10 topics that cover a diverse range of content in Twitter for evaluation of topical keyphrase extraction. The top 10 words of these topics are shown in Table 2 .",
"cite_spans": [
{
"start": 635,
"end": 664,
"text": "Griffiths and Steyvers (2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 447,
"end": 454,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1002,
"end": 1009,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set and Preprocessing",
"sec_num": "4.1"
},
{
"text": "We also tried the standard LDA model and the author-topic model on our data set and found that our proposed topic model was better or at least comparable in terms of finding meaningful topics. In addition to generating meaningful topics, Twitter-LDA is much more convenient in supporting the computation of tweet-level statistics (e.g. the number of co-occurrences of two words in a specific topic) than the standard LDA or the author-topic model because Twitter-LDA assumes a single topic assignment for an entire tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Preprocessing",
"sec_num": "4.1"
},
{
"text": "As we have described in Section 3.1, there are three steps to generate keyphrases, namely, keyword ranking, candidate keyphrase generation, and keyphrase ranking. We have proposed a context-sensitive topical PageRank method (cTPR) for the first step of keyword ranking, and a probabilistic scoring function for the third step of keyphrase ranking. We now describe the baseline methods we use to compare with our proposed methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods for Comparison",
"sec_num": "4.2"
},
{
"text": "We compare our cTPR method with the original topical PageRank method (Equation (1)), which represents the state of the art. We refer to this baseline as TPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Ranking",
"sec_num": null
},
{
"text": "For both TPR and cTPR, the damping factor is empirically set to 0.1, which always gives the best performance based on our preliminary experiments. We use normalized P (t|w) to set P t (w) because our preliminary experiments showed that this was the best among the three choices discussed by Liu et al. (2010) . This finding is also consistent with what Liu et al. (2010) found.",
"cite_spans": [
{
"start": 291,
"end": 308,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
},
{
"start": 353,
"end": 370,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Ranking",
"sec_num": null
},
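The damping setting above can be sketched as a topic-specific PageRank iteration. This is a toy illustration, not the paper's implementation: the word graph, edge weights and preference values below are invented, and only the damping factor 0.1 and the preference vector P_t(w) proportional to P(t|w) come from the text.

```python
# Topic-specific PageRank sketch:
#   rank(w) = lambda * (random-walk term) + (1 - lambda) * P_t(w)
# with damping factor lambda = 0.1 as in the experiments above.

def topical_pagerank(adj, pref, damping=0.1, iters=200):
    """adj: {word: {neighbor: edge weight}}; pref: unnormalized P(t|w) scores."""
    z = sum(pref.values())
    pref = {w: v / z for w, v in pref.items()}   # normalized preference P_t(w)
    rank = dict(pref)
    for _ in range(iters):
        rank = {
            w: damping * sum(rank[u] * adj[u][w] / sum(adj[u].values())
                             for u in adj if w in adj[u])
               + (1 - damping) * pref[w]
            for w in adj
        }
    return rank

# Hypothetical 3-word graph for one topic.
ranks = topical_pagerank(
    adj={"laksa": {"chicken": 1.0},
         "chicken": {"laksa": 1.0, "good": 1.0},
         "good": {"chicken": 1.0}},
    pref={"laksa": 0.6, "chicken": 0.3, "good": 0.1},
)
```

With such a small damping factor, the ranking is dominated by the topic preference vector, which is exactly why the choice of P_t(w) matters so much here.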
{
"text": "In addition, we also use two other baselines for comparison: (1) kwBL1: ranking by P (w|t) = \u03c6 t w . (2) kwBL2: ranking by P (t|w) = P (t)\u03c6 t w t P (t )\u03c6 t w .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Ranking",
"sec_num": null
},
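A minimal sketch of these two keyword-ranking baselines, assuming a topic-word distribution `phi[t][w] = P(w|t)` and topic priors `p_t`; the toy probabilities below are invented for illustration.

```python
def kw_bl1(phi, t):
    """kwBL1: rank words in topic t by P(w|t)."""
    return sorted(phi[t], key=lambda w: phi[t][w], reverse=True)

def kw_bl2(phi, p_t, t):
    """kwBL2: rank words by P(t|w) = P(t)P(w|t) / sum_t' P(t')P(w|t')."""
    def p_t_given_w(w):
        denom = sum(p_t[s] * phi[s].get(w, 0.0) for s in phi)
        return p_t[t] * phi[t][w] / denom
    return sorted(phi[t], key=p_t_given_w, reverse=True)

# Toy two-topic model: "good" is frequent in both topics, "laksa" only in one.
phi = {"food": {"rice": 0.5, "good": 0.3, "laksa": 0.2},
       "tech": {"good": 0.4, "iphone": 0.4, "rice": 0.2}}
p_t = {"food": 0.5, "tech": 0.5}
```

Note how kwBL2 promotes words specific to the topic ("laksa" tops the food ranking), while kwBL1 simply follows within-topic frequency.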
{
"text": "We use kpRelInt to denote our relevance and interestingness based keyphrase ranking function P (R = 1, I = 1|t, k), i.e. Equation (10). \u03b2 and \u00b5 are empirically set to 0.01 and 500. Usually \u00b5 can be set to zero, but in our experiments we find that our ranking method needs a more uniform estimation of the background model. We use the following ranking functions for comparison:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Ranking",
"sec_num": null
},
{
"text": "\u2022 kpBL1: Similar to what is used by Liu et al. (2010) , we can rank candidate keyphrases by w\u2208k f (w), where f (w) is the score assigned to word w by a keyword ranking method.",
"cite_spans": [
{
"start": 36,
"end": 53,
"text": "Liu et al. (2010)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Ranking",
"sec_num": null
},
{
"text": "\u2022 kpBL2: We consider another baseline ranking method by w\u2208k log f (w). \u2022 kpRel: If we consider only relevance but not interestingness, we can rank candidate keyphrases by w\u2208k log #(Ct,w)+\u03b2 #(C,w)+\u00b5 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyphrase Ranking",
"sec_num": null
},
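The three scoring functions compared here can be sketched as follows. `f` stands for any keyword-scoring function (e.g. cTPR scores); the count tables and word scores in the test are hypothetical, with beta = 0.01 and mu = 500 as in the text.

```python
import math

def kp_bl1(phrase, f):
    """kpBL1: sum of keyword scores."""
    return sum(f[w] for w in phrase)

def kp_bl2(phrase, f):
    """kpBL2: sum of log keyword scores, i.e. the log of their product."""
    return sum(math.log(f[w]) for w in phrase)

def kp_rel(phrase, topic_count, bg_count, beta=0.01, mu=500.0):
    """kpRel: sum_w log[(#(C_t, w) + beta) / (#(C, w) + mu)]."""
    return sum(math.log((topic_count.get(w, 0) + beta) / (bg_count[w] + mu))
               for w in phrase)
```

kpRel rewards words that are frequent in the topic relative to the whole collection, so a topic-specific word like "laksa" can outscore a globally frequent word like "good".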
{
"text": "Since there is no existing test collection for topical keyphrase extraction from Twitter, we manually constructed our test collection. For each of the 10 selected topics, we ran all the methods to rank keywords. For each method we selected the top 3000 keywords and searched all the combinations of these words as phrases which have a frequency larger than 30. In order to achieve high phraseness, we first computed the minimum value of pointwise mutual information for all bigrams in one combination, and we removed combinations having a value below a threshold, which was empirically set to 2.135. Then we merged all these candidate phrases. We did not consider single-word phrases because we found that it would include too many frequent words that might not be useful for summaries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Generation",
"sec_num": "4.3"
},
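The phraseness filter described above can be sketched as follows. The unigram and bigram probabilities below are hypothetical, and base-2 logarithms are an assumption on our part; the paper only specifies the threshold 2.135.

```python
import math

def pmi(w1, w2, p_word, p_bigram):
    """Pointwise mutual information of an adjacent word pair (base-2 log assumed)."""
    return math.log2(p_bigram[(w1, w2)] / (p_word[w1] * p_word[w2]))

def passes_phraseness(words, p_word, p_bigram, threshold=2.135):
    """Keep a candidate only if every adjacent bigram clears the PMI threshold."""
    return min(pmi(a, b, p_word, p_bigram)
               for a, b in zip(words, words[1:])) > threshold

# Toy probabilities: "justin bieber" co-occurs far more than chance,
# "good bieber" does not.
p_word = {"justin": 0.001, "bieber": 0.0005, "good": 0.05}
p_bigram = {("justin", "bieber"): 0.0004, ("good", "bieber"): 0.00001}
```

Taking the minimum over adjacent bigrams means one weak link is enough to reject a candidate, which is what keeps loose word combinations out of the pool.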
{
"text": "We asked two judges to judge the quality of the candidate keyphrases. The judges live in Singapore and had used Twitter before. For each topic, the judges were given the top topic words and a short topic description. Web search was also available. For each candidate keyphrase, we asked the judges to score it as follows: 2 (relevant, meaningful and informative), 1 (relevant but either too general or too specific, or informal) and 0 (irrelevant or meaningless). Here in addition to relevance, the other two criteria, namely, whether a phrase is meaningful and informative, were studied by Tomokiyo and Hurst eat twitter love singapore singapore hot iphone song study win food tweet idol road #singapore rain google video school game dinner blog adam mrt #business weather social youtube time team lunch facebook watch sgreinfo #news cold media love homework match eating internet april east health morning ipad songs tomorrow play ice tweets hot park asia sun twitter bieber maths chelsea chicken follow lambert room market good free music class world cream msn awesome sqft world night app justin paper united tea followers girl price prices raining apple feature math liverpool hungry time american built bank air marketing twitter finish arsenal Table 2 : Top 10 Words of Sample Topics on our Singapore Twitter Dateset. (2003) . We then averaged the scores of the two judges as the final scores. The Cohen's Kappa coefficients of the 10 topics range from 0.45 to 0.80, showing fair to good agreement 2 . We further discarded all candidates with an average score less than 1. The number of the remaining keyphrases for each topic ranges from 56 to 282.",
"cite_spans": [
{
"start": 1426,
"end": 1432,
"text": "(2003)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 610,
"end": 1359,
"text": "eat twitter love singapore singapore hot iphone song study win food tweet idol road #singapore rain google video school game dinner blog adam mrt #business weather social youtube time team lunch facebook watch sgreinfo #news cold media love homework match eating internet april east health morning ipad songs tomorrow play ice tweets hot park asia sun twitter bieber maths chelsea chicken follow lambert room market good free music class world cream msn awesome sqft world night app justin paper united tea followers girl price prices raining apple feature math liverpool hungry time american built bank air marketing twitter finish arsenal Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Gold Standard Generation",
"sec_num": "4.3"
},
{
"text": "Traditionally keyphrase extraction is evaluated using precision and recall on all the extracted keyphrases. We choose not to use these measures for the following reasons: (1) Traditional keyphrase extraction works on single documents while we study topical keyphrase extraction. The gold standard keyphrase list for a single document is usually short and clean, while for each Twitter topic there can be many keyphrases, some are more relevant and interesting than others. (2) Our extracted topical keyphrases are meant for summarizing Twitter content, and they are likely to be directly shown to the users. It is therefore more meaningful to focus on the quality of the top-ranked keyphrases. Inspired by the popular nDCG metric in information retrieval (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) , we define the following normalized keyphrase quality measure (nKQM) for a method M:",
"cite_spans": [
{
"start": 755,
"end": 786,
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "nKQM@K = 1 |T | t\u2208T K j=1 1 log 2 (j+1) score(M t,j ) IdealScore (K,t) ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "where T is the set of topics, M t,j is the jth keyphrase generated by method M for topic t, score(\u2022) is the average score from the two human judges, and IdealScore (K,t) is the normalization factor-score of the top K keyphrases of topic t under the ideal ranking. Intuitively, if M returns more good keyphrases in top ranks, its nKQM value will be higher.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
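A direct implementation of nKQM@K under this definition, with toy judge scores; the ideal ranking is obtained by sorting each topic's scores in descending order.

```python
import math

def nkqm(ranked_scores_by_topic, K):
    """ranked_scores_by_topic: per topic, judge scores in the method's rank order."""
    total = 0.0
    for scores in ranked_scores_by_topic:
        # position j is 1-based in the formula, so the discount is log2(j + 1)
        gain = sum(s / math.log2(j + 1)
                   for j, s in enumerate(scores[:K], start=1))
        ideal = sum(s / math.log2(j + 1)
                    for j, s in enumerate(sorted(scores, reverse=True)[:K],
                                          start=1))
        total += gain / ideal
    return total / len(ranked_scores_by_topic)
```

A perfect ranking scores 1.0; putting a weaker keyphrase above a stronger one lowers the value, which is the behavior the metric is designed to capture.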
{
"text": "We also use mean average precision (MAP) to measure the overall performance of keyphrase ranking:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "MAP = 1 |T | t\u2208T 1 N M,t |Mt| j=1 N M,t,j j 1(score(M t,j ) \u2265 1),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
{
"text": "where 1(S) is an indicator function which returns 1 when S is true and 0 otherwise, N M,t,j denotes the number of correct keyphrases among the top j keyphrases returned by M for topic t, and N M,t denotes the total number of correct keyphrases of topic t returned by M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.4"
},
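The MAP definition above can be computed as follows, counting a keyphrase as correct when its average judge score is at least 1; the example scores in the test are invented.

```python
def mean_average_precision(ranked_scores_by_topic):
    """Average precision per topic, then mean over topics."""
    ap_sum = 0.0
    for scores in ranked_scores_by_topic:
        correct, prec_sum = 0, 0.0
        for j, s in enumerate(scores, start=1):
            if s >= 1:                 # the indicator 1(score >= 1)
                correct += 1           # N_{M,t,j} at a correct position
                prec_sum += correct / j
        ap_sum += prec_sum / correct if correct else 0.0
    return ap_sum / len(ranked_scores_by_topic)
```
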
{
"text": "Since keyword ranking is the first step for keyphrase extraction, we first compare our keyword ranking method cTPR with other methods. For each topic, we pooled the top 20 keywords ranked by all four methods. We manually examined whether a word is a good keyword or a noisy word based on topic context. Then we computed the average number of noisy words in the 10 topics for each method. As shown in Table 5 , we can observe that cTPR performed the best among the four methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 407,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation of keyword ranking methods",
"sec_num": null
},
{
"text": "Since our final goal is to extract topical keyphrases, we further compare the performance of cTPR and TPR when they are combined with a keyphrase ranking algorithm. Here we use the two baseline keyphrase ranking algorithms kpBL1 and kpBL2. The comparison is shown in Table 3 . We can see that cTPR is consistently better than the three other methods for both kpBL1 and kpBL2.",
"cite_spans": [],
"ref_spans": [
{
"start": 267,
"end": 274,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluation of keyword ranking methods",
"sec_num": null
},
{
"text": "In this section we compare keypharse ranking methods. Previously we have shown that cTPR is better than TPR, kwBL1 and kwBL2 for keyword ranking. Therefore we use cTPR as the keyword ranking method and examine the keyphrase ranking method kpRelInt with kpBL1, kpBL2 and kpRel when they are combined with cTPR. The results are shown in Table 4 . From the results we can see the following: (1) Keyphrase ranking methods kpRelInt and kpRel are more effective than kpBL1 and kpBL2, especially when using the nKQM metric. (2) kpRe-lInt is better than kpRel, especially for the nKQM metric. Interestingly, we also see that for the nKQM metric, kpBL1, which is the most commonly used keyphrase ranking method, did not perform as well as kpBL2, a modified version of kpBL1.",
"cite_spans": [],
"ref_spans": [
{
"start": 335,
"end": 342,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation of keyphrase ranking methods",
"sec_num": null
},
{
"text": "We also tested kpRelInt and kpRel on TPR, kwBL1 and kwBL2 and found that kpRelInt and kpRel are consistently better than kpBL2 and kpBL1. Due to space limit, we do not report all the results here. These findings support our assumption that our proposed keyphrase ranking method is effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of keyphrase ranking methods",
"sec_num": null
},
{
"text": "The comparison between kpBL2 with kpBL1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of keyphrase ranking methods",
"sec_num": null
},
{
"text": "shows that taking the product of keyword scores is more effective than taking their sum. kpRel and kpRelInt also use the product of keyword scores. This may be because there is more noise in Twitter than traditional documents. Common words (e.g. \"good\") and domain background words (e.g. \"Singapore\") tend to gain higher weights during keyword ranking due to their high frequency, especially in graph-based method, but we do not want such words to contribute too much to keyphrase scores. Taking the product of keyword scores is therefore more suitable here than taking their sum.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation of keyphrase ranking methods",
"sec_num": null
},
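A toy illustration of this effect, with invented keyword scores: a frequent generic word with an inflated score can lift an otherwise weak phrase to the top under summation, while the product keeps it down.

```python
from math import prod

# Hypothetical keyword scores: "good" is generic but highly weighted,
# "lah" is a noisy filler word, "chicken rice" is a real topical phrase.
f = {"good": 0.90, "lah": 0.05, "chicken": 0.40, "rice": 0.40}

def by_sum(phrase):
    return sum(f[w] for w in phrase)

def by_product(phrase):
    return prod(f[w] for w in phrase)

generic, topical = ("good", "lah"), ("chicken", "rice")
# summation ranks the generic phrase higher; the product reverses the order
```
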
{
"text": "As shown in Table 4 , kpRelInt performs better in terms of nKQM compared with kpRel. Here we study why it worked better for keyphrase ranking. The only difference between kpRel and kpRelInt is that kpRelInt includes the factor of user interests. By manually examining the top keyphrases, we find that the topics \"Movie-TV\" (T 5 ), \"News\" (T 12 ), \"Music\" (T 20 ) and \"Sports\" (T 25 ) particularly benefited from kpRelInt compared with other topics. We find that well-known named entities (e.g. celebrities, political leaders, football clubs and big companies) and significant events tend to be ranked higher by kpRe-lInt than kpRel.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 19,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Further analysis of interestingness",
"sec_num": null
},
{
"text": "We then counted the numbers of entity and event keyphrases for these four topics retrieved by different methods, shown in Table 6 . We can see that in these four topics, kpRelInt is consistently better than kpRel in terms of the number of entity and event keyphrases retrieved. On the other hand, we also find that for some topics interestingness helped little or even hurt the performance a little, e.g. for the topics \"Food\" and \"Traffic.\" We find that the keyphrases in these topics are stable and change less over time. This may suggest that we can modify our formula to handle different topics different. We will explore this direction in our future work.",
"cite_spans": [],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 6",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Further analysis of interestingness",
"sec_num": null
},
{
"text": "We also examine how the parameters in our model affect the performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": null
},
{
"text": "\u03bb: We performed a search from 0.1 to 0.9 with a step size of 0.1. We found \u03bb = 0.1 was the optimal parameter for cTPR and TPR. However, TPR is more sensitive to \u03bb. The performance went down quickly with \u03bb increasing. \u00b5: We checked the overall performance with \u00b5 \u2208 {400, 450, 500, 550, 600}. We found that \u00b5 = 500 \u2248 0.01|V| gave the best performance generally for cTPR. The performance difference is not very significant between these different values of \u00b5, which indicates that the our method is robust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter settings",
"sec_num": null
},
{
"text": "We show the top 10 keyphrases discovered by cTPR+kRelInt in Table 7 . We can observe that these keyphrases are clear, interesting and informative for summarizing Twitter topics.",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Qualitative evaluation of cTPR+kpRelInt",
"sec_num": "4.6"
},
{
"text": "We hypothesize that the following applications can benefit from the extracted keyphrases: Automatic generation of realtime trendy phrases:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative evaluation of cTPR+kpRelInt",
"sec_num": "4.6"
},
{
"text": "For exampoe, keyphrases in the topic \"Food\" (T 2 ) can be used to help online restaurant reviews. Event detection and topic tracking: In the topic \"News\" top keyphrases can be used as candidate trendy topics for event detection and topic tracking. Automatic discovery of important named entities: As discussed previously, our methods tend to rank important named entities such as celebrities in high ranks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative evaluation of cTPR+kpRelInt",
"sec_num": "4.6"
},
{
"text": "In this paper, we studied the novel problem of topical keyphrase extraction for summarizing and analyzing Twitter content. We proposed the context-sensitive topical PageRank (cTPR) method for keyword ranking. Experiments showed that cTPR is consistently better than the original TPR and other baseline methods in terms of top keyword and keyphrase extraction. For keyphrase ranking, we proposed a probabilistic ranking method, which models both relevance and interestingness of keyphrases. In our experiments, this method is shown to be very effective to boost the performance of keyphrase extraction for different kinds of keyword ranking methods. In the future, we may consider how to incorporate keyword scores into our keyphrase ranking method. Note that we propose to rank keyphrases by a general formula P (R = 1, I = 1|t, k) and we have made some approximations based on reasonable assumptions. There should be other potential ways to estimate P (R = 1, I = 1|t, k). the grant No. 60933004, 61073082, 61050009 and HGJ Grant No. 2011ZX01042-001-001. ",
"cite_spans": [
{
"start": 984,
"end": 1055,
"text": "No. 60933004, 61073082, 61050009 and HGJ Grant No. 2011ZX01042-001-001.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://apiwiki.twitter.com/w/page/22554663/REST-API-Documentation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We find that judgments on topics related to social media (e.g. T4) and daily life (e.g. T13) tend to have a higher degree of disagreement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was done during Xin Zhao's visit to the Singapore Management University. Xin Zhao and Xiaoming Li are partially supported by NSFC under",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Using noun phrase heads to extract document keyphrases",
"authors": [
{
"first": "Ken",
"middle": [],
"last": "Barker",
"suffix": ""
},
{
"first": "Nadia",
"middle": [],
"last": "Cornacchia",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 13th Biennial Conference of the Canadian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "40--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ken Barker and Nadia Cornacchia. 2000. Using noun phrase heads to extract document keyphrases. In Pro- ceedings of the 13th Biennial Conference of the Cana- dian Society on Computational Studies of Intelligence: Advances in Artificial Intelligence, pages 40-52.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Finding scientific topics",
"authors": [
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Steyvers",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the National Academy of Sciences of the United States of America",
"volume": "101",
"issue": "",
"pages": "5228--5235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas L. Griffiths and Mark Steyvers. 2004. Find- ing scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl. 1):5228-5235.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Empirical study of topic modeling in Twitter",
"authors": [
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the First Workshop on Social Media Analytics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liangjie Hong and Brian D. Davison. 2010. Empirical study of topic modeling in Twitter. In Proceedings of the First Workshop on Social Media Analytics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cumulated gain-based evaluation of ir techniques",
"authors": [
{
"first": "Kalervo",
"middle": [],
"last": "J\u00e4rvelin",
"suffix": ""
},
{
"first": "Jaana",
"middle": [],
"last": "Kek\u00e4l\u00e4inen",
"suffix": ""
}
],
"year": 2002,
"venue": "ACM Transactions on Information Systems",
"volume": "20",
"issue": "4",
"pages": "422--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kalervo J\u00e4rvelin and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of ir techniques. ACM Transactions on Information Systems, 20(4):422-446.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Probabilistic relevance models based on document and query generation. Language Modeling and Information Retrieval",
"authors": [
{
"first": "John",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Lafferty and Chengxiang Zhai. 2003. Probabilistic relevance models based on document and query gener- ation. Language Modeling and Information Retrieval, 13.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Incorporating document keyphrases in search results",
"authors": [
{
"first": "Quanzhi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yi-Fang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Razvan",
"middle": [],
"last": "Bot",
"suffix": ""
},
{
"first": "Xin",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 10th Americas Conference on Information Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quanzhi Li, Yi-Fang Wu, Razvan Bot, and Xin Chen. 2004. Incorporating document keyphrases in search results. In Proceedings of the 10th Americas Confer- ence on Information Systems.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Graph-based keyword extraction for single-document summarization",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Last",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Workshop on Multi-source Multilingual Information Extraction and Summarization",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak and Mark Last. 2008. Graph-based key- word extraction for single-document summarization. In Proceedings of the Workshop on Multi-source Mul- tilingual Information Extraction and Summarization, pages 17-24.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatic keyphrase extraction via topic decomposition",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wenyi",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yabin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "366--376",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyuan Liu, Wenyi Huang, Yabin Zheng, and Maosong Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of the 2010 Confer- ence on Empirical Methods in Natural Language Pro- cessing, pages 366-376.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic labeling of multinomial topic models",
"authors": [
{
"first": "Qiaozhu",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "Xuehua",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining",
"volume": "",
"issue": "",
"pages": "490--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiaozhu Mei, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic mod- els. In Proceedings of the 13th ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, pages 490-499.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "TextRank: Bringing order into texts",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Mihalcea and P. Tarau. 2004. TextRank: Bringing or- der into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Process- ing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Characterizing micorblogs with topic models",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Ramage",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Liebling",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 4th International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Ramage, Susan Dumais, and Dan Liebling. 2010. Characterizing micorblogs with topic models. In Pro- ceedings of the 4th International Conference on We- blogs and Social Media.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Earthquake shakes Twitter users: real-time event detection by social sensors",
"authors": [
{
"first": "Takeshi",
"middle": [],
"last": "Sakaki",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th International World Wide Web Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th International World Wide Web Conference.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A language model approach to keyphrase extraction",
"authors": [
{
"first": "Takashi",
"middle": [],
"last": "Tomokiyo",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Hurst",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takashi Tomokiyo and Matthew Hurst. 2003. A lan- guage model approach to keyphrase extraction. In Proceedings of the ACL 2003 Workshop on Multi- word Expressions: Analysis, Acquisition and Treat- ment, pages 33-40.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Predicting elections with Twitter: What 140 characters reveal about political sentiment",
"authors": [
{
"first": "Andranik",
"middle": [],
"last": "Tumasjan",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Timm",
"suffix": ""
},
{
"first": "Philipp",
"middle": [
"G"
],
"last": "Sprenger",
"suffix": ""
},
{
"first": "Isabell",
"middle": [
"M"
],
"last": "Sandner",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welpe",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 4th International Conference on Weblogs and Social Media",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andranik Tumasjan, Timm O. Sprenger, Philipp G. Sand- ner, and Isabell M. Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about politi- cal sentiment. In Proceedings of the 4th International Conference on Weblogs and Social Media.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning algorithms for keyphrase extraction",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2000,
"venue": "Information Retrieval",
"volume": "",
"issue": "4",
"pages": "303--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, (4):303-336.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "TwitterRank: finding topic-sensitive influential twitterers",
"authors": [
{
"first": "Jianshu",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "He",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the third ACM International Conference on Web Search and Data Mining",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianshu Weng, Ee-Peng Lim, Jing Jiang, and Qi He. 2010. TwitterRank: finding topic-sensitive influential twitterers. In Proceedings of the third ACM Interna- tional Conference on Web Search and Data Mining.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic generation of personalized annotation tags for twitter users",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mari",
"middle": [],
"last": "Ostendorf",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "689--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Wu, Bin Zhang, and Mari Ostendorf. 2010. Au- tomatic generation of personalized annotation tags for twitter users. In Human Language Technologies: The 2010 Annual Conference of the North American Chap- ter of the Association for Computational Linguistics, pages 689-692.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Comparing Twitter and traditional media using topic models",
"authors": [
{
"first": "Xin",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jianshu",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Lim",
"middle": [],
"last": "Ee-Peng",
"suffix": ""
},
{
"first": "Hongfei",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Xiaoming",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 33rd European Conference on Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xin Zhao, Jing Jiang, Jianshu Weng, Jing He, Lim Ee- Peng, Hongfei Yan, and Xiaoming Li. 2011. Compar- ing Twitter and traditional media using topic models. In Proceedings of the 33rd European Conference on Information Retrieval.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "The generation process of tweets."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Assumptions of variable dependencies."
},
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"text": "Some statistics of the data set.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>Method</td><td colspan=\"4\">nKQM@5 nKQM@10 nKQM@25 nKQM@50</td><td>MAP</td></tr><tr><td>cTPR+kpBL1</td><td>0.61095</td><td>0.62182</td><td>0.61389</td><td>0.60618</td><td>0.6608</td></tr><tr><td>cTPR+kpBL2</td><td>0.74913</td><td>0.74294</td><td>0.69303</td><td>0.65194</td><td>0.6688</td></tr><tr><td>cTPR+kpRel</td><td>0.75361</td><td>0.74926</td><td>0.69645</td><td>0.65065</td><td>0.6696</td></tr><tr><td>cTPR+kpRelInt</td><td>0.81061</td><td>0.75184</td><td>0.71422</td><td>0.66319</td><td>0.6694</td></tr></table>",
"text": "Comparisons of keyphrase extraction for cTPR and baselines.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">kwBL1 kwBL2 TPR cTPR</td></tr><tr><td>2</td><td>3</td><td>4.9</td><td>1.5</td></tr></table>",
"text": "Comparisons of keyphrase extraction for different keyphrase ranking methods.",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table/>",
"text": "Average number of noisy words among the top 20 keywords of the 10 topics.",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table><tr><td>Methods</td><td>T 5</td><td>T 12</td><td>T 20</td><td>T 25</td></tr><tr><td>cTPR+kpRel</td><td>8</td><td>9</td><td>16</td><td>11</td></tr><tr><td>cTPR+kpRelInt</td><td>10</td><td>12</td><td>17</td><td>14</td></tr></table>",
"text": "Top 10 keyphrases of 6 topics from cTPR+kpRelInt.",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table/>",
"text": "Numbers of entity and event keyphrases retrieved by different methods within top 20.",
"html": null,
"num": null
}
}
}
}