{
"paper_id": "D10-1036",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:32.994443Z"
},
"title": "Automatic Keyphrase Extraction via Topic Decomposition",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Wenyi",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yabin",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "yabin.zheng@gmail.com"
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics.",
"pdf_parse": {
"paper_id": "D10-1036",
"_pdf_hash": "",
"abstract": [
{
"text": "Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Keyphrases are defined as a set of terms in a document that give a brief summary of its content for readers. Automatic keyphrase extraction is widely used in information retrieval and digital library (Turney, 2000; Nguyen and Kan, 2007) . Keyphrase extraction is also an essential step in various tasks of natural language processing such as document categorization, clustering and summarization (Manning and Schutze, 2000) .",
"cite_spans": [
{
"start": 200,
"end": 214,
"text": "(Turney, 2000;",
"ref_id": "BIBREF20"
},
{
"start": 215,
"end": 236,
"text": "Nguyen and Kan, 2007)",
"ref_id": "BIBREF15"
},
{
"start": 396,
"end": 423,
"text": "(Manning and Schutze, 2000)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two principled approaches to extracting keyphrases: supervised and unsupervised. The supervised approach (Turney, 1999) regards keyphrase extraction as a classification task, in which a model is trained to determine whether a candidate phrase is a keyphrase. Supervised methods require a doc-ument set with human-assigned keyphrases as training set. In Web era, articles increase exponentially and change dynamically, which demands keyphrase extraction to be efficient and adaptable. However, since human labeling is time consuming, it is impractical to label training set from time to time. We thus focus on the unsupervised approach in this study.",
"cite_spans": [
{
"start": 115,
"end": 129,
"text": "(Turney, 1999)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the unsupervised approach, graph-based ranking methods are state-of-the-art (Mihalcea and Tarau, 2004) . These methods first build a word graph according to word co-occurrences within the document, and then use random walk techniques (e.g., PageRank) to measure word importance. After that, top ranked words are selected as keyphrases.",
"cite_spans": [
{
"start": 79,
"end": 105,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing graph-based methods maintain a single importance score for each word. However, a document (e.g., news article or research article) is usually composed of multiple semantic topics. Taking this paper for example, it refers to two major topics, \"keyphrase extraction\" and \"random walk\". As words are used to express various meanings corresponding to different semantic topics, a word will play different importance roles in different topics of the document. For example, the words \"phrase\" and \"extraction\" will be ranked to be more important in topic \"keyphrase extraction\", while the words \"graph\" and \"PageRank\" will be more important in topic \"random walk\". Since they do not take topics into account, graph-based methods may suffer from the following two problems:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Good keyphrases should be relevant to the major topics of the given document. In graphbased methods, the words that are strongly connected with other words tend to be ranked high, which do not necessarily guarantee they are relevant to major topics of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. An appropriate set of keyphrases should also have a good coverage of the document's major topics. In graph-based methods, the extracted keyphrases may fall into a single topic of the document and fail to cover other substantial topics of the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address the problem, it is intuitive to consider the topics of words and document in random walk for keyphrase extraction. In this paper, we propose to decompose traditional PageRank into multiple PageRanks specific to various topics and obtain the importance scores of words under different topics. After that, with the help of the document topics, we can further extract keyphrases that are relevant to the document and at the same time have a good coverage of the document's major topics. We call the topic-decomposed PageRank as Topical PageRank (TPR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In experiments we find that TPR can extract keyphrases with high relevance and good coverage, which outperforms other baseline methods under various evaluation metrics on two datasets. We also investigate the performance of TPR with different parameter values and demonstrate its robustness. Moreover, TPR is unsupervised and languageindependent, which is applicable in Web era with enormous information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "TPR for keyphrase extraction is a two-stage process:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. Build a topic interpreter to acquire the topics of words and documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2. Perform TPR to extract keyphrases for documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will introduce the two stages in Section 2 and Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To run TPR on a word graph, we have to acquire topic distributions of words. There are roughly two approaches that can provide topics of words: (1) Use manually annotated knowledge bases, e.g., Word-Net (Miller et al., 1990) ; (2) Use unsupervised machine learning techniques to obtain word topics from a large-scale document collection. Since the vocabulary in WordNet cannot cover many words in modern news and research articles, we employ the second approach to build topic interpreters for TPR.",
"cite_spans": [
{
"start": 203,
"end": 224,
"text": "(Miller et al., 1990)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building Topic Interpreters",
"sec_num": "2"
},
{
"text": "In machine learning, various methods have been proposed to infer latent topics of words and documents. These methods, known as latent topic models, derive latent topics from a large-scale document collection according to word occurrence information. Latent Dirichlet Allocation (LDA) (Blei et al., 2003) is a representative of topic models. Compared to Latent Semantic Analysis (LSA) (Landauer et al., 1998) and probabilistic LSA (pLSA) (Hofmann, 1999) , LDA has more feasibility for inference and can reduce the risk of over-fitting.",
"cite_spans": [
{
"start": 284,
"end": 303,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF0"
},
{
"start": 384,
"end": 407,
"text": "(Landauer et al., 1998)",
"ref_id": "BIBREF9"
},
{
"start": 437,
"end": 452,
"text": "(Hofmann, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building Topic Interpreters",
"sec_num": "2"
},
{
"text": "In LDA, each word w of a document d is regarded to be generated by first sampling a topic z from d's topic distribution \u03b8 (d) , and then sampling a word from the distribution over words \u03c6 (z) that characterizes topic z. In LDA, \u03b8 (d) and \u03c6 (z) are drawn from conjugate Dirichlet priors \u03b1 and \u03b2, separately. Therefore, \u03b8 and \u03c6 are integrated out and the probability of word w given document d and priors is represented as follows:",
"cite_spans": [
{
"start": 122,
"end": 125,
"text": "(d)",
"ref_id": null
},
{
"start": 188,
"end": 191,
"text": "(z)",
"ref_id": null
},
{
"start": 230,
"end": 233,
"text": "(d)",
"ref_id": null
},
{
"start": 240,
"end": 243,
"text": "(z)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building Topic Interpreters",
"sec_num": "2"
},
{
"text": "pr(w|d, \u03b1, \u03b2) = K z=1 pr(w|z, \u03b2)pr(z|d, \u03b1), (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Topic Interpreters",
"sec_num": "2"
},
{
"text": "where K is the number of topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building Topic Interpreters",
"sec_num": "2"
},
{
"text": "Using LDA, we can obtain the topic distribution of each word w, namely pr(z|w) for topic z \u2208 K. The word topic distributions will be used in TPR. Moreover, using the obtained word topic distributions, we can infer the topic distribution of a new document (Blei et al., 2003) , namely pr(z|d) for each topic z \u2208 K, which will be used for ranking keyphrases.",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building Topic Interpreters",
"sec_num": "2"
},
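{
"text": "How pr(z|w) is obtained from the LDA output is not spelled out here; below is a minimal sketch of one common route, inverting the topic-word distributions pr(w|z) with Bayes' rule under an assumed topic prior (the toy phi matrix and the uniform prior are illustrative, not a trained model):

```python
import numpy as np

# Toy topic-word matrix: phi[z][w] = pr(w|z) for K=2 topics over 3 words.
# These numbers are illustrative only; in the paper they come from LDA.
phi = np.array([
    [0.7, 0.2, 0.1],   # topic 0 concentrates on word 0
    [0.1, 0.2, 0.7],   # topic 1 concentrates on word 2
])
topic_prior = np.array([0.5, 0.5])  # assumed pr(z); uniform for the sketch

# Bayes' rule: pr(z|w) is proportional to pr(w|z) * pr(z).
joint = phi * topic_prior[:, None]        # shape (K, V)
pr_z_given_w = joint / joint.sum(axis=0)  # normalize over topics per word

print(pr_z_given_w[:, 0])  # word 0 leans strongly toward topic 0
```

Each column of pr_z_given_w sums to one, giving the per-word topic distribution that TPR uses as preference values.",
"section": "Building Topic Interpreters",
"sec_num": "2"
},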
{
"text": "After building a topic interpreter to acquire the topics of words and documents, we can perform keyphrase extraction for documents via TPR. Given a document d, the process of keyphrase extraction using TPR consists of the following four steps which is also illustrated in Fig. 1 :",
"cite_spans": [],
"ref_spans": [
{
"start": 272,
"end": 278,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Topical PageRank for Keyphrase Extraction",
"sec_num": "3"
},
{
"text": "1. Construct a word graph for d according to word co-occurrences within d. 2. Perform TPR to calculate the importance scores for each word with respect to different topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyphrase Extraction",
"sec_num": "3"
},
{
"text": "3. Using the topic-specific importance scores of words, rank candidate keyphrases respect to each topic separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyphrase Extraction",
"sec_num": "3"
},
{
"text": "4. Given the topics of document d, integrate the topic-specific rankings of candidate keyphrases into a final ranking, and the top ranked ones are selected as keyphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank for Keyphrase Extraction",
"sec_num": "3"
},
{
"text": "We construct a word graph according to word cooccurrences within the given document, which expresses the cohesion relationship between words in the context of document. The document is regarded as a word sequence, and the link weights between words is simply set to the co-occurrence count within a sliding window with maximum W words in the word sequence. It was reported in (Mihalcea and Tarau, 2004 ) the graph direction does not influence the performance of keyphrase extraction very much. In this paper we simply construct word graphs with directions. The link directions are determined as follows. When sliding a W -width window, at each position, we add links from the first word pointing to other words within the window. Since keyphrases are usually noun phrases, we only add adjectives and nouns in word graph.",
"cite_spans": [
{
"start": 376,
"end": 401,
"text": "(Mihalcea and Tarau, 2004",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing Word Graph",
"sec_num": "3.1"
},
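{
"text": "The construction above can be sketched as follows (our own illustration; the function name is hypothetical, and the input is assumed to be the document's adjectives and nouns in order, i.e., POS filtering has already been done):

```python
from collections import defaultdict

def build_word_graph(words, window=10):
    """Directed co-occurrence graph: within each W-word sliding window,
    add a link from the first word to every later word in the window;
    the link weight e(w_i, w_j) is the co-occurrence count."""
    edges = defaultdict(int)
    for i, w_i in enumerate(words):
        for w_j in words[i + 1 : i + window]:  # the W-1 words after w_i
            if w_j != w_i:
                edges[(w_i, w_j)] += 1
    return edges

edges = build_word_graph(
    ["keyphrase", "extraction", "topic", "keyphrase", "extraction"],
    window=3)
print(edges[("keyphrase", "extraction")])  # co-occurs twice -> 2
```",
"section": "Constructing Word Graph",
"sec_num": "3.1"
},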
{
"text": "Before introducing TPR, we first give some formal notations. We denote G = (V, E) as the graph of a document, with vertex set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "V = {w 1 , w 2 , \u2022 \u2022 \u2022 , w N } and link set (w i , w j ) \u2208 E if",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "there is a link from w i to w j . In a word graph, each vertex represents a word, and each link indicates the relatedness between words. We denote the weight of link (w i , w j ) as e(w i , w j ), and the out-degree of vertex",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "w i as O(w i ) = j:w i \u2192w j e(w i , w j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "Topical PageRank is based on PageRank (Page et al., 1998) . PageRank is a well known ranking algorithm that uses link information to assign global importance scores to web pages. The basic idea of PageRank is that a vertex is important if there are other important vertices pointing to it. This can be regarded as voting or recommendation among vertices. In PageRank, the score",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "(Page et al., 1998)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(w i ) of word w i is defined as R(w i ) = \u03bb j:w j \u2192w i e(w j , w i ) O(w j ) R(w j ) + (1 \u2212 \u03bb) 1 |V | ,",
"eq_num": "(2)"
}
],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "where \u03bb is a damping factor range from 0 to 1, and |V | is the number of vertices. The damping factor indicates that each vertex has a probability of (1 \u2212 \u03bb) to perform random jump to another vertex within this graph. PageRank scores are obtained by running Eq. (2) iteratively until convergence. The second term in Eq. (2) can be regarded as a smoothing factor to make the graph fulfill the property of being aperiodic and irreducible, so as to guarantee that PageRank converges to a unique stationary dis-tribution. In PageRank, the second term is set to be the same value 1 |V | for all vertices within the graph, which indicates there are equal probabilities of random jump to all vertices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "In fact, the second term of PageRank in Eq. (2) can be set to be non-uniformed. Suppose we assign larger probabilities to some vertices, the final PageRank scores will prefer these vertices. We call this Biased PageRank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "The idea of Topical PageRank (TPR) is to run Biased PageRank for each topic separately. Each topic-specific PageRank prefers those words with high relevance to the corresponding topic. And the preferences are represented using random jump probabilities of words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "Formally, in the PageRank of a specific topic z, we will assign a topic-specific preference value p z (w) to each word w as its random jump probability with w\u2208V p z (w) = 1. The words that are more relevant to topic z will be assigned larger probabilities when performing the PageRank. For topic z, the topic-specific PageRank scores are defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "R z (w i ) = \u03bb j:w j \u2192w i e(w j , w i ) O(w j ) R z (w j )+(1\u2212\u03bb)p z (w i ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "(3) In Fig. 1 , we show an example with two topics. In this figure, we use the size of circles to indicate how relevant the word is to the topic. In the PageRanks of the two topics, high preference values will be assigned to different words with respect to the topic. Finally, the words will get different PageRank values in the two PageRanks.",
"cite_spans": [],
"ref_spans": [
{
"start": 7,
"end": 13,
"text": "Fig. 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "The setting of preference values p z (w) will have a great influence to TPR. In this paper we use three measures to set preference values for TPR:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "\u2022 p z (w) = pr(w|z), is the probability that word w occurs given topic z. This indicates how much that topic z focuses on word w.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "\u2022 p z (w) = pr(z|w), is the probability of topic z given word w. This indicates how much that word w focuses on topic z.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "\u2022 p z (w) = pr(w|z) \u00d7 pr(z|w), is the product of hub and authority values. This measure is inspired by the work in (Cohn and Chang, 2000) .",
"cite_spans": [
{
"start": 115,
"end": 137,
"text": "(Cohn and Chang, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
{
"text": "Both PageRank and TPR are all iterative algorithms. We terminate the algorithms when the number of iterations reaches 100 or the difference of each vertex between two neighbor iterations is less than 0.001.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topical PageRank",
"sec_num": "3.2"
},
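{
"text": "Eq. (3) together with the stopping criteria above can be sketched as follows (a minimal illustration, not the authors' code; edges maps word pairs to weights e(w_i, w_j) as in Section 3.1, and pref holds the p_z(w) values for one topic):

```python
def topical_pagerank(edges, vertices, pref, lam=0.3, max_iter=100, tol=0.001):
    """One topic-specific biased PageRank:
    R_z(w_i) = lam * sum_j e(w_j,w_i)/O(w_j) * R_z(w_j) + (1-lam) * p_z(w_i)."""
    out_deg = {w: 0.0 for w in vertices}
    for (w_i, w_j), e in edges.items():
        out_deg[w_i] += e
    rank = {w: 1.0 / len(vertices) for w in vertices}
    for _ in range(max_iter):
        new_rank = {w: (1 - lam) * pref.get(w, 0.0) for w in vertices}
        for (w_i, w_j), e in edges.items():
            new_rank[w_j] += lam * e / out_deg[w_i] * rank[w_i]
        diff = max(abs(new_rank[w] - rank[w]) for w in vertices)
        rank = new_rank
        if diff < tol:  # converged before the 100-iteration cap
            break
    return rank
```

Running this once per topic z, with pref set to one of the three measures above (pr(w|z), pr(z|w), or their product), yields the topic-specific scores R_z(w).",
"section": "Topical PageRank",
"sec_num": "3.2"
},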
{
"text": "After obtaining word ranking scores using TPR, we begin to rank candidate keyphrases. As reported in (Hulth, 2003) , most manually assigned keyphrases turn out to be noun phrases. We thus select noun phrases from a document as candidate keyphrases for ranking.",
"cite_spans": [
{
"start": 101,
"end": 114,
"text": "(Hulth, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
{
"text": "The candidate keyphrases of a document is obtained as follows. The document is first tokenized. After that, we annotate the document with partof-speech (POS) tags 1 . Third, we extract noun phrases with pattern (adjective) * (noun)+, which represents zero or more adjectives followed by one or more nouns. We regard these noun phrases as candidate keyphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
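{
"text": "The pattern match can be sketched over a POS-tagged token list as follows (our own illustration; Penn-Treebank-style tags are an assumption, since the paper does not name its tagger or tag set):

```python
import re

def extract_candidates(tagged):
    """Return noun phrases matching (adjective)*(noun)+ from (word, tag) pairs."""
    # Encode the tag sequence: 'a' = adjective, 'n' = noun, 'x' = anything else.
    code = "".join(
        "a" if tag.startswith("JJ") else "n" if tag.startswith("NN") else "x"
        for _, tag in tagged)
    return [
        " ".join(w for w, _ in tagged[m.start():m.end()])
        for m in re.finditer(r"a*n+", code)]

tagged = [("topical", "JJ"), ("pagerank", "NN"), ("is", "VBZ"),
          ("a", "DT"), ("ranking", "NN"), ("algorithm", "NN")]
print(extract_candidates(tagged))  # ['topical pagerank', 'ranking algorithm']
```",
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},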
{
"text": "After identifying candidate keyphrases, we rank them using the ranking scores obtained by TPR. In PageRank for keyphrase extraction, the ranking score of a candidate keyphrase p is computed by summing up the ranking scores of all words within the phrase: R(p) = w i \u2208p R(w i ) (Mihalcea and Tarau, 2004; Wan and Xiao, 2008a; Wan and Xiao, 2008b) . Then candidate keyphrases are ranked in descending order of ranking scores. The top M candidates are selected as keyphrases.",
"cite_spans": [
{
"start": 277,
"end": 303,
"text": "(Mihalcea and Tarau, 2004;",
"ref_id": "BIBREF13"
},
{
"start": 304,
"end": 324,
"text": "Wan and Xiao, 2008a;",
"ref_id": "BIBREF22"
},
{
"start": 325,
"end": 345,
"text": "Wan and Xiao, 2008b)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
{
"text": "In TPR for keyphrase extraction, we first compute the ranking scores of candidate keyphrases separately for each topic. That is for each topic z we compute",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R z (p) = w i \u2208p R z (w i ).",
"eq_num": "(4)"
}
],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
{
"text": "By considering the topic distribution of document, We further integrate topic-specific rankings of candidate keyphrases into a final ranking and extract top-ranked ones as the keyphrases of the document. Denote the topic distribution of the document d as pr(z|d) for each topic z. For each candidate keyphrase p, we compute its final ranking score as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(p) = K z=1 R z (p) \u00d7 pr(z|d).",
"eq_num": "(5)"
}
],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
{
"text": "After ranking candidate phrases in descending order of their integrated ranking scores, we select the top M as the keyphrases of document d.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},
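{
"text": "Equations (4) and (5) amount to a topic-weighted sum of word scores per phrase; a minimal sketch (the data structures and names are ours):

```python
def rank_phrases(candidates, topic_ranks, doc_topics, m=10):
    """Eq. (4): R_z(p) = sum of R_z(w_i) over words w_i in phrase p.
    Eq. (5): R(p) = sum over topics z of R_z(p) * pr(z|d).
    topic_ranks[z] maps word -> R_z(w); doc_topics[z] = pr(z|d)."""
    scores = {}
    for p in candidates:
        words = p.split()
        scores[p] = sum(
            doc_topics[z] * sum(topic_ranks[z].get(w, 0.0) for w in words)
            for z in range(len(doc_topics)))
    # descending order of the integrated score; keep the top M
    return sorted(scores, key=scores.get, reverse=True)[:m]

topic_ranks = [{"graph": 0.4, "walk": 0.3}, {"phrase": 0.5}]
doc_topics = [0.7, 0.3]
print(rank_phrases(["random walk", "phrase"], topic_ranks, doc_topics))
```",
"section": "Extract Keyphrases Using Ranking Scores",
"sec_num": "3.3"
},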
{
"text": "To evaluate the performance of TPR for keyphrase extraction, we carry out experiments on two datasets. One dataset was built by Wan and Xiao 2 which was used in (Wan and Xiao, 2008b) . This dataset contains 308 news articles in DUC2001 (Over et al., 2001 ) with 2, 488 manually annotated keyphrases. There are at most 10 keyphrases for each document. In experiments we refer to this dataset as NEWS.",
"cite_spans": [
{
"start": 161,
"end": 182,
"text": "(Wan and Xiao, 2008b)",
"ref_id": "BIBREF23"
},
{
"start": 236,
"end": 254,
"text": "(Over et al., 2001",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets",
"sec_num": "4"
},
{
"text": "The other dataset was built by Hulth 3 which was used in (Hulth, 2003) . This dataset contains 2, 000 abstracts of research articles and 19, 254 manually annotated keyphrases. In experiments we refer to this dataset as RESEARCH.",
"cite_spans": [
{
"start": 57,
"end": 70,
"text": "(Hulth, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets",
"sec_num": "4"
},
{
"text": "Since neither NEWS nor RESEARCH itself is large enough to learn efficient topics, we use the Wikipedia snapshot at March 2008 4 to build topic interpreters with LDA. After removing non-article pages and the articles shorter than 100 words, we collected 2, 122, 618 articles. After tokenization, stop word removal and word stemming, we build the vocabulary by selecting 20, 000 words according to their document frequency. We learn LDA models by taking each Wikipedia article as a document. In experiments we learned several models with different numbers of topics, from 50 to 1, 500 respectively. For the words absent in topic models, we simply set the topic distribution of the word as uniform distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments 4.1 Datasets",
"sec_num": "4"
},
{
"text": "For evaluation, the words in both standard and extracted keyphrases are reduced to base forms using Porter Stemmer 5 for comparison. In experiments we select three evaluation metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "The first metric is precision/recall/F-measure represented as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p = c correct c extract , r = c correct c standard , f = 2pr p + r ,",
"eq_num": "(6)"
}
],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "where c correct is the total number of correct keyphrases extracted by a method, c extract the total number of automatic extracted keyphrases, and c standard the total number of human-labeled standard keyphrases. We note that the ranking order of extracted keyphrases also indicates the method performance. An extraction method will be better than another one if it can rank correct keyphrases higher. However, precision/recall/F-measure does not take the order of extracted keyphrases into account. To address the problem, we select the following two additional metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "One metric is binary preference measure (Bpref) (Buckley and Voorhees, 2004) . Bpref is desirable to evaluate the performance considering the order in which the extracted keyphrases are ranked. For a document, if there are R correct keyphrases within M extracted keyphrases by a method, in which r is a correct keyphrase and n is an incorrect keyphrase, Bpref is defined as follows,",
"cite_spans": [
{
"start": 48,
"end": 76,
"text": "(Buckley and Voorhees, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Bpref = 1 R r\u2208R 1 \u2212 |n ranked higher than r| M .",
"eq_num": "(7)"
}
],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "The other metric is mean reciprocal rank (MRR) (Voorhees, 2000) which is used to evaluate how the first correct keyphrase for each document is ranked. For a document d, rank d is denoted as the rank of the first correct keyphrase with all extracted keyphrases, MRR is defined as follows,",
"cite_spans": [
{
"start": 47,
"end": 63,
"text": "(Voorhees, 2000)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "MRR = 1 |D| d\u2208D 1 rank d ,",
"eq_num": "(8)"
}
],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "where D is the document set for keyphrase extraction. Note that although the evaluation scores of most keyphrase extractors are still lower compared to other NLP-tasks, it does not indicate the performance is poor because even different annotators may assign different keyphrases to the same document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
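{
"text": "For a single document, Eqs. (6)-(8) can be computed as sketched below (our own illustration; per-document values are then averaged over D, and the handling of zero-correct cases is an assumption):

```python
def evaluate(extracted, standard):
    """Per-document precision/recall/F (Eq. 6), Bpref (Eq. 7) and the
    reciprocal rank used by MRR (Eq. 8). `extracted` is rank-ordered;
    both lists are assumed already stemmed."""
    gold = set(standard)
    hits = [p in gold for p in extracted]
    c = sum(hits)
    prec = c / len(extracted) if extracted else 0.0
    rec = c / len(standard) if standard else 0.0
    f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    # Bpref: each correct phrase is penalized by the incorrect ones above it.
    m, bpref, wrong_above = len(extracted), 0.0, 0
    for hit in hits:
        if hit:
            bpref += 1 - wrong_above / m
        else:
            wrong_above += 1
    bpref = bpref / c if c else 0.0
    # Reciprocal rank of the first correct keyphrase (0 if there is none).
    rr = next((1 / (i + 1) for i, hit in enumerate(hits) if hit), 0.0)
    return prec, rec, f, bpref, rr
```",
"section": "Evaluation Metrics",
"sec_num": "4.2"
},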
{
"text": "There are four parameters in TPR that may influence the performance of keyphrase extraction including:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Influences of Parameters to TPR",
"sec_num": "4.3"
},
{
"text": "(1) window size W for constructing word graph, (2) the number of topics K learned by LDA, (3) different settings of preference values p z (w), and (4) damping factor \u03bb of TPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Influences of Parameters to TPR",
"sec_num": "4.3"
},
{
"text": "In this section, we look into the influences of these parameters to TPR for keyphrase extraction. Except the parameter under investigation, we set parameters to the following values: W = 10, K = 1, 000, \u03bb = 0.3 and p z (w) = pr(z|w), which are the settings when TPR achieves the best (or near best) performance on both NEWS and RESEARCH. In the following tables, we use \"Pre.\", \"Rec.\" and \"F.\" as the abbreviations of precision, recall and F-measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Influences of Parameters to TPR",
"sec_num": "4.3"
},
{
"text": "In experiments on NEWS, we find that the performance of TPR is stable when W ranges from 5 to 20 as shown in Table 1 . This observation is consistent with the findings reported in (Wan and Xiao, 2008b Similarly, when W ranges from 2 to 10, the performance on RESEARCH does not change much. However, the performance on NEWS will become poor when W = 20. This is because the abstracts in RESEARCH (there are 121 words per abstract on average) are much shorter than the news articles in NEWS (there are 704 words per article on average). If the window size W is set too large on RESEARCH, the graph will become full-connected and the weights of links will tend to be equal, which cannot capture the local structure information of abstracts for keyphrase extraction.",
"cite_spans": [
{
"start": 180,
"end": 200,
"text": "(Wan and Xiao, 2008b",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Window Size W",
"sec_num": "4.3.1"
},
{
"text": "We demonstrate the influence of the number of topics K of LDA models in Table 2. Table 2 shows the results when K ranges from 50 to 1, 500 and M = 10 on NEWS. We observe that the performance does not change much as the number of topics varies until the number is much smaller (K = 50). The influence is similar on RESEARCH which indicates that LDA is appropriate for obtaining topics of words and documents for TPR to extract keyphrases. Table 2 : Influence of the number of topics K when the number of keyphrases M = 10 on NEWS.",
"cite_spans": [],
"ref_spans": [
{
"start": 72,
"end": 88,
"text": "Table 2. Table 2",
"ref_id": null
},
{
"start": 438,
"end": 445,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Number of Topics K",
"sec_num": "4.3.2"
},
{
"text": "Damping factor \u03bb of TPR reconciles the influences of graph walks (the first term in Eq. 3)and preference values (the second term in Eq.(3)) to the topic-specific PageRank scores. We demonstrate the influence of \u03bb on NEWS in Fig. 2 . This figure shows the precision/recall/F-measure when \u03bb = 0.1, 0.3, 0.5, 0.7, 0.9 and M ranges from 1 to 20. From this figure we find that, when \u03bb is set from 0.2 to 0.7, the performance is consistently good. The values of Bpref and MRR also keep stable with the variations of \u03bb.",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 230,
"text": "Fig. 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Damping Factor \u03bb",
"sec_num": "4.3.3"
},
{
"text": "Finally, we explore the influences of different settings of preference values for TPR in Eq.(3). In Table 3 we show the influence when the number of keyphrases M = 10 on NEWS. From the table, we observe that pr(z|w) performs the best. The similar observation is also got on RESEARCH.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Values",
"sec_num": "4.3.4"
},
{
"text": "In keyphrase extraction task, it is required to find the keyphrases that can appropriately represent the topics of the document. It thus does not want to extract those phrases that may appear in multiple topics like common words. The measure pr(w|z) assigns preference values according to how frequently that words appear in the given topic. Therefore, the common words will always be assigned to a relatively large value in each topic-specific PageRank and finally obtain a high rank. pr(w|z) is thus not a good setting of preference values in TPR. In the contrast, pr(z|w) prefers those words that are focused on the given topic. Using pr(z|w) to set preference values for TPR, we will tend to extract topic-focused phrases as keyphrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preference Values",
"sec_num": "4.3.4"
},
{
"text": "Bpref MRR pr(w|z) 0.256 0.316 0.283 0.192 0.584 pr(z|w) 0.282 0.348 0.312 0.214 0.638 prod 0.259 0.320 0.286 0.193 0.587 Table 3 : Influence of three preference value settings when the number of keyphrases M = 10 on NEWS.",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Rec. F.",
"sec_num": null
},
{
"text": "After we explore the influences of parameters to TPR, we obtain the best results on both NEWS and RESEARCH. We further select three baseline methods, i.e., TFIDF, PageRank and LDA, to compare with TPR. The TFIDF computes the ranking scores of words based on words' tf idf values in the document, namely R(w) = tf w \u00d7 log(idf w ). While in PageRank (i.e., TextRank), the ranking scores of words are obtained using Eq.(2). The two baselines do not use topic information of either words or documents. The LDA computes the ranking score for each word using the topical similarity between the word and the document. Given the topics of the document d and a word w, We have used various methods to com-pute similarity including cosine similarity, predictive likelihood and KL-divergence (Heinrich, 2005) , among which cosine similarity performs the best on both datasets. Therefore, we only show the results of the LDA baseline calculated using cosine similarity.",
"cite_spans": [
{
"start": 781,
"end": 797,
"text": "(Heinrich, 2005)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparing with Baseline Methods",
"sec_num": "4.4"
},
{
"text": "In Tables 4 and 5 we show the comparing results of the four methods on both NEWS and RESEARCH. Since the average number of manual-labeled keyphrases on NEWS is larger than RESEARCH, we set M = 10 for NEWS and M = 5 for RESEARCH. The parameter settings on both NEWS and RESEARCH have been stated in Section 4.3.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 17,
"text": "Tables 4 and 5",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Comparing with Baseline Methods",
"sec_num": "4.4"
},
{
"text": "Pre. From the two tables, we have the following observations. First, TPR outperform all baselines on both datasets. The improvements are all statistically significant tested with bootstrap re-sampling with 95% confidence. This indicates the robustness and effectiveness of TPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": null
},
{
"text": "Second, LDA performs equal or better than TFIDF and PageRank under precision/recall/Fmeasure. However, the performance of LDA under MRR is much worse than TFIDF and PageRank, which indicates LDA fails to correctly extract the first keyphrase earlier than other methods. The reason is: (1) LDA does not consider the local structure information of document as PageRank, and (2) LDA also does not consider the frequency information of words within the document. In the contrast, TPR enjoys the advantages of both LDA and TFIDF/PageRank, by using the external topic information like LDA and internal document structure like TFIDF/PageRank. Moreover, in Figures 3 and 4 we show the precision-recall relations of four methods on NEWS and RESEARCH. Each point on the precision-recall curve is evaluated on different numbers of extracted keyphrases M . The closer the curve to the upper right, the better the overall performance. The results again illustrate the superiority of TPR.",
"cite_spans": [],
"ref_spans": [
{
"start": 649,
"end": 664,
"text": "Figures 3 and 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Rec",
"sec_num": null
},
{
"text": "At the end, in Table 6 we show an example of extracted keyphrases using TPR from a news article with title \"Arafat Says U.S. Threatening to Kill PLO Officials\" (The article number in DUC2001 is AP880510-0178). Here we only show the top 10 keyphrases, and the correctly extracted ones are marked with \"(+)\". We also mark the number of correctly extracted keyphrases after method name like \"(+7)\" after TPR. We also illustrate the top 3 topics of the document with their topicspecific keyphrases. It is obvious that the top topics, on \"Palestine\", \"Israel\" and \"terrorism\" separately, have a good coverage on the discussion objects of this article, which also demonstrate a good diversity with each other. By integrating these topic-specific keyphrases considering the proportions of these topics, we obtain the best performance of keyphrase extraction using TPR.",
"cite_spans": [],
"ref_spans": [
{
"start": 15,
"end": 22,
"text": "Table 6",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Extracting Example",
"sec_num": "4.5"
},
{
"text": "In Table 7 we also show the extracted keyphrases of baselines from the same news article. For TFIDF, it only considered the frequency properties of words, and thus highly ranked the phrases with \"PLO\" which appeared about 16 times in this article, and failed to extract the keyphrases on topic \"Israel\". LDA only measured the importance of words using document topics without considering the frequency information of words and thus missed keyphrases with high-frequency words. For example, LDA failed to extract keyphrase \"political assassination\", in which the word \"assassination\" occurred 8 times in this article.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 7",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Extracting Example",
"sec_num": "4.5"
},
{
"text": "In this paper we proposed TPR for keyphrase extraction. A pioneering achievement in keyphrase extraction was carried out in (Turney, 1999) which regarded keyphrase extraction as a classification task. Generally, the supervised methods need manually annotated training set which is time-consuming and in this paper we focus on unsupervised method. Starting with TextRank (Mihalcea and Tarau, 2004) , graph-based ranking methods are becoming the most widely used unsupervised approach for keyphrase extraction. Litvak and Last (2008) applied HITS algorithm on the word graph of a document for keyphrase extraction. Although HITS itself worked the similar performance to PageRank, we plan to explore the integration of topics and HITS in future work. Wan (2008b; 2008a ) used a small number of nearest neighbor documents to provide more knowledge for keyphrase extraction. Some methods used clustering techniques on word graphs for keyphrase extraction (Grineva et al., 2009; Liu et al., 2009) . The clustering-based method performed well on short abstracts (with F-measure 0.382 on RESEARCH) but poorly on long articles (NEWS with F-measure score 0.216) due to two non-trivial issues: (1) how to determine the number of clus- ters, and (2) how to weight each cluster and select keyphrases from the clusters. In this paper we focus on improving graph-based methods via topic decomposition, we thus only compare with PageRank as well as TFIDF and LDA and do not compare with clustering-based methods in details.",
"cite_spans": [
{
"start": 124,
"end": 138,
"text": "(Turney, 1999)",
"ref_id": "BIBREF19"
},
{
"start": 370,
"end": 396,
"text": "(Mihalcea and Tarau, 2004)",
"ref_id": "BIBREF13"
},
{
"start": 509,
"end": 531,
"text": "Litvak and Last (2008)",
"ref_id": "BIBREF10"
},
{
"start": 748,
"end": 759,
"text": "Wan (2008b;",
"ref_id": "BIBREF23"
},
{
"start": 760,
"end": 765,
"text": "2008a",
"ref_id": "BIBREF22"
},
{
"start": 950,
"end": 972,
"text": "(Grineva et al., 2009;",
"ref_id": "BIBREF3"
},
{
"start": 973,
"end": 990,
"text": "Liu et al., 2009)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In recent years, two algorithms were proposed to rank web pages by incorporating topic information of web pages within PageRank (Haveliwala, 2002; Nie et al., 2006) . The method in (Haveliwala, 2002) , is similar to TPR which also decompose PageRank into various topics. However, the method in (Haveliwala, 2002) only considered to set the preference values using pr(w|z) (In the context of (Haveliwala, 2002), w indicates Web pages). In Section 4.3.4 we have shown that the setting of using pr(z|w) is much better than pr(w|z). Nie et al. (2006) proposed a more complicated ranking method. In this method, topical PageRanks are performed together. The basic idea of (Nie et al., 2006) is, when surfing following a graph link from vertex w i to w j , the ranking score on topic z of w i will have a higher probability to pass to the same topic of w j and have a lower probability to pass to a different topic of w j . When the inter-topic jump probability is 0, this method is identical to (Haveli-wala, 2002) . We implemented the method and found that the random jumps between topics did not help improve the performance for keyphrase extraction, and did not demonstrate the results of this method.",
"cite_spans": [
{
"start": 128,
"end": 146,
"text": "(Haveliwala, 2002;",
"ref_id": "BIBREF4"
},
{
"start": 147,
"end": 164,
"text": "Nie et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 181,
"end": 199,
"text": "(Haveliwala, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 294,
"end": 312,
"text": "(Haveliwala, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 529,
"end": 546,
"text": "Nie et al. (2006)",
"ref_id": "BIBREF16"
},
{
"start": 667,
"end": 685,
"text": "(Nie et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 990,
"end": 1009,
"text": "(Haveli-wala, 2002)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "TPR (+7)",
"sec_num": null
},
{
"text": "In this paper we propose a new graph-based framework, Topical PageRank, which incorporates topic information within random walk for keyphrase extraction. Experiments on two datasets show that TPR achieves better performance than other baseline methods. We also investigate the influence of various parameters on TPR, which indicates the effectiveness and robustness of the new method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "We consider the following research directions as future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "1. In this paper we obtained latent topics using LDA learned from Wikipedia. We design to obtain topics using other machine learning methods and from other knowledge bases, and investigate the influence to performance of keyphrase extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "2. In this paper we integrated topic information in PageRank. We plan to consider topic information in other graph-based ranking algorithms such as HITS (Kleinberg, 1999) .",
"cite_spans": [
{
"start": 153,
"end": 170,
"text": "(Kleinberg, 1999)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "3. In this paper we used Wikipedia to train LDA by assuming Wikipedia is an extensive snapshot of human knowledge which can cover most topics talked about in NEWS and RESEARCH. In fact, the learned topics are highly dependent on the learning corpus. We will investigate the influence of corpus selection in training LDA for keyphrase extraction using TPR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "In experiments we use Stanford POS Tagger from http: //nlp.stanford.edu/software/tagger.shtml with English tagging model left3words-distsim-wsj.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://wanxiaojun1979.googlepages.com. 3 It was obtained from the author. 4 http://en.wikipedia.org/wiki/Wikipedia_ database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://tartarus.org/\u02dcmartin/ PorterStemmer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported by the National Natural Science Foundation of China under Grant No. 60873174. The authors would like to thank Anette Hulth and Xiaojun Wan for kindly sharing their datasets. The authors would also thank Xiance Si, Tom Chao Zhou, Peng Li for their insightful suggestions and comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Latent dirichlet allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, January.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Retrieval evaluation with incomplete information",
"authors": [
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "E",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Buckley and E.M. Voorhees. 2004. Retrieval evalu- ation with incomplete information. In Proceedings of SIGIR, pages 25-32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning to probabilistically identify authoritative documents",
"authors": [
{
"first": "David",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Huan",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "167--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Cohn and Huan Chang. 2000. Learning to prob- abilistically identify authoritative documents. In Pro- ceedings of ICML, pages 167-174.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extracting key terms from noisy and multi-theme documents",
"authors": [
{
"first": "M",
"middle": [],
"last": "Grineva",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Grinev",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lizorkin",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "661--670",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Grineva, M. Grinev, and D. Lizorkin. 2009. Extract- ing key terms from noisy and multi-theme documents. In Proceedings of WWW, pages 661-670.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Topic-sensitive pagerank",
"authors": [
{
"first": "H",
"middle": [],
"last": "Taher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Haveliwala",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of WWW",
"volume": "",
"issue": "",
"pages": "517--526",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taher H. Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of WWW, pages 517-526.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Parameter estimation for text analysis",
"authors": [
{
"first": "G",
"middle": [],
"last": "Heinrich",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Heinrich. 2005. Parameter estimation for text anal- ysis. Web: http://www. arbylon. net/publications/text- est.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Probabilistic latent semantic indexing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "50--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of SIGIR, pages 50-57.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improved automatic keyword extraction given more linguistic knowledge",
"authors": [
{
"first": "Anette",
"middle": [],
"last": "Hulth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "216--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anette Hulth. 2003. Improved automatic keyword ex- traction given more linguistic knowledge. In Proceed- ings of EMNLP, pages 216-223.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Authoritative sources in a hyperlinked environment",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of the ACM",
"volume": "46",
"issue": "5",
"pages": "604--632",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.M. Kleinberg. 1999. Authoritative sources in a hyper- linked environment. Journal of the ACM, 46(5):604- 632.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An introduction to latent semantic analysis. Discourse Processes",
"authors": [
{
"first": "T",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Foltz",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Laham",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "25",
"issue": "",
"pages": "259--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.K. Landauer, P.W. Foltz, and D. Laham. 1998. An in- troduction to latent semantic analysis. Discourse Pro- cesses, 25:259-284.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Graph-based keyword extraction for single-document summarization",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Litvak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Last",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the workshop Multi-source Multilingual Information Extraction and Summarization",
"volume": "",
"issue": "",
"pages": "17--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marina Litvak and Mark Last. 2008. Graph-based key- word extraction for single-document summarization. In Proceedings of the workshop Multi-source Mul- tilingual Information Extraction and Summarization, pages 17-24.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Clustering to find exemplar terms for keyphrase extraction",
"authors": [
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "257--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhiyuan Liu, Peng Li, Yabin Zheng, and Maosong Sun. 2009. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of EMNLP, pages 257- 266.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Foundations of statistical natural language processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C.D. Manning and H. Schutze. 2000. Foundations of statistical natural language processing. MIT Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Textrank: Bringing order into texts",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Tarau",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "404--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Paul Tarau. 2004. Textrank: Bring- ing order into texts. In Proceedings of EMNLP, pages 404-411.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WordNet: An on-line lexical database",
"authors": [
{
"first": "George",
"middle": [
"A"
],
"last": "Miller",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Beckwith",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "",
"pages": "235--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller, Richard Beckwith, Christiane Fell- baum, Derek Gross, and Katherine Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3:235-244.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Keyphrase extraction in scientific publications",
"authors": [
{
"first": "Thuy",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th International Conference on Asian Digital Libraries",
"volume": "",
"issue": "",
"pages": "317--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thuy Nguyen and Min-Yen Kan. 2007. Keyphrase ex- traction in scientific publications. In Proceedings of the 10th International Conference on Asian Digital Li- braries, pages 317-326.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Topical link analysis for web search",
"authors": [
{
"first": "Lan",
"middle": [],
"last": "Nie",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D"
],
"last": "Davison",
"suffix": ""
},
{
"first": "Xiaoguang",
"middle": [],
"last": "Qi",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of SIGIR",
"volume": "",
"issue": "",
"pages": "91--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lan Nie, Brian D. Davison, and Xiaoguang Qi. 2006. Topical link analysis for web search. In Proceedings of SIGIR, pages 91-98.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction to duc-2001: An intrinsic evaluation of generic news text summarization systems",
"authors": [
{
"first": "P",
"middle": [],
"last": "Over",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Liggett",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sakharov",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Thatcher",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of DUC2001",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Over, W. Liggett, H. Gilbert, A. Sakharov, and M. Thatcher. 2001. Introduction to duc-2001: An in- trinsic evaluation of generic news text summarization systems. In Proceedings of DUC2001.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The pagerank citation ranking: Bringing order to the web",
"authors": [
{
"first": "L",
"middle": [],
"last": "Page",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Brin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Motwani",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Winograd",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Page, S. Brin, R. Motwani, and T. Winograd. 1998. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Tech- nologies Project, 1998.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning to extract keyphrases from text. National Research Council Canada, Institute for Information Technology",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 1999. Learning to extract keyphrases from text. National Research Council Canada, In- stitute for Information Technology, Technical Report ERB-1057.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Learning algorithms for keyphrase extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2000,
"venue": "Information Retrieval",
"volume": "2",
"issue": "4",
"pages": "303--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D. Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, 2(4):303-336.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The trec-8 question answering track report",
"authors": [
{
"first": "M",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of TREC",
"volume": "",
"issue": "",
"pages": "77--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Voorhees. 2000. The trec-8 question answering track report. In Proceedings of TREC, pages 77-82.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Collabrank: Towards a collaborative approach to single-document keyphrase extraction",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of COLING",
"volume": "",
"issue": "",
"pages": "969--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan and Jianguo Xiao. 2008a. Collabrank: Towards a collaborative approach to single-document keyphrase extraction. In Proceedings of COLING, pages 969-976.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Single document keyphrase extraction using neighborhood knowledge",
"authors": [
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "855--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaojun Wan and Jianguo Xiao. 2008b. Single document keyphrase extraction using neighborhood knowledge. In Proceedings of AAAI, pages 855-860.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Topical PageRank for Keyphrase Extraction.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Precision, recall and F-measure of TPR with \u03bb = 0.1, 0.3, 0.5, 0.7 and 0.9 when M ranges from 1 to 20 on NEWS.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Precision-recall results on NEWS when M ranges from 1 to 20.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Precision-recall results on RESEARCH when M ranges from 1 to 10.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "PLO leader Yasser Arafat(+), Abu Jihad, Khalil Wazir(+), slaying Wazir, political assassination(+), Palestinian guerrillas(+), particulary Palestinian circles, Israeli officials(+), Israeli squad(+), terrorist attacks(+) TPR, Rank 1 Topic on \"Palestine\" PLO leader Yasser Arafat(+), United States(+), State Department spokesman Charles Redman, Abu Jihad, U.S. government document, Palestine Liberation Organization leader, political assassination(+), Israeli officials(+), alleged document TPR, Rank 2 Topic on \"Israel\" PLO leader Yasser Arafat(+), United States(+), Palestine Liberation Organization leader, Israeli officials(+), U.S. government document, alleged document, Arab government, slaying Wazir, State Department spokesman Charles Redman, Khalil Wazir(+) TPR, Rank 3 Topic on \"terrorism\" terrorist attacks(+), PLO leader Yasser Arafat(+), Abu Jihad, United States(+), alleged document, U.S. government document, Palestine Liberation Organization leader, State Department spokesman Charles Redman, political assassination(+), full cooperation",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF6": {
"text": "Arafat(+), PLO attacks, PLO offices, PLO officials(+), PLO leaders, Abu Jihad, terrorist attacks(+), Khalil Wazir(+), slaying wazir, political assassination(+) PageRank (+3) PLO leader Yasser Arafat(+), PLO officials(+), PLO attacks, United States(+), PLO offices, PLO leaders, State Department spokesman Charles Redman, U.S. government document, alleged document, Abu Jihad LDA (+5) PLO leader Yasser Arafat(+), Palestine Liberation Organization leader, Khalil Wazir(+), Palestinian guerrillas(+), Abu Jihad, Israeli officials(+), particulary Palestinian circles, Arab government, State Department spokesman Charles Redman, Israeli squad(+)",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF0": {
"type_str": "table",
"text": "Influence of window size W when the number of keyphrases M = 10 on NEWS.",
"html": null,
"content": "<table><tr><td>).</td></tr></table>",
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"html": null,
"content": "<table><tr><td/><td>.</td><td>F.</td><td>Bpref MRR</td></tr><tr><td>TFIDF</td><td colspan=\"3\">0.239 0.295 0.264 0.179 0.576</td></tr><tr><td colspan=\"4\">PageRank 0.242 0.299 0.267 0.184 0.564</td></tr><tr><td>LDA</td><td colspan=\"3\">0.259 0.320 0.286 0.194 0.518</td></tr><tr><td>TPR</td><td>0.282</td><td/></tr></table>",
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Comparing results on NEWS when the number of keyphrases M = 10.",
"html": null,
"content": "<table><tr><td>Method</td><td>Pre.</td><td>Rec.</td><td>F.</td><td>Bpref MRR</td></tr><tr><td>TFIDF</td><td colspan=\"4\">0.333 0.173 0.227 0.255 0.565</td></tr><tr><td colspan=\"5\">PageRank 0.330 0.171 0.225 0.263 0.575</td></tr><tr><td>LDA</td><td colspan=\"4\">0.332 0.172 0.227 0.254 0.548</td></tr><tr><td>TPR</td><td colspan=\"4\">0.354 0.183 0.242 0.274 0.583</td></tr></table>",
"num": null
},
"TABREF3": {
"type_str": "table",
"text": "Comparing results on RESEARCH when the number of keyphrases M = 5.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF4": {
"type_str": "table",
"text": "Extracted keyphrases by TPR.",
"html": null,
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"text": "Extracted keyphrases by baselines.",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}