{
"paper_id": "Q17-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:12:25.228117Z"
},
"title": "Joint Modeling of Topics, Citations, and Topical Authority in Academic Corpora",
"authors": [
{
"first": "Jooyeon",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KAIST",
"location": {}
},
"email": "jooyeon.kim@kaist.ac.kr"
},
{
"first": "Dongwoo",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Australian National University",
"location": {}
},
"email": "dongwoo.kim@anu.edu.au"
},
{
"first": "Alice",
"middle": [],
"last": "Oh",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KAIST",
"location": {}
},
"email": "alice.oh@kaist.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors, but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author's influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI to four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, ranging from latent Dirichlet allocation to more advanced models including the author-link topic model and the dynamic author-cite topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.",
"pdf_parse": {
"paper_id": "Q17-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Much of scientific progress stems from previously published findings, but searching through the vast sea of scientific publications is difficult. We often rely on metrics of scholarly authority to find the prominent authors, but these authority indices do not differentiate authority based on research topics. We present Latent Topical-Authority Indexing (LTAI) for jointly modeling the topics, citations, and topical authority in a corpus of academic papers. Compared to previous models, LTAI differs in two main aspects. First, it explicitly models the generative process of the citations, rather than treating the citations as given. Second, it models each author's influence on citations of a paper based on the topics of the cited papers, as well as the citing papers. We fit LTAI to four academic corpora: CORA, Arxiv Physics, PNAS, and Citeseer. We compare the performance of LTAI against various baselines, ranging from latent Dirichlet allocation to more advanced models including the author-link topic model and the dynamic author-cite topic model. The results show that LTAI achieves improved accuracy over other similar models when predicting words, citations and authors of publications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With a corpus of scientific literature, we can observe the complex and intricate process of scientific progress. We can learn the major topics in journal articles and conference proceedings, follow authors who are prolific and influential, and find papers that are highly cited. [Figure 1: Overview of Latent Topical Authority Indexing (LTAI). Based on content, citation, and authorship information (top), the LTAI discovers the topical authority of authors; it increases when a paper with certain topics gets cited (bottom). Topical authority examples are the results of the LTAI with the CORA dataset and 100 topics.] The huge number of publications and authors, however, makes it practically impossible to attain any deep or detailed understanding beyond the very broad trends. For example, if we want to identify authors who are particularly influential in a specific research field, it is difficult to do so without the aid of automatic analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 311,
"end": 319,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Online publication archives, such as Google Scholar, provide near real-time metrics of scholarly impact, such as the h-index (Hirsch, 2005) , the journal impact factor (Garfield, 2006) , and citation count. Those indices, however, are still at a coarse level of granularity. For example, both Michael Jordan and Richard Sutton are researchers with very high citation counts and h-index, but they are authoritative in different topics, Jordan in the more general machine learning topic of statistical learning, and Sutton in the topic of reinforcement learning. It would be much more helpful to know this via topical authority scores, as shown in Figure 1 .",
"cite_spans": [
{
"start": 126,
"end": 140,
"text": "(Hirsch, 2005)",
"ref_id": "BIBREF14"
},
{
"start": 169,
"end": 185,
"text": "(Garfield, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 647,
"end": 655,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Fortunately, various academic publication archives contain the full contents, references, and meta-data including titles, venues, and authors. With such data, we can build and fit a model to partition researchers' scholarly domains into topics at a much finer grain and discover their academic authority within each topic. To do that, we propose a model named Latent Topical-Authority Indexing (LTAI), based on LDA, to jointly model the topics, authors' topical authority, and citations among the publications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We illustrate the modeling power of the LTAI with four corpora encompassing a diverse set of academic fields: CORA, Arxiv Physics, PNAS, and Citeseer. To show the improvements over other related models, we carry out prediction tasks on words, citations and authorship using the LTAI and compare the results with those of Latent Dirichlet Allocation (Blei et al., 2003) , relational topic model (Chang and Blei, 2010) , author-link topic model, and dynamic author-cite topic model (Kataria et al., 2011) , as well as simple baselines of topical h-index. The results show that the LTAI outperforms these other models for all prediction tasks.",
"cite_spans": [
{
"start": 349,
"end": 368,
"text": "(Blei et al., 2003)",
"ref_id": null
},
{
"start": 394,
"end": 416,
"text": "(Chang and Blei, 2010)",
"ref_id": "BIBREF5"
},
{
"start": 480,
"end": 502,
"text": "(Kataria et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. In section 2, we describe related work, including models that are most similar to the LTAI, and describe how the LTAI fits in and contributes to the field. In section 3, we describe the LTAI model in detail and present the generative process. In section 4, we explain the algorithm for approximate inference, and in section 5, we present a faster algorithm for scalability. In section 6, we describe the experimental setup and in section 7, we present the results to show that the LTAI performs better than other related models for word, citation and authorship prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this section, we review related papers, first in the field of NLP and ML-based analysis of scientific corpora, then the approaches based on the Bayesian topic models for academic corpora, and lastly joint models of topics, authors, and citations. In analyzing scientific corpora, previous research includes classifying scientific publications (Caragea et al., 2015) , recommending yet unlinked citations (Huang et al., 2015; Neiswanger et al., 2014; Jiang, 2015) , summarizing and extracting key phrases (Cohan and Goharian, 2015; Caragea et al., 2014) , triggering a better model fit (He et al., 2015) , incorporating authorship information to increase the content and link predictability (Sim et al., 2015) , estimating a paper's potential influence on academic community (Dong et al., 2015) , and finding and classifying different functionalities of citation practices (Moravcsik and Murugesan, 1975; Teufel et al., 2006; Valenzuela et al., 2015) .",
"cite_spans": [
{
"start": 346,
"end": 368,
"text": "(Caragea et al., 2015)",
"ref_id": "BIBREF4"
},
{
"start": 407,
"end": 427,
"text": "(Huang et al., 2015;",
"ref_id": "BIBREF17"
},
{
"start": 428,
"end": 452,
"text": "Neiswanger et al., 2014;",
"ref_id": "BIBREF26"
},
{
"start": 453,
"end": 465,
"text": "Jiang, 2015)",
"ref_id": "BIBREF18"
},
{
"start": 507,
"end": 533,
"text": "(Cohan and Goharian, 2015;",
"ref_id": "BIBREF6"
},
{
"start": 534,
"end": 555,
"text": "Caragea et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 588,
"end": 605,
"text": "(He et al., 2015)",
"ref_id": "BIBREF13"
},
{
"start": 693,
"end": 711,
"text": "(Sim et al., 2015)",
"ref_id": "BIBREF34"
},
{
"start": 777,
"end": 796,
"text": "(Dong et al., 2015)",
"ref_id": "BIBREF9"
},
{
"start": 875,
"end": 906,
"text": "(Moravcsik and Murugesan, 1975;",
"ref_id": "BIBREF24"
},
{
"start": 907,
"end": 927,
"text": "Teufel et al., 2006;",
"ref_id": "BIBREF36"
},
{
"start": 928,
"end": 952,
"text": "Valenzuela et al., 2015)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several variants of topic modeling consider the relationship between topics and citations in academic corpora. Topic models that use text and citation networks are divided into two types: (a) models that generate text given citation networks (Dietz et al., 2007; Foulds and Smyth, 2013) and (b) models that generate citation networks given text (Nallapati et al., 2008; Liu et al., 2009; Chang and Blei, 2010) . While our model falls into the latter category, we also take into account the influence of the authors on the citation structure.",
"cite_spans": [
{
"start": 242,
"end": 262,
"text": "(Dietz et al., 2007;",
"ref_id": "BIBREF8"
},
{
"start": 263,
"end": 286,
"text": "Foulds and Smyth, 2013)",
"ref_id": "BIBREF10"
},
{
"start": 345,
"end": 369,
"text": "(Nallapati et al., 2008;",
"ref_id": "BIBREF25"
},
{
"start": 370,
"end": 387,
"text": "Liu et al., 2009;",
"ref_id": "BIBREF21"
},
{
"start": 388,
"end": 409,
"text": "Chang and Blei, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Most closely related to the LTAI are the citation author topic model (Tu et al., 2010) , the author-link topic model, and the dynamic author-cite topic model (Kataria et al., 2011) . Similar to the LTAI, they are designed to capture the influence of the authors. However, these models infer authority by referencing only the citing papers' text, whereas the LTAI's authority is based on predictive modeling that compares both the citing and the cited papers. Furthermore, the LTAI defines a generative model of citations and publications by introducing a latent authority index, whereas the previous models assume the citation structure is given. The LTAI thus explicitly gives a topical authority index, which directly answers the question of which author increases the probability of a paper being cited.",
"cite_spans": [
{
"start": 68,
"end": 85,
"text": "(Tu et al., 2010)",
"ref_id": "BIBREF37"
},
{
"start": 156,
"end": 178,
"text": "(Kataria et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The LTAI models the complex relationships among the topics of publications, the topical authority of the authors, and the citations among these publications. The generative process of the LTAI can be divided into two parts: content generation and citation network generation. We make several assumptions in the LTAI to model the citation structures of academic corpora. First, we assume a citation is more likely to occur between two papers that are similar in their topic proportions. Second, we assume that an author differs in their authority (i.e., potential to induce citation) for each topic, and that an author's topical authority positively correlates with the probability of citations among publications. Also, in the LTAI, when there are multiple authors of a single cited publication, their contributions to forming citations with respect to different citing papers vary according to their topical authority. Lastly, we assign different concentration parameters to pairs of papers with and without citations. In this paper, we use positive and negative links to denote pairs of papers with and without citations, respectively. Figure 2 illustrates the graphical model of the LTAI, and we summarize the generative process of the LTAI, where the variables of the model are explained in the remainder of this section, as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 1131,
"end": 1139,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Latent Topical-Authority Indexing",
"sec_num": "3"
},
{
"text": "1. For each topic k, draw topic \u03b2_k \u223c Dir(\u03b1_\u03b2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Latent Topical-Authority Indexing",
"sec_num": "3"
},
{
"text": "(a) Draw topic proportion \u03b8_i \u223c Dir(\u03b1_\u03b8). (b) For each word w_{in}: i. Draw topic assignment z_{in} | \u03b8_i \u223c Mult(\u03b8_i). ii. Draw word w_{in} | z_{in}, \u03b2_{1:K} \u223c Mult(\u03b2_{z_{in}}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document i:",
"sec_num": "2."
},
{
"text": "3. For each author a and topic k: [Figure 2: The LTAI jointly models content-related variables \u03b8, z, w, \u03b2, and author- and citation-related variables \u03b7 and \u03c0.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document i:",
"sec_num": "2."
},
{
"text": "(a) Draw authority index \u03b7_{ak} \u223c N(0, \u03b1_\u03b7^{-1}). 4. For each document pair from i to j: (a) Draw influence proportion parameter \u03c0_{i\u2190j} \u223c Dir(\u03c0_i). (b) Draw author a_i | \u03c0_{i\u2190j} \u223c Mult(\u03c0_{i\u2190j}). (c) Draw link x_{i\u2190j} \u223c N(\u2211_k \u03b7_{a_i k} z\u0304_{ik} z\u0304_{jk}, c_{i\u2190j}^{-1}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "For each document i:",
"sec_num": "2."
},
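The generative process above can be sketched in Python with numpy as a reading aid. All sizes and hyperparameter values here are illustrative assumptions, and the single-author-per-document simplification is ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes -- assumptions for this sketch, not values from the paper.
K, D, A, V, N = 5, 8, 4, 50, 20   # topics, documents, authors, vocabulary, words per doc
alpha_beta, alpha_theta, alpha_eta = 0.1, 0.1, 1.0

# 1. For each topic k, draw beta_k ~ Dir(alpha_beta).
beta = rng.dirichlet(np.full(V, alpha_beta), size=K)
# 2. For each document i: draw theta_i ~ Dir(alpha_theta), then topics and words.
theta = rng.dirichlet(np.full(K, alpha_theta), size=D)
z = np.array([rng.choice(K, size=N, p=theta[i]) for i in range(D)])
w = np.array([[rng.choice(V, p=beta[z[i, n]]) for n in range(N)] for i in range(D)])
# 3. For each author a and topic k, draw authority eta_ak ~ N(0, 1/alpha_eta).
eta = rng.normal(0.0, alpha_eta ** -0.5, size=(A, K))

# Empirical topic proportions zbar_i, used in place of theta_i.
zbar = np.array([np.bincount(z[i], minlength=K) / N for i in range(D)])
authors = rng.integers(0, A, size=D)  # simplifying assumption: one author per document

# 4. Citation score r_{i<-j} = sum_k eta[a_i, k] * zbar[i, k] * zbar[j, k].
r = (eta[authors][:, None, :] * zbar[:, None, :] * zbar[None, :, :]).sum(-1)
```

The link x_{i<-j} would then be drawn from a normal centered at r[i, j] with precision c_{i<-j}, as in step 4(c).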
{
"text": "To model the content of publications, we follow a standard document generative process of Latent Dirichlet Allocation (LDA) (Blei et al., 2003) . We inherit notation for variables from LDA; \u03b8 is the per-document topic distribution, \u03b2 is the per-topic word distribution, z is the topic for each word in a document where w is the corresponding word, and \u03b1_\u03b8, \u03b1_\u03b2 are the Dirichlet parameters of \u03b8 and \u03b2.",
"cite_spans": [
{
"start": 124,
"end": 143,
"text": "(Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content Generation",
"sec_num": "3.1"
},
{
"text": "Let x_{i\u2190j} be a binary-valued variable which indicates that publication j cites publication i. We formulate a continuous variable r_{i\u2190j}, a linear combination of the authority variable and the topic proportion variable, to approximate x_{i\u2190j} by minimizing the sum of squared errors between the two variables. There is a body of research on using continuous user- and item-related variables to approximate binary variables in the field of recommender systems (Rennie and Srebro, 2005; Koren et al., 2009) . Approximating binary variables using a linear combination of continuous variables can be probabilistically generalized (Salakhutdinov and Mnih, 2007) . Using probabilistic matrix factorization, we approximate the probability mass function p(x_{i\u2190j}) using the probability density function N(x_{i\u2190j} | r_{i\u2190j}, c_{i\u2190j}^{-1}), where the precision parameter c_{i\u2190j} can be set differently for each pair of papers, as discussed below.",
"cite_spans": [
{
"start": 461,
"end": 486,
"text": "(Rennie and Srebro, 2005;",
"ref_id": "BIBREF30"
},
{
"start": 487,
"end": 506,
"text": "Koren et al., 2009)",
"ref_id": "BIBREF20"
},
{
"start": 628,
"end": 658,
"text": "(Salakhutdinov and Mnih, 2007)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "Content Similarity Between Publications: In the LTAI, we model relationships between a random pair of documents i and j. The probability of publication j citing publication i is proportional to the similarity of the topic proportions of the two publications, i.e., r_{i\u2190j} positively correlates with \u2211_k \u03b8_{ik} \u03b8_{jk}. Following the relational topic model's approach (Chang and Blei, 2010) , we",
"cite_spans": [
{
"start": 350,
"end": 372,
"text": "(Chang and Blei, 2010)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "use z\u0304_i = (1/N_i) \u2211_n z_{i,n} \u2248 \u03b8_i instead of the topic proportion parameter \u03b8_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "Topical Authority of Cited Paper: We introduce a K-dimensional vector \u03b7_a representing the topical authority index of author a. \u03b7_{ak} is a real number drawn from the zero-mean normal distribution with variance \u03b1_\u03b7^{-1}. Given the authority indices \u03b7_{a_i} for author a of cited publication i, the probability of a citation is further modeled as r_{i\u2190j} = \u2211_k \u03b7_{a_i k} z\u0304_{ik} z\u0304_{jk}, where the authority indices can promote or demote the probability of a citation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "Different Degrees of Contribution among Multiple Authors: Academic publications are often written by more than one author. Thus, we need to distinguish the influence of each author on a citation between two publications. Let A_i be the set of authors of publication i. To measure the influence proportion of author a \u2208 A_i on the citation from j to i, we introduce an additional parameter \u03c0_{i\u2190j}, a one-hot vector drawn from a Dirichlet distribution with |A_i|-dimensional parameter \u03c0_i. \u03c0_{i\u2190j,a} \u2208 {0, 1} is an element of \u03c0_{i\u2190j} which measures the influence of author a on the citation from j to i and sums to one (\u2211_{a\u2208A_i} \u03c0_{i\u2190j,a} = 1) over all authors of publication i. We approximate the probability of citation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "x_{i\u2190j} from publication j to publication i by p(x_{i\u2190j} | z, \u03c0_{i\u2190j}, a_{i\u2190j}, \u03b7_a) \u2248 \u2211_{a\u2208A_i} \u03c0_{i\u2190j,a} N(x_{i\u2190j} | \u2211_k \u03b7_{ak} z\u0304_{ik} z\u0304_{jk}, c_{i\u2190j}^{-1})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "which is a mixture of normal distributions with precision parameter c_{i\u2190j}. Therefore, if the topic distributions of papers i and j are similar and the \u03b7 values of the cited paper's authors are high, the citation formation probability increases. On the other hand, a dissimilar or topically irrelevant pair of papers whose cited paper has less authoritative authors will be assigned a low probability of citation formation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
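The mixture-of-normals approximation of the citation probability can be sketched as follows. The function names, the toy precision values, and the dictionary layout are illustrative assumptions:

```python
import math

def normal_pdf(x, mean, precision):
    """Density of N(mean, 1/precision) evaluated at x."""
    return math.sqrt(precision / (2.0 * math.pi)) * math.exp(-0.5 * precision * (x - mean) ** 2)

def citation_prob(x, zbar_i, zbar_j, eta_by_author, pi):
    """Mixture-of-normals approximation of p(x_{i<-j}): each author a of the
    cited paper i contributes pi[a] * N(x | sum_k eta[a][k]*zbar_i[k]*zbar_j[k], 1/c)."""
    c = 1.0 if x == 1 else 0.01   # c+ > c-: more confidence in observed citations
    total = 0.0
    for a, eta_a in eta_by_author.items():
        mean = sum(e * zi * zj for e, zi, zj in zip(eta_a, zbar_i, zbar_j))
        total += pi[a] * normal_pdf(x, mean, c)
    return total

# Similar topic mixes plus high authority on the shared topic raise the density at x = 1.
same_topics = citation_prob(1, [0.9, 0.1], [0.9, 0.1], {"a1": [2.0, 0.0]}, {"a1": 1.0})
diff_topics = citation_prob(1, [0.9, 0.1], [0.1, 0.9], {"a1": [2.0, 0.0]}, {"a1": 1.0})
```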
{
"text": "Different Treatment of Positive and Negative Links: Citation is a binary problem where x_{i\u2190j} is either one or zero. When x_{i\u2190j} is zero, this can be interpreted in two ways: 1) the authors of citing publication j are unaware of publication i, or 2) publication j is not relevant to publication i. Identifying which case is true is impossible unless we are the authors of the publication. Therefore the model embraces this uncertainty in the absence of a link between publications. We control the ambiguity with the Gaussian distribution with precision parameter c_{i\u2190j} as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "c_{i\u2190j} = c\u207a if x_{i\u2190j} = 1; c\u207b if x_{i\u2190j} = 0 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "where c\u207a > c\u207b to ensure that we have more confidence in the observed citations. This is an implicit feedback approach that permits using negative examples (x_{i\u2190j} = 0) of sparse observations by mitigating their importance (Hu et al., 2008; Wang and Blei, 2011; Purushotham et al., 2012) . Setting different values of the precision parameter c_{i\u2190j} according to x_{i\u2190j} induces cyclic dependencies between the two variables. Due to this cycle, the model is no longer a Bayesian network, or a directed acyclic graph. However, we note that this setting leads to better experimental results, and we show the pragmatic benefit of the setting in the Evaluation section.",
"cite_spans": [
{
"start": 223,
"end": 240,
"text": "(Hu et al., 2008;",
"ref_id": "BIBREF16"
},
{
"start": 241,
"end": 261,
"text": "Wang and Blei, 2011;",
"ref_id": "BIBREF40"
},
{
"start": 262,
"end": 287,
"text": "Purushotham et al., 2012)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Generation",
"sec_num": "3.2"
},
{
"text": "In the LTAI, the topics and the link structures are simultaneously learned, and thus the content-related variables and the citation-related variables mutually reshape one another during the posterior inference.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Modeling of the LTAI",
"sec_num": "3.3"
},
{
"text": "On the other hand, if content and citation data are modeled separately, the topics would not reflect any information about the document citation structures. Thus, in the LTAI, documents with shared links are more likely to have similar topic distributions which leads to a better model fit. We develop and explain this joint inference in section 4. In section 7, we illustrate the differences in word-level predictive powers of the LTAI and LDA.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Joint Modeling of the LTAI",
"sec_num": "3.3"
},
{
"text": "Since computing the posterior distribution of the LTAI is intractable, we use variational inference to optimize variational parameters, each of which corresponds to an original content-related variable. Following the standard mean-field variational approach, we define fully factorized variational distributions over the topic-related latent variables q(\u03b8, \u03b2, z) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "\u220f_i q(\u03b8_i | \u03b3_i) \u220f_{n=1}^{N_i} q(z_{in} | \u03c6_{in}) \u220f_k q(\u03b2_k | \u03bb_k)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "where for each factorized variational distribution, we place the same family of distributions as the original distribution. Using the variational distributions, we bound the log-likelihood of the model as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "L[q] = E_q[\u2211_k log p(\u03b2_k | \u03b1_\u03b2) + \u2211_i log p(\u03b8_i | \u03b1_\u03b8) + \u2211_i \u2211_{n=1}^{N_i} (log p(z_{in} | \u03b8_i) + log p(w_{in} | \u03b2_{z_{in}})) + \u2211_{i,j} log p(x_{i\u2190j} | z_i, z_j, \u03c0_i)] \u2212 H[q] (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "where H[q] is the negative entropy of q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "Taking the derivatives of this lower bound with respect to each variational parameter, we can obtain the coordinate ascent updates. The update for the variational Dirichlet parameters \u03b3_i and \u03bb_k is the same as the standard variational update for LDA (Blei et al., 2003) . The update for the variational multinomial \u03c6_{in} is:",
"cite_spans": [
{
"start": 255,
"end": 274,
"text": "(Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "\u03c6_{ink} \u221d exp{\u2211_j \u2202E_q[log p(x_{i\u2190j} | z\u0304_i, z\u0304_j, \u03c0_i, \u03b7)] / \u2202\u03c6_{ink} + \u2211_j \u2202E_q[log p(x_{j\u2190i} | z\u0304_j, z\u0304_i, \u03c0_j, \u03b7)] / \u2202\u03c6_{ink} + E_q[log \u03b8_{ik}] + E_q[log \u03b2_{k, w_{in}}]} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "where the gradients of the expected log probabilities of both the incoming link x_{i\u2190j} and the outgoing link x_{j\u2190i} contribute to the variational parameter. The first expectation can be rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "E_q[log p(x_{i\u2190j} | z\u0304_i, z\u0304_j, \u03c0_i, \u03b7)] = E_q[log \u2211_{a\u2208A_i} p(a_{i\u2190j} = a | \u03c0_i) p(x_{i\u2190j} | z\u0304_i, z\u0304_j, \u03b7_a)] \u2265 \u2211_{a\u2208A_i} p(a_{i\u2190j} = a | \u03c0_i) E_q[log p(x_{i\u2190j} | z\u0304_i, z\u0304_j, \u03b7_a)] (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "Algorithm 1: Posterior inference algorithm for the LTAI. Initialize \u03b3, \u03bb, \u03c0, and \u03b7 randomly. Set learning-rate parameter \u03c1_t that satisfies the Robbins-Monro condition. Set subsample sizes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "S_V, S_E, S_S, and S_A. repeat",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "Variational update: local publication parameters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "S_S \u2190 S_S randomly sampled publications; for i in S_S do: for n = 1 to N_i do: S_\u2190, S_\u2192 \u2190 sets of S_V random samples; update \u03c6_{ink} using Equations 4, 5, 9; end for; \u03b3_i \u2190 \u03b1_\u03b8 + \u2211_{n=1}^{N_i} \u03c6_{in}; end for. EM update: local author parameters. S_A \u2190 S_A randomly sampled authors; for a in S_A do: S_E \u2190 S_E random publication pairs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "update \u03b7_a using Equations 7, 10; for i in D_a and j = 1 to D do:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "\u03c0_{i\u2190j,a} \u221d \u03c0_{ia} N(z\u0304_i \u03b7_a z\u0304_j, c_{i\u2190j}^{-1}); end for; end for. Stochastic variational update: for k = 1 to K do: \u03bb\u0302_k \u2190 \u03b1_\u03b2 + (D / S_S) \u2211_{d=1}^{S_S} \u2211_{n=1}^{N_d} \u03c6_{dn}^{k} w_{dn}; end for. Set \u03bb^{(t)} \u2190 (1 \u2212 \u03c1_t) \u03bb^{(t\u22121)} + \u03c1_t \u03bb\u0302. until convergence criteria are satisfied",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
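The stochastic interpolation of the global parameter \u03bb at the end of Algorithm 1 can be sketched as follows. The decay constants tau and kappa are illustrative assumptions; any schedule satisfying the Robbins-Monro conditions works:

```python
def robbins_monro_rate(t, tau=1.0, kappa=0.7):
    """rho_t = (t + tau)^(-kappa); kappa in (0.5, 1] satisfies the
    Robbins-Monro conditions: sum(rho_t) diverges, sum(rho_t^2) converges."""
    return (t + tau) ** (-kappa)

def stochastic_lambda_update(lam, lam_hat, t):
    """lambda^(t) = (1 - rho_t) * lambda^(t-1) + rho_t * lambda_hat,
    where lambda_hat is the noisy estimate from the current subsample."""
    rho = robbins_monro_rate(t)
    return (1.0 - rho) * lam + rho * lam_hat
```

Early iterations overwrite \u03bb almost entirely with the noisy estimate; later iterations make increasingly conservative moves.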
{
"text": "where A_i is the set of authors of i. We take the lower bound of the expectation using Jensen's inequality. The last term is approximated by the first-order",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "Taylor expansion E_q[log p(x_{i\u2190j} | z\u0304_i, z\u0304_j, \u03b7_a)] = N(x_{i\u2190j} | \u03c6\u0304_i diag(\u03b7_a) \u03c6\u0304_j, c_{i\u2190j}^{-1}).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "Finally, the approximated gradient of \u03c6_{ink} with respect to the incoming directions to document i is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "\u2211_j \u2202E_q[log p(x_{i\u2190j} | z\u0304_i, z\u0304_j, \u03c0_i, \u03b7)] / \u2202\u03c6_{ink} \u2248 \u2211_j (\u03c6\u0304_{jk} c_{i\u2190j} / N_i) \u2211_{a\u2208A_i} \u03b7_{ak} (x_{i\u2190j} \u2212 \u03c6\u0304_i diag(\u03b7_a) \u03c6\u0304_j) p(a | \u03c0_i) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "where diag is a diagonalization operator and \u03c6\u0304_i is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "\u2211_{n=1}^{N_i} \u03c6_{in} / N_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "We can compute the gradient with respect to the outgoing directions in the same way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Content Parameters: Variational Update",
"sec_num": "4.1"
},
{
"text": "We use the EM algorithm to update the author-related parameters \u03c0 and \u03b7 based on the lower bound computed by variational inference. In the E step, we compute the probability of each author's contribution to the link between documents i and j.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c0_{i\u2190j,a} = \u03c0_{ia} N(z\u0304_i \u03b7_a z\u0304_j, c_{i\u2190j}^{-1}) / \u2211_{a'\u2208A_i} \u03c0_{ia'} N(z\u0304_i \u03b7_{a'} z\u0304_j, c_{i\u2190j}^{-1})",
"eq_num": "(6)"
}
],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
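The E-step responsibility of Equation (6) can be sketched directly: each author of the cited paper gets a normalized weight proportional to their prior proportion times the Gaussian link likelihood under their authority vector. The function name and dictionary layout are illustrative assumptions:

```python
import math

def e_step_pi(x, zbar_i, zbar_j, eta_by_author, pi_i, c=1.0):
    """Equation (6): responsibility of each author a of cited paper i for the
    link x_{i<-j}, proportional to pi_i[a] * N(x | sum_k eta_a[k]*zbar_i[k]*zbar_j[k], 1/c)."""
    unnorm = {}
    for a, eta_a in eta_by_author.items():
        mean = sum(e * zi * zj for e, zi, zj in zip(eta_a, zbar_i, zbar_j))
        unnorm[a] = pi_i[a] * math.sqrt(c / (2.0 * math.pi)) * math.exp(-0.5 * c * (x - mean) ** 2)
    total = sum(unnorm.values())
    return {a: v / total for a, v in unnorm.items()}

# The author whose authority lies on the topics shared by both papers
# absorbs most of the responsibility for the observed citation.
resp = e_step_pi(1, [0.9, 0.1], [0.9, 0.1],
                 {"a1": [2.0, 0.0], "a2": [0.0, 2.0]},
                 {"a1": 0.5, "a2": 0.5})
```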
{
"text": "In the M step, we optimize the authority parameter \u03b7 for each author. Given the other estimated parameters, taking the gradient of L with respect to \u03b7 a and setting it to zero leads to the following update equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b7_a = (\u03a8_a^\u22a4 C_a \u03a8_a + \u03b1_\u03b7 I)^{-1} \u03a8_a^\u22a4 C_a X_a",
"eq_num": "(7)"
}
],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
{
"text": "Let D_a be the set of documents written by author a and D_a(i) be the ith document written by a. Then \u03a8_a is a vertical stack of |D_a| matrices \u03a8_{D_a(i)}, whose jth row is \u03c6\u0304_{D_a(i)} \u2022 \u03c6\u0304_j, the Hadamard product between \u03c6\u0304",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
{
"text": "_{D_a(i)} and \u03c6\u0304_j. Similarly, C_a is a vertical stack of |D_a| matrices C_{D_a(i)} whose jth diagonal element is c_{D_a(i)\u2190j}, and X_a is a vertical stack of |D_a| vectors X_{D_a(i)} whose jth element is \u03c0_{D_a(i)\u2190j,a} \u00d7 x_{D_a(i)\u2190j}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
{
"text": "Finally, we update \u03c0 Da(i)a = j \u03c0 Da(i)\u2190ja /D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Parameters: EM Step",
"sec_num": "4.2"
},
{
"text": "To model topical authority, the LTAI considers the linkage information. If two papers are linked by citation, the topical authority of the cited paper's authors will increase while the negative link buffers the potential noise of irrelevant topics. This algorithmic design of the LTAI results in high model complexity. To remedy this issue, we adopt the noisy gradient method from the stochastic approximation algorithm (Robbins and Monro, 1951) to subsample negative links for updating per-document topic variational parameter \u03c6 and authority parameter \u03b7. The prior work of using subsampled negative links to reduce computational complexity is introduced in (Raftery et al., 2012) . We elucidate how stochastic variational inference (Hoffman et al., 2013) is applied in our model to update global per-topic-word variational parameter \u03bb.",
"cite_spans": [
{
"start": 420,
"end": 445,
"text": "(Robbins and Monro, 1951)",
"ref_id": "BIBREF31"
},
{
"start": 659,
"end": 681,
"text": "(Raftery et al., 2012)",
"ref_id": "BIBREF29"
},
{
"start": 734,
"end": 756,
"text": "(Hoffman et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Faster Inference Using Stochastic Optimization",
"sec_num": "5"
},
{
"text": "Updating\u03c6 i for document i in variational update requires iterating over every other document and computing the gradient of link probability. This leads to the time complexity O(DK) for every\u03c6 i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "To apply the noisy gradient method, we divide the gradient of the expected log probability of link into two parts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "j \u2202E q [log p(x i\u2190j |z i ,z j , \u03c0 i )] \u2202\u03c6 ink = (8) j:xi\u2190j =1 \u2202E q [log p(x i\u2190j )] \u2202\u03c6 ink + j:xi\u2190j =0 \u2202E q [log p(x i\u2190j )] \u2202\u03c6 ink",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "where the first and the second terms of the righthand side are the gradient sums of positive links (x i\u2190j = 1) and negative links (x i\u2190j = 0) respectively. Compared to positive links, the order of negative links is close to the total number of documents, and thus computing the second term results in computational inefficiency. However, in our model, we reduced the importance of the negative links by assigning a larger variance c \u22121 i\u2190j compared to the positive links, and the empirical mean of\u03c6 j for negative links follows the Dirichlet expectation due to the large number of negative links. Therefore, we approximate the expectation of the gradient for the negative links using the noisy gradient as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "j:xi\u2190j =0 \u2202E q [log p(x i\u2190j )] \u2202\u03c6 ink = D \u2212 i S V S V \u2202E q [log p(x i\u2190s )] \u2202\u03c6 ink",
"eq_num": "(9)"
}
],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "where D \u2212 i is the number of negative links (i.e. x i\u2190j = 0) of document i, and S V is the size of subsamples S V for the variational update. We randomly sample S V documents, compute gradients on the sampled documents, and then scale the average gradient to the size of the negative link D \u2212 i . This noisy gradient method reduces the updating time complexity from O(DK) to O(S V K). Now, we discuss how to approximate author's topical authority based on Equation 7. When K D \u00d7 D a , the computational bottleneck is \u03a8 a C a \u03a8 a which has time complexity O(DD a K 2 ). To alleviate this complexity, we once again approximate the large number of negative links using smaller number of subsamples. Specifically, while keeping the positive link rows \u03a8 a + intact, we approximate negative link rows in \u03a8 a using a smaller matrix \u03a8 a \u2212 that Figure 3 : Training time of the LTAI on CORA dataset with stochastic and batch variational inference. Using stochastic variational inference, the perword predictive log likelihood converges faster than using the batch variational inference. has S E rows, or the size of subsamples for the EM step. Using this approximation, we can represent",
"cite_spans": [],
"ref_spans": [
{
"start": 836,
"end": 844,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "\u03a8 a C a \u03a8 a as \u03a8 a C a \u03a8 a = c + \u03a8 a + \u03a8 a + + c \u2212 D \u2212 a S E \u03a8 a \u2212 \u03a8 a \u2212 (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
{
"text": "with the time complexity of O(S E K 2 ), where D \u2212 a is the number of rows with negative links in \u03a8 a . Although we do not incorporate rigorous analysis on the performance of our model given the size of the subsamples, we confirm that the negative link size greater than 100 does not degrade the model performance in any of our experiment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03c6 and \u03b7",
"sec_num": "5.1"
},
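The key property behind Equation (9) is that the subsampled, rescaled gradient is an unbiased estimator of the full sum over negative links. A self-contained sketch (our illustration; `neg_grads` stands in for the per-link gradient contributions, which in the model would come from the Gaussian link likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in per-link gradient contributions for the negative links of document i.
neg_grads = rng.normal(loc=1.0, scale=1.0, size=10_000)
full_sum = neg_grads.sum()                  # exact gradient sum: O(D) per document

def noisy_negative_gradient(neg_grads, S_V, rng):
    """Eq. (9): subsample S_V negative links and rescale by D_i^- / S_V."""
    idx = rng.choice(len(neg_grads), size=S_V, replace=False)
    return len(neg_grads) / S_V * neg_grads[idx].sum()

# The estimator is unbiased, so averaging repeated draws recovers the full sum
# while each draw costs only O(S_V) instead of O(D).
estimates = [noisy_negative_gradient(neg_grads, S_V=500, rng=rng) for _ in range(200)]
```

Averaging a few hundred draws lands close to the exact sum, which is the unbiasedness the stochastic approximation argument relies on.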
{
"text": "In traditional coordinate ascent based variational inference, the global variational parameter \u03bb is updated infrequently because all the other local parameters \u03c6 need to be updated beforehand. This problem is more noticeable in the LTAI since updating \u03c6 using equation 3 is slower than updating \u03c6 in vanilla LDA; moreover, per-author topical authority variable \u03b7 is another local variable that algorithm needs to update a priori. However, using the stochastic variational inference, the global parameters are updated after a small portion of local parameters are updated (Hoffman et al., 2013) . Applying stochastic variational inference for the LTAI is straightforward after we calculate the intermediate topic-word variational parameter\u03bb by",
"cite_spans": [
{
"start": 571,
"end": 593,
"text": "(Hoffman et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Updating \u03bb",
"sec_num": "5.2"
},
{
"text": "\u03bb\u0302_k = \u03b1_\u03b2 + (D / S_S) \u03a3_{d=1}^{S_S} \u03a3_{n=1}^{N_d} \u03c6^k_{dn} w_{dn}, the noisy estimate of the natural gradient with respect to the subsampled local parameters, where N_d is the number of words in document d and S_S is the subsample size for the minibatch stochastic variational inference. The final global parameter for the t-th iteration, \u03bb^{(t)}, is updated as \u03bb^{(t)} = (1 - \u03c1_t) \u03bb^{(t-1)} + \u03c1_t \u03bb\u0302, where \u03c1_t is the learning rate. Posterior inference is guaranteed to converge to a local optimum when the learning rate satisfies the conditions \u03a3_{t=1}^{\u221e} \u03c1_t = \u221e and \u03a3_{t=1}^{\u221e} \u03c1_t^2 < \u221e (Robbins and Monro, 1951) . In Figure 3 , we confirm that stochastic variational inference is applicable to the LTAI and reduces the training time compared to its batch counterpart, while maintaining similar performance.",
"cite_spans": [
{
"start": 507,
"end": 532,
"text": "(Robbins and Monro, 1951)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 538,
"end": 546,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Updating \u03bb",
"sec_num": "5.2"
},
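The stochastic update of λ can be sketched in a few lines: blend the previous global parameter with the minibatch estimate using a Robbins–Monro step size such as ρ_t = (t + τ)^(−κ) with 0.5 < κ ≤ 1. This is our own toy illustration (the specific τ and κ values are assumptions, not reported settings):

```python
import numpy as np

def learning_rate(t, tau=1.0, kappa=0.7):
    """Robbins-Monro step sizes: sum of rho_t diverges and sum of rho_t^2
    converges whenever 0.5 < kappa <= 1, guaranteeing convergence."""
    return (t + tau) ** (-kappa)

def svi_lambda_update(lam, lam_hat, t):
    """lambda^(t) = (1 - rho_t) * lambda^(t-1) + rho_t * lambda-hat."""
    rho = learning_rate(t)
    return (1.0 - rho) * lam + rho * lam_hat

lam = np.ones(5)                  # current topic-word parameter (toy values)
lam_hat = np.full(5, 3.0)         # intermediate estimate from a minibatch
for t in range(1, 2000):
    lam = svi_lambda_update(lam, lam_hat, t)
# With a fixed lambda-hat, the iterates converge toward it.
```

In practice λ̂ changes every minibatch; holding it fixed here just makes the contraction of the update visible.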
{
"text": "In this section, we introduce the four academic corpora used to fit the LTAI, describe comparison models, and provide information about the evaluation metric and parameter settings for the LTAI 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "6"
},
{
"text": "We experiment with four academic corpora: CORA (McCallum et al., 2000) , Arxiv-Physics (Gehrke et al., 2003) , the Proceedings of the National Academy of Sciences (PNAS), and Citeseer (Lu and Getoor, 2003) . CORA, Arxiv-Physics, and PNAS datasets contain abstracts only, and the locations of the citations within each paper are not preserved, whereas the Citeseer dataset contains the citation locations. For CORA, Arxiv-Physics, and PNAS, we lemmatize words, remove stop words, and discard words that occur fewer than four times in the corpus. Table 1 describes the datasets in detail. Note that we obtain citation data from the entire document, not only from the abstract. Also, we consider withincorpus citation only, which leads to less than 13 average citation counts per document for all corpora. Figure 4 : Word-level prediction result. We measured per-word log predictive probability on four datasets. As shown in the graphs, our model performs better than LDA.",
"cite_spans": [
{
"start": 47,
"end": 70,
"text": "(McCallum et al., 2000)",
"ref_id": "BIBREF23"
},
{
"start": 87,
"end": 108,
"text": "(Gehrke et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 184,
"end": 205,
"text": "(Lu and Getoor, 2003)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 803,
"end": 811,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Datasets",
"sec_num": "6.1"
},
{
"text": "We compare predictive performance of the LTAI with five other models. Different comparison models have different degrees of expressive powers. Each model conducts a certain type of prediction task; while RTM, ALTM, and DACTM predicts citation structures, the topical h-index predicts authorship information. The baseline topic models are implemented based on the inference methods suggested in the corresponding papers; LDA, RTM and the LTAI variants use variational inference, while ALTM and DACTM use collapsed Gibbs sampling. Finally, all the conditions for implementation such as the choice of programming language and modules, except for parts that convey each model's unique assumption, are identically set; thus, the performance differences between models are due to their model assumption and different degrees of data usage, rather than the implementation technicalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "Latent Dirichlet Allocation: LDA (Blei et al., 2003) discovers topics and represents each publication by a mixture of the topics. Compared to other models, LDA only uses the content information.",
"cite_spans": [
{
"start": 33,
"end": 52,
"text": "(Blei et al., 2003)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "LTAI-n%: In LTAI-n%, we remove n% of actual citations and displace them with arbitrarily selected false connections. Note that the link structures are displaced rather than removed. If the citation links are just removed, the LTAI and LTAI-n% cannot be fairly compared as the density of the citation structures will be affected and each model needs different concentration values. Performance difference between the LTAI and this indicates that under identical conditions, using the correct linkage information is indeed beneficial for prediction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "LTAI-C: In LTAI-C the precision parameter c i\u2190j has constant value, rather than assigning different values according to x i\u2190j as discussed in section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "LTAI-SEP: LTAI-SEP has an identical structure as the LTAI, but the topic and the authority variables are separately learned. Once the topic variables are learned using the vanilla LDA, authority and citation variables are then inferred consecutively. Thus, the performance edge of the LTAI over LTAI-SEP highlights the necessity of the LTAI's joint modeling in which both topic and authority related variables reshape one another in an iterative fashion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "Relational Topic Model: RTM (Chang and Blei, 2010) jointly models content and citation, and thus, topic proportions of a pair of publications become similar if the pair is connected by citations. Compared to the LTAI, the author information is not considered, the link structure does not have directionality and the model does not consider negative links.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "Author-Link Topic Model: ALTM (Kataria et al., 2011 ) is a variation of author topic model (ATM) (Rosen-Zvi et al., 2004) that models both topical interests and influence of authors in scientific corpora. The model uses content information of citing papers and names of the cited authors as word tokens. ALTM outputs per-topic author distribution that functions as author influence indices.",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "(Kataria et al., 2011",
"ref_id": "BIBREF19"
},
{
"start": 97,
"end": 121,
"text": "(Rosen-Zvi et al., 2004)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "Dynamic Author-Citation Topic Model: DACTM (Kataria et al., 2011) is an extension of ALTM that requires publication corpora which preserves sentence structures. To model author influence, DACTM selectively uses words that are close to the point where the citation is presented. Figure 5 : Citation prediction results. The task is to find out which paper is originally linked to a cited paper. We measure Mean Reciprocal Rank (MRR) to evaluate model performance. For all cases, the LTAI performs better than the other methods.",
"cite_spans": [
{
"start": 43,
"end": 65,
"text": "(Kataria et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 278,
"end": 286,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
{
"text": "In our corpora, only Citeseer dataset preserves the sentence structure. Topical h-index: To compute topical h-index, we separate the papers into several clusters using LDA and calculate the h-index within each cluster. Topical h-index is used for author prediction in the same manner as we did for our model, except the topic proportions are replaced to the LDA's result and \u03b7 is replaced to the topical h-index values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Models",
"sec_num": "6.2"
},
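The topical h-index baseline described above can be sketched directly: assign each paper its dominant LDA topic, then compute the standard h-index within each (author, topic) cluster. A minimal sketch under those assumptions (our construction; the input triples are hypothetical):

```python
from collections import defaultdict

def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
    return h

def topical_h_index(papers):
    """papers: list of (author, dominant_topic, citation_count) triples.
    Returns {(author, topic): h-index within that topic cluster}."""
    by_author_topic = defaultdict(list)
    for author, topic, cites in papers:
        by_author_topic[(author, topic)].append(cites)
    return {key: h_index(cs) for key, cs in by_author_topic.items()}

# Toy example: one author with three topic-0 papers and one topic-1 paper.
papers = [("a1", 0, 10), ("a1", 0, 3), ("a1", 0, 1), ("a1", 1, 5)]
scores = topical_h_index(papers)
```

Splitting by topic first is what makes the index "topical": the same author gets a separate score per cluster.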
{
"text": "We use MRR (Voorhees, 1999) to measure the predictive performance of the LTAI and the comparison models. MRR is a widely used metric for evaluating link prediction tasks (Balog and de Rijke, 2007; Diehl et al., 2007; Radlinski et al., 2008; Huang et al., 2015) . When the models output the correct an-swers as ranks, MRR is the inverse of the harmonic mean of such ranks.",
"cite_spans": [
{
"start": 11,
"end": 27,
"text": "(Voorhees, 1999)",
"ref_id": "BIBREF39"
},
{
"start": 170,
"end": 196,
"text": "(Balog and de Rijke, 2007;",
"ref_id": "BIBREF1"
},
{
"start": 197,
"end": 216,
"text": "Diehl et al., 2007;",
"ref_id": "BIBREF7"
},
{
"start": 217,
"end": 240,
"text": "Radlinski et al., 2008;",
"ref_id": "BIBREF28"
},
{
"start": 241,
"end": 260,
"text": "Huang et al., 2015)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric and Parameter Settings",
"sec_num": "6.3"
},
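The MRR computation is small enough to state exactly: average the reciprocal of the rank at which each query's correct answer appears. A short sketch (our illustration):

```python
def mean_reciprocal_rank(ranks):
    """MRR: the mean of the reciprocal ranks of the correct answers,
    i.e. the inverse of the harmonic mean of the ranks."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Correct answer ranked 1st, 2nd, and 4th in three queries.
mrr = mean_reciprocal_rank([1, 2, 4])   # (1 + 1/2 + 1/4) / 3
```

Because each query contributes 1/rank, MRR rewards placing the correct item near the top far more than small improvements deep in the ranking.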
{
"text": "We report the parameter values used for evaluations. For all datasets, we set c \u2212 to 1. To predict a citation, we set c + to 10,000, 100, 1,000, 10, and to predict authorship, we set c + to 1,000, 1,000, 10,000, 1,000 for CORA, Arxiv-Physics, PNAS, and Citeseer datasets. These values are obtained through exhaustive parameter analysis. We set \u03b1 \u03b8 to 1, and \u03b1 \u03b2 to 0.1. We fix the subsample sizes to 500 2 . For fair comparison, all the parameters that the LTAI and the baseline models share are set to have the same values, and for other parameters that uniquely belong to the baseline models, the values are exhaustively tuned as done in the LTAI. Finally, we note that all parameters are tuned using the training set, and test dataset is used only for the testing purpose.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric and Parameter Settings",
"sec_num": "6.3"
},
{
"text": "We conduct the evaluation of the LTAI with three different quantitative tasks, along with one qualitative analysis. In the first task, we check whether using citation and authorship information in the LTAI helps increase the word-level predictive performance. In the second and third tasks, we measure the predictability of the LTAI regarding missing publication-publication linkage and authorpublication linkage. With these two tasks, we compare the predictive power of the LTAI with other comparison models and use MRR as evaluation metric. Finally, we observe famous researchers' topical authority scores generated by the LTAI and investigate how these scores capture notable academic characteristics of the researchers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "7"
},
{
"text": "In the LTAI, citation and authorship information affect per-document topic proportions, as can be confirmed in equation 3. This joint modeling of content and linkage structure, compared to vanilla LDA that uses content data only, yields better performance in terms of predicting missing words in documents. In this task, we use log-predictive probability, a metric that is widely used in other researches for measuring model fitness (Teh et al., 2006; Asuncion et Figure 6 : Author prediction results. The task is to find out who the author of a cited paper is, given all the citing papers. For all cases, the LTAI performs better than the other methods. Hoffman et al., 2013) . For each corpus, we separate one third of the documents as the test set. For all documents in each test set, we use half of the words for training per-document topic proportion \u03b8 and predict the probability of word occurrence regarding the remaining half. Specifically, the predictive probability for a word in a test set w new with respect to the given words w obs and the training document D train is computed using the equation Figure 4 illustrates the per-word log-predictive probability in each corpus. We confirm that when using the LTAI, the log predictive probability converges at a higher value compared to the result using LDA. Also, when we corrupt the link structure from 10% to 30% the predictive performances of the LTAI gradually decrease. Thus, the LTAI's superior predictive performance is attributed to its usage of correct citations rather than the algorithmic bias.",
"cite_spans": [
{
"start": 433,
"end": 451,
"text": "(Teh et al., 2006;",
"ref_id": "BIBREF35"
},
{
"start": 452,
"end": 463,
"text": "Asuncion et",
"ref_id": "BIBREF0"
},
{
"start": 655,
"end": 676,
"text": "Hoffman et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 464,
"end": 472,
"text": "Figure 6",
"ref_id": null
},
{
"start": 1110,
"end": 1118,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word-level Prediction",
"sec_num": "7.1"
},
{
"text": "p(w new |D train , w obs ) = K k=1 E q [\u03b8 k ]E q [\u03b2 k,wnew ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-level Prediction",
"sec_num": "7.1"
},
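The per-word predictive probability above is just a mixture over topics: the inferred document-topic expectations weight the topic-word expectations. A toy sketch with two topics and two held-out words (our illustration; the values are made up):

```python
import numpy as np

def word_log_predictive(E_theta, E_beta, heldout_word_ids):
    """Per-word log predictive probability:
    p(w_new | D_train, w_obs) = sum_k E_q[theta_k] * E_q[beta_{k, w_new}]."""
    probs = E_theta @ E_beta[:, heldout_word_ids]   # mixture over K topics
    return np.log(probs).mean()

E_theta = np.array([0.7, 0.3])          # inferred from the observed half
E_beta = np.array([[0.5, 0.5],          # topic 0 word distribution
                   [0.1, 0.9]])         # topic 1 word distribution
score = word_log_predictive(E_theta, E_beta, [0, 1])
# p(word 0) = 0.7*0.5 + 0.3*0.1 = 0.38 ; p(word 1) = 0.7*0.5 + 0.3*0.9 = 0.62
```

Averaging the log over held-out words gives the per-word log-predictive score reported in Figure 4.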
{
"text": "We evaluate model predictability regarding which publication is originally citing a certain publication. Specifically, we randomly remove one citation from each of the documents in the test set. To predict the citation link between publications, we first compute the probability that publication j cites",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Prediction",
"sec_num": "7.2"
},
{
"text": "i from p(x i\u2190j |z, A i , \u03c0 i ) \u221d a\u2208A i \u03c0 i\u2190ja N (x i\u2190j |z i diag(\u03b7 a )z j , c \u22121 + )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Prediction",
"sec_num": "7.2"
},
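The citation-prediction score above ranks candidate citing papers by a mixture of per-author Gaussian link likelihoods. A toy sketch (our illustration; the Gaussian normalizing constant is dropped since it is shared across candidates, and all values are made up):

```python
import numpy as np

def citation_scores(z_cited, Z_candidates, eta_authors, pi_authors, c_pos):
    """Score each candidate citing paper j for cited paper i, up to a constant:
    sum_a pi_{i<-ja} * N(1 | z_i^T diag(eta_a) z_j, c_+^{-1})."""
    scores = np.zeros(len(Z_candidates))
    for eta, pi in zip(eta_authors, pi_authors):
        mean = Z_candidates @ (eta * z_cited)   # z_i^T diag(eta_a) z_j per candidate
        scores += pi * np.exp(-0.5 * c_pos * (1.0 - mean) ** 2)
    return scores

z_cited = np.array([0.8, 0.2])                  # topic proportion of the cited paper
Z_candidates = np.array([[0.9, 0.1],            # on-topic candidate
                         [0.1, 0.9]])           # off-topic candidate
eta_authors = [np.array([1.2, 0.1])]            # one author, authoritative in topic 0
pi_authors = [1.0]
s = citation_scores(z_cited, Z_candidates, eta_authors, pi_authors, c_pos=10.0)
```

The candidate whose topic overlap with the cited paper, weighted by the author's authority, pushes the predicted link strength closest to 1 ranks first; the same scoring function with a product over citing papers underlies the author-prediction task in Section 7.3.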
{
"text": ". Given the topic proportion of the cited publication \u03b8 i and the topical authorities of the authors \u03b7 a , we compute which publication is more likely to cite the publication. Based on our model assumption in subsection 3.2, using topical authority increases the performance of predicting linkage structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Citation Prediction",
"sec_num": "7.2"
},
{
"text": "In Figure 5 , the LTAI yields better citation prediction performance than other models for all datasets and with the most number of topics. Since the LTAI incorporates topical authority for predicting citations, it performs better than RTM, which does not discover topical authority. We can attribute the better performance of the LTAI compared to ALTM and DACTM to the LTAI's multiple model assumptions explained in section 3. We note that DACTM requires additional information such as citation location and sentence structure, and thus, is only applicable for limited kinds of datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Citation Prediction",
"sec_num": "7.2"
},
{
"text": "For author prediction, we randomly remove one of the authors from documents in the test set while preserving citation structures. Similar to citation prediction, we predict which author is more likely to write the cited publication based on the topic proportions of cited publication i and a set of citing publications J . We approximate the probability of researcher a being an author of publication",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Prediction",
"sec_num": "7.3"
},
{
"text": "i from p(a|z, \u03b7 a , x i\u2190j ) \u221d j\u2208J N (x i\u2190j |z i diag(\u03b7 a )z j , c \u22121 + ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Prediction",
"sec_num": "7.3"
},
{
"text": "Because the mixture proportion of an unknown author \u03c0 i\u2190ja cannot be obtained during posterior inference, we assume the cited publication is written by a single author to approximate the probability. For author prediction, we choose the author that maximizes the above probability. In Figure 6 , the LTAI outperforms the comparison models in most of the settings. Table 4 : Authors who have a high authority score in an artificial intelligence topic.",
"cite_spans": [],
"ref_spans": [
{
"start": 285,
"end": 293,
"text": "Figure 6",
"ref_id": null
},
{
"start": 364,
"end": 371,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Author Prediction",
"sec_num": "7.3"
},
{
"text": "has a similar authority index for this topic but has lower authority indices for other topics than Tomaso Poggio. Also, our model is able to capture topic-specific authoritative researchers that have relatively low topic-irrelevant scores. For example, researchers such as Stan Sclaroff and Kentaro Toyama are the top 5 authoritative researchers in a computer vision topic according to the LTAI, but it is difficult to detect these researchers out of many other authoritative authors using the topic-irrelevant scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Author Prediction",
"sec_num": "7.3"
},
{
"text": "Finally, the LTAI detect researchers' topical authority that is peripheral but not negligible. Mark Jones in Table 4 , who has high h-index, a number of citations, and wrote many papers, is a researcher whose academic interest lies in programming language design and application. However, while most of his papers' main topics are about programming language, he often uses inference techniques and algorithms in machine learning in his papers. Our model captures that tendency and assigns a positive authority score for machine learning to him. ",
"cite_spans": [],
"ref_spans": [
{
"start": 109,
"end": 116,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Author Prediction",
"sec_num": "7.3"
},
{
"text": "We proposed Latent Topical Authority Indexing (LTAI) to model the topical-authority of academic researchers. Based on the hypothesis that authors play an important role in citations, we specifically focus on their authority and develop a Bayesian model to capture the authority. With model assumptions that are necessary for extracting convincing and interpretable topical authority values for authors, we have proposed speed-up methods that are based on stochastic optimization. While there is prior research in topic modeling that provides topic-specific indices when modeling the link structure, these do not extend to individual indices, and most previous citation-based indices are defined for each individual but without considering topics. On the other hand, our model combines the merits of both topic-specific and individual-specific indices to provide topical authority information for academic researchers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Discussion",
"sec_num": "8"
},
{
"text": "With four academic datasets, we demonstrated that the joint modeling of publication and author related variables improve topic quality, when compared to vanilla LDA. We quantitatively manifested that including authority variables increases the predictive performance in terms of citation and author predictions. Finally, we qualitatively demonstrated the interpretability by topical-authority outcomes of the LTAI from the CORA corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Discussion",
"sec_num": "8"
},
{
"text": "Finally, there are issues that can be dealt with in future work. We do not consider time information in terms of when papers are published and when pairs of papers are linked; we can use datasets that incorporate timestamps to enhance the model capability to predict future citations and authorships.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Discussion",
"sec_num": "8"
},
{
"text": "Transactions of the Association for Computational Linguistics, vol. 5, pp. 191-204, 2017. Action Editor: Noah Smith.Submission batch: 11/2016; Revision batch: 2/2017; Published 7/2017. c 2017 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Posterior InferenceWe develop a hybrid inference algorithm in which the posterior of content-related parameters \u03b8, z, and \u03b2 are approximated by variational inference, and author-related parameters \u03c0 and \u03b7 are approximated by an expectation-maximization (EM) algorithm. In algorithm 1, we summarize the full inference procedure of the LTAI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Code and datasets are available at http://uilab. kaist.ac.kr/research/TACL2017/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Although we do not present thorough sensitivity analysis in this paper, we confirm that the performance of our model was robust against adjusting the parameters within a factor of 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Jae Won Kim for collecting and refining the dataset and for contributing to the early version of the manuscript. We also thank Action Editor Noah Smith and the anonymous reviewers for their detailed and thoughtful comments; and we thank Joon Hee Kim and the other UILab members for providing helpful insights in the research as well. Finally, we thank Editorial Assistant Cindy Robinson for carefully proofreading the manuscript. This work was supported by Institute for Information & communications Technology Promotion(IITP) grant funded by the Korea government(MSIP) (No.B0101-15-0307, Basic Software Research in Human-level Lifelong Machine Learning (Machine Learning Center)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": " Table 2 : Authors with the highest h-index scores and their statistics from the CORA dataset. We show the authors with their h-index, number of citations (# cite), and number of papers (# paper), representative topic, and their topical authority (T Authority) of the corresponding topic. We show that while the authors have the highest h-indices with lots of papers written and lots of citations earned, the topics that the authors exert authority varies.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "To stress our model's additional characteristics that are not observed in the quantitative analysis, we look at the assigned topical authority indices as well as other statistics of some researchers in the dataset. In the analyses, we set the number of topics to 100, and use CORA dataset for demonstration. We first demonstrate famous authors' authoritative topics that can be unveiled using our model. In Table 2 , we list top 10 authors with highest h-indices along with their number of citations, number of papers, and their representative topics. Authors' representative topics are the topics with highest authority scores. In the table, we observe that all authors with top h-indices have written at least 18 papers and earned at least 207 citations, which are the top 0.8% and 0.2% values respectively. However, their authoritative topics retrieved by the LTAI do not overlap for any of the authors. This table illustrates that each of the top authors in the table exerts authority on different academic topics that can be captured by the LTAI, while the authors commonly have highest h-index scores as well as other statistics.We now stress attributes of topical authority index that are different from other topic irrelevant statistics. From Tables 3 to 5, we show four example topics extracted by our model and list notable authors within each topic with their topical authority indices, h-indices, number of citations, and number of papers. In the tables, we first find that all four authors with highest topical authority values, Monica Lam, Alex Pentland, Michael Jordan, and Mihir Bellare are also listed in the topic-irrelevant authority rankings in Table 2 . From this, we confirm that authority score of the LTAI has a certain degree of correlation to other statistics, while it splits the authors by their authoritative topics.At the same time, the topical authority score correlates less with topic-irrelevant statistics than those statistics correlate with themselves. 
In Table 5 , Oded Goldreich has a lower topical authority score for the computer security topic while having higher topic-irrelevant scores than the above four researchers, because his main research field is the theory of computation and randomness. We can spot authors who exert high authority in multiple academic fields, such as Tomaso Poggio in Table 3 and in Table 4 . Similarly, when comparing Federico Girosi and Tomaso Poggio in Table 4 , the two researchers have similar authority indices for this topic, while Tomaso Poggio has higher values for the other three topic-irrelevant indices. This is a reasonable outcome when we investigate the two researchers' publication histories. Federico Girosi has a relatively focused academic interest, with his publication history skewed towards machine-learning-related subjects, while Tomaso Poggio has broader topical interests that include computer vision and statistical learning, and he also co-authored most of the papers that Federico Girosi wrote. Thus, Federico Girosi",
"cite_spans": [],
"ref_spans": [
{
"start": 407,
"end": 414,
"text": "Table 2",
"ref_id": null
},
{
"start": 1665,
"end": 1672,
"text": "Table 2",
"ref_id": null
},
{
"start": 1992,
"end": 1999,
"text": "Table 5",
"ref_id": null
},
{
"start": 2339,
"end": 2361,
"text": "Table 3 and in Table 4",
"ref_id": null
},
{
"start": 2428,
"end": 2435,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "7.4"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "On smoothing and inference for topic models",
"authors": [
{
"first": "Arthur",
"middle": [],
"last": "Asuncion",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Welling",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
},
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
}
],
"year": 2009,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arthur Asuncion, Max Welling, Padhraic Smyth, and Yee Whye Teh. 2009. On smoothing and inference for topic models. In UAI.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Determining expert profiles (with an application to expert finding)",
"authors": [
{
"first": "Krisztian",
"middle": [],
"last": "Balog",
"suffix": ""
},
{
"first": "Maarten",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": 2007,
"venue": "In IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krisztian Balog and Maarten de Rijke. 2007. Determin- ing expert profiles (with an application to expert find- ing). In IJCAI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Citationenhanced keyphrase extraction from research papers: A supervised approach",
"authors": [
{
"first": "Cornelia",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "Adrian",
"middle": [],
"last": "Florin",
"suffix": ""
},
{
"first": "Andreea",
"middle": [],
"last": "Bulgarov",
"suffix": ""
},
{
"first": "Sujatha Das",
"middle": [],
"last": "Godea",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gollapalli",
"suffix": ""
}
],
"year": 2014,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cornelia Caragea, Florin Adrian Bulgarov, Andreea Godea, and Sujatha Das Gollapalli. 2014. Citation- enhanced keyphrase extraction from research papers: A supervised approach. In EMNLP.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Co-training for topic classification of scholarly data",
"authors": [
{
"first": "Cornelia",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "Florin",
"middle": [],
"last": "Bulgarov",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cornelia Caragea, Florin Bulgarov, and Rada Mihalcea. 2015. Co-training for topic classification of scholarly data. In EMNLP.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Hierarchical relational models for document networks",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2010,
"venue": "The Annals of Applied Statistics",
"volume": "4",
"issue": "1",
"pages": "124--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang and David Blei. 2010. Hierarchical re- lational models for document networks. The Annals of Applied Statistics, 4(1):124-150.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Scientific article summarization using citation-context and article's discourse structure",
"authors": [
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arman Cohan and Nazli Goharian. 2015. Scientific arti- cle summarization using citation-context and article's discourse structure. In EMNLP.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Relationship identification for social network discovery",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Diehl",
"suffix": ""
},
{
"first": "Galileo",
"middle": [],
"last": "Namata",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2007,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Diehl, Galileo Namata, and Lise Getoor. 2007. Relationship identification for social network discovery. In AAAI.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised prediction of citation influences",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Dietz",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Bickel",
"suffix": ""
},
{
"first": "Tobias",
"middle": [],
"last": "Scheffer",
"suffix": ""
}
],
"year": 2007,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura Dietz, Steffen Bickel, and Tobias Scheffer. 2007. Unsupervised prediction of citation influences. In ICML.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Will this paper increase your h-index?: Scientific impact prediction",
"authors": [
{
"first": "Yuxiao",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Reid",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Nitesh",
"middle": [],
"last": "Chawla",
"suffix": ""
}
],
"year": 2015,
"venue": "WSDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuxiao Dong, Reid Johnson, and Nitesh Chawla. 2015. Will this paper increase your h-index?: Scientific im- pact prediction. In WSDM.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Modeling scientific impact with topical influence regression",
"authors": [
{
"first": "James",
"middle": [],
"last": "Foulds",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Foulds and Padhraic Smyth. 2013. Modeling sci- entific impact with topical influence regression. In EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The history and meaning of the journal impact factor",
"authors": [
{
"first": "Eugene",
"middle": [],
"last": "Garfield",
"suffix": ""
}
],
"year": 2006,
"venue": "JAMA",
"volume": "295",
"issue": "1",
"pages": "90--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eugene Garfield. 2006. The history and meaning of the journal impact factor. JAMA, 295(1):90-93.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Overview of the",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Gehrke",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Ginsparg",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Kleinberg",
"suffix": ""
}
],
"year": 2003,
"venue": "KDD Cup. ACM SIGKDD Explorations Newsletter",
"volume": "5",
"issue": "2",
"pages": "149--151",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Gehrke, Paul Ginsparg, and Jon Kleinberg. 2003. Overview of the 2003 KDD Cup. ACM SIGKDD Explorations Newsletter, 5(2):149-151.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Discovering canonical correlations between topical and topological information in document networks",
"authors": [
{
"first": "Yuan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Cheng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Changjun",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuan He, Cheng Wang, and Changjun Jiang. 2015. Dis- covering canonical correlations between topical and topological information in document networks. In CIKM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "An index to quantify an individual's scientific research output",
"authors": [
{
"first": "Jorge",
"middle": [],
"last": "Hirsch",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "102",
"issue": "",
"pages": "16569--16572",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jorge Hirsch. 2005. An index to quantify an individual's scientific research output. PNAS, 102(46):16569- 16572.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Stochastic variational inference. JMLR",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Hoffman",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Paisley",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "14",
"issue": "",
"pages": "1303--1347",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Hoffman, David Blei, Chong Wang, and John Paisley. 2013. Stochastic variational inference. JMLR, 14:1303-1347.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Collaborative filtering for implicit feedback datasets",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Yehuda",
"middle": [],
"last": "Koren",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Volinsky",
"suffix": ""
}
],
"year": 2008,
"venue": "ICDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Col- laborative filtering for implicit feedback datasets. In ICDM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A neural probabilistic model for context based citation recommendation",
"authors": [
{
"first": "Wenyi",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Zhaohui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Chen",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Giles",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2015,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wenyi Huang, Zhaohui Wu, Chen Liang, Prasenjit Mitra, and Giles Lee. 2015. A neural probabilistic model for context based citation recommendation. In AAAI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Chronological scientific information recommendation via supervised dynamic topic modeling",
"authors": [
{
"first": "Zhuoren",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2015,
"venue": "WSDM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhuoren Jiang. 2015. Chronological scientific infor- mation recommendation via supervised dynamic topic modeling. In WSDM.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Context sensitive topic models for author influence in document networks",
"authors": [
{
"first": "Saurabh",
"middle": [],
"last": "Kataria",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "Giles",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2011,
"venue": "IJCAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saurabh Kataria, Prasenjit Mitra, Cornelia Caragea, and Giles Lee. 2011. Context sensitive topic models for author influence in document networks. In IJCAI.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Matrix factorization techniques for recommender systems",
"authors": [
{
"first": "Yehuda",
"middle": [],
"last": "Koren",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Volinsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Computer",
"volume": "42",
"issue": "8",
"pages": "30--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender sys- tems. Computer, 42(8):30-37.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Topic-link LDA: joint models of topic and author community",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Alexandru",
"middle": [],
"last": "Niculescu-Mizil",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Gryc",
"suffix": ""
}
],
"year": 2009,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yan Liu, Alexandru Niculescu-Mizil, and Wojciech Gryc. 2009. Topic-link LDA: joint models of topic and author community. In ICML.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Link-based classification",
"authors": [
{
"first": "Qing",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Lise",
"middle": [],
"last": "Getoor",
"suffix": ""
}
],
"year": 2003,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qing Lu and Lise Getoor. 2003. Link-based classifica- tion. In ICML.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automating the construction of internet portals with machine learning",
"authors": [
{
"first": "Andrew Kachites",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Kamal",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Rennie",
"suffix": ""
},
{
"first": "Kristie",
"middle": [],
"last": "Seymore",
"suffix": ""
}
],
"year": 2000,
"venue": "formation Retrieval",
"volume": "3",
"issue": "",
"pages": "127--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Kachites McCallum, Kamal Nigam, Jason Ren- nie, and Kristie Seymore. 2000. Automating the con- struction of internet portals with machine learning. In- formation Retrieval, 3(2):127-163.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Some results on the function and quality of citations",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Moravcsik",
"suffix": ""
},
{
"first": "Poovanalingam",
"middle": [],
"last": "Murugesan",
"suffix": ""
}
],
"year": 1975,
"venue": "Social studies of science",
"volume": "5",
"issue": "1",
"pages": "86--92",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Moravcsik and Poovanalingam Murugesan. 1975. Some results on the function and quality of ci- tations. Social studies of science, 5(1):86-92.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Joint latent topic models for text and citations",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Amr",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2008,
"venue": "SIGKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Amr Ahmed, Eric Xing, and William Cohen. 2008. Joint latent topic models for text and citations. In SIGKDD.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Modeling citation networks using latent random offsets",
"authors": [
{
"first": "Willie",
"middle": [],
"last": "Neiswanger",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Qirong",
"middle": [],
"last": "Ho",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2014,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Willie Neiswanger, Chong Wang, Qirong Ho, and Eric P. Xing. 2014. Modeling citation networks using latent random offsets. In UAI.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Collaborative topic regression with social matrix factorization for recommendation systems",
"authors": [
{
"first": "Sanjay",
"middle": [],
"last": "Purushotham",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "C.-C. Jay",
"middle": [],
"last": "Kuo",
"suffix": ""
}
],
"year": 2012,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjay Purushotham, Yan Liu, and C.-C. Jay Kuo. 2012. Collaborative topic regression with social matrix fac- torization for recommendation systems. In ICML.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "How does clickthrough data reflect retrieval quality",
"authors": [
{
"first": "Filip",
"middle": [],
"last": "Radlinski",
"suffix": ""
},
{
"first": "Madhu",
"middle": [],
"last": "Kurup",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2008,
"venue": "CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Filip Radlinski, Madhu Kurup, and Thorsten Joachims. 2008. How does clickthrough data reflect retrieval quality? In CIKM.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Fast inference for the latent space network model using a case-control approximate likelihood",
"authors": [
{
"first": "Adrian",
"middle": [],
"last": "Raftery",
"suffix": ""
},
{
"first": "Xiaoyue",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Hoff",
"suffix": ""
},
{
"first": "Ka",
"middle": [
"Yee"
],
"last": "Yeung",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of Computational and Graphical Statistics",
"volume": "21",
"issue": "4",
"pages": "901--919",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrian Raftery, Xiaoyue Niu, Peter Hoff, and Ka Yee Ye- ung. 2012. Fast inference for the latent space net- work model using a case-control approximate likeli- hood. Journal of Computational and Graphical Statis- tics, 21(4):901-919.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Fast maximum margin matrix factorization for collaborative prediction",
"authors": [
{
"first": "Jasson",
"middle": [],
"last": "Rennie",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Srebro",
"suffix": ""
}
],
"year": 2005,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jasson Rennie and Nathan Srebro. 2005. Fast maximum margin matrix factorization for collaborative predic- tion. In ICML.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A stochastic approximation method. The annals of mathematical statistics",
"authors": [
{
"first": "Herbert",
"middle": [],
"last": "Robbins",
"suffix": ""
},
{
"first": "Sutton",
"middle": [],
"last": "Monro",
"suffix": ""
}
],
"year": 1951,
"venue": "",
"volume": "22",
"issue": "",
"pages": "400--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Herbert Robbins and Sutton Monro. 1951. A stochastic approximation method. The annals of mathematical statistics, 22(3):400-407.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The author-topic model for authors and documents",
"authors": [
{
"first": "Michal",
"middle": [],
"last": "Rosen-Zvi",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Griffiths",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steyvers",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 2004,
"venue": "UAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michal Rosen-Zvi, Thomas Griffiths, Mark Steyvers, and Padhraic Smyth. 2004. The author-topic model for authors and documents. In UAI.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Probabilistic matrix factorization",
"authors": [
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Andriy",
"middle": [],
"last": "Mnih",
"suffix": ""
}
],
"year": 2007,
"venue": "NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruslan Salakhutdinov and Andriy Mnih. 2007. Proba- bilistic matrix factorization. In NIPS.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A utility model of authors in the scientific community",
"authors": [
{
"first": "Yanchuan",
"middle": [],
"last": "Sim",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Routledge",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanchuan Sim, Bryan Routledge, and Noah Smith. 2015. A utility model of authors in the scientific community. In EMNLP.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Hierarchical Dirichlet processes",
"authors": [
{
"first": "Yee Whye",
"middle": [],
"last": "Teh",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Beal",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of the American Statistical Association",
"volume": "101",
"issue": "476",
"pages": "1566--1581",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Whye Teh, Michael Jordan, Matthew Beal, and David Blei. 2006. Hierarchical Dirichlet pro- cesses. Journal of the American Statistical Associa- tion, 101(476):1566-1581.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Automatic classification of citation function",
"authors": [
{
"first": "Simone",
"middle": [],
"last": "Teufel",
"suffix": ""
},
{
"first": "Advaith",
"middle": [],
"last": "Siddharthan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Tidhar",
"suffix": ""
}
],
"year": 2006,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simone Teufel, Advaith Siddharthan, and Dan Tidhar. 2006. Automatic classification of citation function. In EMNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Citation author topic model in expert search",
"authors": [
{
"first": "Yuancheng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Johri",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2010,
"venue": "COLING",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuancheng Tu, Nikhil Johri, Dan Roth, and Julia Hock- enmaier. 2010. Citation author topic model in expert search. In COLING.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Identifying meaningful citations",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Valenzuela",
"suffix": ""
},
{
"first": "Vu",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2015,
"venue": "Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Valenzuela, Vu Ha, and Oren Etzioni. 2015. Identifying meaningful citations. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelli- gence.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "The TREC-8 question answering track report",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Voorhees",
"suffix": ""
}
],
"year": 1999,
"venue": "TREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Voorhees. 1999. The TREC-8 question answering track report. In TREC.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Collaborative topic modeling for recommending scientific articles",
"authors": [
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
}
],
"year": 2011,
"venue": "SIGKDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chong Wang and David Blei. 2011. Collaborative topic modeling for recommending scientific articles. In SIGKDD.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "LDTM: A latent document type model for cumulative citation recommendation",
"authors": [
{
"first": "Jingang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Dandan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Zhiwei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lejian",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Luo",
"middle": [],
"last": "Si",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2015,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingang Wang, Dandan Song, Zhiwei Zhang, Lejian Liao, Luo Si, and Chin-Yew Lin. 2015. LDTM: A latent document type model for cumulative citation recom- mendation. In EMNLP.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Graphical representation of the LTAI.",
"uris": null,
"type_str": "figure"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": ""
},
"TABREF5": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td colspan=\"4\">Topic: approximation, intelligence, artificial,</td><td/></tr><tr><td colspan=\"4\">correlation, support, recognition, model, representation</td><td/></tr><tr><td>Rank Author</td><td colspan=\"4\">Topical Authority h-index # cite # paper</td></tr><tr><td>1 M Jordan</td><td>10.15</td><td>9</td><td>263</td><td>27</td></tr><tr><td>2 M Warmuth</td><td>9.57</td><td>8</td><td>160</td><td>17</td></tr><tr><td>13 T Poggio</td><td>3.48</td><td>6</td><td>178</td><td>27</td></tr><tr><td>17 F Girosi</td><td>3.22</td><td>3</td><td>101</td><td>9</td></tr><tr><td>34 M Jones</td><td>2.06</td><td>7</td><td>151</td><td>20</td></tr></table>",
"text": "Authors who have a high authority score in a computer vision topic."
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Rank Author</td><td colspan=\"4\">Topical Authority h-index # cite # paper</td></tr><tr><td>1 M Bellare</td><td>13.21</td><td>11</td><td>280</td><td>43</td></tr><tr><td>2 P Rogaway</td><td>11.98</td><td>7</td><td>117</td><td>13</td></tr><tr><td>3 H Krawczyk</td><td>7.29</td><td>6</td><td>75</td><td>15</td></tr><tr><td>4 R Canetti</td><td>7.13</td><td>4</td><td>40</td><td>10</td></tr><tr><td>9 O Goldreich</td><td>3.70</td><td>9</td><td>229</td><td>49</td></tr></table>",
"text": "Topic: scheme, security, signature, attack, threshold, authentication, cryptographic, encryption"
},
"TABREF7": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table/>",
"text": "Authors who have a high authority score in a computer security topic."
}
}
}
}